survey_title (string, 19-197 chars) | section_num (int64, 3-56) | references (string, 4-1.34M chars) | section_outline (string, 531-9.08k chars)
---|---|---|---|
Adoption of CRM technology in multichannel environment: a review (2006‐2010) | 7 | ---
paper_title: Review: Application of data mining techniques in customer relationship management: A literature review and classification
paper_content:
Despite the importance of data mining techniques to customer relationship management (CRM), there is a lack of a comprehensive literature review and classification scheme for them. This is the first identifiable academic literature review of the application of data mining techniques to CRM. It provides an academic database of literature covering 24 journals over the period 2000-2006 and proposes a classification scheme for the articles. Nine hundred articles were identified and reviewed for their direct relevance to applying data mining techniques to CRM. Eighty-seven articles were subsequently selected, reviewed and classified. Each of the 87 selected papers was categorized on four CRM dimensions (Customer Identification, Customer Attraction, Customer Retention and Customer Development) and seven data mining functions (Association, Classification, Clustering, Forecasting, Regression, Sequence Discovery and Visualization). Papers were further classified into nine sub-categories of CRM elements under different data mining techniques based on the major focus of each paper. The review and classification process was independently verified. The findings indicate that the research area of customer retention received the most research attention, with most of that work related to one-to-one marketing and loyalty programs. Classification and association models are the two most commonly used data mining models in CRM. Our analysis provides a roadmap to guide future research and facilitate knowledge accumulation and creation concerning the application of data mining techniques in CRM.
---
paper_title: Electronic commerce customer relationship management: an assessment of research
paper_content:
The status and maturity of electronic commerce customer relationship management (ECCRM), an emerging subfield of management information systems (MIS), are investigated through an exhaustive literature review of 369 articles, from the first published article in 1984 through conference papers given in 2001 and 2002. The results indicate some trends that should be of interest and concern to researchers in this area and in MIS as a whole. First, exploratory surveys dominate the research literature, which in itself may be problematic. More troubling, most of the survey instruments were not validated, and the authors did not mention validation procedures. Second, there has been little theoretical development, and few empirical studies use hypothesis testing. Third, cumulative tradition has hardly emerged, with each study developing a new conceptual model, new constructs, and new instruments. On the positive side, ECCRM researchers have employed a wide range of methods and studied a broad range of topics. The subfield of ECCRM is young, but is growing rapidly, and professional activity in the MIS research community illustrates its importance. Specific recommendations for further development are provided.
---
paper_title: Determining technology trends and forecasts of CRM through a historical review and bibliometric analysis of data from 1991 to 2005
paper_content:
Customer Relationship Management (CRM) has been identified as one of the greatest technological contributions to enterprises in the 21st century. This technology surged into the market rapidly. More and more enterprises are applying CRM to improve efficiency of operation and gain competitive advantage. In light of the awareness of the CRM trend's contribution, a historical review and bibliometric methods are applied in this research. CRM is examined using the bibliometric analysis technique on SCI and SSCI journals from 1991 to 2005. Also, the historical review method was applied to analyse CRM innovation, organisations' adoption, and diffusion. Moreover, from retrospective analysis findings, business, the health industry and privacy are the major trends and issues of adoption by enterprises. Furthermore, the contribution of CRM and forecast of the technology trend are also analysed. CRM will diffuse and be assimilated into our daily lives in the near future.
---
paper_title: Interaction channel choice in a multichannel environment, an empirical study
paper_content:
Purpose – The purpose of this paper is to investigate consumer channel preferences and the motives that induce consumers to use a particular channel in a context of multichannel contact.Design/methodology/approach – This paper analyses some factors that influence consumer behaviour in channel selection through an empirical study in the financial sector. Some hypotheses are presented and tested.Findings – The paper reveals the influence of some variables (perceived convenience, social relationships, knowledge of channel and privacy) on the channel selected (counter, ATM or internet) for the performance of certain operations with the company.Research limitations/implications – To generalise these findings this study needs to be replicated in other geographical areas and companies.Practical implications – The multichannel contact centre is one of the fundamental pillars of customer relationship management. It is not enough simply to have the necessary technology (hardware, software and telecommunications). C...
---
paper_title: Methodology for customer relationship management
paper_content:
Customer relationship management (CRM) is a customer-focused business strategy that dynamically integrates sales, marketing and customer care service in order to create and add value for the company and its customers.This change towards a customer-focused strategy is leading to a strong demand for CRM solutions by companies. However, in spite of companies' interest in this new management model, many CRM implementations fail. One of the main reasons for this lack of success is that the existing methodologies being used to approach a CRM project are not adequate, since they do not satisfactorily integrate and complement the strategic and technological aspects of CRM.This paper describes a formal methodology for directing the process of developing and implementing a CRM System that considers and integrates various aspects, such as defining a customer strategy, re-engineering customer-oriented business processes, human resources management, the computer system, management of change and continuous improvement.
---
paper_title: Voids in the Current CRM Literature: Academic Literature Review and Classification (2000-2005)
paper_content:
The status of the CRM literature is investigated for the period 2000 to 2005, in order to provide an overview of academic research on the subject and to identify gaps in the current literature. To provide as complete a picture of CRM as possible, the information systems (IS) literature as well as the marketing literature was systematically reviewed. From both disciplines, the top journals and a number of international conferences were analyzed. Selected publications were reviewed in a structured way and categorized according to the different phases in the CRM lifecycle: adoption, acquisition, implementation, use & maintenance, evolution and retirement. It appears that less attention has been devoted to implementation issues and to the evolution and retirement phases. Furthermore, a difference in attention was found between the IS and marketing literature: while researchers in the latter focused mainly on the adoption and use phases, IS researchers' attention was more evenly distributed over the lifecycle.
---
paper_title: Assessing the readiness of firms for CRM: a literature review and research model
paper_content:
The concept of customer relationship management (CRM) resonates with managers in today's competitive economy. Yet recent articles in the business press have described CRM implementation failures, and consequent company reluctance to invest in CRM. The potential for substantially improved customer relationship management, coupled with the high uncertainty surrounding failed implementation efforts, calls for a critical new look at the determinants of, and influences upon, a firm's decision to adopt CRM. This paper responds by underscoring the criticality of performing a deep analysis of a firm's readiness to undertake a CRM initiative. We suggest that this assessment provide detailed answers to two fundamental questions: What is a firm's current CRM capability? And what changes must be in place before embarking on a CRM initiative? A model to assess readiness is developed based upon the premise that business value is enhanced through the alignment of complementary factors occurring along three dimensions, intellectual, social, and technological.
---
paper_title: Customer relationship management research (1992‐2002): An academic literature review and classification
paper_content:
Purpose – To review the academic literature on customer relationship management (CRM), provide a comprehensive bibliography and propose a method of classifying that literature.Design/methodology/approach – A range of online databases were searched to provide a comprehensive listing of journal articles on CRM. Six hundred articles were identified and reviewed for their direct relevance to CRM. Two hundred and five articles were subsequently selected. Each of these articles was further reviewed and classified. The review and classification process was independently verified. All papers were allocated to the main and sub‐categories based on the major focus of each paper.Findings – Papers and research on CRM falls into five broad categories (CRM – General, Marketing, Sales, Service and Support, and IT and IS) and a further 34 sub‐categories. The most popular areas covered by the papers lay in the sub‐category of CRM management, planning and strategy; and CRM general, concept, and study followed by papers in s...
---
paper_title: Challenges and Opportunities in Multichannel Customer Management
paper_content:
Multichannel customer management is the design, deployment, coordination, and evaluation of channels through which firms and customers interact, with the goal of enhancing customer value through effective customer acquisition, retention, and development. The authors identify five major challenges practitioners must address to manage the multichannel environment more effectively: (a) data integration, (b) understanding consumer behavior, (c) channel evaluation, (d) allocation of resources across channels, and (e) coordination of channel strategies. The authors also propose a framework that shows the linkages among these challenges and provides a means to conceptualize the field of multichannel customer management. A review of academic research reveals that this field has experienced significant research growth, but the growth has not been distributed evenly across the five major challenges. The authors discuss what has been learned to date and identify emerging generalizations as appropriate. They conclude with a summary of where the research-generated knowledge base stands on several issues pertaining to the five challenges.
---
paper_title: Analyzing the past to prepare for the future: Writing a literature review
paper_content:
A review of prior, relevant literature is an essential feature of any academic project. An effective review creates a firm foundation for advancing knowledge. It facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed.
---
paper_title: A consumer‐based view of multi‐channel service
paper_content:
Purpose – Consumers increasingly experience multi‐channel service and a significant challenge for the service organization is to ensure that the design of the multi‐channel interface contributes to the service experience and helps to build bonds with customers. The purpose of this paper is to elucidate four features (i.e. problem‐handling, record accuracy, usability, and scalability) used by customers to evaluate multi‐channel service and investigates their impact on customer relationship and loyalty intentions.Design/methodology/approach – The study involves an online survey with customers selected randomly in two service industries. Empirical data are analyzed using structural equation models.Findings – Customer evaluations of the multi‐channel service interface have a strong influence on customer trust in the organization but a negligible impact on customer commitment. Trust, however, has a positive effect on commitment, thus enhancing customer loyalty.Research limitations/implications – The measures d...
---
paper_title: Mobilizing Customer Relationship Management — A Journey from Strategy to System Design
paper_content:
The bursting of the e-bubble affected expectations with regard to mobile initiatives and willingness to invest in them very negatively. In both academia and practice, there has recently been renewed interest in mobile business and mobile commerce. Now, however, business managers request detailed and thorough analyses prior to engaging in mobile initiatives. Multiple examples of successful customer relationship management (CRM) have appeared in academic and non-academic publications, although, none of them describe exactly how the transformation to a successful mobile CRM solution was accomplished. Consequently, we present a method, based on the business engineering, with which mobile business can be introduced to the CRM field. We provide a framework for the definition of a mobile CRM strategy as derived from the corporate strategy, suggest a method for the identification of the mobilization potential in CRM processes that is aligned with the strategy, and provide guidance with regard to the design of mobile information systems with which to support these processes.
---
paper_title: CRM in Data-Rich Multichannel Retailing Environments: A Review and Future Research Directions
paper_content:
Many retailers have collected large amounts of customer data using, for example, loyalty programs. We provide an overview of the extant literature on customer relationship management (CRM), with a specific focus on retailing. We discuss how retailers can gather customer data and how they can analyze these data to gain useful customer insights. We provide an overview of the methods predicting customer responses and behavior over time. We also discuss the existing knowledge on the application of marketing actions in a CRM context, while providing an in-depth discussion on CRM and firm value. We outline future research directions based on the literature review and retail practice insights.
---
paper_title: Enhancing e-CRM in the insurance industry by mobile e-services
paper_content:
With customers increasingly demanding full-scale solutions, insurance companies are more and more forced to continuously increase their portfolio of products and services. As customers also expect high quality and variety of services when needed, insurance companies have to find ways to present the right service at the right moment and with the right quality. An approach to developing e-services and planning customer contacts using mobile technology is presented. Aspects of mobile customer relationship management in the insurance industry are described first. After that, a method for developing and planning service contacts using mobile technology is introduced.
---
paper_title: A strategy-based process for effectively determining system requirements in eCRM development
paper_content:
Customer relationship management (CRM) is an important concept for maintaining competitiveness in e-commerce. Many organizations, however, implement eCRM hastily and fail to achieve its goal. The CRM concept consists of a number of compound components concerning product design, marketing attributes, and consumer behavior, which requires approaches to eCRM development that differ from traditional ones. Requirements engineering is one of the important steps in software development: without a well-defined requirements specification, developers do not know how to proceed with requirements analysis. This research proposes a strategy-based process for requirements elicitation. The framework contains three steps: define customer strategies, identify consumer and marketing characteristics, and determine system requirements. Prior literature lacks discussion of the important role of customer strategies in eCRM development. Empirical findings reveal that this strategy-based view positively improves the performance of requirements elicitation.
---
paper_title: HOW CAN THE WEB HELP BUILD CUSTOMER RELATIONSHIPS? AN EMPIRICAL STUDY ON E-TAILING
paper_content:
The Web is increasingly being viewed as a tool and place to enhance customer relationship. In this paper we defined a model to analyze the Web characteristics that aid in building customer relationships and then used this model to examine consumer relationship building mechanisms in online retailing (e-tailing). Through a survey of 177 shoppers who had bought books, CDs, or DVDs online, the causal model was validated using LISREL; 13 out of 14 hypotheses were supported. This research has contributed to both theory and practice by providing a validated model to analyze online consumer relationship building and suggesting mechanisms to help e-tailers focus on online consumer relationship management.
---
paper_title: Multi-Channel Customer Management: Delighting Consumers, Driving Efficiency
paper_content:
In today’s maturing consumer markets, emphasis is shifting from straightforward sales to a more holistic approach to customer life cycle management, with a stronger emphasis on how sales are generated and service provided all along the customer journey. Effectively managing these different marketing, sales and service channels poses a significant challenge. Companies need new strategies, structures, processes and tools to deliver customer value across all channels. A multi-channel, integrative customer model that delivers customer value and significant return on investment (ROI) requires both a strong understanding of customer preferences and behaviours and a robust IT architecture that supports the overarching customer relationship management (CRM) strategy. Even those organizations that have embraced the need for sophisticated multi-channel orchestration often still fall short in their execution. We offer strategic guidelines in four areas to ensure success with multi-channel orchestration. First, marketing operations need to explicitly define a channel strategy with respect to customer segmentation, the channel journey (how and where sales and services are delivered) and targeted incentives that reward multi-channel sales and service support. Secondly, marketers need to optimize the online channel, which is fast becoming the primary platform for accessing product and service information, and completing an ever-growing number of transactions. It is also the entry hub to other touch points; customers go online to contact the company or to find store locations or telephone numbers. Thirdly, companies need to build an IT foundation that underpins their CRM strategy. This CRM IT architecture needs to enable the transformation from vertical, single-channel operations to true horizontal business processes that deliver cross-channel integration. Fourthly, marketers can learn by example. Companies across the consumer products spectrum are experimenting with the multi-channel experience. Marketers should broadly consider the best practices that they could adapt to their own industry.
---
paper_title: Data Mining for Retail Inventory Management
paper_content:
A composite structural frame component which has a high quality appearance and substantial structural rigidity. The component comprises an elongated substantially flat member having a plurality of V-shaped grooves extending through the thickness of such member and tapered marginal edges, such grooves and tapered edges being mutually parallel and parallel to the longitudinal axis of such member. A flexible member, for example having a thickness substantially less than the thickness of the flat member, is secured to the surface of the flat member at which the apexes of the grooves are directed. The flexible member has a surface facing away from the flat member which is of a high quality appearance. Therefore, when the flat member is folded along the grooves to form an elongated structure of rectangular cross-section with a hollow interior, such structure is encased by flexible member. The resulting composite structure thus has a high quality appearance with substantial structural rigidity.
---
paper_title: The Effects of Mobile Customer Relationship Management on Customer Loyalty: Brand Image Does Matter
paper_content:
With the expansive growth of mobile commerce come opportunities for business and mobile service providers. To distinguish its service from another's and build a loyal customer base, a mobile service provider must look beyond technology and appeal to their customers' individuality through CRM. A study to examine the relationships of CRM practices and mobile services with customer loyalty, and the moderating effects of brand image was conducted. The results suggest that all contribute to loyalty. However, brand image moderates the relationships of customer service and customization, and mobile usage with customer loyalty
---
paper_title: A novel decision rules approach for customer relationship management of the airline market
paper_content:
Customer churn means the loss of existing customers to a competitor. Accurately predicting customer behavior may help firms to minimize this loss by proactively building a lasting relationship with their customers. In this paper, the application of the factor analysis and the Variable Consistency Dominance-based Rough Set Approach (VC-DRSA) in the customer relationship management (CRM) of the airline market is introduced. A set of "if...then" decision rules are used as the preference model to classify customers by a set of criteria and regular attributes. The proposed method can determine the competitive position of an airline by understanding the behavior of its customers based on their perception of choice, and so develop the appropriate marketing strategies. A large sample of customers from an international airline is used to derive a set of rules and to evaluate its prediction ability.
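To make the rule-based classification idea concrete, the sketch below evaluates a couple of hypothetical "if...then" decision rules over customer attributes. The attribute names, thresholds, and class labels are invented for illustration and are not taken from the paper or its VC-DRSA model.

```python
# Minimal sketch of "if...then" decision-rule classification for airline customers.
# Attribute names, thresholds, and class labels are hypothetical examples only.

def classify_customer(customer: dict) -> str:
    """Return a customer class based on simple illustrative decision rules."""
    # Rule 1: frequent flyers who rate the service highly are likely loyal.
    if customer["flights_per_year"] >= 10 and customer["service_rating"] >= 4:
        return "loyal"
    # Rule 2: price-sensitive customers with low ratings are churn risks.
    if customer["price_sensitivity"] >= 4 and customer["service_rating"] <= 2:
        return "churn_risk"
    # Default: no rule fired.
    return "undetermined"

if __name__ == "__main__":
    sample = {"flights_per_year": 12, "service_rating": 5, "price_sensitivity": 2}
    print(classify_customer(sample))  # -> "loyal"
```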
---
paper_title: Customer preferred channel attributes in multi‐channel electronic banking
paper_content:
Purpose – The purpose of the study is to increase the understanding of the diverse retail channel preferences of online bank customers by examining their channel attribute preferences in electronic bill paying. Two different groups of online customers were examined: those who pay their bills over the internet and those who, in addition, have experience of using a mobile phone for this service.Design/methodology/approach – A large internet survey was implemented and conjoint analysis was used in order to identify the utilities of the attribute levels and relative importance of the different attributes. Moreover, cluster analysis was used to group the individuals into homogenous attribute preference segments.Findings – The empirical findings indicate that internet users and mobile users differ in their channel attribute preferences. The results suggest coherent customer preference segments in both groups. In addition, the study identifies a group of potential mobile banking users among those who have never ...
---
paper_title: Integration of E-CRM in Healthcare Services: A Framework for Analysis
paper_content:
The quality of service which could be delivered by the U.S. healthcare system is in contrast with the customer’s perceived expectations and reported levels of satisfaction. Due to the uncertainty about stakeholder views and the anomaly of the third-party payment system, healthcare service providers are accused of not relating to their patients. This article examines how—by using an analytical framework—a healthcare provider can develop competitive advantage through implementing electronic customer relationship management (e-CRM) systems that create perceived customer value for its patients. This framework allows the firm to systematically look at points where the customer interacts with specific organizational assets. By examining individual interactions and understanding how the customer perceives an interaction, the firm may then develop specific e-CRM systems to maximize the value a customer may realize through that interaction. Due to the in-depth and lengthy nature of most patient relationships with a healthcare provider, the healthcare industry is used as an example of how this framework can be used by all service providers.
---
paper_title: Strategic relationships between boundary-spanning functions: Aligning customer relationship management with supplier relationship management
paper_content:
This review focuses on the potential impact of enhanced strategic relationships between the boundary-spanning functions in supplier organizations. Specifically, the concern is with alignment between the organizational groups managing: marketing, sales and strategic account management; purchasing and supply strategy; and, collaborations and external partnerships. The topic is framed by the organizational evolution being driven by market change, and the search for superior innovation capabilities and business agility. These changes bring new challenges in cross-boundary integration and managing complex market networks. The logic is that strategic external relationships (with customers, supplier and partners) should be mirrored in strategic internal relationships (between the functions with lead responsibilities for managing relationships with customers, supplier and partners). Approaches to enhancing this capability include process management, internal partnering strategies and internal marketing activities. The discussion identifies a number of implications for practice and new research directions.
---
paper_title: Customer-centric strategy: a longitudinal study of implementation of a customer relationship management solution
paper_content:
This paper aims to contribute to extant literature on how to integrate IT to support the successful implementation of a Customer Relationship Management (CRM) solution. Relevant writings are reviewed to address the question of: What are the key factors that influence the integration of IT to enhance business efficiency, focusing on CRM and its implementation? A longitudinal case study was conducted. Findings include: the importance of managerial commitment and a corporate vision that incorporates a relationship orientation; wider actor involvement from the project's inception; and managers who themselves are convinced of the value of customer-centric strategy, who communicate their commitment to their subordinates and develop positive attitude towards change in order to properly manage the change process. Among other conclusions, the paper finds that there is currently too much reliance on a technological perspective of CRM. The paper recommends that the business problem first be defined, the business processes be defined for the solution, and that technology then be used as an enabler.
---
paper_title: Business/IT-alignment for customer relationship management: framework and case studies
paper_content:
In this paper we apply business/IT-alignment principles to create a framework for CRM. We build upon earlier related work that showed that CRM performance has a positive correlation with the degree of maturity and alignment between business and IT. The business/IT alignment CRM framework facilitates assessing an organisation's current CRM maturity and identifying future CRM improvements. The framework is validated by two different cases, a Caribbean telecommunications firm and a Dutch insurance company. The results of these applications support the validity and applicability of the framework.
---
| Title: Adoption of CRM technology in multichannel environment: a review (2006‐2010)
Section 1: Introduction
Description 1: This section introduces CRM technology and outlines the purpose and objectives of the paper.
Section 2: Previous literature reviews in CRM
Description 2: This section presents existing literature reviews in the field of CRM and establishes the positioning of the current review.
Section 3: Review methodology
Description 3: This section explains the structured review methodology used for selecting and classifying relevant articles for the review.
Section 4: Findings from literature review
Description 4: This section discusses the results obtained from the literature review, including the distribution of articles and main themes identified.
Section 5: Discussion
Description 5: This section synthesizes the findings from the literature review, highlighting key issues and trends in CRM implementation in multichannel environments.
Section 6: Future research directions
Description 6: This section outlines potential future research areas and questions that have emerged from the literature review.
Section 7: Conclusion and limitations
Description 7: This section provides a summary of the study's findings, acknowledges its limitations, and offers concluding remarks. |
A Survey of Adaptive and Real-Time Protocols Based on IEEE 802.15.4 | 6 | ---
paper_title: A Smart Gateway for Health Care System Using Wireless Sensor Network
paper_content:
Using Wireless Sensor Networks (WSNs) in health care systems has attracted tremendous effort in recent years. However, in most of this research, tasks like sensor data processing, health state decision making and emergency message sending are completed by a remote server. Transmitting and handling the large volume of data from body sensors and home sensors consumes a lot of communication resources, burdens the remote server and delays decision and notification times. In this paper, we present a prototype of a smart gateway that we have implemented. This gateway is an interconnection and service management platform, especially for WSN health care systems in the home environment. By building a bridge between a WSN and public communication networks, and by integrating an on-board data decision system and a lightweight database, our smart gateway can make patients' health state decisions on a low-power, low-cost embedded system and respond faster to emergencies. We have also designed the communication protocols between the WSN, the gateway and remote servers. Additionally, Ethernet, Wi-Fi and GSM/GPRS communication modules are integrated into the smart gateway in order to report and notify information to caregivers. We have conducted experiments on the proposed smart gateway by operating it together with a wireless home e-health care sensor network. The results show that the smart gateway design is feasible and has low latency.
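To make the gateway's local decide-and-notify flow concrete, here is a minimal sketch of how such a gateway might classify an incoming body-sensor reading and choose a notification channel. The thresholds, message format, and channel choices are assumptions for illustration, not the protocol defined in the paper.

```python
# Minimal sketch of local decision making in a WSN health-care gateway.
# Thresholds, reading format, and channel selection are illustrative assumptions.

def assess_reading(reading: dict) -> str:
    """Classify a sensor reading as 'normal', 'warning', or 'emergency'."""
    hr = reading["heart_rate_bpm"]
    if hr < 40 or hr > 140:
        return "emergency"
    if hr < 50 or hr > 110:
        return "warning"
    return "normal"

def notify(state: str, reading: dict) -> None:
    """Pick a notification path based on urgency (e.g. GSM/SMS vs. periodic upload)."""
    if state == "emergency":
        print(f"SMS to caregiver: {reading}")         # low-latency path
    elif state == "warning":
        print(f"Push to remote server now: {reading}")
    else:
        print("Buffer locally for periodic upload")    # save bandwidth and energy

if __name__ == "__main__":
    sample = {"patient_id": 7, "heart_rate_bpm": 150}
    notify(assess_reading(sample), sample)
```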
---
paper_title: Improving the IEEE 802.15.4 Slotted CSMA/CA MAC for Time-Critical Events in Wireless Sensor Networks
paper_content:
In beacon-enabled mode, IEEE 802.15.4 is ruled by the slotted CSMA/CA Medium Access Control (MAC) protocol. The standard slotted CSMA/CA mechanism does not provide any means of differentiated services to improve the quality of service for time-critical events (such as alarms, time slot reservation, PAN management messages etc.). In this paper, we present and discuss practical service differentiation mechanisms to improve the performance of slotted CSMA/CA for time-critical events, with only minor add-ons to the protocol. The contribution of our proposal is more practical than theoretical since our initial requirement is to leave the original algorithm of the slotted CSMA/CA unchanged, but rather tuning its parameters adequately according to the criticality of the messages. We present a simulation study based on an accurate model of the IEEE 802.15.4 MAC protocol, to evaluate the differentiated service strategies. Four scenarios with different settings of the slotted CSMA/CA parameters are defined. Each scenario is evaluated for FIFO and Priority Queuing. The impact of the hidden-node problem is also analyzed, and a solution to mitigate it is proposed.
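The tuning idea can be sketched as follows: each traffic class gets its own slotted CSMA/CA parameter set (macMinBE, macMaxBE, contention window, backoff limit), and the MAC selects the set by message criticality. The specific values below are illustrative assumptions, not the settings evaluated in the paper.

```python
# Sketch: per-class CSMA/CA parameter selection for service differentiation.
# Parameter values are illustrative; smaller backoff exponents and CW favor
# time-critical traffic at the cost of a higher collision probability.

CSMA_PROFILES = {
    "time_critical": {"macMinBE": 1, "macMaxBE": 3, "CW": 1, "macMaxCSMABackoffs": 5},
    "best_effort":   {"macMinBE": 3, "macMaxBE": 5, "CW": 2, "macMaxCSMABackoffs": 4},
}

def backoff_parameters(msg_class: str) -> dict:
    """Return the CSMA/CA parameter set for a given message class."""
    return CSMA_PROFILES.get(msg_class, CSMA_PROFILES["best_effort"])

print(backoff_parameters("time_critical"))
```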
---
paper_title: Performance Evaluation of IEEE 802.15.4 MAC with Different Backoff Ranges in Wireless Sensor Networks
paper_content:
The IEEE 802.15.4 MAC (Medium Access Control) is a protocol used in many applications including the wireless sensor network. Yet the IEEE 802.15.4 MAC layer cannot support different throughput performance for individual nodes with the current specifications. However, if certain nodes are sending data more frequently compared to others, with the standard MAC, it is hard to achieve network efficiency. Therefore, we modified the IEEE 802.15.4 MAC and additionally proposed a new State Transition Scheme. By adjusting the minBE value of some nodes to a smaller value and by dynamically changing the value depending on the transmission conditions, we shortened the backoff delay of nodes with frequent transmission. It was observed through our simulations that the throughput of the node with a lower minBE value increased significantly, compared to nodes with the original BE range of 3 to 5. Also by the use of the State Transition Scheme the total network efficiency increased leading to increase in throughput performance.
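A minimal sketch of this kind of adaptation is shown below: a node that transmits frequently and successfully keeps a lower macMinBE, and otherwise drifts back toward the standard range of 3 to 5. The transition conditions and values are assumptions for illustration rather than the exact state transition scheme of the paper.

```python
# Sketch: dynamic adjustment of macMinBE based on recent transmission outcomes.
# The adaptation rule below is an illustrative assumption.

class BackoffState:
    def __init__(self):
        self.min_be = 3  # standard default

    def update(self, success: bool, frequent_sender: bool) -> int:
        if success and frequent_sender:
            # Frequent, successful senders get a shorter backoff range.
            self.min_be = max(1, self.min_be - 1)
        else:
            # Otherwise drift back toward the standard minBE of 3.
            self.min_be = min(3, self.min_be + 1)
        return self.min_be

state = BackoffState()
for outcome in [True, True, False]:
    print(state.update(success=outcome, frequent_sender=True))
```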
---
paper_title: Adaptation of MAC Layer for QoS in WSN
paper_content:
In this paper, we propose a QoS-aware MAC protocol for Wireless Sensor Networks. In WSNs, there can be two types of traffic: event-driven traffic, which requires immediate attention, and periodic reporting. Event-driven traffic is classified as Class I (delay-sensitive) traffic and periodic reporting is classified as Class II (best-effort) traffic. MAC layer adaptation can take place in terms of (i) dynamic contention window adjustment per class, (ii) reducing the delay caused by the difference in sleep schedules (DSS) of communicating nodes by dynamically adjusting the duty cycle based on utilization and the DSS delay of Class I traffic, (iii) different DIFS (DCF Inter Frame Spacing) per class, and (iv) adjusting all three schemes simultaneously.
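A toy version of the duty-cycle adaptation in item (ii) could look like the sketch below, where the active period grows under high channel utilization or when Class I traffic sees a large sleep-schedule-difference (DSS) delay. The gains, thresholds, and bounds are invented for illustration and are not the paper's scheme.

```python
# Sketch: adapting the duty cycle from utilization and Class I DSS delay.
# Gains, thresholds, bounds, and the multiplicative update rule are assumptions.

def adapt_duty_cycle(duty_cycle: float, utilization: float,
                     dss_delay_ms: float, delay_budget_ms: float = 100.0) -> float:
    """Increase the duty cycle under high load or when delay-sensitive traffic
    suffers large sleep-schedule-difference delays; decrease it otherwise."""
    if utilization > 0.7 or dss_delay_ms > delay_budget_ms:
        duty_cycle *= 1.2   # wake up more often
    elif utilization < 0.3 and dss_delay_ms < 0.5 * delay_budget_ms:
        duty_cycle *= 0.9   # save energy
    return min(1.0, max(0.01, duty_cycle))

print(adapt_duty_cycle(0.10, utilization=0.8, dss_delay_ms=150.0))
```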
---
paper_title: Analysis of the contention access period of IEEE 802.15.4 MAC
paper_content:
The recent ratification of IEEE 802.15.4 PHY-MAC specifications for low-rate wireless personal area networks represents a significant milestone in promoting deployment of wireless sensor networks (WSNs) for a variety of commercial uses. The 15.4 specifications specifically target wireless networking among low-rate, low-power and low-cost devices that is expected to be a key market segment for a large number of WSN applications. In this article, we first analyze the performance of the contention access period specified in the IEEE 802.15.4 standard in terms of throughput and energy consumption. This analysis is facilitated by a modeling of the contention access period as nonpersistent CSMA with backoff. We show that, in certain applications in which having an inactive period in the superframe may not be desirable due to delay constraints, shutting down the radio between transmissions provides significant savings in power without significantly compromising the throughput. We also propose and analyze the performance of a modification to the specification which could be used for applications in which MAC-level acknowledgements are not used. Extensive ns-2 simulations are used to verify the analysis.
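Since the contention access period is modeled as nonpersistent CSMA with backoff, the classical (unslotted) nonpersistent CSMA throughput expression gives a feel for this kind of analysis. The snippet below evaluates the textbook Kleinrock-Tobagi form as a rough proxy only; it is not the paper's full 802.15.4-specific model.

```python
# Throughput of classical (unslotted) nonpersistent CSMA as a function of the
# offered load G and the normalized propagation delay a:
#   S = G * exp(-a*G) / (G*(1 + 2*a) + exp(-a*G))
# Used here only as a rough proxy for the CAP analysis described above.
import math

def npcsma_throughput(G: float, a: float) -> float:
    return (G * math.exp(-a * G)) / (G * (1 + 2 * a) + math.exp(-a * G))

for G in (0.5, 1.0, 2.0, 5.0):
    print(f"G={G:4.1f}  S={npcsma_throughput(G, a=0.01):.3f}")
```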
---
paper_title: Resource-aware scheduled control of distributed process systems over wireless sensor networks
paper_content:
This paper presents an integrated model-based networked control and sensor scheduling framework for spatially-distributed processes modeled by parabolic PDEs controlled over a resource-constrained wireless sensor network (WSN). The framework aims to enforce closed-loop stability with minimal information transfer over the WSN. Based on an approximate finite-dimensional system that captures the dominant dynamics of the PDE, a feedback controller is initially designed together with a state observer a copy of which is embedded within each sensor. Information transfer over the WSN is reduced by embedding within the controller and the sensors a finite-dimensional model. Communication is suspended periodically for extended time periods during which the model is used by the controller to generate the necessary control action and by the observers to generate state estimates. Communication is then re-established at discrete times according to a certain scheduling strategy in which only one sensor is allowed to transmit its state estimate at a time to update the states of the models, while the rest are kept dormant. A hybrid system formulation is used to explicitly characterize the interplays between the communication rate, the sensor transmission schedule, the model uncertainty and the spatial placement of the sensors. Finally, the proposed methodology is illustrated through an application to a diffusion-reaction process example.
---
paper_title: An Efficient Analysis for Reliable Data Transmission in Wireless Sensor Network
paper_content:
Ubiquitous technology based on sensor networks is being applied to numerous industrial fields, especially to increase the quality of human life (QoL). Lossless data delivery in Wireless Sensor Networks (WSNs) is therefore one of the communication challenges in providing accurate data. Although end-to-end data retransmission has evolved as reliable transport in the Internet, this method is not applicable to WSNs due to the unreliability of wireless links and the resource constraints of sensor nodes. In our previous paper, we proposed a reliable data transfer scheme for WSNs using path reliability and implicit ACKs, called RTOD. However, the components of the path reliability calculation in RTOD, such as RSSI, channel error rate and number of transmissions, had not been studied thoroughly. In this paper, we analyze the path reliability components and simulate them using NS-2. Moreover, we propose a limited number of transmissions method (LTM) for WSNs. The proposed scheme shows an average fault tolerance of 4.1%.
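One way to picture the path-reliability idea: with a per-attempt frame error rate on each hop and at most N transmission attempts per hop (a limited number of transmissions), the per-hop delivery probability is 1 - p_err^N and the end-to-end path reliability is the product over the hops. The error rates and attempt cap below are illustrative assumptions, not the RTOD/LTM formulation itself.

```python
# Sketch: end-to-end path reliability under a capped number of transmissions per hop.
# Per-hop error rates and the attempt limit are illustrative values.

def hop_delivery_prob(frame_error_rate: float, max_tx: int) -> float:
    """Probability that at least one of max_tx attempts succeeds on a hop."""
    return 1.0 - frame_error_rate ** max_tx

def path_reliability(error_rates, max_tx: int) -> float:
    prob = 1.0
    for per in error_rates:
        prob *= hop_delivery_prob(per, max_tx)
    return prob

# Example: a 3-hop path with 10-20% per-attempt loss and up to 3 attempts per hop.
print(f"{path_reliability([0.1, 0.2, 0.15], max_tx=3):.4f}")
```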
---
paper_title: Distributed Activity Recognition with Fuzzy-Enabled Wireless Sensor Networks
paper_content:
Wireless sensor nodes can act as distributed detectors for recognizing activities online, with the final goal of assisting the users in their working environment. We propose an activity recognition architecture based on fuzzy logic, through which multiple nodes collaborate to produce a reliable recognition result from unreliable sensor data. As an extension to the normal fuzzy inference, we incorporate temporal order knowledge of the sequences of operations involved in the activities. The performance evaluation is based on experimental data from a car assembly trial. The average recognition accuracy is 93.53%. We also present early experiences with implementing the recognition system on sensor nodes. The results show that the algorithms can run online, with execution times less than 40 ms for the whole recognition chain and memory overhead in the order of 1.5 kB RAM.
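A stripped-down version of the fuzzy step might look like the sketch below: sensor features are fuzzified with triangular membership functions and a single rule combines them with a min operator for AND. The feature names, membership parameters, and rule are hypothetical, and the paper's actual system additionally weights rules by the temporal order of operations.

```python
# Minimal fuzzy-inference sketch: triangular memberships plus one min-rule.
# Feature names, membership parameters, and the rule itself are illustrative.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def activity_confidence(arm_accel: float, tool_vibration: float) -> float:
    """Rule: IF arm acceleration is 'high' AND tool vibration is 'present'
    THEN activity 'tightening screw' (min implements the AND)."""
    high_accel = tri(arm_accel, 0.5, 1.5, 2.5)            # acceleration in g
    vib_present = tri(tool_vibration, 10.0, 50.0, 90.0)   # vibration in Hz
    return min(high_accel, vib_present)

print(f"{activity_confidence(arm_accel=1.2, tool_vibration=45.0):.2f}")
```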
---
paper_title: Building Automation Systems Using Wireless Sensor Networks: Radio Characteristics and Energy Efficient Communication Protocols
paper_content:
Building automation systems (BAS) are typically used to monitor and control heating, ventilation, and air conditioning (HVAC) systems, manage building facilities (e.g., lighting, safety, and security), and automate meter reading. In recent years, the technology of wireless sensor network (WSN) has been attracting extensive research and development efforts to replace the traditional wired solutions for BAS. Key challenges of integrating WSN to a BAS include characterizing the radio features of BAS environments, and fulfilling the requirements of the extremely low energy consumption. In this survey paper, we first describe the radio characteristics of indoor environments, and then introduce the important medium access control (MAC) protocols developed for WSN which can be potentially used in BAS systems.
---
paper_title: An Analytical Model for Evaluating IEEE 802.15.4 CSMA/CA Protocol in Low-Rate Wireless Application
paper_content:
IEEE 802.15.4/ZigBee was developed to meet the needs of wireless personal area networks, adopting the CSMA/CA algorithm as the retransmission mechanism after a collision occurs. Existing evaluations of 802.15.4, however, are mainly based on simulation, and the few existing mathematical models are either immature or based on inaccurate assumptions, and thus fail to keep pace with the rapid evolution and increasing complexity of low-rate wireless applications such as wireless sensor networks (WSNs). Therefore, to evaluate the performance of the 802.15.4 protocol and better exploit its strengths for low-data-rate wireless applications, this paper presents an accurate two-dimensional discrete Markov chain model of the 802.15.4 CSMA/CA mechanism, focusing on the relationship between throughput and the related parameters. By building a Markov chain for the backoff counter and computing the stationary distribution and the probability of successful transmission under channel collisions, the paper derives formulas for throughput and energy consumption. In the experimental part, we use the ns-2 simulator to evaluate the performance of 802.15.4 in the 2.4 GHz band under different conditions, and the results validate our theoretical conclusions. Given that prior work is mainly simulation-based and existing models lack accuracy, our combined theoretical and simulation-based analytical model is both novel and valuable.
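While the paper's two-dimensional chain tracks both the backoff stage and the backoff counter, the core computational step (finding the stationary distribution of a discrete-time Markov chain) can be sketched generically. The 3-state transition matrix below is a made-up toy example, not the 802.15.4 CSMA/CA model.

```python
# Sketch: stationary distribution of a discrete-time Markov chain by power iteration.
# The 3-state transition matrix is a toy example, not the 802.15.4 CSMA/CA chain.
import numpy as np

def stationary_distribution(P: np.ndarray, iters: int = 1000) -> np.ndarray:
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from the uniform distribution
    for _ in range(iters):
        pi = pi @ P                              # one step of the chain
    return pi / pi.sum()

# Toy states: 0 = backoff, 1 = CCA, 2 = transmit. Rows sum to 1.
P = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.0, 0.7],
              [0.5, 0.5, 0.0]])
print(stationary_distribution(P))
```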
---
paper_title: Wireless sensor networks for structure health monitoring: recent advances and future research directions
paper_content:
Purpose – The purpose of this paper is to provide a contemporary look at the current state‐of‐the‐art in wireless sensor networks (WSNs) for structure health monitoring (SHM) applications and discuss the still‐open research issues in this field and, hence, to make the decision‐making process more effective and direct.Design/methodology/approach – This paper presents a comprehensive review of WSNs for SHM. It also introduces research challenges, opportunities, existing and potential applications. Network architecture and the state‐of‐the‐art wireless sensor communication technologies and standards are explained. Hardware and software of the existing systems are also clarified.Findings – Existing applications and systems are presented along with their advantages and disadvantages. A comparison landscape and open research issues are also presented.Originality/value – The paper presents a comprehensive and recent review of WSN systems for SHM applications along with open research issues.
---
paper_title: A Novel MAC Protocol for Event-Based Wireless Sensor Networks: Improving the Collective QoS
paper_content:
WSNs usually combine periodic readings with messages generated by unexpected events. When an event is detected by a group of sensors, several notification messages are sent simultaneously to the sink, resulting in sporadic increases of the network load. Additionally, these messages sometimes require a lower latency and higher reliability as they can be associated to emergency situations. Current MAC protocols for WSNs are not able to react rapidly to these sporadic changes on the traffic load, mainly due to the duty cycle operation, adopted to save energy in the sensor nodes, resulting in message losses or high delays that compromise the event detection at sink. In this work, two main contributions are provided: first, the collective QoS definitions are applied to measure event detection capabilities and second, a novel traffic-aware Low Power Listening MAC to improve the network response to sporadic changes in the traffic load is presented. Results show that the collective QoS in terms of collective throughput, latency and reliability are improved maintaining a low energy consumption at each individual sensor node.
---
paper_title: Analysis of the contention access period of IEEE 802.15.4 MAC
paper_content:
The recent ratification of IEEE 802.15.4 PHY-MAC specifications for low-rate wireless personal area networks represents a significant milestone in promoting deployment of wireless sensor networks (WSNs) for a variety of commercial uses. The 15.4 specifications specifically target wireless networking among low-rate, low-power and low-cost devices that is expected to be a key market segment for a large number of WSN applications. In this article, we first analyze the performance of the contention access period specified in the IEEE 802.15.4 standard in terms of throughput and energy consumption. This analysis is facilitated by a modeling of the contention access period as nonpersistent CSMA with backoff. We show that, in certain applications in which having an inactive period in the superframe may not be desirable due to delay constraints, shutting down the radio between transmissions provides significant savings in power without significantly compromising the throughput. We also propose and analyze the performance of a modification to the specification which could be used for applications in which MAC-level acknowledgements are not used. Extensive ns-2 simulations are used to verify the analysis.
---
paper_title: Performance modeling and analysis of IEEE 802.15.4 slotted CSMA/CA protocol with ACK mode
paper_content:
In this paper, we propose an analytic model to analyze a wireless sensor network (WSN) operating the IEEE 802.15.4 slotted CSMA/CA protocol with ACK mode. Based on the renewal approximation for the data frame transmission process, we first develop an analytic model for a homogeneous WSN where all nodes in the network have the same traffic characteristic. Using our analytical model, we obtain performance metrics such as network throughput, average service time for successful transmissions, the transmission success probability, the frame dropping probability due to transmission failures, and the frame dropping probability due to CCA failures. Through numerical studies and simulations, we validate our analytic model.
---
paper_title: Modeling and analysis of IEEE 802.15.4 CSMA/CA with sleep mode enabled
paper_content:
According to the widely used IEEE 802.15.4, one of the most effective ways to reduce the power consumption of sensor nodes is to enable the optional sleep mode by radio shutdown. In this paper, we propose an extended Markov-based analytical model for the IEEE 802.15.4 slotted carrier sense multiple access/collision avoidance (CSMA/CA) algorithm considering the newly enabled sleep mode. To take account of the active/sleep transitions in our model, we particularly analyze the impact of the duty cycle on throughput and power consumption. Moreover, some discussion based on the numerical results of the model is carried out for further understanding. It is shown that the results given by our model match the simulations accurately, and that the impacts of the sleep mode on IEEE 802.15.4 can be studied well with the model.
---
paper_title: An Analytical Model for Evaluating IEEE 802.15.4 CSMA/CA Protocol in Low-Rate Wireless Application
paper_content:
The IEEE 802.15.4/ZigBee standard was developed to meet the needs of wireless personal area networks, adopting the CSMA/CA algorithm as the retransmission mechanism after a collision occurs. However, existing evaluations of 802.15.4 are mainly based on simulation, and the few existing mathematical models are either immature or built on inaccurate assumptions, so they fail to keep pace with the rapid evolution and increasing complexity of low-rate wireless applications such as wireless sensor networks (WSN). Therefore, to evaluate the performance of the 802.15.4 protocol and better exploit its strengths for low-data-rate wireless applications, this paper presents an accurate two-dimensional discrete Markov chain model for evaluating the 802.15.4 CSMA/CA mechanism, focusing on the relationship between throughput and the related parameters. By building a Markov chain for the backoff counter and computing the stationary distribution and the probability of successful transmission under channel collision, the paper derives throughput and energy consumption formulas. In the experimental part, we use the ns2 simulator to evaluate the performance of 802.15.4 in the 2.4 GHz band under different conditions, and the results validate the theoretical conclusions. Considering that earlier work is mainly simulation based and that existing models lack accuracy, this combined theoretical and simulation-based analytical model is both novel and valuable.
---
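The analytical model described in the abstract above ultimately turns the stationary behaviour of a backoff Markov chain into a throughput figure. As a rough illustration of that last step only, the Python sketch below applies the standard slotted-contention decomposition (idle, success and collision slots); the per-slot attempt probability tau, the slot durations and the node count are hypothetical inputs, not quantities taken from the paper, and the paper's actual two-dimensional chain is not reproduced here.

```python
def slotted_throughput(n, tau, sigma=1.0, t_success=30.0, t_collision=30.0, t_payload=22.0):
    """Normalized throughput of n nodes, each attempting transmission with
    per-slot probability tau (simplified slotted-contention model)."""
    p_idle = (1 - tau) ** n                  # nobody transmits in the slot
    p_succ = n * tau * (1 - tau) ** (n - 1)  # exactly one node transmits
    p_coll = 1 - p_idle - p_succ             # two or more nodes transmit
    avg_slot = p_idle * sigma + p_succ * t_success + p_coll * t_collision
    return p_succ * t_payload / avg_slot

if __name__ == "__main__":
    for tau in (0.01, 0.05, 0.1, 0.2):
        print(tau, round(slotted_throughput(n=10, tau=tau), 3))
```

In a full model, tau itself would be obtained by solving the Markov chain's fixed point rather than being supplied by hand as above.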
paper_title: Improving the IEEE 802.15.4 Slotted CSMA/CA MAC for Time-Critical Events in Wireless Sensor Networks
paper_content:
In beacon-enabled mode, IEEE 802.15.4 is ruled by the slotted CSMA/CA Medium Access Control (MAC) protocol. The standard slotted CSMA/CA mechanism does not provide any means of differentiated services to improve the quality of service for time-critical events (such as alarms, time slot reservation, PAN management messages, etc.). In this paper, we present and discuss practical service differentiation mechanisms to improve the performance of slotted CSMA/CA for time-critical events, with only minor add-ons to the protocol. The contribution of our proposal is more practical than theoretical since our initial requirement is to leave the original algorithm of the slotted CSMA/CA unchanged, tuning its parameters adequately according to the criticality of the messages. We present a simulation study based on an accurate model of the IEEE 802.15.4 MAC protocol, to evaluate the differentiated service strategies. Four scenarios with different settings of the slotted CSMA/CA parameters are defined. Each scenario is evaluated for FIFO and Priority Queuing. The impact of the hidden-node problem is also analyzed, and a solution to mitigate it is proposed.
---
paper_title: Performance Evaluation of IEEE 802.15.4 MAC with Different Backoff Ranges in Wireless Sensor Networks
paper_content:
The IEEE 802.15.4 MAC (Medium Access Control) is a protocol used in many applications including the wireless sensor network. Yet the IEEE 802.15.4 MAC layer cannot support different throughput performance for individual nodes with the current specifications. However, if certain nodes are sending data more frequently compared to others, with the standard MAC, it is hard to achieve network efficiency. Therefore, we modified the IEEE 802.15.4 MAC and additionally proposed a new State Transition Scheme. By adjusting the minBE value of some nodes to a smaller value and by dynamically changing the value depending on the transmission conditions, we shortened the backoff delay of nodes with frequent transmission. It was observed through our simulations that the throughput of the node with a lower minBE value increased significantly, compared to nodes with the original BE range of 3 to 5. Also by the use of the State Transition Scheme the total network efficiency increased leading to increase in throughput performance.
---
paper_title: Adaptive backoff exponent algorithm for zigbee (IEEE 802.15.4)
paper_content:
The IEEE 802.15.4 is a new wireless personal area network standard designed for wireless monitoring and control applications. In this paper, a study of backoff exponent (BE) management in CSMA-CA for 802.15.4 is conducted. The BEs determine the number of backoff periods a device shall wait before accessing the channel. The power consumption requirements make CSMA-CA use fewer BEs, which increases the probability of devices choosing identical BEs and, as a result, waiting for the same number of backoff periods in some cases. This inefficiency degrades system performance in congestion scenarios by causing more collisions. This paper addresses the problem by proposing an efficient management of BEs based on a decision criterion, which reduces potential packet collisions with other devices. The results of NS-2 simulations indicate an overall improvement in effective data bandwidth, validating our claim.
---
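As a toy illustration of the kind of backoff-exponent management discussed above, the sketch below raises or lowers a node's BE from recent collision feedback; the thresholds, BE bounds and failure counter are assumptions made for the example, not the decision criterion of the cited algorithm.

```python
import random

MIN_BE, MAX_BE = 1, 5

def adapt_be(current_be, recent_failures, fail_threshold=2):
    """Hypothetical policy: back off harder after repeated failures,
    shrink the wait again when the channel looks quiet."""
    if recent_failures >= fail_threshold:
        return min(current_be + 1, MAX_BE)   # congested: wait longer
    return max(current_be - 1, MIN_BE)       # quiet channel: wait less

def backoff_slots(be):
    return random.randint(0, 2 ** be - 1)    # CSMA/CA random backoff window
```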
paper_title: Priority-Based Service-Differentiation Scheme for IEEE 802.15.4 Sensor Networks in Nonsaturation Environments
paper_content:
In this paper, we propose two mechanisms, i.e., 1) backoff exponent differentiation (BED) and 2) contention window differentiation (CWD), to provide multilevel differentiated services for IEEE 802.15.4 sensor networks. The beacon-enabled mode with the slotted carrier-sense multiple-access mechanism with collision avoidance (CSMA/CA) algorithm is considered for nonsaturation. A mathematical model based on a discrete-time Markov chain is presented and analyzed to measure the performance of the proposed mechanisms. Numerical results show the effects of varying parameters. It shows that BED is better for service differentiation and channel utilization, regardless of the number of devices and the packet-arrival rate. It also shows that the throughput depends on the backoff exponent (BE), contention window (CW), and packet size and that the relative delay for successfully transmitted packets for BED is longer than that for CWD, which enables a higher relative gain for CWD. In addition, we obtain the number of unit backoff periods required to process all of the head packets of devices. Based on the numerical results, we obtain a criterion that provides guidelines for choosing a mechanism that is suitable for the objective function and determining the optimal length of the superframe to process the packets in the system.
---
paper_title: Priority-based delay mitigation for event-monitoring IEEE 802.15.4 LR-WPANs
paper_content:
IEEE 802.15.4 slotted carrier-sense multiple access with collision avoidance (CSMA-CA) adopts periodic sleeping for energy efficiency support. However, such a periodic sleeping mechanism, especially with contention-based medium access, tends to cause additional sleep delay due to heavy contention. In this letter, we propose a priority-based scheme, comprising frame tailoring and priority toning, in order to relax such a problematic delay and guarantee time-bounded delivery of high priority packets in event-monitoring networks.
---
paper_title: An implicit GTS allocation mechanism in IEEE 802.15.4 for time-sensitive wireless sensor networks: theory and practice
paper_content:
Timeliness guarantee is an important feature of the recently standardized IEEE 802.15.4 protocol, making it quite appealing for Wireless Sensor Network (WSN) applications under timing constraints. When operating in beacon-enabled mode, this protocol allows nodes with real-time requirements to allocate Guaranteed Time Slots (GTS) in the contention-free period. The protocol natively supports explicit GTS allocation, i.e. a node allocates a number of time slots in each superframe for exclusive use. The limitation of this explicit GTS allocation is that GTS resources may quickly disappear, since a maximum of seven GTSs can be allocated in each superframe, preventing other nodes from benefiting from guaranteed service. Moreover, the GTS may be underutilized, resulting in wasted bandwidth. To overcome these limitations, this paper proposes i-GAME, an implicit GTS Allocation Mechanism in beacon-enabled IEEE 802.15.4 networks. The allocation is based on implicit GTS allocation requests, taking into account the traffic specifications and the delay requirements of the flows. The i-GAME approach enables the use of one GTS by multiple nodes, still guaranteeing that all their (delay, bandwidth) requirements are satisfied. For that purpose, we propose an admission control algorithm that decides whether or not to accept a new GTS allocation request, based not only on the remaining time slots, but also on the traffic specifications of the flows, their delay requirements and the available bandwidth resources. We show that our approach improves the bandwidth utilization as compared to the native explicit allocation mechanism defined in the IEEE 802.15.4 standard. We also present some practical considerations for the implementation of i-GAME, ensuring backward compatibility with the IEEE 802.15.4 standard with only minor add-ons. Finally, an experimental evaluation on a real system that validates our theoretical analysis and demonstrates the implementation of i-GAME is also presented.
---
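A hedged sketch of an admission test in the spirit of the implicit GTS allocation described above: a new flow is accepted only if the aggregate demand still fits the contention-free period and every flow's delay bound is respected. The Flow fields, the capacity figure and the one-superframe worst-case delay are simplifying assumptions, not i-GAME's actual admission condition.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    rate_bps: float        # average arrival rate of the flow
    deadline_s: float      # end-to-end delay requirement

def admit(flows, new_flow, cfp_capacity_bps, superframe_s):
    """Return True if new_flow can be admitted alongside the existing flows."""
    candidate = flows + [new_flow]
    if sum(f.rate_bps for f in candidate) > cfp_capacity_bps:
        return False                       # not enough guaranteed bandwidth
    # worst case, a packet waits one full superframe before its slot comes up
    return all(superframe_s <= f.deadline_s for f in candidate)
```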
paper_title: An Adaptive GTS Allocation Scheme for IEEE 802.15.4
paper_content:
IEEE 802.15.4 is a new standard uniquely designed for low-rate wireless personal area networks. It targets ultralow complexity, cost, and power for low-rate wireless connectivity among inexpensive, portable, and moving devices. IEEE 802.15.4 provides a guaranteed time slot (GTS) mechanism to allocate a specific duration within a superframe for time-critical transmissions. This paper proposes an adaptive GTS allocation (AGA) scheme for IEEE 802.15.4, which considers low latency and fairness. The scheme is designed based on the existing IEEE 802.15.4 medium access control protocol, and IEEE 802.15.4 devices can receive this AGA service without any modification. A simulation model and an analytical model are developed to investigate the performance of our AGA scheme. The numerical results show that the proposed scheme significantly outperforms the existing IEEE 802.15.4 implementation.
---
paper_title: Adaptive polling algorithm for PCF mode of IEEE 802.11 wireless LANs
paper_content:
An adaptive PCF polling algorithm based on recent polling feedback is proposed to improve the medium utilisation rate of IEEE 802.11 wireless LANs. It is compatible with the IEEE 802.11 standard and requires a simple extension. Simulation studies show that the PCF performance can be improved in terms of the successful poll rate and the aggregate throughput.
---
paper_title: An implicit GTS allocation mechanism in IEEE 802.15.4 for time-sensitive wireless sensor networks: theory and practice
paper_content:
Timeliness guarantee is an important feature of the recently standardized IEEE 802.15.4 protocol, making it quite appealing for Wireless Sensor Network (WSN) applications under timing constraints. When operating in beacon-enabled mode, this protocol allows nodes with real-time requirements to allocate Guaranteed Time Slots (GTS) in the contention-free period. The protocol natively supports explicit GTS allocation, i.e. a node allocates a number of time slots in each superframe for exclusive use. The limitation of this explicit GTS allocation is that GTS resources may quickly disappear, since a maximum of seven GTSs can be allocated in each superframe, preventing other nodes from benefiting from guaranteed service. Moreover, the GTS may be underutilized, resulting in wasted bandwidth. To overcome these limitations, this paper proposes i-GAME, an implicit GTS Allocation Mechanism in beacon-enabled IEEE 802.15.4 networks. The allocation is based on implicit GTS allocation requests, taking into account the traffic specifications and the delay requirements of the flows. The i-GAME approach enables the use of one GTS by multiple nodes, still guaranteeing that all their (delay, bandwidth) requirements are satisfied. For that purpose, we propose an admission control algorithm that decides whether or not to accept a new GTS allocation request, based not only on the remaining time slots, but also on the traffic specifications of the flows, their delay requirements and the available bandwidth resources. We show that our approach improves the bandwidth utilization as compared to the native explicit allocation mechanism defined in the IEEE 802.15.4 standard. We also present some practical considerations for the implementation of i-GAME, ensuring backward compatibility with the IEEE 802.15.4 standard with only minor add-ons. Finally, an experimental evaluation on a real system that validates our theoretical analysis and demonstrates the implementation of i-GAME is also presented.
---
paper_title: Guaranteeing Real-Time Services for Industrial Wireless Sensor Networks With IEEE 802.15.4
paper_content:
Industrial applications of wireless sensor networks require timeliness in exchanging messages among nodes. Although IEEE 802.15.4 provides a superframe structure for real-time communication, a real-time message-scheduling algorithm is still required to schedule a large number of real-time messages to meet their timing constraints. We propose a distance-constrained real-time offline message-scheduling algorithm which generates the standard specific parameters such as beacon order, superframe order, and guaranteed-time-slot information and allocates each periodic real-time message to superframe slots for a given message set. The proposed scheduling algorithm is evaluated and analyzed extensively through simulations. In addition, a guaranteed time service is implemented in a typical industrial sensor node platform with a well-known IEEE 802.15.4-compliant transceiver CC2420 and ATmega128L to verify the feasibility of the guaranteed time service with the schedule generated by the proposed scheduling algorithm. Through experiments, we prove that the real system runs accurately according to the schedule calculated by the proposed algorithm.
---
paper_title: An Optimization-Based GTS Allocation Scheme for IEEE 802.15.4 MAC with Application to Wireless Body-Area Sensor Networks
paper_content:
The IEEE 802.15.4 standard is widely used in wireless personal area networks (WPANs). This standard supports a limited number of guaranteed time slots (GTSs) for time-critical or delay-sensitive data transmission. We propose a GTS allocation scheme to improve reliability and bandwidth utilization in IEEE 802.15.4-based wireless body area sensor networks (WiBaSe-Nets). A knapsack problem is formulated to obtain an optimal GTS allocation such that a minimum bandwidth requirement is satisfied for the sensor devices. Simulation results show that the proposed scheme achieves better GTS utilization and a higher packet delivery ratio than the standard IEEE 802.15.4 scheme.
---
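The knapsack formulation mentioned above can be illustrated with a plain 0/1 knapsack over the (at most seven) GTS slots of a superframe. The request tuples and their "values" are hypothetical, and this generic dynamic program is only a stand-in for the paper's exact optimization.

```python
def allocate_gts(requests, capacity=7):
    """requests: list of (slots_needed, value). Returns indices of granted devices."""
    n = len(requests)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (w, v) in enumerate(requests, start=1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]              # skip device i-1
            if w <= c:                               # or grant its slots
                best[i][c] = max(best[i][c], best[i - 1][c - w] + v)
    # backtrack to recover which devices were granted slots
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= requests[i - 1][0]
    return list(reversed(chosen))

# example: (slots requested, illustrative "value" of serving the device)
print(allocate_gts([(2, 5), (3, 8), (4, 9), (1, 2)]))
```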
paper_title: Body Area Sensor Networks: Challenges and Opportunities
paper_content:
Body area sensors can enable novel applications in and beyond healthcare, but research must address obstacles such as size, cost, compatibility, and perceived value before networks that use such sensors can become widespread.
---
paper_title: An optimal GTS scheduling algorithm for time-sensitive transactions in IEEE 802.15.4 networks
paper_content:
IEEE 802.15.4 is a new enabling standard for low-rate wireless personal area networks and has been widely accepted as a de facto standard for wireless sensor networking. While primary motivations behind 802.15.4 are low power and low cost wireless communications, the standard also supports time and rate sensitive applications because of its ability to operate in TDMA access modes. The TDMA mode of operation is supported via the Guaranteed Time Slot (GTS) feature of the standard. In a beacon-enabled network topology, the Personal Area Network (PAN) coordinator reserves and assigns the GTS to applications on a first-come-first-served (FCFS) basis in response to requests from wireless sensor nodes. This fixed FCFS scheduling service offered by the standard may not satisfy the time constraints of time-sensitive transactions with delay deadlines. Such operating scenarios often arise in wireless video surveillance and target detection applications running on sensor networks. In this paper, we design an optimal work-conserving scheduling algorithm for meeting the delay constraints of time-sensitive transactions and show that the proposed algorithm outperforms the existing scheduling model specified in IEEE 802.15.4.
---
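As a small, hedged stand-in for the deadline-aware scheduling idea above, the snippet below simply orders pending GTS requests by earliest deadline; the request fields and numbers are invented for illustration and this is not the paper's work-conserving algorithm.

```python
def edf_schedule(requests):
    """requests: list of dicts with 'node', 'slots' and 'deadline' keys.
    Returns the requests ordered earliest-deadline-first."""
    return sorted(requests, key=lambda r: r["deadline"])

pending = [
    {"node": "cam-1", "slots": 2, "deadline": 0.040},
    {"node": "temp-3", "slots": 1, "deadline": 0.500},
    {"node": "alarm-7", "slots": 1, "deadline": 0.015},
]
print([r["node"] for r in edf_schedule(pending)])  # alarm-7 is served first
```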
paper_title: A comprehensive simulation study of slotted CSMA/CA for IEEE 802.15.4 wireless sensor networks
paper_content:
In this paper, we analyze the performance limits of the slotted CSMA/CA mechanism of IEEE 802.15.4 in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is due to its flexibility for WSN applications as compared to the non-beacon enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of the slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent) on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We introduce the concept of utility (U) as a combination of two or more metrics, to determine the best offered load range for an optimal behavior of the network. We show that the optimal network performance using slotted CSMA/CA occurs in the range of 35% to 60% with respect to an utility function proportional to the network throughput (S) divided by the average delay (D).
---
paper_title: Evaluating IEEE 802.15.4 for Cyber-Physical Systems
paper_content:
With rapid advancements in sensing, networking, and computing technologies, recent years have witnessed the emergence of cyber-physical systems (CPS) in a broad range of application domains. CPS is a new class of engineered systems that features the integration of computation, communications, and control. In contrast to general-purpose computing systems, many cyber-physical applications are safety critical. These applications impose considerable requirements on the quality of service (QoS) of the employed networking infrastructure. Since IEEE 802.15.4 has been widely considered a suitable protocol for CPS over wireless sensor and actuator networks, it is of vital importance to evaluate its performance extensively. To serve this purpose, this paper analyzes the performance of the IEEE 802.15.4 standard operating in its different modes. Extensive simulations have been conducted to examine how network QoS is impacted by some critical parameters. The results are presented and analyzed, which provides useful insights for network parameter configuration and optimization in CPS design.
---
paper_title: PECAP: Priority-Based Delay Alleviation Algorithm for IEEE 802.15.4 Beacon-Enabled Networks
paper_content:
A new priority-based algorithm for IEEE 802.15.4 beacon-enabled networks is proposed in order to alleviate an end-to-end delay while maintaining energy efficiency. In this paper, the active period is temporally increased to reduce the sleep delay that makes up a large portion of the end-to-end delay. Nodes having high-priority packets request the coordinator to execute an extended contention access period by sending a priority toning signal. The simulation results show that the proposed algorithm alleviates the end-to-end delay of high priority packets.
---
| Title: A Survey of Adaptive and Real-Time Protocols Based on IEEE 802.15.4
Section 1: Introduction
Description 1: Introduce the deployment of Wireless Sensor Networks (WSNs) using IEEE 802.15.4, its limitations, and the requirements for adaptive and real-time protocols.
Section 2: Overview of IEEE 802.15.4 Medium Access Control
Description 2: Provide an overview of the IEEE 802.15.4 standard, including channel access methods, superframe structure, CSMA/CA algorithm, and GTS mechanism.
Section 3: Approaches for Contention Access Period
Description 3: Discuss approaches to improve network efficiency during the contention access period, highlighting mechanisms such as adaptive backoff exponent, adaptive contention window, and other CSMA/CA-based solutions.
Section 4: Approaches for Contention-Free Period
Description 4: Examine strategies for enhancing the performance of the GTS mechanism during the contention-free period, including adaptive GTS allocation, implicit allocation mechanism, knapsack algorithm, and GTS scheduling algorithm.
Section 5: Cross-Period Approaches
Description 5: Explore approaches that dynamically adjust the length of CAP or CFP based on operating conditions, including the setting of BO and SO parameters and strategies for handling different traffic loads.
Section 6: Conclusions
Description 6: Summarize the limitations of the original IEEE 802.15.4 MAC protocols and review the various adaptive and real-time protocols discussed, highlighting the existing challenges and future research directions. |
A Literature Review: Stemming Algorithms for Indian Languages. | 4 | ---
paper_title: Stemming algorithms: a case study for detailed evaluation
paper_content:
The majority of information retrieval experiments are evaluated by measures such as average precision and average recall. Fundamental decisions about the superiority of one retrieval technique over another are made solely on the basis of these measures. We claim that average performance figures need to be validated with a careful statistical analysis and that there is a great deal of additional information that can be uncovered by looking closely at the results of individual queries. This article is a case study of stemming algorithms which describes a number of novel approaches to evaluation and demonstrates their value. © 1996 John Wiley & Sons, Inc.
---
paper_title: Discovering suffixes: A Case Study for Marathi Language
paper_content:
Suffix stripping is a pre-processing step required in a number of natural language processing applications. A stemmer is a tool used to perform this step. This paper presents and evaluates a rule-based and an unsupervised Marathi stemmer. The rule-based stemmer uses a set of manually extracted suffix stripping rules, whereas the unsupervised approach learns suffixes automatically from a set of words extracted from raw Marathi text. The performance of both stemmers has been compared on a test dataset consisting of 1500 manually stemmed words.
---
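The rule-based stemmers surveyed above boil down to longest-match suffix stripping. The sketch below shows that core loop; the suffix list is a made-up transliterated placeholder, whereas a real Marathi (or other Indian-language) stemmer would use a curated set of inflectional suffixes in the native script.

```python
# Longest suffixes are tried first so that e.g. "nchi" wins over "chi".
SUFFIXES = sorted(["ncha", "nchi", "la", "ne", "chi", "cha", "t"], key=len, reverse=True)

def stem(word, min_stem_len=2):
    """Strip the longest matching suffix, keeping a minimum stem length."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem_len:
            return word[: -len(suffix)]
    return word
```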
| Title: A Literature Review: Stemming Algorithms for Indian Languages
Section 1: INTRODUCTION
Description 1: Write an introduction about the relevance of a literature review in the study of stemming algorithms for Indian languages and the key concepts addressed such as Data Mining, Text Mining, Stemming, and Clustering.
Section 2: STEMMING: A REVIEW
Description 2: Summarize various stemming algorithms and significant studies from prior years relevant to stemming and their impact on Information Retrieval Systems (IRS).
Section 3: STEMMERS FOR INDIAN LANGUAGES: A REVIEW
Description 3: Review the literature specific to stemming algorithms developed for various Indian languages, detailing methods and evaluations conducted on languages like Hindi, Marathi, Telugu, Gujarati, and Tamil.
Section 4: CONCLUSION
Description 4: Discuss the role and impact of stemming in information retrieval systems, summarize the findings from the literature review, and suggest future directions for research in stemming algorithms for Indian languages. |
A Survey on Video Inpainting | 5 | ---
paper_title: Video Inpainting Under Constrained Camera Motion
paper_content:
A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground, it may occlude one object and be occluded by some other object. The algorithm consists of a simple preprocessing stage and two steps of video inpainting. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help to produce time consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are "occluded" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints. It permits some camera motion, is simple to implement, fast, does not require statistical models of background nor foreground, works well in the presence of rich and cluttered backgrounds, and the results show that there is no visible blurring or motion artifacts. A number of real examples taken with a consumer hand-held camera are shown supporting these findings.
---
paper_title: Video completion using tracking and fragment merging
paper_content:
Video completion is the problem of automatically filling space–time holes in video sequences left by the removal of unwanted objects in a scene. We solve it using texture synthesis, filling a hole inwards using three steps iteratively: we select the most promising target pixel at the edge of the hole, we find the source fragment most similar to the known part of the target’s neighborhood, and we merge source and target fragments to complete the target neighborhood, reducing the size of the hole.
---
paper_title: Video epitomes
paper_content:
Recently, "epitomes" were introduced as patch-based probability models that are learned by compiling together a large number of examples of patches from input images. In this paper, we describe how epitomes can be used to model video data and we describe significant computational speedups that can be incorporated into the epitome inference and learning algorithm. In the case of videos, epitomes are estimated so as to model most of the small space-time cubes from the input data. Then, the epitome can be used for various modeling and reconstruction tasks, of which we show results for video super-resolution, video interpolation, and object removal. Besides computational efficiency, an interesting advantage of the epitome as a representation is that it can be reliably estimated even from videos with large amounts of missing data. We illustrate this ability on the task of reconstructing the dropped frames in video broadcast using only the degraded video.
---
paper_title: Contour-Based Video Inpainting
paper_content:
NA
---
paper_title: Video Completion for Perspective Camera Under Constrained Motion
paper_content:
This paper presents a novel technique to fill in missing background and moving foreground of a video captured by a static or moving camera. Different from previous efforts which are typically based on processing in the 3D data volume, we slice the volume along the motion manifold of the moving object, and therefore reduce the search space from 3D to 2D, while still preserve the spatial and temporal coherence. In addition to the computational efficiency, based on geometric video analysis, the proposed approach is also able to handle real videos under perspective distortion, as well as common camera motions, such as panning, tilting, and zooming. The experimental results demonstrate that our algorithm performs comparably to 3D search based methods, and however extends the current state-of-the-art repairing techniques to videos with projective effects, as well as illumination changes.
---
paper_title: Space-Time Completion of Video
paper_content:
This paper presents a new framework for the completion of missing information based on local structures. It poses the task of completion as a global optimization problem with a well-defined objective function and derives a new algorithm to optimize it. Missing values are constrained to form coherent structures with respect to reference examples. We apply this method to space-time completion of large space-time "holes" in video sequences of complex dynamic scenes. The missing portions are filled in by sampling spatio-temporal patches from the available parts of the video, while enforcing global spatio-temporal consistency between all patches in and around the hole. The consistent completion of static scene parts simultaneously with dynamic behaviors leads to realistic looking video sequences and images. Space-time video completion is useful for a variety of tasks, including, but not limited to: 1) sophisticated video removal (of undesired static or dynamic objects) by completing the appropriate static or dynamic background information. 2) Correction of missing/corrupted video frames in old movies. 3) Modifying a visual story by replacing unwanted elements. 4) Creation of video textures by extending smaller ones. 5) Creation of complete field-of-view stabilized video. 6) As images are one-frame videos, we apply the method to this special case as well
---
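The space-time completion approach above repeatedly finds, for each patch on the hole boundary, the most similar fully known spatio-temporal patch elsewhere in the video. A brute-force version of that matching step is sketched below with NumPy; the patch size, stride and SSD-over-known-pixels criterion are illustrative choices, and the paper's global-coherence optimization is not reproduced.

```python
import numpy as np

def best_patch(video, mask, target, size=(5, 5, 5), stride=2):
    """video: (T,H,W) float array; mask: True where pixels are known;
    target: (t,y,x) top-left corner of an interior patch to be filled."""
    dt, dy, dx = size
    t0, y0, x0 = target
    tgt = video[t0:t0+dt, y0:y0+dy, x0:x0+dx]
    known = mask[t0:t0+dt, y0:y0+dy, x0:x0+dx]
    best, best_cost = None, np.inf
    T, H, W = video.shape
    for t in range(0, T - dt + 1, stride):
        for y in range(0, H - dy + 1, stride):
            for x in range(0, W - dx + 1, stride):
                src_mask = mask[t:t+dt, y:y+dy, x:x+dx]
                if not src_mask.all():
                    continue                       # only fully known sources
                diff = (video[t:t+dt, y:y+dy, x:x+dx] - tgt)[known]
                cost = float((diff ** 2).sum())    # SSD over the known pixels
                if cost < best_cost:
                    best, best_cost = (t, y, x), cost
    return best
```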
paper_title: A Rank Minimization Approach to Video Inpainting
paper_content:
This paper addresses the problem of video inpainting, that is seamlessly reconstructing missing portions in a set of video frames. We propose to solve this problem proceeding as follows: (i) finding a set of descriptors that encapsulate the information necessary to reconstruct a frame, (ii) finding an optimal estimate of the value of these descriptors for the missing/corrupted frames, and (iii) using the estimated values to reconstruct the frames. The main result of the paper shows that the optimal descriptor estimates can be efficiently obtained by minimizing the rank of a matrix directly constructed from the available data, leading to a simple, computationally attractive, dynamic inpainting algorithm that optimizes the use of spatio/temporal information. Moreover, contrary to most currently available techniques, the method can handle non-periodic target motions, non-stationary backgrounds and moving cameras. These results are illustrated with several examples, including reconstructing dynamic textures and object disocclusion in cases involving both moving targets and camera.
---
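A generic way to act on the rank-minimization idea above is iterative singular-value thresholding on a matrix with missing entries. The sketch below completes a plain data matrix this way; the threshold, the iteration count and the choice of matrix (the paper works on per-frame descriptors, not raw frames) are assumptions made for illustration.

```python
import numpy as np

def complete_low_rank(M, observed, tau=5.0, n_iter=200):
    """M: matrix with arbitrary values at unobserved entries;
    observed: boolean mask of known entries."""
    X = np.where(observed, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # shrink singular values (low rank)
        X = (U * s) @ Vt
        X[observed] = M[observed]             # keep the known entries fixed
    return X
```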
paper_title: Video repairing: inference of foreground and background under severe occlusion
paper_content:
We propose a new method, video repairing, to robustly infer missing static background and moving foreground due to severe damage or occlusion from a video. To recover background pixels, we extend the image repairing method, where layer segmentation and homography blending are used to preserve temporal coherence and avoid flickering. By exploiting the constraint imposed by periodic motion and a subclass of camera and object motions, we adopt a two-phase approach to repair moving foreground pixels: In the sampling phase, motion data are sampled and regularized by 3D tensor voting to maintain temporal coherence and motion periodicity. In the alignment phase, missing moving foreground pixels are inferred by spatial and temporal alignment of the sampled motion data at multiple scales. We experimented our system with some difficult examples, where the camera can be stationary or in motion.
---
paper_title: Human Object Inpainting Using Manifold Learning-Based Posture Sequence Estimation
paper_content:
We propose a human object inpainting scheme that divides the process into three steps: 1) human posture synthesis; 2) graphical model construction; and 3) posture sequence estimation. Human posture synthesis is used to enrich the number of postures in the database, after which all the postures are used to build a graphical model that can estimate the motion tendency of an object. We also introduce two constraints to confine the motion continuity property. The first constraint limits the maximum search distance if a trajectory in the graphical model is discontinuous, and the second confines the search direction in order to maintain the tendency of an object's motion. We perform both forward and backward predictions to derive local optimal solutions. Then, to compute an overall best solution, we apply the Markov random field model and take the potential trajectory with the maximum total probability as the final result. The proposed posture sequence estimation model can help identify a set of suitable postures from the posture database to restore damaged/missing postures. It can also make a reconstructed motion sequence look continuous.
---
paper_title: Efficient Object-Based Video Inpainting
paper_content:
Video inpainting describes the process of removing a portion of a video and filling in the missing part (hole) in a visually consistent manner. Most existing video inpainting techniques are computationally intensive and cannot handle large holes. In this paper, we propose a complete and efficient video inpainting system. Our system applies different strategies to handle static and dynamic portions of the hole. To inpaint the static portion, our system uses background replacement and image inpainting techniques. To inpaint moving objects in the hole, we utilize background subtraction and object segmentation to extract a set of object templates and perform optimal object interpolation using dynamic programming. We evaluate the performance of our system based on a set of indoor surveillance sequences with different types of occlusions.
---
paper_title: Video repairing under variable illumination using cyclic motions
paper_content:
This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.
---
| Title: A Survey on Video Inpainting
Section 1: INTRODUCTION
Description 1: This section introduces the concept of inpainting and its applications, specifically focusing on the differences and challenges between image and video inpainting.
Section 2: PATCH BASED TECHNIQUES
Description 2: This section describes various patch-based video inpainting methods, their evolution from image inpainting techniques, and their advantages and limitations.
Section 3: OBJECT BASED TECHNIQUES
Description 3: This section discusses object-based approaches to video inpainting, which consider both spatial and temporal consistencies, and highlights different algorithms, their benefits, and drawbacks.
Section 4: COMPARISON OF VARIOUS ALGORITHMS
Description 4: This section compares the merits and demerits of different video inpainting algorithms covered in the previous sections, focusing on aspects like computational complexity, artifact generation, and suitability for different types of motions and scenes.
Section 5: CONCLUSION
Description 5: This section summarizes the survey, emphasizing the differences between patch-based and object-based algorithms, and discusses potential guidelines for choosing an appropriate algorithm based on specific requirements. |
Computer poker: a review | 10 | ---
paper_title: General game playing: Overview of the AAAI competition
paper_content:
A general game playing system is one that can accept a formal description of a game and play the game effectively without human intervention. Unlike specialized game players, such as Deep Blue, general game players do not rely on algorithms designed in advance for specific games; and, unlike Deep Blue, they are able to play different kinds of games. In order to promote work in this area, the AAAI is sponsoring an open competition at this summer's Twentieth National Conference on Artificial Intelligence. This article is an overview of the technical issues and logistics associated with this summer's competition, as well as the relevance of general game playing to the long-range goals of artificial intelligence.
---
paper_title: Poker as Testbed for AI Research
paper_content:
For years, games researchers have used chess, checkers and other board games as a testbed for artificial intelligence research. The success of world-championship-caliber programs for these games has resulted in a number of interesting games being overlooked. Specifically, we show that poker can serve as an interesting testbed for machine intelligence research related to decision making problems. Poker is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information and deception, much like decision-making applications in the real world. The heuristic search and evaluation methods successfully employed in chess are not helpful here. This paper outlines the difficulty of playing strong poker, and describes our first steps towards building a world-class poker-playing program.
---
paper_title: Machine Learning in Games: A Survey
paper_content:
This paper provides a survey of previously published work on machine learning in game playing. The material is organized around a variety of problems that typically arise in game playing and that can be solved with machine learning methods. This approach, we believe, allows both researchers in game playing to find appropriate learning techniques for helping to solve their problems and machine learning researchers to identify rewarding topics for further research in game-playing domains. The chapter covers learning techniques that range from neural networks to decision tree learning in games that range from poker to chess. However, space constraints prevent us from giving detailed introductions to the learning techniques or games used. Overall, we aimed at striking a fair balance between being exhaustive and being exhausting.
---
paper_title: The Monte Carlo Method
paper_content:
We shall present here the motivation and a general description of a method dealing with a class of problems in mathematical physics. The method is, essentially, a statistical approach to the study of differential equations, or more generally, of integro-differential equations that occur in various branches of the natural sciences.
---
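The Monte Carlo recipe described above, estimating a quantity by averaging over random samples, is easy to show in a few lines; the classic pi example below is purely illustrative.

```python
import random

def estimate_pi(n_samples=100_000):
    """Monte Carlo estimate of pi via the fraction of random points
    falling inside the unit quarter-circle."""
    inside = sum(
        1 for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi())   # roughly 3.14, with error shrinking like 1/sqrt(n)
```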
paper_title: A Gamut of Games
paper_content:
In 1950, Claude Shannon published his seminal work on how to program a computer to play chess. Since then, developing game-playing programs that can compete with (and even exceed) the abilities of the human world champions has been a long-sought-after goal of the AI research community. In Shannon's time, it would have seemed unlikely that only a scant 50 years would be needed to develop programs that play world-class backgammon, checkers, chess, Othello, and Scrabble. These remarkable achievements are the result of a better understanding of the problems being solved, major algorithmic insights, and tremendous advances in hardware technology. Computer games research is one of the important success stories of AI. This article reviews the past successes, current projects, and future research directions for AI using computer games as a research test bed.
---
paper_title: The Games Computers (and People) Play
paper_content:
The development of high-performance game-playing programs has been one of the major successes of artificial intelligence research. The results have been outstanding but, with one notable exception (Deep Blue), they have not been widely disseminated. This talk will discuss the past, present, and future of the development of game-playing programs. Case studies for backgammon, bridge, checkers, chess, go, hex, Othello, poker, and Scrabble will be used. The research emphasis of the past has been on high performance (synonymous with brute-force search) for two-player perfect-information games. The research emphasis of the present encompasses multi-player imperfect/nondeterministic information games. And what of the future? There are some surprising changes of direction occurring that will result in games being more of an experimental testbed for mainstream AI research, with less emphasis on building world-championship-caliber programs. One of the most profound contributions to mankind’s knowledge has been made by the artificial intelligence (AI) research community: the realization that intelligence is not uniquely human. Using computers, it is possible to achieve human-like behavior in nonhumans. In other words, the illusion of human intelligence can be created in a computer. This idea has been vividly illustrated throughout the history of computer games research. Unlike most of the early work in AI, game researchers were interested in developing high-performance, real-time solutions to challenging problems. This led to an ends-justify-the-means attitude: the result—a strong chess program—was all that mattered, not the means by which it was achieved. In contrast, much of the mainstream AI work used simplified domains, while eschewing real-time performance objectives. This research typically used human intelligence as a model: one only had to emulate the human example to achieve intelligent behavior. The battle (and philosophical) lines were drawn. The difference in philosophy can be easily illustrated. The human brain and the computer are different machines, each with its own sets of strengths and weaknesses. Humans are good at, for example, learning, reasoning by analogy, and
---
paper_title: GIB: Steps Toward an Expert-Level Bridge-Playing Program
paper_content:
This paper describes GIB, the first bridge-playing program to approach the level of a human expert. (GIB finished twelfth in a handpicked field of thirty-four experts at an invitational event at the 1998 World Bridge Championships.) We give a basic overview of the algorithms used, describe their strengths and weaknesses, and present the results of experiments comparing GIB to both human opponents and other programs.
---
paper_title: World-Championship-Caliber Scrabble
paper_content:
Computer Scrabble programs have achieved a level of performance that exceeds that of the strongest human players. MAVEN was the first program to demonstrate this against human opposition. Scrabble is a game of imperfect information with a large branching factor. The techniques successfully applied in two-player games such as chess do not work here. MAVEN combines a selective move generator, simulations of likely game scenarios, and the B* algorithm to produce a world-championship-caliber Scrabble-playing program.
---
paper_title: The Theory of Poker
paper_content:
A process for preparing waterproof leathers by fat-liquoring with an aqueous liquor containing esters of citric acid with higher fatty alcohols which are used as the impregnating agent, characterized in that the fat-liquoring agent contains a mixture emulsified in water which consists of (A) an acid ester of citric acid with a higher fatty alcohol having chain lengths of 12 to 22 carbon atoms and (B) an organic alcohol solvent for the citric acid ester which is totally or partially soluble in water and which have a boiling point above 100 DEG C.
---
paper_title: The challenge of poker
paper_content:
Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect information, where multiple competing agents must deal with probabilistic knowledge, risk assessment, and possible deception, not unlike decisions made in the real world. Opponent modeling is another difficult problem in decision-making applications, and it is essential to achieving high performance in poker. This paper describes the design considerations and architecture of the poker program Poki. In addition to methods for hand evaluation and betting strategy, Poki uses learning techniques to construct statistical models of each opponent, and dynamically adapts to exploit observed patterns and tendencies. The result is a program capable of playing reasonably strong poker, but there remains considerable research to be done to play at world-class level.
---
paper_title: A tool for the direct assessment of poker decisions
paper_content:
A ferroelectric focussing and defocussing device for operation at millimeter wavelengths applicable for use as a component in radar systems. Electrodes direct fields reversibly and continuously modify the refractive character of the ferroelectric material of the device as incoming radiation seeks to proceed along the optic axis of the material. The device includes first and second material media sharing complementary sides with Fresnel contours.
---
paper_title: The challenge of poker
paper_content:
Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect information, where multiple competing agents must deal with probabilistic knowledge, risk assessment, and possible deception, not unlike decisions made in the real world. Opponent modeling is another difficult problem in decision-making applications, and it is essential to achieving high performance in poker. This paper describes the design considerations and architecture of the poker program Poki. In addition to methods for hand evaluation and betting strategy, Poki uses learning techniques to construct statistical models of each opponent, and dynamically adapts to exploit observed patterns and tendencies. The result is a program capable of playing reasonably strong poker, but there remains considerable research to be done to play at world-class level.
---
paper_title: Pseudo-Optimal Strategies in No-Limit Poker
paper_content:
Games have always been a strong driving force in Artificial Intelligence. In the last ten years huge improvements have been made in perfect information games like chess and Othello. The strongest co ...
---
paper_title: Using Selective-Sampling Simulations in Poker
paper_content:
Until recently, AI research that used games as an experimental testbed has concentrated on perfect information games. Many of these games have been amenable to so-called brute-force search techniques. In contrast, games of imperfect information, such as bridge and poker, contain hidden knowledge making similar search techniques impractical. This paper describes work being done on developing a world-class poker-playing program. Part of the program’s playing strength comes from real-time simulations. The program generates an instance of the missing data, subject to any constraints that have been learned, and then searches the game tree to determine a numerical result. By repeating this a sufficient number of times, a statistically meaningful sample can be obtained to be used in the program’s decision-making process. For constructing programs to play two-player deterministic perfect information games, there is a well-defined framework based on the alpha-beta search algorithm. For imperfect information games, no comparable framework exists. In this paper we propose selective sampling simulations as a general-purpose framework for building programs to achieve high performance in imperfect information games.
---
paper_title: Using Probabilistic Knowledge and Simulation to Play Poker
paper_content:
Until recently, artificial intelligence researchers who use games as their experimental testbed have concentrated on games of perfect information. Many of these games have been amenable to brute-force search techniques. In contrast, games of imperfect information, such as bridge and poker, contain hidden information making similar search techniques impractical. This paper describes recent progress in developing a high-performance pokerplaying program. The advances come in two forms. First, we introduce a new betting strategy that returns a probabilistic betting decision, a probability triple, that gives the likelihood of a fold, call or raise occurring in a given situation. This component unifies all the expert knowledge used in the program, does a better job of representing the type of decision making needed to play strong poker, and improves the way information is propagated throughout the program. Second, real-time simulations are used to compute the expected values of betting decisions. The program generates an instance of the missing data, subject to any constraints that have been learned, and then simulates the rest of the game to determine a numerical result. By repeating this a sufficient number of times, a statistically meaningful sample is used in the program's decision-making process. Experimental results show that these enhancements each represent major advances in the strength of computer poker programs.
---
paper_title: SOAR: an architecture for general intelligence
paper_content:
The ultimate goal of work in cognitive architecture is to provide the foundation for a system capable of general intelligent behavior. That is, the goal is to provide the underlying structure that would enable a system to perform the full range of cognitive tasks, employ the full range of problem solving methods and representations appropriate for the tasks, and learn about all aspects of the tasks and its performance on them. In this article we present SOAR, an implemented proposal for such an architecture. We describe its organizational principles, the system as currently implemented, and demonstrations of its capabilities.
---
paper_title: The challenge of poker
paper_content:
Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect information, where multiple competing agents must deal with probabilistic knowledge, risk assessment, and possible deception, not unlike decisions made in the real world. Opponent modeling is another difficult problem in decision-making applications, and it is essential to achieving high performance in poker. This paper describes the design considerations and architecture of the poker program Poki. In addition to methods for hand evaluation and betting strategy, Poki uses learning techniques to construct statistical models of each opponent, and dynamically adapts to exploit observed patterns and tendencies. The result is a program capable of playing reasonably strong poker, but there remains considerable research to be done to play at world-class level.
---
paper_title: Opponent Modeling in Poker
paper_content:
Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information and deception, much like decision-making applications in the real world. Agent modeling is one of the most difficult problems in decision-making applications and in poker it is essential to achieving high performance. This paper describes and evaluates Loki, a poker program capable of observing its opponents, constructing opponent models and dynamically adapting its play to best exploit patterns in the opponents' play.
---
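A minimal frequency-count opponent model in the spirit of the abstract above: count how often an opponent folds, calls or raises in a coarse context and convert the counts into action probabilities. The context key (betting round plus number of bets faced) and the Laplace prior are illustrative assumptions, not the features used by Loki.

```python
from collections import defaultdict

class OpponentModel:
    def __init__(self):
        # Laplace prior of 1 per action avoids zero probabilities early on
        self.counts = defaultdict(lambda: {"fold": 1, "call": 1, "raise": 1})

    def observe(self, context, action):
        self.counts[context][action] += 1

    def action_probabilities(self, context):
        c = self.counts[context]
        total = sum(c.values())
        return {a: n / total for a, n in c.items()}

model = OpponentModel()
model.observe(("flop", 1), "call")
print(model.action_probabilities(("flop", 1)))
```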
paper_title: Finite-time Analysis of the Multiarmed Bandit Problem
paper_content:
Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration/exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.
---
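The UCB1 policy analyzed above is simple enough to state directly: play the arm with the best empirical mean plus an exploration bonus that shrinks as the arm is sampled. The Bernoulli arms in the sketch are made up for the demonstration.

```python
import math, random

def ucb1(pulls, rewards, t):
    """pulls[i]: times arm i was played; rewards[i]: summed reward; t: total plays."""
    def score(i):
        if pulls[i] == 0:
            return float("inf")               # play every arm at least once
        return rewards[i] / pulls[i] + math.sqrt(2 * math.log(t) / pulls[i])
    return max(range(len(pulls)), key=score)

arm_means = [0.2, 0.5, 0.7]                   # hidden Bernoulli success rates
pulls, rewards = [0] * 3, [0.0] * 3
for t in range(1, 2001):
    a = ucb1(pulls, rewards, t)
    r = 1.0 if random.random() < arm_means[a] else 0.0
    pulls[a] += 1
    rewards[a] += r
print(pulls)   # the 0.7 arm should receive the vast majority of plays
```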
paper_title: Using Selective-Sampling Simulations in Poker
paper_content:
Until recently, AI research that used games as an experimental testbed has concentrated on perfect information games. Many of these games have been amenable to so-called brute-force search techniques. In contrast, games of imperfect information, such as bridge and poker, contain hidden knowledge making similar search techniques impractical. This paper describes work being done on developing a world-class poker-playing program. Part of the program’s playing strength comes from real-time simulations. The program generates an instance of the missing data, subject to any constraints that have been learned, and then searches the game tree to determine a numerical result. By repeating this a sufficient number of times, a statistically meaningful sample can be obtained to be used in the program’s decision-making process. For constructing programs to play two-player deterministic perfect information games, there is a well-defined framework based on the alpha-beta search algorithm. For imperfect information games, no comparable framework exists. In this paper we propose selective sampling simulations as a general-purpose framework for building programs to achieve high performance in imperfect information games.
---
paper_title: Monte-Carlo Tree Search in Poker using Expected Reward Distributions
paper_content:
We investigate the use of Monte-Carlo Tree Search (MCTS) within the field of computer Poker, more specifically No-Limit Texas Hold'em. The hidden information in Poker results in so-called miximax game trees where opponent decision nodes have to be modeled as chance nodes. The probability distribution in these nodes is modeled by an opponent model that predicts the actions of the opponents. We propose a modification of the standard MCTS selection and backpropagation strategies that explicitly model and exploit the uncertainty of sampled expected values. The new strategies are evaluated as a part of a complete Poker bot that is, to the best of our knowledge, the first exploiting no-limit Texas Hold'em bot that can play at a reasonable level in games of more than two players.
---
paper_title: Bandit Based Monte-Carlo Planning
paper_content:
For large state-space Markovian Decision Problems Monte-Carlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.
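As a rough illustration of how the bandit rule is applied inside the tree, the sketch below shows a UCT-style child selection and backup; the node fields and the exploration constant are our own illustrative choices, not taken from the paper.

```python
import math

class Node:
    """Illustrative search-tree node: the statistics needed for UCT selection."""
    def __init__(self):
        self.children = {}       # action -> Node
        self.visits = 0
        self.total_value = 0.0

def uct_select(node, c=1.4):
    """Return the action whose child maximizes the UCT score:
    mean value + c * sqrt(ln(parent visits) / child visits).
    Unvisited children are expanded first."""
    for action, child in node.children.items():
        if child.visits == 0:
            return action
    log_n = math.log(node.visits)
    return max(node.children, key=lambda a:
               node.children[a].total_value / node.children[a].visits
               + c * math.sqrt(log_n / node.children[a].visits))

def backup(path, reward):
    """Propagate a simulation result back up the visited path."""
    for node in path:
        node.visits += 1
        node.total_value += reward
```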
---
paper_title: Using Probabilistic Knowledge and Simulation to Play Poker
paper_content:
Until recently, artificial intelligence researchers who use games as their experimental testbed have concentrated on games of perfect information. Many of these games have been amenable to brute-force search techniques. In contrast, games of imperfect information, such as bridge and poker, contain hidden information making similar search techniques impractical. This paper describes recent progress in developing a high-performance poker-playing program. The advances come in two forms. First, we introduce a new betting strategy that returns a probabilistic betting decision, a probability triple, that gives the likelihood of a fold, call or raise occurring in a given situation. This component unifies all the expert knowledge used in the program, does a better job of representing the type of decision making needed to play strong poker, and improves the way information is propagated throughout the program. Second, real-time simulations are used to compute the expected values of betting decisions. The program generates an instance of the missing data, subject to any constraints that have been learned, and then simulates the rest of the game to determine a numerical result. By repeating this a sufficient number of times, a statistically meaningful sample is used in the program's decision-making process. Experimental results show that these enhancements each represent major advances in the strength of computer poker programs.
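A probability triple reduces a betting decision to three numbers that sum to one; a minimal sketch of sampling an action from such a triple is shown below. The example triple is hypothetical, and the experts that would compute it from the game state are not reproduced here.

```python
import random

def sample_action(triple):
    """Sample a betting action from a (fold, call, raise) probability triple."""
    p_fold, p_call, p_raise = triple
    assert abs(p_fold + p_call + p_raise - 1.0) < 1e-9, "triple must sum to 1"
    r = random.random()
    if r < p_fold:
        return "fold"
    if r < p_fold + p_call:
        return "call"
    return "raise"

# Hypothetical triple for a medium-strength hand facing a bet.
print(sample_action((0.1, 0.6, 0.3)))
```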
---
paper_title: Efficient selectivity and backup operators in Monte-Carlo tree search
paper_content:
A Monte-Carlo evaluation consists in estimating a position by averaging the outcome of several random continuations. The method can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation, that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing-up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to minmax as the number of simulations grows. This approach provides a finegrained control of the tree growth, at the level of individual simulations, and allows efficient selectivity. The resulting algorithm was implemented in a 9 × 9 Go-playing program, Crazy Stone, that won the 10th KGS computer-Go tournament.
---
paper_title: The challenge of poker
paper_content:
Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect information, where multiple competing agents must deal with probabilistic knowledge, risk assessment, and possible deception, not unlike decisions made in the real world. Opponent modeling is another difficult problem in decision-making applications, and it is essential to achieving high performance in poker. This paper describes the design considerations and architecture of the poker program Poki. In addition to methods for hand evaluation and betting strategy, Poki uses learning techniques to construct statistical models of each opponent, and dynamically adapts to exploit observed patterns and tendencies. The result is a program capable of playing reasonably strong poker, but there remains considerable research to be done to play at world-class level. Copyright 2001 Elsevier Science B.V.
---
paper_title: The *-minimax search procedure for trees containing chance nodes
paper_content:
An extension of the alpha-beta tree pruning strategy to game trees with 'probability' nodes, whose values are defined as the (possibly weighted) average of their successors' values, is developed. These '*-minimax' trees pertain to games involving chance but no concealed information. Based upon our search strategy, we formulate and then analyze several algorithms for *-minimax trees. An initial left-to-right depth-first algorithm is developed and shown to reduce the complexity of an exhaustive search strategy by 25-30 percent. An improved algorithm is then formulated to 'probe' beneath the chance nodes of 'regular' *-minimax trees, where players alternate in making moves with chance events interspersed. With random ordering of successor nodes, this modified algorithm is shown to reduce search by more than 50 percent. With optimal ordering, it is shown to reduce search complexity by an order of magnitude. After examining the savings of the first two algorithms on deeper trees, two additional algorithms are presented and analyzed.
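The value recursion underlying these trees is ordinary expectimax: chance nodes average their children, player nodes maximize or minimize. The sketch below shows that recursion only; the alpha-beta-style bounds that give the *-minimax algorithms their pruning power are omitted, and the dictionary-based tree encoding is our own.

```python
def expectimax(node):
    """Evaluate a game tree containing MAX, MIN and chance nodes.
    MAX/MIN nodes take the best/worst child value; chance nodes take the
    probability-weighted average of their children."""
    kind = node["type"]
    if kind == "leaf":
        return node["value"]
    if kind == "max":
        return max(expectimax(child) for child in node["children"])
    if kind == "min":
        return min(expectimax(child) for child in node["children"])
    # chance node: children are (probability, subtree) pairs
    return sum(p * expectimax(child) for p, child in node["children"])

# Tiny example: a MAX choice between a fair coin flip worth +1/-1 and a
# certain payoff of 0.25; the certain payoff is preferred.
tree = {"type": "max", "children": [
    {"type": "chance", "children": [(0.5, {"type": "leaf", "value": 1.0}),
                                    (0.5, {"type": "leaf", "value": -1.0})]},
    {"type": "leaf", "value": 0.25},
]}
assert expectimax(tree) == 0.25
```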
---
paper_title: Opponent Modeling in Poker
paper_content:
Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information and deception, much like decision-making applications in the real world. Agent modeling is one of the most difficult problems in decision-making applications and in poker it is essential to achieving high performance. This paper describes and evaluates Loki, a poker program capable of observing its opponents, constructing opponent models and dynamically adapting its play to best exploit patterns in the opponents' play.
---
paper_title: An Exploitative Monte-Carlo Poker Agent
paper_content:
We describe the poker agent AKI-REALBOT, which participated in the 6-player Limit Competition of the third Annual AAAI Computer Poker Challenge in 2008. It finished in second place, its performance being mostly due to its superior ability to exploit weaker bots. This paper describes the architecture of the program and the Monte-Carlo decision-tree-based decision engine used to make the bot's decisions. It focuses on the modifications that made the bot successful in exploiting weaker bots.
---
paper_title: Linear Programming: Foundations and Extensions
paper_content:
Preface. Part 1: Basic Theory - The Simplex Method and Duality. 1. Introduction. 2. The Simplex Method. 3. Degeneracy. 4. Efficiency of the Simplex Method. 5. Duality Theory. 6. The Simplex Method in Matrix Notation. 7. Sensitivity and Parametric Analyses. 8. Implementation Issues. 9. Problems in General Form. 10. Convex Analysis. 11. Game Theory. 12. Regression. Part 2: Network-Type Problems. 13. Network Flow Problems. 14. Applications. 15. Structural Optimization. Part 3: Interior-Point Methods. 16. The Central Path. 17. A Path-Following Method. 18. The KKT System. 19. Implementation Issues. 20. The Affine-Scaling Method. 21. The Homogeneous Self-Dual Method. Part 4: Extensions. 22. Integer Programming. 23. Quadratic Programming. 24. Convex Programming. Appendix A: Source Listings. Answers to Selected Exercises. Bibliography. Index.
---
paper_title: Lossless abstraction of imperfect information games
paper_content:
Finding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games. To address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation. For a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game. We present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively. Its complexity is Õ(n^2), where n is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. Using GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes—over four orders of magnitude more than in the largest poker game solved previously. To address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies.
---
paper_title: Generating and Solving Imperfect Information Games
paper_content:
Work on game playing in AI has typically ignored games of imperfect information such as poker. In this paper we present a framework for dealing with such games. We point out several important issues that arise only in the context of imperfect information games, particularly the insufficiency of a simple game tree model to represent the players' information state and the need for randomization in the players' optimal strategies. We describe Gala, an implemented system that provides the user with a very natural and expressive language for describing games. From a game description, Gala creates an augmented game tree with information sets which can be used by various algorithms in order to find optimal strategies for that game. In particular, Gala implements the first practical algorithm for finding optimal randomized strategies in two-player imperfect information competitive games [Koller et al., 1994]. The running time of this algorithm is polynomial in the size of the game tree, whereas previous algorithms were exponential. We present experimental results showing that this algorithm is also efficient in practice and can therefore form the basis for a game playing system.
---
paper_title: Pseudo-Optimal Strategies in No-Limit Poker
paper_content:
Games have always been a strong driving force in Artificial Intelligence. In the last ten years, huge improvements have been made in perfect information games like chess and Othello. The strongest co ...
---
paper_title: A competitive Texas Hold’em poker player via automated abstraction and realtime equilibrium computation
paper_content:
We present a game theory-based heads-up Texas Hold'em poker player, GS1. To overcome the computational obstacles stemming from Texas Hold'em's gigantic game tree, the player employs our automated abstraction techniques to reduce the complexity of the strategy computations. Texas Hold'em consists of four betting rounds. Our player solves a large linear program (offline) to compute strategies for the abstracted first and second rounds. After the second betting round, our player updates the probability of each possible hand based on the observed betting actions in the first two rounds as well as the revealed cards. Using these updated probabilities, our player computes in real-time an equilibrium approximation for the last two abstracted rounds. We demonstrate that our player, which incorporates very little poker-specific knowledge, is competitive with leading poker-playing programs which incorporate extensive domain knowledge, as well as with advanced human players.
---
paper_title: Some methods for classification and analysis of multivariate observations
paper_content:
The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = {S_1, S_2, ..., S_k} is a partition of E^N, and u_i, i = 1, 2, ..., k, is the conditional mean of p over the set S_i, then W^2(S) = \sum_{i=1}^{k} \int_{S_i} |z - u_i|^2 \, dp(z) tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special
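To make the criterion concrete, a bare-bones Lloyd-style k-means sketch is given below; it illustrates the assign-to-nearest-mean / recompute-means loop rather than MacQueen's sequential update, and the toy data are ours.

```python
import random

def kmeans(points, k, iters=50):
    """Minimal k-means on n-dimensional points (lists of floats): assign each
    point to the nearest mean, then move each mean to the centroid of its
    cluster. Repeating this drives the within-class variance W^2(S) down."""
    means = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(p, means[j])))
            clusters[nearest].append(p)
        for j, cluster in enumerate(clusters):
            if cluster:  # keep the old mean if a cluster ends up empty
                means[j] = [sum(coords) / len(cluster) for coords in zip(*cluster)]
    return means

# Illustrative use: two well-separated 2-D blobs recover their two centers.
pts = [[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [5.0, 5.1], [4.9, 5.0], [5.1, 4.9]]
print(kmeans(pts, k=2))
```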
---
paper_title: Potential-aware automated abstraction of sequential games, and holistic equilibrium analysis of Texas Hold’em poker
paper_content:
We present a new abstraction algorithm for sequential imperfect information games. While most prior abstraction algorithms employ a myopic expected-value computation as a similarity metric, our algorithm considers a higher-dimensional space consisting of histograms over abstracted classes of states from later stages of the game. This enables our bottom-up abstraction algorithm to automatically take into account potential: a hand can become relatively better (or worse) over time and the strength of different hands can get resolved earlier or later in the game. We further improve the abstraction quality by making multiple passes over the abstraction, enabling the algorithm to narrow the scope of analysis to information that is relevant given abstraction decisions made for earlier parts of the game. We also present a custom indexing scheme based on suit isomorphisms that enables one to work on significantly larger models than before. We apply the techniques to heads-up limit Texas Hold'em poker. Whereas all prior game theory-based work for Texas Hold'em poker used generic off-the-shelf linear program solvers for the equilibrium analysis of the abstracted game, we make use of a recently developed algorithm based on the excessive gap technique from convex optimization. This paper is, to our knowledge, the first to abstract and game-theoretically analyze all four betting rounds in one run (rather than splitting the game into phases). The resulting player, GS3, beats BluffBot, GS2, Hyperborean, Monash-BPP, Sparbot, Teddy, and Vexbot, each with statistical significance. To our knowledge, those competitors are the best prior programs for the game.
---
paper_title: Expectation-Based Versus Potential-Aware Automated Abstraction in Imperfect Information Games: An Experimental Comparison Using Poker
paper_content:
Automated abstraction algorithms for sequential imperfect information games have recently emerged as a key component in developing competitive game theory-based agents. The existing literature has not investigated the relative performance of different abstraction algorithms. Instead, agents whose construction has used automated abstraction have only been compared under confounding effects: different granularities of abstraction and equilibrium-finding algorithms that yield different accuracies when solving the abstracted game. This paper provides the first systematic evaluation of abstraction algorithms. Two families of algorithms have been proposed. The distinguishing feature is the measure used to evaluate the strategic similarity between game states. One algorithm uses the probability of winning as the similarity measure. The other uses a potential-aware similarity measure based on probability distributions over future states. We conduct experiments on Rhode Island Hold'em poker. We compare the algorithms against each other, against optimal play, and against each agent's nemesis. We also compare them based on the resulting game's value. Interestingly, for very coarse abstractions the expectation-based algorithm is better, but for moderately coarse and fine abstractions the potential-aware approach is superior. Furthermore, agents constructed using the expectation-based approach are highly exploitable beyond what their performance against the game's optimal strategy would suggest.
---
paper_title: A Practical Use of Imperfect Recall
paper_content:
Perfect recall is the common and natural assumption that an agent never forgets. As a consequence, the agent can always condition its choice of action on any prior observations. In this paper, we explore relaxing this assumption. We observe the negative impact this relaxation has on algorithms: some algorithms are no longer well-defined, while others lose their theoretical guarantees on the quality of a solution. Despite these disadvantages, we show that removing this restriction can provide considerable empirical advantages when modeling extremely large extensive games. In particular, it allows fine granularity of the most relevant observations without requiring decisions to be contingent on all past observations. In the domain of poker, this improvement enables new types of information to be used in the abstraction. By making use of imperfect recall and new types of information, our poker program was able to win the limit equilibrium event as well as the no-limit event at the 2008 AAAI Computer Poker Competition. We show experimental results to verify that our programs using imperfect recall are indeed stronger than their perfect recall counterparts.
---
paper_title: Some methods for classification and analysis of multivariate observations
paper_content:
The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = {S_1, S_2, ..., S_k} is a partition of E^N, and u_i, i = 1, 2, ..., k, is the conditional mean of p over the set S_i, then W^2(S) = \sum_{i=1}^{k} \int_{S_i} |z - u_i|^2 \, dp(z) tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special
---
paper_title: A competitive Texas Hold’em poker player via automated abstraction and realtime equilibrium computation
paper_content:
We present a game theory-based heads-up Texas Hold'em poker player, GS1. To overcome the computational obstacles stemming from Texas Hold'em's gigantic game tree, the player employs our automated abstraction techniques to reduce the complexity of the strategy computations. Texas Hold'em consists of four betting rounds. Our player solves a large linear program (offline) to compute strategies for the abstracted first and second rounds. After the second betting round, our player updates the probability of each possible hand based on the observed betting actions in the first two rounds as well as the revealed cards. Using these updated probabilities, our player computes in real-time an equilibrium approximation for the last two abstracted rounds. We demonstrate that our player, which incorporates very little poker-specific knowledge, is competitive with leading poker-playing programs which incorporate extensive domain knowledge, as well as with advanced human players.
---
paper_title: Better automated abstraction techniques for imperfect information games, with application to Texas Hold'em poker
paper_content:
We present new approximation methods for computing game-theoretic strategies for sequential games of imperfect information. At a high level, we contribute two new ideas. First, we introduce a new state-space abstraction algorithm. In each round of the game, there is a limit to the number of strategically different situations that an equilibrium-finding algorithm can handle. Given this constraint, we use clustering to discover similar positions, and we compute the abstraction via an integer program that minimizes the expected error at each stage of the game. Second, we present a method for computing the leaf payoffs for a truncated version of the game by simulating the actions in the remaining portion of the game. This allows the equilibrium-finding algorithm to take into account the entire game tree while having to explicitly solve only a truncated version. Experiments show that each of our two new techniques improves performance dramatically in Texas Hold'em poker. The techniques lead to a drastic improvement over prior approaches for automatically generating agents, and our agent plays competitively even against the best agents overall.
---
paper_title: Lossless abstraction of imperfect information games
paper_content:
Finding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games. To address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation. For a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game. We present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively. Its complexity is Õ(n^2), where n is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. Using GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes—over four orders of magnitude more than in the largest poker game solved previously. To address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies.
---
paper_title: Pseudo-Optimal Strategies in No-Limit Poker
paper_content:
Games have always been a strong driving force in Artificial Intelligence. In the last ten years, huge improvements have been made in perfect information games like chess and Othello. The strongest co ...
---
paper_title: A heads-up no-limit Texas Hold'em poker player: Discretized betting models and automatically generated equilibrium-finding programs
paper_content:
We present Tartanian, a game theory-based player for heads-up no-limit Texas Hold'em poker. Tartanian is built from three components. First, to deal with the virtually infinite strategy space of no-limit poker, we develop a discretized betting model designed to capture the most important strategic choices in the game. Second, we employ potential-aware automated abstraction algorithms for identifying strategically similar situations in order to decrease the size of the game tree. Third, we develop a new technique for automatically generating the source code of an equilibrium-finding algorithm from an XML-based description of a game. This automatically generated program is more efficient than what would be possible with a general-purpose equilibrium-finding program. Finally, we present results from the AAAI-07 Computer Poker Competition, in which Tartanian placed second out of ten entries.
---
paper_title: A competitive Texas Hold’em poker player via automated abstraction and realtime equilibrium computation
paper_content:
We present a game theory-based heads-up Texas Hold'em poker player, GS1. To overcome the computational obstacles stemming from Texas Hold'em's gigantic game tree, the player employs our automated abstraction techniques to reduce the complexity of the strategy computations. Texas Hold'em consists of four betting rounds. Our player solves a large linear program (offline) to compute strategies for the abstracted first and second rounds. After the second betting round, our player updates the probability of each possible hand based on the observed betting actions in the first two rounds as well as the revealed cards. Using these updated probabilities, our player computes in real-time an equilibrium approximation for the last two abstracted rounds. We demonstrate that our player, which incorporates very little poker-specific knowledge, is competitive with leading poker-playing programs which incorporate extensive domain knowledge, as well as with advanced human players.
---
paper_title: Better automated abstraction techniques for imperfect information games, with application to Texas Hold'em poker
paper_content:
We present new approximation methods for computing game-theoretic strategies for sequential games of imperfect information. At a high level, we contribute two new ideas. First, we introduce a new state-space abstraction algorithm. In each round of the game, there is a limit to the number of strategically different situations that an equilibrium-finding algorithm can handle. Given this constraint, we use clustering to discover similar positions, and we compute the abstraction via an integer program that minimizes the expected error at each stage of the game. Second, we present a method for computing the leaf payoffs for a truncated version of the game by simulating the actions in the remaining portion of the game. This allows the equilibrium-finding algorithm to take into account the entire game tree while having to explicitly solve only a truncated version. Experiments show that each of our two new techniques improves performance dramatically in Texas Hold'em poker. The techniques lead to a drastic improvement over prior approaches for automatically generating agents, and our agent plays competitively even against the best agents overall.
---
paper_title: A near-optimal strategy for a heads-up no-limit Texas Hold'em poker tournament
paper_content:
We analyze a heads-up no-limit Texas Hold'em poker tournament with a fixed small blind of 300 chips, a fixed big blind of 600 chips and a total amount of 8000 chips on the table (until recently, these parameters defined the heads-up endgame of sit-n-go tournaments on the popular Party-Poker.com online poker site). Due to the size of this game, a computation of an optimal (i.e. minimax) strategy for the game is completely infeasible. However, combining an algorithm due to Koller, Megiddo and von Stengel with concepts of Everett and suggestions of Sklansky, we compute an optimal jam/fold strategy, i.e. a strategy that would be optimal if any bet made by the player playing by the strategy (but not bets of his opponent) had to be his entire stack. Our computations establish that the computed strategy is near-optimal for the unrestricted tournament (i.e., with post-flop play being allowed) in the rigorous sense that a player playing by the computed strategy will win the tournament with a probability within 1.4 percentage points of the probability that an optimal strategy (allowing post-flop play) would give.
---
paper_title: Using Fictitious Play to Find Pseudo-optimal Solutions for Full-scale Poker
paper_content:
A pseudo-optimal solution to the poker variant Two-Player Limit Texas Hold'em was developed and tested against existing world-class poker algorithms. The techniques used in creating the pseudo-optimal solution reduced the problem's complexity from O(10^18) to O(10^7). To achieve this reduction, bucketing/grouping techniques were employed, as were methods replacing the chance nodes in the game tree, reducing it from a tree with millions of billions of terminal nodes to a game tree with only a few thousand. When played in competition against several world-class algorithms, our algorithm displayed strong results, gaining and maintaining leads against each of the opponents it faced. Using proper abstraction techniques, it is shown that we are able to approach Nash equilibria in complex game-theoretic problems such as full-scale poker.
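The sketch below shows plain fictitious play on a small zero-sum matrix game, which is the iterative best-response idea the paper scales up; the abstraction and bucketing needed for full poker are not reproduced, and the rock-paper-scissors example is ours.

```python
def fictitious_play(payoff, iters=20000):
    """Fictitious play for a two-player zero-sum matrix game given the row
    player's payoff matrix. Each iteration, both players best-respond to the
    empirical frequencies of the opponent's past actions; the averaged
    strategies approach an equilibrium for zero-sum games."""
    n, m = len(payoff), len(payoff[0])
    row_counts = [1] + [0] * (n - 1)
    col_counts = [1] + [0] * (m - 1)
    for _ in range(iters):
        best_row = max(range(n),
                       key=lambda i: sum(payoff[i][j] * col_counts[j] for j in range(m)))
        best_col = min(range(m),
                       key=lambda j: sum(payoff[i][j] * row_counts[i] for i in range(n)))
        row_counts[best_row] += 1
        col_counts[best_col] += 1
    return ([c / sum(row_counts) for c in row_counts],
            [c / sum(col_counts) for c in col_counts])

# Rock-paper-scissors: both averaged strategies approach (1/3, 1/3, 1/3).
rps = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
print(fictitious_play(rps))
```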
---
paper_title: A competitive Texas Hold’em poker player via automated abstraction and realtime equilibrium computation
paper_content:
We present a game theory-based heads-up Texas Hold'em poker player, GS1. To overcome the computational obstacles stemming from Texas Hold'em's gigantic game tree, the player employs our automated abstraction techniques to reduce the complexity of the strategy computations. Texas Hold'em consists of four betting rounds. Our player solves a large linear program (offline) to compute strategies for the abstracted first and second rounds. After the second betting round, our player updates the probability of each possible hand based on the observed betting actions in the first two rounds as well as the revealed cards. Using these updated probabilities, our player computes in real-time an equilibrium approximation for the last two abstracted rounds. We demonstrate that our player, which incorporates very little poker-specific knowledge, is competitive with leading poker-playing programs which incorporate extensive domain knowledge, as well as with advanced human players.
---
paper_title: A new algorithm for generating equilibria in massive zero-sum games
paper_content:
In normal scenarios, computer scientists often consider the number of states in a game to capture the difficulty of learning an equilibrium. However, players do not see games in the same light: most consider Go or Chess to be more complex than Monopoly. In this paper, we discuss a new measure of game complexity that links existing state-of-the-art algorithms for computing approximate equilibria to a more human measure. In particular, we consider the range of skill in a game, i.e. how many different skill levels exist. We then modify existing techniques to design a new algorithm to compute approximate equilibria whose performance can be captured by this new measure. We use it to develop the first near Nash equilibrium for a four-round abstraction of poker, and show that it would have been able to handily win the bankroll competition from last year's AAAI poker competition.
---
paper_title: Pseudo-Optimal Strategies in No-Limit Poker
paper_content:
Games have always been a strong driving force in Artificial Intelligence. In the last ten years, huge improvements have been made in perfect information games like chess and Othello. The strongest co ...
---
paper_title: Using Probabilistic Knowledge and Simulation to Play Poker
paper_content:
Until recently, artificial intelligence researchers who use games as their experimental testbed have concentrated on games of perfect information. Many of these games have been amenable to brute-force search techniques. In contrast, games of imperfect information, such as bridge and poker, contain hidden information making similar search techniques impractical. This paper describes recent progress in developing a high-performance poker-playing program. The advances come in two forms. First, we introduce a new betting strategy that returns a probabilistic betting decision, a probability triple, that gives the likelihood of a fold, call or raise occurring in a given situation. This component unifies all the expert knowledge used in the program, does a better job of representing the type of decision making needed to play strong poker, and improves the way information is propagated throughout the program. Second, real-time simulations are used to compute the expected values of betting decisions. The program generates an instance of the missing data, subject to any constraints that have been learned, and then simulates the rest of the game to determine a numerical result. By repeating this a sufficient number of times, a statistically meaningful sample is used in the program's decision-making process. Experimental results show that these enhancements each represent major advances in the strength of computer poker programs.
---
paper_title: A heads-up no-limit Texas Hold'em poker player: Discretized betting models and automatically generated equilibrium-finding programs
paper_content:
We present Tartanian, a game theory-based player for heads-up no-limit Texas Hold'em poker. Tartanian is built from three components. First, to deal with the virtually infinite strategy space of no-limit poker, we develop a discretized betting model designed to capture the most important strategic choices in the game. Second, we employ potential-aware automated abstraction algorithms for identifying strategically similar situations in order to decrease the size of the game tree. Third, we develop a new technique for automatically generating the source code of an equilibrium-finding algorithm from an XML-based description of a game. This automatically generated program is more efficient than what would be possible with a general-purpose equilibrium-finding program. Finally, we present results from the AAAI-07 Computer Poker Competition, in which Tartanian placed second out of ten entries.
---
paper_title: Gradient-based algorithms for finding nash equilibria in extensive form games
paper_content:
We present a computational approach to the saddle-point formulation for the Nash equilibria of two-person, zero-sum sequential games of imperfect information. The algorithm is a first-order gradient method based on modern smoothing techniques for non-smooth convex optimization. The algorithm requires O(1/ε) iterations to compute an ε-equilibrium, and the work per iteration is extremely low. These features enable us to find approximate Nash equilibria for sequential games with a tree representation of about 10^10 nodes. This is three orders of magnitude larger than what previous algorithms can handle. We present two heuristic improvements to the basic algorithm and demonstrate their efficacy on a range of real-world games. Furthermore, we demonstrate how the algorithm can be customized to a specific class of problems with enormous memory savings.
---
paper_title: Potential-aware automated abstraction of sequential games, and holistic equilibrium analysis of Texas Hold’em poker
paper_content:
We present a new abstraction algorithm for sequential imperfect information games. While most prior abstraction algorithms employ a myopic expected-value computation as a similarity metric, our algorithm considers a higher-dimensional space consisting of histograms over abstracted classes of states from later stages of the game. This enables our bottom-up abstraction algorithm to automatically take into account potential: a hand can become relatively better (or worse) over time and the strength of different hands can get resolved earlier or later in the game. We further improve the abstraction quality by making multiple passes over the abstraction, enabling the algorithm to narrow the scope of analysis to information that is relevant given abstraction decisions made for earlier parts of the game. We also present a custom indexing scheme based on suit isomorphisms that enables one to work on significantly larger models than before. We apply the techniques to heads-up limit Texas Hold'em poker. Whereas all prior game theory-based work for Texas Hold'em poker used generic off-the-shelf linear program solvers for the equilibrium analysis of the abstracted game, we make use of a recently developed algorithm based on the excessive gap technique from convex optimization. This paper is, to our knowledge, the first to abstract and game-theoretically analyze all four betting rounds in one run (rather than splitting the game into phases). The resulting player, GS3, beats BluffBot, GS2, Hyperborean, Monash-BPP, Sparbot, Teddy, and Vexbot, each with statistical significance. To our knowledge, those competitors are the best prior programs for the game.
---
paper_title: Regret minimization in games with incomplete information
paper_content:
Extensive games are a powerful model of multiagent decision-making scenarios with incomplete information. Finding a Nash equilibrium for very large instances of these games has received a great deal of recent attention. In this paper, we describe a new technique for solving large games based on regret minimization. In particular, we introduce the notion of counterfactual regret, which exploits the degree of incomplete information in an extensive game. We show how minimizing counterfactual regret minimizes overall regret, and therefore in self-play can be used to compute a Nash equilibrium. We demonstrate this technique in the domain of poker, showing we can solve abstractions of limit Texas Hold'em with as many as 10^12 states, two orders of magnitude larger than previous methods.
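At the heart of the technique is regret matching at each information set; the fragment below sketches that local update only. The recursive tree walk that produces counterfactual action values, and the averaging of strategies over iterations, are omitted, and the numeric example is hypothetical.

```python
def regret_matching(cumulative_regret):
    """Current strategy at an information set: action probabilities are
    proportional to positive cumulative regret, or uniform if no action has
    positive regret."""
    positives = [max(r, 0.0) for r in cumulative_regret]
    total = sum(positives)
    if total <= 0.0:
        return [1.0 / len(cumulative_regret)] * len(cumulative_regret)
    return [p / total for p in positives]

def accumulate_regret(cumulative_regret, action_values, strategy):
    """Add, for each action, the counterfactual value of always taking it
    minus the expected value of the current strategy."""
    expected = sum(v * p for v, p in zip(action_values, strategy))
    return [r + (v - expected) for r, v in zip(cumulative_regret, action_values)]

# One illustrative update with hypothetical counterfactual values for
# fold / call / raise.
regrets = [0.0, 0.0, 0.0]
sigma = regret_matching(regrets)                     # starts uniform
regrets = accumulate_regret(regrets, [-1.0, 0.5, 1.5], sigma)
print(regret_matching(regrets))                      # mass shifts to call/raise
```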
---
paper_title: A competitive Texas Hold’em poker player via automated abstraction and realtime equilibrium computation
paper_content:
We present a game theory-based heads-up Texas Hold'em poker player, GS1. To overcome the computational obstacles stemming from Texas Hold'em's gigantic game tree, the player employs our automated abstraction techniques to reduce the complexity of the strategy computations. Texas Hold'em consists of four betting rounds. Our player solves a large linear program (offline) to compute strategies for the abstracted first and second rounds. After the second betting round, our player updates the probability of each possible hand based on the observed betting actions in the first two rounds as well as the revealed cards. Using these updated probabilities, our player computes in real-time an equilibrium approximation for the last two abstracted rounds. We demonstrate that our player, which incorporates very little poker-specific knowledge, is competitive with leading poker-playing programs which incorporate extensive domain knowledge, as well as with advanced human players.
---
paper_title: Monte carlo sampling for regret minimization in extensive games
paper_content:
Sequential decision-making with multiple agents and imperfect information is commonly modeled as an extensive game. One efficient method for computing Nash equilibria in large, zero-sum, imperfect information games is counterfactual regret minimization (CFR). In the domain of poker, CFR has proven effective, particularly when using a domain-specific augmentation involving chance outcome sampling. In this paper, we describe a general family of domain-independent CFR sample-based algorithms called Monte Carlo counterfactual regret minimization (MCCFR) of which the original and poker-specific versions are special cases. We start by showing that MCCFR performs the same regret updates as CFR on expectation. Then, we introduce two sampling schemes: outcome sampling and external sampling, showing that both have bounded overall regret with high probability. Thus, they can compute an approximate equilibrium using self-play. Finally, we prove a new tighter bound on the regret for the original CFR algorithm and relate this new bound to MCCFR's bounds. We show empirically that, although the sample-based algorithms require more iterations, their lower cost per iteration can lead to dramatically faster convergence in various games.
---
paper_title: Regret minimization in games with incomplete information
paper_content:
Extensive games are a powerful model of multiagent decision-making scenarios with incomplete information. Finding a Nash equilibrium for very large instances of these games has received a great deal of recent attention. In this paper, we describe a new technique for solving large games based on regret minimization. In particular, we introduce the notion of counterfactual regret, which exploits the degree of incomplete information in an extensive game. We show how minimizing counterfactual regret minimizes overall regret, and therefore in self-play can be used to compute a Nash equilibrium. We demonstrate this technique in the domain of poker, showing we can solve abstractions of limit Texas Hold'em with as many as 10^12 states, two orders of magnitude larger than previous methods.
---
paper_title: A Practical Use of Imperfect Recall
paper_content:
Perfect recall is the common and natural assumption that an agent never forgets. As a consequence, the agent can always condition its choice of action on any prior observations. In this paper, we explore relaxing this assumption. We observe the negative impact this relaxation has on algorithms: some algorithms are no longer well-defined, while others lose their theoretical guarantees on the quality of a solution. Despite these disadvantages, we show that removing this restriction can provide considerable empirical advantages when modeling extremely large extensive games. In particular, it allows fine granularity of the most relevant observations without requiring decisions to be contingent on all past observations. In the domain of poker, this improvement enables new types of information to be used in the abstraction. By making use of imperfect recall and new types of information, our poker program was able to win the limit equilibrium event as well as the no-limit event at the 2008 AAAI Computer Poker Competition. We show experimental results to verify that our programs using imperfect recall are indeed stronger than their perfect recall counterparts.
---
paper_title: Using counterfactual regret minimization to create competitive multiplayer poker agents
paper_content:
Games are used to evaluate and advance Multiagent and Artificial Intelligence techniques. Most of these games are deterministic with perfect information (e.g. Chess and Checkers). A deterministic game has no chance element and in a perfect information game, all information is visible to all players. However, many real-world scenarios with competing agents are stochastic (non-deterministic) with imperfect information. For two-player zero-sum perfect recall games, a recent technique called Counterfactual Regret Minimization (CFR) computes strategies that are provably convergent to an ε-Nash equilibrium. A Nash equilibrium strategy is useful in two-player games since it maximizes its utility against a worst-case opponent. However, for multiplayer (three or more player) games, we lose all theoretical guarantees for CFR. Nevertheless, we believe that CFR-generated agents may perform well in multiplayer games. To test this hypothesis, we used this technique to create several 3-player limit Texas Hold'em poker agents and two of them placed first and second in the 3-player event of the 2009 AAAI/IJCAI Computer Poker Competition. We also demonstrate that good strategies can be obtained by grafting sets of two-player subgame strategies to a 3-player base strategy after one of the players is eliminated.
---
paper_title: Probabilistic State Translation in Extensive Games with Large Action Sets
paper_content:
Equilibrium or near-equilibrium solutions to very large extensive form games are often computed by using abstractions to reduce the game size. A common abstraction technique for games with a large number of available actions is to restrict the number of legal actions in every state. This method has been used to discover equilibrium solutions for the game of no-limit heads-up Texas Hold'em. When using a solution to an abstracted game to play one side in the un-abstracted (real) game, the real opponent actions may not correspond to actions in the abstracted game. The most popular method for handling this situation is to translate opponent actions in the real game to the closest legal actions in the abstracted game. We show that this approach can result in a very exploitable player and propose an alternative solution. We use probabilistic mapping to translate a real action into a probability distribution over actions, whose weights are determined by a similarity metric. We show that this approach significantly reduces the exploitability when using an abstract solution in the real game.
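A minimal sketch of the idea is given below: instead of snapping a real bet to the single nearest abstract size, weight is split between the two bracketing sizes. The linear closeness weighting used here is an illustrative stand-in, not necessarily the similarity metric proposed in the paper.

```python
def soft_translate(real_bet, abstract_bets):
    """Map a real-game bet size to a probability distribution over abstracted
    bet sizes. Bets outside the abstracted range snap to the nearest endpoint;
    bets in between split their weight between the two bracketing sizes in
    proportion to closeness."""
    sizes = sorted(abstract_bets)
    if real_bet <= sizes[0]:
        return {sizes[0]: 1.0}
    if real_bet >= sizes[-1]:
        return {sizes[-1]: 1.0}
    for low, high in zip(sizes, sizes[1:]):
        if low <= real_bet <= high:
            weight_high = (real_bet - low) / (high - low)
            return {low: 1.0 - weight_high, high: weight_high}

# Illustrative use: the abstraction only allows half-pot, pot and 2x-pot bets.
print(soft_translate(0.8, [0.5, 1.0, 2.0]))  # {0.5: 0.4, 1.0: 0.6}
```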
---
paper_title: The *-minimax search procedure for trees containing chance nodes
paper_content:
An extension of the alpha-beta tree pruning strategy to game trees with 'probability' nodes, whose values are defined as the (possibly weighted) average of their successors' values, is developed. These '*-minimax' trees pertain to games involving chance but no concealed information. Based upon our search strategy, we formulate and then analyze several algorithms for *-minimax trees. An initial left-to-right depth-first algorithm is developed and shown to reduce the complexity of an exhaustive search strategy by 25-30 percent. An improved algorithm is then formulated to 'probe' beneath the chance nodes of 'regular' *-minimax trees, where players alternate in making moves with chance events interspersed. With random ordering of successor nodes, this modified algorithm is shown to reduce search by more than 50 percent. With optimal ordering, it is shown to reduce search complexity by an order of magnitude. After examining the savings of the first two algorithms on deeper trees, two additional algorithms are presented and analyzed.
---
paper_title: Adaptive play in Texas Hold’em Poker
paper_content:
We present a Texas Hold'em poker player for limit heads-up games. Our bot is designed to adapt automatically to the strategy of the opponent and is not based on Nash equilibrium computation. The main idea is to design a bot that builds beliefs about its opponent's hand. A forest of game trees is generated according to those beliefs and the solutions of the trees are combined to make the best decision. The beliefs are updated during the game according to several methods, each of which corresponds to a basic strategy. We then use an exploration-exploitation bandit algorithm, namely UCB (Upper Confidence Bound), to select a strategy to follow. This results in a global play that takes into account the opponent's strategy, and which turns out to be rather unpredictable. Indeed, if a given strategy is exploited by an opponent, the UCB algorithm will detect it using change point detection, and will choose another one. The resulting program, called Brennus, participated in the AAAI'07 Computer Poker Competition in both the online and equilibrium competitions and ranked eighth out of seventeen competitors.
---
paper_title: Finite-time Analysis of the Multiarmed Bandit Problem
paper_content:
Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the time. One of the simplest examples of the exploration/exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.
---
paper_title: Computing robust counter-strategies
paper_content:
Adaptation to other initially unknown agents often requires computing an effective counter-strategy. In the Bayesian paradigm, one must find a good counter-strategy to the inferred posterior of the other agents' behavior. In the experts paradigm, one may want to choose experts that are good counter-strategies to the other agents' expected behavior. In this paper we introduce a technique for computing robust counter-strategies for adaptation in multiagent scenarios under a variety of paradigms. The strategies can take advantage of a suspected tendency in the decisions of the other agents, while bounding the worst-case performance when the tendency is not observed. The technique involves solving a modified game, and therefore can make use of recently developed algorithms for solving very large extensive games. We demonstrate the effectiveness of the technique in two-player Texas Hold'em. We show that the computed poker strategies are substantially more robust than best response counter-strategies, while still exploiting a suspected tendency. We also compose the generated strategies in an experts algorithm showing a dramatic improvement in performance over using simple best responses.
---
paper_title: MCRNR: Fast Computing of Restricted Nash Responses by Means of Sampling
paper_content:
This paper presents a sample-based algorithm for the computation of restricted Nash strategies in complex extensive form games. Recent work indicates that regret-minimization algorithms using selective sampling, such as Monte-Carlo Counterfactual Regret Minimization (MCCFR), converge faster to Nash equilibrium (NE) strategies than their nonsampled counterparts which perform a full tree traversal. In this paper, we show that MCCFR is also able to establish NE strategies in the complex domain of Poker. Although such strategies are defensive (i.e. safe to play), they are oblivious to opponent mistakes. We can thus achieve better performance by using (an estimation of) opponent strategies. The Restricted Nash Response (RNR) algorithm was proposed to learn robust counter-strategies given such knowledge. It solves a modified game, wherein it is assumed that opponents play according to a fixed strategy with a certain probability, or to a regret-minimizing strategy otherwise. We improve the rate of convergence of the RNR algorithm using sampling. Our new algorithm, MCRNR, samples only relevant parts of the game tree. It is therefore able to converge faster to robust best-response strategies than RNR. We evaluate our algorithm on a variety of imperfect information games that are small enough to solve yet large enough to be strategically interesting, as well as a large game, Texas Hold'em Poker.
---
paper_title: Computing robust counter-strategies
paper_content:
Adaptation to other initially unknown agents often requires computing an effective counter-strategy. In the Bayesian paradigm, one must find a good counter-strategy to the inferred posterior of the other agents' behavior. In the experts paradigm, one may want to choose experts that are good counter-strategies to the other agents' expected behavior. In this paper we introduce a technique for computing robust counter-strategies for adaptation in multiagent scenarios under a variety of paradigms. The strategies can take advantage of a suspected tendency in the decisions of the other agents, while bounding the worst-case performance when the tendency is not observed. The technique involves solving a modified game, and therefore can make use of recently developed algorithms for solving very large extensive games. We demonstrate the effectiveness of the technique in two-player Texas Hold'em. We show that the computed poker strategies are substantially more robust than best response counter-strategies, while still exploiting a suspected tendency. We also compose the generated strategies in an experts algorithm showing a dramatic improvement in performance over using simple best responses.
---
paper_title: Computing robust counter-strategies
paper_content:
Adaptation to other initially unknown agents often requires computing an effective counter-strategy. In the Bayesian paradigm, one must find a good counter-strategy to the inferred posterior of the other agents' behavior. In the experts paradigm, one may want to choose experts that are good counter-strategies to the other agents' expected behavior. In this paper we introduce a technique for computing robust counter-strategies for adaptation in multiagent scenarios under a variety of paradigms. The strategies can take advantage of a suspected tendency in the decisions of the other agents, while bounding the worst-case performance when the tendency is not observed. The technique involves solving a modified game, and therefore can make use of recently developed algorithms for solving very large extensive games. We demonstrate the effectiveness of the technique in two-player Texas Hold'em. We show that the computed poker strategies are substantially more robust than best response counter-strategies, while still exploiting a suspected tendency. We also compose the generated strategies in an experts algorithm showing a dramatic improvement in performance over using simple best responses.
---
paper_title: Finite-time Analysis of the Multiarmed Bandit Problem
paper_content:
Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the time. One of the simplest examples of the exploration/exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.
---
paper_title: CASPER: a Case-Based Poker-Bot
paper_content:
This paper investigates the use of the case-based reasoning methodology applied to the game of Texas hold'em poker. The development of a CASe-based Poker playER (CASPER) is described. CASPER uses knowledge of previous poker scenarios to inform its betting decisions. CASPER improves upon previous case-based reasoning approaches to poker and is able to play evenly against the University of Alberta's Pokibots and Simbots, from which it acquired its case-bases and updates previously published research by showing that CASPER plays profitably against human online competitors for play money. However, against online players for real money CASPER is not profitable. The reasons for this are briefly discussed.
---
paper_title: Inside Case-Based Reasoning
paper_content:
Case-based reasoning, broadly construed, is the process of solving new problems based on the solutions of similar past problems. An auto mechanic who fixes an engine by recalling another car that exhibited similar symptoms is using case-based reasoning. A lawyer who advocates a particular outcome in a trial based on legal precedents is using case-based reasoning. It has been argued that case-based reasoning is not only a powerful method for computer reasoning, but also a pervasive behavior in everyday human problem solving. Case-based reasoning (CBR) has been formalized as a four-step process: 1. Retrieve: Given a target problem, retrieve cases from memory that are relevant to solving it. A case consists of a problem, its solution, and, typically, annotations about how the solution was derived. For example, suppose Fred wants to prepare blueberry pancakes. Being a novice cook, the most relevant experience he can recall is one in which he successfully made plain pancakes. The procedure he followed for making the plain pancakes, together with justifications for decisions made along the way, constitutes Fred's retrieved case. 2. Reuse: Map the solution from the previous case to the target problem. This may involve adapting the solution as needed to fit the new situation. In the pancake example, Fred must adapt his retrieved solution to include the addition of blueberries. 3. Revise: Having mapped the previous solution to the target situation, test the new solution in the real world (or a simulation) and, if necessary, revise. Suppose Fred adapted his pancake solution by adding blueberries to the batter. After mixing, he discovers that the batter has turned blue -- an undesired effect. This suggests the following revision: delay the addition of blueberries until after the batter has been ladled into the pan. 4. Retain: After the solution has been successfully adapted to the target problem, store the resulting experience as a new case in memory. Fred, accordingly, records his newfound procedure for making blueberry pancakes, thereby enriching his set of stored experiences, and better preparing him for future pancake-making demands. At first glance, CBR may seem similar to the rule-induction algorithms of machine learning. Like a rule-induction algorithm, CBR starts with a set of cases or training examples; it forms generalizations of these examples, albeit implicit ones, by identifying commonalities between a retrieved case and the target problem. For instance, when Fred mapped his procedure for plain pancakes to blueberry pancakes, he decided to use the same basic batter and frying method, thus implicitly generalizing the set of situations under which the batter and frying method can be used. The key difference, however, between the implicit generalization in CBR and the generalization in rule induction lies in when the generalization is made. A rule-induction algorithm draws its generalizations from a set of training examples before the target problem is even known; that is, it performs eager generalization. For instance, if a rule-induction algorithm were given recipes for plain pancakes, Dutch apple pancakes, and banana pancakes as its training examples, it would have to derive, at training time, a set of general rules for making all types of pancakes. It would not be until testing time that it would be given, say, the task of cooking blueberry pancakes.
The difficulty for the rule-induction algorithm is in anticipating the different directions in which it should attempt to generalize its training examples. This is in contrast to CBR, which delays (implicit) generalization of its cases until testing time -- a strategy of lazy generalization. In the pancake example, CBR has already been given the target problem of cooking blueberry pancakes; thus it can generalize its cases exactly as needed to cover this situation. CBR therefore tends to be a good approach for rich, complex domains in which there are myriad ways to generalize a case.
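The four-step cycle described above can be summarised in a short sketch. The nearest-neighbour retrieval, the adaptation rule and the pancake-flavoured data below are illustrative assumptions, not the design of any particular CBR system.

```python
from dataclasses import dataclass, field
import math

@dataclass
class Case:
    problem: list          # numeric feature vector describing the problem
    solution: str

@dataclass
class CaseBase:
    cases: list = field(default_factory=list)

    def retrieve(self, target):
        """1. Retrieve: nearest stored case by Euclidean distance."""
        return min(self.cases, key=lambda c: math.dist(c.problem, target))

    def reuse(self, case, target, adapt):
        """2. Reuse: adapt the retrieved solution to the target problem."""
        return adapt(case, target)

    def revise(self, solution, evaluate):
        """3. Revise: test the proposed solution and repair it if needed."""
        ok, repaired = evaluate(solution)
        return solution if ok else repaired

    def retain(self, target, solution):
        """4. Retain: store the solved problem as a new case."""
        self.cases.append(Case(target, solution))

# Illustrative usage with made-up pancake-like cases.
cb = CaseBase([Case([0, 0], "plain pancake recipe"),
               Case([1, 0], "apple pancake recipe")])
target = [0, 1]                                  # the "blueberry" problem
retrieved = cb.retrieve(target)
proposed = cb.reuse(retrieved, target, lambda c, t: c.solution + " + blueberries")
final = cb.revise(proposed, lambda s: (False, s + " (add berries after ladling)"))
cb.retain(target, final)
print(final)
```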
---
paper_title: CASPER: DESIGN AND DEVELOPMENT OF A CASE-BASED POKER PLAYER
paper_content:
A power unit for a conduit cleaner. The conduit cleaner has a motor with a housing carrying a shaft that is rotatable about an axis. The motor operates in response to the introduction of a pressurized fluid. The housing has inlets for admitting pressurized fluid and an outlet to permit the discharge thereof. A nozzle is connected to the housing and is configured to direct fluid from a pressurized supply to the housing inlet and direct fluid discharged from the outlet out of the power unit. The nozzle has a substantially cylindrical outer surface. In one form, the nozzle does not project beyond the cylindrical outer surface. Accordingly, a compact unit can be made according to the present invention. The absence of radially projecting structure also avoids protrusions that may intercept roots or other foreign matter within a conduit and thereby interrupt free movement of the conduit cleaner through a conduit.
---
paper_title: A memory-based approach to two-player Texas hold’em
paper_content:
A Case-Based Reasoning system, nicknamed SARTRE, that uses a memory-based approach to play two-player, limit Texas Hold'em is introduced. SARTRE records hand histories from strong players and attempts to re-use this information to handle novel situations. SARTRE'S case features and their representations are described, followed by the results obtained when challenging a world-class computerised opponent. Our experimental methodology attempts to address how well SARTRE'S performance can approximate the performance of the expert player, who SARTRE originally derived the experience-base from.
---
paper_title: Pareto coevolution: using performance against coevolved opponents in a game as dimensions for Pareto selection
paper_content:
When using an automatic discovery method to find a good strategy in a game, we hope to find one that performs well against a wide variety of opponents. An appealing notion in the use of evolutionary algorithms to coevolve strategies is that the population represents a set of different strategies against which a player must do well. Implicit here is the idea that different players represent different "dimensions" of the domain, and being a robust player means being good in many (preferably all) dimensions of the game. Pareto coevolution makes this idea of "players as dimensions" explicit. By explicitly treating each player as a dimension, or objective, we may then use established multi-objective optimization techniques to find robust strategies. In this paper, we apply Pareto coevolution to Texas Hold'em poker, a complex real-world game of imperfect information. The performance of our Pareto coevolution algorithm is compared with that of a conventional genetic algorithm and shown to be promising.
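A hedged sketch of the "players as dimensions" idea: given a payoff matrix of candidate strategies against a set of coevolved opponents, each opponent column is treated as one objective and the non-dominated (Pareto-optimal) candidates are extracted. The payoff numbers are made up, and the full selection machinery of the actual algorithm is not reproduced.

```python
import numpy as np

def pareto_front(payoffs):
    """Indices of candidates not dominated by any other candidate.
    payoffs[i, j] = payoff of candidate i against opponent j (higher is better);
    each opponent column is treated as one objective."""
    n = payoffs.shape[0]
    front = []
    for i in range(n):
        dominated = any(np.all(payoffs[k] >= payoffs[i]) and np.any(payoffs[k] > payoffs[i])
                        for k in range(n) if k != i)
        if not dominated:
            front.append(i)
    return front

# Made-up payoffs of 4 candidate strategies against 3 opponents.
P = np.array([[0.3, 0.5, 0.2],
              [0.4, 0.5, 0.3],    # dominates candidate 0
              [0.1, 0.9, 0.1],
              [0.4, 0.4, 0.4]])
print("Pareto-optimal candidates:", pareto_front(P))   # -> [1, 2, 3]
```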
---
paper_title: No-Limit Texas Hold'em Poker agents created with evolutionary neural networks
paper_content:
In order for computer Poker agents to play the game well, they must analyse their current quality despite imperfect information, predict the likelihood of future game states dependent upon random outcomes, model opponents who are deliberately trying to mislead them, and manage finances to improve their current condition. This leads to a game space that is large compared to other classic games such as Chess and Backgammon. Evolutionary methods have been shown to find relatively good results in large state spaces, and neural networks have been shown to be able to find solutions to non-linear search problems such as Poker. In this paper, we develop No-Limit Texas Hold'em Poker agents using a hybrid method known as evolving neural networks. We also investigate the appropriateness of evolving these agents using evolutionary heuristics such as co-evolution and halls of fame. Our agents were experimentally evaluated against several benchmark agents as well as agents previously developed in other work. Experimental results show the overall best performance was obtained by an agent evolved from a single population (i.e., no co-evolution) using a large hall of fame. These results demonstrate an effective use of evolving neural networks to create competitive No-Limit Texas Hold'em Poker agents.
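A minimal sketch of the general recipe this abstract describes: evolve a flattened vector of network weights against a growing hall of fame of past champions. The match simulator, population sizes and mutation scale below are placeholders, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N_WEIGHTS = 64   # length of the flattened network weight vector (illustrative size)

def play_match(weights_a, weights_b):
    """Placeholder for a poker match simulator returning A's average winnings
    against B; here just a made-up smooth function of the weight vectors."""
    return float(np.tanh(weights_a.sum() - weights_b.sum()))

def fitness(candidate, hall_of_fame):
    """Average result against every archived hall-of-fame opponent."""
    return np.mean([play_match(candidate, champ) for champ in hall_of_fame])

def evolve(generations=30, pop_size=20, elite=5, sigma=0.1):
    population = [rng.normal(0, 1, N_WEIGHTS) for _ in range(pop_size)]
    hall_of_fame = [rng.normal(0, 1, N_WEIGHTS)]                 # seed opponent
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, hall_of_fame), reverse=True)
        parents = population[:elite]
        hall_of_fame.append(parents[0].copy())                   # archive the champion
        children = [parents[rng.integers(elite)] + rng.normal(0, sigma, N_WEIGHTS)
                    for _ in range(pop_size - elite)]            # (mu + lambda) refill
        population = parents + children
    return parents[0], hall_of_fame

best, hof = evolve()
print("hall-of-fame size:", len(hof), " best fitness:", fitness(best, hof))
```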
---
| Title: Computer Poker: A Review
Section 1: Introduction
Description 1: This section introduces the game of poker as a beneficial domain for AI research, highlighting the historical context and the importance of poker competitions in driving AI advancements.
Section 2: Texas Hold'em
Description 2: This section provides a brief description of the game of Texas Hold'em, explaining its rules and common terms used throughout the paper.
Section 3: Performance Metrics & Evaluation
Description 3: This section summarizes the different types of performance measurements and agent evaluations mentioned in the paper, including types of strategies and performance evaluators.
Section 4: Knowledge-Based Systems
Description 4: This section reviews early work on knowledge-based poker agents, detailing rule-based expert systems, formula-based strategies, and the agents that have applied these approaches.
Section 5: Simulation-Based Poker Agents
Description 5: This section introduces Monte-Carlo simulation and other simulation-based methods used in poker agents, discussing their strengths and challenges along with notable agents developed using these methods.
Section 6: Game Theoretic Equilibrium Solutions
Description 6: This section discusses the computation of equilibrium solutions using game theory, including the field of game theory, equilibrium strategies for Texas Hold'em, and the creation of near-equilibrium poker agents.
Section 7: Iterative Algorithms for Finding -Nash Equilibria
Description 7: This section presents iterative algorithms such as fictitious play and counterfactual regret minimisation (CFR) used for computing -Nash equilibria in poker, along with the poker agents developed using these techniques.
Section 8: Exploitive Counter-Strategies
Description 8: This section reviews agents that attempt to exploit their opponents by constructing opponent models, discussing methods such as imperfect information game tree search and game-theoretic counter-strategies.
Section 9: Alternative Approaches
Description 9: This section introduces alternative approaches for constructing poker strategies, including case-based reasoning, evolutionary algorithms, neural networks, and Bayesian poker.
Section 10: Conclusion
Description 10: This section provides a summary of the algorithms and approaches discussed, highlighting the opportunities and challenges in AI research within the poker domain. |
A Survey of Approaches and Challenges in the Real-time Multimedia Streaming | 3 | ---
paper_title: Development and Coverage Evaluation of ZigBee-Based Wireless Network Applications
paper_content:
Network coverage is one of the basic issues for information collection and data processing in ZigBee-based wireless sensor networks. Each node may be randomly distributed in a monitoring area, reflecting the network event of tracking in ZigBee network applications. This paper presents the development and coverage evaluation of a ZigBee-based wireless network application. A stack structure node available for home service integration is proposed, and all data from the sensing nodes are combined with adaptive weighted fusion (AWF) processing, passed to the gateway, repacketized there, and then reported to the monitoring center, which effectively optimizes the data processing efficiency of the wireless network. Linear interpolation theory is used in the background graphical user interface to evaluate the working status of each node and the overall network coverage. A testbed has been created for validating the basic functions of the proposed ZigBee-based home network system. Network coverage capabilities were tested, and the packet loss and energy saving of the proposed system in long-term wireless network monitoring tasks were also verified.
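A common textbook formulation of adaptive weighted fusion weights each sensor by the inverse of its estimated error variance, so noisier sensors contribute less to the fused estimate. The sketch below shows that generic formulation with made-up sensor readings; it is not necessarily the exact scheme used in the paper.

```python
import numpy as np

def adaptive_weighted_fusion(readings):
    """Fuse repeated readings from several sensors.
    readings[i] is a 1-D array of samples from sensor i; each sensor is weighted
    by the inverse of its estimated variance."""
    means = np.array([np.mean(r) for r in readings])
    variances = np.array([np.var(r, ddof=1) for r in readings])
    inv_var = 1.0 / variances
    weights = inv_var / inv_var.sum()
    return float(np.dot(weights, means)), weights

# Three made-up temperature sensors with different noise levels.
rng = np.random.default_rng(1)
sensors = [rng.normal(25.0, s, 50) for s in (0.2, 0.5, 2.0)]
fused, w = adaptive_weighted_fusion(sensors)
print("fused estimate:", round(fused, 3), " weights:", np.round(w, 3))
```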
---
paper_title: Scalable Video Multicast Using Expanding Window Fountain Codes
paper_content:
Fountain codes were introduced as an efficient and universal forward error correction (FEC) solution for data multicast over lossy packet networks. They have recently been proposed for large scale multimedia content delivery in practical multimedia distribution systems. However, standard fountain codes, such as LT or Raptor codes, are not designed to meet unequal error protection (UEP) requirements typical in real-time scalable video multicast applications. In this paper, we propose recently introduced UEP expanding window fountain (EWF) codes as a flexible and efficient solution for real-time scalable video multicast. We demonstrate that the design flexibility and UEP performance make EWF codes ideally suited for this scenario, i.e., EWF codes offer a number of design parameters to be ldquotunedrdquo at the server side to meet the different reception criteria of heterogeneous receivers. The performance analysis using both analytical results and simulation experiments of H.264 scalable video coding (SVC) multicast to heterogeneous receiver classes confirms the flexibility and efficiency of the proposed EWF-based FEC solution.
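The core idea of expanding window fountain codes is that every encoded symbol is generated from one of several nested windows of source symbols, so the base layer, contained in all windows, receives the strongest protection. The toy encoder below illustrates only that mechanism; the window selection probabilities and the degree distribution are made up and are not the optimised distributions of the paper.

```python
import random

def ewf_encode_symbol(source, windows, window_probs, rng=random):
    """Generate one encoded symbol of an expanding-window fountain code.
    source       : list of equal-length byte strings (source symbols)
    windows      : nested window sizes, e.g. [4, 12] -> windows source[:4], source[:12]
    window_probs : probability of picking each window (the base layer lies in both)"""
    w_end = rng.choices(windows, weights=window_probs, k=1)[0]
    degree = rng.randint(1, min(4, w_end))              # toy degree distribution
    picks = rng.sample(range(w_end), degree)
    symbol = bytearray(len(source[0]))
    for i in picks:                                     # XOR the chosen source symbols
        symbol = bytearray(a ^ b for a, b in zip(symbol, source[i]))
    return picks, bytes(symbol)

# 12 made-up source symbols: the first 4 form the "base layer" of a scalable stream.
source = [bytes([i]) * 8 for i in range(12)]
for _ in range(3):
    picks, sym = ewf_encode_symbol(source, windows=[4, 12], window_probs=[0.6, 0.4])
    print(picks, sym.hex())
```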
---
paper_title: Tuning Skype's Redundancy Control Algorithm for User Satisfaction
paper_content:
Determining how to transport delay-sensitive voice data has long been a problem in multimedia networking. The difficulty arises because voice and best-effort data are different by nature. It would not be fair to give priority to voice traffic and starve its best-effort counterpart; however, the voice data delivered might not be perceptible if each voice call is limited to the rate of an average TCP flow. To address the problem, we approach it from a user-centric perspective by tuning the voice data rate based on user satisfaction. Our contribution in this work is threefold. First, we investigate how Skype, the largest and fastest growing VoIP service on the Internet, adapts its voice data rate (i.e., the redundancy ratio) to network conditions. Second, by exploiting implementations of public domain codecs, we discover that Skype's mechanism is not really geared to user satisfaction. Third, based on a set of systematic experiments that quantify user satisfaction under different levels of packet loss and burstiness, we derive a concise model that allows user-centric redundancy control. The model can be easily incorporated into general VoIP services (not only Skype) to ensure consistent user satisfaction. Index Terms—MOS, PESQ, Piggyback, QoE (Quality of Experience), QoS (Quality of Service), VoIP
---
paper_title: OneClick: A Framework for Measuring Network Quality of Experience
paper_content:
As the service requirements of network applications shift from high throughput to high media quality, interactivity, and responsiveness, the definition of QoE (Quality of Experience) has become multidimensional. Although it may not be difficult to measure individual dimensions of the QoE, how to capture users' overall perceptions when they are using network applications remains an open question. In this paper, we propose a framework called OneClick to capture users' perceptions when they are using network applications. The framework only requires a subject to click a dedicated key whenever he/she feels dissatisfied with the quality of the application in use. OneClick is particularly effective because it is intuitive, lightweight, efficient, time-aware, and application-independent. We use two objective quality assessment methods, PESQ and VQM, to validate OneClick's ability to evaluate the quality of audio and video clips. To demonstrate the proposed framework's efficiency and effectiveness in assessing user experiences, we implement it on two applications, one for instant messaging applications, and the other for first-person shooter games. A Flash implementation of the proposed framework is also presented.
---
paper_title: Impact of Network Performance on Cloud Speech Recognition
paper_content:
Interactive real-time communication between people and machine enables innovations in transportation, health care, etc. Using voice or gesture commands improves usability and broad public appeal of such systems. In this paper we experimentally evaluate Google speech recognition and Apple Siri - two of the most popular cloud-based speech recognition systems. Our goal is to evaluate the performance of these systems under different network conditions in terms of command recognition accuracy and round trip delay - two metrics that affect interactive application usability. Our results show that speech recognition systems are affected by loss and jitter, commonly present in cellular and WiFi networks. Finally, we propose and evaluate a network coding transport solution to improve the quality of voice transmission to cloud-based speech recognition systems. Experiments show that our approach improves the accuracy and delay of cloud speech recognizers under different loss and jitter values.
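The simplest member of the coding family hinted at here is a single XOR parity packet per block of k packets, which lets the receiver rebuild one lost packet without waiting for a retransmission. The sketch below shows only that idea and is not the authors' transport scheme.

```python
def xor_block(packets):
    """XOR parity over equal-length packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        parity = bytearray(a ^ b for a, b in zip(parity, p))
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet in a block (None marks the loss)."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        return received                      # nothing to do, or unrecoverable
    others = [p for p in received if p is not None]
    received[missing[0]] = xor_block(others + [parity])
    return received

block = [b"aaaa", b"bbbb", b"cccc", b"dddd"]     # k = 4 voice packets
parity = xor_block(block)
damaged = [block[0], None, block[2], block[3]]   # packet 1 lost in transit
print(recover(damaged, parity)[1])               # -> b'bbbb'
```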
---
paper_title: Video telephony for end-consumers: measurement study of Google+, iChat, and Skype
paper_content:
Video telephony requires high-bandwidth and low-delay voice and video transmissions between geographically distributed users. It is challenging to deliver high-quality video telephony to end-consumers through the best-effort Internet. In this paper, we present our measurement study on three popular video telephony systems on the Internet: Google+, iChat, and Skype. Through a series of carefully designed active and passive measurements, we are able to unveil important information about their key design choices and performance, including application architecture, video generation and adaptation schemes, loss recovery strategies, end-to-end voice and video delays, resilience against random and bursty losses, etc. Obtained insights can be used to guide the design of applications that call for high-bandwidth and low-delay data transmissions under a wide range of "best-effort" network conditions.
---
paper_title: Video quality evaluation for Internet streaming applications
paper_content:
We carried out a number of subjective experiments using typical streaming content, codecs, bitrates and network conditions. In an attempt to review subjective testing procedures for video streaming applications, we used both Single Stimulus Continuous Quality Evaluation (SSCQE) and Double Stimulus Impairment Scale (DSIS) methods on the same test material. We thus compare these testing methods and present an analysis of the experimental results in view of codec performance. Finally, we use the subjective data to corroborate the prediction accuracy of a real-time non-reference quality metric.
---
paper_title: Video quality evaluation for mobile streaming applications
paper_content:
This paper presents the results of our quality evaluations of video sequences encoded for and transmitted over a wireless channel. We selected content, codecs, bitrates and bit error patterns representative of mobile applications and we concentrated on the MPEG-4 and Motion JPEG2000 coding standards. We carried out subjective experiments using the Single Stimulus Continuous Quality Evaluation (SSCQE) method on this test material. We analyze the results of the subjective data and use them to compare codec performance and resilience to transmission errors. Finally, we use the subjective data to validate the prediction performance of a real-time non-reference quality metric.
---
paper_title: Mesh-Pull-Based P2P Video Streaming System Using Fountain Codes
paper_content:
In this work, we propose a simple but effective mesh-pull-based P2P video streaming system using Fountain codes with variable symbol sizes for video-on-demand services. The goal of the proposed system is to provide a stable video streaming service of high quality with a low computational complexity and short initial latency. Fountain codes are adopted in the proposed system to simplify the handshaking procedure which causes a large initial latency, and to support a robust video streaming service. The proposed system works by using feedback information to reduce unnecessary encoded symbol generation. In addition, the Fountain code symbol size is continuously adjusted to minimize additional computational overhead required for Fountain encoding/decoding.
---
paper_title: Improved coexistence and loss tolerance for delay based TCP congestion control
paper_content:
Loss based TCP congestion control has been shown to not perform well in environments where there are non-congestion-related packet losses. Delay based TCP congestion control algorithms provide a low latency connection with no congestion related packet losses, and have the potential for being tolerant to non-congestion related losses. Unfortunately, delay based TCP does not compete well with loss based TCP, currently limiting its deployment. We propose a delay based algorithm which extends work by Budzisz et al. [1] to provide tolerance to non-congestion related losses, and better coexistence with loss based TCP in lightly multiplexed environments. We demonstrate that our algorithm improves the throughput when there are 1% packet losses by about 150%, and gives more than 50% improvement in the ability to share capacity with NewReno in lightly multiplexed environments.
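For orientation, a Vegas-style delay-based window update, the general family this work builds on, looks roughly like the sketch below. The thresholds and the loss reaction shown are illustrative, not the algorithm proposed in the paper.

```python
def delay_based_update(cwnd, base_rtt, rtt, loss_seen,
                       alpha=2.0, beta=4.0, loss_backoff=0.5):
    """One RTT's worth of Vegas-style window adaptation.
    diff estimates how many packets the flow keeps queued in the network."""
    diff = cwnd * (1.0 - base_rtt / rtt)      # == (cwnd/base_rtt - cwnd/rtt) * base_rtt
    if loss_seen:
        return max(2.0, cwnd * loss_backoff)  # naive reaction; a loss-tolerant variant
                                              # would first check whether queueing delay rose
    if diff < alpha:
        return cwnd + 1.0                     # queue is short: probe for more bandwidth
    if diff > beta:
        return cwnd - 1.0                     # queue is building: back off before loss
    return cwnd

cwnd, base_rtt = 10.0, 0.050
for rtt, loss in [(0.050, False), (0.052, False), (0.070, False), (0.070, True)]:
    cwnd = delay_based_update(cwnd, base_rtt, rtt, loss)
    print(f"rtt={rtt*1000:.0f} ms loss={loss} -> cwnd={cwnd:.1f}")
```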
---
paper_title: On the limit of fountain MDC codes for video Peer-To-Peer networks
paper_content:
Video streaming for heterogeneous types of devices, where nodes have different device characteristics in terms of computational capacity and display, is usually handled by encoding the video with different qualities. This is not well suited for Peer-To-Peer (P2P) systems, as a single peer group can only share content of the same quality, thus limiting the peer group size and efficiency. To address this problem, several existing works propose the use of Multiple Descriptions Coding (MDC). The concept of this type of video codec is to split a video into a number of descriptions which can be used on their own, or aggregated to improve the global quality of the video. Unfortunately, existing MDC codes are not flexible, as the video is split into a fixed number of descriptions. In this paper, we focus on the practical feasibility of using a Fountain MDC code with properties similar to existing Fountain erasure codes, including the ability to create any number of descriptions when needed (on the fly). We perform simulations using selected pictures to assess the feasibility of using these codes, knowing that they should improve the availability of the video pieces in a P2P system and hence the video streaming quality. We observe that, although this idea seems promising, the evaluated benefits, demonstrated by the PSNR values, are limited when used in a real P2P video streaming system.
---
paper_title: A crowdsourceable QoE evaluation framework for multimedia content
paper_content:
Until recently, QoE (Quality of Experience) experiments had to be conducted in academic laboratories; however, with the advent of ubiquitous Internet access, it is now possible to ask an Internet crowd to conduct experiments on their personal computers. Since such a crowd can be quite large, crowdsourcing enables researchers to conduct experiments with a more diverse set of participants at a lower economic cost than would be possible under laboratory conditions. However, because participants carry out experiments without supervision, they may give erroneous feedback perfunctorily, carelessly, or dishonestly, even if they receive a reward for each experiment. In this paper, we propose a crowdsourceable framework to quantify the QoE of multimedia content. The advantages of our framework over traditional MOS ratings are: 1) it enables crowdsourcing because it supports systematic verification of participants' inputs; 2) the rating procedure is simpler than that of MOS, so there is less burden on participants; and 3) it derives interval-scale scores that enable subsequent quantitative analysis and QoE provisioning. We conducted four case studies, which demonstrated that, with our framework, researchers can outsource their QoE evaluation experiments to an Internet crowd without risking the quality of the results; and at the same time, obtain a higher level of participant diversity at a lower monetary cost.
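One standard way to turn pairwise "which was better" judgements into the interval-scale scores mentioned here is a Bradley-Terry-style fit; the sketch below uses made-up comparison counts, and the paper's actual scaling model may differ.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths from a win-count matrix.
    wins[i, j] = number of times condition i was preferred over condition j.
    Returns zero-mean log-strengths (interval-scale quality scores)."""
    n = wins.shape[0]
    p = np.ones(n)
    games = wins + wins.T
    for _ in range(iters):                           # standard MM update
        for i in range(n):
            denom = sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = wins[i].sum() / denom
        p /= p.sum()
    scores = np.log(p)
    return scores - scores.mean()

# Made-up paired-comparison counts for three encoding conditions.
wins = np.array([[0, 14, 18],
                 [6,  0, 12],
                 [2,  8,  0]])
print(np.round(bradley_terry(wins), 2))
```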
---
paper_title: A measurement-based study of MultiPath TCP performance over wireless networks
paper_content:
With the popularity of mobile devices and the pervasive use of cellular technology, there is widespread interest in hybrid networks and on how to achieve robustness and good performance from them. As most smart phones and mobile devices are equipped with dual interfaces (WiFi and 3G/4G), a promising approach is through the use of multi-path TCP, which leverages path diversity to improve performance and provide robust data transfers. In this paper we explore the performance of multi-path TCP in the wild, focusing on simple 2-path multi-path TCP scenarios. We seek to answer the following questions: How much can a user benefit from using multi-path TCP over cellular and WiFi relative to using the either interface alone? What is the impact of flow size on average latency? What is the effect of the rate/route control algorithm on performance? We are especially interested in understanding how application level performance is affected when path characteristics (e.g., round trip times and loss rates) are diverse. We address these questions by conducting measurements using one commercial Internet service provider and three major cellular carriers in the US.
---
| Title: A Survey of Approaches and Challenges in the Real-time Multimedia Streaming
Section 1: Introduction
Description 1: Introduce the significance of multimedia streaming, the motivation for the study, and the scope of the survey.
Section 2: Related Work
Description 2: Discuss various experimental evaluations and studies conducted on multimedia streaming applications, focusing on their findings and implications.
Section 3: Conclusion
Description 3: Summarize the general trends, challenges, and potential solutions in streaming applications, and highlight open research areas and advancements in the field.
A survey of human-computer interaction design in science fiction movies | 8 | ---
paper_title: Connectables: dynamic coupling of displays for the flexible creation of shared workspaces
paper_content:
We present the ConnecTable, a new mobile, networked and context-aware information appliance that provides affordances for pen-based individual and cooperative work as well as for the seamless transition between the two. In order to dynamically enlarge an interaction area for the purpose of shared use, a flexible coupling of displays has been realized that overcomes the restrictions of display sizes and borders. Two ConnecTable displays dynamically form a homogeneous display area when moved close to each other. The appropriate triggering signal comes from built-in sensors allowing users to temporally combine their individual displays to a larger shared one by a simple physical movement in space. Connected ConnecTables allow their users to work in parallel on an ad-hoc created shared workspace as well as exchanging information by simply shuffling objects from one display to the other. We discuss the user interface and related issues as well as the software architecture. We also present the physical realization of the ConnecTables.
---
paper_title: Modelling personality in voices of talking products through prosodic parameters
paper_content:
In this paper we report preliminary findings from two user studies that on the one hand investigate how prosodic parameters of synthetic speech can influence the perceived impression of the speakers personality and on the other hand explores if and how people attribute personality to objects such as typical products of daily shopping. The results show that a) prosodic parameters have a strong influence on the perceived personality and can be partially used to achieve a desired impression and b) that subjects clearly attribute personalities to products. Both findings encourage us to continue our work on a dialogue shell for talking products.
---
paper_title: Rendering for an interactive 360° light field display
paper_content:
We describe a set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display. The display consists of a high-speed video projector, a spinning mirror covered by a holographic diffuser, and FPGA circuitry to decode specially rendered DVI video signals. The display uses a standard programmable graphics card to render over 5,000 images per second of interactive 3D graphics, projecting 360-degree views with 1.25 degree separation up to 20 updates per second. We describe the system's projection geometry and its calibration process, and we present a multiple-center-of-projection rendering technique for creating perspective-correct images from arbitrary viewpoints around the display. Our projection technique allows correct vertical perspective and parallax to be rendered for any height and distance when these parameters are known, and we demonstrate this effect with interactive raster graphics using a tracking system to measure the viewer's height and distance. We further apply our projection technique to the display of photographed light fields with accurate horizontal and vertical parallax. We conclude with a discussion of the display's visual accommodation performance and discuss techniques for displaying color imagery.
---
paper_title: Navigational - and shopping assistance on the basis of user interactions in intelligent environments
paper_content:
This paper presents an overview of ongoing work in the project REAL, where we have set up the Saarland University Pervasive Instrumented Environment (SUPIE). In particular we introduce the intelligent environment's architecture, which serves as the basis for different services and applications running in the environment and supporting their users in different tasks. On the basis of this information we outline the user and location-modeling component needed to establish the navigation and shopping assistants developed so far. Both assistants support their users with specially customized presentations. These presentations will be automatically scheduled and presented on public displays in the environment, as explained in the remarks about the presentation manager. Finally, we provide a short outlook on planned future work in the project.
---
| Title: A Survey of Human-Computer Interaction Design in Science Fiction Movies
Section 1: INTRODUCTION
Description 1: Discuss the impact of science fiction movies on public perception and the interaction between real-world trends and fictional technology, particularly focusing on the depiction of human-computer interfaces.
Section 2: INFLUENCING FACTORS FOR INTERACTION DESIGN IN MOVIES
Description 2: Examine key factors such as special effects technologies, budget, and the importance of technology in the movies that influence the design of human-computer interaction.
Section 3: MOVIES ADOPTING CURRENT HCI
Description 3: Review movies that either lack innovative HCI concepts or merely adapt common technologies of their time.
Section 4: MOVIES WITH UNREALIZED HCI VISIONS
Description 4: Categorize movies that have their unique visions of HCI, some of which have never been implemented or are unlikely to be realized.
Section 5: MOVIES ANTICIPATING OR INSPIRING FUTURE HCI CONCEPTS
Description 5: Provide examples of films that show technology and HCI concepts which were later realized in the real world or inspired future developments.
Section 6: COLLABORATION BETWEEN MOVIE AND HCI VISIONARIES
Description 6: Discuss the collaboration between filmmakers and HCI scientists to create realistic yet inspiring visions of future computer usage, focusing on exemplars such as Minority Report.
Section 7: ANECDOTES
Description 7: Present entertaining and satirical interactions with human-computer interfaces in movies that reflect or exaggerate real interaction techniques.
Section 8: DISCUSSION
Description 8: Analyze the patterns observed in the collaboration between filmmakers and scientists regarding HCI in movies, and discuss the impact of movies on public expectations and researchers' designs. |
Electrochemical Biosensors for Rapid Detection of Foodborne Salmonella: A Critical Overview | 13 | ---
paper_title: Fresh Produce: A Growing Cause of Outbreaks of Foodborne Illness in the United States, 1973 through 1997
paper_content:
Fresh produce is an important part of a healthy diet. During the last three decades, the number of outbreaks caused by foodborne pathogens associated with fresh produce consumption reported to the Centers for Disease Control and Prevention has increased. To identify trends we analysed data for 1973 through 1997 from the Foodborne Outbreak Surveillance System. We defined a produce-associated outbreak as the occurrence of two or more cases of the same illness in which epidemiologic investigation implicated the same uncooked fruit, vegetable, salad, or juice. A total of 190 produce-associated outbreaks were reported, associated with 16,058 illnesses, 598 hospitalizations, and eight deaths. Produce-associated outbreaks accounted for an increasing proportion of all reported foodborne outbreaks with a known food item, rising from 0.7% in the 1970s to 6% in the 1990s. Among produce-associated outbreaks, the food items most frequently implicated included salad, lettuce, juice, melon, sprouts and berries. Among 103 (54%) produce-associated outbreaks with a known pathogen, 62 (60%) were caused by bacterial pathogens, of which 30 (48%) were caused by Salmonella. During the study period, Cyclospora and Escherichia coli O157:H7 were newly recognized as causes of foodborne illness. Foodborne outbreaks associated with fresh produce in the United States have increased in absolute numbers and as a proportion of all reported foodborne outbreaks. Fruit and vegetables are major components of a healthy diet, but eating fresh uncooked produce is not risk free. Further efforts are needed to better understand the complex interactions between microbes and produce and the mechanisms by which contamination occurs from farm to table.
---
paper_title: Salmonellosis outbreaks in the United States due to fresh produce: sources and potential intervention measures.
paper_content:
Foodborne Salmonella spp. is a leading cause of foodborne illness in the United States each year. Traditionally, most cases of salmonellosis were thought to originate from meat and poultry products. However, an increasing number of salmonellosis outbreaks are occurring as a result of contaminated produce. Several produce items specifically have been identified in outbreaks, and the ability of Salmonella to attach or internalize into vegetables and fruits may be factors that make these produce items more likely to be sources of Salmonella. In addition, environmental factors including contaminated water sources used to irrigate and wash produce crops have been implicated in a large number of outbreaks. Salmonella is carried by both domesticated and wild animals and can contaminate freshwater by direct or indirect contact. In some cases, direct contact of produce or seeds with contaminated manure or animal wastes can lead to contaminated crops. This review examines outbreaks of Salmonella due to con...
---
paper_title: Risk Factors for Microbial Contamination in Fruits and Vegetables at the Preharvest Level: A Systematic Review
paper_content:
The objective of this study was to perform a systematic review of risk factors for contamination of fruits and vegetables with Listeria monocytogenes, Salmonella, and Escherichia coli O157:H7 at the preharvest level. Relevant studies were identified by searching six electronic databases: MEDLINE, EMBASE, CAB Abstracts, AGRIS, AGRICOLA, and FSTA, using the following thesaurus terms: L. monocytogenes, Salmonella, E. coli O157 AND fruit, vegetable. All search terms were exploded to find all related subheadings. To be eligible, studies had to be prospective controlled trials or observational studies at the preharvest level and had to show clear and sufficient information on the process in which the produce was contaminated. Of the 3,463 citations identified, 68 studies fulfilled the eligibility criteria. Most of these studies were on leafy greens and tomatoes. Six studies assessed produce contamination with respect to animal host-related risk factors, and 20 studies assessed contamination with respect to pathogen characteristics. Sixty-two studies assessed the association between produce contamination and factors related to produce, water, and soil, as well as local ecological conditions of the production location. While evaluations of many risk factors for preharvest-level produce contamination have been reported, the quality assessment of the reviewed studies confirmed the existence of solid evidence for only some of them, including growing produce on clay-type soil, the application of contaminated or non-pH-stabilized manure, and the use of spray irrigation with contaminated water, with a particular risk of contamination on the lower leaf surface. In conclusion, synthesis of the reviewed studies suggests that reducing microbial contamination of irrigation water and soil are the most effective targets for the prevention and control of produce contamination. Furthermore, this review provides an inventory of the evaluated risk factors, including those requiring more research.
---
paper_title: Ecology of E. coli O157:H7 and Salmonella enterica in the Primary Vegetable Production Chain
paper_content:
There is an increased concern that plants might be more important as a carrier for human enteric pathogens like E. coli O157:H7 and Salmonella enterica serovars than previously thought. This review summarizes the knowledge available on the ecology of E. coli O157:H7 and Salmonella enterica in the primary production chain of leafy green vegetables (in particular lettuce), including manure, manure-amended soil, and crop. Based on the available literature, suggestions are made for the control of these pathogens. The suggested approach of oligotrophication of agro-ecosystems fits in the wider approach to lower environmental emissions of nutrients from manure application and to enhance the suppression against plant pathogens.
---
paper_title: Electrochemical coding technology for simultaneous detection of multiple DNA targets.
paper_content:
Nucleic-acid hybridization assays based on the use of different inorganic-colloid (quantum dots) nanocrystal tracers for the simultaneous electrochemical measurements of multiple DNA targets are described. Three encoding nanoparticles (zinc sulfide, cadmium sulfide, and lead sulfide) are used to differentiate the signals of three DNA targets in connection to stripping-voltammetric measurements of the heavy metal dissolution products. These products yield well-defined and resolved stripping peaks at -1.12 V (Zn), -0.68 V (Cd), and -0.53 V (Pb) at the mercury-coated glassy-carbon electrode (vs Ag/AgCl reference). The position and size of these peaks reflect the identity and level of the corresponding DNA target. The multi-target detection capability is coupled to the amplification feature of stripping voltammetry (to yield femtomole detection limits) and with an efficient magnetic removal of nonhybridized nucleic acids to offer high sensitivity and selectivity. The protocol is illustrated for the simultaneous detection of three DNA sequences related to the BCRA1 breast-cancer gene in a single sample in connection to magnetic beads bearing the corresponding oligonucleotide probes. The new electrochemical coding is expected to bring new capabilities for DNA diagnostics, and for bioanalysis, in general.
---
paper_title: Tunable Conjugated Polymers for Bacterial Differentiation
paper_content:
A novel rapid method for bacterial differentiation is explored based on the specific adhesion pattern of bacteria to tunable polymer surfaces. Different types of counter ions were used to electrochemically fabricate dissimilar polypyrrole (PPy) films with diverse physicochemical properties such as hydrophobicity, thickness and roughness. In order to expand the number of individual sensors in the array, three different redox states (as fabricated, oxidised and reduced) of each PPy film were also employed. These dissimilar PPy surfaces were exposed to five different bacteria, Deinococcus proteolyticus, Staphylococcus epidermidis, Alcaligenes faecalis, Pseudomonas fluorescens and Serratia marcescens, which were seeded onto the various PPy surfaces. Fluorescent microscope images were taken and used to quantify the number of cells adhering to the surfaces. Generally, the number of cells of a particular bacterial strain that adhered varied when exposed to dissimilar polymer surfaces, due to the effects of the surface properties of the polymer on bacterial attachment. Similarly, the number of cells that adhered varied with different bacteria exposed to the same surface, reflecting the different surface properties of the bacteria. Statistical analysis and principal component analysis showed that each strain had its own specific adhesion pattern with respect to the array of PPy surfaces. Hence, these bacteria could be discriminated by this simple label-free method. In summary, this provides a proof-of-concept for using the specific adhesion properties of bacteria in conjunction with tunable polymer arrays and pattern recognition as a method for rapid bacterial identification in situ.
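The discrimination step described here amounts to projecting each strain's adhesion-count profile across the polymer surfaces onto a few principal components and looking for separation; a generic sketch of that projection with made-up counts (not the study's data) is shown below.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X (one bacterial strain per row, one PPy surface per column)
    onto the first principal components of the mean-centred data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, S**2 / (S**2).sum()

# Made-up adhered-cell counts: 5 strains x 9 polypyrrole surfaces.
rng = np.random.default_rng(7)
profiles = rng.poisson(lam=[20, 35, 50, 15, 40, 60, 25, 30, 45], size=(5, 9)).astype(float)
scores, explained = pca_project(profiles)
for strain, xy in zip("ABCDE", scores):
    print(f"strain {strain}: PC1={xy[0]:7.2f}  PC2={xy[1]:7.2f}")
print("variance explained by PC1, PC2:", np.round(explained[:2], 2))
```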
---
paper_title: New materials for electrochemical sensing VI: Carbon nanotubes
paper_content:
Carbon nanotubes (CNTs) combine in a unique way high electrical conductivity, high chemical stability and extremely high mechanical strength. These special properties of both single-wall (SW) and multi-wall (MW) CNTs have attracted the interest of many researchers in the field of electrochemical sensors. This article demonstrates the latest advances and future trends in producing, modifying, characterizing and integrating CNTs into electrochemical sensing systems. ::: ::: CNTs can be either used as single probes after formation in situ or even individually attached onto a proper transducing surface after synthesis. Both SWCNTs and MWCNTs can be used to modify several electrode surfaces in either vertically oriented “nanotube forests” or even a non-oriented way. They can be also used in sensors after mixing them with a polymer matrix to form CNT composites. ::: ::: We discuss novel applications of CNTs in electrochemical sensors, including enzyme-based biosensors, DNA sensors and immunosensors, and propose future challenges and applications.
---
paper_title: Conductive polymer-based sensors for biomedical applications.
paper_content:
A class of organic polymers, known as conducting polymers (CPs), has become increasingly popular due to its unique electrical and optical properties. Material characteristics of CPs are similar to those of some metals and inorganic semiconductors, while retaining polymer properties such as flexibility, and ease of processing and synthesis, generally associated with conventional polymers. Owing to these characteristics, research efforts in CPs have gained significant traction to produce several types of CPs since its discovery four decades ago. CPs are often categorised into different types based on the type of electric charges (e.g., delocalized pi electrons, ions, or conductive nanomaterials) responsible for conduction. Several CPs are known to interact with biological samples while maintaining good biocompatibility and hence, they qualify as interesting candidates for use in a numerous biological and medical applications. In this paper, we focus on CP-based sensor elements and the state-of-art of CP-based sensing devices that have potential applications as tools in clinical diagnosis and surgical interventions. Representative applications of CP-based sensors (electrochemical biosensor, tactile sensing 'skins', and thermal sensors) are briefly discussed. Finally, some of the key issues related to CP-based sensors are highlighted.
---
paper_title: Quantum Dots for Sensing
paper_content:
Quantum confinement has become a powerful tool for creating new materials with extraordinary properties. Since the 1980s, quantum effects in materials have become relevant as the scientific community has focused its attention on ever smaller devices. When a certain particle scale is crossed, quantum confinement effects start to play a relevant role in the macroscopic properties of matter. Since their beginning, quantum-confined structures have been widely used in optoelectronic device technology rather than in sensor applications. Nevertheless, sensor applications based on quantum dots have experienced a real boost thanks to semiconductor nanocrystals. The possibility of having high-quality, industrially scaled-up, biocompatible quantum dot nanocrystals has represented a real breakthrough in the biological and medical fields. Quantum dots significantly improve sensing tools in applications such as cellular assays, cancer detection, or DNA sequencing. This chapter summarizes the state of the art of the use of quantum dots in the sensor field.
---
paper_title: Quantum-Dot/Aptamer-Based Ultrasensitive Multi-Analyte Electrochemical Biosensor
paper_content:
The coupling of aptamers with the coding and amplification features of inorganic nanocrystals is shown for the first time to offer a highly sensitive and selective simultaneous bioelectronic detection of several protein targets. This is accomplished in a single-step displacement assay in connection to a self-assembled monolayer of several thiolated aptamers conjugated to proteins carrying different inorganic nanocrystals. Electrochemical stripping detection of the nondisplaced nanocrystal tracers results in a remarkably low (attomole) detection limit, that is, significantly lower than those of existing aptamer biosensors. The new device offers great promise for measuring a large panel of disease markers present at ultralow levels during early stages of the disease progress.
---
paper_title: Magnetic Particles Coupled to Disposable Screen Printed Transducers for Electrochemical Biosensing
paper_content:
Ultrasensitive biosensing is currently a growing demand that has led to the development of numerous strategies for signal amplification. In this context, the unique properties of magnetic particles; both of nano- and micro-size dimensions; have proved to be promising materials to be coupled with disposable electrodes for the design of cost-effective electrochemical affinity biosensing platforms. This review addresses, through discussion of selected examples, the way that nano- and micro-magnetic particles (MNPs and MMPs; respectively) have contributed significantly to the development of electrochemical affinity biosensors, including immuno-, DNA, aptamer and other affinity modes. Different aspects such as type of magnetic particles, assay formats, detection techniques, sensitivity, applicability and other relevant characteristics are discussed. Research opportunities and future development trends in this field are also considered.
---
paper_title: Screen-Printed Electrodes Modified with Carbon Nanomaterials: A Comparison among Carbon Black, Carbon Nanotubes and Graphene
paper_content:
In this work a comparative study using Screen-Printed Electrodes (SPEs) modified by drop casting with Carbon Black, Single Walled Carbon Nanotubes-COOH, Graphene Oxide, and reduced Graphene Oxide is reported. The carbon nanomaterials employed were characterized by X-ray photoelectron and Raman spectroscopy, while the modified SPEs have been morphologically and electrochemically characterized. Nanoengineered SPEs have been tested with ferricyanide, NADH, ascorbic acid and cysteine. We observed valuable electroanalytical performances of Carbon Black, with the advantages of being i) cost-effective, ii) suitable for obtaining stable and homogeneous dispersions, and iii) mass-producible following a well-established route.
---
paper_title: Gold Nanoparticles in Chemical and Biological Sensing
paper_content:
Detection of chemical and biological agents plays a fundamental role in biomedical, forensic and environmental sciences1–4 as well as in anti bioterrorism applications.5–7 The development of highly sensitive, cost effective, miniature sensors is therefore in high demand which requires advanced technology coupled with fundamental knowledge in chemistry, biology and material sciences.8–13 In general, sensors feature two functional components: a recognition element to provide selective/specific binding with the target analytes and a transducer component for signaling the binding event. An efficient sensor relies heavily on these two essential components for the recognition process in terms of response time, signal to noise (S/N) ratio, selectivity and limits of detection (LOD).14,15 Therefore, designing sensors with higher efficacy depends on the development of novel materials to improve both the recognition and transduction processes. Nanomaterials feature unique physicochemical properties that can be of great utility in creating new recognition and transduction processes for chemical and biological sensors15–27 as well as improving the S/N ratio by miniaturization of the sensor elements.28 Gold nanoparticles (AuNPs) possess distinct physical and chemical attributes that make them excellent scaffolds for the fabrication of novel chemical and biological sensors (Figure 1).29–36 First, AuNPs can be synthesized in a straightforward manner and can be made highly stable. Second, they possess unique optoelectronic properties. Third, they provide high surface-to-volume ratio with excellent biocompatibility using appropriate ligands.30 Fourth, these properties of AuNPs can be readily tuned varying their size, shape and the surrounding chemical environment. For example, the binding event between recognition element and the analyte can alter physicochemical properties of transducer AuNPs, such as plasmon resonance absorption, conductivity, redox behavior, etc. that in turn can generate a detectable response signal. Finally, AuNPs offer a suitable platform for multi-functionalization with a wide range of organic or biological ligands for the selective binding and detection of small molecules and biological targets.30–32,36 Each of these attributes of AuNPs has allowed researchers to develop novel sensing strategies with improved sensitivity, stability and selectivity. In the last decade of research, the advent of AuNP as a sensory element provided us a broad spectrum of innovative approaches for the detection of metal ions, small molecules, proteins, nucleic acids, malignant cells, etc. in a rapid and efficient manner.37 (Figure 1: Physical properties of AuNPs and schematic illustration of an AuNP-based detection system.) In this current review, we have highlighted the several synthetic routes and properties of AuNPs that make them excellent probes for different sensing strategies. Furthermore, we will discuss various sensing strategies and major advances in the last two decades of research utilizing AuNPs in the detection of variety of target analytes including metal ions, organic molecules, proteins, nucleic acids, and microorganisms.
---
paper_title: Methods and applications of antibody microarrays in cancer research
paper_content:
Antibody microarrays have great potential for significant value in biological research. Cancer research in particular could benefit from the unique experimental capabilities of this technology. This article examines the current state of antibody microarray technological developments and assay formats, along with a review of the demonstrated applications to cancer research. Work is ongoing in the refinement of various aspects of the protocols and the development of robust methods for routine use. Antibody microarray experimental formats can be broadly categorized into two classes: (1) direct labeling experiments, and (2) dual antibody sandwich assays. In the direct labeling method, the covalent labeling of all proteins in a complex mixture provides a means for detecting bound proteins after incubation on an antibody microarray. If proteins are labeled with a tag, such as biotin, the signal from bound proteins can be amplified. In the sandwich assay, proteins captured on an antibody microarray are detected by a cocktail of detection antibodies, each antibody matched to one of the spotted antibodies. Each format has distinct advantages and disadvantages. Several applications of antibody arrays to cancer research have been reported, including the analysis of proteins in blood serum, resected frozen tumors, cell lines, and on membranes of blood cells. These demonstrations clearly show the utility of antibody microarrays for cancer research and signal the imminent expansion of this platform to many areas of biological research.
---
paper_title: Designing label-free electrochemical immunosensors for cytochrome c using nanocomposites functionalized screen printed electrodes
paper_content:
We have designed here a label-free direct electrochemical immunosensor for the detection of cytochrome c (cyt c), a heme containing metalloprotein using its specific monoclonal antibody. Two nanocomposite-based electrochemical immunosensor platforms were evaluated for the detection of cyt c; (i) self-assembled monolayer (SAM) on gold nanoparticles (GNP) in polypyrrole (PPy) grafted screen printed electrodes (SPE) and (ii) carbon nanotubes (CNT) integrated PPy/SPE. The nanotopologies of the modified electrodes were confirmed by scanning electron microscopy. Electrochemical impedance spectroscopy and cyclic voltammetry were employed to monitor the stepwise fabrication of the nanocomposite immunosensor platforms. In the present method, the label-free quantification of cyt c is based on the direct electron transfer between Fe (III)/Fe (II)-heme redox active site of cyt c selectively bound to anti-cyt c nanocomposite modified SPE. GNP/PPy and CNT/PPy nanocomposites promoted the electron transportation through the conductive pore channels. The overall analytical performance of GNP/PPy based immunosensor (detection limit 2 nM; linear range: 2 nM to 150 µM) was better than the anti-cyt c/CNT/PPy (detection limit 10 nM; linear range: 10 nM to 50 µM). Further, the measurement of cyt c release in cell lysates of cardiomyocytes using the GNP/PPy based immunosensor gave an excellent correlation with standard ELISA.
---
paper_title: Electrochemical biosensors based on nanomodified screen-printed electrodes: Recent applications in clinical analysis
paper_content:
Abstract This review addresses recent advances in the development of screen-printed electrode based biosensors modified with different nanomaterials such as carbon nanotubes, graphene, metallic nanoparticles as gold, silver and magnetic nanoparticles, and mediator nanoparticles (Prussian Blue, Cobalt Phthalocyanine, etc.), coupled with biological recognition elements such as enzymes, antibodies, DNA and aptamers to obtain probes with improved analytical features. Examples of clinical applications are illustrated, together with examples of paper-based electrochemical devices, of multiple detections using arrays of screen printed electrodes, and of the most recent developments in the field of wearable biosensors. Also the use of smartphones as final detectors is briefly depicted.
---
paper_title: A novel electrochemical sensing strategy for rapid and ultrasensitive detection of Salmonella by rolling circle amplification and DNA-AuNPs probe.
paper_content:
A novel electrochemical sensing strategy was developed for ultrasensitive and rapid detection of Salmonella by combining rolling circle amplification (RCA) with a DNA-AuNPs probe. The target DNA could be specifically captured by probe 1 on the sensing interface. Then the circularization mixture was added to form a typical sandwich structure. In the presence of dNTPs and phi29 DNA polymerase, the RCA was initiated to produce micrometer-long single-stranded DNA. Finally, the detection probe (DNA-AuNPs) could recognize the RCA product to produce an enzymatic electrochemical signal. Under optimal conditions, the calibration curve of synthetic target DNA had good linearity from 10 aM to 10 pM with a detection limit of 6.76 aM (S/N=3). The developed method was successfully applied to detect Salmonella at levels as low as 6 CFU mL(-1) in a real milk sample. This proposed strategy showed great potential for clinical diagnosis, food safety and environmental monitoring.
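As an illustrative aside (not part of the paper above), a log-linear calibration of this kind is usually obtained by least-squares fitting of the signal against log10(concentration), with the detection limit taken where the signal exceeds the blank by three times its standard deviation (the S/N = 3 criterion). The sketch below uses entirely hypothetical currents and blank values over the same 10 aM to 10 pM window; it is a minimal example, not a reconstruction of the authors' data processing.

    import math

    # Hypothetical calibration points: concentration (mol/L) vs. peak current (uA)
    conc = [1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11]
    curr = [0.21, 0.35, 0.52, 0.68, 0.83, 0.99, 1.15]
    blank_mean, blank_sd = 0.05, 0.015   # hypothetical blank signal and its standard deviation

    logc = [math.log10(c) for c in conc]
    n = len(logc)
    mx, my = sum(logc) / n, sum(curr) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(logc, curr)) / sum((x - mx) ** 2 for x in logc)
    intercept = my - slope * mx

    # S/N = 3 criterion: lowest concentration whose expected signal exceeds blank + 3*SD
    y_lod = blank_mean + 3 * blank_sd
    lod = 10 ** ((y_lod - intercept) / slope)
    print(f"signal = {intercept:.2f} + {slope:.3f}*log10(C);  estimated LOD ~ {lod:.1e} M")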
---
paper_title: An electrochemical genosensor for Salmonella typhi on gold nanoparticles-mercaptosilane modified screen printed electrode.
paper_content:
In this work, we fabricated a system integrating a self-assembled layer of the organosilane 3-mercaptopropyltrimethoxysilane (MPTS) on a screen printed electrode (SPE) with electrochemically deposited gold nanoparticles for Salmonella typhi detection, employing the Vi gene as a molecular marker. A thiolated DNA probe was immobilized on the gold nanoparticle (AuNP) modified SPE for the DNA hybridization assay using methylene blue as the redox (electroactive) hybridization indicator, and the signal was monitored by the differential pulse voltammetry (DPV) method. The modified SPE was characterized by cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS), and atomic force microscopy (AFM). The DNA biosensor showed excellent performance with high sensitivity and good selectivity. The current response was linear with target sequence concentrations ranging from 1.0 × 10(-11) to 0.5 × 10(-8) M and the detection limit was found to be 50 (±2.1) pM. The DNA biosensor showed good ability to discriminate one-base, two-base and three-base mismatched sequences. The fabricated genosensor could also be regenerated easily and reused three to four times for further hybridization studies.
---
paper_title: Aptamer-based viability impedimetric sensor for bacteria.
paper_content:
The development of an aptamer-based viability impedimetric sensor for bacteria (AptaVISens-B) is presented. Highly specific DNA aptamers to live Salmonella typhimurium were selected via the cell-systematic evolution of ligands by exponential enrichment (SELEX) technique. Twelve rounds of selection were performed, each comprising a positive selection step against viable S. typhimurium and a negative selection step against heat-killed S. typhimurium and a mixture of related pathogens, including Salmonella enteritidis, Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, and Citrobacter freundii, to ensure the species specificity of the selected aptamers. The DNA sequence showing the highest binding affinity to the bacteria was further integrated into an impedimetric sensor via self-assembly onto a gold nanoparticle-modified screen-printed carbon electrode (GNP-SPCE). Remarkably, this aptasensor is highly selective and can successfully detect S. typhimurium down to 600 CFU mL(-1) (equivalent to 18 live cells in 30 μL of assay volume) and distinguish it from other Salmonella species, including S. enteritidis and S. choleraesuis. This report is envisaged to open a new avenue for the aptamer-based viability sensing of a variety of microorganisms, particularly viable but nonculturable (VBNC) bacteria, using a rapid, economic, and label-free electrochemical platform.
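The cell-count equivalence quoted above follows directly from the assay volume; a one-line check using only the figures given in the abstract:

    cfu_per_ml = 600            # reported detection limit (CFU/mL)
    assay_volume_ml = 0.030     # 30 microlitre assay volume
    print(cfu_per_ml * assay_volume_ml)   # -> 18.0 live cells per assay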
---
paper_title: Signal amplification technology based on entropy-driven molecular switch for ultrasensitive electrochemical determination of DNA and Salmonella typhimurium
paper_content:
Abstract A methodology based on an entropy-driven molecular switch signal amplification strategy has been developed for the ultrasensitive detection of DNA. A gold electrode modified with gold nanoparticles was used to immobilize the capture hairpin DNA. In the presence of target DNA, the stem of the hairpin DNA was opened once the hybridization reaction occurred between the capture DNA and the target DNA. With the addition of the link DNA, which has more bases complementary to the capture DNA, the link DNA would hybridize with the capture DNA and the target DNA would be displaced in this entropy-driven exchange. The released target DNA could open another hairpin DNA. Through such a cycle the target DNA could be recycled, and more hairpin DNA was opened and more link DNA would hybridize with the capture DNA. When the electrochemical nanoparticle probe, which consisted of the nanoparticles, probe DNA and the electrochemical reagent, was added, the link DNA would hybridize with the probe DNA. As a result, the electrochemical response can be amplified and monitored. In this work, the Salmonella typhimurium aptamer complementary DNA was chosen as a model target, and S. typhimurium was also tested by target-induced strand release technology coupled with the entropy-driven molecular switch signal amplification strategy. The electrochemical sensor demonstrated excellent sensing performance, such as an ultralow detection limit (0.3 fmol L^-1 for DNA and 13 cfu mL^-1 for S. typhimurium) and high specificity, indicating that it is highly promising to provide a sensitive, selective, cost-effective, and convenient approach for DNA and S. typhimurium detection.
---
paper_title: Diazonium-based impedimetric aptasensor for the rapid label-free detection of Salmonella typhimurium in food sample.
paper_content:
Fast and accurate detection of microorganisms is of key importance in clinical analysis and in food and water quality monitoring. Salmonella typhimurium is responsible for about a third of all cases of foodborne diseases and consequently, its fast detection is of great importance for ensuring the safety of foodstuffs. We report the development of a label-free impedimetric aptamer-based biosensor for S. typhimurium detection. The aptamer biosensor was fabricated by grafting a diazonium-supporting layer onto screen-printed carbon electrodes (SPEs), via electrochemical or chemical approaches, followed by chemical immobilisation of aminated-aptamer. FTIR-ATR, contact angle and electrochemical measurements were used to monitor the fabrication process. Results showed that electrochemical immobilisation of the diazonium-grafting layer allowed the formation of a denser aptamer layer, which resulted in higher sensitivity. The developed aptamer-biosensor responded linearly, on a logarithm scale, over the concentration range 1 × 10(1) to 1 × 10(8)CFU mL(-1), with a limit of quantification (LOQ) of 1 × 10(1) CFU mL(-1) and a limit of detection (LOD) of 6 CFU mL(-1). Selectivity studies showed that the aptamer biosensor could discriminate S. typhimurium from 6 other model bacteria strains. Finally, recovery studies demonstrated its suitability for the detection of S. typhimurium in spiked (1 × 10(2), 1 × 10(4) and 1 × 10(6) CFU mL(-1)) apple juice samples.
---
paper_title: Phagomagnetic Separation and Electrochemical Magneto-Genosensing of Pathogenic Bacteria
paper_content:
This paper addresses the use of bacteriophages immobilized on magnetic particles for the biorecognition of the pathogenic bacteria, followed by electrochemical magneto-genosensing of the bacteria. The P22 bacteriophage specific to Salmonella (serotypes A, B, and D1) is used as a model. The bacteria are captured and preconcentrated by the bacteriophage-modified magnetic particles through the host interaction with high specificity and efficiency. DNA amplification of the captured bacteria is then performed by double-tagging polymerase chain reaction (PCR). Further detection of the double-tagged amplicon is achieved by electrochemical magneto-genosensing. The strategy is able to detect in 4 h as low as 3 CFU mL(-1) of Salmonella in Luria-Bertani (LB) media. This approach is compared with conventional culture methods and PCR-based assay, as well as with immunological screening assays for bacteria detection, highlighting the outstanding stability and cost-efficient and animal-free production of bacteriophages as biorecognition element in biosensing devices.
---
paper_title: Sensitive electrochemical detection of Salmonella with chitosan–gold nanoparticles composite film
paper_content:
Abstract An ultrasensitive electrochemical immunosensor for the detection of Salmonella has been developed based on high-density gold nanoparticles (GNPs) well dispersed in a chitosan hydrogel used to modify a glassy carbon electrode. The composite film was oxidized in NaCl solution and used as a platform for the immobilization of the capture antibody (Ab1) for biorecognition. After incubation in a Salmonella suspension and a horseradish peroxidase (HRP) conjugated secondary antibody (Ab2) solution, a sandwich electrochemical immunosensor was constructed. The electrochemical signal was improved with the composite film compared with a plain chitosan film. The results showed that the constructed sensor provides a wide linear range from 10 to 10^5 CFU/mL with a low detection limit of 5 CFU/mL (at a signal-to-noise ratio, S/N, of 3:1). Furthermore, the proposed immunosensor demonstrated good selectivity and reproducibility, which indicates its potential in the clinical diagnosis of Salmonella contamination.
---
paper_title: Electrochemical Aptasensor for Rapid and Sensitive Determination of Salmonella Based on Target-Induced Strand Displacement and Gold Nanoparticle Amplification
paper_content:
Abstract A simple, rapid, and sensitive electrochemical aptasensor based on target-induced strand displacement and gold nanoparticle amplification was developed for the determination of Salmonella. The aptamer for Salmonella was captured on the sensing interface by hybridizing with a capture probe. In the presence of Salmonella, the aptamer dissociated from the capture probe owing to the stronger interaction between the aptamer and the Salmonella. The single-stranded capture probe was then hybridized with the biotinylated detection probe assembled on gold nanoparticles and catalyzed by streptavidin–alkaline phosphatase, providing a sensitive electrochemical response. The gold nanoparticles significantly amplified the detection probe signal and increased the sensitivity. The linear dynamic range was from 2 × 10^1 to 2 × 10^6 CFU mL^-1 with a detection limit of 20 CFU mL^-1. This strategy was utilized for the determination of Salmonella in milk, demonstrating potential application in clinical diagnostics, food safety, a...
---
paper_title: An aptamer-based electrochemical biosensor for the detection of Salmonella.
paper_content:
Salmonella is one of the most common causes of food-associated disease. An electrochemical biosensor was developed for Salmonella detection using a Salmonella-specific recognition aptamer. The biosensor was based on a glassy carbon electrode modified with graphene oxide and gold nanoparticles. Then, the aptamer ssDNA sequence could be linked to the electrode. Each assembly step was accompanied by changes to the electrochemical parameters. After incubation of the modified electrode with Salmonella, the electrochemical properties between the electrode and the electrolyte changed accordingly. The electrochemical impedance spectrum was measured to quantify the Salmonella. The results revealed that, when more Salmonella were added to the reaction system, the current between the electrode and electrolyte decreased; in other words, the impedance gradually increased. A detection limit as low as 3 cfu/mL was obtained. This novel method is specific and fast, and it has the potential for real sample detection.
---
paper_title: ELIME assay vs Real-Time PCR and conventional culture method for an effective detection of Salmonella in fresh leafy green vegetables.
paper_content:
The detection of Salmonella according to EC regulation is still primarily based on traditional microbiological culture methods that may take several days to complete. The purpose of this work is to demonstrate the applicability of an Enzyme-Linked-Immuno-Magnetic-Electrochemical (ELIME) assay, recently developed by our research group for the detection of salmonella in irrigation water, to fresh (raw and ready-to-eat) leafy green vegetables by comparison with Real-Time PCR (RTi-PCR) and ISO culture methods. Since vegetables represent a more complex matrix than irrigation water, preliminary experiments were carried out on two leafy green vegetables that tested negative for salmonella by the ISO method. 25 g of these samples were experimentally inoculated with 1-10 CFU of S. Napoli or S. Thompson and pre-enriched for 20 h in two different broths. At this time aliquots were taken, concentrated at different levels by centrifugation, and analyzed by ELIME and RTi-PCR. Once the best culture medium for salmonella growth and the optimal concentration factor (chosen to reduce the sample matrix effect and enhance the output signal) had been selected, several raw and ready-to-eat leafy green vegetables were artificially inoculated and pre-enriched. Aliquots were then taken at different incubation times and analyzed with both techniques. The results showed that 20 and 8 h of pre-enrichment were required to allow the target salmonella (1-10 CFU/25 g) to multiply to a concentration detectable by the ELIME and RTi-PCR assays, respectively. A confirmation with the ISO culture method was carried out. Based on the available literature, this is the first report of the application of an ELISA-based method for the detection of Salmonella in vegetables.
---
paper_title: Rapid detection of Escherichia coli O157:H7 and Salmonella Typhimurium in foods using an electrochemical immunosensor based on screen-printed interdigitated microelectrode and immunomagnetic separation
paper_content:
Abstract Foodborne pathogens have continuously been a serious food safety issue and there is a growing demand for a rapid and sensitive method to screen for pathogens in on-line or in-field applications. Therefore, an impedimetric immunosensor based on the use of magnetic beads (MBs) for separation and a screen-printed interdigitated microelectrode (SP-IDME) for measurement was studied for the rapid detection of Escherichia coli O157:H7 and Salmonella Typhimurium in foods. Streptavidin-coated MBs were functionalized with the corresponding biotinylated antibodies (Ab) to capture the target bacteria. Glucose oxidase (GOx)–Ab conjugates were employed to label the MBs–Ab–cell complexes. The resulting MBs–Ab–cell–Ab–GOx biomass was mixed with the glucose solution to trigger an enzymatic reaction which produced gluconic acid. This increased the ionic strength of the solution, thus decreasing the impedance of the solution measured on the SP-IDME. Our results showed that the immunosensor was capable of specifically detecting E. coli O157:H7 and S. Typhimurium within the range of 10^2–10^6 cfu ml^-1 in pure culture samples. E. coli O157:H7 in ground beef and S. Typhimurium in chicken rinse water were also examined. The limits of detection (LODs) for the two bacteria in foods were 2.05×10^3 cfu g^-1 and 1.04×10^3 cfu ml^-1, respectively. This immunosensor required only a bare electrode to measure the impedance changes, and no surface modification of the electrode was needed. It was low-cost, reproducible, easy to operate, and easy to preserve. All these merits demonstrate that this immunosensor has great potential for the rapid and on-site detection of pathogenic bacteria in foods.
---
paper_title: Immunomagnetic separation of Salmonella with tailored magnetic micro and nanocarriers. A comparative study.
paper_content:
This paper addresses a comparative study of immunomagnetic separation of Salmonella using micro- and nano-sized magnetic carriers. In this approach, nano- (300 nm) and micro-sized (2.8 μm) magnetic particles were modified with anti-Salmonella antibody to pre-concentrate the bacteria from the samples through an immunological reaction. The performance of the immunomagnetic separation on the different magnetic carriers was evaluated using classical culturing, confocal and scanning electron microscopy to study the binding pattern, as well as a magneto-actuated immunosensor with electrochemical read-out for the rapid detection of the bacteria in spiked milk samples. In this approach, a second polyclonal antibody labeled with peroxidase as the electrochemical reporter was used. The magneto-actuated electrochemical immunosensor was able to clearly distinguish between foodborne pathogenic bacteria such as Salmonella enterica and Escherichia coli, showing a limit of detection (LOD) as low as 538 CFU mL(-1) and 291 CFU mL(-1) for the magnetic micro- and nanocarriers, respectively, in whole milk, although the magnetic nanoparticles showed a noticeably higher matrix effect and higher agglomeration. These LODs were achieved in a total assay time of 1 h without any previous culturing pre-enrichment step. If the samples were pre-enriched for 8 h, the magneto immunosensor based on the magnetic nanoparticles was able to detect as low as 1 CFU in 25 mL of milk (0.04 CFU mL(-1)).
---
paper_title: Rapid detection of Salmonella using a redox cycling-based electrochemical method
paper_content:
Abstract An electrochemical method based on redox cycling combined with immunomagnetic separation and pre-concentration was developed for rapid and sensitive detection of Salmonella. Electrochemical methods for the detection of bacteria offer the advantages of instant quantification with minimal equipment. Unfortunately, the limits of detection are often poor compared with other transduction methods such as fluorescence and chemiluminescence. We demonstrated an electrochemical method that is both rapid and has a low limit of detection. A two-step strategy, which included immunomagnetic pre-concentration and redox cycling, was used to amplify the signal. Magnetic beads modified with anti-Salmonella antibodies were used for separation and pre-concentration of Salmonella from phosphate buffered saline (PBS) and agricultural water. Then anti-Salmonella antibodies conjugated with alkaline phosphatase were employed for labeling the Salmonella which had been captured by the magnetic beads. Alkaline phosphatase (ALP) catalyzed the conversion of the substrate L-ascorbic acid 2-phosphate (AAP) to the electroactive species L-ascorbic acid (AA), while tris(2-carboxyethyl)phosphine (TCEP) facilitated the regeneration of AA on the gold electrode to form a redox cycle resulting in an amplified signal. Under the optimal conditions, Salmonella in PBS buffer as well as in agricultural water was detected. The limit of detection of this approach was approximately 7.6 × 10^2 CFU/mL and 6.0 × 10^2 CFU/mL in PBS buffer and agricultural water, respectively, without pre-enrichment in 3 h. When the agricultural water was pre-enriched for 4 h, the limit of detection was approximately 10 CFU/mL.
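The amplification principle described above (each enzymatically generated ascorbic acid molecule can be re-reduced by TCEP and re-oxidised at the electrode many times) can be illustrated with a deliberately simplified charge budget. The sketch below is a toy model with hypothetical quantities and cycle counts; it is not taken from, and does not reproduce, the paper's data.

    # Toy model of redox-cycling amplification: charge collected with and without
    # chemical regeneration of the electroactive product. All numbers hypothetical.
    F = 96485                  # Faraday constant, C/mol
    n_electrons = 2            # electrons per ascorbic acid oxidation
    product_mol = 1e-12        # hypothetical moles of AA produced by ALP during the assay
    cycles = 25                # hypothetical number of TCEP-driven regeneration cycles

    q_single = n_electrons * F * product_mol
    q_cycled = q_single * cycles
    print(f"without cycling: {q_single:.2e} C, with cycling: {q_cycled:.2e} C (~{cycles}x gain)")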
---
paper_title: Electrochemical immunoassay for Salmonella Typhimurium based on magnetically collected Ag-enhanced DNA biobarcode labels
paper_content:
We describe a sensitive electrochemical immunoassay for Salmonella enterica serovar Typhimurium, a common foodborne pathogen which can cause infection at extremely small doses. The assay is based on the recognition of DNA biobarcode labels by differential pulse anodic stripping voltammetry (DPASV), following Ag enhancement. The biobarcodes consist of latex spheres (mean diameter 506 nm ± 22 nm) modified by ferromagnetic Fe3O4 particles. Each biobarcode is loaded by adsorption with approx. 27 molecules of mouse monoclonal antibody against S. Typhimurium and 3.5 × 10(5) molecules of 12 mer ssDNA. The assay is performed by adding the biobarcode, S. Typhimurium cells, and biotin-conjugated rabbit polyclonal antibody against Salmonella into well plates. After antigen-antibody binding, magnetic collection enables the excess polyclonal antibody to be washed off. Exposure to avidin-coated screen printed electrodes, and formation of the avidin-biotin bond, then enables the excess biobarcode to be removed. The biobarcode remaining on the electrode is quantified by DPASV measurement of Ag(+) ions following catalytic Ag deposition. The assay showed a negligible response to 10(7) CFU mL(-1)E. coli and had a limit of detection of 12 CFU mL(-1) in buffer, and 13 to 26 CFU mL(-1) for heat-killed and whole cell S. Typhimurium in plain milk, green bean sprouts and raw eggs. To the best of our knowledge, this is the lowest reported limit of detection for Salmonella by an electrochemical immunoassay not requiring sample pre-enrichment.
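A quick check of the label amplification implied by the loading figures quoted above (about 27 antibodies and 3.5 × 10(5) barcode strands per sphere), treating each adsorbed antibody as one potential capture site:

    ssDNA_per_sphere = 3.5e5
    antibodies_per_sphere = 27
    print(ssDNA_per_sphere / antibodies_per_sphere)   # ~1.3e4 barcode strands per antibody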
---
paper_title: Development and evaluation of an ELIME assay to reveal the presence of Salmonella in irrigation water: Comparison with Real-Time PCR and the Standard Culture Method
paper_content:
Abstract A reliable, low-cost and easy-to-use ELIME (Enzyme-Linked-Immuno-Magnetic-Electrochemical) assay for detection of Salmonella enterica in irrigation water is presented. Magnetic beads (MBs), coupled to a strip of eight-magnetized screen-printed electrodes localized at the bottom of eight wells (8-well/SPE strip), effectively supported a sandwich immunological chain. Enzymatic by-product is quickly measured by chronoamperometry, using a portable instrument. With the goal of developing a method able to detect a wide range of Salmonella serotypes, including S. Napoli and S. Thompson strains responsible for various community alerts, different kinds of MBs, antibodies and blocking agents were tested. The final system employs MBs coated with a broad reactivity monoclonal antibody anti-salmonella and blocked with dry milk. For a simple and rapid assay these two steps were performed in a preliminary phase, while the two sequential incubations for the immuno-recognition events were merged in a single step of 1 h. In parallel a Real-Time PCR (RTi-PCR) method, based on a specific locked nucleic acid (LNA) fluorescent probe and an internal amplification control (IAC), was carried out. The selectivity of the ELIME and RTi-PCR assays was proved by inclusivity and exclusivity tests performed analyzing different Salmonella serotypes and non-target microorganisms, most commonly isolated from environmental sources. Furthermore, both methods were applied to experimentally and not experimentally contaminated irrigation water samples. Results confirmed by the ISO culture method, demonstrated the effectiveness of ELIME and RTi-PCR assays to detect a low number of salmonella cells (1-10 CFU/L) reducing drastically the long analysis time usually required to reveal this pathogen.
---
paper_title: Nanoparticle-Based Bio-Bar Codes for the Ultrasensitive Detection of Proteins
paper_content:
An ultrasensitive method for detecting protein analytes has been developed. The system relies on magnetic microparticle probes with antibodies that specifically bind a target of interest [prostate-specific antigen (PSA) in this case] and nanoparticle probes that are encoded with DNA that is unique to the protein target of interest and antibodies that can sandwich the target captured by the microparticle probes. Magnetic separation of the complexed probes and target followed by dehybridization of the oligonucleotides on the nanoparticle probe surface allows the determination of the presence of the target protein by identifying the oligonucleotide sequence released from the nanoparticle probe. Because the nanoparticle probe carries with it a large number of oligonucleotides per protein binding event, there is substantial amplification and PSA can be detected at 30 attomolar concentration. Alternatively, a polymerase chain reaction on the oligonucleotide bar codes can boost the sensitivity to 3 attomolar. Comparable clinically accepted conventional assays for detecting the same target have sensitivity limits of ∼3 picomolar, six orders of magnitude less sensitive than what is observed with this method.
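The sensitivity gap quoted above can be checked directly from the stated concentrations (3 pM for conventional assays versus 30 aM without, and 3 aM with, PCR on the barcodes):

    import math
    conventional = 3e-12     # ~3 pM, conventional assay limit
    biobarcode   = 30e-18    # 30 aM, bio-bar-code assay without PCR
    with_pcr     = 3e-18     # 3 aM, with PCR on the barcodes
    print(math.log10(conventional / biobarcode))   # ~5 orders of magnitude
    print(math.log10(conventional / with_pcr))     # ~6 orders of magnitude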
---
paper_title: Electrochemical genosensing of Salmonella, Listeria and Escherichia coli on silica magnetic particles.
paper_content:
A magneto-genosensing approach for the detection of the three most common pathogenic bacteria in food safety, namely Salmonella, Listeria and Escherichia coli, is presented. The methodology is based on the detection of tagged amplified DNA obtained by single-tagging PCR with a set of specific primers for each pathogen, followed by electrochemical magneto-genosensing on silica magnetic particles. Sets of primers were selected for the amplification of invA (278 bp), prfA (217 bp) and eaeA (151 bp), with one primer in each set tagged with fluorescein, biotin and digoxigenin, coding for Salmonella enterica, Listeria monocytogenes and E. coli, respectively. The single-tagged amplicons were then immobilized on silica MPs based on the nucleic acid-binding properties of silica particles in the presence of a chaotropic agent such as guanidinium thiocyanate. The assessment of the silica MPs as a platform for electrochemical magneto-genosensing is described, including the main parameters for selectively attaching longer dsDNA fragments instead of shorter ssDNA primers, based on the negative charge density of the sugar-phosphate backbone. This approach proved to be a promising detection tool, with the rapidity and sensitivity required for implementation in DNA biosensors and microfluidic platforms.
---
paper_title: Iron oxide/gold core/shell nanomagnetic probes and CdS biolabels for amplified electrochemical immunosensing of Salmonella typhimurium.
paper_content:
Abstract There is an imminent need for rapid methods to detect and determine pathogenic bacteria in food products as alternatives to the laborious and time-consuming culture procedures. In this work, an electrochemical immunoassay using iron/gold core/shell nanoparticles (Fe@Au) conjugated with anti-Salmonella antibodies was developed. The chemical synthesis and functionalization of magnetic and gold-coated magnetic nanoparticles is reported. Fe@Au nanoparticles were functionalized with different self-assembled monolayers and characterized using ultraviolet-visible spectrometry, transmission electron microscopy, and voltammetric techniques. The determination of Salmonella typhimurium, on screen-printed carbon electrodes, was performed by square-wave anodic stripping voltammetry through the use of CdS nanocrystals. The calibration curve was established between 1×10^1 and 1×10^6 cells/mL and the limit of detection was 13 cells/mL. The developed method showed that it is possible to determine the bacteria in milk at low concentrations and is suitable for the rapid (less than 1 h) and sensitive detection of S. typhimurium in real samples. Therefore, the developed methodology could contribute to the improvement of the quality control of food samples.
---
paper_title: Rapid Immunosensing of Salmonella Typhimurium Using Electrochemical Impedance Spectroscopy: the Effect of Sample Treatment
paper_content:
A label-free immunosensor for rapid detection of Salmonella Typhimurium based on electrochemical impedance spectroscopy was developed. Specific antibody was immobilized on a screen-printed electrode via a cysteamine monolayer activated with glutaraldehyde and the impedance was measured between two gold electrodes. Different procedures for sample treatment (combinations of heat and sonication) were tested and their impact on the assay performance was compared. Atomic force microscopy was used to study the effect of the treatment on the cell shape and to confirm the specific binding of Salmonella to the sensing surface. The immunosensor allowed detection of 1×10^3 CFU mL^-1 in 20 min with negligible interference from other bacteria. A wide linear response was obtained in the range between 10^3 CFU mL^-1 and 10^8 CFU mL^-1. The successful detection of Salmonella in spiked milk demonstrates the suitability of the sensor for the analysis of real samples.
---
paper_title: A label-free electrochemical impedance immunosensor based on AuNPs/PAMAM-MWCNT-Chi nanocomposite modified glassy carbon electrode for detection of Salmonella typhimurium in milk
paper_content:
Abstract A sensitive and stable label-free electrochemical impedance immunosensor for the detection of Salmonella typhimurium was developed by immobilising anti-Salmonella antibodies onto a gold nanoparticle and poly(amidoamine)-multiwalled carbon nanotube-chitosan nanocomposite film modified glassy carbon electrode (AuNPs/PAMAM-MWCNT-Chi/GCE). Electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV) were used to verify the stepwise assembly of the immunosensor. Co-addition of MWCNT, PAMAM and AuNPs greatly enhanced the sensitivity of the immunosensor. The immobilisation of antibodies and the binding of Salmonella cells to the modified electrode increased the electron-transfer resistance (Ret), which was directly measured with EIS using [Fe(CN)6]^(3-/4-) as a redox probe. A linear relationship between Ret and Salmonella concentration was obtained in the Salmonella concentration range of 1.0 × 10^3 to 1.0 × 10^7 CFU mL^-1 with a detection limit of 5.0 × 10^2 CFU mL^-1. Additionally, the proposed method was successfully applied to determine the S. typhimurium content of milk samples with satisfactory results.
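In practice, a charge-transfer resistance (Ret) of this kind is extracted by fitting the measured impedance spectrum to an equivalent circuit, often a Randles-type circuit. The sketch below computes the impedance of a simplified Randles circuit (solution resistance in series with a charge-transfer resistance in parallel with a double-layer capacitance); the component values are hypothetical, and real fits to ferri/ferrocyanide data usually also include a Warburg or constant-phase element.

    import cmath, math

    Rs, Rct, Cdl = 100.0, 5.0e3, 1.0e-6   # ohm, ohm, farad (hypothetical values)

    def impedance(freq_hz):
        """Impedance of Rs in series with (Rct parallel to Cdl) at a given frequency."""
        w = 2 * math.pi * freq_hz
        z_c = 1 / (1j * w * Cdl)
        return Rs + (Rct * z_c) / (Rct + z_c)

    for f in (0.1, 10.0, 1e3, 1e5):
        z = impedance(f)
        print(f"{f:>9.1f} Hz  |Z| = {abs(z):8.1f} ohm  phase = {math.degrees(cmath.phase(z)):6.1f} deg")

At low frequency |Z| approaches Rs + Rct, which is why an increase in Ret upon antibody binding or bacterial capture shows up as a larger low-frequency semicircle in the Nyquist plot.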
---
paper_title: Label-free as-grown double wall carbon nanotubes bundles for Salmonella typhimurium immunoassay
paper_content:
BACKGROUND: A label-free immunosensor based on as-grown double wall carbon nanotube (DW) bundles was developed for detecting Salmonella typhimurium. The immunosensor was fabricated by using the as-grown DW bundles as an electrode material with an anti-Salmonella antibody impregnated on the surface. The immunosensor was electrochemically characterized by cyclic voltammetry. The working potential (100, 200, 300 and 400 mV vs. Ag/AgCl) and the anti-Salmonella concentration (10, 25, 50, 75, and 100 μg/mL) at the electrode were subsequently optimized. Then, chronoamperometry was used with the optimum potential of 100 mV vs. Ag/AgCl and the optimum impregnated anti-Salmonella concentration of 10 μg/mL to detect S. typhimurium cells (0-10(9) CFU/mL). RESULTS: The DW immunosensor exhibited a detection range of 10(2) to 10(7) CFU/mL for the bacteria with a limit of detection of 8.9 CFU/mL according to the IUPAC recommendation. The electrode also showed specificity to S. typhimurium but no current response to Escherichia coli. CONCLUSIONS: These findings suggest that the use of a label-free DW immunosensor is promising for detecting S. typhimurium.
---
paper_title: Rapid Evaluation of Salmonella pullorum Contamination in Chicken Based on a Portable Amperometric Sensor
paper_content:
In this study, anti-Salmonella polyclonal antibodies immobilized on a cellulose nitrate membrane were used to capture Salmonella pullorum (S. pullorum) in biological samples. The rapid evaluation of S. pullorum contamination was based on the analysis of the activity of catalase, a biomarker of this bacterium. After a screen printed electrode (SPE) modified with multi-wall carbon nanotubes (MWCN)-chitosan-peroxidase was connected to a portable self-made amperometric sensor, the determination of S. pullorum contamination was carried out by adding the reaction product, obtained from the hydrogen peroxide dismutation catalyzed by the bacterial catalase, to the reaction area of the SPEs. A working potential of 0.55 V was applied in the sensing system and the current value displayed on the amperometric sensor was used as the detection signal. This method allowed the quantification of S. pullorum with a detection limit of 100 cfu mL^-1 in culture media and chicken samples. The stability, reproducibility and sensitivity of the modified SPE were also investigated. Moreover, successive analyses were conveniently accomplished by replacing the one-off SPE. This portable sensing system is a rapid, cost-effective and straightforward approach for screening S. pullorum contamination in food samples.
---
paper_title: Novel surface antigen based impedimetric immunosensor for detection of Salmonella typhimurium in water and juice samples.
paper_content:
A specific surface antigen, OmpD, has been reported for the first time as a surface biomarker in the development of a selective and sensitive immunosensor for detecting Salmonella typhimurium species. The OmpD surface antigen was extracted from Salmonella typhimurium serovars under the growth conditions optimized for its expression. Anti-OmpD antibodies were generated and used as the detector probe in an immunoassay format on graphene-graphene oxide (G-GO) modified screen printed carbon electrodes. Water samples were spiked with standard Salmonella typhimurium cells, and detection was performed by measuring the change in the impedimetric response of the developed immunosensor to determine the concentration of serovar Salmonella typhimurium. The developed immunosensor was able to specifically detect S. typhimurium in spiked water and juice samples with a sensitivity of up to 10(1) CFU mL(-1), with high selectivity and very low cross-reactivity with other strains. This is the first report on the detection of Salmonella typhimurium species using a specific biomarker, OmpD. The developed technique could be very useful for the detection of nontyphoidal salmonellosis and is also important from an epidemiological point of view.
---
paper_title: A label-free electrochemical DNA biosensor based on covalent immobilization of salmonella DNA sequences on the nanoporous glassy carbon electrode
paper_content:
Abstract Herein, an easy and cost-effective approach to the immobilization of probe was performed. The amino modified salmonella ssDNA probe sequence was covalently linked with carboxylic group on the surface of nanoporous glassy carbon electrode to prepare the DNA biosensor. The differential pulse voltammetry (DPV) and electrochemical impedance spectroscopy (EIS) techniques were used for the determination of salmonella DNA in the concentration ranges of 10–400 pM and 1–400 pM with limits of detection of 2.1 pM and 0.15 pM, respectively.
---
paper_title: Disposable DNA biosensor based on thin-film gold electrodes for selective Salmonella detection
paper_content:
Abstract The development of a disposable electrochemical biosensor for selective Salmonella detection in the presence of other pathogens is described. The device is based on thin-film gold electrodes and is fabricated employing standard microsystems technology. The method involves the immobilization of a thiolated capture probe able to hybridize with its complementary sequence (target). The hybridization event is detected using the ruthenium complex [Ru(NH3)5L]^2+, where L is [3-(2-phenanthren-9-yl-vinyl)-pyridine], as the electrochemical indicator. The combination of MEMS technology to fabricate electrodes with a predetermined configuration and the use of a hybridization redox indicator which interacts preferentially with dsDNA lead to the development of an approach that not only quantifies the complementary target sequence, but is also selective to Salmonella in the presence of other pathogens, which can act as potential interferents. On the basis of these results, a multianalyte detection platform including Salmonella, Listeria and Escherichia coli has been developed.
---
paper_title: Recent Advances in Bacteriophage Based Biosensors for Food-Borne Pathogen Detection
paper_content:
Foodborne diseases are a major health concern that can have severe impact on society and can add tremendous financial burden to our health care systems. Rapid early detection of food contamination is therefore relevant for the containment of food-borne pathogens. Conventional pathogen detection methods, such as microbiological and biochemical identification are time-consuming and laborious, while immunological or nucleic acid-based techniques require extensive sample preparation and are not amenable to miniaturization for on-site detection. Biosensors have shown tremendous promise to overcome these limitations and are being aggressively studied to provide rapid, reliable and sensitive detection platforms for such applications. Novel biological recognition elements are studied to improve the selectivity and facilitate integration on the transduction platform for sensitive detection. Bacteriophages are one such unique biological entity that show excellent host selectivity and have been actively used as recognition probes for pathogen detection. This review summarizes the extensive literature search on the application of bacteriophages (and recently their receptor binding proteins) as probes for sensitive and selective detection of foodborne pathogens, and critically outlines their advantages and disadvantages over other recognition elements.
---
paper_title: Pathogen detection using engineered bacteriophages
paper_content:
Bacteriophages, or phages, are bacterial viruses that can infect a broad or narrow range of host organisms. Knowing the host range of a phage allows it to be exploited in targeting various pathogens. Applying phages for the identification of microorganisms related to food and waterborne pathogens and pathogens of clinical significance to humans and animals has a long history, and there has to some extent been a recent revival in these applications as phages have become more extensively integrated into novel detection, identification, and monitoring technologies. Biotechnological and genetic engineering strategies applied to phages are responsible for some of these new methods, but even natural unmodified phages are widely applicable when paired with appropriate innovative detector platforms. This review highlights the use of phages as pathogen detector interfaces to provide the reader with an up-to-date inventory of phage-based biodetection strategies.
---
paper_title: Application of bacteriophages in sensor development
paper_content:
Bacteriophage-based bioassays are a promising alternative to traditional antibody-based immunoassays. Bacteriophages, shortened to phages, can be easily conjugated or genetically engineered. Phages are robust, ubiquitous in nature, and harmless to humans. Notably, phages do not usually require inoculation and killing of animals; and thus, the production of phages is simple and economical. In recent years, phage-based biosensors have been developed featuring excellent robustness, sensitivity, and selectivity in combination with the ease of integration into transduction devices. This review provides a critical overview of phage-based bioassays and biosensors developed in the last few years using different interrogation methods such as colorimetric, enzymatic, fluorescence, surface plasmon resonance, quartz crystal microbalance, magnetoelastic, Raman, or electrochemical techniques.
---
paper_title: Electrochemical immunosensors for Salmonella detection in food
paper_content:
Pathogen detection is a critical point for the identification and the prevention of problems related to food safety. Failures at detecting contaminations in food may cause outbreaks with drastic consequences to public health. In spite of the real need for obtaining analytical results in the shortest time possible, conventional methods may take several days to produce a diagnosis. Salmonella spp. is the major cause of foodborne diseases worldwide and its absence is a requirement of the health authorities. Biosensors are bioelectronic devices, comprising bioreceptor molecules and transducer elements, able to detect analytes (chemical and/or biological species) rapidly and quantitatively. Electrochemical immunosensors use antibody molecules as bioreceptors and an electrochemical transducer. These devices have been widely used for pathogen detection at low cost. There are four main techniques for electrochemical immunosensors: amperometric, impedimetric, conductometric, and potentiometric. Almost all types of immunosensors are applicable to Salmonella detection. This article reviews the developments and the applications of electrochemical immunosensors for Salmonella detection, particularly the advantages of each specific technique. Immunosensors serve as exciting alternatives to conventional methods, allowing "real-time" and multiple analyses that are essential characteristics for pathogen detection and much desired in health and safety control in the food industry.
---
paper_title: Rapid and sensitive detection of Salmonella Typhimurium on eggshells by using wireless biosensors.
paper_content:
This article presents rapid, sensitive, direct detection of Salmonella Typhimurium on eggshells by using wireless magnetoelastic (ME) biosensors. The biosensor consists of a freestanding, strip-shaped ME resonator as the signal transducer and the E2 phage as the biomolecular recognition element that selectively binds with Salmonella Typhimurium. This ME biosensor is a type of mass-sensitive biosensor that can be wirelessly actuated into mechanical resonance by an externally applied time-varying magnetic field. When the biosensor binds with Salmonella Typhimurium, the mass of the sensor increases, resulting in a decrease in the sensor's resonant frequency. Multiple E2 phage–coated biosensors (measurement sensors) were placed on eggshells spiked with Salmonella Typhimurium of various concentrations (1.6 to 1.6 × 10^7 CFU/cm^2). Control sensors without phage were also used to compensate for environmental effects and nonspecific binding. After 20 min in a humidity-controlled chamber (95%) to allow binding of th...
---
paper_title: Sequential detection of Salmonella typhimurium and Bacillus anthracis spores using magnetoelastic biosensors.
paper_content:
Multiple phage-based magnetoelastic (ME) biosensors were simultaneously monitored for the detection of different biological pathogens that were sequentially introduced to the measurement system. The biosensors were formed by immobilizing phage and 1 mg/ml BSA (blocking agent) onto the magnetoelastic resonator’s surface. The detection system included a reference sensor as a control, an E2 phage-coated sensor specific to S. typhimurium, and a JRB7 phage-coated sensor specific to B. anthracis spores. The sensors were free standing during the test, being held in place by a magnetic field. Upon sequential exposure to single pathogenic solutions, only the biosensor coated with the corresponding specific phage responded. As the cells/spores were captured by the specific phage-coated sensor, the mass of the sensor increased, resulting in a decrease in the sensor’s resonance frequency. Additionally, non-specific binding was effectively eliminated by BSA blocking and was verified by the reference sensor, which showed no frequency shift. Scanning electron microscopy was used to visually verify the interaction of each biosensor with its target analyte. The results demonstrate that multiple magnetoelastic sensors may be simultaneously monitored to detect specifically targeted pathogenic species with good selectivity. This research is the first stage of an ongoing effort to simultaneously detect the presence of multiple pathogens in a complex analyte.
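The mass-to-frequency transduction described in the two abstracts above is commonly approximated, for a small mass load spread uniformly over a freestanding magnetoelastic strip, by delta_f ~ -(f0 / (2M)) * delta_m, where f0 is the unloaded resonant frequency and M the sensor mass. The numbers in the sketch below are hypothetical and chosen only to show the order of magnitude of the shift; they are not taken from either paper.

    # First-order mass-loading approximation for a magnetoelastic resonator.
    # All values hypothetical, for illustration only.
    f0 = 450e3         # Hz, unloaded resonant frequency
    M = 50e-9          # kg, sensor mass (50 micrograms)
    delta_m = 100e-12  # kg, bound bacterial mass (100 nanograms)

    delta_f = -(f0 / (2 * M)) * delta_m
    print(f"predicted resonant-frequency shift: {delta_f:.0f} Hz")   # ~ -450 Hz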
---
| Title: Electrochemical Biosensors for Rapid Detection of Foodborne Salmonella: A Critical Overview
Section 1: Introduction
Description 1: Introduce the significance of Salmonella detection in food products, the limitations of conventional methods, and the need for rapid and sensitive detection techniques.
Section 2: Nano- and Micro-Sized Materials for Improving Detection
Description 2: Discuss the various nano and micro-sized materials used to enhance the performance of biosensors for Salmonella detection.
Section 3: Electrochemical Immunosensors for Salmonella Detection
Description 3: Describe the principles, formats, and examples of electrochemical immunosensors used for Salmonella detection.
Section 4: GNPs
Description 4: Provide detailed examples and performances of electrochemical immunosensors using gold nanoparticles (GNPs).
Section 5: MBs
Description 5: Discuss the use of magnetic beads (MBs) in the development of electrochemical immunosensors for Salmonella detection.
Section 6: QDs
Description 6: Cover the application of quantum dots (QDs) in electrochemical immunosensors for the detection of foodborne Salmonella.
Section 7: Label-Free Electrochemical Immunosensors
Description 7: Explain the concept and examples of label-free electrochemical immunosensors for detecting Salmonella.
Section 8: CNTs
Description 8: Detail the use of carbon nanotubes (CNTs) in enhancing the sensitivity and performance of electrochemical immunosensors for Salmonella.
Section 9: Electrochemical Genosensors, Phagosensors, and Aptasensors for Salmonella Detection
Description 9: Introduce and elaborate on the various electrochemical platforms like genosensors, phagosensors, and aptasensors used for Salmonella detection.
Section 10: Genosensors
Description 10: Discuss the principles, methods, and examples of electrochemical genosensors for Salmonella detection.
Section 11: Phagosensors
Description 11: Describe the use of bacteriophages in electrochemical biosensing for the detection of Salmonella.
Section 12: Aptasensors
Description 12: Explain the application and advantages of aptamer-based electrochemical sensors in Salmonella detection.
Section 13: Conclusions
Description 13: Summarize the findings, emphasize the importance of developing rapid and sensitive detection methods, and outline the challenges and future perspectives in the field. |
Artificial Intelligence in Knowledge Management: Overview and Trends | 17 | ---
paper_title: Knowledge representation and ontologies
paper_content:
Knowledge representation and reasoning aims at designing computer systems that reason about a machine-interpretable representation of the world. Knowledge-based systems have a computational model of some domain of interest in which symbols serve as surrogates for real world domain artefacts, such as physical objects, events, relationships, etc. [1]. The domain of interest can cover any part of the real world or any hypothetical system about which one desires to represent knowledge for computational purposes. A knowledge-based system maintains a knowledge base, which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Applications can base their decisions on answers to domain-relevant questions posed to a knowledge base.
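To make the "symbols as surrogates" idea concrete, here is a minimal, hypothetical sketch of a knowledge base of subject-relation-object statements with one transitive reasoning step; the entities and relations are invented for illustration.

```python
# Minimal, hypothetical sketch of a knowledge base as a set of symbolic
# statements plus a tiny reasoning step; names and facts are illustrative only.
kb = {
    ("ACME-Pump-7", "is_a", "CentrifugalPump"),
    ("CentrifugalPump", "is_a", "Pump"),
    ("ACME-Pump-7", "located_in", "PlantA"),
}

def is_a(entity, cls):
    """Answer 'is entity a cls?' by following is_a links transitively."""
    if (entity, "is_a", cls) in kb:
        return True
    parents = {o for (s, p, o) in kb if s == entity and p == "is_a"}
    return any(is_a(parent, cls) for parent in parents)

print(is_a("ACME-Pump-7", "Pump"))  # True: inferred, not stored explicitly
```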
---
paper_title: Decision support through knowledge management: the role of the artificial intelligence
paper_content:
Knowledge management (KM) has recently received considerable attention in the computer information systems community and is continuously gaining interest from industry, enterprises and government. Decision support and KM processes are interdependent activities in many organizations. In all cases, decision makers combine different types of data and knowledge available in various forms in the organization. One of the key, but also criticized, building blocks for advancing the field of knowledge management, and consequently supporting decision making, is artificial intelligence (AI). In this framework, this paper aims to improve understanding of the role of AI in knowledge management. It examines and discusses both the potential and the limitations of basic AI technologies in terms of their capability to support the KM process, and shares thoughts and estimations on further research on the development of next-generation decision support environments.
---
paper_title: Combining inductive and deductive inference in knowledge management tasks
paper_content:
This paper indicates how different logic programming technologies can underpin an architecture for distributed knowledge management in which higher throughput in information supply is achieved by a (semi-)automated solution to the more challenging problem of knowledge creation. The paper first proposes working definitions of the notions of data, knowledge and information in purely logical terms, and then shows how existing technologies can be combined into an inference engine, referred to as a knowledge, information and data engine (KIDE), integrating inductive and deductive capabilities. The paper then briefly introduces the notion of virtual organizations and uses the set-up stage of virtual organizations to exemplify the value-adding potential of KIDEs in knowledge management contexts.
---
paper_title: Knowledge management and environmental decision support systems
paper_content:
Artificial Intelligence (AI) researchers have begun to use AI techniques and methodologies for developing Knowledge Management Systems (KMS) for different domains and technologies. Expert systems are among the best-known techniques for knowledge management and can aid in solving problems in a specific domain. Case-based reasoning has been successfully used for managing knowledge of an implicit type. Data mining is a newer methodology for searching for and discovering hidden patterns in very large databases. On the other hand, effective protection of our environment depends largely on the quality of the information available for making appropriate decisions. Recently, a new discipline called Environmental Informatics, which integrates environmental science with computer science, has been emerging. This paper discusses the role of AI technologies, namely expert systems and data mining, in environmental KMS.
---
paper_title: Decisional DNA and the Smart Knowledge Management System: Knowledge Engineering and Knowledge Management applied to an Intelligent Platform
paper_content:
Experience has enabled species to survive and cultures to prevail, just as experience enables organizations to succeed. Thus, capturing the experience behind every decision taken, in an explicit representation form, is highlighted as of utmost importance in the knowledge engineering presented in this book. Decisional DNA knowledge representation allows the experiential fingerprints of an organization to be built by implementing a model for transforming information into knowledge. The Smart Knowledge Management System (SKMS) is a self-learning, intelligent hybrid knowledge management platform developed to help decision makers in their daily operation. Technologies such as expert systems, simulation, statistical tools, knowledge-based systems, and multiple AI technologies are integrated into the SKMS, allowing different perspectives to be combined for acquiring the required explicit decisional experiential knowledge by means of Decisional DNA. The presented tools offer useful insights to professionals, students and researchers interested in knowledge engineering and knowledge management and, more generally, in the artificial intelligence and information systems fields.
---
paper_title: Improving knowledge management programs using marginal utility in a metric space generated by conceptual graphs
paper_content:
Knowledge management has emerged as a field of endeavor that blends a systems approach with methods drawn from organizational management and learning. In contrast, knowledge representation, a branch of artificial intelligence, is grounded in formal methods. Research in the separate behavioral and structural disciplines (knowledge management and knowledge engineering) has not traditionally cross-pollinated, which has prevented the development of many practical methods useful in organizations. Organization managers, both line and senior, lack guidance on where to direct improvement efforts targeted at specific groups of company knowledge workers. Demonstrated here is the Knowledge Improvement Measurement Space (KIMS), a model providing a solution to that improvement problem. It employs marginal utility theory in a metric space, with formal reasoning via software agents realized in Sowa's conceptual graphs, operating over a knowledge management conceptual structure. These components allow repeated evaluation of knowledge improvement measurements. Knowledge representation technology was applied to organize and encourage knowledge sharing, to achieve competitive advantage, and to measure progress toward that achievement. The KIMS re-entrant process, a method of using the KIMS model, was shown to consist of metrics data calculated by executing joined conceptual graphs and consolidated into a distance variable estimated via a Minkowski metric space. The metric space was shown to be equivalent to a marginal utility, which may be evaluated to determine the new level of knowledge capability, and the procedure may be repeated until knowledge management goals are achieved. The solution took into account the body of knowledge related to human understanding and learning, and formal methods of knowledge organization, including surface ontologies based in a knowledge management program, principles of business strategy, and organizational learning. KIMS was validated through a demonstration based on empirical data collected over a five-year program in a large aerospace company during its progress in applying the Software Engineering Institute Capability Maturity Model.
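A minimal sketch of the distance computation this abstract alludes to is given below: knowledge metrics for a current and a target maturity state are consolidated into a single Minkowski distance that can be re-evaluated after each improvement cycle; the metric names, values and the choice p = 2 are assumptions for illustration only.

```python
# Hypothetical sketch of consolidating knowledge metrics into a Minkowski
# distance between a current and a target maturity state, in the spirit of
# KIMS; the metric names, values and the choice p = 2 are assumptions.
def minkowski(x, y, p=2.0):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

current = [0.4, 0.6, 0.3, 0.5]   # e.g. reuse rate, sharing, training, process maturity
target  = [0.8, 0.9, 0.7, 0.9]

gap = minkowski(current, target)
print(f"remaining knowledge-capability gap: {gap:.3f}")
# Re-evaluating this gap after each improvement cycle gives the re-entrant
# process described in the abstract: repeat until the gap is acceptably small.
```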
---
paper_title: Knowledge value chain: an effective tool to measure knowledge value
paper_content:
Knowledge value is a significant issue in knowledge management, but its related problems are still challenging. This paper aims at discussing how knowledge value changes in the knowledge evolution process and develops a knowledge value chain (KVC) to measure knowledge value. By applying the notions of knowledge state and knowledge maturity, the knowledge finite state machine (KFSM) and knowledge maturity model (KMM) are introduced to characterise the KVC. Based on these concepts, knowledge value is measured by calculating the difference between two maturity states rather than by direct calculation. This point of view of knowledge value, the construction of KVC and the association of knowledge value and knowledge maturity are insightful for both researchers and practitioners.
---
paper_title: Concept maps and case-based reasoning: a perspective for the intelligent teaching/learning systems
paper_content:
Combining present pedagogical methods with Information and Communication Technologies produces a new quality that favors the task of generating, transmitting and sharing knowledge. One example is the pedagogical effect produced by the use of Concept Maps, which are regarded as a learning technique that increases meaningful learning in the sciences. Concept Maps are also used for knowledge management as an aid to personalize the learning process, to exchange knowledge, and to learn how to learn. Concept Mapping provides a framework for making internal knowledge explicit in a visual form that can easily be examined and shared. Concept Maps are relevant since they can be retrieved or adapted to new problems. On the other hand, Case-Based Reasoning, as a technique in Artificial Intelligence, plays an important role in knowledge retrieval and the reuse of memories. In this paper the authors present a new approach to building Intelligent Teaching/Learning Systems in which Concept Maps and Artificial Intelligence techniques are combined, using Case-Based Reasoning as the theoretical framework for the Student Model. The proposed model has been implemented in the computational system HESEI, which has been successfully used in the teaching/learning process by laymen in the Computer Science field.
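For readers unfamiliar with the retrieve step of Case-Based Reasoning, the toy sketch below selects the stored case most similar to a new student profile; the feature names, cases and similarity measure are invented and are not taken from HESEI.

```python
# Illustrative sketch of the 'retrieve' step of Case-Based Reasoning:
# pick the stored case most similar to the new problem. Feature names and
# the similarity measure are invented for the example, not taken from HESEI.
cases = [
    {"features": {"prior_knowledge": 0.2, "pace": 0.8}, "solution": "remedial concept map"},
    {"features": {"prior_knowledge": 0.9, "pace": 0.4}, "solution": "advanced exercises"},
]

def similarity(a, b):
    keys = a.keys() & b.keys()
    return 1.0 - sum(abs(a[k] - b[k]) for k in keys) / len(keys)

def retrieve(new_problem):
    return max(cases, key=lambda c: similarity(c["features"], new_problem))

print(retrieve({"prior_knowledge": 0.3, "pace": 0.7})["solution"])
```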
---
paper_title: The Knowledge Creating Company
paper_content:
Japanese companies, masters of manufacturing, have also been leaders in the creation, management, and use of knowledge, especially the tacit and often subjective insights, intuitions, and ideas of employees.
---
paper_title: The Artificial Intelligence in Personal Knowledge Management
paper_content:
With the development of society, it has become necessary to apply artificial intelligence to personal knowledge management, and this paper studies that application. First, the paper introduces the meaning of personal knowledge management and of artificial intelligence. It then introduces three current problems in personal knowledge management: (1) information overload, (2) unstructured information, and (3) tacit knowledge. Third, to address these problems, it introduces applications of artificial intelligence in personal knowledge management: (1) intelligent search of knowledge, (2) automatic classification of knowledge, and (3) conversion of tacit knowledge, each explained in detail. Finally, the paper concludes that the application of artificial intelligence in personal knowledge management is still at an initial stage but has a promising future.
---
paper_title: ICT Perceptions and Meanings: Implications for Knowledge Transfer
paper_content:
Driven by global competition and economic pressures, organizations are increasingly interested in transferring and leveraging local expertise at the global level. While many of the challenges of knowledge transfer (KT) have been discussed in the literature (e.g., incentives, cognitive limitations), the challenge of KT in distributed, or "virtual", settings and the role of information and communication technologies (ICTs) have received limited attention. While any given ICT may be described in terms of one's perceptual awareness of its capabilities (e.g., richness, interactivity), it may also be described relative to the meanings an individual attaches to it, i.e., the idea of it or its purpose, rather than capability. We propose that understanding both perceptions and meanings, particularly as new ICTs are introduced, is critical to understanding selection and use by KT participants, and ultimately outcomes. In this paper, we conceptually explore the implications of meanings and perceptions on KT in virtual settings.
---
paper_title: The Tyranny of Tacit Knowledge: What Artificial Intelligence Tells us About Knowledge Representation
paper_content:
Polanyi's tacit knowledge captures the idea "we can know more than we can tell." Many researchers in the knowledge management community have used the idea of tacit knowledge to draw a distinction between that which cannot be formally represented (tacit knowledge) and knowledge which can be so represented (explicit knowledge). I argue that the deference that knowledge management researchers give to tacit knowledge hinders potentially fruitful work for two important reasons. First, the inability to explicate knowledge does not imply that the knowledge cannot be formally represented. Second, assuming the inability to formalize tacit knowledge as it exists in the minds of people does not exclude the possibility that computer systems might perform the same tasks using alternative representations. By reviewing work from artificial intelligence, I will argue that a richer model of cognition and knowledge representation is needed to study and build knowledge management systems.
---
paper_title: Information Extraction in Semantic Wikis
paper_content:
This paper deals with information extraction technologies supporting semantic annotation and logical organization of textual content in semantic wikis. We describe our work in the context of the KiWi project which aims at developing a new knowledge management system motivated by the wiki way of collaborative content creation that is enhanced by the semantic web technology. The specific characteristics of semantic wikis as advanced community knowledge-sharing platforms are discussed from the perspective of the functionality providing automatic suggestions of semantic tags. We focus on the innovative aspects of the implemented methods. The interfaces of the user-interaction tools as well as the back-end web services are also tackled. We conclude that though there are many challenges related to the integration of information extraction into semantic wikis, this fusion brings valuable results.
---
paper_title: Wikis: 'From Each According to His Knowledge'
paper_content:
Wikis offer tremendous potential to capture knowledge from large groups of people, making tacit, hidden content explicit and widely available. They also efficiently connect those with information to those seeking it.
---
paper_title: Hierarchical Document Classification Based on a Backtracking Algorithm
paper_content:
Hierarchical document classification refers to assigning one or more suitable categories from a hierarchical category space to a document. This paper proposes a new hierarchical document classification method based on a backtracking algorithm. Utilizing the relationships between categories in the category tree, a suitable threshold is found for every category to determine whether a document can be classified into that category, and the backtracking step effectively mitigates the problem that a misclassification at a higher level directly leads to misclassification at a lower level. Moreover, the feature set is selected by integrating information gain with hierarchy information, which accords with the characteristics of a category tree. Experiments show that the method performs well when enough training documents are given.
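The following schematic sketch illustrates the general top-down-with-backtracking idea (not the authors' actual classifiers or thresholds): descend the category tree, accept a child only if its score clears that category's threshold, and fall back to a sibling when nothing below a chosen child qualifies.

```python
# Schematic sketch (not the authors' implementation) of top-down hierarchical
# classification with per-category thresholds and backtracking over siblings.
tree = {"root": ["news", "science"], "news": ["sports", "politics"], "science": ["physics"]}
threshold = {"news": 0.5, "science": 0.5, "sports": 0.6, "politics": 0.6, "physics": 0.6}

def score(category, doc):
    # Stand-in for a trained per-category classifier returning P(category | doc).
    return doc.get(category, 0.0)

def classify(doc, node="root", path=()):
    children = tree.get(node, [])
    if not children:                       # leaf reached: accept this path
        return list(path)
    for child in sorted(children, key=lambda c: score(c, doc), reverse=True):
        if score(child, doc) >= threshold[child]:
            result = classify(doc, child, path + (child,))
            if result is not None:         # success below; otherwise try a sibling
                return result
    return None                            # nothing qualified here: caller backtracks

doc = {"news": 0.7, "sports": 0.65, "politics": 0.4, "science": 0.3}
print(classify(doc))                       # ['news', 'sports'] under these made-up scores
```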
---
paper_title: A Hierarchical Classification Model for Document Categorization
paper_content:
We propose a novel hierarchical classification method for document categorization in this paper. The approach consists of multiple levels of classification for different hierarchies. Regularized Least Squares (RLS) binary classifiers are applied in the middle levels of the hierarchy to classify documents into smaller sets of categories, and K-nearest-neighbor (KNN) multi-class classifiers are used at the bottom to classify documents into final classes. Experiments on large-scale real-world tax documents show that the proposed hierarchical approach outperforms the traditional flat classification method.
---
| Title: Artificial Intelligence in Knowledge Management: Overview and Trends
Section 1: INTRODUCTION
Description 1: Introduce the importance of knowledge management in organizations, the role of ICT and AI in managing knowledge, and the aim of the paper.
Section 2: OVERVIEW OF RESEARCH PAPERS (2001-2003)
Description 2: Summarize definitions of data, information, and knowledge, and review early research discussing KM tasks and AI methods like expert systems, neural networks, and intelligent agents.
Section 3: LATEST KM FRAMEWORKS
Description 3: Outline the latest knowledge management frameworks, highlighting the growing importance of AI in KM. Discuss the different types of learning within KM (human, organizational, machine learning).
Section 4: MEETING THE CHALLENGES
Description 4: Discuss the evolution of AI support in KM since the early 2000s, current integrated AI technologies for KM, measurement models for KM, and self-improving systems.
Section 4.A: Integrated framework for the use of AI technologies in KM tasks
Description 4.A: Provide examples of complex frameworks integrating multiple AI technologies in KM such as expert systems, data mining, and case-based reasoning.
Section 4.B: Measurements
Description 4.B: Discuss measurement-oriented culture in KM, including knowledge loss risk assessment and Knowledge Improvement Measurement Space (KIMS).
Section 4.C: Systems that improve their results
Description 4.C: Explain systems that employ techniques like Case-Based Reasoning (CBR) to improve performance by learning from experience.
Section 5: RECENT TRENDS IN KNOWLEDGE MANAGEMENT (2008-2010)
Description 5: Discuss recent trends in KM, including personal knowledge management and distributed knowledge work, and their support by AI technologies.
Section 5.A: Personal knowledge management
Description 5.A: Address the importance of personal KM, AI applications for tackling personal KM challenges like information overload, and intelligent search.
Section 5.B: Distributed knowledge work
Description 5.B: Describe the shift to virtual work in organizations, the increased demands on ICT for communication and collaboration, and KM challenges specific to distributed work.
Section 6: TOPICAL KM TASKS
Description 6: Explore the latest topical tasks in KM supported by AI technologies, including the representation of tacit knowledge, knowledge mining on the web, text categorization, knowledge sharing, and advanced search and retrieval systems.
Section 6.A: Efforts to represent tacit knowledge
Description 6.A: Discuss methods for representing tacit knowledge using expert systems, wikis, and concept maps.
Section 6.B: Knowledge mining on the web
Description 6.B: Explain approaches to mining knowledge on the web, including the creation of specialized glossaries and the role of blogging.
Section 6.C: Text categorization
Description 6.C: Review techniques for hierarchical and multi-label text categorization supported by AI, such as concept maps and specialized algorithms.
Section 6.D: Knowledge sharing
Description 6.D: Discuss the use of AI and ontologies for knowledge sharing within wireless sensor networks and other systems.
Section 6.E: Search and retrieval
Description 6.E: Analyze intelligent agent-based improvements in search and retrieval engines to enhance KM tasks.
Section 7: CONCLUSIONS
Description 7: Summarize the significance of AI in KM, highlight accomplishments and trends, and provide a reference collection for further studies on particular KM topics. |
Neuro-Fuzzy Rule Generation: Survey in Soft Computing Framework | 5
paper_title: Knowledge-based connectionism for revising domain theories
paper_content:
A knowledge-based connectionist model for machine learning referred to as KBCNN is presented. In the KBCNN learning model, useful domain attributes and concepts are first identified and linked in a way consistent with initial domain knowledge, and then the links are weighted properly so as to maintain the semantics. Hidden units and additional connections may be introduced into this initial connectionist structure as appropriate. Then, this primitive structure evolves to minimize empirical error. The KBCNN learning model allows the theory learned or revised to be translated into the symbolic rule-based language that describes the initial theory. Thus, a domain theory can be pushed onto the network, revised empirically over time, and decoded in symbolic form. The domain of molecular genetics is used to demonstrate the validity of the KBCNN learning model and its superiority over related learning methods.
---
paper_title: Rough fuzzy MLP: knowledge encoding and classification
paper_content:
A scheme of knowledge encoding in a fuzzy multilayer perceptron (MLP) using rough set-theoretic concepts is described. Crude domain knowledge is extracted from the data set in the form of rules. The syntax of these rules automatically determines the appropriate number of hidden nodes while the dependency factors are used in the initial weight encoding. The network is then refined during training. Results on classification of speech and synthetic data demonstrate the superiority of the system over the fuzzy and conventional versions of the MLP (involving no initial knowledge).
---
paper_title: Survey and critique of techniques for extracting rules from trained artificial neural networks
paper_content:
It is becoming increasingly apparent that, without some form of explanation capability, the full potential of trained artificial neural networks (ANNs) may not be realised. This survey gives an overview of techniques developed to redress this situation. Specifically, the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy.
---
paper_title: Knowledge-Based Artificial Neural Networks
paper_content:
Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for accurately classifying examples not seen during training. The challenge of hybrid learning systems is to use the information provided by one source of information to offset information missing from the other source. By so doing, a hybrid learning system should learn more effectively than systems that use only one of the information sources. KBANN (Knowledge-Based Artificial Neural Networks) is a hybrid learning system built on top of connectionist learning techniques. It maps problem-specific “domain theories”, represented in propositional logic, into neural networks and then refines this reformulated knowledge using backpropagation. KBANN is evaluated by extensive empirical tests on two problems from molecular biology. Among other results, these tests show that the networks created by KBANN generalize better than a wide variety of learning systems, as well as several techniques proposed by biologists.
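A rough sketch of the rule-to-network compilation at the heart of this approach is given below: each conjunctive rule becomes a unit with weight +ω for positive antecedents, -ω for negated ones, and a bias chosen so the unit fires only when the rule would. The example rule, the value ω = 4 and the sigmoid unit are illustrative choices, not the paper's exact settings.

```python
# Rough sketch of the KBANN idea: compile propositional rules into an initial
# network whose units behave like the rules, then refine the weights with
# backpropagation. The rule, omega = 4.0 and the sigmoid unit are illustrative.
import math

omega = 4.0

def rule_unit(pos, neg, conjunctive=True):
    """Build (weights, bias) for a rule with positive/negated antecedents."""
    w = {a: omega for a in pos}
    w.update({a: -omega for a in neg})
    p = len(pos)
    bias = (p - 0.5) * omega if conjunctive else 0.5 * omega
    return w, bias

def activate(w, bias, inputs):
    net = sum(w[a] * inputs.get(a, 0.0) for a in w) - bias
    return 1.0 / (1.0 + math.exp(-net))   # sigmoid, ready for backprop refinement

# promoter :- contact, conformation.   (example-style rule, for illustration only)
w, b = rule_unit(pos=["contact", "conformation"], neg=[])
print(activate(w, b, {"contact": 1, "conformation": 1}))  # ~1: rule fires
print(activate(w, b, {"contact": 1, "conformation": 0}))  # ~0: rule does not fire
```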
---
paper_title: Foundations Of Neuro-Fuzzy Systems
paper_content:
Foundations of Neuro-Fuzzy Systems reflects the current trend in intelligent systems research towards the integration of neural networks and fuzzy technology. The authors demonstrate how a combination of both techniques enhances the performance of control, decision-making and data analysis systems. Smarter and more applicable structures result from marrying the learning capability of the neural network with the transparency and interpretability of the rule-based fuzzy system. Foundations of Neuro-Fuzzy Systems highlights the advantages of integration making it a valuable resource for graduate students and researchers in control engineering, computer science and applied mathematics. The authors' informed analysis of practical neuro-fuzzy applications will be an asset to industrial practitioners using fuzzy technology and neural networks for control systems, data analysis and optimization tasks.
---
paper_title: Neuro-Fuzzy Pattern Recognition: Methods in Soft Computing
paper_content:
The authors consolidate a wealth of information previously scattered in disparate articles, journals, and edited volumes, explaining both the theory of neuro-fuzzy computing and the latest methodologies for performing different pattern recognition tasks in the neuro-fuzzy network - classification, feature evaluation, rule generation, knowledge extraction, and hybridization. Special emphasis is given to the integration of neuro-fuzzy methods with rough sets and genetic algorithms (GAs) to ensure more efficient recognition systems.
---
paper_title: The truth will come to light: directions and challenges in extracting the knowledge embedded within trained artificial neural networks
paper_content:
To date, the preponderance of techniques for eliciting the knowledge embedded in trained artificial neural networks (ANN's) has focused primarily on extracting rule-based explanations from feedforward ANN's. The ADT taxonomy for categorizing such techniques was proposed in 1995 to provide a basis for the systematic comparison of the different approaches. This paper shows that not only is this taxonomy applicable to a cross section of current techniques for extracting rules from trained feedforward ANN's but also how the taxonomy can be adapted and extended to embrace a broader range of ANN types (e.g., recurrent neural networks) and explanation structures. In addition we identify some of the key research questions in extracting the knowledge embedded within ANN's including the need for the formulation of a consistent theoretical basis for what has been, until recently, a disparate collection of empirical results.
---
paper_title: Are artificial neural networks black boxes
paper_content:
Artificial neural networks are efficient computing models which have shown their strengths in solving hard problems in artificial intelligence. They have also been shown to be universal approximators. Notwithstanding, one of the major criticisms is their being black boxes, since no satisfactory explanation of their behavior has been offered. In this paper, we provide such an interpretation of neural networks so that they will no longer be seen as black boxes. This is stated after establishing the equality between a certain class of neural nets and fuzzy rule-based systems. This interpretation is built with fuzzy rules using a new fuzzy logic operator which is defined after introducing the concept of f-duality. In addition, this interpretation offers an automated knowledge acquisition procedure.
---
paper_title: Approximations between fuzzy expert systems and neural networks
paper_content:
The fuzzy expert system we are concerned about in this paper is a rule-based fuzzy expert system using any method of approximate reasoning to evaluate the rules when given new data. In this paper we argue that: (1) any continuous fuzzy expert system may be approximated by a neural net; and (2) any continuous neural net (feedforward, multilayered) may be approximated by a fuzzy expert system. We show how to train the neural net and how to write down the rules in the fuzzy expert system.
---
paper_title: An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
paper_content:
This paper describes an experiment on the “linguistic” synthesis of a controller for a model industrial plant (a steam engine). Fuzzy logic is used to convert heuristic control rules stated by a human operator into an automatic control strategy. The experiment was initiated to investigate the possibility of human interaction with a learning controller. However, the control strategy set up linguistically proved to be far better than expected in its own right, and the basic experiment of linguistic control synthesis in a non-learning controller is reported here.
---
paper_title: Neural fuzzy systems: a neuro-fuzzy synergism to intelligent systems
paper_content:
Neural Fuzzy Systems provides a comprehensive, up-to-date introduction to the basic theories of fuzzy systems and neural networks, as well as an exploration of how these two fields can be integrated to create Neural-Fuzzy Systems. It includes Matlab software, with a Neural Network Toolkit, and a Fuzzy System Toolkit.
---
paper_title: Computational Intelligence: An Introduction
paper_content:
Computational Intelligence: An Introduction consists of a highly readable and systematic exposure of the fundamentals of computational intelligence, along with the coherent presentation of sound and comprehensive analysis and design practices. The book provides a balanced introduction to computational intelligence, emphasizing equally the important analysis and design aspects of the emerging technology; the text is organized in a way that allows for the easy use of the book as basic course material; it presents a design-oriented approach toward the use of computational intelligence; it organizes exercises and problems of different levels of difficulty after each chapter; and complete algorithms are presented in a structured fashion, easing understanding and implementation.
---
paper_title: On the equivalence of neural nets and fuzzy expert systems
paper_content:
We show, under the assumptions described in the paper, that: (1) we can approximate a neural net to any degree of accuracy using a fuzzy expert system; and conversely (2) we may approximate a fuzzy expert system to any degree of accuracy with a neural net.
---
paper_title: Hybrid neural nets can be fuzzy controllers and fuzzy expert systems
paper_content:
Given a discrete fuzzy expert system we show how to construct a hybrid neural net computationally identical to the fuzzy expert system. Given a Sugeno, Mamdani, or expert system type of controller we build a hybrid neural net that is computationally the same as the controller. This improves on previous results that show a (regular) neural net can approximate continuous fuzzy controllers and continuous fuzzy expert systems to any degree of accuracy.
---
paper_title: Functional Equivalence between Radial Basis Function Networks and Fuzzy Inference Systems
paper_content:
It is shown that, under some minor restrictions, the functional behavior of radial basis function networks (RBFNs) and that of fuzzy inference systems are actually equivalent. This functional equivalence makes it possible to apply what has been discovered (learning rule, representational power, etc.) for one of the models to the other, and vice versa. It is of interest to observe that two models stemming from different origins turn out to be functionally equivalent.
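The equivalence can be checked numerically with a small sketch: a normalized Gaussian RBF network and a zero-order Sugeno fuzzy system that share centres, a common width and the same consequents return identical outputs; the parameter values below are arbitrary.

```python
# Numerical illustration of the functional equivalence: a normalized Gaussian
# RBF network and a zero-order Sugeno fuzzy system with the same centres,
# widths and consequents produce identical outputs. Parameters are invented.
import numpy as np

centres = np.array([0.0, 1.0, 2.0])
sigma   = 0.5                    # restriction: all receptive fields share one width
consequents = np.array([1.0, 3.0, 2.0])

def rbfn(x):
    phi = np.exp(-(x - centres) ** 2 / (2 * sigma ** 2))
    return np.dot(consequents, phi) / phi.sum()      # normalized RBF network

def sugeno_fis(x):
    # one rule per membership function: IF x is A_i THEN y = consequents[i]
    mu = np.exp(-(x - centres) ** 2 / (2 * sigma ** 2))
    return np.sum(mu * consequents) / np.sum(mu)     # weighted-average defuzzification

for x in [0.3, 1.1, 1.9]:
    print(x, rbfn(x), sugeno_fis(x))                 # identical values
```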
---
paper_title: Bidirectional bridge between neural networks and linguistic knowledge: linguistic rule extraction and learning from linguistic rules
paper_content:
The aim of the paper is to clearly demonstrate that the relation between neural networks and linguistic knowledge is bidirectional. First we show how neural networks can be trained by linguistic knowledge, which is represented by a set of fuzzy rules. Next we show how linguistic knowledge can be extracted from neural networks. Then we discuss the design of classification systems when numerical data and linguistic knowledge are available. Since the relation between neural networks and linguistic knowledge is bidirectional, we can simultaneously utilize these two kinds of information for designing classification systems. For example, neural network-based classification systems can be trained by numerical data and linguistic knowledge. Fuzzy rule-based classification systems can be designed by linguistic knowledge and fuzzy rules extracted from neural networks. The performance of these classification systems is examined by computer simulations.
---
paper_title: Numerical relationships between neural networks, continuous functions, and fuzzy systems
paper_content:
In this survey paper we first discuss computational equivalence between continuous functions, regular neural nets, fuzzy controllers, and discrete fuzzy expert systems. We also discuss how to build hybrid neural nets numerically identical to a fuzzy controller or a discrete fuzzy expert system. We then discuss computational equivalence between continuous fuzzy functions, regular fuzzy neural nets, and fuzzy systems. We also discuss how to construct hybrid fuzzy neural nets to approximate continuous fuzzy functions.
---
paper_title: ANFIS: Adaptive-Network-Based Fuzzy Inference System
paper_content:
The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) is presented, which is a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In the simulation, the ANFIS architecture is employed to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested.
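To make the layered structure concrete, the sketch below walks through the forward pass of a two-input, two-rule, first-order Sugeno model of the kind ANFIS implements (membership functions, product firing strengths, normalization, linear consequents, weighted sum); all parameters are arbitrary stand-ins and the hybrid least-squares/gradient-descent training described in the paper is omitted.

```python
# Sketch of the ANFIS-style forward pass for a 2-input, 2-rule first-order
# Sugeno model; all parameters below are arbitrary stand-ins, and the hybrid
# least-squares/gradient-descent training described in the paper is omitted.
import numpy as np

def bell(x, a, b, c):
    """Generalized bell membership function used in ANFIS."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# layer 1: premise membership functions (assumed parameters)
A = [dict(a=1.0, b=2.0, c=0.0), dict(a=1.0, b=2.0, c=2.0)]   # for input x
B = [dict(a=1.0, b=2.0, c=0.0), dict(a=1.0, b=2.0, c=2.0)]   # for input y
# first-order consequents: f_i = p_i*x + q_i*y + r_i (assumed coefficients)
conseq = np.array([[0.5, 0.2, 1.0],
                   [1.5, -0.3, 0.0]])

def anfis(x, y):
    w = np.array([bell(x, **A[i]) * bell(y, **B[i]) for i in range(2)])  # layer 2: product
    wn = w / w.sum()                                                     # layer 3: normalize
    f = conseq @ np.array([x, y, 1.0])                                   # layer 4: consequents
    return float(np.dot(wn, f))                                          # layer 5: weighted sum

print(anfis(0.5, 1.5))
```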
---
paper_title: Fuzzy basis functions, universal approximation, and orthogonal least-squares learning.
paper_content:
Fuzzy systems are represented as series expansions of fuzzy basis functions which are algebraic superpositions of fuzzy membership functions. Using the Stone-Weierstrass theorem, it is proved that linear combinations of the fuzzy basis functions are capable of uniformly approximating any real continuous function on a compact set to arbitrary accuracy. Based on the fuzzy basis function representations, an orthogonal least-squares (OLS) learning algorithm is developed for designing fuzzy systems based on given input-output pairs; then, the OLS algorithm is used to select significant fuzzy basis functions which are used to construct the final fuzzy system. The fuzzy basis function expansion is used to approximate a controller for the nonlinear ball and beam system, and the simulation results show that the control performance is improved by incorporating some common-sense fuzzy control rules.
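The sketch below builds such an expansion with Gaussian memberships and fits the coefficients by plain least squares; replacing this with the paper's orthogonal least-squares subset selection is the refinement described above, and all numbers here are illustrative.

```python
# Sketch of a fuzzy basis function (FBF) expansion fitted by plain least
# squares; the paper's orthogonal least-squares subset selection is replaced
# here by fitting all basis functions, and all numbers are illustrative.
import numpy as np

centres = np.linspace(-1, 1, 5)
sigma = 0.4

def fbf_matrix(x):
    g = np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2 * sigma ** 2))
    return g / g.sum(axis=1, keepdims=True)          # normalization makes them FBFs

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(np.pi * x) + 0.05 * rng.standard_normal(200)    # target function + noise

theta, *_ = np.linalg.lstsq(fbf_matrix(x), y, rcond=None)  # expansion coefficients
print(np.round(theta, 2))
print((fbf_matrix(np.array([0.5])) @ theta).item())        # approximates sin(pi*0.5) ~ 1
```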
---
paper_title: Logical operation based fuzzy MLP for classification and rule generation
paper_content:
A fuzzy layered neural network for classification and rule generation is proposed using logical neurons. It can handle uncertainty and/or impreciseness in the input as well as the output. Logical operators, namely, t-norm T and t-conorm S involving And and Or neurons, are employed in place of the weighted sum and sigmoid functions. Various fuzzy implication operators are introduced to incorporate different amounts of mutual interaction during the back-propagation of errors. In the case of partial inputs, the model is capable of querying the user for the more important feature information, if and when required. Justification for an inferred decision may be produced in rule form. The built-in And-Or structure of the network enables the generation of appropriate rules expressed as the disjunction of conjunctive clauses. The effectiveness of the model is tested on a speech recognition problem and on some artificially generated pattern sets.
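A minimal sketch of such logic-based neurons is shown below, using the product t-norm and the probabilistic-sum t-conorm as one possible (assumed) choice of T and S; the weights are arbitrary.

```python
# Minimal sketch of logic-based AND/OR neurons built from a t-norm and a
# t-conorm (here product and probabilistic sum); the weights are arbitrary.
from functools import reduce

t_norm   = lambda a, b: a * b               # one common choice of t-norm T
t_conorm = lambda a, b: a + b - a * b       # its dual t-conorm S

def or_neuron(x, w):
    # y = S_i( T(x_i, w_i) ): fires if any sufficiently weighted input is high
    return reduce(t_conorm, (t_norm(xi, wi) for xi, wi in zip(x, w)))

def and_neuron(x, w):
    # y = T_i( S(x_i, w_i) ): fires only if all (weight-relaxed) inputs are high
    return reduce(t_norm, (t_conorm(xi, wi) for xi, wi in zip(x, w)))

x = [0.9, 0.2, 0.7]            # fuzzy input membership values
print(or_neuron(x,  [0.8, 0.1, 0.5]))
print(and_neuron(x, [0.1, 0.6, 0.2]))
```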
---
paper_title: A gradient descent learning algorithm for fuzzy neural networks
paper_content:
In order to train fuzzy neural nets, fuzzy number weights have to be adjusted. Since fuzzy arithmetic automatically leads to monotonically increasing outputs, a direct fuzzification of the backpropagation method does not work. Therefore, other strategies such as evolutionary algorithms are being considered in the literature. In this paper we suggest a backpropagation-based method of adjusting the weights. Furthermore, we show that by using the proposed method convergence can be guaranteed.
---
paper_title: Constructing fuzzy model by self-organizing counterpropagation network
paper_content:
This paper describes a general and systematic approach to constructing a multivariable fuzzy model from numerical data through a self-organizing counterpropagation network (SOCPN). Two self-organizing algorithms USOCPN and SSOCPN, being unsupervised and supervised respectively, are introduced. SOCPN can be employed in two ways. In the first place, it can be used as a knowledge extractor by which a set of rules are generated from the available numerical data set. The generated rule-base is then utilized by a fuzzy reasoning model. The second use of the SOCPN is as an online adaptive fuzzy model in which the rule-base in terms of connection weights is updated successively in response to the incoming measured data. The comparative results on three well studied examples suggest that the method has merits of simple structure, fast learning speed, and good modeling accuracy.
---
paper_title: Fuzzy Neural Networks
paper_content:
In this paper, the McCulloch-Pitts model of a neuron is extended to a more general model which allows the activity of a neuron to be a “fuzzy” rather than an “all-or-none” process. The generalized model is called a fuzzy neuron. Some basic properties of fuzzy neural networks as well as their applications to the synthesis of fuzzy automata are investigated. It is shown that any n-state minimal fuzzy automaton can be realized by a network of m fuzzy neurons, where ⌈log₂n⌉ < m < 2n. Examples are given to illustrate the procedure. As an example of application, a realization of a fuzzy language recognizer using a fuzzy neural network is presented. The techniques described in this paper may be of use in the study of neural networks as well as in language, pattern recognition, and learning.
---
paper_title: A learning algorithm of fuzzy neural networks with triangular fuzzy weights
paper_content:
In this paper, first we propose an architecture of fuzzy neural networks with triangular fuzzy weights. The proposed fuzzy neural network can handle fuzzy input vectors as well as real input vectors. In both cases, outputs from the fuzzy neural network are fuzzy vectors. The input-output relation of each unit of the fuzzy neural network is defined by the extension principle of Zadeh. Next we define a cost function for the level sets (i.e., α-cuts) of fuzzy outputs and fuzzy targets. Then we derive a learning algorithm from the cost function for adjusting three parameters of each triangular fuzzy weight. Finally, we illustrate our approach by computer simulations on numerical examples.
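A minimal sketch of the forward pass through one such fuzzy unit, assuming triangular fuzzy weights and interval arithmetic on α-cuts in the spirit of the extension principle; the sigmoid activation and all numerical values are illustrative assumptions.

```python
# Propagating alpha-cut intervals of triangular fuzzy inputs and weights through
# a single neuron; a monotone sigmoid maps the interval endpoints directly.
import numpy as np

def alpha_cut(tri, alpha):
    """Alpha-cut [lower, upper] of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return a + alpha * (b - a), c - alpha * (c - b)

def interval_mul(i1, i2):
    products = [i1[0]*i2[0], i1[0]*i2[1], i1[1]*i2[0], i1[1]*i2[1]]
    return min(products), max(products)

def interval_add(i1, i2):
    return i1[0] + i2[0], i1[1] + i2[1]

def fuzzy_neuron(inputs, weights, alpha):
    """inputs, weights: lists of triangular fuzzy numbers (a, b, c)."""
    total = (0.0, 0.0)
    for x, w in zip(inputs, weights):
        total = interval_add(total, interval_mul(alpha_cut(x, alpha), alpha_cut(w, alpha)))
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    return sig(total[0]), sig(total[1])   # monotone activation preserves the interval

inputs = [(0.8, 1.0, 1.2), (-0.2, 0.0, 0.2)]
weights = [(0.4, 0.5, 0.6), (-1.1, -1.0, -0.9)]
for alpha in (0.0, 0.5, 1.0):
    print(alpha, fuzzy_neuron(inputs, weights, alpha))
```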
---
paper_title: Self-organization for object extraction using a multilayer neural network and fuzziness measures
paper_content:
The feedforward multilayer perceptron (MLP) with back-propagation of error is described. Since use of this network requires a set of labeled input-output pairs, it cannot be used for segmentation of images when only one image is available. (However, if the images to be processed are of a similar nature, one can use a set of known images for learning and then use the network for processing other images.) A self-organizing multilayer neural network architecture suitable for image processing is proposed. The proposed architecture is also a feedforward one with back-propagation of errors, but unlike the MLP it does not require any supervised learning. Each neuron is connected to the corresponding neuron in the previous layer and the set of neighbors of that neuron. The output status of the neurons in the output layer is described as a fuzzy set. A fuzziness measure of this fuzzy set is used as a measure of error in the system (instability of the network). Learning rates for various measures of fuzziness have been theoretically and experimentally studied. An application of the proposed network in object extraction from noisy scenes is also demonstrated.
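The fuzziness measure used as the training signal can be sketched as follows; the linear index of fuzziness is shown here as one representative choice, since the paper studies several measures.

```python
# The (linear) index of fuzziness of a layer's outputs interpreted as a fuzzy set;
# the nearest crisp set is obtained by thresholding memberships at 0.5.
import numpy as np

def linear_index_of_fuzziness(mu):
    """mu: array of membership values in [0, 1]."""
    nearest_crisp = (mu >= 0.5).astype(float)
    return 2.0 / mu.size * np.sum(np.abs(mu - nearest_crisp))

outputs = np.array([0.1, 0.45, 0.8, 0.95, 0.5])
print(linear_index_of_fuzziness(outputs))   # 0 for a crisp set, maximal when all mu = 0.5
```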
---
paper_title: Fuzzy self-organization, inferencing, and rule generation
paper_content:
A connectionist inferencing network, based on the fuzzy version of Kohonen's model already developed by the authors, is proposed. It is capable of handling uncertainty and/or impreciseness in the input representation provided in quantitative, linguistic and/or set forms. The output class membership value of an input pattern is inferred by the trained network. A measure of certainty expressing confidence in the decision is also defined. The model is capable of querying the user for the more important input feature information, if required, in case of partial inputs. Justification for an inferred decision may be produced in rule form, when so desired by the user. The connection weight magnitudes of the trained neural network are utilized in every stage of the proposed inferencing procedure. The antecedent and consequent parts of the justificatory rules are provided in natural forms. The effectiveness of the algorithm is tested on the vowel recognition problem and on two sets of artificially generated nonconvex pattern classes.
---
paper_title: Neural network implementation of fuzzy logic
paper_content:
Fuzzy logic has gained increased attention as a methodology for managing uncertainty in a rule-based structure. In a fuzzy logic inference system, more rules can fire at any given time than in a crisp expert system. Since the propositions are modelled as possibility distributions, there is a considerable computational load on the inference engine. In this paper, a neural network structure is proposed as a means of performing fuzzy logic inference. Three variations of the network are described, but in each case the knowledge of the rule (i.e., the antecedent and consequent clauses) is explicitly encoded in the weights of the net. The theoretical properties of this structure are developed. In fact, the network reduces to crisp modus ponens when the inputs are crisp sets. Also, under suitable conditions the degree of specificity of the consequences of the inference is a monotone function of the degree of specificity of the input. Several simulation studies are included to illustrate the performance of the fuzzy logic inference networks.
---
paper_title: Learning and tuning fuzzy logic controllers through reinforcements
paper_content:
A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; it introduces a new conjunction operator for computing the rule strengths of fuzzy control rules; it introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and it learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
---
paper_title: Fuzzy neural networks: a survey
paper_content:
In this paper a fuzzy neural network will be a layered, feedforward neural net that has fuzzy signals and/or fuzzy weights. We survey recent results on learning algorithms and applications for fuzzy neural networks.
---
paper_title: Implementation of conjunctive and disjunctive fuzzy logic rules with neural networks
paper_content:
The use of fuzzy logic to model and manage uncertainty in a rule-based system places high computational demands on an inference engine. In an earlier paper, we introduced trainable neural network structures for fuzzy logic. These networks can learn and extrapolate complex relationships between possibility distributions for the antecedents and consequents in the rules. In this paper, the power of these networks is further explored. The sensitivity of the output to noisy input distributions (which are likely if the clauses are generated from real data) is demonstrated as well as the ability of the networks to internalize multiple conjunctive clause and disjunctive clause rules. Since different rules (with the same variables) can be encoded in a single network, this approach to fuzzy logic inference provides a natural mechanism for rule conflict resolution.
---
paper_title: Neural fuzzy systems: a neuro-fuzzy synergism to intelligent systems
paper_content:
Neural Fuzzy Systems provides a comprehensive, up-to-date introduction to the basic theories of fuzzy systems and neural networks, as well as an exploration of how these two fields can be integrated to create Neural-Fuzzy Systems. It includes Matlab software, with a Neural Network Toolkit, and a Fuzzy System Toolkit.
---
paper_title: Multilayer perceptron, fuzzy sets, and classification
paper_content:
A fuzzy neural network model based on the multilayer perceptron, using the backpropagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and other related models.
---
paper_title: Evidence aggregation networks for fuzzy logic inference
paper_content:
Fuzzy logic has been applied in many engineering disciplines. The problem of fuzzy logic inference is investigated as a question of aggregation of evidence. A fixed network architecture employing general fuzzy unions and intersections is proposed as a mechanism to implement fuzzy logic inference. It is shown that these networks possess desirable theoretical properties. Networks based on parameterized families of operators (such as Yager's union and intersection) have extra predictable properties and admit a training algorithm which produces sharper inference results than were earlier obtained. Simulation studies corroborate the theoretical properties.
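For reference, a minimal sketch of Yager's parameterized union (t-conorm) and intersection (t-norm), the operator family mentioned above; the parameter values are illustrative.

```python
# Yager's union and intersection; as p grows, they approach max and min.
import numpy as np

def yager_union(a, b, p=2.0):
    return np.minimum(1.0, (a ** p + b ** p) ** (1.0 / p))

def yager_intersection(a, b, p=2.0):
    return 1.0 - np.minimum(1.0, ((1.0 - a) ** p + (1.0 - b) ** p) ** (1.0 / p))

a, b = 0.7, 0.4
for p in (1.0, 2.0, 10.0):
    print(p, round(yager_union(a, b, p), 3), round(yager_intersection(a, b, p), 3))
```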
---
paper_title: Foundations Of Neuro-Fuzzy Systems
paper_content:
Foundations of Neuro-Fuzzy Systems reflects the current trend in intelligent systems research towards the integration of neural networks and fuzzy technology. The authors demonstrate how a combination of both techniques enhances the performance of control, decision-making and data analysis systems. Smarter and more applicable structures result from marrying the learning capability of the neural network with the transparency and interpretability of the rule-based fuzzy system. Foundations of Neuro-Fuzzy Systems highlights the advantages of integration, making it a valuable resource for graduate students and researchers in control engineering, computer science and applied mathematics. The authors' informed analysis of practical neuro-fuzzy applications will be an asset to industrial practitioners using fuzzy technology and neural networks for control systems, data analysis and optimization tasks.
---
paper_title: Fuzzy neural network with fuzzy signals and weights
paper_content:
We discuss the direct fuzzification of a standard layered, feedforward neural network where the signals and weights are fuzzy sets. A fuzzified delta rule is presented for learning. Three applications are given, including fuzzy expert systems, fuzzy hierarchical analysis, and fuzzy systems modeling.
---
paper_title: Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps
paper_content:
A neural network architecture is introduced for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors, which may represent fuzzy or crisp sets of features. The architecture, called fuzzy ARTMAP, achieves a synthesis of fuzzy logic and adaptive resonance theory (ART) neural networks by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. Four classes of simulations illustrate fuzzy ARTMAP performance in relation to benchmark backpropagation and genetic algorithm systems. These simulations include finding points inside versus outside a circle, learning to tell two spirals apart, incremental approximation of a piecewise-continuous function, and a letter recognition database. The fuzzy ARTMAP system is also compared with Salzberg's NGE system and with Simpson's FMMC system.
---
paper_title: Interpolation of fuzzy if-then rules by neural networks
paper_content:
A number of approaches have been proposed for implementing fuzzy if-then rules with trainable multilayer feedforward neural networks. In these approaches, learning of neural networks is performed for fuzzy inputs and fuzzy targets. Because the standard back-propagation (BP) algorithm cannot be directly applied to fuzzy data, transformation of fuzzy data into non-fuzzy data or modification of the learning algorithm is required. Therefore the approaches for implementing fuzzy if-then rules can be classified into two main categories: introduction of preprocessors of fuzzy data, and modification of the learning algorithm. In the first category, the standard BP algorithm can be employed after generating non-fuzzy data from fuzzy data by preprocessors. Two kinds of preprocessors, based on membership values and on level sets, are examined in this paper. In the second category, the standard BP algorithm is modified to directly handle the level sets (i.e., intervals) of fuzzy data. This paper examines the ability of each approach to interpolate sparse fuzzy if-then rules. Computer simulations demonstrate the high fitting ability of approaches in the first category and the high interpolating ability of those in the second.
---
paper_title: Neuro-Fuzzy Pattern Recognition: Methods in Soft Computing
paper_content:
The authors consolidate a wealth of information previously scattered in disparate articles, journals, and edited volumes, explaining both the theory of neuro-fuzzy computing and the latest methodologies for performing different pattern recognition tasks in the neuro-fuzzy network - classification, feature evaluation, rule generation, knowledge extraction, and hybridization. Special emphasis is given to the integration of neuro-fuzzy methods with rough sets and genetic algorithms (GAs) to ensure more efficient recognition systems.
---
paper_title: Fuzzy Kohonen clustering networks
paper_content:
The authors propose a fuzzy Kohonen clustering network which integrates the fuzzy c-means (FCM) model into the learning rate and updating strategies of the Kohonen network. This yields an optimization problem related to FCM, and the numerical results show improved convergence as well as reduced labeling errors. It is proved that the proposed scheme is equivalent to the c-means algorithms. The new method can be viewed as a Kohonen type of FCM, but it is self-organizing, since the size of the update neighborhood and the learning rate in the competitive layer are automatically adjusted during learning. Anderson's IRIS data were used to illustrate this method. The results are compared with the standard Kohonen approach.
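A minimal sketch of the underlying fuzzy c-means updates that the network folds into its learning rate; this is plain batch FCM, not the self-organizing variant of the paper, and the data and parameters are illustrative.

```python
# Batch fuzzy c-means: alternating membership and center updates.
import numpy as np

rng = np.random.default_rng(0)

def fcm(X, c=3, m=2.0, iters=50):
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([rng.normal(loc, 0.3, (50, 2)) for loc in (0.0, 3.0, 6.0)])
centers, U = fcm(X)
print(np.round(centers, 2))
```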
---
paper_title: Neuro-fuzzy computing for image processing and pattern recognition: a review
paper_content:
The relevance of integrating the merits of fuzzy set theory and neural network models for designing an efficient decision-making system is explained. The feasibility of such systems and the different ways of integration made so far, in the context of image processing and pattern recognition, are described. The scope for further research and development is outlined. An extensive bibliography is also provided.
---
paper_title: A referential scheme of fuzzy decision making and its neural network structure
paper_content:
The author introduces a method for dealing with imprecise objectives involved in the process of decision-making. A three-stage form of the system is proposed. It comprises three basic functional components realizing matching, nonlinear transformation, and inverse matching. The proposed scheme has a referential structure, which shows that the fuzzy set of a decision is not determined by the objectives themselves but by the levels of matching with some prototype decision situations. Both the matching and inverse matching procedures involve some logic-based mechanisms (equality indices). Neural nets are used to realize the nonlinear mapping indicated in the general scheme. Several advantages of the referential model are highlighted, including exhaustive usage of knowledge about the decision problem conveyed by prototype situations and the introduction of mechanisms for evaluating the relevancy of fuzzy decisions. Additional indices expressing the consistency of decision scenarios are developed. Detailed numerical studies demonstrate the performance of the method and provide some additional background concerning an evaluation of the results.
---
paper_title: Neural networks designed on approximate reasoning architecture and their applications
paper_content:
The NARA (neural networks based on approximate reasoning architecture) model is proposed and its composition procedure and evaluation are described. NARA is a neural network (NN) based on the structure of fuzzy inference rules. The distinctive feature of NARA is that its internal state can be analyzed according to the rule structure, and the problematic portion can be easily located and improved. The ease with which performance can be improved is shown by applying the NARA model to pattern classification problems. The NARA model is shown to be more efficient than ordinary NN models. In NARA, characteristics of the application task can be built into the NN model in advance by employing the logic structure, in the form of fuzzy inference rules. Therefore, it is easier to improve the performance of NARA, in which the internal state can be observed because of its structure, than that of an ordinary NN model, which is like a black box. Examples are introduced by applying the NARA model to the problems of auto adjustment of VTR tape running mechanisms and alphanumeric character recognition.
---
paper_title: Incorporating Fuzzy Membership Functions into the Perceptron Algorithm
paper_content:
The perceptron algorithm, one of the class of gradient descent techniques, has been widely used in pattern recognition to determine linear decision boundaries. While this algorithm is guaranteed to converge to a separating hyperplane if the data are linearly separable, it exhibits erratic behavior if the data are not linearly separable. Fuzzy set theory is introduced into the perceptron algorithm to produce a "fuzzy algorithm" which ameliorates the convergence problem in the nonseparable case. It is shown that the fuzzy perceptron, like its crisp counterpart, converges in the separable case. A method of generating membership functions is developed, and experimental results comparing the crisp to the fuzzy perceptron are presented.
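A minimal sketch of a membership-weighted perceptron update in the spirit of this approach: samples near the class boundary receive memberships close to 0.5 and therefore contribute less to weight corrections. The membership assignment used here (distances to the two class means) and the exponent are illustrative assumptions, not the paper's exact formulation.

```python
# A membership-attenuated perceptron update on a toy two-class problem.
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_memberships(X, y):
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    d0 = np.linalg.norm(X - m0, axis=1)
    d1 = np.linalg.norm(X - m1, axis=1)
    mu1 = d0 / (d0 + d1 + 1e-12)               # membership in class 1
    return np.where(y == 1, mu1, 1.0 - mu1)    # membership in the labeled class

def fuzzy_perceptron(X, y, epochs=100, lr=0.1, p=2.0):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    mu = fuzzy_memberships(X, y)
    w = np.zeros(Xb.shape[1])
    t = np.where(y == 1, 1.0, -1.0)
    for _ in range(epochs):
        for xi, ti, mi in zip(Xb, t, mu):
            if ti * (w @ xi) <= 0:
                w += lr * ((2 * mi - 1) ** p) * ti * xi   # attenuate ambiguous samples
    return w

X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2.5, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
print(fuzzy_perceptron(X, y))
```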
---
paper_title: Fuzzy logic, neural networks, and soft computing
paper_content:
In retrospect, the year 1990 may well be viewed as the beginning of a new trend in the design of household appliances, consumer electronics, cameras, and other types of widely used consumer products. The trend in question relates to a marked increase in what might be called the Machine Intelligence Quotient (MIQ) of such products compared to what it was before 1990. Today, we have microwave ovens and washing machines that can figure out on their own what settings to use to perform their tasks optimally; cameras that come close to professional photographers in picture-taking ability; and many other products that manifest an impressive capability to reason, make intelligent decisions, and learn from experience.
---
paper_title: Rough fuzzy MLP: knowledge encoding and classification
paper_content:
A scheme of knowledge encoding in a fuzzy multilayer perceptron (MLP) using rough set-theoretic concepts is described. Crude domain knowledge is extracted from the data set in the form of rules. The syntax of these rules automatically determines the appropriate number of hidden nodes while the dependency factors are used in the initial weight encoding. The network is then refined during training. Results on classification of speech and synthetic data demonstrate the superiority of the system over the fuzzy and conventional versions of the MLP (involving no initial knowledge).
---
paper_title: Combining rough sets learning- and neural learning-method to deal with uncertain and imprecise information
paper_content:
Any system designed to reason about the real world must be capable of dealing with uncertainty. The complexity of the real world and the finite size of most knowledge bases pose significant difficulties for the traditional concept of the learning system. Experience has shown that many learning paradigms fail to scale up to these problems. One response to these failures has been to construct systems which use multiple learning paradigms, so that the strengths of one paradigm counterbalance some of the weaknesses of the others and the effectiveness of the overall system is enhanced. Consequently, integrated techniques have become widespread in recent years. A multistrategy approach which addresses these issues is presented. It joins two forms of learning, neural networks and rough sets, which seem at first quite different but share the common ability to work well in a natural environment. In a closed-loop fashion we achieve more robust concept learning capabilities for a variety of difficult classification tasks. The objective of the integration is twofold: (i) to improve the overall classification effectiveness of the learned objects' description, and (ii) to refine the dependency factors of the rules.
---
paper_title: Real-time stable self-learning FNN controller using genetic algorithm
paper_content:
A kind of real-time, stable, self-learning fuzzy neural network (FNN) control system is proposed in this paper. The control system is composed of two parts: (1) an FNN controller which uses a genetic algorithm (GA) to search for optimal fuzzy rules and membership functions for the unknown controlled plant; and (2) a supervisor which can guarantee the stability of the control system during the real-time learning stage, since the GA has a random character which may make the control system unstable. The approach proposed in this paper combines the a priori knowledge of the designer and the learning ability of the FNN to achieve optimal fuzzy control for an unknown plant in real time. The efficiency of the approach is verified by computer simulation.
---
paper_title: Linguistic rule extraction from neural networks and genetic-algorithm-based rule selection
paper_content:
This paper proposes a hybrid approach to the design of a compact fuzzy rule-based classification system with a small number of linguistic rules. The proposed approach consists of two procedures: rule extraction from a trained neural network and rule selection by a genetic algorithm. We first describe how linguistic rules can be extracted from a multilayer feedforward neural network that has already been trained for a classification problem with many continuous attributes. In our rule extraction procedure, a linguistic input vector corresponding to the antecedent part of a linguistic rule is presented to the trained neural network, and the fuzzy output vector from the trained neural network is examined for determining the consequent part and the grade of certainty of that linguistic rule. Next we explain how a genetic algorithm can be utilized for selecting a small number of significant linguistic rules from a large number of extracted rules. Our rule selection problem has two objectives: to minimize the number of selected linguistic rules and to maximize the number of correctly classified patterns by the selected linguistic rules. A multi-objective genetic algorithm is employed for finding a set of non-dominated solutions with respect to these two objectives. Finally we illustrate our hybrid approach by computer simulations on real-world test problems.
---
paper_title: Extracting Rules from Composite Neural Networks for Medical Diagnostic Problems
paper_content:
Recently, neural networks have been applied to many medical diagnostic problems because of their appealing properties: robustness, capability of generalization, and fault tolerance. Although the predictive accuracy of neural networks may be higher than that of traditional methods (e.g., statistical methods) or human experts, the lack of explanation from a trained neural network means that users hesitate to take the advice of a black box on faith alone. This paper presents a class of composite neural networks which are trained in such a way that the values of the network parameters can be utilized to generate If-Then rules on the basis of preselected meaningful coordinates. The concepts and methods presented in the paper are illustrated through one practical example from medical diagnosis.
---
paper_title: Evolutionary Modular Design of Rough Knowledge-based Network using Fuzzy Attributes
paper_content:
This article describes a way of integrating rough set theory with a fuzzy MLP using a modular evolutionary algorithm, for classification and rule generation in a soft computing paradigm. The novelty of the method lies in applying rough set theory to extract dependency rules directly from a real-valued attribute table consisting of fuzzy membership values. This helps in preserving all the class representative points in the dependency rules by adaptively applying a threshold that automatically takes care of the shape of the membership functions. An l-class classification problem is split into l two-class problems. Crude subnetwork modules are initially encoded from the dependency rules. These subnetworks are then combined and the final network is evolved using a GA with a restricted mutation operator which utilizes the knowledge of the modular structure already generated, for faster convergence. The GA tunes the fuzzification parameters and the network weights and structure simultaneously, by optimising a single fitness function. This methodology helps in imposing a structure on the weights, which results in a network more suitable for rule generation. The performance of the algorithm is compared with related techniques.
---
paper_title: Computational Intelligence: An Introduction
paper_content:
Computational Intelligence: An Introduction consists of a highly readable and systematic exposure of the fundamentals of computational intelligence, along with a coherent presentation of sound and comprehensive analysis and design practices. It provides a balanced introduction to computational intelligence, emphasizing equally the important analysis and design aspects of the emerging technology; the text is organized in a way that allows for its easy use as basic course material; a design-oriented approach toward the use of computational intelligence is presented; exercises and problems of different levels of difficulty follow each chapter; and complete algorithms are presented in a structured fashion, easing understanding and implementation.
---
paper_title: Staging of cervical cancer with soft computing
paper_content:
Describes a way of designing a hybrid decision support system in a soft computing paradigm for detecting the different stages of cervical cancer. Hybridization includes the evolution of knowledge-based subnetwork modules with genetic algorithms (GAs) using rough set theory and the Interactive Dichotomizer 3 (ID3) algorithm. Crude subnetworks obtained via rough set theory and the ID3 algorithm are evolved using GAs. The evolution uses a restricted mutation operator which utilizes the knowledge of the modular structure, already generated, for faster convergence. The GA tunes the network weights and structure simultaneously. The aforesaid integration enhances the performance in terms of classification score, network size and training time, as compared to the conventional multilayer perceptron. This methodology also helps in imposing a structure on the weights, which results in a network more suitable for extraction of logical rules and human interpretation of the inferencing procedure.
---
paper_title: Rough knowledge-based network, fuzziness and classification
paper_content:
A method of integrating rough sets and a fuzzy multilayer perceptron (MLP) for designing a knowledge-based network for pattern recognition problems is described. Rough set theory is used to extract crude knowledge from the input domain in the form of rules. The syntax of these rules automatically determines the optimal number of hidden nodes, while the dependency factors are used in the initial weight encoding. Results on classification of speech data demonstrate the superiority of the system over the fuzzy and conventional versions of the MLP.
---
paper_title: A genetic-based neuro-fuzzy approach for modeling and control of dynamical systems
paper_content:
Linguistic modeling of complex irregular systems constitutes the heart of many control and decision making systems, and fuzzy logic represents one of the most effective algorithms to build such linguistic models. In this paper, a linguistic (qualitative) modeling approach is proposed. The approach combines the merits of fuzzy logic theory, neural networks, and genetic algorithms (GAs). The proposed model is presented in a fuzzy-neural network (FNN) form which can handle both quantitative (numerical) and qualitative (linguistic) knowledge. The learning algorithm of an FNN is composed of three phases. The first phase is used to find the initial membership functions of the fuzzy model. In the second phase, a new algorithm is developed and used to extract the linguistic-fuzzy rules. In the third phase, a multiresolutional dynamic genetic algorithm (MRD-GA) is proposed and used for optimized tuning of the membership functions of the proposed model. Two well-known benchmarks are used to evaluate the performance of the proposed modeling approach and to compare it with other modeling approaches.
---
paper_title: Knowledge extraction and the integration by artificial life approach
paper_content:
Artificial life (A-Life) is a new paradigm for realizing the phenomena of life and extracting their hidden principles. One of the most attractive features of the A-Life approach is emergence: simple elements interact with each other based on lower-level rules, and higher-level complex phenomena can then emerge from the interaction. The paper proposes a method for knowledge extraction and integration based on an A-Life approach. The proposed system has two parts: the knowledge extraction network and the A-Life environment. The simple elements interact in the A-Life environment and the data are transferred to the knowledge extraction network. The knowledge is extracted in the form of rules in the rule layer and then fed back to the simple elements in the A-Life environment. We dealt with a path planning problem as an example of an A-Life environment. In the simulation, we assumed a severe condition: the position of the goal was unknown to the robots. Since the robots did not know the goal in the initial condition, the trajectory of the first robot that reached the goal is very complicated. The trajectory data which the robots had taken were input to the knowledge extraction network to extract rules. The trajectories become smooth step by step because of the extracted rules. We extracted various kinds of rules using several different simple environments. By using the rules extracted from the simpler environments, the robot could reach the goal in a more complex environment.
---
paper_title: Induction of fuzzy rules and membership functions from training examples
paper_content:
Most fuzzy controllers and fuzzy expert systems must predefine membership functions and fuzzy inference rules to map numeric data into linguistic variable terms and to make fuzzy reasoning work. In this paper, we propose a general learning method as a framework for automatically deriving membership functions and fuzzy if-then rules from a set of given training examples to rapidly build a prototype fuzzy expert system. Based on the membership functions and the fuzzy rules derived, a corresponding fuzzy inference procedure to process inputs is also developed.
---
paper_title: Distributed representation of fuzzy rules and its application to pattern classification
paper_content:
This paper introduces the concept of distributed representation of fuzzy rules and applies it to classification problems. Distributed representation is implemented by superimposing many fuzzy rules corresponding to different fuzzy partitions of a pattern space. This means that we simultaneously employ many fuzzy rule tables corresponding to different fuzzy partitions in fuzzy inference. In order to apply distributed representation of fuzzy rules to pattern classification problems, we first propose an algorithm to generate fuzzy rules from numerical data. Next we propose a fuzzy inference method using the generated fuzzy rules. The classification power of distributed representation is compared with that of ordinary fuzzy rules, which can be viewed as local representation.
---
paper_title: Fuzzy rules extraction directly from numerical data for function approximation
paper_content:
In our previous work (1993) we developed a method for extracting fuzzy rules directly from numerical input-output data for pattern classification. In this paper we extend the method to function approximation. For function approximation, first the universe of discourse of an output variable is divided into multiple intervals, and each interval is treated as a class. Then, as for pattern classification, using the input data for each interval, fuzzy rules are recursively defined by activation hyperboxes, which show the existence region of the data for the interval, and inhibition hyperboxes, which inhibit the existence region of data for that interval. The approximation accuracy of the fuzzy system derived by this method is empirically studied using an operation learning application in a water purification plant. Additionally, we compare the approximation performance of the fuzzy system with a function approximation approach based on neural networks.
---
paper_title: Efficient fuzzy partition of pattern space for classification problems
paper_content:
This paper proposes an efficient fuzzy partition method of a pattern space for classification problems. The proposed method is based on the sequential subdivision of fuzzy subspaces, and the generated fuzzy subspaces have different sizes. In the proposed method, first an n-dimensional pattern space is divided into 2^n fuzzy subspaces of the same size. Next, one of the fuzzy subspaces is selected and subdivided into 2^n fuzzy subspaces. This procedure is iterated until a stopping condition is satisfied. Some criteria for selecting the fuzzy subspace to be subdivided are proposed and compared with each other by computer simulations. The proposed method is also compared with other fuzzy classification methods.
---
paper_title: Fuzzy sets of rules for system identification
paper_content:
The synthesis of fuzzy systems involves the identification of a structure and its specialization by means of parameter optimization. In doing this, symbolic approaches which encode the structure information in the form of high-level rules allow further manipulation of the system to minimize its complexity, and possibly its implementation cost, while all-parametric methodologies often achieve better approximation performance. In this paper, we rely on the concept of a fuzzy set of rules to tackle the rule induction problem at an intermediate level. An online adaptive algorithm is developed which almost surely learns the extent to which inclusion of a rule in the rule set significantly contributes to the reproduction of the target behavior. Then, the resulting fuzzy set of rules can be defuzzified to give a conventional rule set with similar behavior. Comparisons with high-level and low-level methodologies show that this approach retains the most positive features of both.
---
paper_title: Generating fuzzy rules by learning from examples
paper_content:
A general method is developed for generating fuzzy rules from numerical data. The method consists of five steps: dividing the input and output spaces of the given numerical data into fuzzy regions; generating fuzzy rules from the given data; assigning a degree to each of the generated rules for the purpose of resolving conflicts among the generated rules; creating a combined fuzzy-associative-memory (FAM) bank based on both the generated rules and linguistic rules of human experts; and determining a mapping from input space to output space based on the combined FAM bank using a defuzzifying procedure. The mapping is proved to be capable of approximating any real continuous function on a compact set to arbitrary accuracy. The method is applied to predicting a chaotic time series.
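A minimal sketch of the rule-generation and conflict-resolution steps, assuming triangular fuzzy regions on a common universe; the combination with expert linguistic rules and the defuzzification step are omitted for brevity.

```python
# Each input-output pair votes for the fuzzy region of highest membership; conflicting
# rules (same antecedent) are resolved by keeping the rule with the highest degree.
import numpy as np

def tri_memberships(v, lo, hi, n_regions=5):
    centers = np.linspace(lo, hi, n_regions)
    width = centers[1] - centers[0]
    return np.clip(1.0 - np.abs(v - centers) / width, 0.0, None)

def generate_rules(X, y, lo, hi, n_regions=5):
    rules = {}   # antecedent region tuple -> (consequent region, degree)
    for xi, yi in zip(X, y):
        ante, degree = [], 1.0
        for v in xi:
            mu = tri_memberships(v, lo, hi, n_regions)
            ante.append(int(mu.argmax()))
            degree *= mu.max()
        mu_y = tri_memberships(yi, lo, hi, n_regions)
        cons = int(mu_y.argmax())
        degree *= mu_y.max()
        key = tuple(ante)
        if key not in rules or degree > rules[key][1]:
            rules[key] = (cons, degree)           # conflict resolution by degree
    return rules

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 2))
y = 0.5 * (X[:, 0] + X[:, 1])
for ante, (cons, deg) in list(generate_rules(X, y, -1, 1).items())[:5]:
    print("IF x1 is A%d AND x2 is A%d THEN y is B%d  (degree %.2f)" % (ante[0], ante[1], cons, deg))
```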
---
paper_title: Simultaneous design of membership functions and rule sets for fuzzy controllers using genetic algorithms
paper_content:
This paper examines the applicability of genetic algorithms (GA's) in the simultaneous design of membership functions and rule sets for fuzzy logic controllers. Previous work using genetic algorithms has focused on the development of rule sets or high performance membership functions; however, the interdependence between these two components suggests a simultaneous design procedure would be a more appropriate methodology. When GA's have been used to develop both, it has been done serially, e.g., design the membership functions and then use them in the design of the rule set. This, however, means that the membership functions were optimized for the initial rule set and not the rule set designed subsequently. GA's are fully capable of creating complete fuzzy controllers given the equations of motion of the system, eliminating the need for human input in the design loop. This new method has been applied to two problems, a cart controller and a truck controller. Beyond the development of these controllers, we also examine the design of a robust controller for the cart problem and its ability to overcome faulty rules.
---
paper_title: Extracting fuzzy rules for system modeling using a hybrid of genetic algorithms and Kalman filter
paper_content:
This paper proposes a hybrid algorithm for extracting important fuzzy rules from a given rule base to construct a "parsimonious" fuzzy model with a high generalization ability. This algorithm combines the advantages of the genetic algorithm's strong search capacity and the Kalman filter's fast convergence. Each random combination of the rules in the rule base is coded into a binary string and treated as a chromosome in the genetic algorithm. The binary string indicates the structure of a fuzzy model. The parameters of the model are then estimated using the Kalman filter. In order to achieve a trade-off between the accuracy and the complexity of a fuzzy model, the Schwarz-Rissanen criterion is used as an evaluation function in the hybrid algorithm. The practical applicability of the proposed algorithm is examined by computer simulations on a human operator modeling problem and a nonlinear system modeling problem.
---
paper_title: Selecting fuzzy if-then rules for classification problems using genetic algorithms
paper_content:
This paper proposes a genetic-algorithm-based method for selecting a small number of significant fuzzy if-then rules to construct a compact fuzzy classification system with high classification power. The rule selection problem is formulated as a combinatorial optimization problem with two objectives: to maximize the number of correctly classified patterns and to minimize the number of fuzzy if-then rules. Genetic algorithms are applied to this problem. A set of fuzzy if-then rules is coded into a string and treated as an individual in genetic algorithms. The fitness of each individual is specified by the two objectives in the combinatorial optimization problem. The performance of the proposed method for training data and test data is examined by computer simulations on the iris data of Fisher.
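A minimal sketch of the chromosome encoding and two-objective fitness, here scalarized as a weighted sum rather than a full multi-objective GA; the candidate rule coverage table and all GA settings are illustrative stand-ins, not the paper's fuzzy rule base.

```python
# A binary string marks which candidate rules are kept; fitness rewards correctly
# classified patterns and penalizes the number of selected rules.
import numpy as np

rng = np.random.default_rng(0)
n_rules, n_patterns = 20, 100

# Hypothetical "rule classifies pattern correctly" table, for illustration only.
covers = rng.random((n_rules, n_patterns)) < 0.15

def fitness(chromosome, w_ncp=10.0, w_size=1.0):
    selected = covers[chromosome.astype(bool)]
    ncp = np.count_nonzero(selected.any(axis=0))      # correctly classified patterns
    return w_ncp * ncp - w_size * chromosome.sum()    # maximize accuracy, minimize rules

def simple_ga(pop_size=30, generations=50, p_mut=0.05):
    pop = rng.random((pop_size, n_rules)) < 0.5
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]     # truncation selection
        cuts = rng.integers(1, n_rules, size=parents.shape[0])
        children = np.array([np.concatenate([a[:c], b[c:]])      # one-point crossover
                             for a, b, c in zip(parents, np.roll(parents, 1, axis=0), cuts)])
        children ^= rng.random(children.shape) < p_mut           # bit-flip mutation
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    best = pop[int(np.argmax(scores))]
    return best, fitness(best)

best, score = simple_ga()
print("selected rules:", int(best.sum()), "fitness:", score)
```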
---
paper_title: Genetic-based new fuzzy reasoning models with application to fuzzy control
paper_content:
The successful application of fuzzy reasoning models to fuzzy control systems depends on a number of parameters, such as fuzzy membership functions, that are usually decided upon subjectively. It is shown in this paper that the performance of fuzzy control systems may be improved if the fuzzy reasoning model is supplemented by a genetic-based learning mechanism. The genetic algorithm enables us to generate an optimal set of parameters for the fuzzy reasoning model based either on their initial subjective selection or on a random selection. It is shown that if knowledge of the domain is available, it is exploited by the genetic algorithm leading to an even better performance of the fuzzy controller.
---
paper_title: Fuzzy control of pH using genetic algorithms
paper_content:
Establishing suitable control of pH, a requirement in a number of mineral and chemical industries, poses a difficult problem because of inherent nonlinearities and frequently changing process dynamics. Researchers at the U.S. Bureau of Mines have developed a technique for producing adaptive fuzzy logic controllers (FLC's) that are capable of effectively managing such systems. In this technique, a genetic algorithm (GA) alters the membership functions employed by a conventional FLC, an approach that is contrary to the tactic generally used to provide FLC's with adaptive capabilities, in which the rule set is altered. GA's are search algorithms based on the mechanics of natural genetics that are able to rapidly locate near-optimal solutions to difficult problems. The Bureau-developed technique is used to produce an adaptive GA-FLC for a laboratory acid-base experiment. Nonlinearities in the laboratory system are associated with the logarithmic pH scale (pH is proportional to the logarithm of H3O+ ions) and changing process dynamics are introduced by altering system parameters such as the desired set point and the concentration and buffering capacity of input solutions. Results indicate that FLC's augmented with GA's offer a powerful alternative to conventional process control techniques in the nonlinear, rapidly changing pH systems commonly found in industry.
---
paper_title: Knowledge-based connectionism for revising domain theories
paper_content:
A knowledge-based connectionist model for machine learning referred to as KBCNN is presented. In the KBCNN learning model, useful domain attributes and concepts are first identified and linked in a way consistent with initial domain knowledge, and then the links are weighted properly so as to maintain the semantics. Hidden units and additional connections may be introduced into this initial connectionist structure as appropriate. Then, this primitive structure evolves to minimize empirical error. The KBCNN learning model allows the theory learned or revised to be translated into the symbolic rule-based language that describes the initial theory. Thus, a domain theory can be pushed onto the network, revised empirically over time, and decoded in symbolic form. The domain of molecular genetics is used to demonstrate the validity of the KBCNN learning model and its superiority over related learning methods.
---
paper_title: NeuroLinear: From neural networks to oblique decision rules
paper_content:
We present NeuroLinear, a system for extracting oblique decision rules from neural networks that have been trained for classification of patterns. Each condition of an oblique decision rule corresponds to a partition of the attribute space by a hyperplane that is not necessarily axis-parallel. Allowing a set of such hyperplanes to form the boundaries of the decision regions leads to a significant reduction in the number of rules generated while maintaining the accuracy rates of the networks. We describe the components of NeuroLinear in detail by way of two examples using artificial datasets. Our experimental results on real-world datasets show that the system is effective in extracting compact and comprehensible rules with high predictive accuracy from neural networks.
---
paper_title: Extraction of Logical Rules from Neural Networks
paper_content:
Three neural-based methods for extraction of logical rules from data are presented. These methods facilitate the conversion of graded-response neural networks into networks performing logical functions. The MLP2LN method tries to convert a standard MLP into a network performing logical operations (LN). C-MLP2LN is a constructive algorithm creating such MLP networks. Logical interpretation is assured by adding constraints to the cost function, forcing the weights to ±1 or 0. Skeletal networks emerge, ensuring that a minimal number of logical rules are found. In both methods, rules covering many training examples are generated before more specific rules covering exceptions. The third method, FSM2LN, is based on probability density estimation. Several examples of the performance of these methods are presented.
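A minimal sketch of the kind of constraint term described here: a penalty that vanishes only at weights of -1, 0, or +1, so gradient descent on the augmented cost pushes the network toward a skeletal, logically interpretable form. The coefficients and the penalty-only update are illustrative assumptions.

```python
# Gradient descent on a regularizer that attracts weights to {-1, 0, +1}.
import numpy as np

def logical_penalty(w, lam1=1e-3, lam2=1e-2):
    return lam1 * np.sum(w ** 2) + lam2 * np.sum(w ** 2 * (w - 1.0) ** 2 * (w + 1.0) ** 2)

def logical_penalty_grad(w, lam1=1e-3, lam2=1e-2):
    core = w * (w - 1.0) * (w + 1.0)
    return 2 * lam1 * w + 2 * lam2 * core * (3 * w ** 2 - 1.0)

w = np.array([0.9, -0.2, 0.05, -1.1])
for _ in range(2000):
    w -= 0.05 * logical_penalty_grad(w)       # gradient step on the penalty alone
print(np.round(w, 3))                         # weights settle near {-1, 0, +1}
```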
---
paper_title: Cascade ARTMAP: Integrating Neural Computation and Symbolic Knowledge Processing
paper_content:
This paper introduces a hybrid system termed cascade adaptive resonance theory mapping (ARTMAP) that incorporates symbolic knowledge into neural-network learning and recognition. Cascade ARTMAP, a generalization of fuzzy ARTMAP, represents intermediate attributes and rule cascades of rule-based knowledge explicitly and performs multistep inferencing. A rule insertion algorithm translates if-then symbolic rules into the cascade ARTMAP architecture. Besides the fact that initializing networks with prior knowledge can improve predictive accuracy and learning efficiency, the inserted symbolic knowledge can be refined and enhanced by the cascade ARTMAP learning algorithm. By preserving the symbolic rule form during learning, the rules extracted from cascade ARTMAP can be compared directly with the originally inserted rules. Simulations on an animal identification problem indicate that a priori symbolic knowledge always improves system performance, especially with a small training set. A benchmark study on a DNA promoter recognition problem shows that, with the added advantage of fast learning, the cascade ARTMAP rule insertion and refinement algorithms produce performance superior to that of other machine learning systems and of an alternative hybrid system known as the knowledge-based artificial neural network (KBANN). Also, the rules extracted from cascade ARTMAP are more accurate and much cleaner than the NofM rules extracted from KBANN.
---
paper_title: C4.5: Programs for Machine Learning
paper_content:
Classifier systems play a major role in machine learning and knowledge-based systems, and Ross Quinlan's work on ID3 and C4.5 is widely acknowledged to have made some of the most significant contributions to their development. This book is a complete guide to the C4.5 system as implemented in C for the UNIX environment. It contains a comprehensive guide to the system's use, the source code (about 8,800 lines), and implementation notes. The source code and sample datasets are also available on a 3.5-inch floppy diskette for a Sun workstation. C4.5 starts with large sets of cases belonging to known classes. The cases, described by any mixture of nominal and numeric properties, are scrutinized for patterns that allow the classes to be reliably discriminated. These patterns are then expressed as models, in the form of decision trees or sets of if-then rules, that can be used to classify new cases, with emphasis on making the models understandable as well as accurate. The system has been applied successfully to tasks involving tens of thousands of cases described by hundreds of properties. The book starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting. Advantages and disadvantages of the C4.5 approach are discussed and illustrated with several case studies. This book and software should be of interest to developers of classification-based intelligent systems and to students in machine learning and expert systems courses.
---
paper_title: Structural learning with forgetting
paper_content:
It is widely known that, despite its popularity, back propagation learning suffers from various difficulties. There have been many studies aiming at solving these. Among them is a class of learning algorithms, which I call structural learning, aiming at small-sized networks requiring less computational cost. Still more important is the discovery of regularities in, or the extraction of rules from, training data. For this purpose I propose a learning method called structural learning with forgetting. It is applied to various examples: the discovery of Boolean functions, classification of irises, discovery of recurrent networks, prediction of time series, and rule extraction from mushroom data. These results demonstrate the effectiveness of structural learning with forgetting. A comparative study on various structural learning methods also supports its effectiveness.
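A minimal numpy sketch of the "forgetting" idea on a linear model: the usual gradient step is followed by a constant decay of each weight toward zero, so unnecessary connections fade away and a skeletal structure remains; the decay constant and toy data are illustrative assumptions.

```python
# Learning with forgetting: gradient descent plus a constant decay toward zero.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] - 2.0 * X[:, 2]                   # only two of the five inputs are relevant

w = rng.normal(size=5)
lr, forget = 0.01, 0.001                      # decay constant chosen for illustration
for _ in range(2000):
    grad = 2.0 / X.shape[0] * X.T @ (X @ w - y)   # gradient of the squared error
    w -= lr * grad
    w -= forget * np.sign(w)                      # forgetting: decay toward zero
print(np.round(w, 3))    # weights on irrelevant inputs shrink toward zero
```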
---
paper_title: Learning in certainty-factor-based multilayer neural networks for classification
paper_content:
The computational framework of rule-based neural networks inherits from the neural network and the inference engine of an expert system. In one approach, the network activation function is based on the certainty factor (CF) model of MYCIN-like systems. In this paper, it is shown theoretically that the neural network using the CF-based activation function requires relatively small sample sizes for correct generalization. This result is also confirmed by empirical studies in several independent domains.
---
paper_title: A new rule extraction method from neural networks
paper_content:
This paper presents a method of extracting rules from multilayered neural networks (NNs) formed using a random optimization (search) method (ROM). The objective of this study is to extract rules from NNs while achieving 100% recognition accuracy in a pattern recognition system. The NNs from which rules are to be extracted are formed using ROM. A hybrid algorithm of NN and ROM forms a small-sized NN system which is suitable for rule extraction. In this paper, iris data are used as inputs. ROM is utilized to reduce the number of connection weights in the NN. The network weights that survive the ROM training represent regularities for performing pattern classification. The rules are then extracted from the networks, in which hidden units use signum and sigmoid functions to produce binary outputs. This enables us to extract simple logical functions from the network. By means of computer simulation, the effectiveness of this approach is examined.
---
paper_title: Survey and critique of techniques for extracting rules from trained artificial neural networks
paper_content:
It is becoming increasingly apparent that, without some form of explanation capability, the full potential of trained artificial neural networks (ANNs) may not be realised. This survey gives an overview of techniques developed to redress this situation. Specifically, the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy.
---
paper_title: Connectionist expert systems
paper_content:
Connectionist networks can be used as expert system knowledge bases. Furthermore, such networks can be constructed from training examples by machine learning techniques. This gives a way to automate the generation of expert systems for classification problems.
---
paper_title: Learning rules for neuro-controller via simultaneous perturbation
paper_content:
This paper describes learning rules using simultaneous perturbation for a neurocontroller that controls an unknown plant. When we apply a direct control scheme with a neural network, the neural network must learn an inverse system of the unknown plant. In this case, if a gradient-type method is used as the learning rule of the neural network, we must know the sensitivity function of the plant. On the other hand, the learning rules described here do not require information about the sensitivity function. Some numerical simulations of a two-link planar arm and a tracking problem for a nonlinear dynamic plant are shown.
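A minimal sketch of a simultaneous-perturbation gradient estimate: all weights are perturbed at once with random signs, and the observed change in a scalar cost replaces the unknown sensitivity function. The quadratic cost and the two-sided difference used here are illustrative simplifications of the paper's control setting.

```python
# Simultaneous-perturbation update: no explicit gradient of the plant is needed.
import numpy as np

rng = np.random.default_rng(0)

def cost(w):                       # assumed observable performance measure (toy example)
    return np.sum((w - np.array([1.0, -2.0, 0.5])) ** 2)

w = np.zeros(3)
a, c = 0.1, 0.05                   # step size and perturbation magnitude
for _ in range(500):
    delta = rng.choice([-1.0, 1.0], size=w.size)
    g_hat = (cost(w + c * delta) - cost(w - c * delta)) / (2 * c) * (1.0 / delta)
    w -= a * g_hat
print(np.round(w, 3))              # converges near the optimum without explicit gradients
```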
---
paper_title: A search technique for rule extraction from trained neural networks
paper_content:
Search methods for rule extraction from neural networks work by finding those combinations of inputs that make the neuron active. By sorting the input weights to a neuron and ordering the weights suitably, it is possible to prune the search space. Based on this observation, we present an algorithm for rule extraction from feedforward neural networks with Boolean inputs and analyze its properties.
---
paper_title: Knowledge-Based Artificial Neural Networks
paper_content:
Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for accurately classifying examples not seen during training. The challenge of hybrid learning systems is to use the information provided by one source of information to offset information missing from the other source. By so doing, a hybrid learning system should learn more effectively than systems that use only one of the information sources. KBANN (Knowledge-Based Artificial Neural Networks) is a hybrid learning system built on top of connectionist learning techniques. It maps problem-specific "domain theories", represented in propositional logic, into neural networks and then refines this reformulated knowledge using backpropagation. KBANN is evaluated by extensive empirical tests on two problems from molecular biology. Among other results, these tests show that the networks created by KBANN generalize better than a wide variety of learning systems, as well as several techniques proposed by biologists.
---
paper_title: Extracting Rules from Neural Networks by Pruning and Hidden-Unit Splitting
paper_content:
An algorithm for extracting rules from a standard three-layer feedforward neural network is proposed. The trained network is first pruned, not only to remove redundant connections in the network but, more importantly, to detect the relevant inputs. The algorithm generates rules from the pruned network by considering only a small number of activation values at the hidden units. If the number of inputs connected to a hidden unit is sufficiently small, then rules that describe how each of its activation values is obtained can be readily generated. Otherwise the hidden unit is split and treated as a set of output units, with each output unit corresponding to an activation value. A hidden layer is inserted and a new subnetwork is formed, trained, and pruned. This process is repeated until every hidden unit in the network has a relatively small number of input units connected to it. Examples of how the proposed algorithm works are shown using real-world data arising from molecular biology and signal processing. Our re...
---
paper_title: Medical diagnostic expert system based on PDP model
paper_content:
The applicability of PDP (parallel distributed processing) models to knowledge processing is clarified. The authors evaluate the diagnostic capabilities of a prototype medical diagnostic expert system based on a multilayer network. After having been trained on only 300 patients, the prototype system shows diagnostic capabilities almost equivalent to those of a symbolic expert system. Symbolic knowledge is extracted from what the multilayer network has learned. The extracted knowledge is compared with doctors' knowledge. Moreover, a method to extract rules from the network and usage of the rules in a confirmation process are proposed.
---
paper_title: Extracting M-of-N Rules from Trained Neural Networks
paper_content:
An effective algorithm for extracting M-of-N rules from trained feedforward neural networks is proposed. First, we train a network where each input of the data can only take one of two possible values, -1 or 1. Next, we apply the hyperbolic tangent function to each connection from the input layer to the hidden layer of the network. By applying this squashing function, the activation values at the hidden units are effectively computed as the hyperbolic tangent (or the sigmoid) of the weighted inputs, where the weights have magnitudes equal to one. By restricting the inputs and the weights to binary values, either -1 or 1, the extraction of M-of-N rules from the networks becomes trivial. We demonstrate the effectiveness of the proposed algorithm on several widely tested datasets. For datasets consisting of thousands of patterns with many attributes, the rules extracted by the algorithm are simple and accurate.
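A minimal sketch of why the restriction to ±1 weights and binary inputs makes M-of-N extraction trivial: the unit fires exactly when at least M of its N literals are satisfied. The conversion formula and the numerical values below are illustrative.

```python
# With +/-1 weights and inputs, each satisfied literal contributes +1 and each
# unsatisfied one -1, so firing (weighted sum + bias > 0) reduces to an M-of-N test.
import numpy as np

def m_of_n_fires(x, weights, bias):
    """x, weights in {-1, +1}; the unit fires when the weighted sum plus bias is positive."""
    return int(np.dot(weights, x) + bias > 0)

def as_m_of_n(weights, bias):
    """Translate a +/-1 weight vector and bias into an 'at least M of N literals' rule."""
    n = len(weights)
    # With s satisfied literals the sum is 2s - n, so firing <=> s > (n - bias) / 2.
    m = int(np.floor((n - bias) / 2.0)) + 1
    return m, n

weights = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
bias = -1.0
m, n = as_m_of_n(weights, bias)
print(f"unit fires iff at least {m} of its {n} literals are true")
x = np.array([1, -1, 1, -1, -1])   # satisfies four of the five literals
print("fires:", m_of_n_fires(x, weights, bias))
```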
---
paper_title: FERNN: An Algorithm for Fast Extraction of Rules from Neural Networks
paper_content:
Before symbolic rules are extracted from a trained neural network, the network is usually pruned so as to obtain more concise rules. Typical pruning algorithms require retraining the network, which incurs additional cost. This paper presents FERNN, a fast method for extracting rules from trained neural networks without network retraining. Given a fully connected trained feedforward network with a single hidden layer, FERNN first identifies the relevant hidden units by computing their information gains. For each relevant hidden unit, its activation values are divided into two subintervals such that the information gain is maximized. FERNN finds the set of relevant network connections from the input units to this hidden unit by checking the magnitudes of their weights. The connections with large weights are identified as relevant. Finally, FERNN generates rules that distinguish the two subintervals of the hidden activation values in terms of the network inputs. Experimental results show that the size and the predictive accuracy of the tree generated are comparable to those extracted by another method which prunes and retrains the network.
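The relevance test and the activation-splitting step can be sketched as a plain information-gain computation; the hidden-unit activations and class labels below are invented.

import numpy as np

def entropy(labels):
    """Shannon entropy of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(activations, labels):
    """Find the threshold on one hidden unit's activation values that
    maximises information gain, as in the relevance test sketched above."""
    order = np.argsort(activations)
    a, y = np.asarray(activations)[order], np.asarray(labels)[order]
    base = entropy(y)
    best_gain, best_thr = 0.0, None
    for i in range(1, len(a)):
        if a[i] == a[i - 1]:
            continue
        thr = 0.5 * (a[i] + a[i - 1])
        left, right = y[:i], y[i:]
        cond = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if base - cond > best_gain:
            best_gain, best_thr = base - cond, thr
    return best_gain, best_thr

# Invented hidden-unit activations for six training patterns.
acts = [-0.9, -0.7, -0.2, 0.3, 0.8, 0.95]
labels = [0, 0, 0, 1, 1, 1]
print(best_split(acts, labels))   # high gain: this unit is relevant for the class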
---
paper_title: Grammatical Inference using an Adaptive Recurrent Neural Network
paper_content:
In this study, we propose an adaptive recurrent neural network that is capable of inferring a regular grammar and, at the same time, of extracting the underlying grammatical rules emulated by a finite-state automaton. Our proposed network adapts from an initial analog phase, which has good training behavior, to a discrete phase for automatic rule extraction. A modified objective function is proposed to accomplish the discretisation process as well as logic learning. Comparison on learning the Tomita grammars shows that our network has a significant advantage over other approaches.
---
paper_title: Rule-extraction by backpropagation of polyhedra.
paper_content:
The core problem of rule-extraction from feed-forward networks is an inversion problem. In this article, we solve this inversion problem by backpropagating unions of polyhedra. We obtain as a by-product a new rule-extraction technique for which the fidelity of the extracted rules can be made arbitrarily high.
---
paper_title: Extraction of Rules from Discrete-time Recurrent Neural Networks
paper_content:
The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFAs) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract different finite-state automata that are consistent with a training set from the same network. We compare the generalization performances of these different models and the trained network and we introduce a heuristic that permits us to choose among the consistent DFAs the model which best approximates the learned regular grammar.
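A simplified sketch of the clustering-based read-out, using scikit-learn's KMeans as a stand-in for whatever clustering algorithm is applied to the recurrent state space; the hidden states and the input string are invented, and the consistency check against the training set is omitted.

import numpy as np
from sklearn.cluster import KMeans

def extract_dfa(hidden_states, symbols, n_states=4):
    """Quantise recorded recurrent hidden states into clusters and read off a
    transition table. hidden_states[t] is the state vector reached after the
    network has consumed symbols[t]; both come from one recorded run."""
    labels = KMeans(n_clusters=n_states, n_init=10,
                    random_state=0).fit_predict(np.asarray(hidden_states))
    transitions = {}
    for t in range(1, len(labels)):
        # keep the first observed transition for each (state, symbol) pair
        transitions.setdefault((int(labels[t - 1]), symbols[t]), int(labels[t]))
    return int(labels[0]), transitions

# Toy usage with invented 2-D hidden states recorded while reading "abab".
H = [[0.10, 0.90], [0.80, 0.20], [0.15, 0.85], [0.82, 0.18]]
start, delta = extract_dfa(H, list("abab"), n_states=2)
print(start, delta)   # e.g. 0 {(0, 'b'): 1, (1, 'a'): 0} (cluster labels may swap)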
---
paper_title: The truth will come to light: directions and challenges in extracting the knowledge embedded within trained artificial neural networks
paper_content:
To date, the preponderance of techniques for eliciting the knowledge embedded in trained artificial neural networks (ANNs) has focused primarily on extracting rule-based explanations from feedforward ANNs. The ADT taxonomy for categorizing such techniques was proposed in 1995 to provide a basis for the systematic comparison of the different approaches. This paper shows not only that this taxonomy is applicable to a cross section of current techniques for extracting rules from trained feedforward ANNs but also how the taxonomy can be adapted and extended to embrace a broader range of ANN types (e.g., recurrent neural networks) and explanation structures. In addition, we identify some of the key research questions in extracting the knowledge embedded within ANNs, including the need for the formulation of a consistent theoretical basis for what has been, until recently, a disparate collection of empirical results.
---
paper_title: Rule extraction from recurrent neural networks using a symbolic machine learning algorithm
paper_content:
This paper addresses the extraction of knowledge from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods used to extract knowledge from such networks have relied on the hypothesis that network states tend to cluster and that clusters of network states correspond to DFA states. The computational complexity of such a cluster analysis has led to heuristics which either limit the number of clusters that may form during training or limit the exploration of the output space of hidden recurrent state neurons. These limitations, while necessary, may lead to reduced fidelity, i.e., the extracted knowledge may not model the true behavior of a trained network, perhaps not even for the training set. The method proposed uses a polynomial-time symbolic learning algorithm to infer DFAs solely from the observation of a trained network's input/output behavior. Thus, this method has the potential to increase the fidelity of the extracted knowledge.
---
paper_title: A Neural Expert System with Automated Extraction of Fuzzy If-Then Rules and Its Application to Medical Diagnosis
paper_content:
This paper proposes a fuzzy neural expert system (FNES) with the following two functions: (1) Generalization of the information derived from the training data and embodiment of knowledge in the form of the fuzzy neural network; (2) Extraction of fuzzy If-Then rules with linguistic relative importance of each proposition in an antecedent (If-part) from a trained neural network. This paper also gives a method to extract automatically fuzzy If-Then rules from the trained neural network. To prove the effectiveness and validity of the proposed fuzzy neural expert system, a fuzzy neural expert system for medical diagnosis has been developed.
---
paper_title: Fuzzy self-organization, inferencing, and rule generation
paper_content:
A connectionist inferencing network, based on the fuzzy version of Kohonen's model already developed by the authors, is proposed. It is capable of handling uncertainty and/or impreciseness in the input representation provided in quantitative, linguistic and/or set forms. The output class membership value of an input pattern is inferred by the trained network. A measure of certainty expressing confidence in the decision is also defined. The model is capable of querying the user for the more important input feature information, if required, in case of partial inputs. Justification for an inferred decision may be produced in rule form, when so desired by the user. The connection weight magnitudes of the trained neural network are utilized in every stage of the proposed inferencing procedure. The antecedent and consequent parts of the justificatory rules are provided in natural forms. The effectiveness of the algorithm is tested on the vowel recognition problem and on two sets of artificially generated nonconvex pattern classes.
---
paper_title: Connectionist expert systems
paper_content:
Connectionist networks can be used as expert system knowledge bases. Furthermore, such networks can be constructed from training examples by machine learning techniques. This gives a way to automate the generation of expert systems for classification problems.
---
paper_title: Neural expert system using fuzzy teaching input and its application to medical diagnosis
paper_content:
Abstract Genetic algorithms (GAs) are inspired by Darwin's theory of the survival of the fittest. This paper discusses a genetic algorithm that can automatically generate test cases to test a selected path. This algorithm takes a selected path as a target and executes sequences of operators iteratively for test cases to evolve. The evolved test case will lead the program execution to achieve the target path. To determine which test cases should survive to produce the next generation of fitter test cases, a metric named normalized extended Hamming distance (NEHD, which is used to determine whether the final test case is found) is developed. Based on NEHD, a fitness function named SIMILARITY is defined to determine which test cases should survive if the final test case has not been found. Even when there are loops in the target path, SIMILARITY can help the algorithm to lead the execution to flow along the target path.
---
paper_title: Use of fuzzy-logic-inspired features to improve bacterial recognition through classifier fusion
paper_content:
Escherichia coli O157:H7 has been found to cause serious health problems. Traditional methods to identify the organism are quite slow; pulsed-field gel electrophoresis (PFGE) images contain "banding pattern" information which can be used to recognize the bacteria. A fuzzy logic rule-based system is used as a guide to find a good feature set for the recognition of E. coli O157:H7. While the fuzzy rule-based system achieved good recognition, the human-inspired features used in the rules were incorporated into a multiple neural network fusion approach which gave excellent separation of the target bacteria. The fuzzy integral was utilized in the fusion of neural networks trained with different feature sets to reach an almost perfect classification rate of E. coli O157:H7 PFGE patterns made available for the experiments.
---
paper_title: Use of neural network techniques in a medical expert system
paper_content:
Expert systems in medicine have relied heavily upon knowledge-based techniques, in which decision making rules or strategies are derived through consultation with experts. These techniques, coupled with methods of approximate reasoning, have produced systems which model the human decision making process. This approach has the disadvantage of requiring extensive interviewing of experts for each new application. It is desirable to be able to supplement this information by extracting information directly from databases, without expert intervention. In this article, a neural network model is used to extract this information, which is then used in conjunction with rule-based knowledge, incorporating techniques of approximate reasoning.
---
paper_title: Review: Neuro-fuzzy computing for image processing and pattern recognition
paper_content:
The relevance of integration of the merits of fuzzy set theory and neural network models for designing an efficient decision making system is explained. The feasibility of such systems and different ways of integration, so far made, in the context of image processing and pattern recognition are described. Scope for further research and development is outlined. An extensive bibliography is also provided.
---
paper_title: Fuzzy basis functions, universal approximation, and orthogonal least-squares learning.
paper_content:
Fuzzy systems are represented as series expansions of fuzzy basis functions which are algebraic superpositions of fuzzy membership functions. Using the Stone-Weierstrass theorem, it is proved that linear combinations of the fuzzy basis functions are capable of uniformly approximating any real continuous function on a compact set to arbitrary accuracy. Based on the fuzzy basis function representations, an orthogonal least-squares (OLS) learning algorithm is developed for designing fuzzy systems based on given input-output pairs; then, the OLS algorithm is used to select significant fuzzy basis functions which are used to construct the final fuzzy system. The fuzzy basis function expansion is used to approximate a controller for the nonlinear ball and beam system, and the simulation results show that the control performance is improved by incorporating some common-sense fuzzy control rules.
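The following sketch fits a fuzzy basis function expansion with Gaussian membership functions by plain least squares; the centres, spread and target function are invented, and the orthogonal least-squares selection of significant basis functions is replaced by a fit over all of them to keep the example short.

import numpy as np

# Fuzzy basis functions: normalised Gaussian memberships (illustrative centres).
centers = np.linspace(-3, 3, 7)
sigma = 1.0

def fuzzy_basis(x):
    """Return the normalised Gaussian membership values of scalar inputs x."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    mu = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return mu / mu.sum(axis=1, keepdims=True)   # each row sums to 1

# Fit the linear combination of basis functions to sample data by least squares.
x_train = np.linspace(-3, 3, 50)
y_train = np.sin(x_train)                 # invented target function
Phi = fuzzy_basis(x_train)
theta, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

y_hat = fuzzy_basis([0.5]) @ theta
print(float(y_hat[0]), np.sin(0.5))       # the expansion approximates sin(x)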
---
paper_title: Constructing fuzzy model by self-organizing counterpropagation network
paper_content:
This paper describes a general and systematic approach to constructing a multivariable fuzzy model from numerical data through a self-organizing counterpropagation network (SOCPN). Two self-organizing algorithms, USOCPN and SSOCPN, being unsupervised and supervised respectively, are introduced. SOCPN can be employed in two ways. In the first place, it can be used as a knowledge extractor by which a set of rules is generated from the available numerical data set. The generated rule-base is then utilized by a fuzzy reasoning model. The second use of the SOCPN is as an online adaptive fuzzy model in which the rule-base in terms of connection weights is updated successively in response to the incoming measured data. The comparative results on three well studied examples suggest that the method has merits of simple structure, fast learning speed, and good modeling accuracy.
---
paper_title: Nonlinear system modeling by competitive learning and adaptive fuzzy inference system
paper_content:
Modeling nonlinear systems by neural networks and fuzzy systems encounters problems such as the conflict between overfitting and good generalization and low reliability, which requires a great number of fuzzy rules or neural nodes and uses very complicated learning algorithms. A new adaptive fuzzy inference system, combined with a learning algorithm, is proposed to cope with these problems. First, the algorithm partitions the input space into some local regions by competitive learning, then it determines the decision boundaries for local input regions, and finally, based on the decision boundaries, it learns the fuzzy rule for each local region by recursive least squares (RLS). In the learning algorithm, the key role of the decision boundaries is highly emphasized. To demonstrate the validity of the proposed learning approach and the new adaptive fuzzy inference system, four examples are studied by the proposed method and compared with the previous results.
---
paper_title: A learning algorithm of fuzzy neural networks with triangular fuzzy weights
paper_content:
Abstract In this paper, first we propose an architecture of fuzzy neural networks with triangular fuzzy weights. The proposed fuzzy neural network can handle fuzzy input vectors as well as real input vectors. In both cases, outputs from the fuzzy neural network are fuzzy vectors. The input-output relation of each unit of the fuzzy neural network is defined by the extension principle of Zadeh. Next we define a cost function for the level sets (i.e., α-cuts) of fuzzy outputs and fuzzy targets. Then we derive a learning algorithm from the cost function for adjusting three parameters of each triangular fuzzy weight. Finally, we illustrate our approach by computer simulations on numerical examples.
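The level-set (α-cut) computations behind such a network reduce to interval arithmetic; the sketch below propagates one invented fuzzy input through one invented triangular fuzzy weight at a few α levels, which is only the innermost piece of a full fuzzified forward pass.

def alpha_cut(tri, alpha):
    """Closed interval of a triangular fuzzy number (left, peak, right) at level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def interval_mul(p, q):
    """Product of two intervals: min and max over all cross products."""
    cands = [p[0] * q[0], p[0] * q[1], p[1] * q[0], p[1] * q[1]]
    return (min(cands), max(cands))

def interval_add(p, q):
    return (p[0] + q[0], p[1] + q[1])

# Hypothetical fuzzy input and fuzzy connection weight (triangular numbers).
x = (0.8, 1.0, 1.2)
w = (0.4, 0.5, 0.6)

for alpha in (0.0, 0.5, 1.0):
    # Propagate one weighted input at this alpha level; a full network would
    # sum such intervals over all inputs and push them through the activation.
    print(alpha, interval_mul(alpha_cut(x, alpha), alpha_cut(w, alpha)))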
---
paper_title: Learning by fuzzified neural networks
paper_content:
Abstract We derive a general learning algorithm for training a fuzzified feedforward neural network that has fuzzy inputs, fuzzy targets, and fuzzy connection weights. The derived algorithm is applicable to the learning of fuzzy connection weights with various shapes such as triangular and trapezoidal. First we briefly describe how a feedforward neural network can be fuzzified. Inputs, targets, and connection weights in the fuzzified neural network can be fuzzy numbers. Next we define a cost function that measures the differences between a fuzzy target vector and an actual fuzzy output vector. Then we derive a learning algorithm from the cost function for adjusting fuzzy connection weights. Finally we show some results of computer simulations.
---
paper_title: Neural network implementation of fuzzy logic
paper_content:
Abstract Fuzzy logic has gained increased attention as a methodology for managing uncertainty in a rule-based structure. In a fuzzy logic inference system, more rules can fire at any given time than in a crisp expert system. Since the propositions are modelled as possibility distributions, there is a considerable computation load on the inference engine. In this paper, a neural network structure is proposed as a means of performing fuzzy logic inference. Three variations of the network are described, but in each case, the knowledge of the rule (i.e., the antecedent and consequent clauses) are explicitly encoded in the weights of the net. The theoretical properties of this structure are developed. In fact, the network reduces to crisp modus ponens when the inputs are crisp sets. Also, under suitable conditions the degree of specificity of the consequences of the inference is a monotone function of the degree of specificity of the input. Several simulation studies are included to illustrate the performance of the fuzzy logic inference networks.
---
paper_title: Neural networks that learn from fuzzy if-then rules
paper_content:
An architecture for neural networks that can handle fuzzy input vectors is proposed, and learning algorithms that utilize fuzzy if-then rules as well as numerical data in neural network learning for classification problems and for fuzzy control problems are derived. The learning algorithms can be viewed as an extension of the backpropagation algorithm to the case of fuzzy input vectors and fuzzy target outputs. Using the proposed methods, linguistic knowledge from human experts represented by fuzzy if-then rules and numerical data from measuring instruments can be integrated into a single information processing system (classification system or fuzzy control system). It is shown that the scheme works well for simple examples.
---
paper_title: Implementation of conjunctive and disjunctive fuzzy logic rules with neural networks
paper_content:
Abstract The use of fuzzy logic to model and manage uncertainty in a rule-based system places high computational demands on an inference engine. In an earlier paper, we introduced trainable neural network structures for fuzzy logic. These networks can learn and extrapolate complex relationships between possibility distributions for the antecedents and consequents in the rules. In this paper, the power of these networks is further explored. The sensitivity of the output to noisy input distributions (which are likely if the clauses are generated from real data) is demonstrated as well as the ability of the networks to internalize multiple conjunctive clause and disjunctive clause rules. Since different rules (with the same variables) can be encoded in a single network, this approach to fuzzy logic inference provides a natural mechanism for rule conflict resolution.
---
paper_title: Interpolation of fuzzy if-then rules by neural networks
paper_content:
Abstract A number of approaches have been proposed for implementing fuzzy if-then rules with trainable multilayer feedforward neural networks. In these approaches, learning of neural networks is performed for fuzzy inputs and fuzzy targets. Because the standard back-propagation (BP) algorithm cannot be directly applied to fuzzy data, transformation of fuzzy data into non-fuzzy data or modification of the learning algorithm is required. Therefore the approaches for implementing fuzzy if-then rules can be classified into two main categories: introduction of preprocessors of fuzzy data and modification of the learning algorithm. In the first category, the standard BP algorithm can be employed after generating non-fuzzy data from fuzzy data by preprocessors. Two kinds of preprocessors based on membership values and level sets are examined in this paper. In the second category, the standard BP algorithm is modified to directly handle the level sets (i.e., intervals) of fuzzy data. This paper examines the ability of each approach to interpolate sparse fuzzy if-then rules. By computer simulations, high fitting ability of approaches in the first category and high interpolating ability of those in the second category are demonstrated.
---
paper_title: Fuzzy classifications using fuzzy inference networks
paper_content:
In this paper, fuzzy inference models for pattern classifications have been developed and fuzzy inference networks based on these models are proposed. Most of the existing fuzzy rule-based systems have difficulties in deriving inference rules and membership functions directly from training data. Rules and membership functions are obtained from experts. Some approaches use backpropagation (BP) type learning algorithms to learn the parameters of membership functions from training data. However, BP algorithms take a long time to converge and they require an advanced setting of the number of inference rules. The work to determine the number of inference rules demands lots of experiences from the designer. In this paper, self-organizing learning algorithms are proposed for the fuzzy inference networks. In the proposed learning algorithms, the number of inference rules and the membership functions in the inference rules will be automatically determined during the training procedure. The learning speed is fast. The proposed fuzzy inference network (FIN) classifiers possess both the structure and the learning ability of neural networks, and the fuzzy classification ability of fuzzy algorithms. Simulation results on fuzzy classification of two-dimensional data are presented and compared with those of the fuzzy ARTMAP. The proposed fuzzy inference networks perform better than the fuzzy ARTMAP and need less training samples.
---
paper_title: Improving classification performance using fuzzy MLP and two-level selective partitioning of the feature space
paper_content:
A fuzzy MLP model, developed by one of the authors, is used for obtaining selective two-level partitioning of the feature space in order to improve its classification performance. The model can handle uncertainty and/or impreciseness in the input as well as the output. The input to the network is modelled in terms of linguistic pi-sets whose centres and radii along the feature axes in each partition are generated automatically from the distribution of the training data. The performance of the model at the end of the first stage is used as a criterion for guiding the selection of the appropriate partition to be subdivided at the second stage, in order to improve the effectiveness of the model. A comparative study of the performance of the two-level technique with other methods, viz., the conventional MLP, linear discriminant analysis and the k-nearest neighbours algorithms, is also provided to demonstrate its superiority.
---
paper_title: Some neural net realizations of fuzzy reasoning
paper_content:
In this paper we analyze the neural network implementation of fuzzy logic proposed by Keller et al. [Fuzzy Sets Syst., 45, 1-12 (1992)], derive a learning algorithm for obtaining an optimal α for the net, and, for a special case, we show how one can directly (avoiding training) compute the optimal α. We address how training data can be generated for such a system. Effectiveness of the optimal α is then established through numerical examples. In this regard, several indices for performance evaluation are discussed. Finally, we propose a new architecture and demonstrate its effectiveness with numerical examples.
---
paper_title: Neural networks designed on approximate reasoning architecture and their applications
paper_content:
The NARA (neural networks based on approximate reasoning architecture) model is proposed and its composition procedure and evaluation are described. NARA is a neural network (NN) based on the structure of fuzzy inference rules. The distinctive feature of NARA is that its internal state can be analyzed according to the rule structure, and the problematic portion can be easily located and improved. The ease with which performance can be improved is shown by applying the NARA model to pattern classification problems. The NARA model is shown to be more efficient than ordinary NN models. In NARA, characteristics of the application task can be built into the NN model in advance by employing the logic structure, in the form of fuzzy inference rules. Therefore, it is easier to improve the performance of NARA, in which the internal state can be observed because of its structure, than that of an ordinary NN model, which is like a black box. Examples are introduced by applying the NARA model to the problems of auto adjustment of VTR tape running mechanisms and alphanumeric character recognition.
---
paper_title: ANFIS: Adaptive-Network-Based Fuzzy Inference System
paper_content:
The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) is presented, which is a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In the simulation, the ANFIS architecture is employed to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested.
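As an illustration of the inference that such an adaptive network implements, the sketch below evaluates a two-rule first-order Sugeno system; the membership-function parameters and consequent coefficients are invented, and the hybrid learning rule that would tune them is omitted.

import numpy as np

def gaussian(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Two invented rules: IF x1 is A_i AND x2 is B_i THEN y_i = p*x1 + q*x2 + r
rules = [
    {"A": (0.0, 1.0), "B": (0.0, 1.0), "pqr": (1.0, 0.5, 0.0)},
    {"A": (2.0, 1.0), "B": (2.0, 1.0), "pqr": (-0.5, 1.0, 1.0)},
]

def sugeno_forward(x1, x2):
    """Layers of an ANFIS-style network: fuzzify the inputs, take product
    firing strengths, normalise them, and sum the weighted linear consequents."""
    w = np.array([gaussian(x1, *r["A"]) * gaussian(x2, *r["B"]) for r in rules])
    w_bar = w / w.sum()
    y = np.array([p * x1 + q * x2 + c for (p, q, c) in (r["pqr"] for r in rules)])
    return float(w_bar @ y)

print(sugeno_forward(0.5, 0.5))   # dominated by rule 1
print(sugeno_forward(2.0, 2.0))   # dominated by rule 2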
---
paper_title: Radial basis function based adaptive fuzzy systems and their applications to system identification and prediction
paper_content:
Abstract In this paper we describe a neuro-fuzzy system with adaptive capability to extract fuzzy If Then rules from input and output sample data through learning. The proposed system, called radial basis function (RBF) based adaptive fuzzy system (AFS), employs the Gaussian functions to represent the membership functions of the premise part of fuzzy rules. Three architectural deviations of the RBF based AFS are also presented according to different consequence types such as constant, first-order linear function, and fuzzy variable. These provide versatility of the network to handle arbitrary fuzzy inference schemes. We present examples of system identification and time series prediction to illustrate how to solve these problems and to demonstrate its validity and effectiveness using the RBF based AFS.
---
paper_title: Fuzzy inference system learning by reinforcement methods
paper_content:
Fuzzy Actor-Critic Learning (FACL) and Fuzzy Q-Learning (FQL) are reinforcement learning methods based on dynamic programming (DP) principles. In the paper, they are used to tune online the conclusion part of fuzzy inference systems (FIS). The only information available for learning is the system feedback, which describes in terms of reward and punishment the task the fuzzy agent has to realize. At each time step, the agent receives a reinforcement signal according to the last action it has performed in the previous state. The problem involves optimizing not only the direct reinforcement, but also the total amount of reinforcements the agent can receive in the future. To illustrate the use of these two learning methods, we first applied them to a problem that involves finding a fuzzy controller to drive a boat from one bank to another, across a river with a strong nonlinear current. Then, we used the well known Cart-Pole Balancing and Mountain-Car problems to be able to compare our methods to other reinforcement learning methods and focus on important characteristic aspects of FACL and FQL. We found that the genericity of our methods allows us to learn every kind of reinforcement learning problem (continuous states, discrete/continuous actions, various type of reinforcement functions). The experimental studies also show the superiority of these methods with respect to the other related methods we can find in the literature.
---
paper_title: Fuzzy Finite-state Automata Can Be Deterministically Encoded into Recurrent Neural Networks
paper_content:
There has been an increased interest in combining fuzzy systems with neural networks because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures are only able to process static input-output relationships; they are not able to process temporal input sequences of arbitrary length. Fuzzy finite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. Unlike in the case of deterministic finite-state automata (DFAs), FFAs are not in one particular state; rather, each state is occupied to some degree defined by a membership function. Based on previous work on encoding DFAs in discrete-time second-order recurrent neural networks, we propose an algorithm that constructs an augmented recurrent neural network that encodes an FFA and recognizes a given fuzzy regular language with arbitrary accuracy. We then empirically verify the encoding methodology by correct string recognition of randomly generated FFAs. In particular, we examine how the networks' performance varies as a function of synaptic weight strengths.
---
paper_title: Learning and tuning fuzzy logic controllers through reinforcements
paper_content:
A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; it introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; it introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and it learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
---
paper_title: An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
paper_content:
This paper describes an experiment on the “linguistic” synthesis of a controller for a model industrial plant (a steam engine). Fuzzy logic is used to convert heuristic control rules stated by a human operator into an automatic control strategy. The experiment was initiated to investigate the possibility of human interaction with a learning controller. However, the control strategy set up linguistically proved to be far better than expected in its own right, and the basic experiment of linguistic control synthesis in a non-learning controller is reported here.
---
paper_title: Recurrent neuro-fuzzy networks for nonlinear process modeling
paper_content:
A type of recurrent neuro-fuzzy network is proposed in this paper to build long-term prediction models for nonlinear processes. The process operation is partitioned into several fuzzy operating regions. Within each region, a local linear model is used to model the process. The global model output is obtained through the centre of gravity defuzzification which is essentially the interpolation of local model outputs. This modeling strategy utilizes both process knowledge and process input/output data. Process knowledge is used to initially divide the process operation into several fuzzy operating regions and to set up the initial fuzzification layer weights. Process I/O data are used to train the network. Network weights are such trained so that the long-term prediction errors are minimized. Through training, membership functions of fuzzy operating regions are refined and local models are learnt. Based on the recurrent neuro-fuzzy network model, a novel type of nonlinear model-based long range predictive controller can be developed and it consists of several local linear model-based predictive controllers. Local controllers are constructed based on the corresponding local linear models and their outputs are combined to form a global control action by using their membership functions. This control strategy has the advantage that control actions can be calculated analytically avoiding the time consuming nonlinear programming procedures required in conventional nonlinear model-based predictive control. The techniques have been successfully applied to the modeling and control of a neutralization process.
---
paper_title: Fuzzy identification of systems and its applications to modeling and control
paper_content:
A mathematical tool to build a fuzzy model of a system where fuzzy implications and reasoning are used is presented. The premise of an implication is the description of fuzzy subspace of inputs and its consequence is a linear input-output relation. The method of identification of a system using its input-output data is then shown. Two applications of the method to industrial processes are also discussed: a water cleaning process and a converter in a steel-making process.
---
paper_title: A fuzzy neural network for rule acquiring on fuzzy control systems
paper_content:
This paper presents a layer-structured fuzzy neural network (FNN) for learning rules of fuzzy-logic control systems. Initially, the FNN is constructed to contain all the possible fuzzy rules. We propose a two-phase learning procedure for this network. The first phase is an error-backprop (EBP) training phase, and the second phase is rule pruning. Since some functions of the nodes in the FNN have competitive characteristics, the EBP training will converge quickly. After the training, a pruning process is performed to delete redundant rules in order to obtain a concise fuzzy rule base. Simulation results show that for the truck backer-upper control problem, the training phase learns the knowledge of fuzzy rules in several dozen epochs with an error rate of less than 1%. Moreover, the fuzzy rule base generated by the pruning process contains only 14% of the initial fuzzy rules and is identical to the target fuzzy rule base.
---
paper_title: An adaptive fuzzy neural network for MIMO system model approximation in high-dimensional spaces
paper_content:
An adaptive fuzzy system implemented within the framework of a neural network is proposed. The integration of the fuzzy system into a neural network enables the new fuzzy system to have learning and adaptive capabilities. The proposed fuzzy neural network can locate its rules and optimize its membership functions by competitive learning, the Kalman filter algorithm and extended Kalman filter algorithms. A key feature of the new architecture is that a high-dimensional fuzzy system can be implemented with a smaller number of rules than the Takagi-Sugeno fuzzy systems. A number of simulations are presented to demonstrate the performance of the proposed system, including modeling of a nonlinear function, an operator's control of a chemical plant, stock prices and a bioreactor (a multioutput dynamical system).
---
paper_title: Foundations Of Neuro-Fuzzy Systems
paper_content:
From the Publisher: Foundations of Neuro-Fuzzy Systems reflects the current trend in intelligent systems research towards the integration of neural networks and fuzzy technology. The authors demonstrate how a combination of both techniques enhances the performance of control, decision-making and data analysis systems. Smarter and more applicable structures result from marrying the learning capability of the neural network with the transparency and interpretability of the rule-based fuzzy system. Foundations of Neuro-Fuzzy Systems highlights the advantages of integration, making it a valuable resource for graduate students and researchers in control engineering, computer science and applied mathematics. The authors' informed analysis of practical neuro-fuzzy applications will be an asset to industrial practitioners using fuzzy technology and neural networks for control systems, data analysis and optimization tasks.
---
paper_title: Neuro-Fuzzy Systems for Function Approximation
paper_content:
Abstract We present a neuro-fuzzy architecture for function approximation based on supervised learning. The learning algorithm is able to determine the structure and the parameters of a fuzzy system. The approach is an extension to our already published NEFCON and NEFCLASS models which are used for control or classification purposes. The proposed extended model, which we call NEFPROX, is more general and can be used for any application based on function approximation.
---
paper_title: Handling the nonlinearity of a fuzzy logic controller at the transition between rules
paper_content:
Abstract A fuzzy logic controller approximates a desired control surface by using the outputs of its fuzzy control rules. The shape of this control surface is mainly influenced by the linguistic variables of the rule base, the basic operators of the fuzzy sets, the implication, the inference and the defuzzification method. The linearity/nonlinearity of this control surface is discussed in many studies, however none of those studies has explicitly considered the linearity/nonlinearity at the transition between fuzzy logic rules. In this paper a simple approach to control the linearity/nonlinearity at this transition by a modified center of gravity defuzzification method is proposed, namely by introducing the so-called defuzzification weights to the overlapping areas of the consequent. Those defuzzification weights can be determined heuristically or automatically. Both cases will be demonstrated, for the latter a feedforward neural network is employed.
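A sketch of a discretised centre-of-gravity defuzzification in which per-region defuzzification weights scale the contribution of the overlapping part of the aggregated consequent; the consequent shapes and the weight value are invented.

import numpy as np

def weighted_cog(y, mu, region_weights):
    """Discrete centre of gravity with per-sample defuzzification weights.
    y: samples of the output universe, mu: aggregated membership at each sample,
    region_weights: factor applied where consequents overlap (illustrative)."""
    w = mu * region_weights
    return float(np.sum(y * w) / np.sum(w))

y = np.linspace(0.0, 10.0, 101)
# Two overlapping clipped triangular consequents aggregated with max.
mu1 = np.clip(1.0 - np.abs(y - 3.0) / 2.0, 0.0, 0.6)
mu2 = np.clip(1.0 - np.abs(y - 6.0) / 2.0, 0.0, 0.9)
mu = np.maximum(mu1, mu2)

plain = weighted_cog(y, mu, np.ones_like(y))
# Down-weight the overlap region between the two consequents (hypothetical choice).
weights = np.where((y > 4.0) & (y < 5.0), 0.3, 1.0)
tuned = weighted_cog(y, mu, weights)
print(plain, tuned)   # the crisp output shifts when the overlap is down-weighted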
---
paper_title: Using Fuzzy Logic for Performance Evaluation in Reinforcement Learning
paper_content:
Abstract A new architecture is described which uses fuzzy rules to initialize its two neural networks: a neural network for performance evaluation and another for action selection. This architecture, applied to control of dynamic systems, demonstrates that it is possible to start with an approximate prior knowledge and learn to refine it through experiments using Reinforcement Learning (RL).
---
paper_title: Manufacturing process control through integration of neural networks and fuzzy model
paper_content:
Artificial neural networks (ANNs) and fuzzy logic have been widely applied in many areas. This research discusses the integration of these two technologies. Three fuzzy models are utilized to update dynamically the training parameters in order to speed up the training. In addition, a fuzzy model is proposed which is self-organizing and self-adjusting, and able to learn from experience. In a self-organizing and self-adjusting fuzzy model (SOSAFM), the inputs and outputs are partitioned by Kohonen's feature mapping and the premise and consequence parameters are updated through an error backpropagation (EBP)-type learning algorithm. Physical experiments for manufacturing process control are implemented to evaluate the proposed methods. The results showed that updating the training parameters by using fuzzy models can accelerate the training speed. Moreover, SOSAFM is better than multiple regression and artificial neural networks in both speed and accuracy for the purpose of multi-sensor integration.
---
paper_title: A new type of fuzzy neural network based on a truth space approach for automatic acquisition of fuzzy rules with linguistic hedges
paper_content:
Abstract Fuzzy reasoning methods are generally classified into two approaches: the direct approach and the truth space approach. Several researches on the relationships between these approaches have been reported. There has been, however, no research which discusses their utility. The authors have previously proposed four types of fuzzy neural networks (FNNs) called Type I, II, III, and IV. The FNNs can identify the fuzzy rules and tune the membership functions of fuzzy reasoning automatically, utilizing the learning capability of neural networks. Types III and IV, which are based on the truth space approach, can acquire linguistic fuzzy rules with the fuzzy variables in the consequences labeled according to their linguistic truth values (LTVs). However, the expressions available for the linguistic labeling are limited, since the LTVs are singletons. This paper presents a new type of FNN based on the truth space approach for automatic acquisition of the fuzzy rules with linguistic hedges. The new FNN, called Type V, has the LTVs defined by fuzzy sets for fuzzy rules and can express the identified fuzzy rules linguistically using the fuzzy variables in the consequences with linguistic hedges. Two simulations are done for demonstrating the feasibility of the new method. The results show that the truth space approach makes the fuzzy rules easy to understand.
---
paper_title: Real-time stable self-learning FNN controller using genetic algorithm
paper_content:
Abstract A kind of real-time stable self-learning fuzzy neural network (FNN) control system is proposed in this paper. The control system is composed of two parts: (1) an FNN controller which uses a genetic algorithm (GA) to search for optimal fuzzy rules and membership functions for the unknown controlled plant; (2) a supervisor which can guarantee the stability of the control system during the real-time learning stage, since the randomness of the GA may make the control system unstable. The approach proposed in this paper combines the a priori knowledge of the designer and the learning ability of the FNN to achieve optimal fuzzy control for an unknown plant in real time. The efficiency of the approach is verified by computer simulation.
---
paper_title: Linguistic rule extraction from neural networks and genetic-algorithm-based rule selection
paper_content:
This paper proposes a hybrid approach to the design of a compact fuzzy rule-based classification system with a small number of linguistic rules. The proposed approach consists of two procedures: rule extraction from a trained neural network and rule selection by a genetic algorithm. We first describe how linguistic rules can be extracted from a multilayer feedforward neural network that has already been trained for a classification problem with many continuous attributes. In our rule extraction procedure, a linguistic input vector corresponding to the antecedent part of a linguistic rule is presented to the trained neural network, and the fuzzy output vector from the trained neural network is examined for determining the consequent part and the grade of certainty of that linguistic rule. Next we explain how a genetic algorithm can be utilized for selecting a small number of significant linguistic rules from a large number of extracted rules. Our rule selection problem has two objectives: to minimize the number of selected linguistic rules and to maximize the number of correctly classified patterns by the selected linguistic rules. A multi-objective genetic algorithm is employed for finding a set of non-dominated solutions with respect to these two objectives. Finally we illustrate our hybrid approach by computer simulations on real-world test problems.
---
paper_title: Selecting fuzzy if-then rules for classification problems using genetic algorithms
paper_content:
This paper proposes a genetic-algorithm-based method for selecting a small number of significant fuzzy if-then rules to construct a compact fuzzy classification system with high classification power. The rule selection problem is formulated as a combinatorial optimization problem with two objectives: to maximize the number of correctly classified patterns and to minimize the number of fuzzy if-then rules. Genetic algorithms are applied to this problem. A set of fuzzy if-then rules is coded into a string and treated as an individual in genetic algorithms. The fitness of each individual is specified by the two objectives in the combinatorial optimization problem. The performance of the proposed method for training data and test data is examined by computer simulations on the iris data of Fisher.
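A sketch of the encoding and selection loop: each individual is a bit string over a pool of candidate rules, and the two objectives are combined here into one weighted fitness for brevity (rather than the paper's exact treatment of the two objectives); the classifier accuracy is replaced by an invented surrogate.

import random
random.seed(0)

N_RULES = 20            # size of the candidate rule pool (illustrative)
POP, GENS, P_MUT = 30, 40, 0.05

def accuracy(rule_subset):
    """Placeholder: fraction of patterns correctly classified by the selected
    fuzzy rules. A real implementation would run the fuzzy classifier here."""
    return min(1.0, 0.4 + 0.05 * sum(rule_subset))   # invented surrogate

def fitness(ind):
    # Maximise correct classifications, penalise the number of rules kept.
    return accuracy(ind) - 0.01 * sum(ind)

def mutate(ind):
    return [1 - g if random.random() < P_MUT else g for g in ind]

def crossover(a, b):
    cut = random.randrange(1, N_RULES)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N_RULES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(sum(best), "rules selected, fitness", round(fitness(best), 3))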
---
paper_title: Heuristic constraints enforcement for training of and knowledge extraction from a fuzzy/neural architecture. I. Foundation
paper_content:
Using fuzzy/neural architectures to extract heuristic information from systems has received increasing attention. A number of fuzzy/neural architectures and knowledge extraction methods have been proposed. Knowledge extraction from systems where the existing knowledge is limited is a difficult task. One of the reasons is that there is no ideal rulebase that can be used to validate the extracted rules. In most cases, using output error measures to validate extracted rules is not sufficient, as the extracted knowledge may not make heuristic sense even if the output error meets the specified criteria. The paper proposes a novel method for enforcing heuristic constraints on membership functions for rule extraction from a fuzzy/neural architecture. The proposed method not only ensures that the final membership functions conform to a priori heuristic knowledge, but also reduces the domain of search of the training and improves convergence speed. Although the method is described for a specific fuzzy/neural architecture, it is applicable to other realizations, including adaptive or static fuzzy inference systems. The foundations of the proposed method are given in Part I. The techniques for implementation and integration into the training are given in Part II, together with applications.
---
paper_title: A neural network based fuzzy set model for organizational decision making
paper_content:
A neural network based fuzzy set model is proposed to support organizational decision making under uncertainty. This model incorporates three theories and methodologies: classical decision-making theory under conflict, as suggested by Luce and Raiffa (1957), the fuzzy set theory of Zadeh (1965, 1984), and a modified version of the backpropagation (BP) neural network algorithm originated by Rumelhart et al. (1986). An algorithm that implements the model is described, and an application of the model to a real data example is used to demonstrate its use.
---
paper_title: A genetic-based neuro-fuzzy approach for modeling and control of dynamical systems
paper_content:
Linguistic modeling of complex irregular systems constitutes the heart of many control and decision making systems, and fuzzy logic represents one of the most effective algorithms to build such linguistic models. In this paper, a linguistic (qualitative) modeling approach is proposed. The approach combines the merits of the fuzzy logic theory, neural networks, and genetic algorithms (GAs). The proposed model is presented in a fuzzy-neural network (FNN) form which can handle both quantitative (numerical) and qualitative (linguistic) knowledge. The learning algorithm of a FNN is composed of three phases. The first phase is used to find the initial membership functions of the fuzzy model. In the second phase, a new algorithm is developed and used to extract the linguistic-fuzzy rules. In the third phase, a multiresolutional dynamic genetic algorithm (MRD-GA) is proposed and used for optimized tuning of membership functions of the proposed model. Two well-known benchmarks are used to evaluate the performance of the proposed modeling approach, and compare it with other modeling approaches.
---
paper_title: Logical operation based fuzzy MLP for classification and rule generation
paper_content:
A fuzzy layered neural network for classification and rule generation is proposed using logical neurons. It can handle uncertainty and/or impreciseness in the input as well as the output. Logical operators, namely, t-norm T and t-conorm S involving And and Or neurons, are employed in place of the weighted sum and sigmoid functions. Various fuzzy implication operators are introduced to incorporate different amounts of mutual interaction during the backpropagation of errors. In case of partial inputs, the model is capable of querying the user for the more important feature information, if and when required. Justification for an inferred decision may be produced in rule form. The built-in And-Or structure of the network enables the generation of appropriate rules expressed as the disjunction of conjunctive clauses. The effectiveness of the model is tested on a speech recognition problem and on some artificially generated pattern sets.
---
paper_title: Decision making on creditworthiness, using a fuzzy connectionist model
paper_content:
Abstract In this paper we present an unpolished expert system development tool, based on a connectionist architecture for knowledge representation. Our work is centered around a connectionist expert system, which can be expanded, and updated through learning of sample domain specific cases [1,4]. A cell recruitment learning algorithm [2] capable of forgetting previously learned facts by learning new ones is incorporated. Using this learning mechanism, we let the system learn a knowledge base on classifying the creditworthiness of credit applicants. The knowledge base consisted of 50 credit cases and was obtained from [11]. It will be shown that the fuzznet system is capable of learning such a model, with only a small amount of the cases being presented to it for learning. In all examples learned, an acceptable model was derived (based on the average prognostic error of actual and expected output for creditworthiness). The input features and their corresponding outputs (which were learned) are all fuzzy (uncertain) at the time of learning. So far implementations of connectionist expert systems, either only allowed for crisp inputs when placed in the learning mode, or only supported uncertainty when simulated [1, 8, 9, 10]. This example will further demonstrate the versatility of the fuzznet system, when dealing with uncertainty in the inputs and outputs when being placed in the learn mode, or while being simulated.
---
paper_title: A neural fuzzy system with linguistic teaching signals
paper_content:
A neural fuzzy system learning with linguistic teaching signals is proposed. This system is able to process and learn numerical information as well as linguistic information. It can be used either as an adaptive fuzzy expert system or as an adaptive fuzzy controller. First, we propose a five-layered neural network for the connectionist realization of a fuzzy inference system. The connectionist structure can house fuzzy logic rules and membership functions for fuzzy inference. We use α-level sets of fuzzy numbers to represent linguistic information. The inputs, outputs, and weights of the proposed network can be fuzzy numbers of any shape. Furthermore, they can be a hybrid of fuzzy numbers and numerical values through the use of fuzzy singletons. Based on interval arithmetic, two kinds of learning schemes are developed for the proposed system: fuzzy supervised learning and fuzzy reinforcement learning. Simulation results are presented to illustrate the performance and applicability of the proposed system.
---
paper_title: Implementing fuzzy logic controllers using a neural network framework
paper_content:
Abstract We describe a system for implementing fuzzy logic controllers using a neural network. A significant aspect of this system is that the linguistic values associated with the fuzzy control rules can be general concave continuous fuzzy subsets. By using structures suggested by the fuzzy logic framework, we simplify the learning requirements. On the other hand the adaptive aspect of the neural framework allows for the necessary learning.
---
paper_title: Fuzzy rule generation methods for high-level computer vision
paper_content:
Abstract In many decision making systems involving multiple sources, the decisions made may be considered as the result of a rule-based system in which the decision rules are usually enumerated by experts or generated by a learning process. In this paper, we discuss the various issues involved in the generation of fuzzy rules automatically from training data for high-level computer vision. Features are treated as linguistic variables that appear in the antecedent clauses of the rules. We present methods to generate the corresponding linguistic labels (values) and their membership functions. Rules are generated by constructing a minimal approximate fuzzy aggregation network and then training the network using gradient descent methods. Several examples are given.
---
paper_title: Evidence aggregation networks for fuzzy logic inference
paper_content:
Fuzzy logic has been applied in many engineering disciplines. The problem of fuzzy logic inference is investigated as a question of aggregation of evidence. A fixed network architecture employing general fuzzy unions and intersections is proposed as a mechanism to implement fuzzy logic inference. It is shown that these networks possess desirable theoretical properties. Networks based on parameterized families of operators (such as Yager's union and intersection) have extra predictable properties and admit a training algorithm which produces sharper inference results than were earlier obtained. Simulation studies corroborate the theoretical properties.
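The parameterised Yager operators mentioned above can be written down directly; the membership values and parameter settings below are arbitrary illustrations.

def yager_union(a, b, p=2.0):
    """Yager t-conorm (fuzzy OR): min(1, (a^p + b^p)^(1/p))."""
    return min(1.0, (a**p + b**p) ** (1.0 / p))

def yager_intersection(a, b, p=2.0):
    """Yager t-norm (fuzzy AND): 1 - min(1, ((1-a)^p + (1-b)^p)^(1/p))."""
    return 1.0 - min(1.0, ((1.0 - a)**p + (1.0 - b)**p) ** (1.0 / p))

# As p grows the operators approach max and min; at p = 1 they reduce to the
# bounded sum and bounded difference.
for p in (1.0, 2.0, 10.0):
    print(p, yager_union(0.4, 0.7, p), yager_intersection(0.4, 0.7, p))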
---
paper_title: Review: Neuro-fuzzy computing for image processing and pattern recognition
paper_content:
The relevance of integrating the merits of fuzzy set theory and neural network models for designing an efficient decision-making system is explained. The feasibility of such systems, and the different ways of integration reported so far in the context of image processing and pattern recognition, are described. Scope for further research and development is outlined. An extensive bibliography is also provided.
---
paper_title: Modeling and formulating fuzzy knowledge bases using neural networks
paper_content:
Abstract We show how the determination of the firing level of a neuron can be viewed as a measure of possibility between two fuzzy sets, the weights of connection and the input. We then suggest a way to represent fuzzy production rules in a neural framework. Central to this representation is the notion that the linguistic variables associated with the rule, the antecedent and consequent values, are represented as weights in the resulting neural structure. The structure used to represent these fuzzy rules allows learning of the membership grades of the associated linguistic variables. A self-organization procedure for obtaining the nucleus of rules for a fuzzy knowledge base is presented.
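Interpreting a neuron's firing level as a possibility measure between the fuzzy set stored in its weights and the fuzzy input amounts to a sup-min (max-min) composition over discretized membership vectors. A minimal sketch, with assumed example values:

```python
# Sketch: firing level of a "fuzzy neuron" computed as the possibility measure
# Poss(W, X) = max_i min(W_i, X_i) between the weight fuzzy set W and the
# input fuzzy set X, both given as discrete membership vectors.

def possibility(weights, inputs):
    return max(min(w, x) for w, x in zip(weights, inputs))

if __name__ == "__main__":
    antecedent = [0.0, 0.3, 0.8, 1.0, 0.6]   # linguistic value stored in the weights
    observation = [0.2, 0.9, 0.7, 0.1, 0.0]  # fuzzified input
    print(possibility(antecedent, observation))  # 0.7
```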
---
paper_title: POPFNN: a pseudo outer-product based fuzzy neural network
paper_content:
Abstract A novel fuzzy neural network, called the pseudo outer-product based fuzzy neural network (POPFNN), is proposed in this paper. The functions performed by each layer in the proposed POPFNN strictly correspond to the inference steps in the truth value restriction method in fuzzy logic [Mantaras (1990) Approximate reasoning models, Ellis Horwood]. This correspondence gives it a strong theoretical basis. Similar to most of the existing fuzzy neural networks, the proposed POPFNN uses a self-organizing algorithm (Kohonen, 1988, Self-organization and associative memories, Springer) to learn and initialize the membership functions of the input and output variables from a set of training data. However, instead of employing the popularly used competitive learning [Kosko (1990) IEEE Trans. Neural Networks, 3(5), 801], this paper proposes a novel pseudo outer-product (POP) learning algorithm to identify the fuzzy rules that are supported by the training data. The proposed POP learning algorithm is fast, reliable, and highly intuitive. Extensive experimental results and comparisons are presented at the end of the paper for discussion.
---
paper_title: Learning capacity and sample complexity on expert networks
paper_content:
A major development in knowledge-based neural networks is the integration of symbolic expert rule-based knowledge into neural networks, resulting in so-called rule-based neural (or connectionist) networks. An expert network here refers to a particular construct in which the uncertainty management model of symbolic expert systems is mapped into the activation function of the neural network. This paper addresses a yet-to-be-answered question: Why can expert networks generalize more effectively from a finite number of training instances than multilayered perceptrons? It formally shows that expert networks reduce generalization dimensionality and require relatively small sample sizes for correct generalization.
---
paper_title: Knowledge-based connectionism for revising domain theories
paper_content:
A knowledge-based connectionist model for machine learning referred to as KBCNN is presented. In the KBCNN learning model, useful domain attributes and concepts are first identified and linked in a way consistent with initial domain knowledge, and then the links are weighted properly so as to maintain the semantics. Hidden units and additional connections may be introduced into this initial connectionist structure as appropriate. Then, this primitive structure evolves to minimize empirical error. The KBCNN learning model allows the theory learned or revised to be translated into the symbolic rule-based language that describes the initial theory. Thus, a domain theory can be pushed onto the network, revised empirically over time, and decoded in symbolic form. The domain of molecular genetics is used to demonstrate the validity of the KBCNN learning model and its superiority over related learning methods.
---
paper_title: Survey and critique of techniques for extracting rules from trained artificial neural networks
paper_content:
It is becoming increasingly apparent that, without some form of explanation capability, the full potential of trained artificial neural networks (ANNs) may not be realised. This survey gives an overview of techniques developed to redress this situation. Specifically, the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy.
---
paper_title: Connectionist expert systems
paper_content:
Connectionist networks can be used as expert system knowledge bases. Furthermore, such networks can be constructed from training examples by machine learning techniques. This gives a way to automate the generation of expert systems for classification problems.
---
paper_title: Knowledge-Based Artificial Neural Networks
paper_content:
Abstract Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for accurately classifying examples not seen during training. The challenge of hybrid learning systems is to use the information provided by one source of information to offset information missing from the other source. By so doing, a hybrid learning system should learn more effectively than systems that use only one of the information sources. KBANN ( Knowledge-Based Artificial Neural Networks ) is a hybrid learning system built on top of connectionist learning techniques. It maps problem-specific “domain theories”, represented in propositional logic, into neural networks and then refines this reformulated knowledge using backpropagation. KBANN is evaluated by extensive empirical tests on two problems from molecular biology. Among other results, these tests show that the networks created by KBANN generalize better than a wide variety of learning systems, as well as several techniques proposed by biologists.
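The rule-to-network mapping at the heart of KBANN can be summarized as follows: each conjunctive rule becomes a unit whose positive antecedents receive weight ω, whose negated antecedents receive weight -ω, and whose bias is set so the unit activates only when the conjunction holds; backpropagation then refines the weights. The sketch below follows that published description but is an assumed illustration, not the original implementation:

```python
# Sketch: encoding a conjunctive rule "concept :- a, b, not c" as the weights
# and bias of a sigmoid unit, in the spirit of KBANN (assumed illustration).
import math

OMEGA = 6.0  # a "large" weight so the unit behaves almost like a logic gate

def encode_rule(positive, negated, omega=OMEGA):
    """Return (weights, bias) for a unit implementing the conjunction."""
    weights = {a: omega for a in positive}
    weights.update({a: -omega for a in negated})
    # The unit should exceed its threshold only when all positive antecedents
    # are near 1 and all negated antecedents are near 0.
    bias = -(len(positive) - 0.5) * omega
    return weights, bias

def activate(weights, bias, assignment):
    net = bias + sum(w * assignment.get(a, 0.0) for a, w in weights.items())
    return 1.0 / (1.0 + math.exp(-net))

if __name__ == "__main__":
    w, b = encode_rule(positive=["a", "b"], negated=["c"])
    print(round(activate(w, b, {"a": 1, "b": 1, "c": 0}), 3))  # high: rule fires
    print(round(activate(w, b, {"a": 1, "b": 0, "c": 0}), 3))  # low: missing antecedent
    print(round(activate(w, b, {"a": 1, "b": 1, "c": 1}), 3))  # low: blocked by "not c"
```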
---
paper_title: A connectionist incremental expert system combining production systems and associative memory
paper_content:
A connectionist expert system model is proposed in this paper. The system combines the capability of production systems and associative memory. Rules are explicitly represented within a network structure. The Perceptron algorithm is generalized to include samples with uncertain components. A gradually-augmented-node learning algorithm is used to guarantee fast memorization of all rules. The convergence of the learning algorithms is analyzed. The system has a dynamic knowledge organization and adapts in real time to acquire new knowledge, or to relearn existing knowledge, through interaction with the user. This allows the system to be built incrementally. An example system is presented.
---
paper_title: Medical diagnostic expert system based on PDP model
paper_content:
The applicability of PDP (parallel distributed processing) models to knowledge processing is clarified. The authors evaluate the diagnostic capabilities of a prototype medical diagnostic expert system based on a multilayer network. After having been trained on only 300 patients, the prototype system shows diagnostic capabilities almost equivalent to those of a symbolic expert system. Symbolic knowledge is extracted from what the multilayer network has learned. The extracted knowledge is compared with doctors' knowledge. Moreover, a method to extract rules from the network and usage of the rules in a confirmation process are proposed.
---
paper_title: Dynamically adding symbolically meaningful nodes to knowledge-based neural networks
paper_content:
Abstract Traditional connectionist theory-refinement systems map the dependencies of a domain-specific rule base into a neural network, and then refine this network using neural learning techniques. Most of these systems, however, lack the ability to refine their network's topology and are thus unable to add new rules to the (reformulated) rule base. Therefore, with domain theories that lack rules, generalization is poor, and training can corrupt the original rules — even those that were initially correct. The paper presents TopGen, an extension to the KBANN algorithm, which heuristically searches for possible expansions to the KBANN network. TopGen does this by dynamically adding hidden nodes to the neural representation of the domain theory, in a manner that is analogous to the adding of rules and conjuncts to the symbolic rule base. Experiments indicate that the method is able to heuristically find effective places to add nodes to the knowledge bases of four real-world problems, as well as an artificial chess domain. The experiments also verify that new nodes must be added in an intelligent manner. The algorithm showed statistically significant improvements over the KBANN algorithm in all five domains.
---
paper_title: The truth will come to light: directions and challenges in extracting the knowledge embedded within trained artificial neural networks
paper_content:
To date, the preponderance of techniques for eliciting the knowledge embedded in trained artificial neural networks (ANN's) has focused primarily on extracting rule-based explanations from feedforward ANN's. The ADT taxonomy for categorizing such techniques was proposed in 1995 to provide a basis for the systematic comparison of the different approaches. This paper shows that not only is this taxonomy applicable to a cross section of current techniques for extracting rules from trained feedforward ANN's but also how the taxonomy can be adapted and extended to embrace a broader range of ANN types (e.g., recurrent neural networks) and explanation structures. In addition we identify some of the key research questions in extracting the knowledge embedded within ANN's including the need for the formulation of a consistent theoretical basis for what has been, until recently, a disparate collection of empirical results.
---
paper_title: Knowledge-based connectionism for revising domain theories
paper_content:
A knowledge-based connectionist model for machine learning referred to as KBCNN is presented. In the KBCNN learning model, useful domain attributes and concepts are first identified and linked in a way consistent with initial domain knowledge, and then the links are weighted properly so as to maintain the semantics. Hidden units and additional connections may be introduced into this initial connectionist structure as appropriate. Then, this primitive structure evolves to minimize empirical error. The KBCNN learning model allows the theory learned or revised to be translated into the symbolic rule-based language that describes the initial theory. Thus, a domain theory can be pushed onto the network, revised empirically over time, and decoded in symbolic form. The domain of molecular genetics is used to demonstrate the validity of the KBCNN learning model and its superiority over related learning methods.
---
paper_title: Knowledge-based fuzzy MLP for classification and rule generation
paper_content:
A new scheme of knowledge-based classification and rule generation using a fuzzy multilayer perceptron (MLP) is proposed. Knowledge collected from a data set is initially encoded among the connection weights in terms of class a priori probabilities. This encoding also includes incorporation of hidden nodes corresponding to both the pattern classes and their complementary regions. The network architecture, in terms of both links and nodes, is then refined during training. Node growing and link pruning are also resorted to. Rules are generated from the trained network using the input, output, and connection weights in order to justify any decision(s) reached. Negative rules corresponding to a pattern not belonging to a class can also be obtained. These are useful for inferencing in ambiguous cases. Results on real life and synthetic data demonstrate that the speed of learning and classification performance of the proposed scheme are better than that obtained with the fuzzy and conventional versions of the MLP (involving no initial knowledge encoding). Both convex and concave decision regions are considered in the process.
---
paper_title: Cascade ARTMAP: Integrating Neural Computation and Symbolic Knowledge Processing
paper_content:
This paper introduces a hybrid system termed cascade adaptive resonance theory mapping (ARTMAP) that incorporates symbolic knowledge into neural-network learning and recognition. Cascade ARTMAP, a generalization of fuzzy ARTMAP, represents intermediate attributes and rule cascades of rule-based knowledge explicitly and performs multistep inferencing. A rule insertion algorithm translates if-then symbolic rules into cascade ARTMAP architecture. Besides showing that initializing networks with prior knowledge can improve predictive accuracy and learning efficiency, the paper demonstrates that the inserted symbolic knowledge can be refined and enhanced by the cascade ARTMAP learning algorithm. By preserving symbolic rule form during learning, the rules extracted from cascade ARTMAP can be compared directly with the originally inserted rules. Simulations on an animal identification problem indicate that a priori symbolic knowledge always improves system performance, especially with a small training set. A benchmark study on a DNA promoter recognition problem shows that with the added advantage of fast learning, cascade ARTMAP rule insertion and refinement algorithms produce performance superior to those of other machine learning systems and an alternative hybrid system known as knowledge-based artificial neural network (KBANN). Also, the rules extracted from cascade ARTMAP are more accurate and much cleaner than the NofM rules extracted from KBANN.
---
paper_title: Learning fuzzy rules and approximate reasoning in fuzzy neural networks and hybrid systems
paper_content:
The paper considers both knowledge acquisition and knowledge interpretation tasks as tightly connected and continuously interacting processes in a contemporary knowledge engineering system. Fuzzy rules are used here as a framework for knowledge representation. An algorithm REFuNN for fuzzy rules extraction from adaptive fuzzy neural networks (FuNN) is proposed. A case study of Iris classification is chosen to illustrate the algorithm. Interpretation of fuzzy rules is possible by using fuzzy neural networks or by using standard fuzzy inference methods. Both approaches are compared in the paper based on the case example. A hybrid environment FuzzyCOPE which facilitates neural network simulation, fuzzy rules extraction from fuzzy neural networks and fuzzy rules interpretation by using different methods for approximate reasoning is briefly described.
---
paper_title: Inference, inquiry, evidence censorship, and explanation in connectionist expert systems
paper_content:
The combination of the techniques of expert systems and neural networks has the potential of producing more powerful systems, for example, expert systems able to learn from experience. In this paper, we address the combinatorial neural model (CNM), a kind of fuzzy neural network able to accommodate in a simple framework the highly desirable property of incremental learning, as well as the usual capabilities of expert systems. We show how an interval-based representation for membership grades makes CNM capable of reasoning with several types of uncertainty: vagueness, ignorance, and relevance commonly found in practical applications. In addition, we show how basic functions of expert systems such as inference, inquiry, censorship of input information, and explanation may be implemented. We also report experimental results of the application of CNM to the problem of deforestation monitoring of the Amazon region using satellite images.
---
paper_title: Adaptable neuro production systems
paper_content:
Abstract Connectionist production systems are neural network realizations of production rule-based systems. The connections are adjusted to a given set of rules to allow the system to perform reasoning. Adaptable connectionist production systems are introduced in this paper. They allow adaptation of the already pre-calculated connections to new data. The production rules are used to initialize the connection weights, after which training with data occurs. At any time during the neural network's operation, a set of updated rules can be extracted as the current knowledge base accumulated by the network. Using a set of rules for initializing a connectionist architecture before training may result in: (1) an increase in the speed of training; (2) an increase in the robustness of the neural network against the ‘catastrophic forgetting’ phenomenon; (3) a better explanation of the knowledge learned by the network from data. In general, the proposed method facilitates building flexible and adaptable neuro-fuzzy production systems. This is demonstrated on a case problem of chaotic time series prediction.
---
paper_title: Knowledge-Based Artificial Neural Networks
paper_content:
Abstract Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for accurately classifying examples not seen during training. The challenge of hybrid learning systems is to use the information provided by one source of information to offset information missing from the other source. By so doing, a hybrid learning system should learn more effectively than systems that use only one of the information sources. KBANN ( Knowledge-Based Artificial Neural Networks ) is a hybrid learning system built on top of connectionist learning techniques. It maps problem-specific “domain theories”, represented in propositional logic, into neural networks and then refines this reformulated knowledge using backpropagation. KBANN is evaluated by extensive empirical tests on two problems from molecular biology. Among other results, these tests show that the networks created by KBANN generalize better than a wide variety of learning systems, as well as several techniques proposed by biologists.
---
paper_title: Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps
paper_content:
A neural network architecture is introduced for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors, which may represent fuzzy or crisp sets of features. The architecture, called fuzzy ARTMAP, achieves a synthesis of fuzzy logic and adaptive resonance theory (ART) neural networks by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. Four classes of simulations illustrate fuzzy ARTMAP performance in relation to benchmark backpropagation and genetic algorithm systems. These simulations include finding points inside versus outside a circle, learning to tell two spirals apart, incremental approximation of a piecewise-continuous function, and a letter recognition database. The fuzzy ARTMAP system is also compared with Salzberg's NGE systems and with Simpson's FMMC system.
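The fuzzy-ART computations that fuzzy ARTMAP builds on (category choice, vigilance matching, and learning) all reduce to the fuzzy AND, i.e. the component-wise minimum, and the city-block norm. A minimal sketch of one learning step for a single category, with assumed parameter values:

```python
# Sketch: core fuzzy-ART computations for a single category prototype w and a
# complement-coded input I, using the fuzzy AND (element-wise minimum).

def fuzzy_and(a, b):
    return [min(x, y) for x, y in zip(a, b)]

def norm1(a):
    return sum(a)

def choice(I, w, alpha=0.001):
    return norm1(fuzzy_and(I, w)) / (alpha + norm1(w))

def match(I, w):
    return norm1(fuzzy_and(I, w)) / norm1(I)

def learn(I, w, beta=1.0):
    # beta = 1.0 corresponds to "fast learning"
    return [beta * m + (1.0 - beta) * wi for m, wi in zip(fuzzy_and(I, w), w)]

if __name__ == "__main__":
    a = [0.2, 0.7]
    I = a + [1.0 - x for x in a]      # complement coding keeps |I| constant
    w = [1.0, 1.0, 1.0, 1.0]          # uncommitted category prototype
    rho = 0.6                          # vigilance parameter
    if match(I, w) >= rho:             # resonance: the category is allowed to learn
        w = learn(I, w)
    print(round(choice(I, w), 3), [round(x, 2) for x in w])
```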
---
paper_title: Hidden Patterns in Combined and Adaptive Knowledge Networks
paper_content:
Uncertain causal knowledge is stored in fuzzy cognitive maps (FCMs). FCMs are fuzzy signed digraphs with feedback. The sign (+ or -) of FCM edges indicates causal increase or causal decrease. The fuzzy degree of causality is indicated by a number in [-1, 1]. FCMs learn by modifying their causal connections in sign and magnitude, structurally analogous to the way in which neural networks learn. An appropriate causal learning law for inductively inferring FCMs from time-series data is the differential Hebbian law, which modifies causal connections by correlating time derivatives of FCM node outputs. The differential Hebbian law contrasts with Hebbian output-correlation learning laws of adaptive neural networks. FCM nodes represent variable phenomena or fuzzy sets. An FCM node nonlinearly transforms weighted summed inputs into numerical output, again in analogy to a model neuron. Unlike expert systems, which are feedforward search trees, FCMs are nonlinear dynamical systems. FCM resonant states are limit cycles, or time-varying patterns. An FCM limit cycle or hidden pattern is an FCM inference. Experts construct FCMs by drawing causal pictures or digraphs. The corresponding connection matrices are used for inferencing. By additively combining augmented connection matrices, any number of FCMs can be naturally combined into a single knowledge network. The credibility wi in [0, 1] of the ith expert is included in this learning process by multiplying the ith expert's augmented FCM connection matrix by wi. Combining connection matrices is a simple type of adaptive inference. In general, connection matrices are modified by an unsupervised learning law, such as the
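An FCM inference step and a differential Hebbian update can be stated compactly: each concept's next activation is a squashed weighted sum of the others, and an edge is nudged toward the correlation of the changes in the two concepts it connects. The sketch below uses assumed conventions (sigmoid squashing, a simple discrete-time update) rather than the chapter's exact formulation:

```python
# Sketch: one inference step of a fuzzy cognitive map (FCM) and a differential
# Hebbian update that correlates changes in node activations (assumed conventions).
import math

def step(weights, state):
    """weights[j][i] is the causal edge from node j to node i."""
    n = len(state)
    nxt = []
    for i in range(n):
        net = sum(weights[j][i] * state[j] for j in range(n) if j != i)
        nxt.append(1.0 / (1.0 + math.exp(-net)))   # squash into (0, 1)
    return nxt

def differential_hebbian(weights, prev_state, state, lr=0.1):
    delta = [s - p for s, p in zip(state, prev_state)]
    n = len(state)
    for j in range(n):
        for i in range(n):
            if i != j:
                # each edge drifts toward the correlation of the two changes
                weights[j][i] += lr * (delta[j] * delta[i] - weights[j][i])
    return weights

if __name__ == "__main__":
    W = [[0.0, 0.8, 0.0],
         [0.0, 0.0, -0.6],
         [0.3, 0.0, 0.0]]
    s = [1.0, 0.2, 0.5]
    for _ in range(3):
        s_next = step(W, s)
        W = differential_hebbian(W, s, s_next)
        s = s_next
    print([round(x, 3) for x in s])
```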
---
paper_title: Rule Revision with Recurrent Neural Networks
paper_content:
Recurrent neural networks readily process, recognize and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata. Algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that recurrent neural networks are able to perform rule revision. Rule revision is performed by comparing the inserted rules with the rules in the finite-state automata extracted from trained networks. The results from training a recurrent neural network to recognize a known non-trivial, randomly-generated regular grammar show that not only do the networks preserve correct rules but that they are able to correct through training inserted rules which were initially incorrect (i.e. the rules were not the ones in the randomly generated grammar).
---
paper_title: Knowledge-Based Artificial Neural Networks
paper_content:
Abstract Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for accurately classifying examples not seen during training. The challenge of hybrid learning systems is to use the information provided by one source of information to offset information missing from the other source. By so doing, a hybrid learning system should learn more effectively than systems that use only one of the information sources. KBANN ( Knowledge-Based Artificial Neural Networks ) is a hybrid learning system built on top of connectionist learning techniques. It maps problem-specific “domain theories”, represented in propositional logic, into neural networks and then refines this reformulated knowledge using backpropagation. KBANN is evaluated by extensive empirical tests on two problems from molecular biology. Among other results, these tests show that the networks created by KBANN generalize better than a wide variety of learning systems, as well as several techniques proposed by biologists.
---
paper_title: Connectionist theory refinement: Genetically searching the space of network topologies
paper_content:
An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven to be effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the neural networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the Regent algorithm which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as our previous algorithm for growing knowledge-based networks.
---
paper_title: Rule insertion and rule extraction from evolving fuzzy neural networks: algorithms and applications for building adaptive, intelligent expert systems
paper_content:
Discusses the concept of intelligent expert systems and suggests tools for building a rule base that can be adapted, in an online or an off-line mode, during system operation in a changing environment. It applies evolving fuzzy neural networks (EFuNNs) as associative memories for dynamically storing and modifying a rule base. Algorithms for rule extraction from and rule insertion into EFuNNs are explained and applied to a case study using gas furnace data and the iris data set.
---
paper_title: Rough fuzzy MLP: knowledge encoding and classification
paper_content:
A scheme of knowledge encoding in a fuzzy multilayer perceptron (MLP) using rough set-theoretic concepts is described. Crude domain knowledge is extracted from the data set in the form of rules. The syntax of these rules automatically determines the appropriate number of hidden nodes while the dependency factors are used in the initial weight encoding. The network is then refined during training. Results on classification of speech and synthetic data demonstrate the superiority of the system over the fuzzy and conventional versions of the MLP (involving no initial knowledge).
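The rough-set machinery used here to seed the network rests on lower approximations and on the degree to which the decision attribute depends on a set of condition attributes. A minimal sketch over a small assumed decision table (an illustration of the underlying concepts, not the paper's encoding scheme):

```python
# Sketch: rough-set lower approximation and the dependency degree gamma(C, D)
# for a small decision table (assumed toy example).

def partition(rows, attrs):
    """Group row indices by their values on the given attributes."""
    blocks = {}
    for idx, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        blocks.setdefault(key, set()).add(idx)
    return list(blocks.values())

def lower_approximation(cond_blocks, target):
    """Union of condition blocks that lie entirely inside the target set."""
    result = set()
    for block in cond_blocks:
        if block <= target:
            result |= block
    return result

def dependency(rows, cond_attrs, dec_attr):
    cond_blocks = partition(rows, cond_attrs)
    positive = set()
    for decision_block in partition(rows, [dec_attr]):
        positive |= lower_approximation(cond_blocks, decision_block)
    return len(positive) / len(rows)

if __name__ == "__main__":
    table = [
        {"f1": "low",  "f2": "high", "class": "A"},
        {"f1": "low",  "f2": "high", "class": "A"},
        {"f1": "high", "f2": "high", "class": "B"},
        {"f1": "high", "f2": "low",  "class": "B"},
        {"f1": "low",  "f2": "low",  "class": "B"},
    ]
    print(dependency(table, ["f1"], "class"))        # f1 alone does not determine the class
    print(dependency(table, ["f1", "f2"], "class"))  # f1 and f2 together do
```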
---
paper_title: Combining rough sets learning- and neural learning-method to deal with uncertain and imprecise information
paper_content:
Abstract Any system designed to reason about the real world must be capable of dealing with uncertainty. The complexity of the real world and the finite size of most knowledge bases pose significant difficulties for the traditional concept of the learning system. Experience has shown that many learning paradigms fail to scale up to those problems. One response to these failures has been to construct systems which use multiple learning paradigms. Thus the strengths of one paradigm counterbalance some of the weaknesses of the others. As a result the effectiveness of the overall system will be enhanced. Consequently, integrated techniques have been widespread over the last years. A multistrategy which addresses those issues is presented. This approach joins two forms of learning, the technique of neural networks and rough sets. These seem at first quite different but they share the common ability to work well in a natural environment. In a closed loop fashion we will achieve more robust concept learning capabilities for a variety of difficult classification tasks. The objective of integration is twofold: (i) to improve the overall classification effectiveness of learned objects' description, (ii) to refine the dependency factors of the rules.
---
paper_title: Evolutionary Modular Design of Rough Knowledge-based Network using Fuzzy Attributes
paper_content:
This article describes a way of integrating rough set theory with a fuzzy MLP using a modular evolutionary algorithm, for classification and rule generation in soft computing paradigm. The novelty of the method lies in applying rough set theory for extracting dependency rules directly from a real-valued attribute table consisting of fuzzy membership values. This helps in preserving all the class representative points in the dependency rules by adaptively applying a threshold that automatically takes care of the shape of membership functions. An l-class classification problem is split into l two-class problems. Crude subnetwork modules are initially encoded from the dependency rules. These subnetworks are then combined and the final network is evolved using a GA with restricted mutation operator which utilizes the knowledge of the modular structure already generated, for faster convergence. The GA tunes the fuzzification parameters, and network weight and structure simultaneously, by optimising a single fitness function. This methodology helps in imposing a structure on the weights, which results in a network more suitable for rule generation. Performance of the algorithm is compared with related techniques.
---
paper_title: Staging of cervical cancer with soft computing
paper_content:
Describes a way of designing a hybrid decision support system in the soft computing paradigm for detecting the different stages of cervical cancer. Hybridization includes the evolution of knowledge-based subnetwork modules with genetic algorithms (GAs) using rough set theory and the Interactive Dichotomizer 3 (ID3) algorithm. Crude subnetworks obtained via rough set theory and the ID3 algorithm are evolved using GAs. The evolution uses a restricted mutation operator which utilizes the knowledge of the modular structure, already generated, for faster convergence. The GA tunes the network weights and structure simultaneously. The aforesaid integration enhances the performance in terms of classification score, network size and training time, as compared to the conventional multilayer perceptron. This methodology also helps in imposing a structure on the weights, which results in a network more suitable for extraction of logical rules and human interpretation of the inferencing procedure.
---
paper_title: Rough knowledge-based network, fuzziness and classification
paper_content:
A method of integrating rough sets and fuzzy multilayer perceptron (MLP) for designing a knowledge-based network for pattern recognition problems is described. Rough set theory is used to extract crude knowledge from the input domain in the form of rules. The syntax of these rules automatically determines the optimal number of hidden nodes while the dependency factors are used in the initial weight encoding. Results on classification of speech data demonstrate the superiority of the system over the fuzzy and conventional versions of the MLP.
---
paper_title: Knowledge-based fuzzy MLP for classification and rule generation
paper_content:
A new scheme of knowledge-based classification and rule generation using a fuzzy multilayer perceptron (MLP) is proposed. Knowledge collected from a data set is initially encoded among the connection weights in terms of class a priori probabilities. This encoding also includes incorporation of hidden nodes corresponding to both the pattern classes and their complementary regions. The network architecture, in terms of both links and nodes, is then refined during training. Node growing and link pruning are also resorted to. Rules are generated from the trained network using the input, output, and connection weights in order to justify any decision(s) reached. Negative rules corresponding to a pattern not belonging to a class can also be obtained. These are useful for inferencing in ambiguous cases. Results on real life and synthetic data demonstrate that the speed of learning and classification performance of the proposed scheme are better than that obtained with the fuzzy and conventional versions of the MLP (involving no initial knowledge encoding). Both convex and concave decision regions are considered in the process.
---
paper_title: Knowledge-based fuzzy MLP for classification and rule generation
paper_content:
A new scheme of knowledge-based classification and rule generation using a fuzzy multilayer perceptron (MLP) is proposed. Knowledge collected from a data set is initially encoded among the connection weights in terms of class a priori probabilities. This encoding also includes incorporation of hidden nodes corresponding to both the pattern classes and their complementary regions. The network architecture, in terms of both links and nodes, is then refined during training. Node growing and link pruning are also resorted to. Rules are generated from the trained network using the input, output, and connection weights in order to justify any decision(s) reached. Negative rules corresponding to a pattern not belonging to a class can also be obtained. These are useful for inferencing in ambiguous cases. Results on real life and synthetic data demonstrate that the speed of learning and classification performance of the proposed scheme are better than that obtained with the fuzzy and conventional versions of the MLP (involving no initial knowledge encoding). Both convex and concave decision regions are considered in the process.
---
paper_title: Multilayer perceptron, fuzzy sets, and classification
paper_content:
A fuzzy neural network model based on the multilayer perceptron, using the backpropagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and other related models.
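In this family of fuzzy MLPs, each input feature is typically mapped to membership values for linguistic properties such as low, medium and high using π-type functions. A sketch of the standard π function, with assumed centers and radii (the original work derives these from the data distribution):

```python
# Sketch: the pi-type membership function commonly used to fuzzify an input
# feature into "low / medium / high" linguistic properties (assumed parameters).

def pi_membership(x, c, lam):
    """Standard pi function with center c and radius lam (lam > 0)."""
    d = abs(x - c)
    if d > lam:
        return 0.0
    if d <= lam / 2.0:
        return 1.0 - 2.0 * (d / lam) ** 2
    return 2.0 * (1.0 - d / lam) ** 2

def fuzzify(x, lo, hi):
    """Membership of x in low / medium / high over the feature range [lo, hi]."""
    span = hi - lo
    centers = (lo + 0.25 * span, lo + 0.5 * span, lo + 0.75 * span)
    lam = 0.5 * span
    return tuple(round(pi_membership(x, c, lam), 3) for c in centers)

if __name__ == "__main__":
    print(fuzzify(0.65, lo=0.0, hi=1.0))  # mostly "medium" and "high"
```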
---
paper_title: Neural expert system using fuzzy teaching input and its application to medical diagnosis
paper_content:
Abstract Genetic algorithms (GAs) are inspired by Darwin's theory of the survival of the fittest. This paper discusses a genetic algorithm that can automatically generate test cases to test a selected path. This algorithm takes a selected path as a target and executes sequences of operators iteratively for test cases to evolve. The evolved test case will lead the program execution to achieve the target path. To determine which test cases should survive to produce the next generation of fitter test cases, a metric named normalized extended Hamming distance (NEHD, which is used to determine whether the final test case is found) is developed. Based on NEHD, a fitness function named SIMILARITY is defined to determine which test cases should survive if the final test case has not been found. Even when there are loops in the target path, SIMILARITY can help the algorithm to lead the execution to flow along the target path.
---
paper_title: Knowledge-based fuzzy MLP for classification and rule generation
paper_content:
A new scheme of knowledge-based classification and rule generation using a fuzzy multilayer perceptron (MLP) is proposed. Knowledge collected from a data set is initially encoded among the connection weights in terms of class a priori probabilities. This encoding also includes incorporation of hidden nodes corresponding to both the pattern classes and their complementary regions. The network architecture, in terms of both links and nodes, is then refined during training. Node growing and link pruning are also resorted to. Rules are generated from the trained network using the input, output, and connection weights in order to justify any decision(s) reached. Negative rules corresponding to a pattern not belonging to a class can also be obtained. These are useful for inferencing in ambiguous cases. Results on real life and synthetic data demonstrate that the speed of learning and classification performance of the proposed scheme are better than that obtained with the fuzzy and conventional versions of the MLP (involving no initial knowledge encoding). Both convex and concave decision regions are considered in the process.
---
paper_title: Incorporating Fuzzy Membership Functions into the Perceptron Algorithm
paper_content:
The perceptron algorithm, one of the class of gradient descent techniques, has been widely used in pattern recognition to determine linear decision boundaries. While this algorithm is guaranteed to converge to a separating hyperplane if the data are linearly separable, it exhibits erratic behavior if the data are not linearly separable. Fuzzy set theory is introduced into the perceptron algorithm to produce a "fuzzy algorithm" which ameliorates the convergence problem in the nonseparable case. It is shown that the fuzzy perceptron, like its crisp counterpart, converges in the separable case. A method of generating membership functions is developed, and experimental results comparing the crisp to the fuzzy perceptron are presented.
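The fuzzy perceptron tempers each update by how strongly the sample belongs to its class, so ambiguous points near the class boundary move the hyperplane less. A minimal sketch of such an update rule, as an assumed illustration of the general idea rather than the paper's exact formulation:

```python
# Sketch: a perceptron-style update whose step size is scaled by fuzzy class
# memberships, so ambiguous samples influence the decision boundary less
# (assumed illustration of the general idea).

def fuzzy_perceptron(samples, epochs=50, lr=0.5, p=2.0):
    """samples: list of (x_vector, label in {+1, -1}, m1, m2) with m1 + m2 = 1."""
    dim = len(samples[0][0])
    w = [0.0] * (dim + 1)                      # last entry acts as the bias
    for _ in range(epochs):
        for x, label, m1, m2 in samples:
            xa = list(x) + [1.0]
            out = 1 if sum(wi * xi for wi, xi in zip(w, xa)) >= 0 else -1
            if out != label:
                scale = abs(m1 - m2) ** p      # ~0 for ambiguous, ~1 for typical samples
                for i in range(dim + 1):
                    w[i] += lr * scale * label * xa[i]
    return w

if __name__ == "__main__":
    data = [([2.0, 2.0], +1, 0.9, 0.1),
            ([1.5, 2.5], +1, 0.8, 0.2),
            ([0.9, 1.1], +1, 0.55, 0.45),      # ambiguous sample near the boundary
            ([-2.0, -1.5], -1, 0.1, 0.9),
            ([-1.0, -2.0], -1, 0.2, 0.8)]
    print([round(v, 3) for v in fuzzy_perceptron(data)])
```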
---
| Title: Neuro – Fuzzy Rule Generation: Survey in Soft Computing Framework
Section 1: NEURO-FUZZY AND SOFT COMPUTING
Description 1: This section focuses on different aspects of neuro-fuzzy computing, explaining the need for neuro-fuzzy integration, the different methods of neuro-fuzzy hybridization, and introducing the concept of soft computing.
Section 2: RULE GENERATION
Description 2: This section reviews the different fuzzy, neural, and neuro-fuzzy models for rule generation, inferencing, and querying, along with their salient features.
Section 3: USING KNOWLEDGE-BASED NETWORKS
Description 3: This section discusses embedding initial knowledge into the network topology of knowledge-based networks, refining the network for rule generation, and combining neuro-fuzzy concepts with genetic algorithms and rough sets.
Section 4: APPLICATION TO MEDICAL DIAGNOSIS
Description 4: This section describes the application of neuro-fuzzy models to medical diagnosis problems, including the generation of rules for medical data and handling incomplete or ambiguous information.
Section 5: CONCLUSIONS
Description 5: This section summarizes the survey, highlighting the categorization of neuro-fuzzy models under a unified soft computing framework, the inclusion of rule extraction and refinement, and observations on the convergence analysis and future research directions. |
A Survey of CAD Model Simplification Techniques for Physics-based Simulation Applications | 8 | ---
paper_title: A small feature suppression/unsuppression system for preparing B-rep models for analysis
paper_content:
CAD technology plays an ever more central role in today's multidisciplinary simulation environments. While this has enabled highly complex and detailed models to be used earlier in the design process it has brought with it difficulties for simulation specialists. Most notably CAD models now contain many details which are irrelevant to simulation disciplines. CAD systems have feature trees which record feature creation but unfortunately this does not capture which features are relevant to which analysis discipline. Many features of little significance to an analysis only emerge during the construction of the model. The ability to selectively suppress and reinstate features while maintaining an audit trail of changes is required to facilitate the control of the idealisation process. Features suppressed for one analysis can be retrieved for use in another.This work uses combinatorial topology concepts to outline the necessary conditions so that CAD model simplification operations can be designed as continuous transformations. Irrelevant features can then be suppressed and subsequently reinstated, within defined limitations, independently from the order in which they were suppressed. The implementation of these concepts provides analysts with a mechanism for generating analysis models with different levels of detail, without having to repeat the simplification process from the original CAD geometry. Most importantly, the information recorded during the suppress operations forms an essential audit trail of the idealisation process and can be presented in a feature-tree like structure allowing analysts to review their modelling decisions retrospectively. The approach also facilitates the generation of local, detailed models encapsulating a feature of interest. The proposed system follows a Find and Fix paradigm; where different algorithms for feature finding and fixing can be utilised in a common cellular modelling framework.
---
paper_title: Fast collision detection between massive models using dynamic simplification
paper_content:
We present a novel approach for collision detection between large models composed of tens of millions of polygons. Each model is represented as a clustered hierarchy of progressive meshes (CHPM). The CHPM is a dual hierarchy of the original model: it serves both as a multiresolution representation of the original model, as well as a bounding volume hierarchy. We use the cluster hierarchy of a CHPM to perform coarse-grained selective refinement and the progressive meshes for fine-grained local refinement. We present a novel conservative error metric to perform collision queries based on the multiresolution representation. We use this error metric to perform dynamic simplification for collision detection. Our approach is conservative in that it may overestimate the set of colliding regions, but never misses any collisions. Furthermore, we are able to generate these hierarchies and perform collision queries using out-of-core techniques on all triangulated models. We have applied our algorithm to perform conservative collision detection between massive CAD and scanned models, consisting of millions of triangles at interactive rates on a commodity PC.
---
paper_title: Topology Simplification for Polygonal Virtual Environments
paper_content:
We present a topology simplifying approach that can be used for genus reductions, removal of protuberances, and repair of cracks in polygonal models in a unified framework. Our work is complementary to the existing work on geometry simplification of polygonal datasets, and we demonstrate that using topology and geometry simplifications together yields better multiresolution hierarchies than is possible by using either of them alone. Our approach can also address the important issue of repair of cracks in polygonal models, as well as the rapid identification and removal of protuberances based on internal accessibility in polygonal models. Our approach is based on identifying holes and cracks by extending the concept of α-shapes to polygonal meshes under the L∞ distance metric. We then generate valid triangulations to fill them using the intuitive notion of sweeping an L∞ cube over the identified regions.
---
paper_title: Modelling requirements for finite-element analysis
paper_content:
Abstract Efficient modelling of many problems in the finite-element analysis of structural and other continuum problems requires substantial simplification of the design geometry. Tools are needed for the efficient transformation of the detailed design geometry into an appropriate analysis model. It is argued that the medial axis and surface transform of geometric models provide an alternative representation that has many attractive properties for analysis feature recognition and simplification. Strategies for identifying possible idealizations, controlling their application, and estimating the associated errors appear feasible. Some requirements for geometric-modelling tools are identified.
---
paper_title: Voxel based object simplification
paper_content:
Presents a simple, robust and practical method for object simplification for applications where gradual elimination of high-frequency details is desired. This is accomplished by sampling and low-pass filtering the object into multi-resolution volume buffers and applying the marching cubes algorithm to generate a multi-resolution triangle-mesh hierarchy. Our method simplifies the genus of objects and can also help existing object simplification algorithms achieve better results. At each level of detail, a multi-layered mesh can be used for an optional and efficient antialiased rendering.
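The core of this pipeline (sample the object into a volume buffer, low-pass filter it, then re-extract a surface) can be sketched with a box filter on a binary voxel grid; the marching cubes extraction step is omitted, and the example below is an assumed illustration:

```python
# Sketch: voxelize-and-filter step of voxel-based simplification. A binary
# occupancy grid is blurred with a 3x3x3 box filter; thresholding the result at
# 0.5 removes details smaller than the filter support and rounds sharp corners
# (marching cubes surface extraction is omitted; assumed illustration).

def box_filter(grid):
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    out = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                total, count = 0.0, 0
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            i, j, k = x + dx, y + dy, z + dz
                            if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz:
                                total += grid[i][j][k]
                                count += 1
                out[x][y][z] = total / count
    return out

def threshold(volume, iso=0.5):
    return [[[1 if v >= iso else 0 for v in row] for row in plane] for plane in volume]

if __name__ == "__main__":
    n = 8
    grid = [[[1 if 2 <= x <= 5 and 2 <= y <= 5 and 2 <= z <= 5 else 0
              for z in range(n)] for y in range(n)] for x in range(n)]
    grid[3][3][7] = 1        # a one-voxel "detail" detached from the block
    simplified = threshold(box_filter(grid))
    occupied = sum(v for plane in simplified for row in plane for v in row)
    print(occupied)          # the isolated detail is gone; sharp corners are rounded off
```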
---
paper_title: Meshing Complexity of Single Part CAD Models
paper_content:
This paper proposes a method for predicting the complexity of meshing Computer Aided Design (CAD) geometries with unstructured, hexahedral, finite elements. Meshing complexity refers to the relative level of effort required to generate a valid finite element mesh on a given CAD geometry. A function is proposed to approximate the meshing complexity for single part CAD models. The function is dependent on a user defined element size as well as on data extracted from the geometry and topology of the CAD part. Several geometry and topology measures are proposed which both characterize the shape of the CAD part and detect configurations that complicate mesh generation. Based on a test suite of CAD models the function is demonstrated to be accurate within a certain range of error. The solution proposed here is intended to provide managers and users of meshing software a method of predicting the difficulty in meshing a CAD model. This will enable them to make decisions about model simplification and analysis approaches prior to mesh generation.
---
paper_title: A Comparison of mesh simplification algorithms
paper_content:
Abstract In many applications the need for an accurate simplification of surface meshes is becoming more and more urgent. This need is not only due to rendering speed reasons, but also to allow fast transmission of 3D models in network-based applications. Many different approaches and algorithms for mesh simplification have been proposed in the last few years. We present a survey and a characterization of the fundamental methods. Moreover, the results of an empirical comparison of the simplification codes available in the public domain are discussed. Five implementations, chosen to give a wide spectrum of different topology preserving methods, were run on a set of sample surfaces. We compared empirical computational complexities and the approximation accuracy of the resulting output meshes.
---
paper_title: Computing bounding volume hierarchies using model simplification
paper_content:
This paper presents a framework that uses the outputs of model simplification to guide the construction of bounding volume hierarchies for use in, for example, collision detection. Simplified models, besides their application to multiresolution rendering, can provide clues to the object’s shape. These clues help in the partitioning of the object’s model into components that may be more tightly bounded by simple bounding volumes. The framework naturally employs both the bottom-up and the topdown approaches of hierarchy building, and thus can have the advantages of both approaches. Experimental results show that our method built on top of the framework can indeed improve the bounding volume hierarchy, and as a result, significantly speedup the collision detection.
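A baseline bounding volume hierarchy of the kind this framework sets out to improve can be built top-down by recursively splitting the triangle set at the median along the longest axis of its bounding box. The sketch below shows that plain construction (axis-aligned boxes, median split) as an assumed illustration of what simplification-guided partitioning replaces:

```python
# Sketch: a plain top-down AABB hierarchy over triangle centroids using a
# median split on the longest axis (assumed baseline illustration).

def bounds(triangles):
    pts = [p for tri in triangles for p in tri]
    lo = tuple(min(p[i] for p in pts) for i in range(3))
    hi = tuple(max(p[i] for p in pts) for i in range(3))
    return lo, hi

def centroid(tri):
    return tuple(sum(p[i] for p in tri) / 3.0 for i in range(3))

def build_bvh(triangles, leaf_size=2):
    lo, hi = bounds(triangles)
    node = {"lo": lo, "hi": hi, "tris": None, "children": []}
    if len(triangles) <= leaf_size:
        node["tris"] = list(triangles)             # leaf node stores its triangles
        return node
    axis = max(range(3), key=lambda i: hi[i] - lo[i])   # split along the longest axis
    ordered = sorted(triangles, key=lambda t: centroid(t)[axis])
    mid = len(ordered) // 2
    node["children"] = [build_bvh(ordered[:mid], leaf_size),
                        build_bvh(ordered[mid:], leaf_size)]
    return node

if __name__ == "__main__":
    tris = [((x, y, 0.0), (x + 1.0, y, 0.0), (x, y + 1.0, 0.0))
            for x in (0.0, 1.0, 2.0, 3.0) for y in (0.0, 1.0)]
    root = build_bvh(tris)
    print(root["lo"], root["hi"], len(root["children"]))
```

The point of the paper's approach is that a purely spatial split like this one ignores the object's shape, whereas clues from simplified models can yield components that simple bounding volumes fit more tightly.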
---
paper_title: Automated mixed dimensional modelling for the finite element analysis of swept and revolved CAD features
paper_content:
Thin-walled aerospace structures can be idealised as dimensionally reduced shell models. These models can be analysed in a fraction of the time required for a full 3D model yet still provide remarkably accurate results. The disadvantages of this approach are the time taken to derive the idealised model, though this is offset by the ease and rapidity of design optimisation with respect to parameters such as shell thickness, and the fact that the stresses in the local 3D details cannot be resolved. A process for automatically creating a mixed dimensional idealisation of a component from its CAD model is outlined in this paper. It utilises information contained in the CAD feature tree to locate the sketches associated with suitable features in the model. Suitable features are those created by carrying out dimensional addition operations on 2D sketches, in particular sweeping the sketch along a line to create an extruded solid, or revolving the sketch around an axis to create an axisymmetric solid. Geometric proximity information provided by the 2D Medial Axis Transform is used to determine slender regions in the sketch suitable for dimensional reduction. The slender regions in the sketch are used to create sheet bodies representing the thin regions of the component, into which local 3D solid models of complex details are embedded. Analyses of the resulting models provide accurate results in a fraction of the run time required for the 3D model analysis. Also discussed is a web service implementation of the process which automatically dimensionally reduces 2D planar sketches in the STEP format.
---
paper_title: A small feature suppression/unsuppression system for preparing B-rep models for analysis
paper_content:
CAD technology plays an ever more central role in today's multidisciplinary simulation environments. While this has enabled highly complex and detailed models to be used earlier in the design process it has brought with it difficulties for simulation specialists. Most notably CAD models now contain many details which are irrelevant to simulation disciplines. CAD systems have feature trees which record feature creation but unfortunately this does not capture which features are relevant to which analysis discipline. Many features of little significance to an analysis only emerge during the construction of the model. The ability to selectively suppress and reinstate features while maintaining an audit trail of changes is required to facilitate the control of the idealisation process. Features suppressed for one analysis can be retrieved for use in another.This work uses combinatorial topology concepts to outline the necessary conditions so that CAD model simplification operations can be designed as continuous transformations. Irrelevant features can then be suppressed and subsequently reinstated, within defined limitations, independently from the order in which they were suppressed. The implementation of these concepts provides analysts with a mechanism for generating analysis models with different levels of detail, without having to repeat the simplification process from the original CAD geometry. Most importantly, the information recorded during the suppress operations forms an essential audit trail of the idealisation process and can be presented in a feature-tree like structure allowing analysts to review their modelling decisions retrospectively. The approach also facilitates the generation of local, detailed models encapsulating a feature of interest. The proposed system follows a Find and Fix paradigm; where different algorithms for feature finding and fixing can be utilised in a common cellular modelling framework.
---
paper_title: Topology-reducing surface simplification using a discrete solid representation
paper_content:
This paper presents a new approach for generating coarse-level approximations of topologically complex models. Dramatic topology reduction is achieved by converting a 3D model to and from a volumetric representation. Our approach produces valid, error-bounded models and supports the creation of approximations that do not interpenetrate the original model, either being completely contained in the input solid or bounding it. Several simple to implement versions of our approach are presented and discussed. We show that these methods perform significantly better than other surface-based approaches when simplifying topologically-rich models such as scene parts and complex mechanical assemblies.
---
paper_title: Decomposition of complex models for manufacturing
paper_content:
This work presents a feature recognition (FR) technique that separates protrusions bounded by freeform surface geometry. The method can also generate complementary male/female assembly features at the interface between components. A heuristic that uses edge curvature continuity is applied to direct the search and the construction of protrusion boundaries. The result is represented as a cellular model. The objective is to manufacture complex prototype assemblies on multiaxis machining centres. After describing the algorithm and presenting some examples of its application the paper concludes by discussing the approach's limitations.
---
paper_title: Representation and management of feature information in a cellular model
paper_content:
Many limitations in current feature modelling systems are inherited from the geometric representation they use for the product model. Both a very rigid and a very extensive representation are unsuitable for feature applications, at least if no convenient support is provided to manage the data. This paper describes a cellular representation for feature models that contains all the relevant information to effectively solve a variety of current problems in feature modelling. Much benefit is gained from a coherent integration between shapes of a feature model and cells in the cellular model. Every feature shape has an explicit volumetric representation in terms of cells. Specific subsets of its boundary are also distinguished in terms of cell faces and edges. Feature interactions are maintained in attributes of cells, cell faces and cell edges. Methods for modifying and querying the cellular model are presented, and their application is illustrated for feature validity maintenance, feature interaction management, feature conversion between multiple views, and feature visualization.
---
paper_title: Geometric and Solid Modeling: An Introduction
paper_content:
It is the view of the author that the streams of geometric and solid modeling are converging, and that as the importance of this convergence is anticipated and recognized, the need for the development of techniques to bridge the gap between the two becomes critical. This book is devoted to filling that need. "Geometric and Solid Modeling" deals with the concepts and tools needed to design and implement solid-modeling systems and their infrastructure and substrata, making this information remarkably accessible--to the novice as well as to the experienced designer. The essential algorithms and the underlying theory needed to design these systems are given primary emphasis. Techniques for the study and implementation of geometric algorithms are taken from computer science, numerical analysis, and symbolic computation, among other areas. Special attention is given to geometric investigations of implicit and parametric surfaces, with the focal point being the possible integration of geometric and solid modeling.
---
paper_title: Voxel based object simplification
paper_content:
Presents a simple, robust and practical method for object simplification for applications where gradual elimination of high-frequency details is desired. This is accomplished by sampling and low-pass filtering the object into multi-resolution volume buffers and applying the marching cubes algorithm to generate a multi-resolution triangle-mesh hierarchy. Our method simplifies the genus of objects and can also help existing object simplification algorithms achieve better results. At each level of detail, a multi-layered mesh can be used for an optional and efficient antialiased rendering.
---
paper_title: A CAD-CAE integration approach using feature-based multi-resolution and multi-abstraction modelling techniques
paper_content:
In spite of the widespread use of CAD systems for design and CAE systems for analysis, the two processes are not well integrated because CAD and CAE models inherently use different types of geometric models and there currently exists no generic, unified model that allows both design and analysis information to be specified and shared. In this paper, a new approach called the CAD/CAE-integrated approach is proposed and implemented by a feature-based non-manifold modelling system. The system creates and manipulates a single master model containing different types of all of the geometric models required for CAD and CAE. Both a solid model (for CAD) and a non-manifold model (for CAE) are immediately extracted from the master model through a selection process. If a design change is required, the master model is modified by the feature modelling capabilities of the system. As a result, the design and analysis models are modified simultaneously and maintained consistently. This system also supports feature-based multi-resolution and multi-abstraction modelling capabilities providing the CAD model at different levels of detail and the CAE model at various levels of abstraction.
---
paper_title: Boundary representation deformation in parametric solid modeling
paper_content:
One of the major unsolved problems in parametric solid modeling is a robust update (regeneration) of the solid's boundary representation, given a specified change in the solid's parameter values. The fundamental difficulty lies in determining the mapping between boundary representations for solids in the same parametric family. Several heuristic approaches have been proposed for dealing with this problem, but the formal properties of such mappings are not well understood. We propose a formal definition for boundary representation (BR-)deformation for solids in the same parametric family, based on the assumption of continuity: small changes in solid parameter values should result in small changes in the solid's boundary representation, which may include local collapses of cells in the boundary representation. The necessary conditions that must be satisfied by any BR-deforming mappings between boundary representations are powerful enough to identify invalid updates in many (but not all) practical situations, and the algorithms to check them are simple. Our formulation provides a formal criterion for the recently proposed heuristic approaches to “persistent naming,” and explains the difficulties in devising sufficient tests for BR-deformation encountered in practice. Finally, our methods are also applicable to more general cellular models of pointsets and should be useful in developing universal standards in parametric modeling.
---
paper_title: A cellular topology-based approach to generating progressive solid models from feature-centric models
paper_content:
Progressive mesh representation and generation have become one of the most important issues in network-based computer graphics. However, current research is mostly focused on triangular mesh models. On the other hand, solid models are widely used in industry and are applied to advanced applications such as product design and virtual assembly. Moreover, as the demand to share and transmit these solid models over the network is emerging, how to effectively stream the solid models has been considered one of the major research issues. In this paper, we present a cellular topology-based approach to generating progressive solid models (PSM) from feature-based models. The proposed approach introduces a new scheme for storing and transmitting solid models over the network. The cellular topology (CT) approach makes it possible to effectively generate PSMs and to efficiently transmit the models over the network with compact model size. Thus, an arbitrary solid model SM designed by a set of design features is stored as a much coarser solid model SM_0 together with a sequence of n detail records that indicate how to incrementally refine SM_0 exactly back into the original solid model SM = SM_n.
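The storage scheme described above (a coarse base model SM_0 plus an ordered sequence of detail records) can be sketched as a small container; the record payload and the apply callback are hypothetical placeholders rather than the paper's actual cellular-topology data structures.

from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class DetailRecord:
    # Hypothetical payload: e.g. one design feature plus the cells it affects.
    apply: Callable[[Any], Any]        # refines the model by one step

@dataclass
class ProgressiveSolidModel:
    base: Any                          # SM_0, the coarsest solid model
    records: List[DetailRecord] = field(default_factory=list)

    def at_level(self, k: int) -> Any:
        """Rebuild SM_k by replaying the first k detail records on SM_0."""
        model = self.base
        for rec in self.records[:k]:
            model = rec.apply(model)
        return model

    def full(self) -> Any:
        return self.at_level(len(self.records))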
---
paper_title: Dimensional Reduction of Surface Models for Analysis
paper_content:
This paper describes a set of procedures by which an analyst can idealise slender 2D shell structures for linear static analysis using reduced-dimensional beam finite elements. The first step is the development of the topological operations that are necessary to achieve the desired dimensionally reduced representation. Next, the automatic derivation of the necessary geometric and physical properties of the reduced dimensional entities is described, together with the application of appropriate coupling constraints between dimensions. Dimensional reduction of shell models involves finding areas of the geometric model whose dimensions are such that they may be represented in an analysis model with a 1D beam. Using the medial axis transform, geometric measures are defined for identifying such areas in the geometric model. However, topological features of the model and its medial axis were also identified as significant in the automation of dimensional reduction. The application of the medial axis transform to automatic dimensional reduction is described and example models are given.
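One way to phrase such a geometric measure is an aspect ratio taken along the medial axis: a region whose length along the axis is large compared with its local width (twice the medial radius) is a candidate for beam reduction. The sketch below works on a 2D binary mask with scikit-image; the threshold of 10 is an arbitrary illustrative choice, not the paper's criterion.

import numpy as np
from skimage.morphology import medial_axis

# Binary mask of a slender rectangular region (e.g. one face of a thin rib).
mask = np.zeros((40, 200), dtype=bool)
mask[15:25, 10:190] = True

skeleton, distance = medial_axis(mask, return_distance=True)
radii = distance[skeleton]            # local half-width sampled along the axis
axis_length = skeleton.sum()          # crude length estimate in pixels
mean_width = 2.0 * radii.mean()

aspect_ratio = axis_length / mean_width
print(f"aspect ratio ~ {aspect_ratio:.1f}")
if aspect_ratio > 10.0:               # illustrative threshold only
    print("slender region: candidate for reduction to a 1D beam")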
---
paper_title: Surface simplification using quadric error metrics
paper_content:
Many applications in computer graphics require complex, highly detailed models. However, the level of detail actually necessary may vary considerably. To control processing time, it is often desirable to use approximations in place of excessively detailed models. We have developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models. The algorithm uses iterative contractions of vertex pairs to simplify models and maintains surface error approximations using quadric matrices. By contracting arbitrary vertex pairs (not just edges), our algorithm is able to join unconnected regions of models. This can facilitate much better approximations, both visually and with respect to geometric error. In order to allow topological joining, our system also supports non-manifold surface models.
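The error-metric bookkeeping at the heart of this method can be sketched in a few lines: each triangle contributes the plane quadric Q = p p^T (with p = [a, b, c, d] for the plane ax + by + cz + d = 0) to its vertices, and the cost of placing a vertex at position v is v^T Q v in homogeneous coordinates. The pair-selection and contraction machinery of the full algorithm is omitted here.

import numpy as np

def plane_quadric(v0, v1, v2):
    """Quadric of the plane through triangle (v0, v1, v2)."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, v0)
    p = np.append(n, d)               # [a, b, c, d]
    return np.outer(p, p)             # 4x4 symmetric matrix

def vertex_error(Q, v):
    """Quadric error v^T Q v of placing a vertex at position v."""
    vh = np.append(v, 1.0)            # homogeneous coordinates
    return float(vh @ Q @ vh)

# Per-vertex quadrics are the sum over incident faces; the cost of contracting
# a pair (v1, v2) to a point vbar is then vertex_error(Q1 + Q2, vbar).
tri = [np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([0.0, 1, 0])]
Q = plane_quadric(*tri)
print(vertex_error(Q, np.array([0.3, 0.3, 0.0])))   # ~0: point lies on the plane
print(vertex_error(Q, np.array([0.3, 0.3, 0.5])))   # 0.25: squared distance to plane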
---
paper_title: Intelligent Form Feature Interaction Management in a Cellular Modeling Scheme
paper_content:
Form features present a rather attractive building block in computer-aided design environments for a variety of applications, providing the embodiment of engineering semantics in specific part shape. In this paper we address the issue of volume feature behaviour throughout interaction phenomena. First, the fundamental concept of form feature as expression of the morphology of a model, is presented. A description of the feature properties, attributes and general constraints that are relevant for our purpose is attempted. In order to develop sound validity conditions for the various classes of form features, feature definitional entities are also introduced. A comprehensive definition of interaction among features is given that encompasses both adjacent and intersecting features. We use a structured cellular modeling scheme to capture both the morphology of features and the interactions among them. Within this framework, a thorough analysis of feature interactions is performed that explores the accessibility of feature definitional entities in order to assist feature-based model editing and validation. Operations that create or modify features often interfere with pre-existing ones, producing unanticipated effects that can corrupt or, at least, modify desired feature semantics and/or morphology. On the other hand, valid features can be obtained that exhibit non-standard or disconnected topology as a result of interactions caused by such operations. Systematic management of feature interactions is explored in both cases, from an object-oriented point of view, encapsulating interaction detection methods in feature class definition. This approach is shown to be quite adequate to handle complex interactions among several features. Reasoning mechanisms may, thereafter, be required to handle each situation identified.
---
paper_title: Elimination of the adverse effects of small model features by the local modification of automatically generated meshes
paper_content:
Issues related to the automated identification and elimination of the adverse influence of small geometric model features on the quality of automatically generated meshes, using local mesh modification operators, are addressed. The definition of mesh validity with respect to the geometric model is extended to include multiple mesh entity classifications. Checks based on mesh topology are used to ensure no dimensional reductions in the locally modified mesh. Example geometric models of varied complexity containing small geometric features are used to demonstrate the ability of presented procedures to improve mesh quality in terms of aspect ratio and small angle metrics.
---
paper_title: Face Clustering of a Large-scale CAD Model for Surface Mesh Generation
paper_content:
A detailed CAD model needs manual clean-up, or simplifying operations, before a finite element mesh can be automatically generated because such a model consists of hundreds or thousands of faces many of which may be smaller than a desired mesh element size. We propose an automated face clustering method used as a pre-process of surface mesh generation. By decomposing a model into face clusters so that each region can be projected onto a simple parametric surface such as a plane, we obtain a final mesh as an aggregate of sub-meshes for respective clusters without time-consuming manual preparation work. The projection onto a surface realises re-parameterisation as well as suppression of small details. The main contribution of this work is the integration of (1) a greedy algorithm for combining faces into clusters, and (2) geometric indices that reflect various aspects of a preferable shape for a cluster. The validity of the approach is demonstrated with results of clustering and mesh generation for a realistic-scale CAD model.
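The greedy combination step can be caricatured as follows: a face joins a neighbouring cluster if its normal deviates little enough from that neighbour's normal, otherwise it seeds a new cluster. The single normal-deviation index below is an illustrative simplification of the paper's multi-criteria scheme.

import numpy as np

def greedy_face_clusters(normals, adjacency, max_angle_deg=20.0):
    """normals: (n, 3) unit face normals; adjacency: dict face -> neighbouring faces."""
    cos_thresh = np.cos(np.radians(max_angle_deg))
    cluster = {}                                  # face index -> cluster id
    next_id = 0
    for f in range(len(normals)):
        joined = None
        for g in adjacency.get(f, ()):
            if g in cluster and np.dot(normals[f], normals[g]) >= cos_thresh:
                joined = cluster[g]
                break
        if joined is None:
            cluster[f] = next_id
            next_id += 1
        else:
            cluster[f] = joined
    return cluster

# Four faces: three nearly coplanar, one tilted far away from the rest.
normals = np.array([[0.0, 0.0, 1.0], [0.02, 0.0, 1.0], [0.0, 0.03, 1.0], [0.9, 0.0, 0.44]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(greedy_face_clusters(normals, adjacency))   # {0: 0, 1: 0, 2: 0, 3: 1}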
---
paper_title: Shape preserving polyhedral simplification with bounded error
paper_content:
A new approach is introduced to reduce the number of nodes of a polyhedral model according to several conditions which make it possible to produce high quality simplified geometries. The simplified polyhedron must satisfy everywhere a geometric restoration criterion based on an error zone assigned to each node of its initial model. These error zones can be specified by the user or automatically set up using the accuracy characteristics of the digitizing device. Moreover, specific criteria are used to preserve the shape of the object during the simplification process from both geometric and topological points of view. Indeed, the uses of such polyhedra require good geometric quality, without topological and geometric singularities. The simplification process is based on a node removal method. A new strategy is developed to produce the simplified polyhedron using front propagations and multiple remeshing schemes which take into account the discrete curvature characteristics of the object. Such an approach increases the node reduction of the initial polyhedron and produces a smoothing effect on the simplified geometry. The front propagation technique also leads to a better preservation of the shape of the object. Examples illustrate the behaviour of the simplification algorithm in terms of data reduction, quality of the simplified geometry and shape preservation of objects.
---
paper_title: A small feature suppression/unsuppression system for preparing B-rep models for analysis
paper_content:
CAD technology plays an ever more central role in today's multidisciplinary simulation environments. While this has enabled highly complex and detailed models to be used earlier in the design process, it has brought with it difficulties for simulation specialists. Most notably, CAD models now contain many details which are irrelevant to simulation disciplines. CAD systems have feature trees which record feature creation, but unfortunately this does not capture which features are relevant to which analysis discipline. Many features of little significance to an analysis only emerge during the construction of the model. The ability to selectively suppress and reinstate features while maintaining an audit trail of changes is required to facilitate the control of the idealisation process. Features suppressed for one analysis can be retrieved for use in another. This work uses combinatorial topology concepts to outline the necessary conditions so that CAD model simplification operations can be designed as continuous transformations. Irrelevant features can then be suppressed and subsequently reinstated, within defined limitations, independently from the order in which they were suppressed. The implementation of these concepts provides analysts with a mechanism for generating analysis models with different levels of detail, without having to repeat the simplification process from the original CAD geometry. Most importantly, the information recorded during the suppress operations forms an essential audit trail of the idealisation process and can be presented in a feature-tree-like structure, allowing analysts to review their modelling decisions retrospectively. The approach also facilitates the generation of local, detailed models encapsulating a feature of interest. The proposed system follows a Find and Fix paradigm, in which different algorithms for feature finding and fixing can be utilised in a common cellular modelling framework.
---
paper_title: Adaptation of CAD model topology for finite element analysis
paper_content:
The preparation of a Finite Element Analysis (FEA) model from a Computer Aided Design (CAD) model is still a difficult task since its Boundary Representation (B-Rep) is often composed of a large number of faces, some of which may be narrow or feature short edges that are smaller than the desired FE size (for mesh generation). Consequently, these faces and edges are considered as geometric artefacts that are irrelevant for the automatic mesh generation process. Such inconsistencies often cause either poorly-shaped elements or meshes that are locally over-densified. These inconsistencies not only slow down the solver (using too many elements) but also produce poor or inappropriate simulation results. In this context, we propose a "Mesh Constraint Topology" (MCT) model with automatic adaptation operators aimed at transforming a CAD model boundary decomposition into a FE model, featuring only mesh-relevant faces, edges and vertices, i.e., an explicit data model that is intrinsically adapted to the meshing process. We provide a set of criteria that can be used to transform CAD model boundary topology using MCT transformations, i.e., edge deletion, vertex deletion, edge collapsing, and merging of vertices. The proposed simplification criteria take into account a size map, a discretization error threshold and boundary conditions. Applications and results are presented through the adaptation of CAD models using the proposed simplification criteria.
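An illustrative version of the size-map part of such criteria might look like the following; the real criteria also involve a discretization error threshold and boundary conditions, and the 0.3 ratio is an arbitrary example value rather than the paper's setting.

def should_collapse(edge_length, local_target_size, ratio=0.3):
    """Illustrative size-map test: an edge much shorter than the local target
    element size is treated as mesh-irrelevant and collapsed."""
    return edge_length < ratio * local_target_size

# With a 5 mm target element size, a 0.8 mm sliver edge would be collapsed,
# while a 4 mm edge is kept and meshed normally.
print(should_collapse(0.8, 5.0))   # True
print(should_collapse(4.0, 5.0))   # False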
---
paper_title: A cellular topology-based approach to generating progressive solid models from feature-centric models
paper_content:
Progressive mesh representation and generation have become one of the most important issues in network-based computer graphics. However, current research is mostly focused on triangular mesh models. On the other hand, solid models are widely used in industry and are applied to advanced applications such as product design and virtual assembly. Moreover, as the demand to share and transmit these solid models over the network is emerging, how to effectively stream the solid models has been considered one of the major research issues. In this paper, we present a cellular topology-based approach to generating progressive solid models (PSM) from feature-based models. The proposed approach introduces a new scheme for storing and transmitting solid models over the network. The cellular topology (CT) approach makes it possible to effectively generate PSMs and to efficiently transmit the models over the network with compact model size. Thus, an arbitrary solid model SM designed by a set of design features is stored as a much coarser solid model SM_0 together with a sequence of n detail records that indicate how to incrementally refine SM_0 exactly back into the original solid model SM = SM_n.
---
paper_title: Topology-reducing surface simplification using a discrete solid representation
paper_content:
This paper presents a new approach for generating coarse-level approximations of topologically complex models. Dramatic topology reduction is achieved by converting a 3D model to and from a volumetric representation. Our approach produces valid, error-bounded models and supports the creation of approximations that do not interpenetrate the original model, either being completely contained in the input solid or bounding it. Several simple to implement versions of our approach are presented and discussed. We show that these methods perform significantly better than other surface-based approaches when simplifying topologically-rich models such as scene parts and complex mechanical assemblies.
---
paper_title: Voxel based object simplification
paper_content:
Presents a simple, robust and practical method for object simplification for applications where gradual elimination of high-frequency details is desired. This is accomplished by sampling and low-pass filtering the object into multi-resolution volume buffers and applying the marching cubes algorithm to generate a multi-resolution triangle-mesh hierarchy. Our method simplifies the genus of objects and can also help existing object simplification algorithms achieve better results. At each level of detail, a multi-layered mesh can be used for an optional and efficient antialiased rendering.
---
paper_title: Feature-based multiresolution techniques for product design
paper_content:
3D computer-aided design (CAD) systems based on the feature-based solid modelling technique have been widely adopted for product design. However, when part models associated with features are used in various downstream applications, simplified models at various levels of detail (LODs) are frequently more desirable than the full details of the parts. In particular, the need for a feature-based multiresolution representation of a solid model, representing an object at multiple LODs in units of features, is increasing for engineering tasks. One challenge is to generate valid models at various LODs after an arbitrary rearrangement of features using a certain LOD criterion, because composite Boolean operations consisting of union and subtraction are not commutative. The other challenges are to devise a proper topological framework for multiresolution representation, to suggest more reasonable LOD criteria, and to extend applications. This paper surveys the recent research on these issues.
---
paper_title: Feature-based multiresolution modeling of solids
paper_content:
Recently, three-dimensional CAD systems based on feature-based solid modeling techniques have been widely used for product design. However, when part models associated with features are used in various downstream applications, simplified models at various levels of detail (LODs) are frequently more desirable than the full details of the parts. One challenge is to generate valid models at various LODs after an arbitrary rearrangement of features using a certain LOD criterion, because composite Boolean operations consisting of union and subtraction are not commutative. This article proposes an algorithm for feature-based multiresolution solid modeling based on the effective volumes of features. This algorithm guarantees the same resulting shape and the reasonable intermediate LOD models for an arbitrary rearrangement of the features, regardless of whether feature types are additive or subtractive. This characteristic enables various LOD criteria to be used for a wide range of applications including computer-aided design and analysis.
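The non-commutativity problem referred to above is easy to see on a toy voxel model: union (an additive feature) and subtraction (a subtractive feature) do not commute, which is why a naive rearrangement of features by LOD changes the final shape. This is only a minimal illustration of the problem, not the paper's effective-volume algorithm.

import numpy as np

base = np.zeros((8, 8), dtype=bool)
base[1:7, 1:7] = True                                # stock block

boss = np.zeros_like(base); boss[3:5, 0:3] = True    # additive feature
hole = np.zeros_like(base); hole[2:6, 2:6] = True    # subtractive feature

order_a = (base | boss) & ~hole       # add boss, then cut hole
order_b = (base & ~hole) | boss       # cut hole, then add boss

print(np.array_equal(order_a, order_b))              # False: results differ
print(int((order_a ^ order_b).sum()), "cells differ")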
---
paper_title: A CAD-CAE integration approach using feature-based multi-resolution and multi-abstraction modelling techniques
paper_content:
In spite of the widespread use of CAD systems for design and CAE systems for analysis, the two processes are not well integrated because CAD and CAE models inherently use different types of geometric models and there currently exists no generic, unified model that allows both design and analysis information to be specified and shared. In this paper, a new approach called the CAD/CAE-integrated approach is proposed and implemented by a feature-based non-manifold modelling system. The system creates and manipulates a single master model containing different types of all of the geometric models required for CAD and CAE. Both a solid model (for CAD) and a non-manifold model (for CAE) are immediately extracted from the master model through a selection process. If a design change is required, the master model is modified by the feature modelling capabilities of the system. As a result, the design and analysis models are modified simultaneously and maintained consistently. This system also supports feature-based multi-resolution and multi-abstraction modelling capabilities providing the CAD model at different levels of detail and the CAE model at various levels of abstraction.
---
paper_title: B-Rep model simplification by automatic fillet/round suppressing for efficient automatic feature recognition
paper_content:
The CAD models of real-world mechanical parts usually have many fillets and rounds that are essential to ensure manufacturability and assemblability. In feature-based modeling, fillets and rounds are often referred to as secondary features that are used to modify the local details of the primary features such as holes, slots and pockets. Although the major shape of the primary features may not be affected, fillets and rounds can greatly change the geometric and topological patterns of the primary features. The geometric and topological variations can result in inefficient feature semantics classification in feature recognition. When feature interactions occur, it becomes even harder to identify the regular patterns of the primary features. In addition, the fillets and rounds consist of non-linear surfaces such as cylindrical surfaces, spherical surfaces and toroidal surfaces, which bring difficulties to volumetric feature extraction by half-space partitioning. In order to facilitate volumetric feature extraction and feature semantics classification, we pre-process the input B-Rep models by suppressing fillets and rounds before the feature recognition. Thus the input B-Rep models can be simplified without altering the major shapes of the primary features, the targets of the feature recognition. The B-Rep simplification can be viewed as the reverse process of edge blending in feature-based design. In this paper, several issues on fillet/round suppressing are discussed and a relatively general and robust approach is proposed to suppress blends in B-Rep models of mechanical parts before surface feature recognition and volumetric feature extraction.
---
paper_title: Removal of blends from boundary representation models
paper_content:
This paper reports an algorithm for deletion of blends (or fillets) from Boundary Representation (B-rep) solid models. Blend deletion is usually performed as the first step in feature recognition since it simplifies the model for recognition of volumetric features. The algorithm handles several blend types that include face-face, face-edge and vertex blends. It also handles interactions of blends with other blends and/or volumetric features. The main feature of our approach is the usage of the underlying blend structure in predicting the final topology. This results in fewer intersections and greater predictability than earlier face-deletion approaches, especially for large blend networks. Another unique feature of our algorithm is the recreation of new faces in certain situations of blend deletion.
---
paper_title: An integrated approach to realize multi-resolution of B-rep model
paper_content:
It is becoming a common trend that many designers work together on a very complex assembly in a collaborative environment. In this environment, every designer should be able to see the whole assembly in full detail, or at least in a rough shape. Even though hardware technology is improving very rapidly, it is very difficult to display a very complex assembly at a speed that allows smooth interaction for designers. This problem could be solved if a designer could manipulate his portion of the assembly in full resolution while the remaining portion of the assembly is displayed in a rough resolution. It is also desired that the remaining portion be converted to the full resolution when needed. To realize this environment, the capabilities to simplify portions of an assembly and to reset them to the original resolution should be added to current CAD systems. Thus, operators realizing multi-resolution on B-rep models are proposed in this paper: the wrap-around, smooth-out, and thinning operators. By applying these operators sequentially in an appropriate order, an assembly model of any desired resolution can easily be generated, and the assembly can also go back to a finer resolution. In this paper, the data structures and the processes to realize these operators are described and a prototype modeling system with these operators is also demonstrated.
---
paper_title: Wrap-around operation to make multi-resolution model of part and assembly
paper_content:
When a single computer or network deals with a large and complex assembly, a special method to compress or simplify the assembly is needed. One method is the use of a multi-resolution modeler, for which many approaches have been tried to obtain multiple resolutions. Some of these approaches have considered triangular mesh compression, while some have considered features and topologies. This paper proposes a method for simplifying boundary representation models which imitates the wrapping of products with kitchen plastic wrap. In wrapping, the plastic wrap hides the details of a product. The method is composed of two steps. The first step is the part-level wrap-around operation. In this step, a convex inner loop is used as a clue to find concave space, and this space is filled by removing the convex inner loop. After filling the concave space, the faces that cannot be seen from outside the model are removed. The level of detail in our model is defined using a set composed of the convex inner loop and the faces that are removed with it. The second step is the assembly-level wrap-around operation. As a result of the first step, overlaps between parts exist and faces that cannot be seen from outside the model remain. These faces are deleted in the second step. A graph traversal method used to find these faces is presented. The proposed method is implemented using the Parasolid kernel V12.1 and allows arbitrary movement to coarser and finer resolutions.
---
paper_title: Reconstruction of feature volumes and feature suppression
paper_content:
This paper describes a systematic algorithm for reconstructing the feature volume from a set of faces in a solid model. This algorithm serves a dual purpose. Firstly, the algorithm generates the feature volume by extending or contracting the neighboring faces of the set of faces. Secondly, the algorithm may also be used to remove (or suppress) the face-set from the model. The algorithm uses a divide-and-conquer strategy and geometric cues to identify the correct topology. It robustly handles a wide class of feature volumes with complex topology and geometry. A simplified version of the algorithm has also been presented to handle volumes resulting from 2.5D features.
---
paper_title: Wrap-around Operation for Multi-resolution CAD Model
paper_content:
In the design of a very complex product, the multi-resolution modeling technique plays an important role in the real-time operation of the model on computers. There are many previous works that simplify or compress the geometric data to obtain a multi-resolution model from the original shape. However, most of them focused on the generation of multi-resolution models for facet-based models, not for B-rep models. The facet model is good enough for rendering purposes, but is not the basis model for many commercial CAD systems. A new multi-resolution algorithm for the B-rep model is needed to improve the interactivity in dealing with a large assembly model generated by CAD systems. We propose a new modeling operation, called wrap-around, to simplify a B-rep model. With this operation, a part model and an assembly can be simplified efficiently to yield multi-resolution models. Also, applying the reverse wrap-around operation can recover the original un-simplified model. In this paper, the structures and the pr...
---
paper_title: Mathematical Foundations of Scientific Visualization, Computer Graphics, and Massive Data Exploration
paper_content:
Visualization is one of the most active and exciting areas of Mathematics and Computing Science, and indeed one which is only beginning to mature. Current visualization algorithms break down for very large data sets. While present approaches use multi-resolution ideas, future data sizes will not be handled that way. New algorithms based on sophisticated mathematical modeling techniques must be devised which will permit the extraction of high-level topological structures that can be visualized. For these reasons a workshop was organized at the Banff International Research Station, focused specifically on mathematical issues. A primary objective of the workshop was to gather together a diverse set of researchers in the mathematical areas relevant to the recent advances in order to discuss the research challenges facing this field in the next several years. The workshop was organized into five different thrusts: - Topology and Discrete Methods; - Signal and Geometry Processing; - Partial Differential Equations; - Data Approximation Techniques; - Massive Data Applications. This book presents a summary of the research ideas presented at this workshop.
---
paper_title: Homotopy-Preserving Medial Axis Simplification
paper_content:
We present a novel algorithm to compute a simplified medial axis of a polyhedron. Our simplification algorithm tends to remove unstable features of Blum's medial axis. Moreover, our algorithm preserves the topological structure of the original medial axis and ensures that the simplified medial axis has the same homotopy type as Blum's medial axis. We use the separation angle formed by connecting a point on the medial axis to closest points on the boundary as a measure of the stability of the medial axis at the point. The medial axis is decomposed into its parts that are the sheets, seams and junctions. We present a stability measure of each part of the medial axis based on separation angles and examine the relation between the stability measures of adjacent parts. Our simplification algorithm uses iterative pruning of the parts based on efficient local tests. We have applied the algorithm to compute a simplified medial axis of complex models with tens of thousands of triangles and complex topologies.
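The separation angle at a medial point m with closest boundary points b1 and b2 is simply the angle between the vectors from m to those points; a small angle flags an unstable part that pruning would remove. A quick sketch with made-up coordinates:

import numpy as np

def separation_angle(m, b1, b2):
    """Angle (radians) between the vectors from medial point m to its two
    closest boundary points b1 and b2."""
    u = (b1 - m) / np.linalg.norm(b1 - m)
    v = (b2 - m) / np.linalg.norm(b2 - m)
    return float(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

m  = np.array([0.0, 0.0, 0.0])
b1 = np.array([1.0, 0.05, 0.0])       # nearly coincident boundary points:
b2 = np.array([1.0, -0.05, 0.0])      # small angle -> unstable medial part
b3 = np.array([-1.0, 0.0, 0.0])       # opposite sides -> stable part

print(np.degrees(separation_angle(m, b1, b2)))   # ~5.7 degrees
print(np.degrees(separation_angle(m, b1, b3)))   # ~177 degrees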
---
paper_title: Dimensional Reduction of Surface Models for Analysis
paper_content:
This paper describes a set of procedures by which an analyst can idealise slender 2D shell structures for linear static analysis using reduced-dimensional beam finite elements. The first step is the development of the topological operations that are necessary to achieve the desired dimensionally reduced representation. Next, the automatic derivation of the necessary geometric and physical properties of the reduced dimensional entities is described, together with the application of appropriate coupling constraints between dimensions. Dimensional reduction of shell models involves finding areas of the geometric model whose dimensions are such that they may be represented in an analysis model with a 1D beam. Using the medial axis transform, geometric measures are defined for identifying such areas in the geometric model. However, topological features of the model and its medial axis were also identified as significant in the automation of dimensional reduction. The application of the medial axis transform to automatic dimensional reduction is described and example models are given.
---
paper_title: Automatic solid decomposition and reduction for non-manifold geometric model generation
paper_content:
Oftentimes the level of complexity of a structure requires the analyst to idealize the finite element model, so that a solution may be obtained within a reasonable time scale. This paper describes some operations that have been defined and implemented which allow the user to reduce the dimensionality of geometric models automatically using a novel decomposition and reduction method. The model is first decomposed into simpler solids, and these solids undergo a reduction process to extract their midsurfaces. In some cases, not every part of the model can be reduced and thus, a non-manifold geometric model, suitable for mixed finite element modeling, is generated.
---
| Title: A Survey of CAD Model Simplification Techniques for Physics-based Simulation Applications
Section 1: Introduction
Description 1: Introduce the significance of physics-based simulations in the product realization process and outline the challenges caused by complex CAD models in simulations.
Section 2: Terminology
Description 2: Define the basic terminology and model representations used in CAD model simplification, including boundary representation, constructive solid geometry, and features.
Section 3: Techniques Based on Surface Entities
Description 3: Present techniques that simplify a model's surface entities, such as low pass filtering, face cluster-based simplification, and size-based entity decimation.
Section 4: Techniques Based on Volumetric Entities
Description 4: Discuss simplification techniques that operate on volumetric entities, including voxel-based simplification and effective volume-based simplification.
Section 5: Techniques Based on Explicit Features
Description 5: Describe methods that recognize and simplify explicit application features like prismatic features, blends, and arbitrary-shaped features.
Section 6: Techniques Based on Dimension Reduction
Description 6: Explain dimension reduction methods that simplify models by reducing their dimensionality, such as medial axis transform and mid-surface abstraction.
Section 7: Discussion
Description 7: Analyze the taxonomy of model simplification techniques, comparing their application domains, types of features handled, and other critical factors for selection.
Section 8: Conclusions
Description 8: Summarize the key findings from the survey, highlight open research issues, and emphasize the need for further advancements in CAD model simplification techniques for physics-based simulations. |
Utilizing Noise Addition for Data Privacy, an Overview | 13 | ---
paper_title: Secure Data Management in Decentralized Systems
paper_content:
The field of database security has expanded greatly, with the rapid development of global inter-networked infrastructure. Databases are no longer stand-alone systems accessible only to internal users of organizations. Today, businesses must allow selective access from different security domains. New data services emerge every day, bringing complex challenges to those whose job is to protect data security. The Internet and the web offer means for collecting and sharing data with unprecedented flexibility and convenience, presenting threats and challenges of their own. This book identifies and addresses these new challenges and more, offering solid advice for practitioners and researchers in industry.
---
paper_title: Minimality Attack in Privacy Preserving Data Publishing
paper_content:
Data publishing generates much concern over the protection of individual privacy. Recent studies consider cases where the adversary may possess different kinds of knowledge about the data. In this paper, we show that knowledge of the mechanism or algorithm of anonymization for data publication can also lead to extra information that assists the adversary and jeopardizes individual privacy. In particular, all known mechanisms try to minimize information loss and such an attempt provides a loophole for attacks. We call such an attack a minimality attack. In this paper, we introduce a model called m-confidentiality which deals with minimality attacks, and propose a feasible solution. Our experiments show that minimality attacks are practical concerns on real datasets and that our algorithm can prevent such attacks with very little overhead and information loss.
---
paper_title: Secure Data Management in Decentralized Systems
paper_content:
The field of database security has expanded greatly, with the rapid development of global inter-networked infrastructure. Databases are no longer stand-alone systems accessible only to internal users of organizations. Today, businesses must allow selective access from different security domains. New data services emerge every day, bringing complex challenges to those whose job is to protect data security. The Internet and the web offer means for collecting and sharing data with unprecedented flexibility and convenience, presenting threats and challenges of their own. This book identifies and addresses these new challenges and more, offering solid advice for practitioners and researchers in industry.
---
paper_title: Utility and privacy of data sources: Can Shannon help conceal and reveal information?
paper_content:
The problem of private information “leakage” (inadvertently or by malicious design) from the myriad large centralized searchable data repositories drives the need for an analytical framework that quantifies unequivocally how safe private data can be (privacy) while still providing useful benefit (utility) to multiple legitimate information consumers. Rate distortion theory is shown to be a natural choice to develop such a framework which includes the following: modeling of data sources, developing application independent utility and privacy metrics, quantifying utility-privacy tradeoffs irrespective of the type of data sources or the methods of providing privacy, developing a side-information model for dealing with questions of external knowledge, and studying a successive disclosure problem for multiple query data sources.
---
paper_title: Composition attacks and auxiliary information in data privacy
paper_content:
Privacy is an increasingly important aspect of data publishing. Reasoning about privacy, however, is fraught with pitfalls. One of the most significant is the auxiliary information (also called external knowledge, background knowledge, or side information) that an adversary gleans from other channels such as the web, public records, or domain knowledge. This paper explores how one can reason about privacy in the face of rich, realistic sources of auxiliary information. Specifically, we investigate the effectiveness of current anonymization schemes in preserving privacy when multiple organizations independently release anonymized data about overlapping populations. 1. We investigate composition attacks, in which an adversary uses independent anonymized releases to breach privacy. We explain why recently proposed models of limited auxiliary information fail to capture composition attacks. Our experiments demonstrate that even a simple instance of a composition attack can breach privacy in practice, for a large class of currently proposed techniques. The class includes k-anonymity and several recent variants. 2. On a more positive note, certain randomization-based notions of privacy (such as differential privacy) provably resist composition attacks and, in fact, the use of arbitrary side information. This resistance enables "stand-alone" design of anonymization schemes, without the need for explicitly keeping track of other releases. We provide a precise formulation of this property, and prove that an important class of relaxations of differential privacy also satisfy the property. This significantly enlarges the class of protocols known to enable modular design.
---
paper_title: On the Complexity of Optimal Microaggregation for Statistical Disclosure Control
paper_content:
Statistical disclosure control (SDC), also termed inference control two decades ago, is an integral part of data security dealing with the protection of statistical databases. The basic problem in SDC is to release data in a way that does not lead to disclosure of individual information (high security) but preserves the informational content as much as possible (low information loss). SDC is dual with data mining in that progress of data mining techniques forces official statistics to a continual improvement of SDC techniques: the more powerful the inferences that can be made on a released data set, the more protection is needed so that no inference jeopardizes the privacy of individual respondents’ numerical data. This paper deals with the computational complexity of optimal microaggregation, where optimal means yielding minimal information loss for a fixed security level. More specifically, we show that the problem of optimal microaggregation cannot be exactly solved in polynomial time. This result is relevant because it provides theoretical justification for the lack of exact optimal algorithms and for the current use of heuristic approaches.
---
paper_title: The Boundary Between Privacy and Utility in Data Publishing
paper_content:
We consider the privacy problem in data publishing: given a database instance containing sensitive information, "anonymize" it to obtain a view such that, on the one hand, attackers cannot learn any sensitive information from the view, and on the other hand, legitimate users can use it to compute useful statistics. These are conflicting goals. In this paper we prove an almost crisp separation of the case when a useful anonymization algorithm is possible from when it is not, based on the attacker's prior knowledge. Our definition of privacy is derived from existing literature and relates the attacker's prior belief for a given tuple t with the posterior belief for the same tuple. Our definition of utility is based on the error bound on the estimates of counting queries. The main result has two parts. First, we show that if the prior beliefs for some tuples are large then there exists no useful anonymization algorithm. Second, we show that when the prior is bounded for all tuples then there exists an anonymization algorithm that is both private and useful. The anonymization algorithm that forms our positive result is novel, and improves the privacy/utility tradeoff of previously known algorithms with privacy/utility guarantees such as FRAPP.
---
paper_title: Security and privacy in online social networks: A survey
paper_content:
Social networking is becoming increasingly important due to the recent surge in online interaction. Social network analysis can be used to study the functioning of computer networks, information flow patterns in communities, and emergent behavior of physical and biological systems. In this paper, the mathematical formulation and computational models for security and privacy of social network data are discussed. Several possible attack strategies are presented so that the mathematical formulation can take them into account. The metrics for measuring the amount of security and privacy in an online social network (OSN) are discussed to give a sense of how good a model is. Based on these current techniques and attack strategies, future directions of research are discussed.
---
paper_title: A survey on data security in data warehousing: Issues, challenges and opportunities
paper_content:
Data Warehouses (DWs) are among the enterprise's most valuable assets where critical business information is concerned, making them an appealing target for malicious inside and outside attackers. Given the volume of data and the nature of DW queries, most of the existing data security solutions for databases are inefficient, consuming too many resources and introducing too much overhead in query response time, or resulting in too many false positive alarms (i.e., incorrect detection of attacks) to be checked. In this paper, we present a survey on currently available data security techniques, focusing on specific issues and requirements concerning their use in data warehousing environments. We also point out challenges and opportunities for future research work in this field.
---
paper_title: Privacy Preservation in Data Mining Through Noise Addition
paper_content:
Due to advances in information processing technology and storage capacity, huge amounts of data are now being collected for various data analyses. Data mining techniques, such as classification, are often applied to these data to extract hidden information. Throughout the data mining process, the data are exposed to several parties, and such exposure potentially leads to breaches of individual privacy. This thesis presents a comprehensive noise addition technique for protecting individual privacy in a data set used for classification, while maintaining the data quality. We add noise to all attributes, both numerical and categorical, and both to class and non-class, in such a way that the original patterns are preserved in a perturbed data set. Our technique is also capable of incorporating previously proposed noise addition techniques that maintain the statistical parameters of the data set, including correlations among attributes. Thus the perturbed data set may be used not only for classification but also for statistical analysis. Our proposal has two main advantages. Firstly, as also suggested by our experimental results, the perturbed data set maintains the same or very similar patterns as the original data set, as well as the correlations among attributes. While there are some noise addition techniques that maintain the statistical parameters of the data set, to the best of our knowledge this is the first comprehensive technique that preserves the patterns and thus removes the so-called Data Mining Bias from the perturbed data set. Secondly, re-identification of the original records directly depends on the amount of noise added, and in general can be made arbitrarily hard, while still preserving the original patterns in the data set. The only exception to this is the case when an intruder knows enough about the record to learn the confidential class value by applying the classifier. However, this is always possible, even when the original record has not been used in the training data set. In other words, provided that enough noise is added, our technique makes the records from the training set as safe as any other previously unseen records of the same kind. In addition to the above contribution, this thesis also explores the suitability of prediction accuracy as a sole indicator of data quality, and proposes a technique for clustering both categorical values and records containing such values.
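One standard building block behind the statistics-preserving part of such techniques is correlated noise addition: drawing noise from a multivariate normal whose covariance is proportional to the data covariance, so that correlations between numerical attributes survive perturbation. The sketch below uses a synthetic two-attribute data set and is only this single ingredient, not the thesis' full method (which also handles categorical and class attributes).

import numpy as np

rng = np.random.default_rng(0)

# Synthetic numerical microdata with correlated attributes.
n = 5000
age = rng.normal(40, 10, n)
income = 1000 * age + rng.normal(0, 5000, n)
X = np.column_stack([age, income])

d = 0.2                                   # noise level (fraction of data covariance)
Sigma = np.cov(X, rowvar=False)
noise = rng.multivariate_normal(np.zeros(2), d * Sigma, size=n)
Y = X + noise                             # perturbed data set: Cov(Y) = (1 + d) * Sigma

print(np.corrcoef(X, rowvar=False)[0, 1])  # original correlation
print(np.corrcoef(Y, rowvar=False)[0, 1])  # nearly unchanged correlation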
---
paper_title: On the Security of Noise Addition for Privacy in Statistical Databases
paper_content:
Noise addition is a family of methods used in the protection of the privacy of individual data (microdata) in statistical databases. This paper is a critical analysis of the security of the methods in that family.
---
paper_title: Microdata Protection through Noise Addition
paper_content:
Microdata protection by adding noise has been discussed for more than 20 years now. Several algorithms with different characteristics have been developed. The simplest algorithm consists of adding white noise to the data. More sophisticated methods use more or less complex transformations of the data and more complex error-matrices to improve the results. This contribution gives an overview of the different algorithms and discusses their properties in terms of analytical validity and level of protection. To this end, some theoretical considerations are presented and an illustrative empirical example is given.
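The "simplest algorithm" mentioned here, adding uncorrelated white noise to each value, is essentially a one-liner; its side effect is that the variance of every masked attribute is inflated by the noise variance, which is one motivation for the more elaborate transformations and error matrices. A small illustration with made-up data:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(50_000, 12_000, size=10_000)     # e.g. an income attribute

sigma = 4_000                                   # chosen noise standard deviation
y = x + rng.normal(0.0, sigma, size=x.shape)    # white-noise masking

print(round(x.var()), round(y.var()))           # variance grows by about sigma**2
print(round(x.mean()), round(y.mean()))         # mean is roughly preserved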
---
paper_title: Differential privacy: A survey of results
paper_content:
Over the past five years a new approach to privacy-preserving data analysis has borne fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.
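One of the basic techniques such surveys recall is the Laplace mechanism: perturb a query answer with Laplace noise whose scale is the query's sensitivity divided by epsilon. A minimal sketch for a counting query (sensitivity 1), with an arbitrary example count:

import numpy as np

rng = np.random.default_rng(2)

def laplace_count(true_count, epsilon, sensitivity=1.0):
    """epsilon-differentially private release of a counting query."""
    scale = sensitivity / epsilon
    return true_count + rng.laplace(0.0, scale)

true_count = 138            # e.g. number of records matching a predicate
for eps in (0.1, 1.0, 10.0):
    print(eps, round(laplace_count(true_count, eps), 1))
# Smaller epsilon -> more noise -> stronger privacy, lower utility.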
---
paper_title: Does differential privacy protect terry gross' privacy?
paper_content:
The concept of differential privacy was motivated through the example of Terry Gross' height in Dwork (2006). In this paper, we show that when a procedure based on differential privacy is implemented, it neither protects Terry Gross' privacy nor does it provide meaningful responses to queries. We also provide an additional illustration using income data from the US Census. These illustrations raise serious questions regarding the efficacy of using differential privacy based masking mechanism for numerical data.
---
paper_title: Differential Privacy Under Fire
paper_content:
Anonymizing private data before release is not enough to reliably protect privacy, as Netflix and AOL have learned to their cost. Recent research on differential privacy opens a way to obtain robust, provable privacy guarantees, and systems like PINQ and Airavat now offer convenient frameworks for processing arbitrary user-specified queries in a differentially private way. However, these systems are vulnerable to a variety of covert-channel attacks that can be exploited by an adversarial querier. We describe several different kinds of attacks, all feasible in PINQ and some in Airavat. We discuss the space of possible countermeasures, and we present a detailed design for one specific solution, based on a new primitive we call predictable transactions and a simple differentially private programming language. Our evaluation, which relies on a proof-of-concept implementation based on the Caml Light runtime, shows that our design is effective against remotely exploitable covert channels, at the expense of a higher query completion time.
---
paper_title: Does differential privacy protect terry gross' privacy?
paper_content:
The concept of differential privacy was motivated through the example of Terry Gross' height in Dwork (2006). In this paper, we show that when a procedure based on differential privacy is implemented, it neither protects Terry Gross' privacy nor does it provide meaningful responses to queries. We also provide an additional illustration using income data from the US Census. These illustrations raise serious questions regarding the efficacy of using differential privacy based masking mechanism for numerical data.
---
paper_title: Statistics: An Introduction Using R
paper_content:
Preface. Chapter 1. Fundamentals. Chapter 2. Dataframes. Chapter 3. Central Tendency. Chapter 4. Variance. Chapter 5. Single Samples. Chapter 6. Two Samples. Chapter 7. Statistical Modelling. Chapter 8. Regression. Chapter 9. Analysis of Variance. Chapter 10. Analysis of Covariance. Chapter 11. Multiple Regression. Chapter 12. Contrasts. Chapter 13. Count Data. Chapter 14. Proportion Data. Chapter 15. Death and Failure Data. Chapter 16. Binary Response Variable. Appendix 1: Fundamentals of the R Language. References and Further Reading. Index.
---
paper_title: On the Security of Noise Addition for Privacy in Statistical Databases
paper_content:
Noise addition is a family of methods used in the protection of the privacy of individual data (microdata) in statistical databases. This paper is a critical analysis of the security of the methods in that family.
---
paper_title: The Essential Physics of Medical Imaging
paper_content:
This renowned work is derived from the authors' acclaimed national review course ("Physics of Medical Imaging") at the University of California-Davis for radiology residents. The text is a guide to the fundamental principles of medical imaging physics, radiation protection and radiation biology, with complex topics presented in the clear and concise manner and style for which these authors are known. Coverage includes the production, characteristics and interactions of ionizing radiation used in medical imaging and the imaging modalities in which they are used, including radiography, mammography, fluoroscopy, computed tomography and nuclear medicine. Special attention is paid to optimizing patient dose in each of these modalities. Sections of the book address topics common to all forms of diagnostic imaging, including image quality and medical informatics as well as the non-ionizing medical imaging modalities of MRI and ultrasound. The basic science important to nuclear imaging, including the nature and production of radioactivity, internal dosimetry and radiation detection and measurement, are presented clearly and concisely. Current concepts in the fields of radiation biology and radiation protection relevant to medical imaging, and a number of helpful appendices complete this comprehensive textbook. The text is enhanced by numerous full color charts, tables, images and superb illustrations that reinforce central concepts. The book is ideal for medical imaging professionals, and teachers and students in medical physics and biomedical engineering. Radiology residents will find this text especially useful in bolstering their understanding of imaging physics and related topics prior to board exams. Features: new Four-color throughout; new Companion website with fully searchable text and images; basic line drawings help to explain concepts; comprehensive coverage of diagnostic imaging modalities; and superb writing style of the author team helps make a difficult subject approachable and engaging.
---
| Title: Utilizing Noise Addition for Data Privacy, an Overview
Section 1: Introduction
Description 1: This section introduces the challenge of protecting data privacy in statistical databases and outlines the need for noise addition as a method for enhancing confidentiality.
Section 2: Background
Description 2: This section explains key terms and concepts crucial to understanding noise addition, including data privacy and confidentiality, data de-identification, and the balance between data utility and privacy.
Section 3: Related Work
Description 3: This section surveys existing research and techniques in data privacy, emphasizing current state-of-the-art methods and highlighting the need for noise addition.
Section 4: Noise Addition
Description 4: This section describes noise addition perturbation methods, including how noise is added to or multiplied with confidential attributes to provide data confidentiality.
Section 5: Additive Noise
Description 5: This section delves into the specifics of additive noise, including the mathematical expressions and examples of how stochastic noise is used to conceal data values.
Section 6: Multiplicative Noise
Description 6: This section details multiplicative noise, explaining how random numbers are generated and multiplied with original data to achieve noise addition.
Section 7: Logarithmic Multiplicative Noise
Description 7: This section explains a variation of multiplicative noise where a logarithmic transformation is applied to the data before adding noise.
Section 8: Differential Privacy
Description 8: This section examines differential privacy as a modern noise addition technique, describing its mechanism and the balance between privacy and utility it aims to achieve.
Section 9: Differential Privacy Pros and Cons
Description 9: This section discusses the advantages and disadvantages of differential privacy, particularly its impact on data utility and statistical properties.
Section 10: Statistical Background for Noise Addition
Description 10: This section provides a detailed statistical foundation for noise addition, covering mean, variance, standard deviation, covariance, and correlation.
Section 11: Signal Noise Ratio (SNR)
Description 11: This section explores the application of SNR in data perturbation using noise addition, aiming for optimal data utility while maintaining privacy.
Section 12: Illustration
Description 12: This section gives a practical example of data perturbation with noise addition, detailing the steps and providing graphical comparisons between the original and perturbed data.
Section 13: Conclusion
Description 13: This section summarizes the main points of the paper, emphasizing the trade-offs between privacy and utility in noise addition techniques and suggesting areas for future research. |
A survey on future railway radio communications services: challenges and opportunities | 23 | ---
paper_title: An overview of GSM-R technology and its shortcomings
paper_content:
Railway communication technologies undergo a revolutionary change bringing them from the analog to the digital era. The European Rail Traffic Management System (ERTMS) replaces numerous incompatible analog radio systems, classical side-track signs and legacy in-cab signaling with an integrated comprehensive solution. This paper presents an overview of GSM-Railways (GSM-R), which is the unified communication technology supporting ERTMS. Its shortcoming in terms of capacity and capability are discussed as a foundation for the need for further developments.
---
paper_title: Using dataflow traceability between functions in the safety evaluation process
paper_content:
This paper deals with the safety evaluation process required by law for high-safety-related systems in rail in Europe. A methodology is given to guide the independent analysis and evaluation process. Within a general framework for the requirement engineering process, the proposed methodology aims to speed up the safety evaluation and to facilitate the work of the independent checker and of the assessor of the system in a systematic way. The methodology is based on a functional analysis approach using the exchange of dataflows. It has been tested on the signalling part of the EU LOCOPROL research project and is presented here as an application case.
---
paper_title: Survey of Wireless Communication Technologies for Public Safety
paper_content:
Public Safety (PS) organizations bring value to society by creating a stable and secure environment. The services they provide include protection of people, environment and assets and they address a large number of threats both natural and man-made, acts of terrorism, technological, radiological or environmental accidents. The capability to exchange information (e.g., voice and data) is essential to improve the coordination of PS officers during an emergency crisis and improve response efforts. Wireless communications are particularly important in field operations to support the mobility of first responders. Recent disasters have emphasized the need to enhance interoperability, capacity and broadband connectivity of the wireless networks used by PS organizations. This paper surveys the outstanding challenges in this area, the status of wireless communication technologies in this particular domain and the current regulatory, standardization and research activities to address the identified challenges, with a particular focus on USA and Europe.
---
paper_title: Mobile relay in LTE-advanced systems
paper_content:
Voice and data communications on high-speed vehicles encounter bad channel conditions, high call drop rates, serious signaling congestion and excessive power consumption of the UE. Mobile relay technology, which features an on-board relay node, is expected to improve the quality of service for passengers. However, the design for fixed relays in the LTE-Advanced system cannot meet the requirements of mobile relays. In this article, the architecture for mobile relays is presented. The key techniques supporting mobile relays, such as group mobility, local service support, multi-RAT and RAN sharing, are investigated together with the corresponding solutions. Potential system optimizations, for example the self-optimization network, power saving, measurement and system information acquisition, through which the efficiency of the mobile relay can be further improved, are also presented. Simulation and numerical results demonstrate the feasibility of the mobile relay.
---
paper_title: A Survey of Radio Propagation Modeling for Tunnels
paper_content:
Radio signal propagation modeling plays an important role in designing wireless communication systems. The propagation models are used to calculate the number and position of base stations and predict the radio coverage. Different models have been developed to predict radio propagation behavior for wireless communication systems in different operating environments. In this paper we shall limit our discussion to the latest achievements in radio propagation modeling related to tunnels. The main modeling approaches used for propagation in tunnels are reviewed, namely, numerical methods for solving Maxwell equations, waveguide or modal approach, ray tracing based methods and two-slope path loss modeling. They are discussed in terms of modeling complexity and required information on the environment including tunnel geometry and electric as well as magnetic properties of walls.
---
paper_title: Challenges Toward Wireless Communications for High-Speed Railway
paper_content:
High-speed railway (HSR) brings convenience to peoples' lives and is generally considered as one of the most sustainable developments for ground transportation. One of the important parts of HSR construction is the signaling system, which is also called the “operation control system,” where wireless communications play a key role in the transmission of train control data. We discuss in detail the main differences in scientific research for wireless communications between the HSR operation scenarios and the conventional public land mobile scenarios. The latest research progress in wireless channel modeling in viaducts, cuttings, and tunnels scenarios are discussed. The characteristics of nonstationary channel and the line-of-sight (LOS) sparse and LOS multiple-input-multiple-output channels, which are the typical channels in HSR scenarios, are analyzed. Some novel concepts such as composite transportation and key challenging techniques such as train-to-train communication, vacuum maglev train techniques, the security for HSR, and the fifth-generation wireless communications related techniques for future HSR development for safer, more comfortable, and more secure HSR operation are also discussed.
---
| Title: A survey on future railway radio communications services: challenges and opportunities
Section 1: INTRODUCTION
Description 1: Briefly introduce the scope and significance of the survey in railway radio communications services.
Section 2: SAFETY SERVICES
Description 2: Discuss critical services related to train safety and public safety, including their requirements.
Section 3: SIGNALING FOR HIGH-SPEED TRAINS
Description 3: Highlight the state-of-the-art signaling systems for high-speed trains and related challenges.
Section 4: SIGNALING FOR SUBWAYS: CBTC
Description 4: Cover communications-based train control systems in subways and how they compare to signaling for high-speed trains.
Section 5: TRAMWAYS: INTEGRATION ON SMART CARS PLATFORMS
Description 5: Explain the potential future integration of tramways with smart car platforms.
Section 6: SIGNALING DATA OVER SATELLITES
Description 6: Explore the use of satellites for signaling data in railway communications, especially for low-density traffic areas.
Section 7: THE FAR FUTURE OF FREIGHT TRAINS: VIRTUAL COUPLING
Description 7: Discuss the concept of virtual coupling for freight trains and its potential impact on rail capacity.
Section 8: PUBLIC SAFETY IN RAILWAYS
Description 8: Describe the use of public safety communications systems in railways and their challenges.
Section 9: OPERATIONAL SERVICES
Description 9: Discuss the challenges and opportunities related to non-safety operational services for railways.
Section 10: PASSENGER INFORMATION/INFOTAINMENT
Description 10: Highlight the services that provide passengers with multimedia content and information.
Section 11: CCTV
Description 11: Cover the challenges and requirements of CCTV systems in railway operations.
Section 12: THE INTERNET OF THINGS
Description 12: Explore the potential and challenges of adopting IoT in railways for operational and maintenance purposes.
Section 13: SERVICES FOR PASSENGERS: INTERNET ACCESS
Description 13: Discuss the challenges and current solutions for providing Internet access to onboard passengers.
Section 14: MOBILE RELAYS
Description 14: Explain the use and benefits of mobile relays in improving cell coverage and spectral efficiency.
Section 15: SATELLITES
Description 15: Describe the advantages and limitations of using satellites for providing Internet access on trains.
Section 16: CYBER SECURITY
Description 16: Highlight the importance of cyber security in railway communications and related challenges.
Section 17: TECHNOLOGICAL ISSUES
Description 17: Discuss general technological challenges not specific to any one category of service.
Section 18: RADIO CONVERGENCE
Description 18: Cover the trend towards radio convergence to handle multiple services over a single radio system.
Section 19: WITHDRAWAL OF ONBOARD WIRING
Description 19: Describe the trend and advantages of replacing onboard wiring with wireless links.
Section 20: HIGH-SPEED SCENARIOS
Description 20: Discuss the specific challenges of maintaining reliable communication at high train speeds (a worked Doppler-shift example follows this outline).
Section 21: CHANNEL MODELLING
Description 21: Highlight the importance and challenges of accurate channel modeling in V2I and V2V scenarios (a two-slope path-loss sketch follows this outline).
Section 22: CONCLUSION
Description 22: Summarize the key points discussed in the survey and present open questions and future research directions.
Section 23: IN MEMORY OF LEANDRO DE HARO
Description 23: Tribute to Leandro de Haro for his contributions to the field and research in antennas and communications. |
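Section 20 above concerns high-speed scenarios, where the dominant physical-layer effect is the Doppler shift f_d = (v/c) f_c cos(theta). The numbers below (a 500 km/h train and a 2.6 GHz carrier) are illustrative assumptions used only to show the order of magnitude, not parameters taken from the surveyed papers.

```python
# Maximum Doppler shift seen by an on-board terminal: f_d = v * f_c / c (theta = 0).
speed_kmh = 500.0            # assumed train speed
carrier_hz = 2.6e9           # assumed LTE-like carrier frequency
c = 3.0e8                    # speed of light, m/s

v = speed_kmh / 3.6          # ~138.9 m/s
f_d = v * carrier_hz / c     # ~1.2 kHz

print(f"max Doppler shift: {f_d:.0f} Hz "
      f"(coherence time roughly {0.423 / f_d * 1e3:.2f} ms using the common 0.423/f_d rule)")
```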
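Section 21 concerns channel modelling; one of the approaches mentioned in the tunnel-propagation reference above is two-slope (dual-slope) path-loss modelling. The sketch below evaluates a generic dual-slope model; the exponents, breakpoint distance and reference loss are illustrative placeholders that would have to be fitted to measurements for any real tunnel or track-side deployment.

```python
import numpy as np

def dual_slope_path_loss(d, pl0=40.0, d_break=200.0, n1=1.8, n2=3.5):
    """Generic dual-slope path loss in dB at distance d (metres).

    Below the breakpoint the loss grows with exponent n1 (waveguide-like
    propagation in tunnels can give n1 < 2); beyond it, with the steeper n2.
    All parameter values here are illustrative, not fitted to any measurement.
    """
    d = np.asarray(d, dtype=float)
    near = pl0 + 10 * n1 * np.log10(np.maximum(d, 1.0))
    far = pl0 + 10 * n1 * np.log10(d_break) + 10 * n2 * np.log10(d / d_break)
    return np.where(d <= d_break, near, far)

for dist in (50, 200, 500, 1000):
    pl = float(dual_slope_path_loss(dist))
    print(f"d = {dist:5d} m  ->  path loss ~ {pl:.1f} dB")
```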
Measurement errors and scaling relations in astrophysics: a review | 17 | ---
paper_title: Bayes in the sky: Bayesian inference and model selection in cosmology
paper_content:
The application of Bayesian methods in cosmology and astrophysics has flourished over the past decade, spurred by data sets of increasing size and complexity. In many respects, Bayesian methods have proven to be vastly superior to more traditional statistical tools, offering the advantage of higher efficiency and of a consistent conceptual basis for dealing with the problem of induction in the presence of uncertainty. This trend is likely to continue in the future, when the way we collect, manipulate and analyse observations and compare them with theoretical models will assume an even more central role in cosmology. This review is an introduction to Bayesian methods in cosmology and astrophysics and recent results in the field. I first present Bayesian probability theory and its conceptual underpinnings, Bayes' Theorem and the role of priors. I discuss the problem of parameter inference and its general solution, along with numerical techniques such as Monte Carlo Markov Chain methods. I then review the th...
---
paper_title: The scaling relation between richness and mass of galaxy clusters: a Bayesian approach
paper_content:
We use a sample of 53 galaxy clusters at 0.03<z<0.1 with available masses derived from the caustic technique and with velocity dispersions computed using 208 galaxies on average per cluster, in order to investigate the scaling between richness, mass and velocity dispersion. A tight scaling between richness and mass is found, with an intrinsic scatter of only 0.19 dex in mass and with a slope one, i.e. clusters which have twice as many galaxies are twice as massive. When richness is measured without any knowledge of the cluster mass or linked parameters (such as r200), it can predict mass with an uncertainty of 0.29+/-0.01 dex. As a mass proxy, richness competes favourably with both direct measurements of mass given by the caustic method, which has typically 0.14 dex errors (vs 0.29) and X-ray luminosity, which offers a similar 0.30 dex uncertainty. The similar performances of X-ray luminosity and richness in predicting cluster masses has been confirmed using cluster masses derived from velocity dispersion fixed by numerical simulations. These results suggest that cluster masses can be reliably estimated from simple galaxy counts, at least at the redshift and masses explored in this work. This has important applications in the estimation of cosmological parameters from optical cluster surveys, because in current surveys clusters detected in the optical range outnumber, by at least one order of magnitude, those detected in X-ray. Our analysis is robust from astrophysical and statistical perspectives. The data and code used for the stochastic computation is distributed with the paper. [Abridged]
---
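The abstract above quotes a slope-one richness-mass relation with an intrinsic scatter of about 0.19 dex and notes that richness predicts mass to roughly 0.29 dex. A minimal Monte Carlo sketch of how such a scaling plus its scatter turns an observed richness into a mass estimate is given below; the intercept, the pivot richness and the assumed richness error are illustrative placeholders, not numbers taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Assumed scaling: lg(M / 1e14 Msun) = alpha + beta * lg(N / 30), with intrinsic scatter in lgM.
alpha, beta = 0.0, 1.0        # illustrative intercept; slope one as quoted in the abstract
sigma_intr = 0.19             # intrinsic scatter in dex, as quoted in the abstract
sigma_lgN = 0.05              # assumed measurement error on lg(richness), illustrative

lgN_obs = np.log10(60 / 30)   # a cluster observed with 60 member galaxies (illustrative)

# Propagate the richness error and the intrinsic scatter by simple Monte Carlo.
lgN_true = rng.normal(lgN_obs, sigma_lgN, size=100_000)
lgM = alpha + beta * lgN_true + rng.normal(0.0, sigma_intr, size=lgN_true.size)

lo, med, hi = np.percentile(lgM, [16, 50, 84])
print(f"lg(M / 1e14 Msun) = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")
```

A fuller treatment would also fold in the cluster mass function as a prior on the true mass, to control Eddington-type bias; the sketch ignores that step for brevity.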
paper_title: Linear Regression for Astronomical Data with Measurement Errors and Intrinsic Scatter
paper_content:
Two new methods are proposed for linear regression analysis for data with measurement errors. Both methods are designed to accommodate intrinsic scatter in addition to measurement errors. The first (BCES) is a direct extension of the ordinary least squares (OLS) estimator to allow for measurement errors. It is quite general, allowing a) for measurement errors on both variables, b) the measurement errors for the two variables to be dependent, c) the magnitudes of the measurement errors to depend on the measurements, and d) other `symmetric' lines such as the bisector and the orthogonal regression can be constructed. The second method is a weighted least squares (WLS) estimator, which applies only in the case where the `independent' variable is measured without error and the magnitudes of the measurement errors on the 'dependent' variable are independent from the measurements. Several applications are made to extragalactic astronomy: The BCES method, when applied to data describing the color-luminosity relations for field galaxies, yields significantly different slopes than OLS and other estimators used in the literature. Simulations with artificial data sets are used to evaluate the small sample performance of the estimators. Unsurprisingly, the least-biased results are obtained when color is treated as the dependent variable. The Tully-Fisher relation is another example where the BCES method should be used because errors in luminosity and velocity are correlated due to inclination corrections. We also find, via simulations, that the WLS method is by far the best method for the Tolman surface-brightness test, producing the smallest variance in slope by an order of magnitude. Moreover, with WLS it is not necessary to ``reduce'' galaxies to a fiducial surface-brightness, since this model incorporates intrinsic scatter.
---
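The abstract above addresses linear regression with measurement errors on both variables and intrinsic scatter. The sketch below is not the BCES estimator itself; it is a commonly used alternative, a maximum-likelihood fit in which the residual variance of each point is taken as sigma_y^2 + b^2 sigma_x^2 + sigma_intr^2 (the "effective variance" approach). The simulated data and the error sizes are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=7)

# Simulate a toy sample with true slope 1.5, intercept 0.3 and intrinsic scatter 0.2.
n = 80
x_true = rng.uniform(0.0, 2.0, n)
y_true = 0.3 + 1.5 * x_true + rng.normal(0.0, 0.2, n)
sx = np.full(n, 0.10)                      # assumed x measurement errors
sy = np.full(n, 0.15)                      # assumed y measurement errors
x = x_true + rng.normal(0.0, sx)
y = y_true + rng.normal(0.0, sy)

def neg_log_like(params):
    a, b, lnsig = params
    sig_intr = np.exp(lnsig)               # keeps the intrinsic scatter positive
    var_eff = sy**2 + (b * sx)**2 + sig_intr**2
    resid = y - (a + b * x)
    return 0.5 * np.sum(resid**2 / var_eff + np.log(2 * np.pi * var_eff))

fit = minimize(neg_log_like, x0=[0.0, 1.0, np.log(0.1)], method="Nelder-Mead")
a_hat, b_hat, lnsig_hat = fit.x
print(f"intercept = {a_hat:.2f}, slope = {b_hat:.2f}, intrinsic scatter = {np.exp(lnsig_hat):.2f}")
```

This effective-variance likelihood is only an approximation when the x errors are comparable to the spread of the true x values; the Bayesian treatments cited elsewhere in this reference list handle that regime more carefully.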
paper_title: The scaling relation between richness and mass of galaxy clusters: a Bayesian approach
paper_content:
We use a sample of 53 galaxy clusters at 0.03<z<0.1 with available masses derived from the caustic technique and with velocity dispersions computed using 208 galaxies on average per cluster, in order to investigate the scaling between richness, mass and velocity dispersion. A tight scaling between richness and mass is found, with an intrinsic scatter of only 0.19 dex in mass and with a slope one, i.e. clusters which have twice as many galaxies are twice as massive. When richness is measured without any knowledge of the cluster mass or linked parameters (such as r200), it can predict mass with an uncertainty of 0.29+/-0.01 dex. As a mass proxy, richness competes favourably with both direct measurements of mass given by the caustic method, which has typically 0.14 dex errors (vs 0.29) and X-ray luminosity, which offers a similar 0.30 dex uncertainty. The similar performances of X-ray luminosity and richness in predicting cluster masses has been confirmed using cluster masses derived from velocity dispersion fixed by numerical simulations. These results suggest that cluster masses can be reliably estimated from simple galaxy counts, at least at the redshift and masses explored in this work. This has important applications in the estimation of cosmological parameters from optical cluster surveys, because in current surveys clusters detected in the optical range outnumber, by at least one order of magnitude, those detected in X-ray. Our analysis is robust from astrophysical and statistical perspectives. The data and code used for the stochastic computation is distributed with the paper. [Abridged]
---
paper_title: Highly accurate H-2 Lyman and Werner band laboratory measurements and an improved constraint on a cosmological variation of the proton-to-electron mass ratio
paper_content:
The search for a cosmological variability of fundamental physical constants has become an active field of research now that accurate spectroscopic data can be obtained from quasars that have emitted their radiation more than ten billion years ago. A comparison between laboratory data, obtained at the highest accuracy in the modern epoch, and analysis of corresponding redshifted spectra from distant objects allows for stringent constraints on variability of the physical constants under
---
paper_title: Measurement Error: Models, Methods, and Applications
paper_content:
Introduction What is measurement error? Some examples The main ingredients Some terminology A look ahead Misclassification in Estimating a Proportion Motivating examples A model for the true values Misclassification models and naive analyses Correcting for misclassification Finite populations Multiple measures with no direct validation The multinomial case Mathematical developments Misclassification in Two-Way Tables Introduction Models for true values Misclassification models and naive estimators Behavior of naive analyses Correcting using external validation data Correcting using internal validation data General two-way tables Mathematical developments Simple Linear Regression Introduction The additive Berkson model and consequences The additive measurement error model The behavior of naive analyses Correcting for additive measurement error Examples Residual analysis Prediction Mathematical developments Multiple Linear Regression Introduction Model for true values Models and bias in naive estimators Correcting for measurement error Weighted and other estimators Examples Instrumental variables Mathematical developments Measurement Error in Regression: A General Overview Introduction Models for true values Analyses without measurement error Measurement error models Extra data Assessing bias in naive estimators Assessing bias using induced models Assessing bias via estimating equations Moment based and direct bias corrections Regression calibration and quasi-likelihood methods Simulation extrapolation (SIMEX) Correcting using likelihood methods Modified estimating equation approaches Correcting for misclassification Overview on use of validation data Bootstrapping Mathematical developments Binary Regression Introduction Additive measurement error Using validation data Misclassification of predictors Linear Models with Nonadditive Error Introduction Quadratic regression First-order models with interaction General nonlinear functions of the predictors Linear measurement error with validation data Misclassification of a categorical predictor Miscellaneous Nonlinear Regression Poisson regression: Cigarettes and cancer rates General nonlinear models Error in the Response Introduction Additive error in a single sample Linear measurement error in the one-way setting Measurement error in the response in linear models Mixed/Longitudinal Models Introduction, overview, and some examples Berkson error in designed repeated measures Additive error in the linear mixed model Time Series Introduction Random walk/population viability models Linear autoregressive models Background Material Notation for vectors, covariance matrices, etc. Double expectations Approximate Wald inferences The delta-method: approximate moments of nonlinear functions Fieller's method for ratios References Author Index Subject Index
---
paper_title: Highly accurate H-2 Lyman and Werner band laboratory measurements and an improved constraint on a cosmological variation of the proton-to-electron mass ratio
paper_content:
The search for a cosmological variability of fundamental physical constants has become an active field of research now that accurate spectroscopic data can be obtained from quasars that have emitted their radiation more than ten billion years ago. A comparison between laboratory data, obtained at the highest accuracy in the modern epoch, and analysis of corresponding redshifted spectra from distant objects allows for stringent constraints on variability of the physical constants under
---
paper_title: Do X-ray dark or underluminous galaxy clusters exist?
paper_content:
We study the X-ray properties of a color-selected sample of clusters at 0.1 < z < 0.3, to quantify the real aboundance of the population of X-ray dark or underluminous clusters and at the same time the spurious detection contamination level of color-selected cluster catalogs. Starting from a local sample of color-selected clusters, we restrict our attention to those with sufficiently deep X-ray observations to probe their X-ray luminosity down to very faint values and without introducing any X-ray bias. This allowed us to have an X-ray- unbiased sample of 33 clusters to measure the LX-richness relation. Swift 1.4 Ms X-ray observations show that at least 89% of the color-detected clusters are real objects with a potential well deep enough to heat and retain an intracluster medium. The percentage rises to 94% when one includes the single spectroscopically confirmed color-selected cluster whose X-ray emission is not secured. Looking at our results from the opposite perspective, the percentage of X-ray dark clusters among color-selected clusters is very low: at most about 11 per cent (at 90% confidence). Supplementing our data with those from literature, we conclude that X-rayand color- cluster surveys sample the same population and consequently that in this regard we can safely use clusters selected with any of the two methods for cosmological purposes. This is an essential and promising piece of information for upcoming surveys in both the optical/IR (DES, EUCLID) and X-ray (eRosita). Richness correlates with X-ray luminosity with a large scatter, 0.51 ± 0.08 (0.44 ± 0.07) dex in lgLX at a given richness, when Lx is measured in a 500 (1070) kpc aperture. We release data and software to estimate the X-ray flux, or its upper limit, of a source with over-Poisson background fluctuations (found in this work to be ∼20% on cluster angular scales) and to fit X-ray luminosity vs richness if there is an intrinsic scatter. These Bayesian applications rigorously account for boundaries (e.g., the X-ray luminosity and the richness cannot be negative).
---
paper_title: Cosmological Parameters from Observations of Galaxy Clusters
paper_content:
Studies of galaxy clusters have proved crucial in helping to establish the standard model of cosmology, with a Universe dominated by dark matter and dark energy. A theoretical basis that describes clusters as massive, multicomponent, quasi-equilibrium systems is growing in its capability to interpret multiwavelength observations of expanding scope and sensitivity. We review current cosmological results, including contributions to fundamental physics, obtained from observations of galaxy clusters. These results are consistent with and complementary to those from other methods. We highlight several areas of opportunity for the next few years, and emphasize the need for accurate modeling of survey selection and sources of systematic error. Capitalizing on these opportunities will require a multiwavelength approach and the application of rigorous statistical frameworks, utilizing the combined strengths of observers, simulators, and theorists.
---
paper_title: New evidence for a linear colour–magnitude relation and a single Schechter function for red galaxies in a nearby cluster of galaxies down to M*+ 8
paper_content:
The colour and luminosity distributions of red galaxies in the cluster Abell 1185 (z = 0.0325) were studied down to M ∗ + 8 in the B, V and R bands. The colour‐magnitude (CM) relation is linear without evidence for significant bending down to absolute magnitudes that are seldom probed in the literature (MR =− 12.5 mag). The CM relation is thin (±0.04 mag) and its thickness is quite independent of the magnitude. The luminosity function (LF) of red galaxies in Abell 1185 is adequately described by a Schechter function, with a characteristic magnitude and a faint end slope that also well describe the LF of red galaxies in other clusters. There is no passband dependence of the LF shape other than an obvious M ∗ shift due to the colour of the considered population. Finally, we conclude that, based on colours and luminosity, red galaxies form a homogeneous population over four decades in stellar mass, providing a second piece of evidence against faint red galaxies being a recent cluster population.
---
paper_title: XBOOTES: AN X-RAY SURVEY OF THE NDWFS BOOTES FIELD. II. THE X-RAY SOURCE CATALOG
paper_content:
We present results from a Chandra survey of the 9 deg^2 Bootes field of the NOAO Deep Wide-Field Survey (NDWFS). This XBootes survey consists of 126 separate contiguous ACIS-I observations, each of approximately 5000 s in duration. These unique Chandra observations allow us to search for large-scale structure and to calculate X-ray source statistics over a wide, contiguous field of view with arcsecond angular resolution and uniform coverage. Optical spectroscopic follow-up observations and the rich NDWFS data set will allow us to identify and classify these X-ray-selected sources. Using wavelet decomposition, we detect 4642 point sources with n ≥ 2 counts. In order to keep our detections ≥ 99% reliable, we limit our list to sources with n ≥ 4 counts. For a 5000 s observation and assuming a canonical unabsorbed active galactic nucleus (AGN) type X-ray spectrum, a 4-count on-axis source corresponds to a flux of 4.7 × 10^-15 erg cm^-2 s^-1 in the soft (0.5-2 keV) band, 1.5 × 10^-14 erg cm^-2 s^-1 in the hard (2-7 keV) band, and 7.8 × 10^-15 erg cm^-2 s^-1 in the full (0.5-7 keV) band. The full 0.5-7 keV band n ≥ 4 count list has 3293 point sources. In addition to the point sources, 43 extended sources have been detected, consistent with the depth of these observations and the number counts of clusters. We present here the X-ray catalog for the XBootes survey, including source positions, X-ray fluxes, hardness ratios, and their uncertainties. We calculate and present the differential number of sources per flux density interval, N(S), for the point sources. In the soft (0.5-2 keV) band, N(S) is well fitted by a broken power law with slope of 2.60 (+0.11/-0.12) at bright fluxes and 1.74 (+0.28/-0.22) at faint fluxes. The hard-source N(S) is well described by a single power law with an index of -2.93 ± 0.09.
---
paper_title: The scaling relation between richness and mass of galaxy clusters: a Bayesian approach
paper_content:
We use a sample of 53 galaxy clusters at 0.03<z<0.1 with available masses derived from the caustic technique and with velocity dispersions computed using 208 galaxies on average per cluster, in order to investigate the scaling between richness, mass and velocity dispersion. A tight scaling between richness and mass is found, with an intrinsic scatter of only 0.19 dex in mass and with a slope one, i.e. clusters which have twice as many galaxies are twice as massive. When richness is measured without any knowledge of the cluster mass or linked parameters (such as r200), it can predict mass with an uncertainty of 0.29+/-0.01 dex. As a mass proxy, richness competes favourably with both direct measurements of mass given by the caustic method, which has typically 0.14 dex errors (vs 0.29) and X-ray luminosity, which offers a similar 0.30 dex uncertainty. The similar performances of X-ray luminosity and richness in predicting cluster masses has been confirmed using cluster masses derived from velocity dispersion fixed by numerical simulations. These results suggest that cluster masses can be reliably estimated from simple galaxy counts, at least at the redshift and masses explored in this work. This has important applications in the estimation of cosmological parameters from optical cluster surveys, because in current surveys clusters detected in the optical range outnumber, by at least one order of magnitude, those detected in X-ray. Our analysis is robust from astrophysical and statistical perspectives. The data and code used for the stochastic computation is distributed with the paper. [Abridged]
---
paper_title: The Build–up of the Red Sequence in the galaxy cluster MS1054-0321 at z = 0.831
paper_content:
Using one of the deepest datasets available, we determine that the red sequence of the massive cluster MS1054-0321 at z=0.831 is well populated at all studied magnitudes, showing no deficit of faint (down to M^*+3.5) red galaxies: the faint end of the colour-magnitude relation is neither empty nor underpopulated. The effect is quantified by the computation of the luminosity function (LF) of red galaxies. We found a flat slope, showing that the abundance of red galaxies is similar at faint and at intermediate magnitudes. Comparison with present-day and z~0.4 LFs suggests that the slope of the LF is not changed, within the errors, between z=0.831 and z=0. Therefore, the analysis of the LF shows no evidence for a decreasing (with magnitude or redshift) number of faint red galaxies. The presence of faint red galaxies in high redshift clusters disfavours scenarios where the evolution of red galaxies is mass-dependent, because the mass dependency should differentially depauperate the red sequence, while the MS1054-0321 colour-magnitude relation is populated as in nearby clusters and as in z~0.4 clusters. The presence of abundant faint red galaxies in the high redshift cluster MS1054-0321 restricts the room for allocating descendants of Butcher-Oemler galaxies, because they should change the faint end slope of the LF of red galaxies, while instead the same faint end slopes are observed in MS1054-0321, at z~0 and at z~0.4. In the rich MS1054-0321 cluster, the colour-magnitude relation seems to be fully in place at z=0.831 and therefore red galaxies of all magnitudes were wholly assembled at higher redshift.
---
paper_title: New evidence for a linear colour–magnitude relation and a single Schechter function for red galaxies in a nearby cluster of galaxies down to M*+ 8
paper_content:
The colour and luminosity distributions of red galaxies in the cluster Abell 1185 (z = 0.0325) were studied down to M ∗ + 8 in the B, V and R bands. The colour‐magnitude (CM) relation is linear without evidence for significant bending down to absolute magnitudes that are seldom probed in the literature (MR =− 12.5 mag). The CM relation is thin (±0.04 mag) and its thickness is quite independent of the magnitude. The luminosity function (LF) of red galaxies in Abell 1185 is adequately described by a Schechter function, with a characteristic magnitude and a faint end slope that also well describe the LF of red galaxies in other clusters. There is no passband dependence of the LF shape other than an obvious M ∗ shift due to the colour of the considered population. Finally, we conclude that, based on colours and luminosity, red galaxies form a homogeneous population over four decades in stellar mass, providing a second piece of evidence against faint red galaxies being a recent cluster population.
---
paper_title: Estimating Mixtures of Regressions
paper_content:
This article shows how Bayesian inference for switching regression models and their generalizations can be achieved by the specification of loss functions which overcome the label switching problem common to all mixture models. We also derive an extension to models where the number of components in the mixture is unknown, based on the birthand-death technique developed in recent literature. The methods are illustrated on various real datasets.
---
paper_title: AN OVERVIEW OF LINEAR STRUCTURAL MODELS IN ERRORS IN VARIABLES REGRESSION
paper_content:
This paper aims to overview the numerous approaches that have been developed to estimate the parameters of the linear structural model. The linear structural model is an example of an errors in variables model, or measurement error model, that has wide practical use. This paper brings together key concepts from a scattered literature to give an accessible account of existing work on this particular errors in variables model.
---
paper_title: Measurement Error: Models, Methods, and Applications
paper_content:
Introduction What is measurement error? Some examples The main ingredients Some terminology A look ahead Misclassification in Estimating a Proportion Motivating examples A model for the true values Misclassification models and naive analyses Correcting for misclassification Finite populations Multiple measures with no direct validation The multinomial case Mathematical developments Misclassification in Two-Way Tables Introduction Models for true values Misclassification models and naive estimators Behavior of naive analyses Correcting using external validation data Correcting using internal validation data General two-way tables Mathematical developments Simple Linear Regression Introduction The additive Berkson model and consequences The additive measurement error model The behavior of naive analyses Correcting for additive measurement error Examples Residual analysis Prediction Mathematical developments Multiple Linear Regression Introduction Model for true values Models and bias in naive estimators Correcting for measurement error Weighted and other estimators Examples Instrumental variables Mathematical developments Measurement Error in Regression: A General Overview Introduction Models for true values Analyses without measurement error Measurement error models Extra data Assessing bias in naive estimators Assessing bias using induced models Assessing bias via estimating equations Moment based and direct bias corrections Regression calibration and quasi-likelihood methods Simulation extrapolation (SIMEX) Correcting using likelihood methods Modified estimating equation approaches Correcting for misclassification Overview on use of validation data Bootstrapping Mathematical developments Binary Regression Introduction Additive measurement error Using validation data Misclassification of predictors Linear Models with Nonadditive Error Introduction Quadratic regression First-order models with interaction General nonlinear functions of the predictors Linear measurement error with validation data Misclassification of a categorical predictor Miscellaneous Nonlinear Regression Poisson regression: Cigarettes and cancer rates General nonlinear models Error in the Response Introduction Additive error in a single sample Linear measurement error in the one-way setting Measurement error in the response in linear models Mixed/Longitudinal Models Introduction, overview, and some examples Berkson error in designed repeated measures Additive error in the linear mixed model Time Series Introduction Random walk/population viability models Linear autoregressive models Background Material Notation for vectors, covariance matrices, etc. Double expectations Approximate Wald inferences The delta-method: approximate moments of nonlinear functions Fieller's method for ratios References Author Index Subject Index
---
paper_title: Numerical recipes: Cambridge University Press
paper_content:
A polyanhydride copolymer is made having styrene grafted onto the linear backbone chain of a styrene-maleic anhydride copolymer or a 1-alkene-maleic anhydride copolymer. This graft copolymer is also prepared in the presence of an epoxy compound such as monoepoxide or an epoxy resin for subsequent curing.
---
paper_title: Parameter estimation in astronomy through application of the likelihood ratio
paper_content:
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
---
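The abstract above advocates the likelihood ratio for parameter estimation in photon-counting experiments. A minimal numerical illustration for a single Poisson measurement is sketched below: the approximate 68% interval on the mean is taken where -2 ln(likelihood ratio) rises by 1 above its minimum. The observed count of 7 is an arbitrary illustrative value.

```python
import numpy as np
from scipy.stats import poisson

observed = 7                                   # illustrative photon count
mu_grid = np.linspace(0.5, 25.0, 2000)         # candidate Poisson means

# -2 ln(likelihood ratio) relative to the best-fitting mean (mu close to the observed count).
log_like = poisson.logpmf(observed, mu_grid)
stat = -2.0 * (log_like - log_like.max())

inside = mu_grid[stat <= 1.0]                  # Delta(-2 ln L) <= 1 ~ 68% interval
print(f"mu = {observed} (-{observed - inside.min():.2f} / +{inside.max() - observed:.2f})")
```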
paper_title: The Use and Misuse of Orthogonal Regression in Linear Errors-in-Variables Models
paper_content:
Abstract Orthogonal regression is one of the standard linear regression methods to correct for the effects of measurement error in predictors. We argue that orthogonal regression is often misused in errors-in-variables linear regression because of a failure to account for equation errors. The typical result is to overcorrect for measurement error, that is, overestimate the slope, because equation error is ignored. The use of orthogonal regression must include a careful assessment of equation error, and not merely the usual (often informal) estimation of the ratio of measurement error variances. There are rarer instances, for example, an example from geology discussed here, where the use of orthogonal regression without proper attention to modeling may lead to either overcorrection or undercorrection, depending on the relative sizes of the variances involved. Thus our main point, which does not seem to be widely appreciated, is that orthogonal regression, just like any measurement error analysis, requires ...
---
paper_title: Numerical recipes: Cambridge University Press
paper_content:
A polyanhydride copolymer is made having styrene grafted onto the linear backbone chain of a styrene-maleic anhydride copolymer or a 1-alkene-maleic anhydride copolymer. This graft copolymer is also prepared in the presence of an epoxy compound such as monoepoxide or an epoxy resin for subsequent curing.
---
paper_title: Measurement Error: Models, Methods, and Applications
paper_content:
Introduction What is measurement error? Some examples The main ingredients Some terminology A look ahead Misclassification in Estimating a Proportion Motivating examples A model for the true values Misclassification models and naive analyses Correcting for misclassification Finite populations Multiple measures with no direct validation The multinomial case Mathematical developments Misclassification in Two-Way Tables Introduction Models for true values Misclassification models and naive estimators Behavior of naive analyses Correcting using external validation data Correcting using internal validation data General two-way tables Mathematical developments Simple Linear Regression Introduction The additive Berkson model and consequences The additive measurement error model The behavior of naive analyses Correcting for additive measurement error Examples Residual analysis Prediction Mathematical developments Multiple Linear Regression Introduction Model for true values Models and bias in naive estimators Correcting for measurement error Weighted and other estimators Examples Instrumental variables Mathematical developments Measurement Error in Regression: A General Overview Introduction Models for true values Analyses without measurement error Measurement error models Extra data Assessing bias in naive estimators Assessing bias using induced models Assessing bias via estimating equations Moment based and direct bias corrections Regression calibration and quasi-likelihood methods Simulation extrapolation (SIMEX) Correcting using likelihood methods Modified estimating equation approaches Correcting for misclassification Overview on use of validation data Bootstrapping Mathematical developments Binary Regression Introduction Additive measurement error Using validation data Misclassification of predictors Linear Models with Nonadditive Error Introduction Quadratic regression First-order models with interaction General nonlinear functions of the predictors Linear measurement error with validation data Misclassification of a categorical predictor Miscellaneous Nonlinear Regression Poisson regression: Cigarettes and cancer rates General nonlinear models Error in the Response Introduction Additive error in a single sample Linear measurement error in the one-way setting Measurement error in the response in linear models Mixed/Longitudinal Models Introduction, overview, and some examples Berkson error in designed repeated measures Additive error in the linear mixed model Time Series Introduction Random walk/population viability models Linear autoregressive models Background Material Notation for vectors, covariance matrices, etc. Double expectations Approximate Wald inferences The delta-method: approximate moments of nonlinear functions Fieller's method for ratios References Author Index Subject Index
---
paper_title: Linear Regression for Astronomical Data with Measurement Errors and Intrinsic Scatter
paper_content:
Two new methods are proposed for linear regression analysis for data with measurement errors. Both methods are designed to accommodate intrinsic scatter in addition to measurement errors. The first (BCES) is a direct extension of the ordinary least squares (OLS) estimator to allow for measurement errors. It is quite general, allowing a) for measurement errors on both variables, b) the measurement errors for the two variables to be dependent, c) the magnitudes of the measurement errors to depend on the measurements, and d) other `symmetric' lines such as the bisector and the orthogonal regression can be constructed. The second method is a weighted least squares (WLS) estimator, which applies only in the case where the `independent' variable is measured without error and the magnitudes of the measurement errors on the 'dependent' variable are independent from the measurements. Several applications are made to extragalactic astronomy: The BCES method, when applied to data describing the color-luminosity relations for field galaxies, yields significantly different slopes than OLS and other estimators used in the literature. Simulations with artificial data sets are used to evaluate the small sample performance of the estimators. Unsurprisingly, the least-biased results are obtained when color is treated as the dependent variable. The Tully-Fisher relation is another example where the BCES method should be used because errors in luminosity and velocity are correlated due to inclination corrections. We also find, via simulations, that the WLS method is by far the best method for the Tolman surface-brightness test, producing the smallest variance in slope by an order of magnitude. Moreover, with WLS it is not necessary to ``reduce'' galaxies to a fiducial surface-brightness, since this model incorporates intrinsic scatter.
---
paper_title: The BUGS project: Evolution, critique and future directions
paper_content:
BUGS is a software package for Bayesian inference using Gibbs sampling. The software has been instrumental in raising awareness of Bayesian modelling among both academic and commercial communities internationally, and has enjoyed considerable success over its 20-year life span. Despite this, the software has a number of shortcomings and a principal aim of this paper is to provide a balanced critical appraisal, in particular highlighting how various ideas have led to unprecedented flexibility while at the same time producing negative side effects. We also present a historical overview of the BUGS project and some future perspectives. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: Prior distributions for variance parameters in hierarchical models
paper_content:
Various noninformative prior distributions have been suggested for scale parameters in hierarchical models. We construct a new folded-noncentral-t family of conditionally conjugate priors for hierarchical standard deviation parameters, and then consider noninformative and weakly informative priors in this family. We use an example to illustrate serious problems with the inverse-gamma family of "noninformative" prior distributions. We suggest instead to use a uniform prior on the hierarchical standard deviation, using the half-t family when the number of groups is small and in other settings where a weakly informative prior is desired.
---
paper_title: BAYESIAN REASONING IN DATA ANALYSIS: A CRITICAL INTRODUCTION
paper_content:
Critical review and outline of the Bayesian alternative: uncertainty in physics and the usual methods of handling it a probabilistic theory of measurement uncertainty. A Bayesian primer: subjective probability and Bayes' theorem probability distributions (a concise reminder) Bayesian inference of continuous quantities Gaussian likelihood counting experiments bypassing Bayes' theorem for routine applications Bayesian unfolding. Further comments, examples and applications: miscellanea on general issues in probability and inference combination of experimental results - a closer look asymmetric uncertainties and nonlinear propagation which priors for frontier physics? Concluding matter: conclusions and bibliography.
---
paper_title: BAYESIAN ANALYSIS OF ERRORS-IN-VARIABLES REGRESSION MODELS
paper_content:
SUMMARY Use of errors-in-variables models is appropriate in many practical experimental problems. However, inference based on such models is by no means straightforward. In previous analyses, simplifying assumptions have been made in order to ease this intractability, but assumptions of this nature are unfortunate and restrictive. In this paper, we analyse errors-in-variables models in full generality under a Bayesian formulation. In order to compute the necessary posterior distributions, we utilize various computational techniques. Two specific non-linear errors-in-variables regression examples are considered; the first is a re-analysed Berkson-type model, and the second is a classical errors-in-variables model. Our analyses are compared and contrasted with those presented elsewhere in the literature.
---
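The abstract above treats errors-in-variables regression in a fully Bayesian way with sampling-based computation. The fragment below is a deliberately simple random-walk Metropolis sketch for a straight line with errors on both axes and intrinsic scatter, reusing the effective-variance likelihood of the earlier sketch with weak Gaussian/uniform priors; it is meant only to show the mechanics, not to reproduce the specific models analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=11)

# Toy data with errors on both axes (same illustrative setup as the earlier sketch).
n = 60
x_true = rng.uniform(0.0, 2.0, n)
y = 0.3 + 1.5 * x_true + rng.normal(0.0, 0.2, n) + rng.normal(0.0, 0.15, n)
x = x_true + rng.normal(0.0, 0.10, n)
sx, sy = 0.10, 0.15

def log_post(theta):
    a, b, sig = theta
    if sig <= 0 or sig > 5:                          # uniform prior on the intrinsic scatter
        return -np.inf
    var_eff = sy**2 + (b * sx)**2 + sig**2
    resid = y - (a + b * x)
    log_like = -0.5 * np.sum(resid**2 / var_eff + np.log(var_eff))
    log_prior = -0.5 * (a**2 + b**2) / 10.0**2       # weak N(0, 10^2) priors on a and b
    return log_like + log_prior

# Random-walk Metropolis sampler.
theta = np.array([0.0, 1.0, 0.3])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, [0.05, 0.05, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:         # accept with probability min(1, ratio)
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain[5_000:])                      # drop burn-in
for name, col in zip(("intercept", "slope", "scatter"), chain.T):
    print(f"{name:9s}: posterior mean {col.mean():.2f} +/- {col.std():.2f}")
```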
paper_title: Fits , and especially linear fits , with errors on both axes , extra variance of the data points and other complications
paper_content:
The aim of this paper, triggered by some discussions in the astrophysics community raised by astro-ph/0508529, is to introduce the issue of `fits' from a probabilistic perspective (also known as Bayesian), with special attention to the construction of model that describes the `network of dependences' (a Bayesian network) that connects experimental observations to model parameters and upon which the probabilistic inference relies. The particular case of linear fit with errors on both axes and extra variance of the data points around the straight line (i.e. not accounted by the experimental errors) is shown in detail. Some questions related to the use of linear fit formulas to log-linearized exponential and power laws are also sketched, as well as the issue of systematic errors.
---
paper_title: Linear Regression for Astronomical Data with Measurement Errors and Intrinsic Scatter
paper_content:
Two new methods are proposed for linear regression analysis for data with measurement errors. Both methods are designed to accommodate intrinsic scatter in addition to measurement errors. The first (BCES) is a direct extension of the ordinary least squares (OLS) estimator to allow for measurement errors. It is quite general, allowing a) for measurement errors on both variables, b) the measurement errors for the two variables to be dependent, c) the magnitudes of the measurement errors to depend on the measurements, and d) other `symmetric' lines such as the bisector and the orthogonal regression can be constructed. The second method is a weighted least squares (WLS) estimator, which applies only in the case where the `independent' variable is measured without error and the magnitudes of the measurement errors on the 'dependent' variable are independent from the measurements. Several applications are made to extragalactic astronomy: The BCES method, when applied to data describing the color-luminosity relations for field galaxies, yields significantly different slopes than OLS and other estimators used in the literature. Simulations with artificial data sets are used to evaluate the small sample performance of the estimators. Unsurprisingly, the least-biased results are obtained when color is treated as the dependent variable. The Tully-Fisher relation is another example where the BCES method should be used because errors in luminosity and velocity are correlated due to inclination corrections. We also find, via simulations, that the WLS method is by far the best method for the Tolman surface-brightness test, producing the smallest variance in slope by an order of magnitude. Moreover, with WLS it is not necessary to ``reduce'' galaxies to a fiducial surface-brightness, since this model incorporates intrinsic scatter.
---
paper_title: Scaling relations of the colour-detected cluster RzCS 052 at z=1.016 and of some other high redshift clusters
paper_content:
We report on the discovery of the z = 1.016 cluster RzCS 052 using a modified red-sequence method, follow-up spectroscopy and X-ray imaging. This cluster has a velocity dispersion of 710 ± 150 km s^-1, a virial mass of 4.0 × 10^14 M_⊙ (based on 21 spectroscopically confirmed members) and an X-ray luminosity of (0.68 ± 0.47) × 10^44 erg s^-1 in the [1-4] keV band. This optically selected cluster appears to be of richness class 3 and to follow the known L_X-σ_v relation for high-redshift X-ray selected clusters. Using these data, we find that the halo occupation number for this cluster is only marginally consistent with what was expected assuming a self-similar evolution of cluster scaling relations, suggesting perhaps a break of them at z ∼ 1. We also rule out a strong galaxy merging activity between z = 1 and today. Finally, we present a Bayesian approach to measuring cluster velocity dispersions and X-ray luminosities in the presence of a background: we critically reanalyse recent claims for X-ray underluminous clusters using these techniques and find that the clusters can be accommodated within the existing L_X-σ_v relation.
---
paper_title: BAYESIAN ANALYSIS OF ERRORS-IN-VARIABLES REGRESSION MODELS
paper_content:
SUMMARY Use of errors-in-variables models is appropriate in many practical experimental problems. However, inference based on such models is by no means straightforward. In previous analyses, simplifying assumptions have been made in order to ease this intractability, but assumptions of this nature are unfortunate and restrictive. In this paper, we analyse errors-in-variables models in full generality under a Bayesian formulation. In order to compute the necessary posterior distributions, we utilize various computational techniques. Two specific non-linear errors-in-variables regression examples are considered; the first is a re-analysed Berkson-type model, and the second is a classical errors-in-variables model. Our analyses are compared and contrasted with those presented elsewhere in the literature.
---
paper_title: New evidence for a linear colour–magnitude relation and a single Schechter function for red galaxies in a nearby cluster of galaxies down to M*+ 8
paper_content:
The colour and luminosity distributions of red galaxies in the cluster Abell 1185 (z = 0.0325) were studied down to M ∗ + 8 in the B, V and R bands. The colour‐magnitude (CM) relation is linear without evidence for significant bending down to absolute magnitudes that are seldom probed in the literature (MR =− 12.5 mag). The CM relation is thin (±0.04 mag) and its thickness is quite independent of the magnitude. The luminosity function (LF) of red galaxies in Abell 1185 is adequately described by a Schechter function, with a characteristic magnitude and a faint end slope that also well describe the LF of red galaxies in other clusters. There is no passband dependence of the LF shape other than an obvious M ∗ shift due to the colour of the considered population. Finally, we conclude that, based on colours and luminosity, red galaxies form a homogeneous population over four decades in stellar mass, providing a second piece of evidence against faint red galaxies being a recent cluster population.
---
paper_title: Linear Regression for Astronomical Data with Measurement Errors and Intrinsic Scatter
paper_content:
Two new methods are proposed for linear regression analysis for data with measurement errors. Both methods are designed to accommodate intrinsic scatter in addition to measurement errors. The first (BCES) is a direct extension of the ordinary least squares (OLS) estimator to allow for measurement errors. It is quite general, allowing a) for measurement errors on both variables, b) the measurement errors for the two variables to be dependent, c) the magnitudes of the measurement errors to depend on the measurements, and d) other `symmetric' lines such as the bisector and the orthogonal regression can be constructed. The second method is a weighted least squares (WLS) estimator, which applies only in the case where the `independent' variable is measured without error and the magnitudes of the measurement errors on the 'dependent' variable are independent from the measurements. Several applications are made to extragalactic astronomy: The BCES method, when applied to data describing the color-luminosity relations for field galaxies, yields significantly different slopes than OLS and other estimators used in the literature. Simulations with artificial data sets are used to evaluate the small sample performance of the estimators. Unsurprisingly, the least-biased results are obtained when color is treated as the dependent variable. The Tully-Fisher relation is another example where the BCES method should be used because errors in luminosity and velocity are correlated due to inclination corrections. We also find, via simulations, that the WLS method is by far the best method for the Tolman surface-brightness test, producing the smallest variance in slope by an order of magnitude. Moreover, with WLS it is not necessary to ``reduce'' galaxies to a fiducial surface-brightness, since this model incorporates intrinsic scatter.
---
paper_title: Fits , and especially linear fits , with errors on both axes , extra variance of the data points and other complications
paper_content:
The aim of this paper, triggered by some discussions in the astrophysics community raised by astro-ph/0508529, is to introduce the issue of `fits' from a probabilistic perspective (also known as Bayesian), with special attention to the construction of model that describes the `network of dependences' (a Bayesian network) that connects experimental observations to model parameters and upon which the probabilistic inference relies. The particular case of linear fit with errors on both axes and extra variance of the data points around the straight line (i.e. not accounted by the experimental errors) is shown in detail. Some questions related to the use of linear fit formulas to log-linearized exponential and power laws are also sketched, as well as the issue of systematic errors.
---
paper_title: The Build–up of the Red Sequence in the galaxy cluster MS1054-0321 at z = 0.831
paper_content:
Using one of the deepest datasets available, we determine that the red sequence of the massive cluster MS1054-0321 at z=0.831 is well populated at all studied magnitudes, showing no deficit of faint (down to M^*+3.5) red galaxies: the faint end of the colour-magnitude relation is neither empty nor underpopulated. The effect is quantified by the computation of the luminosity function (LF) of red galaxies. We found a flat slope, showing that the abundance of red galaxies is similar at faint and at intermediate magnitudes. Comparison with present-day and z~0.4 LFs suggests that the slope of the LF is not changed, within the errors, between z=0.831 and z=0. Therefore, the analysis of the LF shows no evidence for a decreasing (with magnitude or redshift) number of faint red galaxies. The presence of faint red galaxies in high redshift clusters disfavours scenarios where the evolution of red galaxies is mass-dependent, because the mass dependency should differentially depauperate the red sequence, while the MS1054-0321 colour-magnitude relation is populated as in nearby clusters and as in z~0.4 clusters. The presence of abundant faint red galaxies in the high redshift cluster MS1054-0321 restricts the room for allocating descendants of Butcher-Oemler galaxies, because they should change the faint end slope of the LF of red galaxies, while instead the same faint end slopes are observed in MS1054-0321, at z~0 and at z~0.4. In the rich MS1054-0321 cluster, the colour-magnitude relation seems to be fully in place at z=0.831 and therefore red galaxies of all magnitudes were wholly assembled at higher redshift.
---
paper_title: New evidence for a linear colour–magnitude relation and a single Schechter function for red galaxies in a nearby cluster of galaxies down to M*+ 8
paper_content:
The colour and luminosity distributions of red galaxies in the cluster Abell 1185 (z = 0.0325) were studied down to M* + 8 in the B, V and R bands. The colour-magnitude (CM) relation is linear without evidence for significant bending down to absolute magnitudes that are seldom probed in the literature (M_R = -12.5 mag). The CM relation is thin (±0.04 mag) and its thickness is quite independent of the magnitude. The luminosity function (LF) of red galaxies in Abell 1185 is adequately described by a Schechter function, with a characteristic magnitude and a faint end slope that also well describe the LF of red galaxies in other clusters. There is no passband dependence of the LF shape other than an obvious M* shift due to the colour of the considered population. Finally, we conclude that, based on colours and luminosity, red galaxies form a homogeneous population over four decades in stellar mass, providing a second piece of evidence against faint red galaxies being a recent cluster population.
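For reference, the Schechter luminosity function mentioned here, written in its standard absolute-magnitude parametrisation (a textbook form, not quoted from the paper itself), is:

```latex
% Schechter luminosity function in absolute-magnitude form:
% phi* = normalisation, M* = characteristic magnitude, alpha = faint-end slope
\phi(M)\,\mathrm{d}M = 0.4\ln(10)\,\phi^{*}
  \left[10^{0.4(M^{*}-M)}\right]^{\alpha+1}
  \exp\!\left[-10^{0.4(M^{*}-M)}\right]\mathrm{d}M
```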
---
paper_title: Red sequence determination of the redshift of the cluster of galaxies JKCS041: z~2.2
paper_content:
This paper aims at robustly determining the redshift of the cluster of galaxies JKCS041 and at putting constraints on the formation epoch of the color-magnitude sequence in two very high redshift clusters. New deep z'-J data show a clear narrow red sequence that is co-centered with, and similarly concentrated on, the extended X-ray emission of the cluster of galaxies JKCS041. The JKCS041 red sequence is 0.32+/-0.06 mag redder in z'-J than the red sequence of the zspec=1.62 IRC0218A cluster, putting JKCS041 at z>>1.62 and ruling out z<~1.49 the latter claimed by a recent paper. The color difference of the two red sequences gives a red-sequence-based redshift of z=2.20+/-0.11 for JKCS041, where the uncertainty accounts for uncertainties in stellar synthesis population models, in photometric calibration, and in the red sequence color of both JKCS041 and IRC0218A clusters. We do not observe any sign of truncation of the red sequence for both clusters down to J=23 mag (1.0e+11 solar masses), which suggests that it is already in place in clusters rich and massive enough to heat and retain hot gas at these high redshifts.
---
paper_title: Richness-mass relation self-calibration for galaxy clusters
paper_content:
This work attains a threefold objective: first, we derived the richness-mass scaling in the local Universe from data of 53 clusters with individual measurements of mass. We found a 0.46 ± 0.12 slope and a 0.25 ± 0.03 dex scatter measuring richness with a previously developed method. Second, we showed on a real sample of 250 0.06 < z < 0.9 clusters, most of which are at z < 0.3, with spectroscopic redshift that the colour of the red sequence allows us to measure the clusters’ redshift to better than Δz = 0.02. Third, we computed the predicted prior of the richness-mass scaling to forecast the capabilities of future wide-field-area surveys of galaxy clusters to constrain cosmological parameters. To this aim, we generated a simulated universe obeying the richness-mass scaling that we found. We observed it with a PanStarrs 1+Euclid-like survey, allowing for intrinsic scatter between mass and richness, for errors on mass, on richness, and for photometric redshift errors. We fitted the observations with an evolving five-parameter richness-mass scaling with parameters to be determined. Input parameters were recovered, but only if the cluster mass function and the weak-lensing redshift-dependent selection function were accounted for in the fitting of the mass-richness scaling. This emphasizes the limitations of often adopted simplifying assumptions, such as having a mass-complete redshift-independent sample. We derived the uncertainty and the covariance matrix of the (evolving) richness-mass scaling, which are the input ingredients of cosmological forecasts using cluster counts. We find that the richness-mass scaling parameters can be determined 10 5 times better than estimated in previous works that did not use weak-lensing mass estimates, although we emphasize that this high factor was derived with scaling relations with different parameterizations. The better knowledge of the scaling parameters likely has a strong impact on the relative importance of the different probes used to constrain cosmological parameters. The fitting code used for computing the predicted prior, including the treatment of the mass function and of the weak-lensing selection function, is provided in Appendix A. It can be re-used, for example, to derive the predicted prior of other observable-mass scalings, such as the LX-mass relation.
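A minimal sketch of the kind of forward model used for such self-calibration forecasts: draw cluster masses, generate richnesses from an assumed scaling with log-normal intrinsic scatter, and add measurement noise. The parameter values echo the slope and scatter quoted above but the pipeline, sample sizes, and selection are illustrative assumptions, not the paper's actual code; the naive fit at the end shows why the mass function and selection function must be modelled explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative scaling: lg(richness) = alpha + beta * (lg M - 14.5),
# with log-normal intrinsic scatter and noisy mass measurements
alpha, beta, scatter = 1.5, 0.46, 0.25
lg_mass_true = rng.normal(14.5, 0.4, 5000)            # toy "universe" of clusters
lg_richness = (alpha + beta * (lg_mass_true - 14.5)
               + rng.normal(0.0, scatter, lg_mass_true.size))   # intrinsic scatter
lg_mass_obs = lg_mass_true + rng.normal(0.0, 0.15, lg_mass_true.size)  # mass errors

# a naive fit of observed quantities is biased by scatter, noise and selection,
# which is why the abstract stresses modelling the mass function and the
# weak-lensing selection function in the fit
print(np.polyfit(lg_mass_obs - 14.5, lg_richness, 1))
```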
---
paper_title: The absolute magnitudes of Type IA supernovae
paper_content:
Absolute magnitudes in the B, V, and I bands are derived for nine well-observed Type Ia supernovae using host galaxy distances estimated via the surface brightness fluctuations or Tully-Fisher methods. These data indicate that there is a significant intrinsic dispersion in the absolute magnitudes at maximum light of Type Ia supernovae, amounting to ±0.8 mag in B, ±0.6 mag in V, and ±0.5 mag in I. Moreover, the absolute magnitudes appear to be tightly correlated with the initial rate of decline of the B light curve, with the slope of the correlation being steepest in B and becoming progressively flatter in the V and I bands
---
paper_title: Do X-ray dark or underluminous galaxy clusters exist?
paper_content:
We study the X-ray properties of a color-selected sample of clusters at 0.1 < z < 0.3, to quantify the real abundance of the population of X-ray dark or underluminous clusters and at the same time the spurious detection contamination level of color-selected cluster catalogs. Starting from a local sample of color-selected clusters, we restrict our attention to those with sufficiently deep X-ray observations to probe their X-ray luminosity down to very faint values and without introducing any X-ray bias. This allowed us to have an X-ray-unbiased sample of 33 clusters to measure the LX-richness relation. Swift 1.4 Ms X-ray observations show that at least 89% of the color-detected clusters are real objects with a potential well deep enough to heat and retain an intracluster medium. The percentage rises to 94% when one includes the single spectroscopically confirmed color-selected cluster whose X-ray emission is not secured. Looking at our results from the opposite perspective, the percentage of X-ray dark clusters among color-selected clusters is very low: at most about 11 per cent (at 90% confidence). Supplementing our data with those from the literature, we conclude that X-ray and color cluster surveys sample the same population and consequently that in this regard we can safely use clusters selected with either of the two methods for cosmological purposes. This is an essential and promising piece of information for upcoming surveys in both the optical/IR (DES, EUCLID) and X-ray (eRosita). Richness correlates with X-ray luminosity with a large scatter, 0.51 ± 0.08 (0.44 ± 0.07) dex in lg LX at a given richness, when LX is measured in a 500 (1070) kpc aperture. We release data and software to estimate the X-ray flux, or its upper limit, of a source with over-Poisson background fluctuations (found in this work to be ∼20% on cluster angular scales) and to fit X-ray luminosity versus richness in the presence of an intrinsic scatter. These Bayesian applications rigorously account for boundaries (e.g., the X-ray luminosity and the richness cannot be negative).
---
paper_title: The scaling relation between richness and mass of galaxy clusters: a Bayesian approach
paper_content:
We use a sample of 53 galaxy clusters at 0.03<z<0.1 with available masses derived from the caustic technique and with velocity dispersions computed using 208 galaxies on average per cluster, in order to investigate the scaling between richness, mass and velocity dispersion. A tight scaling between richness and mass is found, with an intrinsic scatter of only 0.19 dex in mass and with a slope one, i.e. clusters which have twice as many galaxies are twice as massive. When richness is measured without any knowledge of the cluster mass or linked parameters (such as r200), it can predict mass with an uncertainty of 0.29+/-0.01 dex. As a mass proxy, richness competes favourably with both direct measurements of mass given by the caustic method, which has typically 0.14 dex errors (vs 0.29) and X-ray luminosity, which offers a similar 0.30 dex uncertainty. The similar performances of X-ray luminosity and richness in predicting cluster masses has been confirmed using cluster masses derived from velocity dispersion fixed by numerical simulations. These results suggest that cluster masses can be reliably estimated from simple galaxy counts, at least at the redshift and masses explored in this work. This has important applications in the estimation of cosmological parameters from optical cluster surveys, because in current surveys clusters detected in the optical range outnumber, by at least one order of magnitude, those detected in X-ray. Our analysis is robust from astrophysical and statistical perspectives. The data and code used for the stochastic computation is distributed with the paper. [Abridged]
---
paper_title: The enrichment history of the intracluster medium: a Bayesian approach
paper_content:
This work measures the evolution of the iron content in galaxy clusters by a rigorous analysis of the data of 130 clusters at 0.1 < z < 1.3. This task is made difficult by a) the low signal-to-noise ratio of abundance measurements and the upper limits; b) possible selection effects; c) boundaries in the parameter space; d) non-Gaussian errors; e) the intrinsic variety of the objects studied; and f) abundance systematics. We introduce a Bayesian model to address all these issues at the same time, thus allowing cross-talk (covariance). On simulated data, the Bayesian fit recovers the input enrichment history, unlike in standard analysis. After accounting for a possible dependence on X-ray temperature, for metal abundance systematics, and for the intrinsic variety of studied objects, we found that the present-day metal content is not reached either at high or at low redshifts, but gradually over time: iron abundance increases by a factor 1.5 in the 7 Gyr sampled by the data. Therefore, feedback in metal abundance does not end at high redshift. Evolution is established with a moderate amount of evidence, 19 to 1 odds against faster or slower metal enrichment histories. We quantify, for the first time, the intrinsic spread in metal abundance, 18 ± 3%, after correcting for the effect of evolution, X-ray temperature, and metal abundance systematics. Finally, we also present an analytic approximation of the X-ray temperature and metal abundance likelihood functions, which are useful for other regression fitting involving these parameters. The data for the 130 clusters and code used for the stochastic computation are provided with the paper.
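One ingredient mentioned above, the treatment of upper limits, can be illustrated with a generic censored-data likelihood: detections contribute Gaussian densities, while upper limits contribute the probability of falling below the quoted limit (the Gaussian CDF). This is a standard statistical device shown under simplifying assumptions, not the paper's full hierarchical model; names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def loglike_with_limits(mu, detections, det_err, upper_limits, limit_err):
    """Toy likelihood for a common mean abundance mu: detections enter as
    Gaussian densities, upper limits enter through the Gaussian CDF."""
    ll = np.sum(norm.logpdf(detections, loc=mu, scale=det_err))
    ll += np.sum(norm.logcdf(upper_limits, loc=mu, scale=limit_err))
    return ll
```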
---
| Title: Measurement errors and scaling relations in astrophysics: a review
Section 1: Introduction
Description 1: Provide an overview of scaling relations in astrophysics, including their importance and common examples.
Section 2: Why astronomers regress
Description 2: Discuss the reasons why astronomers perform regression analyses, such as parameter estimation, prediction, and model selection.
Section 3: Heteroscedastic error structure
Description 3: Explain the complications introduced by heteroscedastic error structures in astronomical data and how to address them.
Section 4: Intrinsic scatter
Description 4: Elaborate on the concept of intrinsic scatter and differentiate it from measurement noise in regression models.
Section 5: Non-ignorable data collection and selection effects
Description 5: Discuss the challenges posed by non-random data collection and selection effects on the regression analyses.
Section 6: Data structure and non-uniform populations
Description 6: Describe how the structure of the data and the presence of non-uniform populations can lead to biases in regression parameter estimation.
Section 7: Non-Gaussian data, outliers and other complications
Description 7: Address issues arising from non-Gaussian data, outliers, and other complications in regression models.
Section 8: Commonly used regression techniques in astronomy
Description 8: Provide an overview of various regression methods used in astronomy, including their strengths and weaknesses.
Section 9: Ordinary Least Squares regression
Description 9: Explain the Ordinary Least Squares (OLS) regression method and its application in astronomy.
Section 10: Weighted least squares fits
Description 10: Describe the weighted least squares regression method and scenarios where it is more appropriate than OLS.
Section 11: Maximum Likelihood Estimation
Description 11: Elaborate on the Maximum Likelihood Estimation (MLE) method and its advantages over OLS and weighted least squares.
Section 12: Robust estimation
Description 12: Introduce robust estimation methods to handle outliers and violations of parametric assumptions.
Section 13: Bivariate Correlated Errors and Intrinsic Scatter (BCES)
Description 13: Review the BCES method developed to handle correlated errors and intrinsic scatter in both axes of regression models.
Section 14: Astronomy Survival Analysis (ASURV)
Description 14: Discuss the ASURV method for handling censored data in regression analysis.
Section 15: Errors-in-variable Bayesian regression
Description 15: Review the Bayesian regression method for errors-in-variables models and its advantages in handling complicated data structures.
Section 16: Performance comparisons
Description 16: Compare the performance of different regression techniques in estimating regression parameters and prediction accuracy.
Section 17: Including more features of astronomical data in Bayesian regression
Description 17: Discuss how Bayesian regression can be adapted to include common features of astronomical data such as heteroscedastic errors, intrinsic scatter, and non-ignorable selection effects.
Section 18: Conclusions
Description 18: Summarize the key points discussed in the review and highlight the importance of Bayesian approaches in addressing complex regression problems in astrophysics. |
Overview and open issues on penetration test | 16 | ---
paper_title: Penetration Testing - Protecting Networks and Systems
paper_content:
Penetration testing is the simulation of an unethical attack of a computer system or other facility in order to prove the vulnerability of that system in the event of a real attack. The Certified Penetration Testing Engineer (CPTE) examination is a widely recognized certification for penetration testers. Penetration Testing: Protecting networks and systems is a preparation guide for the CPTE examination. It describes the range of techniques employed by professional pen testers, and also includes advice on the preparation and delivery of the test report. The author's in-the-field experiences, combined with other real-world examples, are used to illustrate common pitfalls that can be encountered during testing and reporting. Special attention is also paid to new technologies that improve business operations, but which can create new vulnerabilities, such as employee remote access, wireless communications and public-facing web applications. This book will give you a better understanding of how to conduct a penetration test, and also how to deliver a client-focused report that assesses the security of the system and whether the level of risk to the organization is within acceptable levels. Kevin Henry has 35 years' experience working on computer systems, initially as a computer operator, and then in various programmer and analyst roles, before moving into audit and security. Kevin currently provides security auditing, training and educational programs for major clients and governments around the world and is a frequent speaker on the security conference circuit. A business-aligned approach to penetration testing!
---
paper_title: Opportunities and threats: A security assessment of state e-government websites
paper_content:
This study assessed the security of the U.S. state e-government sites to identify opportunities for and threats to the sites and their users. The study used a combination of three methods – web content analysis, information security auditing, and computer network security mapping – for data collection and analysis. The findings indicate that most state e-government sites posted privacy and security policy statements; however, less than half stated clearly what security measures were in action. Second, the information security audit revealed that 98% of the sites secured users' accounts with SSL encryption for data transmission, and the sites' search tools enable public users to search for public information only. Third, although the sites had most of their internet ports filtered or behind firewalls, all of them had their main IP addresses detected and their port 80/tcp open. The study discussed the threats and opportunities and suggested possible solutions for improving e-government security.
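A minimal sketch of the kind of network security mapping check mentioned above (is a host's port 80/tcp reachable?), using only the standard library. The host name is a placeholder, and such probes should only be run against systems one is authorised to test.

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (port likely open)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# placeholder host; only probe systems you are authorised to assess
print(tcp_port_open("example.com", 80))
```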
---
paper_title: Modeling intrusion detection system using hybrid intelligent systems
paper_content:
The process of monitoring the events occurring in a computer system or network and analyzing them for sign of intrusions is known as intrusion detection system (IDS). This paper presents two hybrid approaches for modeling IDS. Decision trees (DT) and support vector machines (SVM) are combined as a hierarchical hybrid intelligent system model (DT-SVM) and an ensemble approach combining the base classifiers. The hybrid intrusion detection model combines the individual base classifiers and other hybrid machine learning paradigms to maximize detection accuracy and minimize computational complexity. Empirical results illustrate that the proposed hybrid systems provide more accurate intrusion detection systems.
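As a rough illustration of combining the two base classifiers discussed above, the sketch below builds a simple soft-voting ensemble of a decision tree and an SVM with scikit-learn. It is not the paper's hierarchical DT-SVM architecture, and the generated dataset is a stand-in for a real intrusion-detection feature set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# stand-in data in place of a real intrusion-detection dataset (KDD-style features)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# simple ensemble of the two base classifiers discussed in the abstract
ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=8)),
                ("svm", SVC(kernel="rbf", probability=True))],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```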
---
paper_title: Penetration Testing and Cisco Network Defense
paper_content:
The practical guide to simulating, detecting, and responding to network attacks: create step-by-step testing plans; learn to perform social engineering and host reconnaissance; evaluate session hijacking methods; exploit web server vulnerabilities; detect attempts to breach database security; use password crackers to obtain access information; circumvent Intrusion Prevention Systems (IPS) and firewall protections and disrupt the service of routers and switches; scan and penetrate wireless networks; understand the inner workings of Trojan Horses, viruses, and other backdoor applications; test UNIX, Microsoft, and Novell servers for vulnerabilities; learn the root cause of buffer overflows and how to prevent them; perform and prevent Denial of Service attacks. Penetration testing is a growing field but there has yet to be a definitive resource that instructs ethical hackers on how to perform a penetration test with the ethics and responsibilities of testing in mind. Penetration Testing and Network Defense offers detailed steps on how to emulate an outside attacker in order to assess the security of a network. Unlike other books on hacking, this book is specifically geared towards penetration testing. It includes important information about liability issues and ethics as well as procedures and documentation. Using popular open-source and commercial applications, the book shows you how to perform a penetration test on an organization's network, from creating a test plan to performing social engineering and host reconnaissance to performing simulated attacks on both wired and wireless networks. Penetration Testing and Network Defense also goes a step further than other books on hacking, as it demonstrates how to detect an attack on a live network. By detailing the method of an attack and how to spot an attack on your network, this book better prepares you to guard against hackers. You will learn how to configure, record, and thwart these attacks and how to harden a system to protect it against future internal and external attacks. Full of real-world examples and step-by-step procedures, this book is both an enjoyable read and full of practical advice that will help you assess network security and develop a plan for locking down sensitive data and company resources. "This book goes to great lengths to explain the various testing approaches that are used today and gives excellent insight into how a responsible penetration testing specialist executes his trade." (Bruce Murphy, Vice President, World Wide Security Services, Cisco Systems®)
---
paper_title: Systematic mapping studies in software engineering
paper_content:
BACKGROUND: A software engineering systematic map is a defined method to build a classification scheme and structure a software engineering field of interest. The analysis of results focuses on frequencies of publications for categories within the scheme. Thereby, the coverage of the research field can be determined. Different facets of the scheme can also be combined to answer more specific research questions. ::: ::: OBJECTIVE: We describe how to conduct a systematic mapping study in software engineering and provide guidelines. We also compare systematic maps and systematic reviews to clarify how to chose between them. This comparison leads to a set of guidelines for systematic maps. ::: ::: METHOD: We have defined a systematic mapping process and applied it to complete a systematic mapping study. Furthermore, we compare systematic maps with systematic reviews by systematically analyzing existing systematic reviews. ::: ::: RESULTS: We describe a process for software engineering systematic mapping studies and compare it to systematic reviews. Based on this, guidelines for conducting systematic maps are defined. ::: ::: CONCLUSIONS: Systematic maps and reviews are different in terms of goals, breadth, validity issues and implications. Thus, they should be used complementarily and require different methods (e.g., for analysis).
---
paper_title: A survey on web penetration test
paper_content:
This paper reviews penetration testing specifically in the field of web applications. For this purpose, it first reviews articles on penetration testing in general and its associated methods. Then articles in the field of web penetration testing are examined in three aspects: comparison of automatic penetration testing tools, introduction of new methods or tools for manual penetration testing, and articles that present a test environment for training or for checking various instruments and methods. This article studies 4 different methodologies for web penetration testing, 13 articles comparing web vulnerability scanners, 10 articles that propose a new method or tool for penetration testing, and 4 test environments.
---
paper_title: Penetration Testing Tool for Web Services Security
paper_content:
XML-based SOAP Web Services are a widely used technology, which allows the users to execute remote operations and transport arbitrary data. It is currently adapted in Service Oriented Architectures, cloud interfaces, management of federated identities, eGovernment, or military services. The wide adoption of this technology has resulted in an emergence of numerous -- mostly complex -- extension specifications. Naturally, this has been followed by a rise in a large number of Web Services attacks. They range from specific Denial of Service attacks to attacks breaking interfaces of cloud providers or confidentiality of encrypted messages. By implementing common web applications, the developers evaluate the security of their systems by applying different penetration testing tools. However, in comparison to the well-known attacks such as SQL injection or Cross Site Scripting, there exist no penetration testing tools for Web Services specific attacks. This was the motivation for developing the first automated penetration testing tool for Web Services called WS-Attacker. In this paper we give an overview of our design decisions and provide an evaluation of four Web Services frameworks and their resistance against WS-Addressing spoofing and SOAPAction spoofing attacks. WS-Attacker was built with respect to its future extensions with further attacks in order to provide an all-in-one security checking interface.
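To make the SOAPAction spoofing idea concrete, the sketch below shows what such a probe looks like at the HTTP level: the SOAP body names one operation while the SOAPAction header claims another, and a vulnerable framework may execute the operation named in the header. The endpoint, namespaces, and operation names are placeholders, and tools like WS-Attacker automate variants of this; only test services you are authorised to assess.

```python
import requests

# placeholder endpoint and operations
endpoint = "http://target.example/ws/service"
soap_body = """<?xml version="1.0"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body><ns:operationA xmlns:ns="http://target.example/ns"/></soapenv:Body>
</soapenv:Envelope>"""

# body requests operationA while the SOAPAction header claims operationB
headers = {"Content-Type": "text/xml; charset=utf-8",
           "SOAPAction": '"http://target.example/ns/operationB"'}
resp = requests.post(endpoint, data=soap_body, headers=headers, timeout=10)
print(resp.status_code, resp.text[:200])
```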
---
paper_title: Topological analysis of network attack vulnerability
paper_content:
This talk will discuss issues and methods for survivability of systems under malicious attacks. To protect from such attacks, it is necessary to take steps to prevent attacks from succeeding. At the same time, it is important to recognize that not all attacks can be averted at the outset; attacks that are successful to some degree must be recognized as unavoidable and comprehensive support for identifying and responding to attacks is required.In my talk, I will describe the recent research on attack graphs that represent known attack sequences attackers can use to penetrate computer networks. I will show how attack graphs can be used to compute actual sets of hardening measures that guarantee the safety of given critical resources. Attack graphs can also be used to correlate received alerts, hypothesize missing alerts, and predict future alerts, all at the same time. Thus, they offer a promising solution for administrators to monitor and predict the progress of an intrusion, and take appropriate countermeasures in a timely manner.
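A toy illustration of the attack graph concept described above: attacker states as nodes, exploits as directed edges, and attack paths enumerated from the entry point to a critical asset. Node and exploit names are invented for illustration; real attack graph tools work at much larger scale and with richer semantics.

```python
import networkx as nx

# toy attack graph: nodes are attacker states, edges are exploits
g = nx.DiGraph()
g.add_edge("internet", "dmz_web", exploit="web app RCE")
g.add_edge("dmz_web", "intranet", exploit="weak firewall rule")
g.add_edge("intranet", "db_server", exploit="reused credentials")
g.add_edge("internet", "intranet", exploit="VPN misconfiguration")

# all attack paths from the attacker's entry point to the critical asset
for path in nx.all_simple_paths(g, "internet", "db_server"):
    print(" -> ".join(path))
```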
---
paper_title: RAPn: Network Attack Prediction Using Ranking Access Petri Net
paper_content:
Exploit sequencing is a typical way by which an attacker breaks into a network. In such a scenario, each exploit serves as an atomic proposition for subsequent exploits. An attack path is seen as a succession of exploits which takes an attacker right to his/her final goal. The set of all possible attack paths forms an attack graph. Researchers have proposed a multitude of techniques to generate attack graphs, which grow exponentially in the size of the network. Hence it is preferable to choose solutions that avoid the cost of poor scalability and cumbersome analysis. In this paper, we propose a comprehensive approach to network vulnerability analysis by using a ranking access Petri net graph and utilizing a penetration tester's perspective of the maximal level of access possible on a host. Our approach has the following benefits: it provides a simple model in which an analyst can work, its algorithmic complexity is polynomial in the size of the network, and it has the ability to scale well to large networks. Nevertheless, it has a drawback: instead of all possible attack paths, we seek only good attack paths. An analyst may make suboptimal choices when repairing the network.
---
paper_title: SWAM: Stuxnet Worm Analysis in Metasploit
paper_content:
Nowadays cyber security is becoming a great challenge. The attacker community is progressing towards making smart and intelligent malware (viruses, worms and rootkits) that stealths its existence and uses administrator rights without the legitimate user's knowledge. The Stuxnet worm is an example of recent malware, first detected in July 2010. Its variants were also detected earlier. It is the first type of worm that affects the normal functionality of industrial control systems (ICS) having programmable logic controllers (PLC), through a PLC rootkit. Its main goal is to modify ICS behavior by changing the code of the PLC and make it behave in a way that the attacker wants. It is a complex piece of malware having different operations and functionalities which are achieved by exploiting zero-day vulnerabilities. Stuxnet exploits various vulnerable services in Microsoft Windows. In this paper we show a real-time simulation of the first three of these vulnerabilities through Metasploit Framework 3.2 and analyze the results. A real-time scenario is established based on some assumptions. We assume a Proteus design (pressure sensor) as the PLC and show that, after exploitation, the pressure value drops to an unacceptable level by changing the Keil code of this design.
---
paper_title: Benchmarking the Security of Web Serving Systems Based on Known Vulnerabilities
paper_content:
This paper proposes a methodology and a tool to evaluate the security risk presented when using software components or systems. The risk is estimated based on known vulnerabilities existing on the software components. An automated tool is used to extract and aggregate information on vulnerabilities reported by users and available on public databases (e.g., OSVDB and NVD). This tool generates comprehensive reports including the vulnerability type frequency, severity, exploitability, impact, and so on, and extracts correlations between aspects such as impact and representativeness, making possible the identification of aspects such as typical and worst impact for a given vulnerability. The proposed methodology, when applied to systems within the same class, enables buyers and system integrators to identify which system or component presents the lower security risk, helping them to select which system to use. The paper includes a case study to demonstrate the usefulness of the methodology and the tool.
---
paper_title: Designing vulnerability testing tools for web services: approach, components, and tools
paper_content:
This paper proposes a generic approach for designing vulnerability testing tools for web services, which includes the definition of the testing procedure and the tool components. Based on the proposed approach, we present the design of three innovative testing tools that implement three complementary techniques (improved penetration testing, attack signatures and interface monitoring, and runtime anomaly detection) for detecting injection vulnerabilities, thus offering an extensive support for different scenarios. A case study has been designed to demonstrate the tools for the particular case of SQL Injection vulnerabilities. The experimental evaluation demonstrates that the tools can effectively be used in different scenarios and that they outperform well-known commercial tools by achieving higher detection coverage and lower false-positive rates.
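A very small sketch of the "improved penetration testing" ingredient mentioned above: send candidate injection payloads to a service parameter and flag responses containing database error text. The URL, parameter name, payloads, and error signatures are illustrative assumptions, not the paper's tool; real detectors combine this with attack signatures, interface monitoring, or anomaly detection, and tests must only target applications you are authorised to assess.

```python
import requests

# placeholder target and parameter
url = "http://target.example/api/search"
payloads = ["' OR '1'='1", "'; --", '" OR "1"="1']
error_signatures = ["SQL syntax", "ORA-", "SQLite error", "unterminated quoted string"]

for p in payloads:
    resp = requests.get(url, params={"q": p}, timeout=10)
    if any(sig.lower() in resp.text.lower() for sig in error_signatures):
        print(f"possible SQL injection with payload {p!r}")
```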
---
paper_title: Denial-of-Service detection in 6LoWPAN based Internet of Things
paper_content:
Smart objects connected to the Internet, constituting the so called Internet of Things (IoT), are revolutionizing human beings' interaction with the world. As technology reaches everywhere, anyone can misuse it, and it is always essential to secure it. In this work we present a denial-of-service (DoS) detection architecture for 6LoWPAN, the standard protocol designed by IETF as an adaptation layer for low-power lossy networks enabling low-power devices to communicate with the Internet. The proposed architecture integrates an intrusion detection system (IDS) into the network framework developed within the EU FP7 project ebbits. The aim is to detect DoS attacks based on 6LoWPAN. In order to evaluate the performance of the proposed architecture, preliminary implementation was completed and tested against a real DoS attack using a penetration testing system. The paper concludes with the related results proving to be successful in detecting DoS attacks on 6LoWPAN. Further, extending the IDS could lead to detect more complex attacks on 6LoWPAN.
---
paper_title: Penetration Testing of OPC as Part of Process Control Systems
paper_content:
We have performed penetration testing on OPC, which is a central component in process control systems on oil installations. We have shown how a malicious user with different privileges --- outside the network, access to the signalling path and physical access to the OPC server --- can fairly easily compromise the integrity, availability and confidentiality of the system. Our tentative tests demonstrate that full-scale penetration testing of process control systems in offshore installations is necessary in order to sensitise the oil and gas industry to the evolving threats.
---
paper_title: An Analysis of Black-Box Web Application Security Scanners against Stored SQL Injection
paper_content:
Web application security scanners are a compilation of various automated tools put together and used to detect security vulnerabilities in web applications. Recent research has shown that detecting stored SQL injection, one of the most critical web application vulnerabilities, is a major challenge for black-box scanners. In this paper, we evaluate three state-of-the-art black-box scanners that support detecting stored SQL injection vulnerabilities. We developed our custom test bed that challenges the scanners' capability regarding stored SQL injections. The results show that existing vulnerabilities are not detected even when these automated scanners are taught to exploit the vulnerability. The weaknesses of black-box scanners identified reside in many areas: crawling, input values and attack code selection, user login, analysis of server replies, mis-categorization of findings, and the automated process functionality. Because of the poor detection rate, we discuss the different phases of black-box scanners' scanning cycle and propose a set of recommendations that could enhance the detection rate of stored SQL injection vulnerabilities.
---
paper_title: A model-based approach to security flaw detection of network protocol implementations
paper_content:
A lot of effort has been devoted to the analysis of network protocol specifications for reliability and security properties using formal techniques. However, faults can also be introduced during system implementation; it is indispensable to detect protocol implementation flaws, yet due to the black-box nature of protocol implementations and the unavailability of protocol specifications, most of the approaches resort to random or manual testing. In this paper we propose a model-based approach for security flaw detection of protocol implementations with high fault coverage, measurability, and automation. Our approach first synthesizes an abstract behavioral model from a protocol implementation and then uses it to guide the testing process for detecting security and reliability flaws. For protocol specification synthesis we reduce the problem to trace minimization with a finite state machine model, and an efficient algorithm is presented for state space reduction. Our method is implemented and applied to real network protocols. Guided by the synthesized model, our testing tool reveals a number of unknown reliability and security issues by automatically crashing the implementations of the Microsoft MSN instant messaging (MSNIM) protocol. An analytical comparison between our model-based and prevalent syntax-based flaw detection schemes is also provided with the support of experimental results.
---
paper_title: Multi-vendor penetration testing in the advanced metering infrastructure
paper_content:
The advanced metering infrastructure (AMI) is revolutionizing electrical grids. Intelligent AMI "smart meters" report real time usage data that enables efficient energy generation and use. However, aggressive deployments are outpacing security efforts: new devices from a dizzying array of vendors are being introduced into grids with little or no understanding of the security problems they represent. In this paper we develop an archetypal attack tree approach to guide penetration testing across multiple-vendor implementations of a technology class. In this, we graft archetypal attack trees modeling broad adversary goals and attack vectors to vendor-specific concrete attack trees. Evaluators then use the grafted trees as a roadmap to penetration testing. We apply this approach within AMI to model attacker goals such as energy fraud and denial of service. Our experiments with multiple vendors generate real attack scenarios using vulnerabilities identified during directed penetration testing, e.g., manipulation of energy usage data, spoofing meters, and extracting sensitive data from internal registers. More broadly, we show how we can reuse efforts in penetration testing to efficiently evaluate the increasingly large body of AMI technologies being deployed in the field.
---
paper_title: Automatic Generation for Penetration Testing Scheme Analysis Model for Network
paper_content:
Existing penetration testing systems rely on professional skills throughout the testing process, which increases the human resource requirements and costs of penetration testing. It also reduces the efficiency of the test and lengthens the test cycle. This paper presents a method for automatically generating penetration testing schemes. Based on a description of the penetration testing scheme, we establish an automatic generation system for penetration testing schemes; the automatic generation penetration testing scheme analysis model for networks (AGPTSAM) is established, and the termination of AGPTSAM is proved by means of a pushdown automaton. A prototype system of AGPTSAM is implemented. The generated penetration testing scheme can provide guidance and a basis for the implementation of the penetration testing system, and shorten the test cycle and the human resource cost. The effectiveness of this method is verified by experiments on the implemented platform.
---
paper_title: Aiming at Higher Network Security through Extensive Penetration Tests
paper_content:
Modern enterprise infrastructures adopt multilayer network architectures and heterogeneous server environments in order to efficiently fulfill each organization's goals and objectives. These complex network architectures have resulted in increased demands for information security measures. Each organization needs to deal effectively with these major security concerns, forming a security policy according to its requirements and objectives. An efficient security policy must be proactive in order to provide sufficient defense layers against a variety of known and unknown attack classes and cases. This proactive approach is often wrongly interpreted as merely keeping software and hardware up to date. Regular updates are necessary, although not sufficient, because potential mis-configurations and design flaws cannot be located and patched, making the whole network vulnerable to attackers. In this paper we present how a comprehensive security level can be reached through extensive Penetration Tests (Ethical Hacking). We present a Penetration Test methodology and framework capable of exposing possible exploitable vulnerabilities in every network layer. Additionally, we conducted an extensive analysis of a network penetration test case study against a network simulation lab setup, exposing common network mis-configurations and their security implications for the whole network and its users.
---
paper_title: Cyber Scanning: A Comprehensive Survey
paper_content:
Cyber scanning refers to the task of probing enterprise networks or Internet-wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primary methodology adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services on one side and the proliferation of hackers' refined, advanced, and sophisticated techniques on the other side, the task of containing cyber scanning poses serious issues and challenges. Furthermore, recently there has been a flourishing of a cyber phenomenon dubbed cyber scanning campaigns - scanning techniques that are highly distributed, possess composite stealth capabilities and high coordination - rendering almost all current detection techniques unfeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.
---
paper_title: A Security Assessment Methodology for Critical Infrastructures
paper_content:
Interest in security assessment and penetration testing techniques has steadily increased. Likewise, security of industrial control systems (ICS) has become more and more important. Very few methodologies directly target ICS and none of them generalizes the concept of "critical infrastructures pentesting". Existing methodologies and tools cannot be applied directly to critical infrastructures (CIs) due to safety and availability requirements. Moreover, there is no clear understanding of the specific output that CI operators need from such an assessment. We propose a new methodology tailored to support security testing in ICS/CI environments. By analyzing security assessments and penetration testing methodologies proposed for other domains, interviewing stakeholders to identify existing best practices adopted in industry, deriving related issues, and collecting proposals for possible solutions, we propose a new security assessment and penetration testing methodology for critical infrastructure.
---
paper_title: Towards a Penetration Testing Framework Using Attack Patterns
paper_content:
The problems of system security are well known, but no satisfactory methods to resolve them have ever been discovered. One heuristic method is to use a penetration test with the rationale of finding system flaws before malicious attackers. However, this is a craft-based discipline without an adequate theoretical or empirical basis for justifying its activities and results. We show that both the automated tool and skill-based methods of pen testing are unsatisfactory, because we need to provide understandable evidence to clients about their weaknesses and offer actionable plans to fix the critical ones. We use attack patterns to help develop a pen-testing framework to help avoid the limitations of current approaches.
---
paper_title: Extending HARM to make Test Cases for Penetration Testing
paper_content:
[Context and motivation] Penetration testing is one key technique for discovering vulnerabilities, so that software can be made more secure. [Question/problem] Alignment between modeling techniques used earlier in a project and the development of penetration tests could enable a more systematic approach to such testing, and in some cases also enable creativity. [Principal ideas/results] This paper proposes an extension of HARM (Hacker Attack Representation Method) to achieve a systematic approach to penetration test development. [Contributions] The paper gives an outline of the approach, illustrated by an e-exam case study.
---
paper_title: Assessing and Comparing Vulnerability Detection Tools for Web Services: Benchmarking Approach and Examples
paper_content:
Selecting a vulnerability detection tool is a key problem that is frequently faced by developers of security-critical web services. Research and practice shows that state-of-the-art tools present low effectiveness both in terms of vulnerability coverage and false positive rates. The main problem is that such tools are typically limited in the detection approaches implemented, and are designed for being applied in very concrete scenarios. Thus, using the wrong tool may lead to the deployment of services with undetected vulnerabilities. This paper proposes a benchmarking approach to assess and compare the effectiveness of vulnerability detection tools in web services environments. This approach was used to define two concrete benchmarks for SQL Injection vulnerability detection tools. The first is based on a predefined set of web services, and the second allows the benchmark user to specify the workload that best portrays the specific characteristics of his environment. The two benchmarks are used to assess and compare several widely used tools, including four penetration testers, three static code analyzers, and one anomaly detector. Results show that the benchmarks accurately portray the effectiveness of vulnerability detection tools (in a relative manner) and suggest that the proposed benchmarking approach can be applied in the field.
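A minimal sketch of the kind of measures such a benchmark reports when comparing tools against a known ground truth of vulnerabilities: detection coverage (recall), the false-positive fraction among reports, and an F-measure. The sets used in the example are purely illustrative.

```python
def benchmark_metrics(reported: set, ground_truth: set):
    """Detection coverage (recall), false-positive fraction among reports
    (1 - precision), and F-measure for one tool against the known true set."""
    tp = len(reported & ground_truth)
    fp = len(reported - ground_truth)
    fn = len(ground_truth - reported)
    coverage = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f_measure = (2 * precision * coverage / (precision + coverage)
                 if (precision + coverage) else 0.0)
    return coverage, 1.0 - precision, f_measure

print(benchmark_metrics({"v1", "v2", "v9"}, {"v1", "v2", "v3", "v4"}))
```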
---
paper_title: Security testing in the cloud by means of ethical worm
paper_content:
As Cloud Computing continues to evolve the majority of research tends to lean towards optimising, securing and improving Cloud technologies. Less work appears which leverages the architectural and economic advantages of the Cloud. This paper examines the Cloud as a security testing environment, having a number of purposes such as penetration testing, and the dynamic creation and testing of environments for learning about malicious processes and testing security concepts. A novel experiment into malicious software propagation using ethical worms is developed and tested as a proof of concept to be adopted as a novel approach for security testing in the cloud. The work presented in the paper is unprecedented, to the best of the authors' knowledge.
---
paper_title: Design and Implementation of an XML-Based Penetration Testing System
paper_content:
To address the low efficiency, long test cycles, single-format test results, lack of standardized result documents and other drawbacks of traditional penetration testing systems, this paper designs and implements an XML-based penetration testing system. The system uses SNMP, PING, Telnet and other means to discover resources, assesses vulnerabilities based on OVAL and CVE, and finally realizes the penetration testing function in combination with a penetration testing strategy. Data is transmitted in XML format throughout the system, and the test results are ultimately presented in XML format. The system can improve test efficiency, reduce cycle time, and make the test results rich and unified, and it is cross-platform, consistent and scalable.
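To illustrate the idea of emitting results in XML, the sketch below builds a small result document with the standard library. The element names and attributes are made up for illustration and are not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# illustrative result structure; element names are not the paper's schema
report = ET.Element("pentest_report")
host = ET.SubElement(report, "host", address="192.0.2.10")
finding = ET.SubElement(host, "finding", cve="CVE-2014-0160", severity="high")
finding.text = "Service on port 443 appears vulnerable (OVAL check failed)."

print(ET.tostring(report, encoding="unicode"))
```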
---
paper_title: Effective Detection of SQL/XPath Injection Vulnerabilities in Web Services
paper_content:
This paper proposes a new automatic approach for the detection of SQL Injection and XPath Injection vulnerabilities, two of the most common and most critical types of vulnerabilities in web services. Although there are tools that allow testing web applications against security vulnerabilities, previous research shows that the effectiveness of those tools in web services environments is very poor. In our approach a representative workload is used to exercise the web service and a large set of SQL/XPath Injection attacks are applied to disclose vulnerabilities. Vulnerabilities are detected by comparing the structure of the SQL/XPath commands issued in the presence of attacks to the ones previously learned when running the workload in the absence of attacks. Experimental evaluation shows that our approach performs much better than known tools (including commercial ones), achieving extremely high detection coverage while maintaining the false positives rate very low.
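A toy sketch of the core idea described above: learn the structure of SQL commands (with literals stripped) while running an attack-free workload, then flag commands whose structure deviates when attacks are applied. The regular expressions and queries are illustrative only and far simpler than a production implementation.

```python
import re

def sql_structure(query: str) -> str:
    """Strip string and numeric literals so only the command structure remains."""
    q = re.sub(r"'[^']*'", "?", query)   # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)       # numeric literals -> ?
    return re.sub(r"\s+", " ", q).strip().lower()

# structures learned while running the attack-free workload
learned = {sql_structure("SELECT * FROM users WHERE id = 42")}

# command observed while the service is under attack
attacked = "SELECT * FROM users WHERE id = 42 OR 1=1"
if sql_structure(attacked) not in learned:
    print("structural deviation -> likely injection vulnerability exercised")
```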
---
paper_title: Two methodologies for physical penetration testing using social engineering
paper_content:
Penetration tests on IT systems are sometimes coupled with physical penetration tests and social engineering. In physical penetration tests where social engineering is allowed, the penetration tester directly interacts with the employees. These interactions are usually based on deception and if not done properly can upset the employees, violate their privacy or damage their trust toward the organization and might lead to law suits and loss of productivity. We propose two methodologies for performing a physical penetration test where the goal is to gain an asset using social engineering. These methodologies aim to reduce the impact of the penetration test on the employees. The methodologies have been validated by a set of penetration tests performed over a period of two years.
---
paper_title: Software Vulnerability Discovery Techniques: A Survey
paper_content:
Software vulnerabilities are the root cause of computer security problems. How to quickly discover vulnerabilities in a given piece of software has always been a focus of the information security field. This paper surveys software vulnerability discovery techniques, including static analysis, fuzzing and penetration testing. Besides, the authors also take vulnerability discovery models as an example of software vulnerability analysis methods, which go hand in hand with vulnerability discovery techniques. The final part of the paper analyses the advantages and disadvantages of each technique introduced here and discusses the future direction of this field.
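A very small mutation-fuzzing sketch to make the fuzzing technique mentioned above concrete: flip a few bytes of a seed input, feed it to the target, and record inputs that trigger exceptions. The parser under test is a stand-in written for this example, not a real target.

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Randomly overwrite a few bytes of a seed input (a tiny mutation fuzzer)."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def parser_under_test(data: bytes) -> None:
    # stand-in for the real target; raising here models a discovered defect
    if data[:2] == b"MZ" and len(data) < 8:
        raise ValueError("malformed header not handled")

seed = b"MZ\x90\x00" + b"A" * 16
for i in range(10000):
    sample = mutate(seed)[: random.randrange(2, len(seed) + 1)]
    try:
        parser_under_test(sample)
    except Exception as exc:
        print(f"iteration {i}: crash reproduced with input {sample!r}: {exc}")
        break
```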
---
paper_title: Cost effective assessment of the infrastructure security posture
paper_content:
An organisation's security posture is an indication of the countermeasures that have been implemented to protect the organisation's resources. The countermeasures are security best practice that is appropriate to the organisation's risk appetite and the business requirements. The security posture is defined by an organisation's security policy and its mission statement and business objectives. Countermeasures come with a cost, which should not exceed the value of the resources they are protecting, and they should be effective and provide value for money and a return on investment for the organisation. Measuring how the organisation's actual security posture relates to its agreed acceptable level of risk is a problem that organisations face when looking at whether their countermeasures are effective and providing value for money and a return on investment. There are two methodologies that can be used: (1) auditing, which is the mechanism of confirming that processes or procedures agree with a master checklist for compliance; and (2) assessing, a more active, or intrusive, testing methodology used to adequately assess processes or procedures that cannot be adequately verified using a checklist or security policy. This paper investigates the attack surface of an organisation's infrastructure and applications, examining the cases where the use of cloud and mobile computing has extended the infrastructure beyond the traditional perimeter of the organisation's physical locations, and the challenges this causes in assessing the security posture. A review of the use of assessment methodologies such as vulnerability assessment and penetration testing to assess the infrastructure and application security posture of an organisation shows how they can provide identification of vulnerabilities which can aid the risk assessment process in developing a security policy. It demonstrates how these methodologies can help in assessing the effectiveness of the implemented countermeasures and aid in evaluating whether they provide value for money and a return on investment. It is proposed that a long-term strategy of using both methodologies to assess the security posture based on the business requirements will provide the following benefits: cost-effective monitoring of the infrastructure and security posture; ensuring that the countermeasures retain effectiveness over time; responding to the continually changing threat environment; and ensuring that value for money and return on investment are maintained.
---
paper_title: Modeling and execution of complex attack scenarios using interval timed colored Petri nets
paper_content:
The commonly used flaw hypothesis model (FHM) for performing penetration tests provides only limited, high level guidance for the derivation of actual penetration attempts. In this paper, a mechanism for the systematic modeling, simulation, and exploitation of complex multistage and multiagent vulnerabilities in networked and distributed systems based on stochastic and interval-timed colored Petri nets is described and analyzed through case studies elucidating several properties of Petri net variants and their suitability to modeling this type of attack.
---
paper_title: Effective penetration testing with Metasploit framework and methodologies
paper_content:
Nowadays, information security is very important, because more and more confidential information, like medical reports, is being stored electronically on computer systems, and those systems are often connected to computer networks. This represents new challenges for people working in information technology. They have to ensure that those systems are as secure as possible and that confidential information will not be revealed. One possible way to evaluate system security is to conduct regular penetration tests, i.e. to simulate an attacker's malicious activity. This article briefly introduces the basics of penetration testing and shows how to deploy and use the Metasploit framework when conducting penetration testing. Finally, a case study in a production environment is shown. The software tools and techniques described in this work are also valid and applicable to SCADA systems and, moreover, to any other field where computer networks are used.
---
paper_title: Testing and assessing web vulnerability scanners for persistent SQL injection attacks
paper_content:
Web application security scanners are automated tools used to detect security vulnerabilities in web applications. Recent research has shown that detecting persistent SQL injection vulnerabilities, one of the most critical web application vulnerabilities, is a major challenge for black-box scanners. In this paper, we evaluate three state of art black-box scanners that support detecting persistent SQL injection vulnerabilities. We developed our custom testbed "MatchIt" that tests the scanners capability in detecting persistent SQL injections. The results show that existing vulnerabilities are not detected even when these automated scanners are explicitly configured to exploit the vulnerability. The weaknesses of blackbox scanners identified reside in many areas: crawling web pages, input values and attack code selection, user registration and login, analysis of server replies and classification of findings. Because of the poor detection rate, we analyze the scanner's behavior and present a set of recommendations that could enhance the discovery of persistent SQL injection vulnerabilities.
---
paper_title: Guidelines for Discovering and Improving Application Security
paper_content:
This paper analyzes current threats in computer security for web-based applications with a SQL database. We conduct a penetration test in a real-case scenario of multiple attacks against the network, the web application and the SQL database. The test infrastructure includes two servers, a firewall and one machine that acts as an attacker's computer. Based on our empirical analysis we diagnose specific vulnerabilities and formulate best practices to improve security against common attacks. The article contributes to the discussion of state-of-the-art security techniques and illustrates the value of penetration testing for diagnosing attacks against specific technologies.
---
paper_title: Assessing and Comparing Vulnerability Detection Tools for Web Services: Benchmarking Approach and Examples
paper_content:
Selecting a vulnerability detection tool is a key problem that is frequently faced by developers of security-critical web services. Research and practice shows that state-of-the-art tools present low effectiveness both in terms of vulnerability coverage and false positive rates. The main problem is that such tools are typically limited in the detection approaches implemented, and are designed for being applied in very concrete scenarios. Thus, using the wrong tool may lead to the deployment of services with undetected vulnerabilities. This paper proposes a benchmarking approach to assess and compare the effectiveness of vulnerability detection tools in web services environments. This approach was used to define two concrete benchmarks for SQL Injection vulnerability detection tools. The first is based on a predefined set of web services, and the second allows the benchmark user to specify the workload that best portrays the specific characteristics of his environment. The two benchmarks are used to assess and compare several widely used tools, including four penetration testers, three static code analyzers, and one anomaly detector. Results show that the benchmarks accurately portray the effectiveness of vulnerability detection tools (in a relative manner) and suggest that the proposed benchmarking approach can be applied in the field.
---
paper_title: Why Johnny Can’t Pentest: An Analysis of Black-box Web Vulnerability Scanners
paper_content:
Black-box web vulnerability scanners are a class of tools that can be used to identify security issues in web applications. These tools are often marketed as "point-and-click pentesting" tools that automatically evaluate the security of web applications with little or no human support. These tools access a web application in the same way users do, and, therefore, have the advantage of being independent of the particular technology used to implement the web application. However, these tools need to be able to access and test the application's various components, which are often hidden behind forms, JavaScript-generated links, and Flash applications. This paper presents an evaluation of eleven black-box web vulnerability scanners, both commercial and open-source. The evaluation composes different types of vulnerabilities with different challenges to the crawling capabilities of the tools. These tests are integrated in a realistic web application. The results of the evaluation show that crawling is a task that is as critical and challenging to the overall ability to detect vulnerabilities as the vulnerability detection techniques themselves, and that many classes of vulnerabilities are completely overlooked by these tools, and thus research is required to improve the automated detection of these flaws.
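The crawling gap the paper measures can be illustrated with a minimal, standard-library-only crawl step: it sees anchors and form fields present in the delivered HTML, but anything generated by JavaScript, Flash, or reached only through multi-step forms stays invisible. The target URL is a placeholder, and the sketch is unrelated to the scanners evaluated in the paper.

```python
# Minimal crawl step with the standard library only: fetch one page, collect
# anchors and form inputs. Anything generated by JavaScript or hidden behind
# multi-step forms is invisible here, which is exactly the coverage gap the
# paper measures. The target URL is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen

class PageSurface(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self.forms, self.inputs = [], [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        elif tag == "form":
            self.forms.append((attrs.get("method", "get"), attrs.get("action", "")))
        elif tag in ("input", "textarea", "select") and attrs.get("name"):
            self.inputs.append(attrs["name"])

def surface(url):
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    parser = PageSurface()
    parser.feed(html)
    return parser.links, parser.forms, parser.inputs

links, forms, fields = surface("http://testapp.example/index.php")  # placeholder URL
print(len(links), "links,", len(forms), "forms, fields:", fields)
```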
---
paper_title: Towards a practical and effective security testing methodology
paper_content:
Security testing is an important step in the lifetime of both newly-designed and existing systems. Different methodologies exist to guide testers to the selection, design, and implementation of the most appropriate testing procedures for various contexts. Typically, each methodology stems from the specific needs of a particular category of actors, and consequently is biased towards some aspect of peculiar interest to them. This work compares the most commonly adopted methodologies to point out their strengths and weaknesses, and, building on the results of the performed analysis, proposes a path towards the definition of an integrated approach, by defining the characteristics that a new methodology should exhibit in order to combine the best aspects of the existing ones.
---
paper_title: Model-Based Testing for Functional and Security Test Generation
paper_content:
With testing, a system is executed with a set of selected stimuli and observed to determine whether its behavior conforms to the specification. Therefore, testing is a strategic activity at the heart of software quality assurance, and is today the principal validation activity in industrial contexts to increase confidence in the quality of systems. This paper, summarizing the six-hour lesson taught during the Summer School FOSAD’12, gives an overview of test data selection techniques and surveys the state of the art of Model-Based approaches for security testing.
---
paper_title: Enemy of the State: a State-aware Black-box Web Vulnerability Scanner
paper_content:
Black-box web vulnerability scanners are a popular choice for finding security vulnerabilities in web applications in an automated fashion. These tools operate in a point-and-shoot manner, testing any web application, regardless of the server-side language, for common security vulnerabilities. Unfortunately, black-box tools suffer from a number of limitations, particularly when interacting with complex applications that have multiple actions that can change the application's state. If a vulnerability analysis tool does not take into account changes in the web application's state, it might overlook vulnerabilities or completely miss entire portions of the web application. We propose a novel way of inferring the web application's internal state machine from the outside, that is, by navigating through the web application, observing differences in output, and incrementally producing a model representing the web application's state. We utilize the inferred state machine to drive a black-box web application vulnerability scanner. Our scanner traverses a web application's state machine to find and fuzz user-input vectors and discover security flaws. We implemented our technique in a prototype crawler and linked it to the fuzzing component from an open-source web vulnerability scanner. We show that our state-aware black-box web vulnerability scanner is able to not only exercise more code of the web application, but also discover vulnerabilities that other vulnerability scanners miss.
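A heavily simplified version of the paper's core observation follows: if replaying previously seen pages yields different output after a request, that request changed the application's state. The real scanner clusters pages and incrementally builds a full state machine; this sketch only reports which observed pages changed, and all URLs are placeholders.

```python
# Simplified version of the core idea: if previously seen pages render
# differently after a request, that request changed the application's state.
# The real scanner infers a whole state machine; this only detects
# "state changed / unchanged". fetch() targets and URLs are placeholders.
import hashlib
from urllib.request import urlopen

def fetch(url):
    return urlopen(url, timeout=10).read()

def fingerprint(urls):
    return {u: hashlib.sha256(fetch(u)).hexdigest() for u in urls}

def state_changed_by(action, observed_urls):
    before = fingerprint(observed_urls)
    action()                          # e.g. submit a form, add an item to a cart
    after = fingerprint(observed_urls)
    return [u for u in observed_urls if before[u] != after[u]]

# Example (placeholder URLs): does logging in change what /account shows?
pages = ["http://shop.example/", "http://shop.example/account"]
changed = state_changed_by(lambda: fetch("http://shop.example/login?u=a&p=b"), pages)
print("state-changing request" if changed else "no observable state change", changed)
```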
---
paper_title: A taxonomy of risk-based testing
paper_content:
Software testing often has to be done under severe pressure due to limited resources and a challenging time schedule, while facing the demand to assure the fulfillment of the software requirements. In addition, testing should unveil those software defects that harm the mission-critical functions of the software. Risk-based testing uses risk (re-)assessments to steer all phases of the test process to optimize testing efforts and limit risks of the software-based system. Due to its importance and high practical relevance, several risk-based testing approaches have been proposed in academia and industry. This paper presents a taxonomy of risk-based testing providing a framework to understand, categorize, assess, and compare risk-based testing approaches to support their selection and tailoring for specific purposes. The taxonomy is aligned with the consideration of risks in all phases of the test process and consists of the top-level classes risk drivers, risk assessment, and risk-based test process. The taxonomy of risk-based testing has been developed by analyzing the work presented in available publications on risk-based testing. Afterwards, it has been applied to the work on risk-based testing presented in this special section of the International Journal on Software Tools for Technology Transfer.
---
paper_title: Automatic Generation of Test Drivers for Model Inference of Web Applications
paper_content:
In the “Internet of Services” (IoS) vision of the Internet, applications are developed as services using web standards. Model-based testing combined with active model inference is one of the methods to test such applications largely automatically, in particular to look for vulnerabilities. But one part, the test driver, still needs to be written manually. It contains an abstraction of the real application and the methods to interact with the system at the abstract and concrete levels. We propose a generic abstraction of web applications and an approach to generate the corresponding test driver automatically, using a crawler to identify the needed information.
---
paper_title: SAGE: whitebox fuzzing for security testing
paper_content:
SAGE has had a remarkable impact at Microsoft.
---
paper_title: Risk-Based Vulnerability Testing Using Security Test Patterns
paper_content:
This paper introduces an original security testing approach guided by risk assessment, by means of risk coverage, to perform and automate vulnerability testing for Web applications. This approach, called Risk-Based Vulnerability Testing, adapts Model-Based Testing techniques, which are mostly used currently to address functional features. It also extends Model-Based Vulnerability Testing techniques by driving the testing process using security test patterns selected from risk assessment results. The adaptation of such techniques for Risk-Based Vulnerability Testing defines novel features in this research domain. In this paper, we describe the principles of our approach, which is based on a mixed modeling of the System Under Test: the model used for automated test generation captures some behavioral aspects of the Web applications, but also includes vulnerability test purposes to drive the test generation process.
---
paper_title: Leveraging User Interactions for In-Depth Testing of Web Applications
paper_content:
Over the last years, the complexity of web applications has grown significantly, challenging desktop programs in terms of functionality and design. Along with the rising popularity of web applications, the number of exploitable bugs has also increased significantly. Web application flaws, such as cross-site scripting or SQL injection bugs, now account for more than two thirds of the reported security vulnerabilities. Black-box testing techniques are a common approach to improve software quality and detect bugs before deployment. There exist a number of vulnerability scanners, or fuzzers, that expose web applications to a barrage of malformed inputs in the hope of identifying input validation errors. Unfortunately, these scanners often fail to test a substantial fraction of a web application's logic, especially when this logic is invoked from pages that can only be reached after filling out complex forms that aggressively check the correctness of the provided values. In this paper, we present an automated testing tool that can find reflected and stored cross-site scripting (XSS) vulnerabilities in web applications. The core of our system is a black-box vulnerability scanner. This scanner is enhanced by techniques that allow one to generate more comprehensive test cases and explore a larger fraction of the application. Our experiments demonstrate that our approach is able to test these programs more thoroughly and identify more bugs than a number of open-source and commercial web vulnerability scanners.
---
paper_title: SecuBat: a web vulnerability scanner
paper_content:
As the popularity of the web increases and web applications become tools of everyday use, the role of web security has been gaining importance as well. The last years have shown a significant increase in the number of web-based attacks. For example, there has been extensive press coverage of recent security incidents involving the loss of sensitive credit card information belonging to millions of customers. Many web application security vulnerabilities result from generic input validation problems. Examples of such vulnerabilities are SQL injection and Cross-Site Scripting (XSS). Although the majority of web vulnerabilities are easy to understand and to avoid, many web developers are, unfortunately, not security-aware. As a result, there exist many web sites on the Internet that are vulnerable. This paper demonstrates how easy it is for attackers to automatically discover and exploit application-level vulnerabilities in a large number of web applications. To this end, we developed SecuBat, a generic and modular web vulnerability scanner that, similar to a port scanner, automatically analyzes web sites with the aim of finding exploitable SQL injection and XSS vulnerabilities. Using SecuBat, we were able to find many potentially vulnerable web sites. To verify the accuracy of SecuBat, we picked one hundred interesting web sites from the potential victim list for further analysis and confirmed exploitable flaws in the identified web pages. Among our victims were well-known global companies and a finance ministry. Of course, we notified the administrators of vulnerable sites about potential security problems. More than fifty responded to request additional information or to report that the security hole was closed.
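The kind of check such scanners automate can be approximated in a few lines of standard-library Python: inject a uniquely marked payload into one query parameter and test whether it is reflected unescaped. The target URL and parameter name are placeholders, and this is a sketch of the general reflected-XSS probe, not of SecuBat's actual implementation (which also drives forms, handles encodings, and verifies exploitability).

```python
# Minimal reflected-XSS probe: send a uniquely marked payload wrapped in a
# <script> tag and check whether the response echoes it back unescaped.
# Real scanners also cover forms, many encodings and stored XSS.
# The target URL and parameter name are placeholders.
import uuid
from urllib.parse import urlencode
from urllib.request import urlopen

def probe_reflected_xss(base_url, param):
    marker = uuid.uuid4().hex
    payload = f"<script>alert('{marker}')</script>"
    url = f"{base_url}?{urlencode({param: payload})}"
    body = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    if payload in body:
        return "likely vulnerable (payload reflected verbatim)"
    if marker in body:
        return "reflected but altered -- needs manual review"
    return "not reflected"

print(probe_reflected_xss("http://testsite.example/search", "q"))
```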
---
paper_title: KameleonFuzz: evolutionary fuzzing for black-box XSS detection
paper_content:
Fuzz testing consists in automatically generating and sending malicious inputs to an application in order to hopefully trigger a vulnerability. Fuzzing entails such questions as: Where to fuzz? Which parameter to fuzz? Where to observe its effects? In this paper, we specifically address the questions: How to fuzz a parameter? How to observe its effects? To address these questions, we propose KameleonFuzz, a black-box Cross Site Scripting (XSS) fuzzer for web applications. KameleonFuzz can not only generate malicious inputs to exploit XSS, but also detect how close it is to revealing a vulnerability. The generation and evolution of malicious inputs is achieved with a genetic algorithm, guided by an attack grammar. A double taint inference, up to the browser parse tree, makes it possible to detect precisely whether an exploitation attempt succeeded. Our evaluation demonstrates no false positives and high XSS revealing capabilities: KameleonFuzz detects several vulnerabilities missed by other black-box scanners.
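The evolutionary part of this idea can be sketched as a toy genetic loop: candidate payloads are mutated and selected by how much attack structure survives a stand-in server-side filter. KameleonFuzz additionally uses an attack grammar and double taint inference up to the browser parse tree; none of that is reproduced here, and the filter, pieces and fitness function below are invented for illustration.

```python
# Toy evolutionary fuzzing loop (illustration only): payloads are scored by
# how much attack structure survives a stand-in server-side filter, then the
# fittest are kept and mutated. The filter and payload pieces are invented.
import random

random.seed(1)
PIECES = ["<script>", "alert(1)", "</script>", "<img src=x onerror=alert(1)>",
          "\"'><", "<svg onload=alert(1)>", "javascript:"]

def target_filter(payload):              # stand-in for the tested application
    return payload.replace("<script>", "").replace("</script>", "")

def fitness(payload):
    reflected = target_filter(payload)
    return sum(len(piece) for piece in PIECES if piece in reflected)

def mutate(payload):
    piece = random.choice(PIECES)
    return payload + piece if random.random() < 0.5 else piece + payload

population = [random.choice(PIECES) for _ in range(8)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                         # keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]

best = max(population, key=fitness)
print("best payload:", best)
print("what survives the filter:", target_filter(best))
```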
---
paper_title: Web application security assessment by fault injection and behavior monitoring
paper_content:
As a large and complex application platform, the World Wide Web is capable of delivering a broad range of sophisticated applications. However, many Web applications go through rapid development phases with extremely short turnaround time, making it difficult to eliminate vulnerabilities. Here we analyze the design of Web application security assessment mechanisms in order to identify poor coding practices that render Web applications vulnerable to attacks such as SQL injection and cross-site scripting. We describe the use of a number of software-testing techniques (including dynamic analysis, black-box testing, fault injection, and behavior monitoring), and suggest mechanisms for applying these techniques to Web applications. Real-world situations are used to test a tool we named the Web Application Vulnerability and Error Scanner (WAVES, an open-source project available at http://waves.sourceforge.net) and to compare it with other tools. Our results show that WAVES is a feasible platform for assessing Web application security.
---
paper_title: Evaluation of Intrusion Detection Systems in Virtualized Environments Using Attack Injection
paper_content:
The evaluation of intrusion detection systems (IDSes) is an active research area with many open challenges, one of which is the generation of representative workloads that contain attacks. In this paper, we propose a novel approach for the rigorous evaluation of IDSes in virtualized environments, with a focus on IDSes designed to detect attacks leveraging or targeting the hypervisor via its hypercall interface. We present hInjector, a tool for generating IDS evaluation workloads by injecting such attacks during regular operation of a virtualized environment. We demonstrate the application of our approach and show its practical usefulness by evaluating a representative IDS designed to operate in virtualized environments. The virtualized environment of the industry-standard benchmark SPECvirt_sc2013 is used as a testbed, whose drivers generate workloads representative of workloads seen in production environments. This work enables for the first time the injection of attacks in virtualized environments for the purpose of generating representative IDS evaluation workloads.
---
paper_title: An Efficient Black-box Technique for Defeating Web Application Attacks
paper_content:
Over the past few years, injection vulnerabilities have become the primary target for remote exploits. SQL injection, command injection, and cross-site scripting are some of the popular attacks that exploit these vulnerabilities. Taint-tracking has emerged as one of the most promising approaches for defending against these exploits, as it supports accurate detection (and prevention) of popular injection attacks. However, practical deployment of taint-tracking defenses has been hampered by a number of factors, including: (a) high performance overheads (often over 100%), (b) the need for deep instrumentation, which has the potential to impact application robustness and stability, and (c) specificity to the language in which an application is written. In order to overcome these limitations, we present a new technique in this paper called taint inference. This technique does not require any source-code or binary instrumentation of the application to be protected; instead, it operates by intercepting requests and responses from this application. For most web applications, this interception may be achieved using network layer interposition or library interposition. We then develop a class of policies called syntax- and taint-aware policies that can accurately detect and/or block most injection attacks. An experimental evaluation shows that our techniques are effective in detecting a broad range of attacks on applications written in multiple languages (including PHP, Java and C), and impose low performance overheads (below 5%).
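A rough sketch of the inference step is given below, assuming we can observe both the incoming request parameters and the SQL statement the application issues: a parameter is flagged when its value appears inside the statement and contributes SQL structure (quotes, comments, keywords) rather than plain data. The real system uses approximate substring matching and full syntax-aware policies; the example statement and parameters are made up.

```python
# Rough sketch of syntax-aware taint inference: if a request parameter value
# appears inside the SQL statement the application issues and it contributes
# SQL structure (quotes, comments, keywords) rather than plain data, flag it.
# The example statement and parameter values are invented.
import re

STRUCTURE = re.compile(r"('|--|;|\b(or|and|union|select)\b)", re.IGNORECASE)

def flags_injection(params, sql):
    suspicious = []
    for name, value in params.items():
        if value and value.lower() in sql.lower() and STRUCTURE.search(value):
            suspicious.append(name)
    return suspicious

params = {"user": "alice", "pass": "x' OR '1'='1"}
sql = "SELECT * FROM accounts WHERE user = 'alice' AND pass = 'x' OR '1'='1'"
print(flags_injection(params, sql))   # ['pass']
```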
---
| Title: Overview and Open Issues on Penetration Test
Section 1: Introduction
Description 1: Discuss the importance of security for companies and the role of penetration testing in mitigating security risks.
Section 2: Related Work
Description 2: Review existing studies on penetration testing, covering models, tools, and techniques discussed in the literature.
Section 3: Planning
Description 3: Outline the systematic mapping study (SMS) planning process, including scope, objectives, and research questions.
Section 4: Scope and Objective
Description 4: Define the focus of the SMS on identifying contributions related to penetration tests, aiming to provide an overview of models, methodologies, and tools.
Section 5: Question Structure
Description 5: Explain the PICO criteria used to structure the SMS and define the research questions.
Section 6: Research Process
Description 6: Detail the databases selected, the search terms used, and the inclusion and exclusion criteria for selecting studies.
Section 7: Conduction
Description 7: Describe the execution of the search process, including the periods of the SMS and the steps taken to select and assess papers.
Section 8: Search Databases
Description 8: Summarize the results of the database searches and the process of selecting and assessing the final set of studies.
Section 9: Study Quality Assessment
Description 9: Explain the quality assessment criteria applied to the selected studies to evaluate their relevance and reliability.
Section 10: Result Analysis: Classification Schemes
Description 10: Discuss how the selected studies were classified based on keywords, research type, contribution type, and penetration testing methodologies.
Section 11: Mapping
Description 11: Present a qualitative assessment of the literature, including distributions of target scenarios, tools, models, and methodologies in penetration testing.
Section 12: Threats to Validity
Description 12: Identify potential threats to the validity of the SMS and the measures taken to mitigate them.
Section 13: Discussion
Description 13: Discuss the answers to the research questions, covering tools, target scenarios, models, methodologies, and challenges in penetration testing.
Section 14: Lessons Learned and Future Directions
Description 14: Summarize lessons learned from the study and suggest future directions for research in penetration testing.
Section 15: Removing Vulnerabilities: Before Deployment
Description 15: Provide a discussion on approaches to removing vulnerabilities before deploying systems, focusing on design, development, and testing phases.
Section 16: Conclusion
Description 16: Conclude the paper by summarizing the main findings, contributions, and recommendations for penetration testing practices. |
A Survey on the Flexibility Requirements Related to Business Processes and Modeling Artifacts | 7 | ---
paper_title: Context-aware Process Design Exploring the Extrinsic Drivers for Process Flexibility
paper_content:
Research on process flexibility has traditionally explored alternative ways of considering flexibility during the design of a business process. The focus typically has been on how the demand for process flexibility can be satisfied by advanced process modeling techniques, i.e., issues intrinsic to the process. This paper proposes to extend current research by studying the extrinsic drivers for flexibility. These drivers can be found in the context of the process, which may include, among others, time, location, legislation, culture, performance requirements, etc. Exemplary scenarios for such extrinsic flexibility drivers will be discussed and preliminary thoughts on context-aware process design approaches will be shared. The paper ends with a proposed research agenda in this area.
---
paper_title: Flexible Support of Inter-Organizational Business Processes Using Web Services
paper_content:
Inter-organizational business processes which cross multiple organizations need new kinds of flexibility which are not properly supported by existing enactment architectures. Therefore, after analyzing these requirements, a new enactment architecture based on web services is developed. It uses a homogeneous and dynamic composition of so-called aspect-element-oriented web services to support inter-organizational business processes.
---
paper_title: A Role-Based Approach for Modeling Flexible Business Processes
paper_content:
As organisation environments become more complex, business process models have to provide means to suit the flexibility and adaptability requirements at any given time. A role-based approach for modelling business processes is a natural way to reflect organisational structures and to highlight responsibilities assigned to actors. The purpose of this paper is to improve this kind of approach in order to support flexible business process modelling. This can be done by introducing the concept of mission. In addition, to make the approach more flexible in changing organisational and functional contexts, we investigate issues related to the delegation and constraint aspects.
---
paper_title: An Enterprise Reference Scheme for Integrating Model Based Knowledge Engineering and Enterprise Modelling
paper_content:
In recent years the demand for business process modelling (BPM) has become apparent in many different communities. To provide a unifying framework for the different needs of enterprise modelling, we define an enterprise reference scheme and show how the development of knowledge-based systems can be incorporated in such a framework. From this framework, conclusions for tool support are drawn.
---
paper_title: The Viable System Model: Interpretations and Applications of Stafford Beer's VSM
paper_content:
Part 1, Concepts: the viable system model - its provenance, development, methodology and pathology (Stafford Beer); the need for formal development of the VSM (Ron Anderton); the VSM and Ashby's law as illuminants of historical management thought (Fred Waelchli); the VSM revisited (Raul Espejo). Part 2, Applications of the VSM: P.M. manufacturers - the VSM as a diagnostic tool (Raul Espejo); the organization of a fortress factory (R.A. Foss); application of the VSM to the trade training network in New Zealand (G.A. Britton and H. McCallion); application of the VSM to commercial broadcasting in the United States (Allenna Leonard); the evolution of a management cybernetics process (Stafford Beer); developing organizational competence in a business (Bengt A. Holmberg); strategic planning and management reorganization at an academic medical center - use of the VSM in guiding diagnosis and design (Michael U. Ben-Eli). Part 3, Methodology and epistemology: national government - disseminated regulation in real time, or "how to run a country" (Stafford Beer); a cybernetic method to study organizations (Raul Espejo); outside and then - an interpretative approach to the VSM (Roger J. Harnden). Part 4, Critical views: evaluating the managerial significance of the VSM (M.C. Jackson); the VSM - an ongoing conversation (Raul Espejo and Roger J. Harnden).
---
paper_title: Model Driven Architectures for Enterprise Information Systems
paper_content:
Over the past decade, continuous challenges have been made to traditional business practices. At the same time, organisations have also experienced the effects of the integration and evolution of information and communication technologies (ICT). Enterprise Information Systems (EIS) gained a new strategic support role as enablers of automation, monitoring, analysis and co-ordination of whole business functioning, a central role in the evolution of today's organisations. These rapidly changing situations create a critical need for realistic representations - called business models - of the current or future business situations, of what should be changed, and of the potential organisational impacts. This paper characterises the strong relationship existing between Business Models and EIS Architectures in a changing environment. Our main contribution is a set of roadmaps, which highlight the relationships between business process models and the requirements of EIS. These roadmaps provide guidance during the business modelling and the information system (IS) modelling processes.
---
paper_title: Object-Oriented Modeling and Design
paper_content:
1. Introduction. I. MODELING CONCEPTS. 2. Modeling as a Design Technique. 3. Object Modeling. 4. Advanced Object Modeling. 5. Dynamic Modeling. 6. Functional Modeling. II. DESIGN METHODOLOGY. 7. Methodology Preview. 8. Analysis. 9. System Design. 10. Object Design. 11. Methodology Summary. 12. Comparison of Methodologies. III. IMPLEMENTATION. 13. From Design to Implementation. 14. Programming Style. 15. Object-Oriented Languages. 16. Non-Object-Oriented Languages. 17. Databases. 18. Object Diagram Compiler. 19. Computer Animation. 20. Electrical Distribution Design System. 21. Future of Object-Oriented Technology. Appendix A: OMT Graphical Notation. Appendix B: Glossary. Index.
---
paper_title: A benchmarking framework for methods to design flexible business processes
paper_content:
The assumption made in this article is that flexible processes require specific design methods. The choice of a method for modelling flexible processes depends on many criteria and situations that we gathered in a benchmarking framework. The user can use it as a decision support tool to choose the appropriate method in order to design flexible business processes in a given project situation. This framework also includes managerial concerns such as the time and the budget of the project. We use three enterprise modelling techniques to illustrate how to use the proposed framework.
---
paper_title: Advanced Topics in Workflow Management: Issues, Requirements, and Solutions
paper_content:
This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in object-oriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.
---
paper_title: STATEMATE: a working environment for the development of complex reactive systems
paper_content:
This paper provides a brief overview of the STATEMATE system, constructed over the past three years by i-Logix Inc. and Ad Cad Ltd. STATEMATE is a graphical working environment intended for the specification, analysis, design and documentation of large and complex reactive systems, such as real-time embedded systems, control and communication systems, and interactive software. It enables a user to prepare, analyze and debug diagrammatic, yet precise, descriptions of the system under development from three inter-related points of view, capturing structure, functionality and behavior. These views are represented by three graphical languages, the most intricate of which is the language of statecharts, used to depict reactive behavior over time. In addition to the use of statecharts, the main novelty of STATEMATE is in the fact that it "understands" the entire descriptions perfectly, to the point of being able to analyze them for crucial dynamic properties, to carry out rigorous animated executions and simulations of the described system, and to create running code automatically. These features are invaluable when it comes to the quality and reliability of the final outcome.
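At its core, the reactive-behavior view rests on states plus event-triggered transitions; statecharts add hierarchy, orthogonal regions and broadcast events on top of that. The minimal flat state machine below is only meant to illustrate that base idea; the traffic-light states and event names are invented and have nothing to do with STATEMATE's languages or tooling.

```python
# Minimal flat state machine sketch (statecharts add hierarchy, orthogonal
# regions and broadcast events on top of this). A reactive system is modeled
# as states plus event-triggered transitions; the traffic-light example and
# event names are invented for illustration.
TRANSITIONS = {
    ("red", "timer"): "green",
    ("green", "timer"): "yellow",
    ("yellow", "timer"): "red",
    ("green", "pedestrian_button"): "yellow",
}

def react(state, event):
    return TRANSITIONS.get((state, event), state)   # unknown events are ignored

state = "red"
for event in ["timer", "pedestrian_button", "timer", "timer"]:
    state = react(state, event)
    print(event, "->", state)
# timer -> green, pedestrian_button -> yellow, timer -> red, timer -> green
```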
---
paper_title: Knowledge-Based Techniques to Increase the Flexibility of Workflow Management
paper_content:
This paper describes how knowledge-based techniques can be used to overcome problems of workflow management in engineering applications. Using explicit process and product models as a basis for a workflow interpreter allows us to alternate planning and execution steps, resulting in an increased flexibility of project coordination and enactment. To gain the full advantages of this flexibility, change processes have to be supported by the system. These require an improved traceability of decisions and have to be based on dependency management and change notification mechanisms. Our methods and techniques are illustrated by two applications: urban land-use planning and software process modeling.
---
paper_title: Enforcement vs. freedom of action: an integrated approach to flexible workflow enactment
paper_content:
The advantages of today's process management, such as efficiency and quality aspects, are achieved by enforcing detailed models of work processes. But real world processes can be planned only to a limited degree and sometimes demand changing already planned parts of the process. So additional, unforeseen activities have to take place in a process. This contradictory situation will here be tackled by an approach that combines different forms of enforcement of planned parts of a process. The explicit modelling of these different support strategies allows them to be changed if demanded by the situation. An extended enactment strategy gives workers the opportunity to negotiate about changing the current support strategy. So there is some threshold to deviate from the planned process because additional actions are required.
---
paper_title: Design rationale: the argument behind the artifact
paper_content:
We assert that the product of user interface design should be not only the interface itself but also a rationale for why the interface is the way it is. We describe a representation for design based around a semi-formal notation which allows us explicitly to represent alternative design options and reasons for choosing among them. We illustrate the approach with examples from an analysis of scrolling mechanisms. We discuss the roles we expect such a representation to play in improving the coherence of designs and in communicating reasons for choices to others, whether designers, maintainers, collaborators or end users.
---
paper_title: Case Handling in Construction
paper_content:
Case handling is a new means for supporting flexible and knowledge intensive business processes. Unlike workflow management, which uses predefined process control structures to determine what should be done during a workflow process, case handling focuses on what can be done to achieve a business goal. In this paper, case handling is introduced as a new possibility for supporting construction processes. The construction of buildings and related facilities is a difficult and complex process, which requires both support and flexibility. This paper describes the application of the case-handling principles within Heijmans. Heijmans is one of the leading companies in the Dutch building industry and is interested in IT support for their construction processes. We have used the case-handling system FLOWer to provide automated support for preparing the construction of complex installations. In this paper, we report our experiences.
---
paper_title: Change Patterns and Change Support Features in Process-Aware Information Systems
paper_content:
A high voltage and high energy device for storing energy, comprising storage elements (4) disposed in concentric ring arrangements on insulating separating elements (5), the storage elements of one ring arrangement being connected by one of their ends to the storage elements of the preceding ring arrangement, and by their other end to the storage elements of the following ring arrangement, the assembly being mounted on a core of hard insulating material provided with a central bore and being embedded in a coating of semi-flexible material.
---
paper_title: A Workflow-Oriented System Architecture for the Management of Container Transportation
paper_content:
In this paper, we introduce a workflow-oriented system architecture for the processing of client requests (CRs) for container transportation. In the context of multi-transfer container transportation, the processing of CRs can be achieved by specific sequences of interdependent activities. These sequences need to be created just in time. They also need to be adapted to deal with unexpected events that may occur. Workflow technology is used to model and to manage the processing of CRs. The creation and adaptation of activity sequences require, first, an optimized scheduling of a limited number of resources (while also respecting CR constraints) and, second, a number of special workflow concepts and functions to correctly manage activity sequences. Optimization models are involved to take care of the resource management and of the activity scheduling. Enhancements of workflow concepts and functionality for workflow management systems are investigated to deal with activity sequence creation and adaptation. Finally, the proposed architecture includes a rule processing part to reduce the time-consuming manual interaction with the system.
---
paper_title: Specification and validation of process constraints for flexible workflows
paper_content:
Workflow systems have traditionally focused on so-called production processes, which are characterized by predefinition, high volume, and repetitiveness. Recently, the deployment of workflow systems in non-traditional domains such as collaborative applications, e-learning and cross-organizational process integration has put forth new requirements for flexible and dynamic specification. However, this flexibility cannot be offered at the expense of control, a critical requirement of business processes. In this paper, we will present a foundation set of constraints for flexible workflow specification. These constraints are intended to provide an appropriate balance between flexibility and control. The constraint specification framework is based on the concept of "pockets of flexibility", which allows ad hoc changes and/or building of workflows for highly flexible processes. Basically, our approach is to provide the ability to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. The verification of dynamically built models is essential. Whereas ensuring that the model conforms to specified constraints does not pose great difficulty, ensuring that the constraint set itself does not carry conflicts and redundancy is an interesting and challenging problem. In this paper, we will provide a discussion on both the static and dynamic verification aspects. We will also briefly present Chameleon, a prototype workflow engine that implements these concepts.
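One way to picture such a constraint-based specification is the toy check below: an instance is an activity sequence composed at runtime and is accepted only if it satisfies the declared constraints. The three constraint helpers loosely mirror selection, build and termination classes, but the activity names and rules are invented, and the sketch is far simpler than the paper's framework or the Chameleon engine.

```python
# Toy "pocket of flexibility": an instance is an activity sequence composed at
# runtime, and is only accepted if it satisfies declared constraints. The
# constraint kinds loosely mirror selection / build / termination classes;
# activity names and rules are invented.
def contains(activity):                      # selection: activity must occur
    return lambda seq: activity in seq

def before(first, second):                   # build: ordering restriction
    return lambda seq: (first not in seq or second not in seq
                        or seq.index(first) < seq.index(second))

def ends_with(activity):                     # termination condition
    return lambda seq: bool(seq) and seq[-1] == activity

CONSTRAINTS = [
    contains("assess_claim"),
    before("assess_claim", "pay_out"),
    ends_with("close_case"),
]

def valid(seq):
    return all(check(seq) for check in CONSTRAINTS)

print(valid(["register", "assess_claim", "pay_out", "close_case"]))  # True
print(valid(["register", "pay_out", "assess_claim", "close_case"]))  # False
```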
---
paper_title: Goal-Oriented Business Process Modeling with EPCs and Value-Focused Thinking
paper_content:
Goal-oriented business process modeling is driven by the need to ensure congruence of business processes and decisions with the values and vision of the business while meeting continuous demands for increased business productivity. However, existing business process modeling tools fail to address effectiveness and efficiency concerns in an integrated manner. Building upon the previous research by the authors aimed at addressing this gap through integration of process and decision modeling, in this paper, the links between process and decision modeling domains are formalized using a common semantic model that provides the bridge for future development of integrated tools.
---
paper_title: Evaluation of correctness criteria for dynamic workflow changes
paper_content:
The capability to dynamically adapt in-progress workflows (WF) is an essential requirement for any workflow management system (WfMS). This fact has been recognized by the WF community for a long time and different approaches in the area of adaptive workflows have been developed so far. They either enable WF type changes and their propagation to in-progress WF instances or (ad-hoc) changes of single WF instances. Thus, at first glance, many of the major problems related to dynamic WF changes seem to be solved. However, this picture changes when digging deeper into the approaches and considering implementation and usability issues as well. This paper presents important criteria for the correct adaptation of running workflows and analyzes how current approaches satisfy them. In doing so, we demonstrate the strengths of the different approaches and provide additional solutions to overcome current limitations. These solutions comprise comprehensive correctness criteria as well as migration rules for change realization.
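A common ingredient of such correctness criteria can be illustrated with a toy compliance check: a running instance may migrate only if the trace it has already executed is still producible by the new schema. The sketch below reduces a schema to a "which activity may follow which" map, which is a drastic simplification (real criteria also consider data, state, and loop semantics); all activity names are invented.

```python
# Toy compliance check in the spirit of common migration criteria: a running
# instance may migrate if its already-executed trace is still producible by
# the new process schema. Schemas are simplified to "which activity may
# follow which"; real criteria also cover data, state and loops.
def compliant(trace, schema, start="START"):
    current = start
    for activity in trace:
        if activity not in schema.get(current, set()):
            return False
        current = activity
    return True

# 'approve' was inserted between 'check' and 'ship' in the changed schema.
new_schema = {"START": {"receive"}, "receive": {"check"},
              "check": {"approve"}, "approve": {"ship"}}

print(compliant(["receive", "check"], new_schema))          # True  -> may migrate
print(compliant(["receive", "check", "ship"], new_schema))  # False -> stays on old schema
```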
---
paper_title: How to handle dynamic change and capture management information? An approach based on generic workflow models
paper_content:
Today's workflow management systems have problems dealing with both ad-hoc changes and evolutionary changes. As a result, the workflow management system is not used to support dynamically changing workflow processes, or the workflow process is supported in a rigid manner, i.e., changes are not allowed or are handled outside of the workflow management system. This paper addresses two notorious problems related to adaptive workflow: (1) providing management information at the right aggregation level, and (2) supporting dynamic change, i.e., migrating cases from an old to a new workflow. These two problems are tackled by using generic process models. A generic process model describes a family of variants of the same workflow process. To relate members of a family of workflow processes we propose notions of inheritance. These notions of inheritance are used to address the two problems mentioned, both at design time and at run time.
---
paper_title: A Knowledge-based Approach to Handling Exceptions in Workflow Systems
paper_content:
This paper describes a novel knowledge-based approach for helping workflow process designers and participants better manage the exceptions (deviations from an ideal collaborative work process caused by errors, failures, resource or requirements changes, etc.) that can occur during the enactment of a workflow. This approach is based on exploiting a generic and reusable body of knowledge concerning what kinds of exceptions can occur in collaborative work processes, and how these exceptions can be handled (detected, diagnosed and resolved). This work builds upon previous efforts from the MIT Process Handbook project and from research on conflict management in collaborative design.
---
paper_title: Dynamic workflow schema evolution based on workflow type versioning and workflow migration
paper_content:
An important yet open problem in workflow management is the evolution of workflow schemas, i.e., the creation, deletion and modification of workflow types in such a way that the schema remains correct. This problem is aggravated when instances of modified workflow types are active at the time of modification because any workflow instance has to conform to the definition of its type. The paper presents a framework for dynamic workflow schema evolution that is based on workflow type versioning and workflow migration. Workflow types can be versioned, and a new version can be derived from an existing one by applying modification operations. Workflow type versions allow us to handle active instances in an elegant way whenever a schema is modified. If possible, an affected workflow instance is migrated to the new version of its type. Otherwise, it continues to execute under its old type. We introduce correctness criteria that must be met by workflow schemas and workflow schema modification operations. We also define under which conditions the migration of workflow instances to new workflow type versions is allowed.
---
paper_title: Formal foundation and conceptual design of dynamic adaptations in a workflow management system
paper_content:
While the different aspects of flexible workflow management are still under discussion, the ability to adapt the structure of running workflow instances to modified workflow schemas is an important property of a flexible workflow management system. In this paper, we present the formal foundation and conceptual design of dynamic adaptations in an object-oriented workflow management system. We describe in some detail how workflow schemas are represented. The system architecture, based on the CORBA object-oriented middleware, is overviewed, and the implementation of dynamic adaptations is sketched. An example introduces the graphical user interface of the system and shows a dynamic adaptation.
---
paper_title: Adaptive workflow - On the interplay between flexibility and support
paper_content:
Today’s information systems do not support adaptive workflow: either the information system abstracts from the workflow processes at hand and focuses on the management of data and the execution of individual tasks via applications or the workflow is supported by the information system but it is hard to handle changes. This paper addresses this problem by classifying the types of changes. Based on this classification, issues such as syntactic/semantic correctness, case transfer, and management information are discussed. It turns out that the trade-off between flexibility and support raises challenging questions. Only some of these questions are answered in this paper; most of them require further research. Since the success of the next generation of workflow management systems depends on the ability to support adaptive workflow, it is important to provide answers for the questions raised in this paper.
---
paper_title: A Constraint Specification Approach to Building Flexible Workflows
paper_content:
Process support systems, such as workflows, are being used in a variety of domains. However, most areas of application have focused on traditional production-style processes, which are characterised by predictability and repetitiveness. Application in non-traditional domains with highly flexible processes is still largely unexplored. Such flexible processes are characterised by a lack of ability to completely predefine them and/or an explosive number of alternatives. Accordingly, we define flexibility as the ability of the process to execute on the basis of a partially defined model where the full specification is made at runtime and may be unique to each instance. In this paper, we will present an approach to building workflow models for such processes. We will present our approach in the context of a non-traditional domain for workflow deployment, namely degree programs in tertiary institutes. The primary motivation behind our approach is to provide the ability to model flexible processes without introducing non-standard modelling constructs. This ensures that the correctness and verification of the language is preserved. We propose to build workflow schemas from a standard set of modelling constructs and given process constraints. We identify the fundamental requirements for constraint specification and classify them into selection, termination and build constraints. We will detail the specification of these constraints in a relational model. Finally, we will demonstrate the dynamic building of instance-specific workflow models on the basis of these constraints.
---
paper_title: Achieving Workflow Flexibility through Taming the Chaos
paper_content:
Traditionally, flexibility in workflow is introduced by moving from the rigid predefined control flow to permitting alternative patterns. The paper proposes a reverse approach to achieving flexibility, namely to start with chaos and then impose restrictions. This approach employs an untraditional view of the business process, which is regarded not as a “flow of work” but as a trajectory in the space of all possible states. The execution control in the proposed approach is realized via the notion of a valid state, where a state includes activities currently planned for the given process. The flexibility is achieved by dividing the rules of planning into three categories: obligations, prohibitions, and recommendations.
---
paper_title: Coo-flow: A process technology to support cooperative processes
paper_content:
In this paper we present a process management technology for the coordination of creative and large-scale distributed processes. Our approach is the result of usage analysis in domains like Software Development, Architecture/Engineering/Construction, and e-Learning processes. The basic conclusions of these experiments are the following: (1) cooperative processes are described in the same way as production processes, but these descriptions are interpreted in a different way depending on the nature of the process, (2) the interpretation of process descriptions depends mainly on the required flexibility of control flow and of data flow, and on the relationship between them, (3) the management of intermediate results is a central feature for supporting the cooperation inherent to these processes. COO-flow is a process technology that results from these studies. It is based on two complementary contributions: anticipation, which allows succeeding activities to cooperate, and COO-transactions, which allow parallel activities to cooperate. This paper introduces COO-flow characteristics, gives a (partial) formalization and briefly discusses its Web implementation.
---
| Title: A Survey on the Flexibility Requirements Related to Business Processes and Modeling Artifacts
Section 1: Introduction
Description 1: Introduce the evolving business environment and the need for flexibility in business processes and modeling artifacts.
Section 2: Business process modeling: a survey
Description 2: Present a survey on business process modeling, including different perspectives, formalisms, and commercial offers.
Section 3: A classification of business process modeling perspectives
Description 3: Classify existing approaches to enterprise knowledge modeling into various perspectives and explain their relevance.
Section 4: Modeling formalisms
Description 4: Discuss various process modeling formalisms and their appropriateness for flexible process modeling.
Section 5: Position of workflow softwares for modeling and enacting processes
Description 5: Analyze the position of workflow management systems (WFMSs) and their effectiveness in modeling and enacting processes.
Section 6: Discussion
Description 6: Highlight key observations and insights from the survey related to BP modeling and flexibility requirements.
Section 7: Flexible process modeling and controlling: State of the Art
Description 7: Explore the state of the art in flexible and adaptive workflow approaches and their properties.
Section 8: Conclusion and future work
Description 8: Summarize the findings of the survey and propose directions for future research on business process flexibility and modeling artifacts. |
What is Bioinformatics? A Proposed Definition and Overview of the Field | 18 |
paper_title: The Protein Data Bank
paper_content:
The Protein Data Bank (PDB; http://www.rcsb.org/pdb/ ) is the single worldwide archive of structural data of biological macromolecules. This paper describes the goals of the PDB, the systems in place for data deposition and access, how to obtain further information, and near-term plans for the future development of the resource.
---
paper_title: Whole-genome random sequencing and assembly of Haemophilus influenzae Rd
paper_content:
An approach for genome analysis based on sequencing and assembly of unselected pieces of DNA from the whole chromosome has been applied to obtain the complete nucleotide sequence (1,830,137 base pairs) of the genome from the bacterium Haemophilus influenzae Rd. This approach eliminates the need for initial mapping efforts and is therefore applicable to the vast array of microbial species for which genome maps are unavailable. The H. influenzae Rd genome sequence (Genome Sequence DataBase accession number L42023) represents the only complete genome sequence from a free-living organism.
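For intuition about what whole-genome random ("shotgun") sequencing and assembly involves computationally, here is a toy greedy overlap-merging sketch. It is purely illustrative (error-free reads, exact overlaps, no repeats) and is not the assembly pipeline used for H. influenzae; the function names and the minimum-overlap parameter are assumptions.

```python
# Toy greedy assembler (illustration only): repeatedly merge the two reads
# with the largest exact suffix-prefix overlap until no overlap of at least
# `min_olap` bases remains. Real assemblers handle errors, repeats and scale.

def overlap(a, b, min_olap):
    """Length of the longest suffix of `a` equal to a prefix of `b`."""
    best = 0
    for k in range(min_olap, min(len(a), len(b)) + 1):
        if a.endswith(b[:k]):
            best = k
    return best

def greedy_assemble(reads, min_olap=3):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b, min_olap)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:          # no sufficient overlap left: stop merging
            break
        merged = reads[i] + reads[j][k:]
        reads = [r for n, r in enumerate(reads) if n not in (i, j)] + [merged]
    return reads

print(greedy_assemble(["AGCTTTTCA", "TTTCATTCTGA", "CTGACTGGAAA"]))
# -> ['AGCTTTTCATTCTGACTGGAAA'] for this error-free toy example
```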
---
paper_title: It's sink or swim as a tidal wave of data approaches
paper_content:
Enormous amounts of data are being amassed in fields as diverse as genomics and astronomy. If this information is to be used effectively to speed the pace of discovery, scientists need new ways of working. This requires investment in computers, new statistical tools, and a liberal approach to data sharing.
---
paper_title: The SWISS-PROT protein sequence database and its supplement TrEMBL in 2000
paper_content:
SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation (such as the description of the function of a protein, its domains structure, post-translational modifications, variants, etc.), a minimal level of redundancy and high level of integration with other databases. Recent developments of the database include format and content enhancements, cross-references to additional databases, new documentation files and improvements to TrEMBL, a computer-annotated supplement to SWISS-PROT. TrEMBL consists of entries in SWISS-PROT-like format derived from the translation of all coding sequences (CDSs) in the EMBL Nucleotide Sequence Database, except the CDSs already included in SWISS-PROT. We also describe the Human Proteomics Initiative (HPI), a major project to annotate all known human sequences according to the quality standards of SWISS-PROT. SWISS-PROT is available at: http://www.expasy.ch/sprot/ and http://www.ebi.ac.uk/swissprot/
---
paper_title: Protein-DNA interactions: a structural analysis
paper_content:
A detailed analysis of the DNA-binding sites of 26 proteins is presented using data from the Nucleic Acid Database (NDB) and the Protein Data Bank (PDB). Chemical and physical properties of the protein-DNA interface, such as polarity, size, shape, and packing, were analysed. The DNA-binding sites shared common features, comprising many discontinuous sequence segments forming hydrophilic surfaces capable of direct and water-mediated hydrogen bonds. These interface sites were compared to those of protein-protein binding sites, revealing them to be more polar, with many more intermolecular hydrogen bonds and buried water molecules than the protein-protein interface sites. By looking at the number and positioning of protein residue-DNA base interactions in a series of interaction footprints, three modes of DNA binding were identified (single-headed, double-headed and enveloping). Six of the eight enzymes in the data set bound in the enveloping mode, with the protein presenting a large interface area effectively wrapped around the DNA. A comparison of structural parameters of the DNA revealed that some values for the bound DNA (including twist, slide and roll) were intermediate between those observed for the unbound B-DNA and A-DNA. The distortion of bound DNA was evaluated by calculating a root-mean-square deviation on fitting to a canonical B-DNA structure. Major distortions were commonly caused by specific kinks in the DNA sequence, some resulting in the overall bending of the helix. The helix bending affected the dimensions of the grooves in the DNA, allowing the binding of protein elements that would otherwise be unable to make contact. From this structural analysis, a preliminary set of rules that govern the bending of the DNA in protein-DNA complexes is proposed.
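A minimal sketch of the kind of geometric bookkeeping behind such an interface analysis: given atom coordinates, report the protein residues with at least one atom within a distance cutoff of a DNA atom. The input format, the 3.9 Å cutoff and the example coordinates are invented; a real analysis parses PDB/NDB files and distinguishes hydrogen bonds and water-mediated contacts from other interactions.

```python
# Minimal interface-contact sketch (illustrative, not the paper's protocol):
# atoms are (label, x, y, z); a protein residue is counted as part of the
# DNA-binding site if any of its atoms lies within `cutoff` angstroms of a
# DNA atom.
from math import dist

def interface_residues(protein_atoms, dna_atoms, cutoff=3.9):
    contacts = set()
    for res, *p in protein_atoms:
        for _, *d in dna_atoms:
            if dist(p, d) <= cutoff:
                contacts.add(res)
                break
    return sorted(contacts)

protein = [("ARG54", 1.0, 0.0, 0.0), ("GLU12", 9.0, 9.0, 9.0)]   # invented coordinates
dna     = [("G5",    2.5, 1.2, 0.3), ("C6",   12.0, 0.0, 0.0)]
print(interface_residues(protein, dna))   # ['ARG54']
```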
---
paper_title: Functional Characterization of the S. cerevisiae Genome by Gene Deletion and Parallel Analysis
paper_content:
The functions of many open reading frames (ORFs) identified in genome-sequencing projects are unknown. New, whole-genome approaches are required to systematically determine their function. A total of 6925 Saccharomyces cerevisiae strains were constructed, by a high-throughput strategy, each with a precise deletion of one of 2026 ORFs (more than one-third of the ORFs in the genome). Of the deleted ORFs, 17 percent were essential for viability in rich medium. The phenotypes of more than 500 deletion strains were assayed in parallel. Of the deletion strains, 40 percent showed quantitative growth defects in either rich or minimal medium.
---
paper_title: RegulonDB (version 3.0): transcriptional regulation and operon organization in Escherichia coli K-12
paper_content:
RegulonDB is a database on transcription regulation and operon organization in Escherichia coli. The current version describes regulatory signals of transcription initiation, promoters, regulatory binding sites of specific regulators, ribosome binding sites and terminators, as well as information on genes clustered in operons. These specific annotations have been gathered from a constant search in the literature, as well as based on computational sequence predictions. The genomic coordinates of all these objects in the E. coli K-12 chromosome are clearly indicated. Every known object has a link to at least one MEDLINE reference. We have also added direct links to recent expression data of E. coli K-12. The version presented here has important modifications both in the structure of the database, as well as in the amount and type of information encoded in the database. RegulonDB can be accessed on the web at URL: http://www.cifn.unam.mx/Computational_Biology/regulondb/
---
paper_title: Metabolism and evolution of Haemophilus influenzae deduced from a whole-genome comparison with Escherichia coli
paper_content:
Background: The 1.83 Megabase (Mb) sequence of the Haemophilus influenzae chromosome, the first completed genome sequence of a cellular life form, has been recently reported. Approximately 75% of the 4.7 Mb genome sequence of Escherichia coli is also available. The life styles of the two bacteria are very different – H. influenzae is an obligate parasite that lives in human upper respiratory mucosa and can be cultivated only on rich media, whereas E. coli is a saprophyte that can grow on minimal media. A detailed comparison of the protein products encoded by these two genomes is expected to provide valuable insights into bacterial cell physiology and genome evolution. Results: We describe the results of computer analysis of the amino-acid sequences of 1703 putative proteins encoded by the complete genome of H. influenzae. We detected sequence similarity to proteins in current databases for 92% of the H. influenzae protein sequences, and at least a general functional prediction was possible for 83%. A comparison of the H. influenzae protein sequences with those of 3010 proteins encoded by the sequenced 75% of the E. coli genome revealed 1128 pairs of apparent orthologs, with an average of 59% identity. In contrast to the high similarity between orthologs, the genome organization and the functional repertoire of genes in the two bacteria were remarkably different. The smaller genome size of H. influenzae is explained, to a large extent, by a reduction in the number of paralogous genes. There was no long range colinearity between the E. coli and H. influenzae gene orders, but over 70% of the orthologous genes were found in short conserved strings, only about half of which were operons in E. coli. Superposition of the H. influenzae enzyme repertoire upon the known E. coli metabolic pathways allowed us to reconstruct similar and alternative pathways in H. influenzae and provides an explanation for the known nutritional requirements. Conclusion: By comparing proteins encoded by the two bacterial genomes, we have shown that extensive gene shuffling and variation in the extent of gene paralogy are major trends in bacterial evolution; this comparison has also allowed us to deduce crucial aspects of the largely uncharacterized metabolism of H. influenzae.
---
paper_title: Expression profiling using cDNA microarrays
paper_content:
cDNA microarrays are capable of profiling gene expression patterns of tens of thousands of genes in a single experiment. DNA targets, in the form of 3′ expressed sequence tags (ESTs), are arrayed onto glass slides (or membranes) and probed with fluorescent- or radioactively-labelled cDNAs. Here, we review technical aspects of cDNA microarrays, including the general principles, fabrication of the arrays, target labelling, image analysis and data extraction, management and mining.
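Image analysis and data extraction ultimately reduce each spot to a background-corrected expression ratio. A minimal sketch of that reduction for a two-channel array follows; the intensity values and the median-centring normalisation are illustrative assumptions, not the protocol described in the paper.

```python
# Minimal two-channel spot reduction (illustrative): background-subtract each
# channel, take log2(red/green), then centre the ratios on their median so
# that most genes are treated as unchanged. The numbers are invented.
import math
from statistics import median

def log_ratios(spots):
    raw = {}
    for gene, r_fg, r_bg, g_fg, g_bg in spots:
        red, green = max(r_fg - r_bg, 1.0), max(g_fg - g_bg, 1.0)  # background subtraction
        raw[gene] = math.log2(red / green)
    med = median(raw.values())              # centre ratios so most genes look unchanged
    return {gene: m - med for gene, m in raw.items()}

spots = [("YFG1", 5200, 300, 1200, 280),    # (gene, Cy5 fg, Cy5 bg, Cy3 fg, Cy3 bg)
         ("YFG2", 900, 250, 980, 260),
         ("YFG3", 400, 300, 2100, 310)]
print(log_ratios(spots))
```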
---
paper_title: Comprehensive Analysis of Hydrogen Bonds in Regulatory Protein DNA-Complexes: In Search of Common Principles
paper_content:
A systematic analysis of hydrogen bonds between regulatory proteins and their DNA targets is presented, based on 28 crystallographically solved complexes. All possible hydrogen bonds were screened and classified into different types: those that involve the amino acid side-chains and DNA base edges and those that involve the backbone atoms of the molecules. For each interaction type, all bonds were characterized and a statistical analysis was performed to reveal significant amino acid-base interdependence. The interactions between the amino acid side-chains and DNA backbone constitute about half of the interactions, but did not show any amino acid-base correlation. Interactions via the protein backbone were also observed, predominantly with the DNA backbone. As expected, the most significant pairing preference was demonstrated for interactions between the amino acid side-chains and the DNA base edges. The statistically significant relationships could mostly be explained by the chemical nature of the participants. However, correlations that could not be trivially predicted from the hydrogen bonding potential of the residues were also identified, like the preference of lysine for guanine over adenine, or the preference of glutamic acid for cytosine over adenine. While Lys×G interactions were very frequent and spread over various families, the Glu×C interactions were found mainly in the basic helix-loop-helix family. Further examination of the side-chain-base edge contacts at the atomic level revealed a trend of the amino acids to contact the DNA by their donor atoms, preferably at position W2 in the major groove. In most cases it seems that the interactions are not guided simply by the presence of a required atom in a specific position in the groove, but that the identity of the base possessing this atom is crucial. This may have important implications in molecular design experiments.
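The pairing-preference statistics can be illustrated with a small log-odds calculation: compare how often an amino acid-base contact is observed with what the marginal frequencies alone would predict. The contact counts below are invented for illustration; real values must be tabulated from the curated complexes.

```python
# Sketch of a pairing-preference statistic (counts are invented, not the
# paper's data): compare the observed frequency of each amino acid-base
# contact with the product of the marginal frequencies and report the log odds.
import math
from collections import Counter

contacts = Counter({("LYS", "G"): 30, ("LYS", "A"): 8,
                    ("GLU", "C"): 12, ("GLU", "A"): 3,
                    ("ASN", "A"): 15, ("ARG", "G"): 40})

def pairing_log_odds(contacts):
    total = sum(contacts.values())
    aa, base = Counter(), Counter()
    for (a, b), n in contacts.items():
        aa[a] += n
        base[b] += n
    scores = {}
    for (a, b), n in contacts.items():
        observed = n / total
        expected = (aa[a] / total) * (base[b] / total)   # independence assumption
        scores[(a, b)] = math.log(observed / expected)
    return scores

for pair, s in sorted(pairing_log_odds(contacts).items(), key=lambda x: -x[1]):
    print(pair, round(s, 2))   # positive values indicate over-represented pairings
```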
---
paper_title: Dissecting the Regulatory Circuitry of a Eukaryotic Genome
paper_content:
Genome-wide expression analysis was used to identify genes whose expression depends on the functions of key components of the transcription initiation machinery in yeast. Components of the RNA polymerase II holoenzyme, the general transcription factor TFIID, and the SAGA chromatin modification complex were found to have roles in expression of distinct sets of genes. The results reveal an unanticipated level of regulation which is superimposed on that due to gene-specific transcription factors, a novel mechanism for coordinate regulation of specific sets of genes when cells encounter limiting nutrients, and evidence that the ultimate targets of signal transduction pathways can be identified within the initiation apparatus.
---
paper_title: The Relationship between Protein Structure and Function : a Comprehensive Survey with Application to the Yeast Genome
paper_content:
For most proteins in the genome databases, function is predicted via sequence comparison. In spite of the popularity of this approach, the extent to which it can be reliably applied is unknown. We address this issue by systematically investigating the relationship between protein function and structure. We focus initially on enzymes functionally classified by the Enzyme Commission (EC) and relate these to structurally classified domains in the SCOP database. We find that the major SCOP fold classes have different propensities to carry out certain broad categories of functions. For instance, alpha/beta folds are disproportionately associated with enzymes, especially transferases and hydrolases, and all-alpha and small folds with non-enzymes, while alpha+beta folds have an equal tendency either way. These observations for the database overall are largely true for specific genomes. We focus, in particular, on yeast, analyzing it with many classifications in addition to SCOP and EC (i.e. COGs, CATH, MIPS), and find clear tendencies for fold-function association, across a broad spectrum of functions. Analysis with the COGs scheme also suggests that the functions of the most ancient proteins are more evenly distributed among different structural classes than those of more modern ones. For the database overall, we identify the most versatile functions, i.e. those that are associated with the most folds, and the most versatile folds, associated with the most functions. The two most versatile enzymatic functions (hydro-lyases and O-glycosyl glucosidases) are associated with seven folds each. The five most versatile folds (TIM-barrel, Rossmann, ferredoxin, alpha-beta hydrolase, and P-loop NTP hydrolase) are all mixed alpha-beta structures. They stand out as generic scaffolds, accommodating from six to as many as 16 functions (for the exceptional TIM-barrel). At the conclusion of our analysis we are able to construct a graph giving the chance that a functional annotation can be reliably transferred at different degrees of sequence and structural similarity. Supplemental information is available from http://bioinfo.mbb.yale.edu/genome/foldfunc++ +.
---
paper_title: Exploring the Metabolic and Genetic Control of Gene Expression on a Genomic Scale
paper_content:
DNA microarrays containing virtually every gene of Saccharomyces cerevisiae were used to carry out a comprehensive investigation of the temporal program of gene expression accompanying the metabolic shift from fermentation to respiration. The expression profiles observed for genes with known metabolic functions pointed to features of the metabolic reprogramming that occur during the diauxic shift, and the expression patterns of many previously uncharacterized genes provided clues to their possible functions. The same DNA microarrays were also used to identify genes whose expression was affected by deletion of the transcriptional co-repressor TUP1 or overexpression of the transcriptional activator YAP1. These results demonstrate the feasibility and utility of this approach to genomewide exploration of gene expression patterns.
---
paper_title: DNA Recognition by a β-sheet
paper_content:
The stereochemical basis of DNA bending by a β-sheet is discussed in the light of crystal structures of MetJ and Arc repressors. The β-sheets of MetJ and Arc repressors bend the six base-pair binding sites in the DNA in different directions. The β-sheet of MetJ compresses the major groove and thereby bends the DNA locally around the major groove, while that of ArcR widens the major groove at the centre and bends the DNA locally around the minor groove. Whether the major groove is compressed or widened seems to be dependent on the overall shape of the β-sheet, particularly the size of residues in some positions on the β-sheet. To close or open the major groove, a pyrimidine-purine step at the centre of the binding site rolls in opposite directions.
---
paper_title: DNA recognition code of transcription factors
paper_content:
Over 35 years have passed since the 'central dogma' of molecular biology (DNA makes RNA makes protein) was proposed (Crick, 1958). Despite its remarkable verification, it is being seen increasingly as limited, for if the whole flow of information in a cell were unidirectional, all cells with the same complement of genetic material would have identical function and morphology. The truth is manifestly otherwise. A group of proteins, transcription factors, selects the information used in cells by specifically binding to 'regulatory' DNA sequences. Among other effects, this causes the differentiation of cells. These factors act as the final messenger in a transduction pathway of signals which come from outside the cell. Thus, gene expression can be regulated by the environment. Recognition between a transcription factor and its target DNA is achieved through the physical interaction of the two molecules. Since the structures of both DNA and proteins are determined by their primary sequences, there must be a set of rules to describe DNA-protein interactions entirely on the basis of sequences. The fundamental question is whether these rules are simple and comprehensible, such that the DNA recognition code can be compared with the triplet code which summarizes the rules of how DNA and protein sequences are related in the central dogma. As we review in this paper, a simple code for DNA recognition by transcription factors does seem to exist. In fact, the recognition rules allow us (i) to predict DNA-protein interactions, (ii) to change the binding specificity of an existing transcription factor, and (iii) probably even to design in a rational way a new protein which binds to a particular DNA sequence. The code has been derived from crystal structures of transcription factor-DNA complexes (Table I) and the vast body of biochemical, genetic and statistical information about the binding specificity of transcription factors. Most of the transcription factors discussed here use an α-helix, which binds to the DNA major groove, for recognition. Those proteins which have a 'recognition helix' discussed here fall mainly into four families: probe helix (PH), helix-turn-helix (HTH), zinc finger (ZnF) and C4 Zn binding proteins (C4). There is, in addition, one transcription factor family described that uses a β-sheet, the MetJ repressor-like (MR) family. [See Table I for members of these and other families. Note that (i) individual Zn fingers are further subdivided into A and B fingers, AF and BF (Suzuki et al., 1994a), (ii) the PH family includes homeodomain and basic-zipper proteins (Suzuki, 1993) and (iii) the C4 family includes the hormone receptors and the GATA proteins (Suzuki and Chothia, 1994).]
---
paper_title: Supersites within superfolds. Binding site similarity in the absence of homology.
paper_content:
A method is presented to assess the significance of binding site similarities within superimposed protein three-dimensional (3D) structures and applied to all similar structures in the Protein Data Bank. For similarities between 3D structures lacking significant sequence similarity, the important distinction was made between remote homology (an ancient common ancestor) and analogy (likely convergence to a folding motif) according to the structural classification of proteins (SCOP) database. Supersites were defined as structural locations on groups of analogous proteins (i.e. superfolds) showing a statistically significant tendency to bind substrates despite little evidence of a common ancestor for the proteins considered. We identify three potentially new superfolds containing supersites: ferredoxin-like folds, four-helical bundles and double-stranded β helices. In addition, the method quantifies binding site similarities within homologous proteins and previously identified supersites such as that found in the β/α (TIM) barrels. For the nine superfolds, the accuracy of predictions of binding site locations is assessed. Implications for protein evolution, and the prediction of protein function either through fold recognition or tertiary structure comparison, are discussed.
---
paper_title: Sequence-specific recognition of double helical nucleic acids by proteins.
paper_content:
The base pairs in double helical nucleic acids have been compared to see how they can be recognized by proteins. We conclude that a single hydrogen bond is inadequate for uniquely identifying any particular base pair, as this leads to numerous degeneracies. However, using two hydrogen bonds, fidelity of base pair recognition may be achieved. We propose specific amino-acid side chain interactions involving two hydrogen bonds as a component of the recognition system for base pairs. In the major groove we suggest that asparagine or glutamine binds to adenine of the base pair or arginine binds to guanine. In the minor groove, we suggest an interaction between asparagine or glutamine with guanine of the base pair. We also discuss the role that ions and other amino-acid side chains may play in recognition interactions.
---
paper_title: Assessing annotation transfer for genomics: quantifying the relations between protein sequence, structure and function through traditional and probabilistic scores
paper_content:
Measuring in a quantitative, statistical sense the degree to which structural and functional information can be "transferred" between pairs of related protein sequences at various levels of similarity is an essential prerequisite for robust genome annotation. To this end, we performed pairwise sequence, structure and function comparisons on 30,000 pairs of protein domains with known structure and function. Our domain pairs, which are constructed according to the SCOP fold classification, range in similarity from just sharing a fold, to being nearly identical. Our results show that traditional scores for sequence and structure similarity have the same basic exponential relationship as observed previously, with structural divergence, measured in RMS, being exponentially related to sequence divergence, measured in percent identity. However, as the scale of our survey is much larger than any previous investigations, our results have greater statistical weight and precision. We have been able to express the relationship of sequence and structure similarity using more "modern scores", such as Smith-Waterman alignment scores and probabilistic P-values for both sequence and structure comparison. These modern scores address some of the problems with traditional scores, such as determining a conserved core and correcting for length dependency; they enable us to phrase the sequence-structure relationship in more precise and accurate terms. We found that the basic exponential sequence-structure relationship is very general: the same essential relationship is found in the different secondary-structure classes and is evident in all the scoring schemes. To relate function to sequence and structure we assigned various levels of functional similarity to the domain pairs, based on a simple functional classification scheme. This scheme was constructed by combining and augmenting annotations in the enzyme and fly functional classifications and comparing subsets of these to the Escherichia coli and yeast classifications. We found sigmoidal relationships between similarity in function and sequence, with clear thresholds for different levels of functional conservation. For pairs of domains that share the same fold, precise function appears to be conserved down to 40% sequence identity, whereas broad functional class is conserved to 25%. Interestingly, percent identity is more effective at quantifying functional conservation than the more modern scores (e.g. P-values). Results of all the pairwise comparisons and our combined functional classification scheme for protein structures can be accessed from a web database at http://bioinfo.mbb.yale.edu/align
---
paper_title: A Comprehensive Library of DNA-binding Site Matrices for 55 Proteins Applied to the Complete Escherichia coli K-12 Genome
paper_content:
A major mode of gene regulation occurs via the binding of specific proteins to specific DNA sequences. The availability of complete bacterial genome sequences offers an unprecedented opportunity to describe networks of such interactions by correlating existing experimental data with computational predictions. Of the 240 candidate Escherichia coli DNA-binding proteins, about 55 have DNA-binding sites identified by DNA footprinting. We used these sites to construct recognition matrices, which we used to search for additional binding sites in the E. coli genomic sequence. Many of these matrices show a strong preference for non-coding DNA. Discrepancies are identified between matrices derived from natural sites and those derived from SELEX (Systematic Evolution of Ligands by Exponential enrichment) experiments. We have constructed a database of these proteins and binding sites, called DPInteract (available at http://arep.med.harvard.edu/dpinteract).
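A recognition matrix of this kind is essentially a position weight matrix built from aligned footprinted sites and slid along the genome. A minimal sketch follows; the example sites, the uniform background, the pseudocount and the score threshold are all assumptions for illustration, not the parameters used for DPInteract.

```python
# Minimal position-weight-matrix sketch (sites and threshold are invented):
# build log-odds weights from aligned binding sites, then score every window
# of a sequence and report those above a cutoff.
import math

def build_pwm(sites, background=0.25, pseudocount=0.5):
    length = len(sites[0])
    pwm = []
    for i in range(length):
        column = [s[i] for s in sites]
        weights = {}
        for b in "ACGT":
            freq = (column.count(b) + pseudocount) / (len(sites) + 4 * pseudocount)
            weights[b] = math.log2(freq / background)
        pwm.append(weights)
    return pwm

def scan(sequence, pwm, threshold):
    width = len(pwm)
    for i in range(len(sequence) - width + 1):
        window = sequence[i:i + width]
        score = sum(col[b] for col, b in zip(pwm, window))
        if score >= threshold:
            yield i, window, round(score, 2)

sites = ["TTGACA", "TTGACT", "TTTACA", "TTGATA"]        # toy aligned sites
pwm = build_pwm(sites)
genome = "ACGTTTGACAGGCTAGTTGACTCCG"                    # toy sequence to scan
print(list(scan(genome, pwm, threshold=6.0)))
```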
---
paper_title: Predictive docking of protein—protein and protein—DNA complexes
paper_content:
Recent developments in algorithms to predict the docking of two proteins have considered both the initial rigid-body global search and subsequent screening and refinement. The results of two blind trials of protein docking are encouraging: for complexes that are not too large and do not undergo sizeable conformational change upon association, the algorithms are now able to suggest reasonably accurate models.
---
paper_title: Operons in Escherichia coli: Genomic analyses and predictions
paper_content:
The rich knowledge of operon organization in Escherichia coli, together with the completed chromosomal sequence of this bacterium, enabled us to perform an analysis of distances between genes and of functional relationships of adjacent genes in the same operon, as opposed to adjacent genes in different transcription units. We measured and demonstrated the expected tendencies of genes within operons to have much shorter intergenic distances than genes at the borders of transcription units. A clear peak at short distances between genes in the same operon contrasts with a flat frequency distribution of genes at the borders of transcription units. Also, genes in the same operon tend to have the same physiological functional class. The results of these analyses were used to implement a method to predict the genomic organization of genes into transcription units. The method has a maximum accuracy of 88% correct identification of pairs of adjacent genes to be in an operon, or at the borders of transcription units, and correctly identifies around 75% of the known transcription units when used to predict the transcription unit organization of the E. coli genome. Based on the frequency distance distributions, we estimated a total of 630 to 700 operons in E. coli. This step opens the possibility of predicting operon organization in other bacteria whose genome sequences have been finished.
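The central observation, that same-operon neighbours have very short intergenic distances, already supports a simple predictor. The sketch below groups adjacent same-strand genes whose gap is below a threshold; the 50 bp cutoff and the example coordinates are illustrative, whereas the published method uses distance (and functional-class) distributions fitted to known operons.

```python
# Sketch of distance-based operon grouping (thresholds are illustrative, not
# the paper's fitted distributions): adjacent genes on the same strand are
# placed in the same transcription unit when the gap between them is short.

def predict_operons(genes, max_gap=50):
    """genes: list of (name, start, end, strand) sorted by start coordinate."""
    operons, current = [], [genes[0]]
    for prev, gene in zip(genes, genes[1:]):
        same_strand = prev[3] == gene[3]
        gap = gene[1] - prev[2]          # intergenic distance in base pairs
        if same_strand and gap <= max_gap:
            current.append(gene)
        else:
            operons.append(current)
            current = [gene]
    operons.append(current)
    return [[g[0] for g in op] for op in operons]

genes = [("thrA", 337, 2799, "+"), ("thrB", 2801, 3733, "+"),   # approximate, illustrative coordinates
         ("thrC", 3734, 5020, "+"), ("yaaX", 5234, 5530, "+"),
         ("yaaA", 5683, 6459, "-")]
print(predict_operons(genes))
# [['thrA', 'thrB', 'thrC'], ['yaaX'], ['yaaA']]
```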
---
paper_title: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring
paper_content:
Although cancer classification has improved over the past 30 years, there has been no general approach for identifying new cancer classes (class discovery) or for assigning tumors to known classes (class prediction). Here, a generic approach to cancer classification based on gene expression monitoring by DNA microarrays is described and applied to human acute leukemias as a test case. A class discovery procedure automatically discovered the distinction between acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) without previous knowledge of these classes. An automatically derived class predictor was able to determine the class of new leukemia cases. The results demonstrate the feasibility of cancer classification based solely on gene expression monitoring and suggest a general strategy for discovering and predicting cancer classes for other types of cancer, independent of previous biological knowledge.
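Class prediction of this kind can be approximated by a weighted-voting scheme: informative genes vote for the class whose mean expression they are closer to, weighted by a signal-to-noise ratio. The sketch below is a simplified re-implementation in that spirit, not the authors' code, and the expression values are invented.

```python
# Simplified class predictor in the spirit of weighted voting (not the
# authors' code; data are invented). Each gene votes for the class whose
# mean it is closer to, weighted by its signal-to-noise ratio.
from statistics import mean, stdev

def signal_to_noise(values_a, values_b):
    return (mean(values_a) - mean(values_b)) / (stdev(values_a) + stdev(values_b))

def predict(train_a, train_b, sample):
    """train_a/train_b: {gene: [expression values]}, sample: {gene: value}."""
    vote = 0.0
    for gene, x in sample.items():
        a, b = train_a[gene], train_b[gene]
        w = signal_to_noise(a, b)
        midpoint = (mean(a) + mean(b)) / 2
        vote += w * (x - midpoint)       # positive pushes toward class A
    return "A" if vote > 0 else "B"

train_a = {"g1": [9.0, 8.5, 9.2], "g2": [2.0, 2.2, 1.9]}   # class A training profiles
train_b = {"g1": [3.0, 3.4, 2.8], "g2": [7.5, 7.0, 7.8]}   # class B training profiles
print(predict(train_a, train_b, {"g1": 8.8, "g2": 2.1}))    # -> 'A'
```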
---
paper_title: Prediction of function in DNA sequence analysis.
paper_content:
Recognition of function of newly sequenced DNA fragments is an important area of computational molecular biology. Here we present an extensive review of methods for prediction of functional sites, tRNA, and protein-coding genes and discuss possible further directions of research in this area.
---
paper_title: Improved tools for biological sequence comparison.
paper_content:
We have developed three computer programs for comparisons of protein and DNA sequences. They can be used to search sequence data bases, evaluate similarity scores, and identify periodic structures based on local sequence similarity. The FASTA program is a more sensitive derivative of the FASTP program, which can be used to search protein or DNA sequence data bases and can compare a protein sequence to a DNA sequence data base by translating the DNA data base as it is searched. FASTA includes an additional step in the calculation of the initial pairwise similarity score that allows multiple regions of similarity to be joined to increase the score of related sequences. The RDF2 program can be used to evaluate the significance of similarity scores using a shuffling method that preserves local sequence composition. The LFASTA program can display all the regions of local similarity between two sequences with scores greater than a threshold, using the same scoring parameters and a similar alignment algorithm; these local similarities can be displayed as a "graphic matrix" plot or as individual alignments. In addition, these programs have been generalized to allow comparison of DNA or protein sequences based on a variety of alternative scoring matrices.
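Local similarity searching of this kind ultimately rests on local alignment scoring. The sketch below computes a Smith-Waterman local alignment score with a simple match/mismatch/gap scheme; FASTA itself uses a fast k-tuple heuristic and different scoring matrices, so this is an illustration of the underlying idea only.

```python
# Compact Smith-Waterman local alignment score (illustration; FASTA uses a
# fast k-tuple heuristic before rescoring, and other scoring parameters).
# Scores: +2 match, -1 mismatch, -2 gap.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))   # best local alignment score
```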
---
paper_title: DNA arrays for analysis of gene expression
paper_content:
This chapter describes one of the currently used microarray technologies, commonly called “spotting” or “printing” because DNAs are physically spotted on a solid substrate, as distinct from arrays in which short oligonucleotides are synthesized directly on a solid support. In standard spotting applications, large collections of DNA samples are assembled in 96- or 384-well plates. DNA microarrays are used for a variety of purposes; essentially any property of a DNA sequence that can be made experimentally to result in differential recovery of that sequence can be assayed for thousands of sequences at once by DNA microarray hybridization. The chapter focuses on the application of DNA microarrays to gene expression studies and discusses general principles of whole genome expression monitoring as well as detailing the specific process of making and using spotted DNA microarrays.
---
paper_title: Initial sequencing and analysis of the human genome.
paper_content:
The human genome holds an extraordinary trove of information about human development, physiology, medicine and evolution. Here we report the results of an international collaboration to produce and make freely available a draft sequence of the human genome. We also present an initial analysis of the data, describing some of the insights that can be gleaned from the sequence.
---
paper_title: Conservation of DNA Regulatory Motifs and Discovery of New Motifs in Microbial Genomes
paper_content:
Regulatory motifs can be found by local multiple alignment of upstream regions from coregulated sets of genes, or regulons. We searched for regulatory motifs using the program AlignACE together with a set of filters that helped us choose the motifs most likely to be biologically relevant in 17 complete microbial genomes. We searched the upstream regions of potentially coregulated genes grouped by three methods: (1) genes that make up functional pathways; (2) genes homologous to regulons from a well-studied species (Escherichia coli); and (3) groups of genes derived from conserved operons. This last group is based on the observation that genes making up homologous regulons in different species are often assorted into coregulated operons in different combinations. This allows partial reconstruction of regulons by looking at operon structure across several species. Unlike other methods for predicting regulons, this method does not depend on the availability of experimental data other than the genome sequence and the locations of genes. New, statistically significant motifs were found in the genome sequence of each organism using each grouping method. The most significant new motif was found upstream of genes in the methane-metabolism functional group in Methanobacterium thermoautotrophicum. We found that at least 27% of the known E. coli DNA-regulatory motifs are conserved in one or more distantly related eubacteria. We also observed significant motifs that differed from the E. coli motif in other organisms upstream of sets of genes homologous to known E. coli regulons, including Crp, LexA, and ArcA in Bacillus subtilis; four anaerobic regulons in Archaeoglobus fulgidus (NarL, NarP, Fnr, and ModE); and the PhoB, PurR, RpoH, and FhlA regulons in other archaebacterial species. We also used motif conservation to aid in finding new motifs by grouping upstream regions from closely related bacteria, thus increasing the number of instances of the motif in the sequence to be aligned. For example, by grouping upstream sequences from three archaebacterial species, we found a conserved motif that may regulate ferrous ion transport that was not found in individual genomes. Discovery of conserved motifs becomes easier as the number of closely related genome sequences increases.
---
paper_title: A Genomic Perspective on Protein Families
paper_content:
In order to extract the maximum amount of information from the rapidly accumulating genome sequences, all conserved genes need to be classified according to their homologous relationships. Comparison of proteins encoded in seven complete genomes from five major phylogenetic lineages and elucidation of consistent patterns of sequence similarities allowed the delineation of 720 clusters of orthologous groups (COGs). Each COG consists of individual orthologous proteins or orthologous sets of paralogs from at least three lineages. Orthologs typically have the same function, allowing transfer of functional information from one member to an entire COG. This relation automatically yields a number of functional predictions for poorly characterized genomes. The COGs comprise a framework for functional and evolutionary genome analysis.
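Orthologous groups of this kind are typically seeded from symmetrical best-hit relationships between genomes. The sketch below extracts bidirectional best hits from a pairwise similarity table for two genomes; the scores are invented, and full COG construction additionally requires consistent triangles across at least three lineages and handling of paralogues.

```python
# Sketch of bidirectional best hits between two genomes (scores are invented;
# COG construction further requires consistency across >= 3 lineages).

def best_hits(scores):
    """scores: {(gene_x, gene_y): similarity} for genes of genome X vs Y."""
    best_xy, best_yx = {}, {}
    for (x, y), s in scores.items():
        if s > best_xy.get(x, (None, -1))[1]:
            best_xy[x] = (y, s)
        if s > best_yx.get(y, (None, -1))[1]:
            best_yx[y] = (x, s)
    # keep only pairs that are each other's best hit
    return [(x, y) for x, (y, _) in best_xy.items()
            if best_yx.get(y, (None,))[0] == x]

scores = {("ecA", "hiA"): 310, ("ecA", "hiB"): 55,
          ("ecB", "hiB"): 240, ("ecC", "hiB"): 260}
print(best_hits(scores))   # [('ecA', 'hiA'), ('ecC', 'hiB')]
```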
---
paper_title: Complete genomes in WWW Entrez: data representation and analysis.
paper_content:
Motivation: The large amount of genome sequence data now publicly available can be accessed through the National Center for Biotechnology Information (NCBI) Entrez search and retrieval system, making it possible to explore data of a breadth and scope exceeding traditional flatfile views. Results: Here we report recent improvements for completely sequenced genomes from viruses, bacteria, and yeast. Flexible web based views, precomputed relationships, and immediate access to analytical tools provide scientists with a portal into the new insights to be gained from completed genome sequences.
---
paper_title: Comparing Genomes in terms of Protein Structure: Surveys of a Finite Parts List
paper_content:
We give an overview of the emerging field of structural genomics, describing how genomes can be compared in terms of protein structure. As the number of genes in a genome and the total number of protein folds are both quite limited, these comparisons take the form of surveys of a finite parts list, similar in respects to demographic censuses. Fold surveys have many similarities with other whole-genome characterizations, e.g. analyses of motifs or pathways. However, structure has a number of aspects that make it particularly suitable for comparing genomes, namely the way it allows for the precise definition of a basic protein module and the fact that it has a better defined relationship to sequence similarity than does protein function. An essential requirement for a structure survey is a library of folds, which groups the known structures into `fold families'. This library can be built up automatically using a structure comparison program, and we described how important objective statistical measures are for assessing similarities within the library and between the library and genome sequences. After building the library, one can use it to count the number of folds in genomes, expressing the results in the form of Venn diagrams and `top-10' statistics for shared and common folds. Depending on the counting methodology employed, these statistics can reflect different aspects of the genome, such as the amount of internal duplication or gene expression. Previous analyses have shown that the common folds shared between very different microorganisms, i.e. in different kingdoms, have a remarkably similar structure, being comprised of repeated strand–helix–strand super-secondary structure units. A major difficulty with this sort of `fold-counting' is that only a small subset of the structures in a complete genome are currently known and this subset is prone to sampling bias. One way of overcoming biases is through structure prediction, which can be applied uniformly and comprehensively to a whole genome. Various investigators have, in fact, already applied many of the existing techniques for predicting secondary structure and transmembrane (TM) helices to the recently sequenced genomes. The results have been consistent: microbial genomes have similar fractions of strands and helices even though they have significantly different amino acid composition. The fraction of membrane proteins with a given number of TM helices falls off rapidly with more TM elements, approximately according to a Zipf law. This latter finding indicates that there is no preference for the highly studied 7-TM proteins in microbial genomes. Continuously updated tables and further information pertinent to this review are available over the web at http://bioinfo.mbb.yale.edu/genome.
---
paper_title: Advances in structural genomics
paper_content:
New computational techniques have allowed protein folds to be assigned to all or parts of between a quarter ( Caenorhabditis elegans ) and a half ( Mycoplasma genitalium ) of the individual protein sequences in different genomes. These assignments give a new perspective on domain structures, gene duplications, protein families and protein folds in genome sequences.
---
paper_title: Distinguishing homologous from analogous proteins.
paper_content:
Fitch, W. M. (Dept. Physiological Chem., U. Wisconsin, Madison 53706) 1970. Distinguishing homologous from analogaus proteins. Syst. Zool., 19:99-113.-This work provides a means by which it is possible to determine whether two groups of related proteins have a common ancestor or are of independent origin. A set of 16 random amino acid sequences were shown to be unrelated by this method. A set of 16 real but presumably unrelated proteins gave a similar result. A set of 24 model proteins which was composed of two independently evolving groups, converging toward the same chemical goal, was correctly shown to be convergently related, with the probability that the result was due to chance being <10'. A set of 24 cytochromes composed of 5 fungi and 19 metazoans was shown to be divergently related, with the probability that the result was due to chance being < 10-'. A process was described which leads to the absolute minimum of nucleotide replacements required to account for the divergent descent of a set of genes given a particular topology for the tree depicting their ancestral relations. It was also shown that the convergent processes could realistically lead to amino acid sequences which would produce positive tests for relatedness, not only by a chemical criterion, but by a genetic (nucleotide sequence) criterion as well. Finally, a realistic case is indicated where truly homologous traits, behaving in a perfectly expectable way, may nevertheless lead to a ludicrous phylogeny. The demonstration that two proteins are related has been attempted using two different criteria. One criterion is to show that their chemical structures are very similar. An early example of this approach was the observation of the relatedness of the oxygen carrying proteins, myoglobin and hemoglobin (Watson and Kendrew, 1961). More recent is the relatedness of two enzymes in carbohydrate metabolism, lysozyme and alpha-lactalbumin (Brew, Vanaman and Hill, 1967). The other criterion is to show that underlying genetic structures of the proteins are more alike than one would expect by chance. This is now possible because our knowledge of the genetic code permits us to determine how many nucleotide positions, at the minimum, must differ in the genes encoding the two presumptively homologous proteins. One then compares the answer obtained to the number of differences one would expect for unrelated proteins. An example of this approach is the observation of the relatedness of plant and bacterial ferredoxins (Matsubara, Jukes and Cantor, 1969) for which added evidence has been produced (Fitch, 1970a). But regardless of the approach, the impulse, too powerful to resist, is to conclude that a particular pair of proteins had a common genic ancestor if they meet whichever criterion the observer uses. Now two proteins may appear similar because they descend with divergence from a common ancestral gene (i.e., are homologous in a time-honoured meaning dating back at the least to Darwin's Origin of Species) or because they descend with convergence from separate ancestral genes (i.e., are analogous). And, if a common genic ancestor is to be the conclusion, a genetic criterion should be superior to a chemical criterion. This is because analogous gene products, although they have no common ancestor, do serve similar functions and may well be expected to have similar chemical structures and thereby be confused with homologous gene products. This danger can only be increased by using a chemical, as opposed to a genetic, criterion.
---
paper_title: Protein folds and functions
paper_content:
Background: The recent rapid increase in the number of available three-dimensional protein structures has further highlighted the necessity to understand the relationship between biological function and structure. Using structural classification schemes such as SCOP, CATH and DALI, it is now possible to explore global relationships between protein fold and function, something which was previously impractical. Results: Using a relational database of CATH data we have generated fold distributions for arbitrary selections of proteins automatically. These distributions have been examined in the light of protein function and bound ligand. Different enzyme classes are not clearly reflected in distributions of protein class and architecture, whereas the type of bound ligand has a much more dramatic effect. Conclusions: The availability of structural classification data has enabled this novel overview analysis. We conclude that function at the top level of the EC number enzyme classification is not related to fold, as only a very few specific residues are actually responsible for enzyme activity. Conversely, the fold is much more closely related to ligand type.
---
paper_title: The Frequency Distribution of Gene Family Sizes in Complete Genomes.
paper_content:
We compare the frequency distribution of gene family sizes in the complete genomes of six bacteria (Escherichia coli, Haemophilus influenzae, Helicobacter pylori, Mycoplasma genitalium, Mycoplasma pneumoniae, and Synechocystis sp. PCC6803), two Archaea (Methanococcus jannaschii and Methanobacterium thermoautotrophicum), one eukaryote (Saccharomyces cerevisiae), the vaccinia virus, and the bacteriophage T4. The sizes of the gene families versus their frequencies show power-law distributions that tend to become flatter (have a larger exponent) as the number of genes in the genome increases. Power-law distributions generally occur as the limit distribution of a multiplicative stochastic process with a boundary constraint. We discuss various models that can account for a multiplicative process determining the sizes of gene families in the genome. In particular, we argue that, in order to explain the observed distributions, gene families have to behave in a coherent fashion within the genome; i.e., the probabilities of duplications of genes within a gene family are not independent of each other. Likewise, the probabilities of deletions of genes within a gene family are not independent of each other.
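The reported power-law behaviour can be illustrated by tabulating family sizes and fitting a straight line in log-log space. The family sizes below are invented and the least-squares fit is a rough illustration; a careful analysis would use maximum-likelihood estimation and goodness-of-fit tests.

```python
# Sketch: tabulate gene-family sizes and estimate a power-law exponent by a
# straight-line fit in log-log space (family sizes are invented).
import math
from collections import Counter

family_sizes = [1]*60 + [2]*18 + [3]*8 + [4]*5 + [5]*3 + [8]*2 + [13]*1

def powerlaw_exponent(sizes):
    freq = Counter(sizes)                       # size -> number of families
    xs = [math.log(s) for s in freq]
    ys = [math.log(n) for n in freq.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope                               # exponent in  freq ~ size^-exponent

print(round(powerlaw_exponent(family_sizes), 2))
```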
---
paper_title: Extracting Regulatory Sites from the Upstream Region of Yeast Genes by Computational Analysis of Oligonucleotide Frequencies
paper_content:
We present here a simple and fast method allowing the isolation of DNA binding sites for transcription factors from families of coregulated genes, with results illustrated in Saccharomyces cerevisiae. Although conceptually simple, the algorithm proved efficient for extracting, from most of the yeast regulatory families analyzed, the upstream regulatory sequences which had been previously found by experimental analysis. Furthermore, putative new regulatory sites are predicted within upstream regions of several regulons. The method is based on the detection of over-represented oligonucleotides. A specificity of this approach is to define the statistical significance of a site based on tables of oligonucleotide frequencies observed in all non-coding sequences from the yeast genome. In contrast with heuristic methods, this oligonucleotide analysis is rigorous and exhaustive. Its range of detection is however limited to relatively simple patterns: short motifs with a highly conserved core. These features seem to be shared by a good number of regulatory sites in yeast. This, and similar methods, should be increasingly required to identify unknown regulatory elements within the numerous new coregulated families resulting from measurements of gene expression levels at the genomic scale. All tools described here are available on the web at the site http://copan.cifn.unam.mx/Computational_Biology/yeast-tools
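The core of the approach, counting oligonucleotides in the upstream regions of a gene family and asking whether they occur more often than a background model predicts, can be sketched as follows. The toy sequences, the uniform background frequency and the significance cutoff are assumptions; the published method uses frequency tables from all yeast non-coding sequences and corrects for the number of words tested.

```python
# Sketch of over-represented hexanucleotide detection (sequences and the
# background model are invented; no multiple-testing correction is applied).
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def over_represented(upstream_seqs, k=6, background_freq=1/4**6, alpha=1e-3):
    counts, positions = {}, 0
    for seq in upstream_seqs:
        positions += len(seq) - k + 1
        for i in range(len(seq) - k + 1):
            word = seq[i:i + k]
            counts[word] = counts.get(word, 0) + 1
    hits = []
    for word, n in counts.items():
        p = binom_sf(n, positions, background_freq)
        if p < alpha and n > 1:
            hits.append((word, n, p))
    return sorted(hits, key=lambda h: h[2])

family = ["TTACGCACGTGGGTA", "CCCACGTGAAT", "GGACGCACGTGTT"]   # toy upstream regions
for word, n, p in over_represented(family):
    print(word, n, f"{p:.2e}")
```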
---
paper_title: Molecular portraits of human breast tumours
paper_content:
Human breast tumours are diverse in their natural history and in their responsiveness to treatments. Variation in transcriptional programs accounts for much of the biological diversity of human cells and tumours. In each cell, signal transduction and regulatory systems transduce information from the cell's identity to its environmental status, thereby controlling the level of expression of every gene in the genome. Here we have characterized variation in gene expression patterns in a set of 65 surgical specimens of human breast tumours from 42 different individuals, using complementary DNA microarrays representing 8,102 human genes. These patterns provided a distinctive molecular portrait of each tumour. Twenty of the tumours were sampled twice, before and after a 16-week course of doxorubicin chemotherapy, and two tumours were paired with a lymph node metastasis from the same patient. Gene expression patterns in two tumour samples from the same individual were almost always more similar to each other than either was to any other sample. Sets of co-expressed genes were identified for which variation in messenger RNA levels could be related to specific features of physiological variation. The tumours could be classified into subtypes distinguished by pervasive differences in their gene expression patterns.
---
paper_title: High density synthetic oligonucleotide arrays
paper_content:
Experimental genomics involves taking advantage of sequence information to investigate and understand the workings of genes, cells and organisms. We have developed an approach in which sequence information is used directly to design high-density, two-dimensional arrays of synthetic oligonucleotides. The GeneChip® probe arrays are made using spatially patterned, light- directed combinatorial chemical synthesis, and contain up to hundreds of thousands of different oligonucleotides on a small glass surface. The arrays have been designed and used for quantitative and highly parallel measurements of gene expression, to discover polymorphic loci and to detect the presence of thousands of alternative alleles. Here, we describe the fabrication of the arrays, their design and some specific applications to high-throughput genetic and cellular analysis.
---
paper_title: PartsList: a web-based system for dynamically ranking protein folds based on disparate attributes, including whole-genome expression and interaction information
paper_content:
As the number of protein folds is quite limited, a mode of analysis that will be increasingly common in the future, especially with the advent of structural genomics, is to survey and re-survey the finite parts list of folds from an expanding number of perspectives. We have developed a new resource, called PartsList, that lets one dynamically perform these comparative fold surveys. It is available on the web at http://bioinfo.mbb.yale.edu/partslist and http://www.partslist.org. The system is based on the existing fold classifications and functions as a form of companion annotation for them, providing ‘global views’ of many already completed fold surveys. The central idea in the system is that of comparison through ranking; PartsList will rank the approximately 420 folds based on more than 180 attributes. These include: (i) occurrence in a number of completely sequenced genomes (e.g. it will show the most common folds in the worm versus yeast); (ii) occurrence in the structure databank (e.g. most common folds in the PDB); (iii) both absolute and relative gene expression information (e.g. most changing folds in expression over the cell cycle); (iv) protein–protein interactions, based on experimental data in yeast and comprehensive PDB surveys (e.g. most interacting fold); (v) sensitivity to inserted transposons; (vi) the number of functions associated with the fold (e.g. most multi-functional folds); (vii) amino acid composition (e.g. most Cys-rich folds); (viii) protein motions (e.g. most mobile folds); and (ix) the level of similarity based on a comprehensive set of structural alignments (e.g. most structurally variable folds). The integration of whole-genome expression and protein–protein interaction data with structural information is a particularly novel feature of our system. We provide three ways of visualizing the rankings: a profiler emphasizing the progression of high and low ranks across many pre-selected attributes, a dynamic comparer for custom comparisons and a numerical rankings correlator. These allow one to directly compare very different attributes of a fold (e.g. expression level, genome occurrence and maximum motion) in the uniform numerical format of ranks. This uniform framework, in turn, highlights the way that the frequency of many of the attributes falls off with approximate power-law behavior (i.e. according to V^-b, for attribute value V and constant exponent b), with a few folds having large values and most having small values.
---
paper_title: Global response of Saccharomyces cerevisiae to an alkylating agent
paper_content:
DNA chip technology enables simultaneous examination of how ≈6,200 Saccharomyces cerevisiae gene transcript levels, representing the entire genome, respond to environmental change. By using chips bearing oligonucleotide arrays, we show that, after exposure to the alkylating agent methyl methanesulfonate, ≈325 gene transcript levels are increased and ≈76 are decreased. Of the 21 genes that already were known to be induced by a DNA-damaging agent, 18 can be scored as inducible in this data set, and surprisingly, most of the newly identified inducible genes are induced even more strongly than these 18. We examined 42 responsive and 8 nonresponsive ORFs by conventional Northern blotting, and 48 of these 50 ORFs responded as they did by DNA chip analysis, with magnitudes displaying a correlation coefficient of 0.79. Responsive genes fall into several expected and many unexpected categories. Evidence for the induction of a program to eliminate and replace alkylated proteins is presented.
---
paper_title: Quantitative parameters for amino acid-base interaction: Implications for prediction of protein-DNA binding sites
paper_content:
Inspection of the amino acid-base interactions in protein-DNA complexes is essential to the understanding of specific recognition of DNA target sites by regulatory proteins. The accumulation of information on protein-DNA co-crystals challenges the derivation of quantitative parameters for amino acid-base interaction based on these data. Here we use the coordinates of 53 solved protein-DNA complexes to extract all non-homologous pairs of amino acid-base that are in close contact, including hydrogen bonds and hydrophobic interactions. By comparing the frequency distribution of the different pairs to a theoretical distribution and calculating the log odds, a quantitative measure that expresses the likelihood of interaction for each pair of amino acid-base could be extracted. A score that reflects the compatibility between a protein and its DNA target can be calculated by summing up the individual measures of the pairs of amino acid-base involved in the complex, assuming additivity in their contributions to binding. This score enables ranking of different DNA binding sites given a protein binding site and vice versa and can be used in molecular design protocols. We demonstrate its validity by comparing the predictions using this score with experimental binding results of sequence variants of zif268 zinc fingers and their DNA binding sites.
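To make the scoring idea above concrete, the short Python sketch below derives log-odds values from a table of amino acid-base contact counts and sums them (assuming additivity) into a compatibility score for a candidate interface. The contact counts and the two DNA-site variants are hypothetical toy values, not the statistics extracted from the 53 co-crystal structures.

```python
import math
from collections import Counter

# Hypothetical observed amino acid-base contact counts (toy numbers,
# not the statistics derived from the 53 protein-DNA co-crystals).
observed = Counter({("ARG", "G"): 40, ("ASN", "A"): 25, ("GLU", "C"): 10,
                    ("LYS", "G"): 30, ("SER", "T"): 15, ("ARG", "A"): 12})

total = sum(observed.values())
aa_marg, base_marg = Counter(), Counter()
for (aa, base), n in observed.items():
    aa_marg[aa] += n
    base_marg[base] += n

def log_odds(aa, base, pseudo=0.5):
    """log(observed frequency / frequency expected from the marginals)."""
    obs = (observed[(aa, base)] + pseudo) / (total + pseudo)
    exp = (aa_marg[aa] / total) * (base_marg[base] / total)
    return math.log(obs / exp)

def complex_score(contacts):
    """Sum of per-pair log-odds, assuming additive contributions to binding."""
    return sum(log_odds(aa, base) for aa, base in contacts)

# Rank two hypothetical DNA target variants for the same protein interface.
variant1 = [("ARG", "G"), ("ASN", "A"), ("LYS", "G")]
variant2 = [("ARG", "A"), ("GLU", "C"), ("SER", "T")]
for name, contacts in [("variant1", variant1), ("variant2", variant2)]:
    print(name, round(complex_score(contacts), 2))
```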
---
paper_title: Modelling repressor proteins docking to DNA
paper_content:
The docking of repressor proteins to DNA starting from the unbound protein and model-built DNA coordinates is modeled computationally. The approach was evaluated on eight repressor/DNA complexes that employed different modes for protein/DNA recognition. The global search is based on a protein-protein docking algorithm that evaluates shape and electrostatic complementarity, which was modified to consider the importance of electrostatic features in DNA-protein recognition. Complexes were then ranked by an empirical score for the observed amino acid/nucleotide pairings (i.e., protein-DNA pair potentials) derived from a database of 20 protein/DNA complexes. A good prediction had at least 65% of the correct contacts modeled. This approach was able to identify a good solution at rank four or better for three out of the eight complexes. Predicted complexes were filtered by a distance constraint based on experimental data defining the DNA footprint. This improved coverage to four out of eight complexes having a good model at rank four or better. The additional use of amino acid mutagenesis and phylogenetic data defining residues on the repressor resulted in between 2 and 27 models that would have to be examined to find a good solution for seven of the eight test systems. This study shows that starting with unbound coordinates one can predict three-dimensional models for protein/DNA complexes that do not involve gross conformational changes on association.
---
paper_title: Zinc Fingers in Caenorhabditis elegans: Finding Families and Probing Pathways
paper_content:
More than 3 percent of the protein sequences inferred from the Caenorhabditis elegans genome contain sequence motifs characteristic of zinc-binding structural domains, and of these more than half are believed to be sequence-specific DNA-binding proteins. The distribution of these zinc-binding domains among the genomes of various organisms offers insights into the role of zinc-binding proteins in evolution. In addition, the complete genome sequence of C. elegans provides an opportunity to analyze, and perhaps predict, pathways of transcriptional regulation.
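A minimal version of the kind of motif census described above can be sketched with a regular expression. The pattern below is a simplified, commonly quoted C2H2-type consensus (C-x(2,4)-C-x(12)-H-x(3,5)-H), not the exact motif definitions used in the survey, and the two-entry proteome is a made-up placeholder.

```python
import re

# Simplified C2H2-type zinc finger consensus: C-x(2,4)-C-x(12)-H-x(3,5)-H.
# A textbook approximation, not the motif set used in the C. elegans survey.
C2H2 = re.compile(r"C.{2,4}C.{12}H.{3,5}H")

def count_zf_proteins(proteome):
    """Return (matching protein names, total fingers) for a {name: sequence} dict."""
    hits, fingers = [], 0
    for name, seq in proteome.items():
        matches = C2H2.findall(seq)
        if matches:
            hits.append(name)
            fingers += len(matches)
    return hits, fingers

# Toy proteome with one hypothetical zinc-finger protein.
proteome = {
    "prot-1": "MKTAYCQICGKAFSRSDHLTRHIRTHTGEKPFACDICGRKFARSDERKRHTK",
    "prot-2": "MSTNPKPQRKTKRNTNRRPQDVKFPGG",
}
hits, fingers = count_zf_proteins(proteome)
print(f"{len(hits)}/{len(proteome)} proteins match; {fingers} fingers total")
```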
---
paper_title: Making and reading microarrays
paper_content:
There are a variety of options for making microarrays and obtaining microarray data. Here, we describe the building and use of two microarray facilities in academic settings. In addition to specifying technical detail, we comment on the advantages and disadvantages of components and approaches, and provide a protocol for hybridization. The fact that we are now making and using microarrays to answer biological questions demonstrates that the technology can be implemented in a university environment.
---
paper_title: Saturation mutagenesis of the UASNTR (GATAA) responsible for nitrogen catabolite repression-sensitive transcriptional activation of the allantoin pathway genes in Saccharomyces cerevisiae.
paper_content:
Saturation mutagenesis of the UASNTR element responsible for GLN3-dependent, nitrogen catabolite repression-sensitive transcriptional activation of allantoin pathway genes in yeast cells identified the dodecanucleotide sequence 5'-TTNCTGATAAGG-3' as the minimum required for UAS activity. There was significant flexibility in mutant sequences capable of supporting UAS activity, which correlates well with the high variation in UASNTR homologous sequences reported to be upstream of the DAL and DUR genes. Three of nine UASNTR-like sequences 5' of the DAL5 gene supported high-level transcriptional activation. The others, which contained nonpermissive substitutions, were not active.
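The dodecanucleotide element identified above can be located in candidate upstream regions with a simple pattern scan that treats the IUPAC 'N' as any base and also checks the opposite strand. The promoter fragment in the sketch is invented for illustration and is not the DAL5 upstream sequence.

```python
import re

# The minimal UAS(NTR) element reported above, with IUPAC 'N' = any base.
UAS_NTR = "TTNCTGATAAGG"

def iupac_to_regex(pattern):
    # Only 'N' is needed here; other IUPAC codes could be handled the same way.
    return re.compile(pattern.replace("N", "[ACGT]"))

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_uas(promoter):
    """Return (strand, start) hits for the element on either strand."""
    fwd = iupac_to_regex(UAS_NTR)
    rev = iupac_to_regex(revcomp(UAS_NTR))
    hits = [("+", m.start()) for m in fwd.finditer(promoter)]
    hits += [("-", m.start()) for m in rev.finditer(promoter)]
    return hits

# Hypothetical promoter fragment, not an actual DAL/DUR upstream sequence.
promoter = "AGCTTAGCTTACTGATAAGGTACCGTTGCTGATAAGGA"
print(find_uas(promoter))
```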
---
paper_title: A DNA structural atlas for escherichia coli
paper_content:
We have performed a computational analysis of DNA structural features in 18 fully sequenced prokaryotic genomes using models for DNA curvature, DNA flexibility, and DNA stability. The structural values that are computed for the Escherichia coli chromosome are significantly different from (and generally more extreme than) that expected from the nucleotide composition. To aid this analysis, we have constructed tools that plot structural measures for all positions in a long DNA sequence (e.g. an entire chromosome) in the form of color-coded wheels (http://www.cbs.dtu.dk/services/GenomeAtlas/). We find that these "structural atlases" are useful for the discovery of interesting features that may then be investigated in more depth using statistical methods. From investigation of the E. coli structural atlas, we discovered a genome-wide trend, where an extended region encompassing the terminus displays a high level of curvature, a low level of flexibility, and a low degree of helix stability. The same situation is found in the distantly related Gram-positive bacterium Bacillus subtilis, suggesting that the phenomenon is biologically relevant. Based on a search for long DNA segments where all the independent structural measures agree, we have found a set of 20 regions with identical and very extreme structural properties. Due to their strong inherent curvature, we suggest that these may function as topological domain boundaries by efficiently organizing plectonemically supercoiled DNA. Interestingly, we find that in practically all the investigated eubacterial and archaeal genomes, there is a trend for promoter DNA being more curved, less flexible, and less stable than DNA in coding regions and in intergenic DNA without promoters. This trend is present regardless of the absolute levels of the structural parameters, and we suggest that this may be related to the requirement for helix unwinding during initiation of transcription, or perhaps to the previously observed location of promoters at the apex of plectonemically supercoiled DNA. We have also analyzed the structural similarities between groups of genes by clustering all RNA and protein-encoding genes in E. coli, based on the average structural parameters. We find that most ribosomal genes (protein-encoding as well as rRNA genes) cluster together, and we suggest that DNA structure may play a role in the transcription of these highly expressed genes.
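A sliding-window profile of a dinucleotide-based measure is the basic computation behind such atlases; the sketch below shows only the windowing logic. The dinucleotide scale is an explicitly hypothetical placeholder, not the published curvature, flexibility or stability models used in the paper.

```python
# Sliding-window profile of a dinucleotide-based structural measure along a
# sequence. The scale below is a HYPOTHETICAL placeholder; the real atlases
# use published models for curvature, flexibility and helix stability.
TOY_SCALE = {"AA": 1.0, "AT": 0.9, "TA": 0.2, "TT": 1.0,
             "GC": -1.0, "CG": -0.9, "GG": -0.5, "CC": -0.5}

def window_profile(seq, window=500, step=100, scale=TOY_SCALE, default=0.0):
    """Average per-dinucleotide value in overlapping windows."""
    profile = []
    for start in range(0, len(seq) - window + 1, step):
        win = seq[start:start + window]
        values = [scale.get(win[i:i + 2], default) for i in range(len(win) - 1)]
        profile.append((start + window // 2, sum(values) / len(values)))
    return profile

if __name__ == "__main__":
    import random
    random.seed(0)
    chromosome = "".join(random.choice("ACGT") for _ in range(5000))
    for centre, value in window_profile(chromosome)[:5]:
        print(centre, round(value, 3))
```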
---
paper_title: How different amino acid sequences determine similar protein structures: the structure and evolutionary dynamics of the globins.
paper_content:
To determine how different amino acid sequences form similar protein structures, and how proteins adapt to mutations that change the volume of residues buried in their close-packed interiors, we have analysed and compared the atomic structures of nine different globins. The homology of the sequences in the two most distantly related molecules is only 16%. The principal determinants of three-dimensional structure of these proteins are the approximately 59 residues involved in helix to helix and helix to haem packings. Half of these residues are buried within the molecules. The observed variations in the sequence keep the side-chains of buried residues non-polar, but do not maintain their size: the mean variation of the volume among homologous amino acids is 56 Å^3. Changes in the volumes of buried residues are accompanied by changes in the geometry of the helix packings. The relative positions and orientations of homologous pairs of helices in the globins differ by rigid body shifts of up to 7 Å and 30°. In order to retain functional activity these shifts are coupled so that the geometry of the residues forming the haem pocket is very similar in all the globins. We discuss the implications of these results for the mechanism of protein evolution.
---
paper_title: KEGG: kyoto encyclopedia of genes and genomes.
paper_content:
KEGG (Kyoto Encyclopedia of Genes and Genomes) is a knowledge base for systematic analysis of gene functions, linking genomic information with higher order functional information. The genomic information is stored in the GENES database, which is a collection of gene catalogs for all the completely sequenced genomes and some partial genomes with up-to-date annotation of gene functions. The higher order functional information is stored in the PATHWAY database, which contains graphical representations of cellular processes, such as metabolism, membrane transport, signal transduction and cell cycle. The PATHWAY database is supplemented by a set of ortholog group tables for the information about conserved subpathways (pathway motifs), which are often encoded by positionally coupled genes on the chromosome and which are especially useful in predicting gene functions. A third database in KEGG is LIGAND for the information about chemical compounds, enzyme molecules and enzymatic reactions. KEGG provides Java graphics tools for browsing genome maps, comparing two genome maps and manipulating expression maps, as well as computational tools for sequence comparison, graph comparison and path computation. The KEGG databases are daily updated and made freely available (http://www.genome.ad.jp/kegg/).
---
paper_title: Recognition of analogous and homologous protein folds--assessment of prediction success and associated alignment accuracy using empirical substitution matrices.
paper_content:
Fold recognition methods aim to use the information in the known protein structures (the targets) to identify that the sequence of a protein of unknown structure (the probe) will adopt a known fold. This paper highlights that the structural similarities sought by these methods can be divided into two types: remote homologues and analogues. Homologues are the result of divergent evolution and often share a common function. We define remote homologues as those that are not easily detectable by sequence comparison methods alone. Analogues do not have a common ancestor and generally do not have a common function. Several sets of empirical matrices for residue substitution, secondary structure conservation and residue accessibility conservation have previously been derived from aligned pairs of remote homologues and analogues (Russell et al., J. Mol. Biol., 1997, 269, 423-439). Here a method for fold recognition, FOLDFIT, is introduced that uses these matrices to match the sequences, secondary structures and residue accessibilities of the probe and target. The approach is evaluated on distinct datasets of analogous and remotely homologous folds. The accuracy of FOLDFIT with the different matrices on the two datasets is contrasted to results from another fold recognition method (THREADER) and to searches using mutation matrices in the absence of any structural information. FOLDFIT identifies at top rank 12 out of 18 remotely homologous folds and five out of nine analogous folds. The average alignment accuracies for residue and secondary structure equivalencing are much higher for homologous folds (residue approximately 42%, secondary structure approximately 78%) than for analogues folds (approximately 12%, approximately 47%). Sequence searches alone can be successful for several homologues in the testing sets but nearly always fail for the analogues. These results suggest that the recognition of analogous and remotely homologous folds should be assessed separately. This study has implications for the development and comparative evaluation of fold recognition algorithms.
---
paper_title: Integrative database analysis in structural genomics
paper_content:
An important aspect of structural genomics is connecting coordinate data with whole-genome information related to phylogenetic occurrence, protein function, gene expression, and protein−protein interactions. Integrative database analysis allows one to survey the 'finite parts list' of protein folds from many perspectives, highlighting certain folds and structural features that stand out in particular ways.
---
paper_title: SRS : INFORMATION RETRIEVAL SYSTEM FOR MOLECULAR BIOLOGY DATA BANKS
paper_content:
This chapter presents a retrieval system called “Sequence Retrieval System (SRS)” that acts on data banks in a flat file or text format. It provides a homogeneous interface to about 80 biological databanks for accessing and querying their contents and for navigating among them. SRS is an integrated system that provides a homogeneous interface to all flat file data banks retained in their original format. It is a retrieval system that allows access to, but not the depositing of, data. Several elements are combined into a system that extends the power of normal retrieval systems and that rivals that of real databases, such as a relational system, without compromising speed. These elements include languages for data bank and syntax definition, a programmable parser, an indexing system, support for subentries, a novel system for exploiting links among data banks, and a query language. The database linking is a unique feature that considerably extends the capability of hypertext links.
---
paper_title: PartsList: a web-based system for dynamically ranking protein folds based on disparate attributes, including whole-genome expression and interaction information
paper_content:
As the number of protein folds is quite limited, a mode of analysis that will be increasingly common in the future, especially with the advent of structural genomics, is to survey and re-survey the finite parts list of folds from an expanding number of perspectives. We have developed a new resource, called PartsList, that lets one dynamically perform these comparative fold surveys. It is available on the web at http://bioinfo.mbb.yale.edu/partslist and http://www.partslist.org. The system is based on the existing fold classifications and functions as a form of companion annotation for them, providing ‘global views’ of many already completed fold surveys. The central idea in the system is that of comparison through ranking; PartsList will rank the approximately 420 folds based on more than 180 attributes. These include: (i) occurrence in a number of completely sequenced genomes (e.g. it will show the most common folds in the worm versus yeast); (ii) occurrence in the structure databank (e.g. most common folds in the PDB); (iii) both absolute and relative gene expression information (e.g. most changing folds in expression over the cell cycle); (iv) protein–protein interactions, based on experimental data in yeast and comprehensive PDB surveys (e.g. most interacting fold); (v) sensitivity to inserted transposons; (vi) the number of functions associated with the fold (e.g. most multi-functional folds); (vii) amino acid composition (e.g. most Cys-rich folds); (viii) protein motions (e.g. most mobile folds); and (ix) the level of similarity based on a comprehensive set of structural alignments (e.g. most structurally variable folds). The integration of whole-genome expression and protein–protein interaction data with structural information is a particularly novel feature of our system. We provide three ways of visualizing the rankings: a profiler emphasizing the progression of high and low ranks across many pre-selected attributes, a dynamic comparer for custom comparisons and a numerical rankings correlator. These allow one to directly compare very different attributes of a fold (e.g. expression level, genome occurrence and maximum motion) in the uniform numerical format of ranks. This uniform framework, in turn, highlights the way that the frequency of many of the attributes falls off with approximate power-law behavior (i.e. according to V^-b, for attribute value V and constant exponent b), with a few folds having large values and most having small values.
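The 'comparison through ranking' idea can be illustrated by ranking a handful of folds on two attributes and correlating the rankings, as in the sketch below. The fold names and attribute values are hypothetical placeholders, not PartsList data.

```python
# Rank-based comparison of fold attributes, in the spirit of PartsList.
# Fold names and attribute values below are hypothetical placeholders.
genome_occurrence = {"TIM-barrel": 95, "Rossmann": 80, "Ig-like": 60,
                     "Globin": 12, "Zn-finger": 40}
expression_level = {"TIM-barrel": 900, "Rossmann": 400, "Ig-like": 150,
                    "Globin": 700, "Zn-finger": 100}

def ranks(attribute):
    """Map each fold to its rank (1 = largest attribute value)."""
    ordered = sorted(attribute, key=attribute.get, reverse=True)
    return {fold: i + 1 for i, fold in enumerate(ordered)}

def spearman(attr_a, attr_b):
    """Spearman rank correlation (no ties assumed in this toy example)."""
    ra, rb = ranks(attr_a), ranks(attr_b)
    folds = list(attr_a)
    n = len(folds)
    d2 = sum((ra[f] - rb[f]) ** 2 for f in folds)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print("occurrence ranks:", ranks(genome_occurrence))
print("rank correlation:", round(spearman(genome_occurrence, expression_level), 3))
```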
---
paper_title: [10] Entrez: Molecular biology database and retrieval system
paper_content:
Entrez is a biomedical information resource that has been designed to facilitate the discovery process by providing connections among biological sequences, molecular structures, and abstracts. Because it must be anticipated that the amount of data will continue to grow at phenomenal rates, the Internet would seem to be the most practical medium for the future dissemination of this information. However, besides these quantitative changes, several trends promise to alter qualitatively the nature of the nucleotide sequence database. It is necessary to be prepared for the expected volume of data, but changes to the Entrez user interface may also be needed to make effective use of it. For example, imagine finding a sequence of interest and asking for its sequence neighbors only to be presented with a complete chromosome sequence. A new graphical viewer in Entrez will allow the user to view the genomic landscape from different vantage points and make connections to the sequences, structures, and abstracts relevant to specific chromosomal regions.
---
paper_title: CORA-Topological fingerprints for protein structural families
paper_content:
CORA is a suite of programs for multiply aligning and analyzing protein structural families to identify the consensus positions and capture their most conserved structural characteristics (e.g., residue accessibility, torsional angles, and global geometry as described by inter-residue vectors/contacts). Knowledge of these structurally conserved positions, which are mostly in the core of the fold and of their properties, significantly improves the identification and classification of newly-determined relatives. Information is encoded in a consensus three-dimensional (3D) template and relatives found by a sensitive alignment method, which employs a new scoring scheme based on conserved residue contacts. By encapsulating these critical “core” features, templates perform more reliably in recognizing distant structural relatives than searches with representative structures. Parameters for 3D-template generation and alignment were optimized for each structural class (mainly-α, mainly-β, α-β), using representative superfold families. For all families selected, the templates gave significant improvements in sensitivity and selectivity in recognizing distant structural relatives. Furthermore, since templates contain less than 70% of fold positions and compare fewer positions when aligning structures, scans are at least an order of magnitude faster than scans using selected structures. CORA was subsequently tested on eight other broad structural families from the CATH database. Diagnostics plots are generated automatically and provide qualitative assistance for classifying newly determined relatives. They are demonstrated here by application to the large globin-like fold family. CORA templates for both homologous superfamilies and fold families will be stored in CATH and used to improve the classification and analysis of newly determined structures.
---
paper_title: SSAP: sequential structure alignment program for protein structure comparison.
paper_content:
This chapter discusses the methods that are flexible enough to align distantly related structures and, therefore, most suitable for identifying and analyzing protein fold families. It illustrates different ways of overcoming the various difficulties encountered and discusses the method, sequential structure alignment program (SSAP) and the various modifications that are required to handle complex similarities. The need to identify similar motifs among proteins and the development of a multiple comparison method that can identify the consensus structure for a family of related proteins are discussed. Analysis of protein structural families and identification of consensus structural templates will improve both template- and threading-based prediction algorithms for identifying the fold of a protein, particularly for the superfolds. For more complex and less frequently observed folds, characterization of common structural motifs and any associated sequence patterns would be expected to improve prediction.
---
paper_title: A RAPID algorithm for sequence database comparisons: application to the identification of vector contamination in the EMBL databases.
paper_content:
MOTIVATION: Word-matching algorithms such as BLAST are routinely used for sequence comparison. These algorithms typically use areas of matching words to seed alignments which are then used to assess the degree of sequence similarity. In this paper, we show that by formally separating the word-matching and sequence-alignment process, and using information about word frequencies to generate alignments and similarity scores, we can create a new sequence-comparison algorithm which is both fast and sensitive. The formal split between word searching and alignment allows users to select an appropriate alignment method without affecting the underlying similarity search. The algorithm has been used to develop software for identifying entries in DNA sequence databases which are contaminated with vector sequence. RESULTS: We present three algorithms, RAPID, PHAT and SPLAT, which together allow vector contaminations to be found and assessed extremely rapidly. RAPID is a word search algorithm which uses probabilities to modify the significance attached to different words; PHAT and SPLAT are alignment algorithms. An initial implementation has been shown to be approximately an order of magnitude faster than BLAST. The formal split between word searching and alignment not only offers considerable gains in performance, but also allows alignment generation to be viewed as a user interface problem, allowing the most useful output method to be selected without affecting the underlying similarity search. Receiver Operator Characteristic (ROC) analysis of an artificial test set allows the optimal score threshold for identifying vector contamination to be determined. ROC curves were also used to determine the optimum word size (nine) for finding vector contamination. An analysis of the entire expressed sequence tag (EST) subset of EMBL found a contamination rate of 0.27%. A more detailed analysis of the 50 000 ESTs in est10.dat (an EST subset of EMBL) finds an error rate of 0.86%, principally due to two large-scale projects. AVAILABILITY: A Web page for the software exists at http://bioinf.man.ac.uk/rapid, or it can be downloaded from ftp://ftp.bioinf.man.ac.uk/RAPID. CONTACT: [email protected]
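The central idea of weighting shared words by how rare (and therefore informative) they are can be sketched as follows. This is a simplified illustration of frequency-weighted word matching, not the RAPID algorithm itself, and the vector, unrelated and read sequences are toy examples.

```python
import math
from collections import Counter

def words(seq, k=9):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def word_frequencies(database, k=9):
    """Background frequency of each k-mer across a sequence database."""
    counts = Counter()
    for seq in database:
        counts.update(words(seq, k))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}, total

def similarity(query, target, freqs, total, k=9):
    """Sum of -log(frequency) over words shared by query and target,
    so that rare (informative) words contribute more than common ones."""
    shared = set(words(query, k)) & set(words(target, k))
    return sum(-math.log(freqs.get(w, 1 / total)) for w in shared)

# Toy vector-screening example: rank database entries against a query read.
vector = "GATCCTCTAGAGTCGACCTGCAGGCATGCAAGCTTGGCACTGGCC"
unrelated = "ATGTCTAAAGGTGAAGAATTATTCACTGGTGTTGTCCCAATTTTGGTT"
read = "NNNNGTCGACCTGCAGGCATGCAAGCTTGGCNNNN"  # read contaminated with vector

freqs, total = word_frequencies([vector, unrelated])
for name, target in [("vector", vector), ("unrelated", unrelated)]:
    print(name, round(similarity(read, target, freqs, total), 2))
```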
---
paper_title: Evaluation Measures of Multiple Sequence Alignments
paper_content:
Multiple sequence alignments (MSAs) are frequently used in the study of families of protein sequences or DNA/RNA sequences. They are a fundamental tool for the understanding of the structure, functionality and, ultimately, the evolution of proteins. A new algorithm, the Circular Sum (CS) method, is presented for formally evaluating the quality of an MSA. It is based on the use of a solution to the Traveling Salesman Problem, which identifies a circular tour through an evolutionary tree connecting the sequences in a protein family. With this approach, the calculation of an evolutionary tree and the errors that it would introduce can be avoided altogether. The algorithm gives an upper bound, the best score that can possibly be achieved by any MSA for a given set of protein sequences. Alternatively, if presented with a specific MSA, the algorithm provides a formal score for the MSA, which serves as an absolute measure of the quality of the MSA. The CS measure yields a direct connection between an MSA and the associated evolutionary tree. The measure can be used as a tool for evaluating different methods for producing MSAs. A brief example of the last application is provided. Because it weights all evolutionary events on a tree identically, but does not require the reconstruction of a tree, the CS algorithm has advantages over the frequently used sum-of-pairs measures for scoring MSAs, which weight some evolutionary events more strongly than others. Compared to other weighted sum-of-pairs measures, it has the advantage that no evolutionary tree must be constructed, because we can find a circular tour without knowing the tree.
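A stripped-down version of the circular-sum idea is shown below: neighbouring sequences along a given circular tour are scored pairwise on the aligned columns and the scores summed. The alignment, the tour and the match/mismatch/gap weights are toy assumptions, and the tree-based weighting and normalisation of the real method are omitted.

```python
# Circular Sum-style evaluation: sum pairwise scores of neighbouring
# sequences along a circular tour. The alignment, tour and scoring scheme
# below are toy placeholders (match +1, mismatch -1, gap-vs-residue -2,
# gap-vs-gap 0), not the scheme used in the original method.
MSA = {
    "seq1": "MKV-LITA",
    "seq2": "MKVQLITA",
    "seq3": "MRV-LISA",
    "seq4": "MRVQLVSA",
}
TOUR = ["seq1", "seq2", "seq4", "seq3"]   # circular order; assumed given

def pair_score(a, b, match=1, mismatch=-1, gap=-2):
    score = 0
    for x, y in zip(a, b):
        if x == "-" and y == "-":
            continue
        elif x == "-" or y == "-":
            score += gap
        elif x == y:
            score += match
        else:
            score += mismatch
    return score

def circular_sum(msa, tour):
    return sum(pair_score(msa[tour[i]], msa[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

print("CS score:", circular_sum(MSA, TOUR))
```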
---
paper_title: Biosequence exegesis.
paper_content:
Annotation of large-scale gene sequence data will benefit from comprehensive and consistent application of well-documented, standard analysis methods and from progressive and vigilant efforts to ensure quality and utility and to keep the annotation up to date. However, it is imperative to learn how to apply information derived from functional genomics and proteomics technologies to conceptualize and explain the behaviors of biological systems. Quantitative and dynamical models of systems behaviors will supersede the limited and static forms of single-gene annotation that are now the norm. Molecular biological epistemology will increasingly encompass both teleological and causal explanations.
---
paper_title: Protein-DNA interactions: a structural analysis
paper_content:
A detailed analysis of the DNA-binding sites of 26 proteins is presented using data from the Nucleic Acid Database (NDB) and the Protein Data Bank (PDB). Chemical and physical properties of the protein-DNA interface, such as polarity, size, shape, and packing, were analysed. The DNA-binding sites shared common features, comprising many discontinuous sequence segments forming hydrophilic surfaces capable of direct and water-mediated hydrogen bonds. These interface sites were compared to those of protein-protein binding sites, revealing them to be more polar, with many more intermolecular hydrogen bonds and buried water molecules than the protein-protein interface sites. By looking at the number and positioning of protein residue-DNA base interactions in a series of interaction footprints, three modes of DNA binding were identified (single-headed, double-headed and enveloping). Six of the eight enzymes in the data set bound in the enveloping mode, with the protein presenting a large interface area effectively wrapped around the DNA. A comparison of structural parameters of the DNA revealed that some values for the bound DNA (including twist, slide and roll) were intermediate of those observed for the unbound B-DNA and A-DNA. The distortion of bound DNA was evaluated by calculating a root-mean-square deviation on fitting to a canonical B-DNA structure. Major distortions were commonly caused by specific kinks in the DNA sequence, some resulting in the overall bending of the helix. The helix bending affected the dimensions of the grooves in the DNA, allowing the binding of protein elements that would otherwise be unable to make contact. From this structural analysis a preliminary set of rules that govern the bending of the DNA in protein-DNA complexes is proposed.
---
paper_title: MIPS: a database for genomes and protein sequences
paper_content:
The Munich Information Center for Protein Sequences (MIPS-GSF), Martinsried near Munich, Germany, develops and maintains genome oriented databases. It is commonplace that the amount of sequence data available increases rapidly, but not the capacity of qualified manual annotation at the sequence databases. Therefore, our strategy aims to cope with the data stream by the comprehensive application of analysis tools to sequences of complete genomes, the systematic classification of protein sequences and the active support of sequence analysis and functional genomics projects. This report describes the systematic and up-to-date analysis of genomes (PEDANT), a comprehensive database of the yeast genome (MYGD), a database reflecting the progress in sequencing the Arabidopsis thaliana genome (MATD), the database of assembled, annotated human EST clusters (MEST), and the collection of protein sequence data within the framework of the PIR-International Protein Sequence Database (described elsewhere in this volume). MIPS provides access through its WWW server (http://www.mips.biochem.mpg.de) to a spectrum of generic databases, including the above mentioned as well as a database of protein families (PROTFAM), the MITOP database, and the all-against-all FASTA database.
---
paper_title: RegulonDB (version 3.0): transcriptional regulation and operon organization in Escherichia coli K-12
paper_content:
RegulonDB is a database on transcription regulation and operon organization in Escherichia coli. The current version describes regulatory signals of transcription initiation, promoters, regulatory binding sites of specific regulators, ribosome binding sites and terminators, as well as information on genes clustered in operons. These specific annotations have been gathered from a constant search in the literature, as well as based on computational sequence predictions. The genomic coordinates of all these objects in the E.coli K-12 chromosome are clearly indicated. Every known object has a link to at least one MEDLINE reference. We have also added direct links to recent expression data of E.coli K-12. The version presented here has important modifications both in the structure of the database, as well as in the amount and type of information encoded in the database. RegulonDB can be accessed on the web at URL: http://www.cifn.unam.mx/Computational_Biology/regulondb/
---
paper_title: DNA-binding proteins and evolution of transcription regulation in the archaea
paper_content:
Likely DNA-binding domains in archaeal proteins were analyzed using sequence profile methods and available structural information. It is shown that all archaea encode a large number of proteins containing the helix-turn-helix (HTH) DNA-binding domains whose sequences are much more similar to bacterial HTH domains than to eukaryotic ones, such as the PAIRED, POU and homeodomains. The predominant class of HTH domains in archaea is the winged-HTH domain. The number and diversity of HTH domains in archaea is comparable to that seen in bacteria. The HTH domain in archaea combines with a variety of other domains that include replication system components, such as MCM proteins, translation system components, such as the alpha-subunit of phenyl-alanyl-tRNA synthetase, and several metabolic enzymes. The majority of the archaeal HTH-containing proteins are predicted to be gene/operon-specific transcriptional regulators. This apparent bacterial-type mode of transcription regulation is in sharp contrast to the eukaryote-like layout of the core transcription machinery in the archaea. In addition to the predicted bacterial-type transcriptional regulators, the HTH domain is conserved in archaeal and eukaryotic core transcription factors, such as TFIIB, TFIIE-alpha and MBF1. MBF1 is the only highly conserved, classical HTH domain that is vertically inherited in all archaea and eukaryotes. In contrast, while eukaryotic TFIIB and TFIIE-alpha possess forms of the HTH domain that are divergent in sequence, their archaeal counterparts contain typical HTH domains. It is shown that, besides the HTH domain, archaea encode unexpectedly large numbers of two other predicted DNA-binding domains, namely the Arc/MetJ domain and the Zn-ribbon. The core transcription regulators in archaea and eukaryotes (TFIIB/TFB, TFIIE-alpha and MBF1) and in bacteria (the sigma factors) share no similarity beyond the presence of distinct HTH domains. Thus HTH domains might have been independently recruited for a role in transcription regulation in the bacterial and archaeal/eukaryotic lineages. During subsequent evolution, the similarity between archaeal and bacterial gene/operon transcriptional regulators might have been established and maintained through multiple horizontal gene transfer events.
---
paper_title: Advances in structural genomics
paper_content:
New computational techniques have allowed protein folds to be assigned to all or parts of between a quarter (Caenorhabditis elegans) and a half (Mycoplasma genitalium) of the individual protein sequences in different genomes. These assignments give a new perspective on domain structures, gene duplications, protein families and protein folds in genome sequences.
---
paper_title: The repertoire of DNA-binding transcriptional regulators in Escherichia coli K-12.
paper_content:
Using a combination of several approaches we estimated and characterized a total of 314 regulatory DNA-binding proteins in Escherichia coli, which might represent its minimal set of transcription factors. The collection is comprised of 35% activators, 43% repressors and 22% dual regulators. Within many regulatory protein families, the members are homogeneous in their regulatory roles, physiology of regulated genes, regulatory function, length and genome position, showing that these families have evolved homogeneously in prokaryotes, particularly in E.coli. This work describes a full characterization of the repertoire of regulatory interactions in a whole living cell. This repertoire should contribute to the interpretation of global gene expression profiles in both prokaryotes and eukaryotes.
---
paper_title: TRANSFAC: an integrated system for gene expression regulation
paper_content:
TRANSFAC is a database on transcription factors, their genomic binding sites and DNA-binding profiles (http://transfac.gbf.de/TRANSFAC/). Its content has been enhanced, in particular by information about training sequences used for the construction of nucleotide matrices as well as by data on plant sites and factors. Moreover, TRANSFAC has been extended by two new modules: PathoDB provides data on pathologically relevant mutations in regulatory regions and transcription factor genes, whereas S/MARt DB compiles features of scaffold/matrix attached regions (S/MARs) and the proteins binding to them. Additionally, the databases TRANSPATH, about signal transduction, and CYTOMER, about organs and cell types, have been extended and are increasingly integrated with the TRANSFAC data sources.
---
paper_title: Analysis of the yeast transcriptome with structural and functional categories: characterizing highly expressed proteins
paper_content:
We analyzed 10 genome expression data sets by large-scale cross-referencing against broad structural and functional categories. The data sets, generated by different techniques (e.g. SAGE and gene chips), provide various representations of the yeast transcriptome (the set of all yeast genes, weighted by transcript abundance). Our analysis enabled us to determine features more prevalent in the transcriptome than the genome: i.e. those that are common to highly expressed proteins. Starting with simplest categories, we find that, relative to the genome, the transcriptome is enriched in Ala and Gly and depleted in Asn and very long proteins. We find, furthermore, that protein length and maximum expression level have a roughly inverse relationship. To relate expression level and protein structure, we assigned transmembrane helices and known folds (using PSI-blast) to each protein in the genome; this allowed us to determine that the transcriptome is enriched in mixed alpha-beta structures and depleted in membrane proteins relative to the genome. In particular, some enzymatic folds, such as the TIM barrel and the G3P dehydrogenase fold, are much more prevalent in the transcriptome than the genome, whereas others, such as the protein-kinase and leucine-zipper folds, are depleted. The TIM barrel, in fact, is overwhelmingly the 'top fold' in the transcriptome, while it only ranks fifth in the genome. The most highly enriched functional categories in the transcriptome (based on the MIPS system) are energy production and protein synthesis, while categories such as transcription, transport and signaling are depleted. Furthermore, for a given functional category, transcriptome enrichment varies quite substantially between the different expression data sets, with a variation an order of magnitude larger than for the other categories cross-referenced (e.g. amino acids). One can readily see how the enrichment and depletion of the various functional categories relates directly to that of particular folds.
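The genome-versus-transcriptome comparison reduces to weighting each gene by its transcript abundance and comparing category fractions, as in the sketch below. The gene list, fold labels and mRNA copy numbers are hypothetical placeholders.

```python
# Genome vs transcriptome composition for a category (e.g. a fold class).
# Each gene carries a category label and an mRNA copy number; all values
# below are hypothetical.
genes = [
    {"name": "g1", "fold": "TIM-barrel",     "mrna": 120.0},
    {"name": "g2", "fold": "TIM-barrel",     "mrna": 45.0},
    {"name": "g3", "fold": "protein-kinase", "mrna": 2.0},
    {"name": "g4", "fold": "leucine-zipper", "mrna": 1.5},
    {"name": "g5", "fold": "G3P-dh-like",    "mrna": 80.0},
]

def enrichment(genes, category):
    """Transcriptome fraction divided by genome fraction for a category."""
    in_cat = [g for g in genes if g["fold"] == category]
    genome_frac = len(in_cat) / len(genes)
    total_mrna = sum(g["mrna"] for g in genes)
    transcriptome_frac = sum(g["mrna"] for g in in_cat) / total_mrna
    return transcriptome_frac / genome_frac

for fold in ("TIM-barrel", "protein-kinase"):
    print(fold, round(enrichment(genes, fold), 2))
```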
---
paper_title: Protein function in the post-genomic era.
paper_content:
Faced with the avalanche of genomic sequences and data on messenger RNA expression, biological scientists are confronting a frightening prospect: piles of information but only flakes of knowledge. How can the thousands of sequences being determined and deposited, and the thousands of expression profiles being generated by the new array methods, be synthesized into useful knowledge? What form will this knowledge take? These are questions being addressed by scientists in the field known as 'functional genomics'.
---
paper_title: Dissecting the Regulatory Circuitry of a Eukaryotic Genome
paper_content:
Genome-wide expression analysis was used to identify genes whose expression depends on the functions of key components of the transcription initiation machinery in yeast. Components of the RNA polymerase II holoenzyme, the general transcription factor TFIID, and the SAGA chromatin modification complex were found to have roles in expression of distinct sets of genes. The results reveal an unanticipated level of regulation which is superimposed on that due to gene-specific transcription factors, a novel mechanism for coordinate regulation of specific sets of genes when cells encounter limiting nutrients, and evidence that the ultimate targets of signal transduction pathways can be identified within the initiation apparatus.
---
paper_title: Detecting Protein Function and Protein-Protein Interactions from Genome Sequences
paper_content:
A computational method is proposed for inferring protein interactions from genome sequences on the basis of the observation that some pairs of interacting proteins have homologs in another organism fused into a single protein chain. Searching sequences from many genomes revealed 6809 such putative protein-protein interactions in Escherichia coli and 45,502 in yeast. Many members of these pairs were confirmed as functionally related; computational filtering further enriches for interactions. Some proteins have links to several other proteins; these coupled links appear to represent functional interactions such as complexes or pathways. Experimentally confirmed interacting pairs are documented in a Database of Interacting Proteins.
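The fusion-based inference can be sketched by grouping query proteins whose homologs include a common reference protein; the homology table below is a hypothetical placeholder, and the check that the two hits cover distinct regions of the fusion protein, as well as the filtering steps used in the paper, are omitted.

```python
from itertools import combinations
from collections import defaultdict

# Homology hits of query-organism proteins against a reference proteome,
# e.g. produced by a prior sequence search (placeholder data).
homologs = {
    "gyrA": {"ref_topoII"},   # assumed to hit one part of a fusion protein
    "gyrB": {"ref_topoII"},   # assumed to hit another part of the same protein
    "trpC": {"ref_trpCF"},
    "trpF": {"ref_trpCF"},
    "rpoB": {"ref_rpoB"},
}

def fusion_links(homologs):
    """Pairs of query proteins whose homologs include a common reference
    protein, i.e. candidate 'fused-elsewhere' interaction partners."""
    by_ref = defaultdict(set)
    for query, refs in homologs.items():
        for ref in refs:
            by_ref[ref].add(query)
    links = set()
    for ref, queries in by_ref.items():
        for a, b in combinations(sorted(queries), 2):
            links.add((a, b, ref))
    return sorted(links)

for a, b, ref in fusion_links(homologs):
    print(f"{a} -- {b}   (fused in {ref})")
```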
---
paper_title: RNA expression patterns change dramatically in human neutrophils exposed to bacteria
paper_content:
A comprehensive study of changes in messenger RNA (mRNA) levels in human neutrophils following exposure to bacteria is described. Within 2 hours there are dramatic changes in the levels of several hundred mRNAs including those for a variety of cytokines, receptors, apoptosis-regulating products, and membrane trafficking regulators. In addition, there are a large number of up-regulated mRNAs that appear to represent a common core of activation response genes that have been identified as early-response products to a variety of stimuli in a number of other cell types. The activation response of neutrophils to nonpathogenic bacteria is greatly altered by exposure to Yersinia pestis, which may be a major factor contributing to the virulence and rapid progression of plague. Several gene clusters were created based on the patterns of gene induction caused by different bacteria. These clusters were consistent with those found by a principal components analysis. A number of the changes could be interpreted in terms of neutrophil physiology and the known functions of the genes. These findings indicate that active regulation of gene expression plays a major role in the neutrophil contribution to the cellular inflammatory response. Interruption of these changes by pathogens, such as Y pestis, could be responsible, at least in part, for the failure to contain infections by highly virulent organisms.
---
paper_title: Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays
paper_content:
Oligonucleotide arrays can provide a broad picture of the state of the cell, by monitoring the expression level of thousands of genes at the same time. It is of interest to develop techniques for extracting useful information from the resulting data sets. Here we report the application of a two-way clustering method for analyzing a data set consisting of the expression patterns of different cell types. Gene expression in 40 tumor and 22 normal colon tissue samples was analyzed with an Affymetrix oligonucleotide array complementary to more than 6,500 human genes. An efficient two-way clustering algorithm was applied to both the genes and the tissues, revealing broad coherent patterns that suggest a high degree of organization underlying gene expression in these tissues. Coregulated families of genes clustered together, as demonstrated for the ribosomal proteins. Clustering also separated cancerous from noncancerous tissue and cell lines from in vivo tissues on the basis of subtle distributed patterns of genes even when expression of individual genes varied only slightly between the tissues. Two-way clustering thus may be of use both in classifying genes into functional groups and in classifying tissues based on gene expression.
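As a rough illustration of reordering both genes and tissues, the sketch below applies standard hierarchical clustering to the rows and columns of a simulated expression matrix. It is not the specific two-way clustering algorithm used in the study, and the data are random with one planted block of co-expressed genes.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

# Toy expression matrix: rows = genes, columns = tissue samples.
# Real inputs would be array measurements like those described above.
rng = np.random.default_rng(0)
expr = rng.normal(size=(30, 12))
expr[:10, :6] += 2.0          # plant a block of co-expressed genes/tissues

def two_way_order(matrix, method="average", metric="correlation"):
    """Leaf orderings for genes (rows) and samples (columns)."""
    gene_order = leaves_list(linkage(matrix, method=method, metric=metric))
    sample_order = leaves_list(linkage(matrix.T, method=method, metric=metric))
    return gene_order, sample_order

gene_order, sample_order = two_way_order(expr)
reordered = expr[np.ix_(gene_order, sample_order)]
print("gene order:", gene_order[:10], "...")
print("sample order:", sample_order)
```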
---
paper_title: Systematic determination of genetic network architecture
paper_content:
Technologies to measure whole-genome mRNA abundances and methods to organize and display such data are emerging as valuable tools for systems-level exploration of transcriptional regulatory networks. For instance, it has been shown that mRNA data from 118 genes, measured at several time points in the developing hindbrain of mice, can be hierarchically clustered into various patterns (or 'waves') whose members tend to participate in common processes. We have previously shown that hierarchical clustering can group together genes whose cis-regulatory elements are bound by the same proteins in vivo. Hierarchical clustering has also been used to organize genes into hierarchical dendograms on the basis of their expression across multiple growth conditions. The application of Fourier analysis to synchronized yeast mRNA expression data has identified cell-cycle periodic genes, many of which have expected cis-regulatory elements. Here we apply a systematic set of statistical algorithms, based on whole-genome mRNA data, partitional clustering and motif discovery, to identify transcriptional regulatory sub-networks in yeast-without any a priori knowledge of their structure or any assumptions about their dynamics. This approach uncovered new regulons (sets of co-regulated genes) and their putative cis-regulatory elements. We used statistical characterization of known regulons and motifs to derive criteria by which we infer the biological significance of newly discovered regulons and motifs. Our approach holds promise for the rapid elucidation of genetic network architecture in sequenced organisms in which little biology is known.
---
paper_title: Cluster analysis and display of genome-wide expression patterns
paper_content:
A system of cluster analysis for genome-wide expression data from DNA microarray hybridization is described that uses standard statistical algorithms to arrange genes according to similarity in pattern of gene expression. The output is displayed graphically, conveying the clustering and the underlying expression data simultaneously in a form intuitive for biologists. We have found in the budding yeast Saccharomyces cerevisiae that clustering gene expression data groups together efficiently genes of known similar function, and we find a similar tendency in human data. Thus patterns seen in genome-wide expression experiments can be interpreted as indications of the status of cellular processes. Also, coexpression of genes of known function with poorly characterized or novel genes may provide a simple means of gaining leads to the functions of many genes for which information is not available currently.
---
paper_title: Interpreting patterns of gene expression with self-organizing maps: methods and application to hematopoietic differentiation.
paper_content:
Array technologies have made it straightforward to monitor simultaneously the expression pattern of thousands of genes. The challenge now is to interpret such massive data sets. The first step is to extract the fundamental patterns of gene expression inherent in the data. This paper describes the application of self-organizing maps, a type of mathematical cluster analysis that is particularly well suited for recognizing and classifying features in complex, multidimensional data. The method has been implemented in a publicly available computer package, GENECLUSTER, that performs the analytical calculations and provides easy data visualization. To illustrate the value of such analysis, the approach is applied to hematopoietic differentiation in four well studied models (HL-60, U937, Jurkat, and NB4 cells). Expression patterns of some 6,000 human genes were assayed, and an online database was created. GENECLUSTER was used to organize the genes into biologically relevant clusters that suggest novel hypotheses about hematopoietic differentiation; for example, highlighting certain genes and pathways involved in "differentiation therapy" used in the treatment of acute promyelocytic leukemia.
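A minimal self-organizing map can be written in a few lines of NumPy, as below. The grid size, learning-rate schedule and simulated expression profiles are arbitrary choices for illustration, and GENECLUSTER itself provides considerably more machinery.

```python
import numpy as np

def train_som(data, grid=(3, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map: returns node weights of shape (gx, gy, d)."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    d = data.shape[1]
    weights = rng.normal(size=(gx, gy, d))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for x in rng.permutation(data):
            # best-matching unit for this input profile
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # neighbourhood-weighted update pulls nearby nodes toward x
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

# Toy "expression profiles": 200 genes measured at 6 time points.
rng = np.random.default_rng(1)
profiles = np.concatenate([rng.normal(m, 0.3, size=(100, 6))
                           for m in (np.linspace(-1, 1, 6), np.linspace(1, -1, 6))])
som = train_som(profiles)
print("SOM node weight shape:", som.shape)
```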
---
paper_title: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring
paper_content:
Although cancer classification has improved over the past 30 years, there has been no general approach for identifying new cancer classes (class discovery) or for assigning tumors to known classes (class prediction). Here, a generic approach to cancer classification based on gene expression monitoring by DNA microarrays is described and applied to human acute leukemias as a test case. A class discovery procedure automatically discovered the distinction between acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) without previous knowledge of these classes. An automatically derived class predictor was able to determine the class of new leukemia cases. The results demonstrate the feasibility of cancer classification based solely on gene expression monitoring and suggest a general strategy for discovering and predicting cancer classes for other types of cancer, independent of previous biological knowledge.
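The class-prediction side can be sketched as ranking genes by a signal-to-noise statistic between the two classes and letting the top markers cast weighted votes for a new sample. The data below are simulated, and this is a simplified reading of the approach rather than a reimplementation of the published predictor.

```python
import numpy as np

def signal_to_noise(x, labels):
    """Per-gene (mean0 - mean1) / (std0 + std1) between the two classes."""
    a, b = x[:, labels == 0], x[:, labels == 1]
    return (a.mean(axis=1) - b.mean(axis=1)) / (a.std(axis=1) + b.std(axis=1))

def predict(sample, x, labels, n_markers=10):
    """Weighted voting: each marker votes for the class whose mean it is nearer."""
    s2n = signal_to_noise(x, labels)
    markers = np.argsort(np.abs(s2n))[::-1][:n_markers]
    vote_total = 0.0
    for g in markers:
        midpoint = 0.5 * (x[g, labels == 0].mean() + x[g, labels == 1].mean())
        vote_total += s2n[g] * (sample[g] - midpoint)   # >0 favours class 0
    return 0 if vote_total > 0 else 1

# Simulated data: 50 genes x 20 samples, two classes, a few informative genes.
rng = np.random.default_rng(2)
labels = np.array([0] * 10 + [1] * 10)
x = rng.normal(size=(50, 20))
x[:5, labels == 0] += 2.0          # genes 0-4 are higher in class 0
new_sample = rng.normal(size=50)
new_sample[:5] += 2.0              # looks like class 0
print("predicted class:", predict(new_sample, x, labels))
```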
---
paper_title: Large-scale temporal gene expression mapping of central nervous system development
paper_content:
We used reverse transcription–coupled PCR to produce a high-resolution temporal map of fluctuations in mRNA expression of 112 genes during rat central nervous system development, focusing on the cervical spinal cord. The data provide a temporal gene expression “fingerprint” of spinal cord development based on major families of inter- and intracellular signaling genes. By using distance matrices for the pair-wise comparison of these 112 temporal gene expression patterns as the basis for a cluster analysis, we found five basic “waves” of expression that characterize distinct phases of development. The results suggest functional relationships among the genes fluctuating in parallel. We found that genes belonging to distinct functional classes and gene families clearly map to particular expression profiles. The concepts and data analysis discussed herein may be useful in objectively identifying coherent patterns and sequences of events in the complex genetic signaling network of development. Functional genomics approaches such as this may have applications in the elucidation of complex developmental and degenerative disorders.
---
paper_title: A Bayesian system integrating expression data with sequence patterns for localizing proteins: comprehensive application to the yeast genome
paper_content:
We develop a probabilistic system for predicting the subcellular localization of proteins and estimating the relative population of the various compartments in yeast. Our system employs a Bayesian approach, updating a protein’s probability of being in a compartment, based on a diverse range of 30 features. These range from specific motifs (e.g. signal sequences or the HDEL motif) to overall properties of a sequence (e.g. surface composition or isoelectric point) to whole-genome data (e.g. absolute mRNA expression levels or their fluctuations). The strength of our approach is the easy integration of many features, particularly the whole-genome expression data. We construct a training and testing set of 1300 yeast proteins with an experimentally known localization from merging, filtering, and standardizing the annotation in the MIPS, SwissProt and YPD databases, and we achieve 75 % accuracy on individual protein predictions using this dataset. Moreover, we are able to estimate the relative protein population of the various compartments without requiring a definite localization for every protein. This approach, which is based on an analogy to formalism in quantum mechanics, gives better accuracy in determining relative compartment populations than that obtained by simply tallying the localization predictions for individual proteins (on the yeast proteins with known localization, 92 % versus 74 %). Our training and testing also highlights which of the 30 features are informative and which are redundant (19 being particularly useful). After developing our system, we apply it to the 4700 yeast proteins with currently unknown localization and estimate the relative population of the various compartments in the entire yeast genome. An unbiased prior is essential to this extrapolated estimate; for this, we use the MIPS localization catalogue, and adapt recent results on the localization of yeast proteins obtained by Snyder and colleagues using a minitransposon system. Our final localizations for all 6000 proteins in the yeast genome are available over the web at: http://bioinfo.mbb.yale.edu/genome/localize
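The core Bayesian update amounts to multiplying a prior over compartments by a likelihood for each observed feature and renormalising, as in the sketch below. The compartments, priors and per-feature likelihoods are hypothetical placeholders; the actual system integrates roughly 30 sequence and expression features.

```python
# Bayesian update of subcellular-localization probabilities from a handful
# of binary features. Compartments, priors and likelihoods are hypothetical
# placeholders for illustration only.
PRIOR = {"cytoplasm": 0.45, "nucleus": 0.30, "mitochondria": 0.10,
         "secretory": 0.15}

# P(feature present | compartment) for each binary feature.
LIKELIHOOD = {
    "signal_peptide":  {"cytoplasm": 0.02, "nucleus": 0.01,
                        "mitochondria": 0.05, "secretory": 0.80},
    "nls_motif":       {"cytoplasm": 0.05, "nucleus": 0.60,
                        "mitochondria": 0.05, "secretory": 0.05},
    "high_expression": {"cytoplasm": 0.50, "nucleus": 0.25,
                        "mitochondria": 0.30, "secretory": 0.20},
}

def posterior(observed_features, prior=PRIOR, likelihood=LIKELIHOOD):
    """Multiply the prior by each feature's likelihood, then renormalise."""
    post = dict(prior)
    for feature, present in observed_features.items():
        for comp in post:
            p = likelihood[feature][comp]
            post[comp] *= p if present else (1 - p)
    total = sum(post.values())
    return {comp: v / total for comp, v in post.items()}

protein = {"signal_peptide": False, "nls_motif": True, "high_expression": True}
for comp, p in sorted(posterior(protein).items(), key=lambda kv: -kv[1]):
    print(f"{comp:13s} {p:.2f}")
```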
---
paper_title: Systematic variation in gene expression patterns in human cancer cell lines
paper_content:
We used cDNA microarrays to explore the variation in expression of approximately 8,000 unique genes among the 60 cell lines used in the National Cancer Institute's screen for anti-cancer drugs. Classification of the cell lines based solely on the observed patterns of gene expression revealed a correspondence to the ostensible origins of the tumours from which the cell lines were derived. The consistent relationship between the gene expression patterns and the tissue of origin allowed us to recognize outliers whose previous classification appeared incorrect. Specific features of the gene expression patterns appeared to be related to physiological properties of the cell lines, such as their doubling time in culture, drug metabolism or the interferon response. Comparison of gene expression patterns in the cell lines to those observed in normal breast tissue or in breast tumour specimens revealed features of the expression patterns in the tumours that had recognizable counterparts in specific cell lines, reflecting the tumour, stromal and inflammatory components of the tumour tissue. These results provided a novel molecular characterization of this important group of human cell lines and their relationships to tumours in vivo.
---
paper_title: Distinctive gene expression patterns in human mammary epithelial cells and breast cancers.
paper_content:
cDNA microarrays and a clustering algorithm were used to identify patterns of gene expression in human mammary epithelial cells growing in culture and in primary human breast tumors. Clusters of coexpressed genes identified through manipulations of mammary epithelial cells in vitro also showed consistent patterns of variation in expression among breast tumor samples. By using immunohistochemistry with antibodies against proteins encoded by a particular gene in a cluster, the identity of the cell type within the tumor specimen that contributed the observed gene expression pattern could be determined. Clusters of genes with coherent expression patterns in cultured cells and in the breast tumors samples could be related to specific features of biological variation among the samples. Two such clusters were found to have patterns that correlated with variation in cell proliferation rates and with activation of the IFN-regulated signal transduction pathway, respectively. Clusters of genes expressed by stromal cells and lymphocytes in the breast tumors also were identified in this analysis. These results support the feasibility and usefulness of this systematic approach to studying variation in gene expression patterns in human cancers as a means to dissect and classify solid tumors.
---
paper_title: Relating whole-genome expression data with protein-protein interactions
paper_content:
We investigate the relationship of protein-protein interactions with mRNA expression levels, by integrating a variety of data sources for yeast. We focus on known protein complexes that have clearly defined interactions between their subunits. We find that subunits of the same protein complex show significant coexpression, both in terms of similarities of absolute mRNA levels and expression profiles, e.g., we can often see subunits of a complex having correlated patterns of expression over a time course. We classify the yeast protein complexes as either permanent or transient, with permanent ones being maintained through most cellular conditions. We find that, generally, permanent complexes, such as the ribosome and proteasome, have a particularly strong relationship with expression, while transient ones do not. However, we note that several transient complexes, such as the RNA polymerase II holoenzyme and the replication complex, can be subdivided into smaller permanent ones, which do have a strong relationship to gene expression. We also investigated the interactions in aggregated, genome-wide data sets, such as the comprehensive yeast two-hybrid experiments, and found them to have only a weak relationship with gene expression, similar to that of transient complexes. (Further details on genecensus.org/expression/interactions and bioinfo.mbb.yale.edu/expression/interactions.)
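The basic test described above compares the average pairwise expression correlation among subunits of a complex with that of random gene pairs; in the sketch below the expression matrix and the complex membership are simulated placeholders rather than real yeast data.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_corr(expr, members):
    """Average Pearson correlation of expression profiles over all member pairs."""
    pairs = list(combinations(members, 2))
    return float(np.mean([np.corrcoef(expr[i], expr[j])[0, 1] for i, j in pairs]))

# Simulated expression profiles: 100 genes x 20 conditions. Genes 0-9 are
# driven by a shared pattern, standing in for subunits of a permanent complex.
rng = np.random.default_rng(3)
expr = rng.normal(size=(100, 20))
shared = rng.normal(size=20)
expr[:10] = shared + 0.4 * rng.normal(size=(10, 20))

complex_members = list(range(10))
random_pairs = [rng.choice(100, size=2, replace=False) for _ in range(45)]
random_corr = float(np.mean([np.corrcoef(expr[i], expr[j])[0, 1]
                             for i, j in random_pairs]))

print("complex subunits:", round(mean_pairwise_corr(expr, complex_members), 2))
print("random gene pairs:", round(random_corr, 2))
```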
---
paper_title: The current excitement in bioinformatics—analysis of whole-genome expression data: how does it relate to protein structure and function?
paper_content:
Whole-genome expression profiles provide a rich new datatrove for bioinformatics. Initial analyses of the profiles have included clustering and cross-referencing to ‘external’ information on protein structure and function. Expression profile clusters do relate to protein function, but the correlation is not perfect, with the discrepancies partially resulting from the difficulty in consistently defining function. Other attributes of proteins can also be related to expression — in particular, structure and localization — and sometimes show a clearer relationship than function.
---
paper_title: Molecular portraits of human breast tumours
paper_content:
Human breast tumours are diverse in their natural history and in their responsiveness to treatments. Variation in transcriptional programs accounts for much of the biological diversity of human cells and tumours. In each cell, signal transduction and regulatory systems transduce information from the cell's identity to its environmental status, thereby controlling the level of expression of every gene in the genome. Here we have characterized variation in gene expression patterns in a set of 65 surgical specimens of human breast tumours from 42 different individuals, using complementary DNA microarrays representing 8,102 human genes. These patterns provided a distinctive molecular portrait of each tumour. Twenty of the tumours were sampled twice, before and after a 16-week course of doxorubicin chemotherapy, and two tumours were paired with a lymph node metastasis from the same patient. Gene expression patterns in two tumour samples from the same individual were almost always more similar to each other than either was to any other sample. Sets of co-expressed genes were identified for which variation in messenger RNA levels could be related to specific features of physiological variation. The tumours could be classified into subtypes distinguished by pervasive differences in their gene expression patterns.
---
paper_title: Comparative Protein Modelling by Satisfaction of Spatial Restraints
paper_content:
We describe a comparative protein modelling method designed to find the most probable structure for a sequence given its alignment with related structures. The three-dimensional (3D) model is obtained by optimally satisfying spatial restraints derived from the alignment and expressed as probability density functions (pdfs) for the features restrained. For example, the probabilities for main-chain conformations of a modelled residue may be restrained by its residue type, main-chain conformation of an equivalent residue in a related protein, and the local similarity between the two sequences. Several such pdfs are obtained from the correlations between structural features in 17 families of homologous proteins which have been aligned on the basis of their 3D structures. The pdfs restrain Cα-Cα distances, main-chain N-O distances, main-chain and side-chain dihedral angles. A smoothing procedure is used in the derivation of these relationships to minimize the problem of a sparse database. The 3D model of a protein is obtained by optimization of the molecular pdf such that the model violates the input restraints as little as possible. The molecular pdf is derived as a combination of pdfs restraining individual spatial features of the whole molecule. The optimization procedure is a variable target function method that applies the conjugate gradients algorithm to positions of all non-hydrogen atoms. The method is automated and is illustrated by the modelling of trypsin from two other serine proteinases.
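As a rough illustration of modelling by satisfaction of spatial restraints (not the MODELLER implementation; the restraint list and the simple quadratic objective are invented for the sketch), one can minimize the violation of Gaussian distance restraints with a conjugate-gradients optimizer:

import numpy as np
from scipy.optimize import minimize

n_atoms = 5
# (i, j, mean, stddev): each pair of atoms is restrained by a Gaussian pdf on its distance
restraints = [(0, 1, 3.8, 0.2), (1, 2, 3.8, 0.2), (2, 3, 3.8, 0.2), (3, 4, 3.8, 0.2), (0, 4, 10.0, 1.0)]

def objective(flat_coords):
    xyz = flat_coords.reshape(n_atoms, 3)
    # negative log of a product of Gaussian restraint pdfs, up to an additive constant
    return sum(((np.linalg.norm(xyz[i] - xyz[j]) - mu) / sigma) ** 2 for i, j, mu, sigma in restraints)

x0 = np.random.default_rng(1).normal(scale=5.0, size=3 * n_atoms)
result = minimize(objective, x0, method="CG")   # conjugate gradients on all coordinates
print("remaining restraint violation:", round(result.fun, 4))

The real method optimizes a far richer molecular pdf (distances, dihedrals, homology-derived terms) with a variable target function, but the basic shape of the computation, restraints in and optimized coordinates out, is similar.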
---
paper_title: Deletions of the Short Arm of Chromosome 3 in Solid Tumors and the Search for Suppressor Genes
paper_content:
This chapter discusses the methodological advantages and limitations of the various approaches used to localize tumor suppressor genes on 3p, the short arm of chromosome 3. The results of functional assays of tumor suppression by transfer of part of chromosome 3 into tumor cell lines are also discussed. The two approaches routinely used to detect deletions in tumor cells or tumor-derived cell lines as a possible indication of the location of a tumor suppressor gene are karyotyping and analysis of loss of heterozygosity. The role of 3p in tumor suppression is confirmed by observations that spontaneous tumorigenic mutants in nontumorigenic immortalized cell lines show loss of parts of 3p. Such cellular models offer an alternative approach to define the 3p regions involved in some specific tumors. In several types of tumor, an increase in the size of 3p deletions with the stage of disease is observed. Searches for genes are in progress for some of the more precisely defined smaller regions with presumed tumor suppressor activity on 3p. The only gene on 3p considered as a tumor suppressor, VHL, is identified because of its involvement in a hereditary cancer syndrome, allowing the collection of families and the application of linkage analysis to pinpoint the gene.
---
paper_title: Analysis of the yeast transcriptome with structural and functional categories: characterizing highly expressed proteins
paper_content:
We analyzed 10 genome expression data sets by large-scale cross-referencing against broad structural and functional categories. The data sets, generated by different techniques (e.g. SAGE and gene chips), provide various representations of the yeast transcriptome (the set of all yeast genes, weighted by transcript abundance). Our analysis enabled us to determine features more prevalent in the transcriptome than the genome: i.e. those that are common to highly expressed proteins. Starting with simplest categories, we find that, relative to the genome, the transcriptome is enriched in Ala and Gly and depleted in Asn and very long proteins. We find, furthermore, that protein length and maximum expression level have a roughly inverse relationship. To relate expression level and protein structure, we assigned transmembrane helices and known folds (using PSI-blast) to each protein in the genome; this allowed us to determine that the transcriptome is enriched in mixed alpha-beta structures and depleted in membrane proteins relative to the genome. In particular, some enzymatic folds, such as the TIM barrel and the G3P dehydrogenase fold, are much more prevalent in the transcriptome than the genome, whereas others, such as the protein-kinase and leucine-zipper folds, are depleted. The TIM barrel, in fact, is overwhelmingly the 'top fold' in the transcriptome, while it only ranks fifth in the genome. The most highly enriched functional categories in the transcriptome (based on the MIPS system) are energy production and protein synthesis, while categories such as transcription, transport and signaling are depleted. Furthermore, for a given functional category, transcriptome enrichment varies quite substantially between the different expression data sets, with a variation an order of magnitude larger than for the other categories cross-referenced (e.g. amino acids). One can readily see how the enrichment and depletion of the various functional categories relates directly to that of particular folds.
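The central operation in this kind of analysis is weighting each protein by its transcript abundance before tabulating a property. A toy sketch with invented sequences and abundances (Python); the paper applies the same weighting to folds and functional categories rather than just amino acids:

from collections import Counter

proteins = {"geneA": "MAGAGAGLLK", "geneB": "MNDEQWWPK", "geneC": "MAAAGGGIK"}   # invented
abundance = {"geneA": 120.0, "geneB": 3.0, "geneC": 45.0}                        # e.g. transcripts per cell

def composition(weights):
    counts = Counter()
    for gene, seq in proteins.items():
        for aa in seq:
            counts[aa] += weights[gene]
    total = sum(counts.values())
    return {aa: c / total for aa, c in counts.items()}

genome = composition({g: 1.0 for g in proteins})   # each protein counted once
transcriptome = composition(abundance)             # proteins weighted by expression level
for aa in sorted(genome):
    print(aa, "transcriptome/genome enrichment:", round(transcriptome.get(aa, 0.0) / genome[aa], 2))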
---
paper_title: MIPS: a database for genomes and protein sequences
paper_content:
The Munich Information Center for Protein Sequences (MIPS-GSF), Martinsried near Munich, Germany, develops and maintains genome oriented databases. It is commonplace that the amount of sequence data available increases rapidly, but not the capacity of qualified manual annotation at the sequence databases. Therefore, our strategy aims to cope with the data stream by the comprehensive application of analysis tools to sequences of complete genomes, the systematic classification of protein sequences and the active support of sequence analysis and functional genomics projects. This report describes the systematic and up-to-date analysis of genomes (PEDANT), a comprehensive database of the yeast genome (MYGD), a database reflecting the progress in sequencing the Arabidopsis thaliana genome (MATD), the database of assembled, annotated human EST clusters (MEST), and the collection of protein sequence data within the framework of the PIR-International Protein Sequence Database (described elsewhere in this volume). MIPS provides access through its WWW server (http://www.mips.biochem.mpg.de) to a spectrum of generic databases, including the above mentioned as well as a database of protein families (PROTFAM), the MITOP database, and the all-against-all FASTA database.
---
paper_title: Digging for dead genes: an analysis of the characteristics of the pseudogene population in the Caenorhabditis elegans genome.
paper_content:
Pseudogenes are non-functioning copies of genes in genomic DNA, which may either result from reverse transcription from an mRNA transcript (processed pseudogenes) or from gene duplication and subsequent disablement (non-processed pseudogenes). As pseudogenes are apparently 'dead', they usually have a variety of obvious disablements (e.g., insertions, deletions, frameshifts and truncations) relative to their functioning homologs. We have derived an initial estimate of the size, distribution and characteristics of the pseudogene population in the Caenorhabditis elegans genome, performing a survey in 'molecular archaeology'. Corresponding to the 18 576 annotated proteins in the worm (i.e., in Wormpep18), we have found an estimated total of 2168 pseudogenes, about one for every eight genes. Few of these appear to be processed. Details of our pseudogene assignments are available from http://bioinfo.mbb.yale.edu/genome/worm/pseudogene. The population of pseudogenes differs significantly from that of genes in a number of respects: (i) pseudogenes are distributed unevenly across the genome relative to genes, with a disproportionate number on chromosome IV; (ii) the density of pseudogenes is higher on the arms of the chromosomes; (iii) the amino acid composition of pseudogenes is midway between that of genes and (translations of) random intergenic DNA, with enrichment of Phe, Ile, Leu and Lys, and depletion of Asp, Ala, Glu and Gly relative to the worm proteome; and (iv) the most common protein folds and families differ somewhat between genes and pseudogenes-whereas the most common fold found in the worm proteome is the immunoglobulin fold and the most common 'pseudofold' is the C-type lectin. In addition, the size of a gene family bears little overall relationship to the size of its corresponding pseudogene complement, indicating a highly dynamic genome. There are in fact a number of families associated with large populations of pseudogenes. For example, one family of seven-transmembrane receptors (represented by gene B0334.7) has one pseudogene for every four genes, and another uncharacterized family (represented by gene B0403.1) is approximately two-thirds pseudogenic. Furthermore, over a hundred apparent pseudogenic fragments do not have any obvious homologs in the worm.
---
paper_title: Metabolism and evolution of Haemophilus influenzae deduced from a whole-genome comparison with Escherichia coli
paper_content:
Background: The 1.83 Megabase (Mb) sequence of the Haemophilus influenzae chromosome, the first completed genome sequence of a cellular life form, has been recently reported. Approximately 75 % of the 4.7 Mb genome sequence of Escherichia coli is also available. The life styles of the two bacteria are very different – H. influenzae is an obligate parasite that lives in human upper respiratory mucosa and can be cultivated only on rich media, whereas E. coli is a saprophyte that can grow on minimal media. A detailed comparison of the protein products encoded by these two genomes is expected to provide valuable insights into bacterial cell physiology and genome evolution. Results: We describe the results of computer analysis of the amino-acid sequences of 1703 putative proteins encoded by the complete genome of H. influenzae. We detected sequence similarity to proteins in current databases for 92 % of the H. influenzae protein sequences, and at least a general functional prediction was possible for 83 %. A comparison of the H. influenzae protein sequences with those of 3010 proteins encoded by the sequenced 75 % of the E. coli genome revealed 1128 pairs of apparent orthologs, with an average of 59 % identity. In contrast to the high similarity between orthologs, the genome organization and the functional repertoire of genes in the two bacteria were remarkably different. The smaller genome size of H. influenzae is explained, to a large extent, by a reduction in the number of paralogous genes. There was no long range colinearity between the E. coli and H. influenzae gene orders, but over 70 % of the orthologous genes were found in short conserved strings, only about half of which were operons in E. coli. Superposition of the H. influenzae enzyme repertoire upon the known E. coli metabolic pathways allowed us to reconstruct similar and alternative pathways in H. influenzae and provides an explanation for the known nutritional requirements. Conclusion: By comparing proteins encoded by the two bacterial genomes, we have shown that extensive gene shuffling and variation in the extent of gene paralogy are major trends in bacterial evolution; this comparison has also allowed us to deduce crucial aspects of the largely uncharacterized metabolism of H. influenzae.
---
paper_title: The Relationship between Protein Structure and Function : a Comprehensive Survey with Application to the Yeast Genome
paper_content:
For most proteins in the genome databases, function is predicted via sequence comparison. In spite of the popularity of this approach, the extent to which it can be reliably applied is unknown. We address this issue by systematically investigating the relationship between protein function and structure. We focus initially on enzymes functionally classified by the Enzyme Commission (EC) and relate these to domains structurally classified by the SCOP database. We find that the major SCOP fold classes have different propensities to carry out certain broad categories of functions. For instance, alpha/beta folds are disproportionately associated with enzymes, especially transferases and hydrolases, and all-alpha and small folds with non-enzymes, while alpha+beta folds have an equal tendency either way. These observations for the database overall are largely true for specific genomes. We focus, in particular, on yeast, analyzing it with many classifications in addition to SCOP and EC (i.e. COGs, CATH, MIPS), and find clear tendencies for fold-function association, across a broad spectrum of functions. Analysis with the COGs scheme also suggests that the functions of the most ancient proteins are more evenly distributed among different structural classes than those of more modern ones. For the database overall, we identify the most versatile functions, i.e. those that are associated with the most folds, and the most versatile folds, associated with the most functions. The two most versatile enzymatic functions (hydro-lyases and O-glycosyl glucosidases) are associated with seven folds each. The five most versatile folds (TIM-barrel, Rossmann, ferredoxin, alpha-beta hydrolase, and P-loop NTP hydrolase) are all mixed alpha-beta structures. They stand out as generic scaffolds, accommodating from six to as many as 16 functions (for the exceptional TIM-barrel). At the conclusion of our analysis we are able to construct a graph giving the chance that a functional annotation can be reliably transferred at different degrees of sequence and structural similarity. Supplemental information is available from http://bioinfo.mbb.yale.edu/genome/foldfunc.
---
paper_title: A Genomic Perspective on Protein Families
paper_content:
In order to extract the maximum amount of information from the rapidly accumulating genome sequences, all conserved genes need to be classified according to their homologous relationships. Comparison of proteins encoded in seven complete genomes from five major phylogenetic lineages and elucidation of consistent patterns of sequence similarities allowed the delineation of 720 clusters of orthologous groups (COGs). Each COG consists of individual orthologous proteins or orthologous sets of paralogs from at least three lineages. Orthologs typically have the same function, allowing transfer of functional information from one member to an entire COG. This relation automatically yields a number of functional predictions for poorly characterized genomes. The COGs comprise a framework for functional and evolutionary genome analysis.
---
paper_title: Comparing Genomes in terms of Protein Structure: Surveys of a Finite Parts List
paper_content:
We give an overview of the emerging field of structural genomics, describing how genomes can be compared in terms of protein structure. As the number of genes in a genome and the total number of protein folds are both quite limited, these comparisons take the form of surveys of a finite parts list, similar in respects to demographic censuses. Fold surveys have many similarities with other whole-genome characterizations, e.g. analyses of motifs or pathways. However, structure has a number of aspects that make it particularly suitable for comparing genomes, namely the way it allows for the precise definition of a basic protein module and the fact that it has a better defined relationship to sequence similarity than does protein function. An essential requirement for a structure survey is a library of folds, which groups the known structures into `fold families'. This library can be built up automatically using a structure comparison program, and we described how important objective statistical measures are for assessing similarities within the library and between the library and genome sequences. After building the library, one can use it to count the number of folds in genomes, expressing the results in the form of Venn diagrams and `top-10' statistics for shared and common folds. Depending on the counting methodology employed, these statistics can reflect different aspects of the genome, such as the amount of internal duplication or gene expression. Previous analyses have shown that the common folds shared between very different microorganisms, i.e. in different kingdoms, have a remarkably similar structure, being comprised of repeated strand–helix–strand super-secondary structure units. A major difficulty with this sort of `fold-counting' is that only a small subset of the structures in a complete genome are currently known and this subset is prone to sampling bias. One way of overcoming biases is through structure prediction, which can be applied uniformly and comprehensively to a whole genome. Various investigators have, in fact, already applied many of the existing techniques for predicting secondary structure and transmembrane (TM) helices to the recently sequenced genomes. The results have been consistent: microbial genomes have similar fractions of strands and helices even though they have significantly different amino acid composition. The fraction of membrane proteins with a given number of TM helices falls off rapidly with more TM elements, approximately according to a Zipf law. This latter finding indicates that there is no preference for the highly studied 7-TM proteins in microbial genomes. Continuously updated tables and further information pertinent to this review are available over the web at http://bioinfo.mbb.yale.edu/genome.
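At their core, the fold surveys described here come down to tabulating fold assignments per genome and intersecting the resulting sets; a minimal sketch with invented fold labels (Python):

from collections import Counter

assignments = {   # hypothetical fold assignments for three genomes
    "genomeA": ["TIM-barrel", "Rossmann", "TIM-barrel", "ferredoxin-like", "P-loop"],
    "genomeB": ["P-loop", "TIM-barrel", "Rossmann", "Rossmann", "Ig-like"],
    "genomeC": ["TIM-barrel", "P-loop", "ferredoxin-like", "P-loop"],
}

for genome, folds in assignments.items():
    print(genome, "top folds:", Counter(folds).most_common(3))

shared = set.intersection(*(set(folds) for folds in assignments.values()))
print("folds shared by all genomes:", sorted(shared))

The counting methodology (counting each fold once per genome versus once per gene, or weighting by expression) is exactly the choice the survey discusses.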
---
paper_title: Protein folds and functions
paper_content:
Background: The recent rapid increase in the number of available three-dimensional protein structures has further highlighted the necessity to understand the relationship between biological function and structure. Using structural classification schemes such as SCOP, CATH and DALI, it is now possible to explore global relationships between protein fold and function, something which was previously impractical. Results: Using a relational database of CATH data we have generated fold distributions for arbitrary selections of proteins automatically. These distributions have been examined in the light of protein function and bound ligand. Different enzyme classes are not clearly reflected in distributions of protein class and architecture, whereas the type of bound ligand has a much more dramatic effect. Conclusions: The availability of structural classification data has enabled this novel overview analysis. We conclude that function at the top level of the EC number enzyme classification is not related to fold, as only a very few specific residues are actually responsible for enzyme activity. Conversely, the fold is much more closely related to ligand type.
---
paper_title: Whole-genome trees based on the occurrence of folds and orthologs: implications for comparing genomes on different levels
paper_content:
We built whole-genome trees based on the presence or absence of particular molecular features, either orthologs or folds, in the genomes of a number of recently sequenced microorganisms. To put these genomic trees into perspective, we compared them to the traditional ribosomal phylogeny and also to trees based on the sequence similarity of individual orthologous proteins. We found that our genomic trees based on the overall occurrence of orthologs did not agree well with the traditional tree. This discrepancy, however, vanished when one restricted the tree to proteins involved in transcription and translation, not including problematic proteins involved in metabolism. Protein folds unite superficially unrelated sequence families and represent a most fundamental molecular unit described by genomes. We found that our genomic occurrence tree based on folds agreed fairly well with the traditional ribosomal phylogeny. Surprisingly, despite this overall agreement, certain classes of folds, particularly all-beta ones, had a somewhat different phylogenetic distribution. We also compared our occurrence trees to whole-genome clusters based on the composition of amino acids and di-nucleotides. Finally, we analyzed some technical aspects of genomic trees-e.g., comparing parsimony versus distance-based approaches and examining the effects of increasing numbers of organisms. Additional information (e.g. clickable trees) is available from http://bioinfo.mbb.yale.edu/genome/trees.
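A distance-based version of such an occurrence tree can be sketched in a few lines: each genome becomes a presence/absence vector over folds (or orthologs), and the genomes are clustered on the resulting distances. The occurrence matrix below is invented, and SciPy's hierarchical clustering merely stands in for the parsimony and distance-based tree methods compared in the paper:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

genomes = ["genome1", "genome2", "genome3", "genome4"]
occurrence = np.array([        # rows: genomes, columns: folds (1 = fold present)
    [1, 1, 1, 0, 1, 1],
    [1, 1, 0, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 1, 1, 0],
])

distances = pdist(occurrence.astype(bool), metric="jaccard")
tree = linkage(distances, method="average")
print(dendrogram(tree, labels=genomes, no_plot=True)["ivl"])   # leaf order of the resulting tree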
---
paper_title: Transposon mutagenesis for the analysis of protein production, function, and localization.
paper_content:
This chapter describes a transposon mutagenesis system that produces multipurpose constructs for the monitoring of protein production, localization, and function. A single mutagenesis generates a large spectrum of alleles, including null, hypomorphic, and conditional alleles, reporter fusions, and epitope-insertion alleles. The system, therefore, provides the basis for a wide variety of studies of gene and protein function. The chapter provides comprehensive instructions for use of the new transposons to mutagenize a gene of interest, and for use of the transposon insertion libraries to mutagenize the yeast genome. While the application of these specific transposons is limited to organisms in which the Saccharomyces cerevisiae selectable marker URA3 can be used, the approach is generally applicable to mutagenesis of DNA from any organism for which a transformation and selection system exists.
---
| Title: What is Bioinformatics? A Proposed Definition and Overview of the Field
Section 1: Introduction
Description 1: Introduce the rapid production of biological data and the essential role of computers in biological research, leading to the emergence of bioinformatics.
Section 2: Aims of Bioinformatics
Description 2: Discuss the three primary goals of bioinformatics: organizing data, developing tools for analysis, and interpreting data to glean biologically meaningful information.
Section 3: Summary
Description 3: Present the background, objectives, methods, and conclusions of the study, including the proposed definition of bioinformatics.
Section 4: Bioinformatics - A Definition
Description 4: Provide a formal definition of bioinformatics, detailing its scope and applications.
Section 5: The Information Associated with these Molecules
Description 5: Examine the types of data analyzed in bioinformatics, focusing on DNA and protein sequences, macromolecular structures, and functional genomics.
Section 6: Organise the Information on a Large Scale
Description 6: Discuss methods for managing and assessing large volumes of biological data, including classification systems and database links.
Section 7: Data Integration
Description 7: Explain the importance and methods of integrating multiple sources of data for comprehensive bioinformatics analysis.
Section 8: Understand and Organise the Information
Description 8: Describe the types of analyses conducted in bioinformatics based on different types of data and subject areas.
Section 9: The Bioinformatics Spectrum
Description 9: Explore the various dimensions of bioinformatics, including both depth (detailed analysis of single entities) and breadth (comparative analyses across multiple entities).
Section 10: Applying Informatics Techniques
Description 10: Outline the different informatics techniques used in bioinformatics for data organization, sequence analysis, structural analysis, and molecular simulations.
Section 11: Transcription Regulation - A Case Study in Bioinformatics
Description 11: Use transcription regulation as an example to demonstrate how bioinformatics contributes to understanding biological systems and practical applications.
Section 12: Structural Studies
Description 12: Review the structural analyses of protein-DNA complexes and the insights gained regarding DNA-binding proteins.
Section 13: Genomic Studies
Description 13: Describe the genomic studies used to identify transcription factors and structural motifs in different organisms.
Section 14: Gene Expression Studies
Description 14: Discuss the methods and findings of gene expression studies, particularly in relation to clustering genes by expression profiles and analyzing cancer cells.
Section 15: Finding Homologues
Description 15: Present the practical application of finding homologous biomolecules to transfer information and develop theoretical models.
Section 16: Rational Drug Design
Description 16: Illustrate how bioinformatics aids in rational drug design using tools like sequence search techniques and docking algorithms.
Section 17: Large-scale Censuses
Description 17: Discuss the importance of large-scale censuses in bioinformatics for understanding evolutionary, biochemical, and biophysical trends.
Section 18: Conclusions
Description 18: Summarize the indispensability of computational methods in biological investigations and the two principal approaches that underlie bioinformatics studies. |
A survey of Markov decision models for control of networks of queues | 10 | ---
paper_title: Controlled Markov Chains and Stochastic Networks
paper_content:
Controlled Markov chains with average cost criterion and with special cost and transition structures are studied. Existence of optimal stationary strategies is established for the average cost criterion. Corresponding dynamic programming equations are derived. A stochastic network problem that includes interconnected queues as a special case is described and studied within this framework.
---
paper_title: Discrete-time controlled Markov processes with average cost criterion: a survey
paper_content:
This work is a survey of the average cost control problem for discrete-time Markov processes. The authors have attempted to put together a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and includes a variety of methodologies to find and characterize optimal policies. The authors have included a brief historical perspective of the research efforts in this area and have compiled a substantial yet not exhaustive bibliography. The authors have also identified several important questions that are still open to investigation.
---
paper_title: Recent results on conditions for the existence of average optimal stationary policies
paper_content:
This paper concerns countable state space Markov decision processes endowed with a (long-run expected) average reward criterion. For these models we summarize and, in some cases, extend some recent results on sufficient conditions to establish the existence of optimal stationary policies. The topics considered are the following: (i) the new assumptions introduced by Sennott in [20–23], (ii) necessary and sufficient conditions for the existence of a bounded solution to the optimality equation, and (iii) equivalence of average optimality criteria. Some problems are posed.
---
paper_title: Control of Markov Chains with Long-Run Average Cost Criterion: The Dynamic Programming Equations
paper_content:
The long-run average cost control problem for discrete time Markov chains on a countable state space is studied in a very general framework. Necessary and sufficient conditions for optimality in terms of the dynamic programming equations are given when an optimal stable stationary strategy is known to exist (e.g., for the situations studied in [Stochastic Differential Systems, Stochastic Control Theory and Applications, IMA Vol. Math. App. 10, Springer-Verlag, New York, Berlin, 1988, pp. 57–77]). A characterization of the desired solution of the dynamic programming equations is given in a special case. Also included is a novel convex analytic argument for deducing the existence of an optimal stable stationary strategy when that of a randomized one is known.
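For orientation, the dynamic programming (optimality) equations referred to in this group of papers take, in the discrete-time, countable-state case with one-step costs c(x,a) and transition probabilities p(y|x,a), the standard average-cost form

\[
\rho + h(x) \;=\; \min_{a \in A(x)} \Big\{ c(x,a) + \sum_{y \in S} p(y \mid x, a)\, h(y) \Big\}, \qquad x \in S,
\]

where \(\rho\) is the optimal long-run average cost and \(h\) is a relative value (bias) function; under the conditions studied in these papers, a stationary policy that selects a minimizing action in every state is average-cost optimal.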
---
paper_title: Control of Markov Chains with Long-Run Average Cost Criterion
paper_content:
The long-run average cost control problem for discrete time Markov chains is studied in an extremely general framework. Existence of stable stationary strategies which are optimal in the appropriate sense is established and these are characterized via the dynamic programming equations. The approach here differs from the conventional approach via the discounted cost problem and covers situations not covered by the latter.
---
paper_title: Necessary conditions for the optimality equation in average-reward Markov decision processes
paper_content:
An average-reward Markov decision process (MDP) with discrete-time parameter, denumerable state space, and bounded reward function is considered. With such a model, we associate a family of MDPs. Then, we determine necessary conditions for the existence of a bounded solution to the optimality equation for each one of the models in the family. Moreover, necessary and sufficient conditions are given so that the optimality equations have a bounded solution with an additional property.
---
paper_title: Scheduling, Routing, and Flow Control in Stochastic Networks
paper_content:
Queueing models are frequently helpful in the analysis and control of communication, manufacturing, and transportation systems. The theory of Markov decision processes and the inductive techniques of dynamic programming have been used to develop normative models for optimal control of admission, servicing, routing, and scheduling of jobs in queues and networks of queues. We review some of these models, beginning with single-facility models and then progressing to models for networks of queues. We emphasize the use of induction on a sequence of successive approximations of the optimal value function (value iteration) to establish the form of optimal control policies.
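Value iteration of the kind used in these inductive arguments is easy to sketch for the simplest case, admission control to a single finite-buffer queue after uniformization. The parameters below are purely illustrative; the point is only that the computed policy comes out as a threshold (control-limit) rule:

import numpy as np

lam, mu, N = 0.8, 1.0, 50            # arrival rate, service rate, buffer size (illustrative)
reward, hold, beta = 5.0, 1.0, 0.98  # admission reward, holding cost per period, discount factor
p_a, p_s = lam / (lam + mu), mu / (lam + mu)
states = np.arange(N + 1)

v = np.zeros(N + 1)
for _ in range(5000):
    admit = v[np.minimum(states + 1, N)] - reward   # admit: collect the reward, queue grows by one
    admit[N] = v[N]                                 # full buffer: admission is impossible
    arrival_value = np.minimum(admit, v)            # at an arrival, pick the cheaper of admit/reject
    depart_value = v[np.maximum(states - 1, 0)]     # a service completion shortens the queue
    v = hold * states + beta * (p_a * arrival_value + p_s * depart_value)

reject = v[np.minimum(states + 1, N)] - reward > v  # states where rejecting is strictly cheaper
threshold = int(np.argmax(reject)) if reject.any() else N
print("admit arrivals only while the queue length is below", threshold)

The inductive arguments surveyed here establish such threshold structure analytically, typically by showing that convexity of the value function is preserved by the value-iteration operator.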
---
paper_title: Monotone Control of Queueing and Production/Inventory Systems
paper_content:
Weber and Stidham (1987) used submodularity to establish transition monotonicity (a service completion at one station cannot reduce the service rate at another station) for Markovian queueing networks that meet certain regularity conditions and are controlled to minimize service and queueing costs. We give an extension of monotonicity to other directions in the state space, such as arrival transitions, and to arrival routing problems. The conditions used to establish monotonicity, which deal with the boundary of the state space, are easily verified for many queueing systems. We also show that, without service costs, transition-monotone controls can be described by simple control regions and switching functions, extending earlier results. The theory is applied to production/inventory systems with holding costs at each stage and finished goods backorder costs.
---
paper_title: Monotone Optimal Control of Permutable GSMPs
paper_content:
We consider Markovian GSMPs (generalized semi-Markov processes) in which the rates of events are subject to control. A control is monotone if the rate of one event is increasing or decreasing in the number of occurrences of other events. We give general conditions for the existence of monotone optimal controls. The conditions are functional properties for the one-step cost functions and, more importantly, structural properties for the GSMP. The main conditions on costs are submodularity or supermodularity with respect to pairs of events. The key structural condition is strong permutability , requiring that the state at any time be determined by the number of events of each type that have occurred, regardless of their order. This permits a reformulation of the original control problem into one based only on event counting processes. This reformulation leads to a unified treatment of a broad class of models and to meaningful generality beyond existing results.
---
paper_title: Extremal Splittings of Point Processes
paper_content:
The sequence with nth term defined by [(n + 1)p] − [np] is an extremal zero-one valued sequence of asymptotic mean p in the following sense (for example): if a fraction p of customers from a point process with iid interarrival times is sent to an exponential server queue according to a prespecified splitting sequence, then the long-term average queue size is minimized when the above sequence is used. The proof involves consideration of the lower convex envelope (a function on R^m) of a function J on Z^m. An explicit representation is given for this envelope in terms of J, for J in a broad class of functions, which we call “multimodular.” The expected queue size just before an arrival, considered as a function of the zero-one splitting sequence, is shown to belong to this class.
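The extremal splitting sequence itself is easy to generate: its nth term is [(n + 1)p] − [np], with [·] the integer part. A quick check in Python for p = 2/5:

from math import floor
from fractions import Fraction

p = Fraction(2, 5)
seq = [floor((n + 1) * p) - floor(n * p) for n in range(10)]
print(seq)                                        # [0, 0, 1, 0, 1, 0, 0, 1, 0, 1]
print(sum(seq), "ones in", len(seq), "slots; asymptotic mean", float(p))

The ones (the customers routed to the queue) are spread as evenly as possible, which is the property shown here to minimize the long-term average queue size at the exponential server.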
---
paper_title: Monotone Control of Queueing and Production/Inventory Systems
paper_content:
Weber and Stidham (1987) used submodularity to establish transition monotonicity (a service completion at one station cannot reduce the service rate at another station) for Markovian queueing networks that meet certain regularity conditions and are controlled to minimize service and queueing costs. We give an extension of monotonicity to other directions in the state space, such as arrival transitions, and to arrival routing problems. The conditions used to establish monotonicity, which deal with the boundary of the state space, are easily verified for many queueing systems. We also show that, without service costs, transition-monotone controls can be described by simple control regions and switching functions, extending earlier results. The theory is applied to production/inventory systems with holding costs at each stage and finished goods backorder costs.
---
paper_title: Minimizing a Submodular Function on a Lattice
paper_content:
This paper gives general conditions under which a collection of optimization problems, with the objective function and the constraint set depending on a parameter, has optimal solutions that are an isotone function of the parameter. Relating to this, we present a theory that explores and elaborates on the problem of minimizing a submodular function on a lattice.
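The lattice-programming notions used in this line of work are compact enough to record here: a real-valued function f on a lattice is submodular if

\[
f(x \wedge y) + f(x \vee y) \;\le\; f(x) + f(y) \qquad \text{for all } x, y,
\]

where \(\wedge\) and \(\vee\) denote the lattice meet and join (componentwise minimum and maximum in the queueing applications above); supermodularity reverses the inequality. Topkis-type results give conditions under which the set of minimizers is isotone (monotone) in a parameter, which is the monotone-structure property exploited by the control papers cited in this survey.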
---
paper_title: Optimal control of two interacting service stations
paper_content:
Optimal controls described by switching curves in the two dimensional state space are shown to exist for the optimal control of a Markov network with two service stations and linear cost. The controls govern routing and service priorities. Finite horizon and long run average cost problems are considered. An example is given which shows that nonconvex value functions can arise for slightly more general networks. A single station control problem with nonconvex value functions is then considered to indicate how switch structure might be established more generally.
---
paper_title: On the Optimality of the Generalized Shortest Queue Policy
paper_content:
Consider a queueing model in which arriving customers have to choose between m parallel servers, each with its own queue. We prove for general arrival streams that the policy which assigns to the shortest queue is stochastically optimal for models with finite buffers and batch arrivals.
---
paper_title: On the Optimality of the Generalized Shortest Queue Policy
paper_content:
Consider a queueing model in which arriving customers have to choose between m parallel servers, each with its own queue. We prove for general arrival streams that the policy which assigns to the shortest queue is stochastically optimal for models with finite buffers and batch arrivals.
---
paper_title: Optimal control of two interacting service stations
paper_content:
Optimal controls described by switching curves in the two dimensional state space are shown to exist for the optimal control of a Markov network with two service stations and linear cost. The controls govern routing and service priorities. Finite horizon and long run average cost problems are considered. An example is given which shows that nonconvex value functions can arise for slightly more general networks. A single station control problem with nonconvex value functions is then considered to indicate how switch structure might be established more generally.
---
paper_title: Optimal control of a queueing system with two heterogeneous servers
paper_content:
The problem considered is that of optimally controlling a queueing system which consists of a common buffer or queue served by two servers. The arrivals to the buffer are Poisson and the servers are both exponential, but with different mean service times. It is shown that the optimal policy which minimizes the mean sojourn time of customers in the system is of threshold type. The faster server should be fed a customer from the buffer whenever it becomes available for service, but the slower server should be utilized if and only if the queue length exceeds a readily computed threshold value.
---
paper_title: On decentralized dynamic routing for congested traffic networks
paper_content:
The problem of routing traffic through single destination congested networks is considered, when the network has deterministic inputs and may contain looping paths. A decentralized dynamic routing strategy is proposed which assures that no traffic is directed around loops.
---
paper_title: The optimal control of heterogeneous queueing systems: a paradigm for load-sharing and routing
paper_content:
The essence of the basic control decisions implicit in load-sharing and routing algorithms is captured in a simple model of heterogeneous queue control. The authors solve for the optimal control policy and investigate the performance of previously proposed policies in a tractable limit of this model. Using their understanding of this solvable limit, the authors propose heuristic policies for the general model. Simulation data for these policies suggest that they perform well over a wide range of system parameters. >
---
paper_title: Martingale dynamics and optimal routing in a network
paper_content:
In this paper, we consider a dynamic routing problem in a store-and-forward computer-communication network which consists of l parallel channels connected between the source and the destination. We prove that if messages arrive at the network in accordance with a Poisson process and all nodes have an exponential service rate, then the routing strategy that minimizes the total expected time in transmitting all messages that arrive before T>0 is to route the arriving message to the channel along which the sum of all queue sizes is minimum. Techniques of martingale and dynamic programming are used in obtaining the result.
---
paper_title: The Join-Biased-Queue Rule and Its Application to Routing in Computer Communication Networks
paper_content:
A routing rule similar in nature to delta-routing [8] is studied in this paper. The approach is to superimpose, local adaptivity on top of a fixed traffic flow distribution. The fixed flow distribution we choose is obtained from the best stochastic (BS) rule [3]. The adaptive part is called the join-biased-queue (JBQ) rule. The resultant JBQ-BS rule is analyzed on small networks and is shown to provide 10-27 percent delay improvement over the BS rule.
---
paper_title: Optimal control of two interacting service stations
paper_content:
Optimal controls described by switching curves in the two dimensional state space are shown to exist for the optimal control of a Markov network with two service stations and linear cost. The controls govern routing and service priorities. Finite horizon and long run average cost problems are considered. An example is given which shows that nonconvex value functions can arise for slightly more general networks. A single station control problem with nonconvex value functions is then considered to indicate how switch structure might be established more generally.
---
paper_title: Dynamic Scheduling of a Multiclass Queue: Discount Optimality
paper_content:
We consider a single-server queuing system with several classes of customers who arrive according to independent Poisson processes. The service time distributions are arbitrary, and we assume a linear cost structure. The problem is to decide, at the completion of each service and given the state of the system, which class (if any) to admit next into service. The objective is to maximize the expected net present value of service rewards received minus holding costs incurred over an infinite planning horizon, the interest rate being positive. One very special type of scheduling rule, called a modified static policy, simply enforces a (nonpreemptive) priority ranking except that certain classes are never served. It is shown that there is a modified static policy that is optimal, and a simple algorithm for its computation is presented.
---
paper_title: Interchange arguments for classical scheduling problems in queues
paper_content:
Simple interchange arguments are applied for solving classical scheduling problems in queues. We first show that the μc-rule minimizes a fairly general cost function in a multiclass ·/M/1 queue. Then, we address the same problem when partial feedback and change of class are allowed. Finally, we consider two ·/M/1 queues in series and we show that the μc-rule is always optimal in the second node.
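Operationally, the μc-rule is a priority index: among the non-empty classes, serve the one with the largest product of holding cost and service rate. A small sketch with invented class parameters (Python):

classes = {
    "class1": {"mu": 4.0, "c": 3.0, "queue_length": 2},
    "class2": {"mu": 1.5, "c": 5.0, "queue_length": 1},
    "class3": {"mu": 6.0, "c": 0.5, "queue_length": 7},
}

def next_class_to_serve(classes):
    busy = {k: v for k, v in classes.items() if v["queue_length"] > 0}
    if not busy:
        return None   # nothing to serve
    return max(busy, key=lambda k: busy[k]["mu"] * busy[k]["c"])

print(next_class_to_serve(classes))   # "class1": index 4.0 * 3.0 = 12.0 is the largest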
---
paper_title: Interchange Arguments in Stochastic Scheduling.
paper_content:
Interchange arguments are applied to establish the optimality of priority list policies in three problems. First, we prove that in a multiclass tandem of two ·/M/1 queues it is always optimal in the second node to serve according to the cμ rule. The result holds more generally if the first node is replaced by a multiclass network consisting of ·/M/1 queues with Bernoulli routing. Next, for scheduling a single server in a multiclass node with feedback, a simplified proof of Klimov's result is given. From it follows the optimality of the index rule among idling policies for general service time distributions, and among pre-emptive policies when the service time distributions are exponential. Lastly, we consider the problem of minimizing the blocking in a communication link with lossy channels and exponential holding times.
---
paper_title: A Priority Queue with Discounted Linear Costs
paper_content:
We consider a nonpreemptive priority queue with a finite number of priority classes, Poisson arrival processes, and general service time distributions. It is not required that the system be stable or even that the mean service times be finite. The economic framework is linear, consisting of a holding cost per unit time and fixed service reward for each customer class. Future costs and rewards are continuously discounted with a positive interest rate. Allowing general initial queue sizes, we develop an expression for the expected present value of rewards received minus costs incurred over an infinite horizon. From this we obtain the Laplace transform of the time-dependent expected queue length for each customer class.
---
paper_title: Optimal scheduling in some multi-queue single-server systems
paper_content:
The server visits N queues in an arbitrary manner. Each queue is visited for a random period of time whose duration is sampled in advance. At the end of a visit period, either all customers of the attended queue leave the system (variant I) or only customers that were present in the queue upon the arrival of the server leave the system (variant II). A scheduling policy is a rule that selects the next queue to be visited by the server. When the controller has no information on the state of the system, it is shown, under homogeneous arrival assumptions, that a cyclic policy minimizes the expected number of customers in the system. When the controller knows the number of customers in each queue, it is shown that the so-called most-customers-first (MCF) policy minimizes, in the sense of strong stochastic ordering, the vector of the number of customers in each queue whose components are arranged in decreasing order. These results hold for variants I and II and are obtained under fairly weak statistical assumptions. This model has potential applications in videotex and time-division multiple-access systems. >
---
paper_title: Recent results on conditions for the existence of average optimal stationary policies
paper_content:
This paper concerns countable state space Markov decision processes endowed with a (long-run expected) average reward criterion. For these models we summarize and, in some cases, extend some recent results on sufficient conditions to establish the existence of optimal stationary policies. The topics considered are the following: (i) the new assumptions introduced by Sennott in [20–23], (ii) necessary and sufficient conditions for the existence of a bounded solution to the optimality equation, and (iii) equivalence of average optimality criteria. Some problems are posed.
---
paper_title: Necessary conditions for the optimality equation in average-reward Markov decision processes
paper_content:
An average-reward Markov decision process (MDP) with discrete-time parameter, denumerable state space, and bounded reward function is considered. With such a model, we associate a family of MDPs. Then, we determine necessary conditions for the existence of a bounded solution to the optimality equation for each one of the models in the family. Moreover, necessary and sufficient conditions are given so that the optimality equations have a bounded solution with an additional property.
---
| Title: A Survey of Markov Decision Models for Control of Networks of Queues
Section 1: Introduction
Description 1: Introduce the context and importance of using Markov decision models for the control of networks of queues. Discuss the focus of the survey, including the types of control (service rates, admission, routing, and scheduling).
Section 2: Control of transitions in a Markov process: A general model
Description 2: Present a general model for controlling the transition rates in a continuous-time Markov chain. Explain the framework and its applications to queue networks.
Section 3: Control of service rates in a network of queues
Description 3: Apply the general model to the control of service rates at the nodes of a network of queues, illustrating with specific structures like cycles and series of queues. Discuss optimal policies such as bang-bang controls and kanban policies.
Section 4: Control of admission to the first queue
Description 4: Explore the structure of optimal policies for the control of arrivals to the first node in a series of queues, including admission/rejection decisions.
Section 5: Control of arrivals to each of two queues in series
Description 5: Discuss models and results for controlling arrivals to two queues in series, highlighting the structure of optimal policies and its practical implications.
Section 6: Control of admission, routing, and server allocation in parallel queues
Description 6: Examine the control of admission, routing, and server allocation in systems with parallel queues, including symmetric cases and optimal policies like the join-the-shortest-queue rule.
Section 7: Routing to parallel servers from a common queue
Description 7: Investigate optimal routing strategies when customers enter a common buffer and are subsequently assigned to parallel servers, compared to separate queue models.
Section 8: Other routing models
Description 8: Review additional routing models, including the join-biased-shortest-queue rule and numerical approaches for optimal routing and flow-control policies.
Section 9: Scheduling in networks of queues
Description 9: Discuss scheduling problems in networked queues, where decisions about which class of customers to process are made dynamically. Highlight key results and optimal policies, including index rules and Brownian network approximations.
Section 10: Scheduling a series system with constant service times
Description 10: Conclude the survey by illustrating a scheduling problem in a series system with constant service times, detailing the optimal scheduling rules and their derivations. |
A Survey on Context-aware Web Service Systems | 14 | ---
paper_title: Context-aware office assistant
paper_content:
This paper describes the design and implementation of the Office Assistant — an agent that interacts with visitors at the office door and manages the office owner's schedule. We claim that rich context information about users is key to making a flexible and believable interaction. We also argue that natural face-to-face conversation is an appropriate metaphor for human-computer interaction.
---
paper_title: A Survey on Context-aware systems
paper_content:
Context-aware systems offer entirely new opportunities for application developers and for end users by gathering context data and adapting systems behaviour accordingly. Especially in combination with mobile devices, these mechanisms are of high value and are used to increase usability tremendously. In this paper, we present common architecture principles of context-aware systems and derive a layered conceptual design framework to explain the different elements common to most context-aware architectures. Based on these design principles, we introduce various existing context-aware systems focusing on context-aware middleware and frameworks, which ease the development of context-aware applications. We discuss various approaches and analyse important aspects in context-aware computing on the basis of the presented systems.
---
paper_title: Middleware for Distributed Context-Aware Systems
paper_content:
Context-aware systems represent extremely complex and heterogeneous distributed systems, composed of sensors, actuators, application components, and a variety of context processing components that manage the flow of context information between the sensors/actuators and applications. The need for middleware to seamlessly bind these components together is well recognised. Numerous attempts to build middleware or infrastructure for context-aware systems have been made, but these have provided only partial solutions; for instance, most have not adequately addressed issues such as mobility, fault tolerance or privacy. One of the goals of this paper is to provide an analysis of the requirements of a middleware for context-aware systems, drawing from both traditional distributed system goals and our experiences with developing context-aware applications. The paper also provides a critical review of several middleware solutions, followed by a comprehensive discussion of our own PACE middleware. Finally, it provides a comparison of our solution with the previous work, highlighting both the advantages of our middleware and important topics for future research.
---
paper_title: A Survey of Context-Aware Mobile Computing Research
paper_content:
Context-aware computing is a mobile computing paradigm in which applications can discover and take advantage of contextual information (such as user location, time of day, nearby people and devices, and user activity). Since it was proposed about a decade ago, many researchers have studied this topic and built several context-aware applications to demonstrate the usefulness of this new technology. Context-aware applications (or the system infrastructure to support them), however, have never been widely available to everyday users. In this survey of research on context-aware systems and applications, we looked in depth at the types of context used and models of context information, at systems that support collecting and disseminating context, and at applications that adapt to the changing context. Through this survey, it is clear that context-aware research is an old but rich area for research. The difficulties and possible solutions we outline serve as guidance for researchers hoping to make context-aware computing a reality.
---
paper_title: Understanding and Using Context Personal and Ubiquitous Computing Journal
paper_content:
A combined system of printing presses and a plurality of collators. Each of the collators assembles a freshly printed signature from the printing press with at least one preprinted signature. Each collator includes a plurality of hoppers for receiving signatures and means for feeding the signatures individually from the hoppers and for collating the signatures fed from the hoppers. A conveying system is provided for selectively either directing a stream of signatures from each of the respective printing presses to one respective hopper of each of the collators or directing a stream of signatures from either press to both of the one hoppers of both collators.
---
paper_title: Context sharing platform
paper_content:
When a ubiquitous society is realized, various types of sensors will be ubiquitous in the real world. The sensors will enable us to obtain information concerning the state of people, objects and environment (context). The importance of "context-aware services" that provide services to users according to their context information is increasing. By sharing context obtained by sensors set up by various types of business, a new service that utilizes context information can be created. NEC is conducting research and development of context sharing technology to enable various types of businesses to share context. A context sharing platform is a system that incorporates context sharing technology.
---
paper_title: Flexible Middleware Support for Future Mobile Services and Their Context-Aware Adaptation
paper_content:
This paper presents a flexible peer-to-peer-based middleware for future user-centric mobile telecommunication services, which supports key functionalities needed to address personalization, adaptation and coordination of services running on top of it. The underlying communication pattern is based on dynamic negotiation that enables interworking of autonomous decentralized entities in a rapidly changing and open environment. This paper focuses on the middleware’s support for context-aware adaptation of a multimedia service for mobile users. Service adaptation takes into account both user preferences and contextual changes to modify the service behavior and contents. The middleware implementation is based on JXTA extended by a mobile agent platform and is deployable on a range of mobile devices including mobile phones and PDAs.
---
paper_title: A Survey of Context-Aware Mobile Computing Research
paper_content:
Context-aware computing is a mobile computing paradigm in which applications can discover and take advantage of contextual information (such as user location, time of day, nearby people and devices, and user activity). Since it was proposed about a decade ago, many researchers have studied this topic and built several context-aware applications to demonstrate the usefulness of this new technology. Context-aware applications (or the system infrastructure to support them), however, have never been widely available to everyday users. In this survey of research on context-aware systems and applications, we looked in depth at the types of context used and models of context information, at systems that support collecting and disseminating context, and at applications that adapt to the changing context. Through this survey, it is clear that context-aware research is an old but rich area for research. The difficulties and possible solutions we outline serve as guidance for researchers hoping to make context-aware computing a reality.
---
paper_title: A service-oriented middleware for building context-aware services
paper_content:
The advancement of wireless networks and mobile computing necessitates more advanced applications and services to be built with context-awareness enabled and adaptability to their changing contexts. Today, building context-aware services is a complex task due to the lack of an adequate infrastructure support in pervasive computing environments. In this article, we propose a Service-Oriented Context-Aware Middleware (SOCAM) architecture for the building and rapid prototyping of context-aware services. It provides efficient support for acquiring, discovering, interpreting and accessing various contexts to build context-aware services. We also propose a formal context model based on ontology using Web Ontology Language to address issues including semantic representation, context reasoning, context classification and dependency. We describe our context model and the middleware architecture, and present a performance study for our prototype in a smart home environment.
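As a rough, plain-Python illustration of the kind of context modelling and reasoning such middleware supports (this is not SOCAM's OWL-based model; the entities, predicates and the single inference rule below are invented):

from dataclasses import dataclass

@dataclass
class ContextFact:
    subject: str
    predicate: str
    value: str

facts = [
    ContextFact("alice", "locatedIn", "bedroom"),
    ContextFact("bedroom", "lightLevel", "low"),
    ContextFact("alice", "posture", "lying"),
]

def infer(facts):
    """Derive a higher-level context fact from low-level sensed facts."""
    index = {(f.subject, f.predicate): f.value for f in facts}
    room = index.get(("alice", "locatedIn"))
    if room and index.get((room, "lightLevel")) == "low" and index.get(("alice", "posture")) == "lying":
        return ContextFact("alice", "activity", "sleeping")
    return None

print(infer(facts))

In SOCAM itself the facts would be OWL individuals and the rules would range over the ontology, but the general flow (low-level context acquired from sensors, higher-level context derived by a reasoner, both exposed to services) is roughly what the middleware described above provides.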
---
paper_title: CAMUS: a middleware supporting context-aware services for network-based robots
paper_content:
A URC (ubiquitous robotic companion) is a concept for a network-based service robot. It allows the service robot to extend its functions and services by utilizing external sensor networks and remote computing servers. It also provides the robot's services at any time and any place. The URC requires not only the hardware infrastructure such as ubiquitous networks or sensor networks and high-performance computing servers but also the software infrastructure which resides above the hardware infrastructure. In this paper, authors introduce the CAMUS (context-aware middleware for URC system) as a part of the software infrastructure, which is a system middleware to support context-aware services for network-based robots. The CAMUS is based on the CORBA technology. It provides the common data model for different types of context information from external sensors, applications and users in the environment. It also offers the software framework to acquire, interpret and disseminate context information. PLUE (Programming Language for Ubiquitous Environment) is proposed to describe context-aware services for robots.
---
paper_title: Middleware for Distributed Context-Aware Systems
paper_content:
Context-aware systems represent extremely complex and heterogeneous distributed systems, composed of sensors, actuators, application components, and a variety of context processing components that manage the flow of context information between the sensors/actuators and applications. The need for middleware to seamlessly bind these components together is well recognised. Numerous attempts to build middleware or infrastructure for context-aware systems have been made, but these have provided only partial solutions; for instance, most have not adequately addressed issues such as mobility, fault tolerance or privacy. One of the goals of this paper is to provide an analysis of the requirements of a middleware for context-aware systems, drawing from both traditional distributed system goals and our experiences with developing context-aware applications. The paper also provides a critical review of several middleware solutions, followed by a comprehensive discussion of our own PACE middleware. Finally, it provides a comparison of our solution with the previous work, highlighting both the advantages of our middleware and important topics for future research.
---
paper_title: The Java Context Awareness Framework (JCAF) - A Service Infrastructure and Programming Framework for Context-Aware Applications
paper_content:
Context-awareness is a key concept in ubiquitous computing. But to avoid developing dedicated context-awareness sub-systems for specific application areas there is a need for more generic programming frameworks. Such frameworks can help the programmer develop and deploy context-aware applications faster. This paper describes the Java Context-Awareness Framework – JCAF, which is a Java-based context-awareness infrastructure and programming API for creating context-aware computer applications. The paper presents the design goals of JCAF, its runtime architecture, and its programming model. The paper presents some applications of using JCAF in three different applications and discusses lessons learned from using JCAF.
---
paper_title: Situated Web Service: Context-Aware Approach to High-Speed Web Service Communication
paper_content:
A framework is proposed to improve Web Service performance based on context-aware communication. Two key ideas are introduced to represent a client context; (1) available protocols that the client can handle, and (2) operation usage that shows how the client uses Web Service operations. We call our context aware approach a Situated Web Service (SiWS). We implemented and evaluated the SiWS and found that the overall performance was improved if more than three Web Services were executed between context changes.
---
paper_title: Context-Aware Environment-Role-Based Access Control Model for Web Services
paper_content:
The paper presents a context-aware environment-role-based access control model (CERBAC). Unlike traditional access control systems, access decisions may depend on the context in which requests are made. The paper illustrates how the well-developed notion of roles can be used to capture security-relevant context of the environment in which access requests are made. By introducing environment roles, it creates a novel access control framework that incorporates context-based access control. Moreover, an architecture is presented that supports security policies which make use of environment roles to control access to resources. Furthermore, the paper outlines the configuration mechanism needed to apply the model to the Web services environment, and describes the implementation architecture for the system.
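
To make the environment-role idea concrete, the following minimal Python sketch (not taken from the paper; all role names, conditions and policy entries are hypothetical) shows an access decision that only grants an operation when the subject's role is combined with environment roles whose contextual conditions currently hold.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Context:
    current_time: time
    location: str

# Hypothetical environment roles: active only while their contextual condition holds.
ENV_ROLES = {
    "business_hours": lambda ctx: time(9, 0) <= ctx.current_time <= time(17, 0),
    "on_premises":    lambda ctx: ctx.location == "office",
}

# Hypothetical policy: (subject role, required environment roles) -> permitted operations.
POLICY = {
    ("clerk", frozenset({"business_hours", "on_premises"})): {"read_record"},
    ("admin", frozenset({"on_premises"})):                   {"read_record", "update_record"},
}

def active_env_roles(ctx: Context) -> frozenset:
    return frozenset(name for name, cond in ENV_ROLES.items() if cond(ctx))

def is_permitted(subject_role: str, operation: str, ctx: Context) -> bool:
    active = active_env_roles(ctx)
    for (role, required), ops in POLICY.items():
        if role == subject_role and required <= active and operation in ops:
            return True
    return False

if __name__ == "__main__":
    ctx = Context(current_time=time(10, 30), location="office")
    print(is_permitted("clerk", "read_record", ctx))    # True: on site during business hours
    print(is_permitted("clerk", "update_record", ctx))  # False: operation not granted to clerks
```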
---
paper_title: A context-aware system based on service-oriented architecture
paper_content:
Advances in mobile devices, sensors and wireless networks have motivated the development of context-aware applications. Such mobile applications use sensors to monitor several features such as location, temperature, velocity, weather, traffic, noise, air pollution, and so on. This monitoring enables service provision according to a given context. These applications are known as context-aware systems. Context-aware applications are sensitive to user needs and are personalized according to the user's profile, requirements and context. They evaluate the user's environment and push information that is relevant to the user's context. In this paper we present Omnipresent, a service-oriented architecture for context-aware applications. Omnipresent may be accessed from either mobile devices or Web browsers, and it is based on Web services and well-established standards for LBS applications such as those proposed by the Open Geospatial Consortium. Omnipresent offers several services: map presentation, routing, and advertisement, and it also works as a reminder tool.
---
paper_title: Ubiquitous Provision of Context-Aware Web Services
paper_content:
Providing context-aware Web services refers to an adaptive process of delivering contextually matched Web services to meet service requesters’ needs at the moment. This article presents an ontology-based context model that enables formal description and acquisition of contextual information pertaining to both service requesters and services. The context model is supported by context query and phased acquisition techniques. We also report two context-aware Web services built on top of our context model to demonstrate how the model can be used to facilitate Web services discovery and Web content adaptation. Implementation details of the context elicitation system and the evaluation results of context-aware services provision are also reported.
---
paper_title: Model-driven Composition of Context-aware Web Services Using ContextUML and Aspects
paper_content:
Service oriented architectures (SOAs) are constantly gaining ground for the provision of business to business as well as user-centric services, mainly in the form of Web services technology. SOAs enable service providers to design and deploy new, composite service offerings out of existing component services. In order to match end-user expectations with respect to personalization and ease of use, these services should be designed in a manner that allows them to exhibit a certain level of context-awareness which is a basic element towards a richer end-user experience. However, in the majority of such services, context-handling is still tightly coupled with the core functionality of the service, resulting in a design which is difficult to implement and maintain. The paper proposes the decoupling of core service logic from context-related functionality by adopting a model-driven approach based on a modified version of the ContextUML metamodel. Core service logic and context handling are treated as separate concerns at the modeling level as well as in the resulting source code where aspect oriented programming (AOP) encapsulates context-dependent behavior in discrete code modules. The design of a restaurant finder service is used to portray the modified ContextUML metamodel and the service modeling process which is covered in full. Respective code snippets belonging to the executable version of the service (part of work in progress) are also provided, illustrating the transition from model to code and the resulting separation of concerns.
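
The separation of concerns described above can be illustrated outside of UML and AspectJ; the sketch below uses a plain Python decorator as a stand-in for an aspect, so the core restaurant-finder logic stays free of context handling. The service, catalogue and context fields are invented for illustration and do not reproduce the paper's design.

```python
import functools

def context_aware(adapt):
    """Wrap a core service operation with context-dependent adaptation logic."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(request, context):
            result = func(request)         # core logic knows nothing about context
            return adapt(result, context)  # cross-cutting adaptation applied here
        return wrapper
    return decorator

def filter_by_distance(restaurants, context):
    """Hypothetical adaptation: keep only restaurants near the user's location."""
    max_km = context.get("max_distance_km", 5)
    return [r for r in restaurants if r["distance_km"] <= max_km]

@context_aware(adapt=filter_by_distance)
def find_restaurants(request):
    """Core service logic: a plain catalogue search, with no context handling inside."""
    catalogue = [
        {"name": "Trattoria", "cuisine": "italian",  "distance_km": 2.0},
        {"name": "Sushi Bar", "cuisine": "japanese", "distance_km": 8.5},
    ]
    return [r for r in catalogue if r["cuisine"] == request["cuisine"]]

if __name__ == "__main__":
    print(find_restaurants({"cuisine": "italian"}, {"max_distance_km": 3}))
```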
---
paper_title: A Framework for Context-Aware Adaptable Web Services
paper_content:
The trend towards pervasive computing involves an increasing number of ubiquitous, connected devices. As a consequence, the heterogeneity of client capabilities and the number of methods for accessing information services on the Internet also increases. Nevertheless, consumers expect information services to be accessible from all of these devices in a similar fashion. They also expect that information services are aware of their current environment. Generally, this kind of information is called context. More precisely, in our work context constitutes information about consumers and their environment that may be used by Web services to provide consumers a customized and personalized behaviour.
---
paper_title: Using P3P in a web services-based context-aware application platform
paper_content:
This paper describes a proposal for a privacy control architecture to be applied in the WASP project. The WASP project aims to develop a context-aware service platform on top of 3G networks, using web services technology. The proposed privacy control architecture is based on the P3P privacy policy description standard defined by W3C. The paper identifies extensions to P3P and its associated preference expression language APPEL that are needed to operate in a context-aware environment.
---
paper_title: A Survey on Context-aware systems
paper_content:
Context-aware systems offer entirely new opportunities for application developers and for end users by gathering context data and adapting systems behaviour accordingly. Especially in combination with mobile devices, these mechanisms are of high value and are used to increase usability tremendously. In this paper, we present common architecture principles of context-aware systems and derive a layered conceptual design framework to explain the different elements common to most context-aware architectures. Based on these design principles, we introduce various existing context-aware systems focusing on context-aware middleware and frameworks, which ease the development of context-aware applications. We discuss various approaches and analyse important aspects in context-aware computing on the basis of the presented systems.
---
paper_title: A Survey of Context Adaptation in Autonomic Computing
paper_content:
Autonomic computing (AC) is an emerging paradigm aiming at simplifying the administration of complex computer systems. Efforts required to deploy and maintain complex systems are usually high. Autonomic computing may help to reduce these efforts by allowing administrators to define abstract policies and then enable systems to configure, optimize and maintain themselves according to the specified policies. Context adaptation can be regarded as an enabling technology for future applications in the field of autonomic computing. In this paper we present a survey of past and future secrets of this enabling technology in Autonomic computing.
---
paper_title: A data-oriented survey of context models
paper_content:
Context-aware systems are pervading everyday life, therefore context modeling is becoming a relevant issue and an expanding research field. This survey has the goal to provide a comprehensive evaluation framework, allowing application designers to compare context models with respect to a given target application; in particular we stress the analysis of those features which are relevant for the problem of data tailoring. The contribution of this paper is twofold: a general analysis framework for context models and an up-to-date comparison of the most interesting, data-oriented approaches available in the literature.
---
paper_title: Putting Context in Context: The Role and Design of Context Management in a Mobility and Adaptation Enabling Middleware
paper_content:
The operating context of mobile applications and services is constantly changing. In order to achieve higher levels of usability, mobile applications and services need to adapt to changes in context. This paper argues the need for adaptation-enabling middleware that simplifies the development of context-aware adaptive applications and makes it economically and practically feasible to develop such applications. We claim that the traditional approach of simply providing contextual information to applications and letting them handle the adaptation can be ineffective. We suggest a holistic approach where context management is an integral part of a more comprehensive adaptation-enabling middleware. This paper describes the role and the design of the context management component in such a middleware architecture. The feasibility of the approach is demonstrated in a scenario where proof-of-concept implementations have been developed and evaluated.
---
paper_title: An Ontology for Context-Aware Pervasive Computing Environments
paper_content:
This document describes COBRA-ONT, an ontology for supporting pervasive context-aware systems. COBRA-ONT, expressed in the Web Ontology Language OWL, is a collection of ontologies for describing places, agents and events and their associated properties in an intelligent meeting-room domain. This ontology is developed as a part of the Context Broker Architecture (CoBrA), a broker-centric agent architecture that provides knowledge sharing, context reasoning and privacy protection supports for pervasive context-aware systems. We also describe an inference engine for reasoning with information expressed using the COBRA-ONT ontology and the ongoing research in using the DAML-Time ontology for context reasoning.
---
paper_title: ContextUML: a UML-based modeling language for model-driven development of context-aware Web services
paper_content:
Context-aware Web services are emerging as a promising technology for the electronic businesses in mobile and pervasive environments. Unfortunately, complex context-aware services are still hard to build. In this paper, we present a modeling language for the model-driven development of context-aware Web services based on the Unified Modeling Language (UML). Specifically, we show how UML can be used to specify information related to the design of context-aware services. We present the abstract syntax and notation of the language and illustrate its usage using an example service. Our language offers significant design flexibility that considerably simplifies the development of context-aware Web services.
---
paper_title: Integrating a Context Model in Web Services
paper_content:
Nowadays, with the wide diffusion of mobile technology and ubiquitous systems, context has become the ear and the eye of information systems. These systems are increasingly based on the use of Web services. The classical architecture of these services, which allows interoperable interaction between service users and providers, does not take context adaptation into account. In this article, we aim to integrate context adaptation within the classical architecture of Web services, adding dedicated components that return to nomadic users a list of Web services adapted not only to their profile but also to their context.
---
paper_title: Quality of Context: What It Is And Why We Need It
paper_content:
When people interact with each other, they implicitly make use of context information while intuitively deducing and interpreting their actual situation. Compared to humans, IT infrastructures cannot easily take advantage of context information in interactions. Typically, context information has to be provided explicitly. Recently, cellular network operators have been showing interest in offering Context-Aware Services (CAS) in the future. For a service to be context-aware it must be able to use context information in order to adapt its behavior or the content it provides. Examples of CASs are restaurant finders, tour guides and dating services. These services will depend on the availability of context information which must be provided at the right time, in the right quality, and at the right place. The quality of this context information is neither identical to Quality of Service (QoS), nor to the quality of the underlying hardware components, i.e., Quality of Device (QoD). Rather, the precision, probability of correctness, trustworthiness, resolution, and up-to-dateness of context information form a new set of quality parameters which we call Quality of Context (QoC). In this paper, we will discuss what QoC is, what its most important parameters are and how QoC relates to QoS and QoD. These three notions of quality are unequal, but not unrelated. Based on several examples we will show the interdependence between them. We will argue that QoC as a new notion of quality is necessary to allow for the provisioning of CASs in an interorganizational manner.
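
As a rough illustration of how the QoC parameters listed above could travel with a context reading, the hedged sketch below attaches them to a value and applies a simple acceptance test; the parameter ranges and thresholds are assumptions, not part of the cited work.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class QoC:
    """Illustrative Quality-of-Context metadata attached to one context reading."""
    precision: float            # e.g. +/- metres for a location fix
    probability_correct: float  # 0..1 estimate that the reported value is correct
    trust: float                # 0..1 trust placed in the source of the value
    resolution: float           # spatial/temporal granularity of the reading
    timestamp: datetime         # when the value was produced (for up-to-dateness)

def usable(qoc: QoC, max_age: timedelta, min_prob: float, min_trust: float) -> bool:
    """Hypothetical acceptance test a context-aware service might apply."""
    fresh = datetime.now() - qoc.timestamp <= max_age
    return fresh and qoc.probability_correct >= min_prob and qoc.trust >= min_trust

if __name__ == "__main__":
    reading = QoC(precision=10.0, probability_correct=0.9, trust=0.8,
                  resolution=1.0, timestamp=datetime.now() - timedelta(seconds=30))
    print(usable(reading, max_age=timedelta(minutes=1), min_prob=0.7, min_trust=0.5))
```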
---
paper_title: Location History in a Low-cost Context Awareness Environment
paper_content:
Location awareness is a crucial part of the context-awareness mechanism for ubiquitous computing. This paper explores how useful the location-awareness history is for an office-based, low-cost context-awareness environment. Capturing location-awareness data in a relational database is simple and feasible in an office environment. We use extended SQL to access the location-awareness history database to provide direct support for speech commands. The mechanism improves flexibility for developing context-awareness applications in the Intelligent Environment.
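
The paper's extended SQL is not reproduced here, but the general idea of keeping a queryable location history in a relational store can be sketched with standard SQL; the table layout and sample query below are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE location_history (
        person  TEXT,
        room    TEXT,
        seen_at TEXT   -- ISO-8601 timestamp
    )
""")
conn.executemany(
    "INSERT INTO location_history VALUES (?, ?, ?)",
    [("alice", "lab",     "2024-01-10T09:05:00"),
     ("alice", "meeting", "2024-01-10T10:15:00"),
     ("bob",   "lab",     "2024-01-10T10:20:00")],
)

# Last known location of each person -- the kind of query a speech command
# such as "where was Alice last seen?" could be mapped to.
rows = conn.execute("""
    SELECT h.person, h.room, h.seen_at
    FROM location_history AS h
    JOIN (SELECT person, MAX(seen_at) AS latest
          FROM location_history GROUP BY person) AS m
      ON h.person = m.person AND h.seen_at = m.latest
    ORDER BY h.person
""").fetchall()
print(rows)
```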
---
paper_title: Context-based personalization of Web services composition and provisioning
paper_content:
This work presents an approach that aims at personalizing Web services composition and provisioning using context. Composition addresses the situation of a user's request that cannot be satisfied by any available service, and thus requires the combination of several Web services. Provisioning focuses on the deployment of Web services according to users' preferences. A Web service is an accessible application that other applications and humans can discover and trigger. Context is the information that characterizes the interactions between humans, applications, and the surrounding environment. Web services are subject to personalization if there is a need to accommodate users' preferences during service performance and outcome delivery. To be able to track personalization in terms of what happened, what is happening, and what might happen, three types of context are devised, and they are referred to as user-, Web service-, and resource-context.
---
paper_title: Towards Context-Aware Mobile Web 2.0 Service Architecture
paper_content:
The emergence of new lightweight Web technologies and the development of mobile devices leads to a situation, where the users can consume the same Web services regardless of the place, time and device. Mobile devices are equipped with networking capabilities and sensors that provide versatile context and user-community information. This information enhances the user experience as it can be used to compensate the limited means of input. We present a context-aware mobile Web 2.0 service architecture that connects user context and community information with the Web services. This convergence enables the development of device-independent services that are enriched and personalized with user context and community information. Mobile middleware may be needed for efficient delivery of this information from the mobile device to the Web services. Four novel communication models for the delivery are introduced, namely centralized control, centralized services, peer-to-peer services, and pure peer-to-peer. The purpose of the models is to offer a secure and reliable platform for creating new services. Finally, we study virtual communities and market structures of the proposed models from a multidisciplinary point of view. We claim that the selected technical model determines the prospective market structure.
---
| Title: A Survey on Context-aware Web Service Systems
Section 1: Introduction
Description 1: Provide an initial overview of the survey's scope, covering the importance of context-aware systems in web services and summarizing the structure of the paper.
Section 2: Related Work and Motivation
Description 2: Discuss existing work related to context-aware systems and explain the motivation behind the survey.
Section 3: Context-aware Systems
Description 3: Present an overview of general context-aware systems and specifically detail systems that are built on web services.
Section 4: Non Web Service-based System
Description 4: Examine various context-aware systems that do not rely on web services, focusing on their architectures and techniques.
Section 5: Web Service-based Context-aware Systems
Description 5: Provide an overview of context-aware systems that are built around web service technologies, highlighting the web service-related components.
Section 6: Existing Surveys on Context-aware Systems and Frameworks
Description 6: Summarize previous surveys on context-aware systems, focusing on their scope and findings, and highlight the gap that this survey aims to address.
Section 7: Context-aware Systems in Web services Environments
Description 7: Discuss the common structure and components of context-aware systems built in web service environments.
Section 8: Context Information and Context Representations
Description 8: Survey the types of context information used in these systems and the various languages and models for their representation.
Section 9: Context Sensor Techniques
Description 9: Analyze the techniques used by context sensors to capture and provide contextual information in web service environments.
Section 10: Context Storage Techniques
Description 10: Discuss the methods used for storing context information, focusing on database technologies and access interfaces.
Section 11: Context Distribution Techniques
Description 11: Examine the different techniques used to distribute context information in web service environments, including transport protocols and overlay networks.
Section 12: Context Reasoning Techniques
Description 12: Discuss the reasoning techniques employed to derive new context information from existing data.
Section 13: Context Adaptation Techniques
Description 13: Analyze how systems adapt to changing contexts, covering aspects like service selection, security, privacy, and content adaptation.
Section 14: Conclusion
Description 14: Summarize the findings of the survey and propose future research directions to address current challenges in developing context-aware web service systems. |
A Survey on Information Retrieval, Text Categorization, and Web Crawling | 22 | ---
paper_title: A vector space model for automatic indexing
paper_content:
In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstrating the usefulness of the model.
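
For readers unfamiliar with the vector space model underlying this work, the short sketch below builds term-count vectors and ranks documents against a query by cosine similarity; it is a generic textbook illustration, not the indexing-vocabulary selection procedure of the paper.

```python
import math
from collections import Counter

def vectorize(text, vocabulary):
    """Represent a text as a vector of raw term counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[term] for term in vocabulary]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = ["information retrieval with index terms",
        "automatic indexing of documents",
        "cooking pasta with tomato sauce"]
vocab = sorted({w for d in docs for w in d.lower().split()})
vectors = [vectorize(d, vocab) for d in docs]

query = vectorize("automatic document indexing", vocab)
scores = [cosine(query, v) for v in vectors]
print(scores)  # the second document should score highest
```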
---
paper_title: A Statistical Approach to Mechanized Encoding and Searching of Literary Information
paper_content:
Written communication of ideas is carried out on the basis of statistical probability in that a writer chooses that level of subject specificity and that combination of words which he feels will convey the most meaning. Since this process varies among individuals and since similar ideas are therefore relayed at different levels of specificity and by means of different words, the problem of literature searching by machines still presents major difficulties. A statistical approach to this problem will be outlined and the various steps of a system based on this approach will be described. Steps include the statistical analysis of a collection of documents in a field of interest, the establishment of a set of "notions" and the vocabulary by which they are expressed, the compilation of a thesaurus-type dictionary and index, the automatic encoding of documents by machine with the aid of such a dictionary, the encoding of topological notations (such as branched structures), the recording of the coded information, the establishment of a searching pattern for finding pertinent information, and the programming of appropriate machines to carry out a search.
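
A minimal sketch of the statistical intuition, assuming a hand-picked stop-word list: frequent content words (after discarding very common function words) are taken as candidate "notions". This is only an illustration of the idea, not Luhn's actual procedure or dictionary.

```python
from collections import Counter

STOP_WORDS = {"the", "of", "and", "a", "to", "in", "is", "by", "for", "on"}

def significant_terms(text, top_n=5):
    """Rank content words by frequency after removing very common function words."""
    tokens = [t.strip(".,;:()").lower() for t in text.split()]
    counts = Counter(t for t in tokens if t and t not in STOP_WORDS)
    return counts.most_common(top_n)

sample = ("Encoding of documents and searching of encoded documents "
          "is carried out by statistical analysis of the documents.")
print(significant_terms(sample))
# e.g. [('documents', 3), ('encoding', 1), ...] -- 'of', 'the', 'and' are filtered out
```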
---
paper_title: The text mining handbook: advanced approaches in analyzing unstructured data
paper_content:
Providing an in-depth examination of core text mining and link detection algorithms and operations, this text examines advanced pre-processing techniques, knowledge representation considerations, and visualization approaches.
---
| Title: A Survey on Information Retrieval, Text Categorization, and Web Crawling
Section 1: Introduction
Description 1: Provide an overview of information retrieval (IR), its historical context, and its applications.
Section 2: The Vector Space Model
Description 2: Explain the Vector Space Model for representing documents and queries and describe how it works.
Section 3: Term Frequency and Weighting
Description 3: Discuss the concept of term frequency, weighting of terms, and introduce Inverse Document Frequency (idf).
Section 4: Extracting and Calculating Term Frequency
Description 4: Describe tokenization and methods for extracting term frequency from documents.
Section 5: Document Preprocessing
Description 5: Explore various preprocessing techniques for documents to improve IR system performance.
Section 6: Stop Words
Description 6: Explain the concept of stop words and their exclusion during the tokenization process, along with potential issues.
Section 7: Normalization
Description 7: Discuss token normalization and the creation of equivalence classes.
Section 8: Hyphens, Punctuations and Digits
Description 8: Describe techniques for handling hyphens, punctuation, and digits in documents.
Section 9: Capitalization and Case-Folding
Description 9: Explain case-folding and strategies for handling capitalization in text documents.
Section 10: Stemming and Lemmatization
Description 10: Detail the processes of stemming and lemmatization to reduce inflection and derivation in words.
Section 11: Synonymy
Description 11: Discuss the importance of handling synonyms in IR and the use of a thesaurus.
Section 12: Evaluation of Information Retrieval
Description 12: Explain evaluation metrics such as recall and precision for assessing IR systems.
Section 13: Text Categorization
Description 13: Define text categorization and describe its applications and the main approaches used.
Section 14: Document Representation
Description 14: Explain different methods for representing documents for text categorization.
Section 15: Knowledge Engineering
Description 15: Discuss the knowledge engineering approach to building text categorization systems.
Section 16: Machine Learning
Description 16: Detail the machine learning approach for text categorization, including supervised learning and the steps involved.
Section 17: Naive Bayes Text Classification
Description 17: Provide an overview of the Naive Bayes supervised learning method for text classification.
Section 18: Support Vector Machines (SVM)
Description 18: Discuss the Support Vector Machine method for text classification and its key components.
Section 19: K-Nearest Neighbor
Description 19: Explain the k-Nearest Neighbor algorithm for text classification and its advantages and limitations.
Section 20: Comparison among Classifiers
Description 20: Compare various classifiers used in text categorization and discuss their performance.
Section 21: Web Crawling and the World Wide Web
Description 21: Introduce the concept of web crawling, its components, and the process of crawling the World Wide Web.
Section 22: Special Topics in Web Crawling
Description 22: Discuss advanced topics in web crawling, such as the robots exclusion protocol. |
Algorithm/Architecture Co-Exploration of Visual Computing on Emergent Platforms: Overview and Future Prospects | 9 | ---
paper_title: A high-quality spatial-temporal content-adaptive deinterlacing algorithm
paper_content:
This paper introduces a spatial-temporal content-adaptive algorithm, which can precisely select an appropriate interpolation technique for high-quality deinterlacing according to the spectral, edge-oriented and statistical features of local video content. Our algorithm employs a linear-phase statistical-adaptive vertical-temporal filter to deal with generic video scenes and adopts a modified edge-based line-averaging interpolation to efficiently recover moving edges. In addition, annoying flickering artifacts are efficiently suppressed by flickering detection and a field-averaging filter. As a result, our algorithm outperforms other non-motion-compensated methods in terms of objective PSNR and reveals more impressive subjective visual quality.
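
The content-adaptive filters themselves are not reproduced here; as a baseline reference, the sketch below implements plain intra-field line averaging with NumPy, the simple interpolation that adaptive schemes such as the one above refine with edge and spectral analysis.

```python
import numpy as np

def line_average_deinterlace(field, top_field=True):
    """Fill the missing lines of one field by averaging the lines above and below.

    `field` holds only the available lines (H/2 x W); the returned frame is H x W.
    This is the plain intra-field baseline that content-adaptive methods refine.
    """
    h2, w = field.shape
    frame = np.zeros((2 * h2, w), dtype=np.float32)
    known = slice(0, None, 2) if top_field else slice(1, None, 2)
    frame[known] = field
    missing_rows = range(1, 2 * h2, 2) if top_field else range(0, 2 * h2, 2)
    for r in missing_rows:
        above = frame[r - 1] if r - 1 >= 0 else frame[r + 1]
        below = frame[r + 1] if r + 1 < 2 * h2 else frame[r - 1]
        frame[r] = 0.5 * (above + below)
    return frame

field = np.arange(12, dtype=np.float32).reshape(3, 4)  # toy 3-line field
print(line_average_deinterlace(field))
```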
---
paper_title: A motion-adaptive deinterlacer via hybrid motion detection and edge-pattern recognition
paper_content:
A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video contents makes the processing of assorted motion, edges, textures, and the combination of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper exhibits the flexibility in processing both textures and edges which need to be separately accomplished by line average and edge-based line average before. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection features accurate detection of fast and slow motion in interlaced video and also the motion with edges. Using only three fields for detection also renders higher temporal correlation for interpolation. The better performance of our deinterlacing algorithm with higher content-adaptability and less memory cost than the state-of-the-art 4-field motion detection algorithms can be seen from the subjective and objective experimental results of the CIF and PAL video sequences.
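
A schematic sketch of the switching principle (not the paper's hybrid detector): compare two fields of the same parity, flag pixels whose difference exceeds a threshold as moving, then take the spatial estimate there and the temporal estimate elsewhere. The threshold and array shapes are arbitrary.

```python
import numpy as np

def motion_adaptive_blend(prev_same_parity_field, spatial_estimate, temporal_estimate,
                          current_same_parity_field, threshold=12.0):
    """Per-pixel switch between temporal and spatial interpolation.

    Motion is flagged where two fields of the same parity differ strongly;
    flagged pixels take the spatial estimate, static pixels the temporal one.
    This is only a schematic stand-in for the paper's hybrid detector.
    """
    diff = np.abs(current_same_parity_field.astype(np.float32)
                  - prev_same_parity_field.astype(np.float32))
    motion = diff > threshold
    return np.where(motion, spatial_estimate, temporal_estimate)

rng = np.random.default_rng(0)
f0, f1 = rng.integers(0, 255, (2, 4, 4)).astype(np.float32)
print(motion_adaptive_blend(f0, spatial_estimate=f1, temporal_estimate=f0,
                            current_same_parity_field=f1).shape)
```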
---
paper_title: Motion Adaptive Deinterlacing via Edge Pattern Recognition
paper_content:
In this paper, a novel edge pattern recognition (EPR) deinterlacing algorithm with successive 4-field enhanced motion detection is introduced. The EPR algorithm surpasses the performance of ELA-based and other conventional methods especially at textural scenes. In addition, the current 4-field enhanced motion detection scheme overcomes conventional motion missing artifacts by gaining good motion detection accuracies and suppression of "motion missing" detection errors efficiently. Furthermore, with the incorporation of our new successive 4-field enhanced motion detection, the interpolation technique of EPR algorithm is capable of flexible adaptation in achieving better performance on textural scenes in generic video sequences.
---
paper_title: Spatial-temporal content-adaptive deinterlacing algorithm
paper_content:
An algorithm, which adapts to an appropriate interpolation technique for high-quality deinterlacing based on the spatial-temporal spectral and edge-oriented features of local video contents, is introduced. This algorithm employs a spectrum-adaptive vertical-temporal filter to deal with generic video scenes, and adopts a texture-adaptive edge-based line-averaging interpolation with an edge-consistency check to interpolate moving edges with high efficiency. This deinterlacing algorithm likewise incorporates an economical recursive stationary-pixel detection scheme with a field-averaging filter to effectively interpolate stationary video scenes. Because of the precise selection of the interpolation technique for versatile video contents, the proposed algorithm provides not only better objective peak signal-to-noise ratio, but also results in more impressive subjective visual quality when compared with other non-motion compensated deinterlacing approaches.
---
paper_title: Rate control algorithm based on intra-picture complexity for H.264/AVC
paper_content:
An efficient rate control algorithm based on the content-adaptive initial quantisation parameter (QP) setting scheme and the peak signal-to-noise ratio (PSNR) variation-limited bit-allocation strategy for low-complexity mobile applications is presented. This algorithm can efficiently measure the residual complexity of intra-pictures without performing the computation-intensive intra-prediction and mode decision in H.264/AVC, based on the structural and statistical features of local textures. This can adaptively set proper initial QP values for versatile video contents. In addition, this bit-allocation strategy can effectively distribute bit-rate budgets based on the monotonic property to enhance overall coding efficiency while maintaining the consistency of visual quality by limiting the variation of quantisation distortion. The experimental results reveal that the proposed algorithm surpasses the conventional rate control approaches in terms of the average PSNR from 0.34 to 0.95 dB. Moreover, this algorithm provides more impressive visual quality and more robust buffer controllability when compared with other algorithms.
---
paper_title: Macroblock-level decoding and deblocking method and its pipeline implementation in H.264 decoder SOC design
paper_content:
This paper presents a macroblock-level (MB-level) decoding and deblocking method for supporting the flexible macroblock ordering (FMO) and arbitrary slice ordering (ASO) bit streams in H.264 decoder and its SOC/ASIC implementation. By searching the slice containing the current macroblock in the bit stream and switching slices correctly, MBs can be decoded in the raster scan order, while the decoding process can immediately begin as long as the slice containing the current MB is available. This architectural modification enables the MB-level decoding and deblocking 3-stage pipeline, and saves about 20% of SDRAM bandwidth. Implementation results showed that the design achieves real-time decoding of 1080HD (1920×1088@30 fps) at a system clock of 166 MHz.
---
paper_title: H.264/AVC baseline profile decoder complexity analysis
paper_content:
We study and analyze the computational complexity of a software-based H.264/AVC (advanced video codec) baseline profile decoder. Our analysis is based on determining the number of basic computational operations required by a decoder to perform the key decoding subfunctions. The frequency of use of each of the required decoding subfunctions is empirically derived using bitstreams generated from two different encoders for a variety of content, resolutions and bit rates. Using the measured frequencies, estimates of the decoder time complexity for various hardware platforms can be determined. A detailed example is provided to assist in deriving time complexity estimates. We compare the resulting estimates to numbers measured for an optimized decoder on the Pentium 3 hardware platform. We then use those numbers to evaluate the dependence of the time complexity of each of the major decoder subfunctions on encoder characteristics, content, resolution and bit rate. Finally, we compare an H.264/AVC-compliant baseline decoder to a decoder that is compliant with the H.263 standard, which is currently dominant in interactive video applications. Both "C" only decoder implementations were compared on a Pentium 3 hardware platform. Our results indicate that an H.264/AVC baseline decoder is approximately 2.5 times more time complex than an H.263 baseline decoder.
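
The methodology of estimating decoder time from operation counts can be illustrated with a toy calculation; the operation counts and per-operation cycle costs below are made-up placeholders, not measurements from the paper.

```python
# Toy illustration of the counting methodology: multiply (hypothetical) per-frame
# operation counts by (hypothetical) per-operation cycle costs for a target CPU,
# then convert the total into an achievable frame rate.
ops_per_frame = {          # basic operations per decoded frame (made-up numbers)
    "entropy_decoding":  0.9e6,
    "inverse_transform": 1.1e6,
    "interpolation":     2.1e6,
    "deblocking":        1.4e6,
}
cycles_per_op = {          # assumed cost of each operation class on the target CPU
    "entropy_decoding":  4.0,
    "inverse_transform": 1.2,
    "interpolation":     1.5,
    "deblocking":        2.0,
}

total_cycles = sum(ops_per_frame[k] * cycles_per_op[k] for k in ops_per_frame)
clock_hz = 1.0e9
print(f"{total_cycles:,.0f} cycles/frame -> {clock_hz / total_cycles:.1f} frames/s at 1 GHz")
```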
---
paper_title: Spatiotemporal video segmentation based on graphical models
paper_content:
This paper proposes a probabilistic framework for spatiotemporal segmentation of video sequences. Motion information, boundary information from intensity segmentation, and spatial connectivity of segmentation are unified in the video segmentation process by means of graphical models. A Bayesian network is presented to model interactions among the motion vector field, the intensity segmentation field, and the video segmentation field. The notion of the Markov random field is used to encourage the formation of continuous regions. Given consecutive frames, the conditional joint probability density of the three fields is maximized in an iterative way. To effectively utilize boundary information from the intensity segmentation, distance transformation is employed in local objective functions. Experimental results show that the method is robust and generates spatiotemporally coherent segmentation results. Moreover, the proposed video segmentation approach can be viewed as the compromise of previous motion based approaches and region merging approaches.
---
paper_title: Video Analysis and Compression on the STI Cell Broadband Engine Processor
paper_content:
With increased concern for physical security, video surveillance is becoming an important business area. Similar camera-based systems can also be used in such diverse applications as retail-store shopper motion analysis and casino behavioral policy monitoring. There are two aspects of video surveillance that require significant computing power: image analysis for detecting objects, and video compression for digital storage. The new STI Cell Broadband Engine (CBE) processor is an appealing platform for such applications because it incorporates 8 separate high-speed processing cores with an aggregate performance of 256 Gflops. Moreover, this chip is the heart of the new Sony PlayStation 3 and can be expected to be relatively inexpensive due to the high volume of production. In this paper we show how object detection and compression can be implemented on the CBE, discuss the difficulties encountered in porting the code, and provide performance results demonstrating significant speed-up.
---
paper_title: Visual pattern matching in motion estimation for object-based very low bit-rate coding using moment-preserving edge detection
paper_content:
This paper proposes an object-based coding method for very low bit-rate channels, using a method based on motion estimation with a block-based moment-preserving edge detector. In most existing object-based coding methods, only the global motion components are transmitted. However, the global motion prediction error is large, even after motion compensation using the discrete cosine transform (DCT), when images contain rapid moving objects and noise. Furthermore, the global motion-compensating method cannot result in small prediction error if the segmented objects consist of subobjects that move along different directions. The technique proposed in this paper involves segmenting moving objects from video sequences and representing objects compactly by visual-pattern approximations of the boundary. A visual pattern is obtained by detecting the line edge from a square block using the moment-preserving edge detector. Since high computational complexity is required for motion estimation using block matching, a fast block-matching method based on the visual patterns is proposed to reduce the burden of the overall coder complexity. Computer simulation results show that the proposed method gives good performance in terms of the subjective quality, the peak signal-to-noise ratio, and the compression ratio.
---
paper_title: Image classification for content-based indexing
paper_content:
Grouping images into (semantically) meaningful categories using low-level visual features is a challenging and important problem in content-based image retrieval. Using binary Bayesian classifiers, we attempt to capture high-level concepts from low-level image features under the constraint that the test image does belong to one of the classes. Specifically, we consider the hierarchical classification of vacation images; at the highest level, images are classified as indoor or outdoor; outdoor images are further classified as city or landscape; finally, a subset of landscape images is classified into sunset, forest, and mountain classes. We demonstrate that a small vector quantizer (whose optimal size is selected using a modified MDL criterion) can be used to model the class-conditional densities of the features, required by the Bayesian methodology. The classifiers have been designed and evaluated on a database of 6931 vacation photographs. Our system achieved a classification accuracy of 90.5% for indoor/outdoor, 95.3% for city/landscape, 96.6% for sunset/forest and mountain, and 96% for forest/mountain classification problems. We further develop a learning method to incrementally train the classifiers as additional data become available. We also show preliminary results for feature reduction using clustering techniques. Our goal is to combine multiple two-class classifiers into a single hierarchical classifier.
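
The hierarchy itself (indoor/outdoor, then city/landscape, then sunset/forest/mountain) is easy to express as nested binary decisions; in the sketch below the binary classifiers are abstract callables rather than the Bayesian classifiers over vector-quantized features used in the paper.

```python
def classify_vacation_photo(features, is_indoor, is_city, is_sunset, is_forest):
    """Walk the indoor/outdoor -> city/landscape -> sunset/forest/mountain hierarchy.

    Each `is_*` argument is a binary classifier taking a feature vector and
    returning True/False; here they are left abstract stand-ins.
    """
    if is_indoor(features):
        return "indoor"
    if is_city(features):
        return "outdoor/city"
    if is_sunset(features):
        return "outdoor/landscape/sunset"
    return "outdoor/landscape/forest" if is_forest(features) else "outdoor/landscape/mountain"

# Toy usage with trivial threshold "classifiers" on a 2-element feature vector.
label = classify_vacation_photo(
    [0.2, 0.9],
    is_indoor=lambda f: f[0] > 0.5,
    is_city=lambda f: f[1] < 0.3,
    is_sunset=lambda f: f[1] > 0.8,
    is_forest=lambda f: False,
)
print(label)  # -> "outdoor/landscape/sunset"
```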
---
paper_title: Predictive watershed: A fast watershed algorithm for video segmentation
paper_content:
The watershed transform is a key operator in video segmentation algorithms. However, the computation load of watershed transform is too large for real-time applications. In this paper, a new fast watershed algorithm, named P-watershed, for image sequence segmentation is proposed. By utilizing the temporal coherence property of the video signal, this algorithm updates watersheds instead of searching watersheds in every frame, which can avoid a lot of redundant computation. The watershed process can be accelerated, and the segmentation results are almost the same as those of conventional algorithms. Moreover, an intra-inter watershed scheme (IP-watershed) is also proposed to further improve the results. Experimental results show that this algorithm can save 20%-50% computation without degrading the segmentation results. This algorithm can be combined with any video segmentation algorithm to give more precise segmentation results. An example is also shown by combining a background registration and change-detection-based segmentation algorithm with P-Watershed. This new video segmentation algorithm can give accurate object masks with acceptable computation complexity.
---
paper_title: Region-based representations of image and video: Segmentation tools for multimedia services
paper_content:
This paper discusses region-based representations of image and video that are useful for multimedia services such as those supported by the MPEG-4 and MPEG-7 standards. Classical tools related to the generation of the region-based representations are discussed. After a description of the main processing steps and the corresponding choices in terms of feature spaces, decision spaces, and decision algorithms, the state of the art in segmentation is reviewed. Mainly tools useful in the context of the MPEG-4 and MPEG-7 standards are discussed. The review is structured around the strategies used by the algorithms (transition based or homogeneity based) and the decision spaces (spatial, spatio-temporal, and temporal). The second part of this paper proposes a partition tree representation of images and introduces a processing strategy that involves a similarity estimation step followed by a partition creation step. This strategy tries to find a compromise between what can be done in a systematic and universal way and what has to be application dependent. It is shown in particular how a single partition tree created with an extremely simple similarity feature can support a large number of segmentation applications: spatial segmentation, motion estimation, region-based coding, semantic object extraction, and region-based retrieval.
---
paper_title: Extraction of Perceptual Hue Feature Set for Color Image/Video Segmentation
paper_content:
In this paper, we present a simple but effective algorithm for the extraction of perceptual hue feature set used in color image/video segmentation with emphasis on color textures. Feature extraction, with significant impact on the overall image/video analysis process, plays a critical role in classification-based segmentation. Color textures are accurately characterized by the newly introduced feature set with invariance to illumination, translation, and rotation, which is contributed by the statistical scheme in exploring the distribution of six rudimentary colors and the achromatic component at local positions. The feature set provides characteristic information and enables segmentation that is more meaningful than the recently published works do.
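
A rough sketch of the kind of statistic involved, assuming HSV input: histogram local pixels into a few coarse hue sectors plus an achromatic bin gated by saturation. The bin layout and threshold are illustrative and do not follow the paper's definition of the six rudimentary colors.

```python
import numpy as np

def hue_feature(hsv_patch, sat_threshold=0.2):
    """7-bin feature: six coarse hue bins plus an achromatic bin, normalized.

    `hsv_patch` is an (N, 3) array of HSV pixels with H in [0, 360) and S, V in [0, 1].
    Bin boundaries and the saturation threshold are illustrative choices only.
    """
    h, s = hsv_patch[:, 0], hsv_patch[:, 1]
    achromatic = s < sat_threshold
    bins = np.floor((h[~achromatic] % 360) / 60).astype(int)  # 0..5: six 60-degree hue sectors
    hist = np.bincount(bins, minlength=6).astype(np.float64)
    feature = np.append(hist, achromatic.sum())
    return feature / max(len(hsv_patch), 1)

patch = np.array([[10, 0.9, 0.8], [130, 0.7, 0.6], [200, 0.05, 0.9]])
print(hue_feature(patch))
```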
---
paper_title: Multiresolution-Based Texture Adaptive Algorithm for High-Quality Deinterlacing
paper_content:
This paper introduces a texture analysis mechanism utilizing multiresolution technique to reduce false motion detection and hence thoroughly improve the interpolation results for high-quality deinterlacing. Conventional motion-adaptive deinterlacing algorithm selects from inter-field and intra-field interpolations according to motion. Accurate determination of motion information is essential for this purpose. Fine textures, having high local pixel variation, tend to cause false detection of motion. Based on hierarchical wavelet analysis, this algorithm provides much better perceptual visual quality and considerably higher PSNR than other motion adaptive deinterlacers as shown. In addition, a recursive 3-field motion detection algorithm is also proposed to achieve better performance than the traditional 2-field motion detection algorithm with little memory overhead.
---
paper_title: Real-time 3D computed tomographic reconstruction using commodity graphics hardware
paper_content:
The recent emergence of various types of flat-panel x-ray detectors and C-arm gantries now enables the construction of novel imaging platforms for a wide variety of clinical applications. Many of these applications require interactive 3D image generation, which cannot be satisfied with inexpensive PC-based solutions using the CPU. We present a solution based on commodity graphics hardware (GPUs) to provide these capabilities. While GPUs have been employed for CT reconstruction before, our approach provides significant speedups by exploiting the various built-in hardwired graphics pipeline components for the most expensive CT reconstruction task, backprojection. We show that the timings so achieved are superior to those obtained when using the GPU merely as a multi-processor, without a drop in reconstruction quality. In addition, we also show how the data flow across the graphics pipeline can be optimized, by balancing the load among the pipeline components. The result is a novel streaming CT framework that conceptualizes the reconstruction process as a steady flow of data across a computing pipeline, updating the reconstruction result immediately after the projections have been acquired. Using a single PC equipped with a single high-end commodity graphics board (the Nvidia 8800 GTX), our system is able to process clinically-sized projection data at speeds meeting and exceeding the typical flat-panel detector data production rates, enabling throughput rates of 40–50 projections s⁻¹ for the reconstruction of 512³ volumes.
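
The GPU pipeline itself is not reproduced here; the CPU-side NumPy sketch below shows unfiltered parallel-beam backprojection, the per-projection accumulation that such GPU frameworks accelerate (geometry and sizes are toy values).

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Unfiltered parallel-beam backprojection onto a size x size grid (CPU reference).

    `sinogram[i]` is the 1-D projection taken at `angles_deg[i]`. GPU implementations
    accelerate exactly this per-view accumulation over all image pixels.
    """
    recon = np.zeros((size, size), dtype=np.float64)
    coords = np.arange(size) - size / 2.0
    x, y = np.meshgrid(coords, coords)
    n_det = sinogram.shape[1]
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector coordinate of every pixel for this view (nearest-neighbour lookup).
        t = x * np.cos(theta) + y * np.sin(theta) + n_det / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]
    return recon / len(angles_deg)

# Toy example: a few constant projections backprojected onto a 64 x 64 grid.
sino = np.ones((8, 64))
print(backproject(sino, angles_deg=np.linspace(0, 180, 8, endpoint=False), size=64).shape)
```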
---
paper_title: Parallel Encoding - Decoding Operation for Multiview Video Coding with High Coding Efficiency
paper_content:
Multiview Video Coding (MVC) standardization is an ongoing effort aiming to extend H.264/AVC by developing novel tools optimized for 3D and multiview video use cases. One of the key identified requirements for the MVC standard is its ability to support parallel processing of different views. Parallel operation is known to be especially important for 3DTV applications, where the display needs to output many views simultaneously to support head-motion parallax. In this paper, we present a novel coding structure that enables parallel encoder/decoder operation for different views without compromising coding efficiency. This is achieved by systematically restricting the reference area of each view, so that encoding and decoding of macroblocks from different views can be efficiently pipelined and parallel operation of separate views becomes possible. As inter-view prediction is still used, the proposed structure achieves up to 0.9 dB gain compared to simulcast while maintaining the desired parallelism characteristics.
---
paper_title: Overview of free viewpoint television
paper_content:
We have been developing ray-based 3D information systems that consist of ray acquisition, ray processing, and ray display. Free viewpoint television (FTV) based on the ray-space method is a typical example. FTV will bring an epoch-making change in the history of television because it enables us to view a distant 3D world freely by changing our viewpoints as if we were there. We constructed a real-time FTV including the complete chain from capturing to display. A new algorithm was developed to generate free viewpoint images. In addition, a new user interface is presented for FTV to make full use of 3D information. FTV is not a pixel-based system but a ray-based system. We are creating ray-based image engineering through the development of FTV.
---
paper_title: Towards systematic exploration of tradeoffs for medical image registration on heterogeneous platforms
paper_content:
For the past decade, improving the performance and accuracy of medical image registration has been a driving force of innovation in medical imaging. The ultimate goal of accurate, robust, real-time image registration will enhance diagnoses of patients and enable new image-guided intervention techniques. With such a computationally intensive and multifaceted problem, improvements have been found in high performance platforms such as graphics processors (GPUs) and general purpose clusters, but there has yet to be a solution fast enough and effective enough to gain widespread clinical use. In this study, we examine the differences in accuracy and speed of implementations of the same image registration algorithm on a general purpose uniprocessor, a GPU, and a cluster of GPUs. We utilize a novel domain specific framework that allows us to simultaneously exploit parallelism on a heterogeneous platform. Using a set of representative images, we examine implementations with speedups of up to two orders of magnitude and accuracy varying from sub-millimeter to 2.6 millimeters of average error.
---
paper_title: Interpolator data compression for MPEG-4 animation
paper_content:
Interpolator representation in key-frame animation is now the most popular method for computer animation. The interpolator data consist of key and key value pairs, where a key is a time stamp and a key value is the corresponding value to the key. In this paper, we propose a set of new technologies to compress the interpolator data. The performance of the proposed technique is compared with the existing MPEG-4 generic compression tool. Throughout the core experiments in MPEG-4, the proposed technique showed its superiority over the existing tool, becoming a part of MPEG-4 standard within the Animation Framework eXtension framework.
---
paper_title: Future graphics architectures
paper_content:
Graphics architectures are in the midst of a major transition. In the past, these were specialized architectures designed to support a single rendering algorithm: the standard Z buffer. Realtime 3D graphics has now advanced to the point where the Z-buffer algorithm has serious shortcomings for generating the next generation of higher-quality visual effects demanded by games and other interactive 3D applications. There is also a desire to use the high computational capability of graphics architectures to support collision detection, approximate physics simulations, scene management, and simple artificial intelligence. In response to these forces, graphics architectures are evolving toward a general-purpose parallel-programming model that will support a variety of image-synthesis algorithms, as well as nongraphics tasks.
---
paper_title: GPU algorithms for radiosity and subsurface scattering
paper_content:
We capitalize on recent advances in modern programmable graphics hardware, originally designed to support advanced local illumination models for shading, to instead perform two different kinds of global illumination models for light transport. We first use the new floating-point texture map formats to find matrix radiosity solutions for light transport in a diffuse environment, and use this example to investigate the differences between GPU and CPU performance on matrix operations. We then examine multiple-scattering subsurface light transport, which can be modeled to resemble a single radiosity gathering step. We use a multiresolution meshed atlas to organize a hierarchy of precomputed subsurface links, and devise a three-pass GPU algorithm to render in real time the subsurface-scattered illumination of an object, with dynamic lighting and viewing.
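The matrix-radiosity gathering step that the paper maps onto floating-point textures corresponds to the iteration B_i <- E_i + rho_i * sum_j F_ij B_j; below is a minimal CPU sketch of one such Jacobi-style gathering pass (dense form-factor storage and the absence of a convergence test are simplifying assumptions).

    #include <cstddef>
    #include <vector>

    // One gathering iteration over n patches: B_i <- E_i + rho_i * sum_j F_ij * B_j.
    // E: emission, rho: reflectance, F: dense n x n form-factor matrix (row-major),
    // B: current radiosity estimate.
    std::vector<float> gather(const std::vector<float>& E,
                              const std::vector<float>& rho,
                              const std::vector<float>& F,
                              const std::vector<float>& B)
    {
        const std::size_t n = E.size();
        std::vector<float> Bnext(n);
        for (std::size_t i = 0; i < n; ++i) {
            float sum = 0.0f;
            for (std::size_t j = 0; j < n; ++j)
                sum += F[i * n + j] * B[j];          // radiosity gathered from every other patch
            Bnext[i] = E[i] + rho[i] * sum;
        }
        return Bnext;
    }

Iterating until B stops changing solves (I - diag(rho) F) B = E; on the GPU each output patch becomes a fragment whose shader performs this inner dot product through texture fetches.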
---
paper_title: An introduction to the MPEG-4 animation framework eXtension
paper_content:
This paper presents the MPEG-4 Animation Framework eXtension (AFX) standard, ISO/IEC 14496-16. Initiated by the MPEG Synthetic/Natural Hybrid Coding group in 2000, MPEG-4 AFX proposes an advanced framework for interactive multimedia applications using both natural and synthetic objects. Following this model, new synthetic objects have been specified, increasing content realism over existing MPEG-4 synthetic objects. The general overview of MPEG-4 AFX is provided on top of the review of MPEG-4 standards to explain the relationship between MPEG-4 and MPEG-4 AFX. Then we give a bird's-eye view of new tools available in this standard.
---
paper_title: REALIZATION AND OPTIMIZATION OF H.264 DECODER FOR DUAL-CORE SOC
paper_content:
A filter for filtering micro-emboli from a patient's blood during an angioplasty procedure is disclosed which comprises a plurality of curved wires connected to a rod between a first connector fixed with respect to the rod and a second connector slidingly mounted on the rod. Two layers of filter material are connected to opposite sides of the wires, and each layer includes perforations which are offset from the perforations in the other layer. When the rod and the wires are disposed within a catheter, the inner wall of the catheter compresses the wires toward the rod and when the rod is extended from the catheter, the wires resume their curved shape and pull the sliding connector along the rod toward the fixed connector.
---
paper_title: Overview of ITRI PAC project - from VLIW DSP processor to multicore computing platform
paper_content:
The Industrial Technology Research Institute (ITRI) PAC (parallel architecture core) project was initiated in 2003. The target is to develop a low-power and high-performance programmable SoC platform for multimedia applications. In the first PAC project phase (2004-2006), a 5-way VLIW DSP (PACDSP) processor was developed with our patented distributed & ping-pong register file and variable-length VLIW encoding techniques. A dual-core PAC SoC, which is composed of a PACDSP core and an ARM9 core, has also been designed and fabricated in the TSMC 0.13 μm technology to demonstrate its outstanding performance and energy efficiency for multimedia processing such as a real-time H.264 codec. This paper summarizes the technical contents of PACDSP, the DVFS (dynamic voltage and frequency scaling)-enabled PAC SoC, and the energy-aware multimedia codec. The research directions of our second-phase PAC project (PAC II), including multicore architectures, ESL (electronic system-level) technology, and a low-power multimedia framework, are also addressed in this paper.
---
paper_title: Trends in multicore DSP platforms
paper_content:
In the last two years, the embedded DSP market has been swept up by the general increase in interest in multicore that has been driven by companies such as Intel and Sun. One reason for this is that there is now a lot of focus on tooling in academia and also a willingness on the part of users to accept new programming paradigms. This industry-wide effort will have an effect on the way multicore DSPs are programmed and perhaps architected. But it is too early to say in what way this will occur. Programming multicore DSPs remains very challenging. The problem of how to take a piece of sequential code and optimally partition it across multiple cores remains unsolved. Hence, there will naturally be a lot of variations in the approaches taken. Equally important is the issue of debugging and visibility. Developing effective and easy-to-use code development and real-time debug tools is tremendously important as the opportunity for bugs goes up significantly when one starts to deal with both time and space. The markets that DSP plays in have unique features in their desire for low power, low cost, and hard real-time processing, with an emphasis on mathematical computation. How well the multicore research being performed presently in academia will address these concerns remains to be seen.
---
paper_title: Power efficient processor architecture and the cell processor
paper_content:
This paper provides a background and rationale for some of the architecture and design decisions in the cell processor, a processor optimized for compute-intensive and broadband rich media applications, jointly developed by Sony Group, Toshiba, and IBM. The paper discusses some of the challenges microprocessor designers face and provides motivation for performance per transistor as a reasonable first-order metric for design efficiency. Common microarchitectural enhancements relative to this metric are provided. Also alternate architectural choices and some of its limitations are discussed and non-homogeneous SMP as a means to overcome these limitations is proposed.
---
paper_title: Cell Broadband Engine Architecture and its first implementation—A performance view
paper_content:
The Cell Broadband Engine™ (Cell/B.E.) processor is the first implementation of the Cell Broadband Engine Architecture (CBEA), developed jointly by Sony, Toshiba, and IBM. In addition to use of the Cell/B.E. processor in the Sony Computer Entertainment PLAYSTATION® 3 system, there is much interest in using it for workstations, media-rich electronics devices, and video and image processing systems. The Cell/B.E. processor includes one PowerPC® processor element (PPE) and eight synergistic processor elements (SPEs). The CBEA is designed to be well suited for a wide variety of programming models, and it allows for partitioning of work between the PPE and the eight SPEs. In this paper we show that the Cell/B.E. processor can outperform other modern processors by approximately an order of magnitude and by even more in some cases.
---
paper_title: Larrabee: a many-core x86 architecture for visual computing
paper_content:
This paper presents a many-core visual computing architecture code named Larrabee, a new software rendering pipeline, a manycore programming model, and performance analysis for several applications. Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processor unit, as well as some fixed function logic blocks. This provides dramatically higher performance per watt and per unit of area than out-of-order CPUs on highly parallel workloads. It also greatly increases the flexibility and programmability of the architecture as compared to standard GPUs. A coherent on-die 2nd level cache allows efficient inter-processor communication and high-bandwidth local data access by CPU cores. Task scheduling is performed entirely with software in Larrabee, rather than in fixed function logic. The customizable software graphics rendering pipeline for this architecture uses binning in order to reduce required memory bandwidth, minimize lock contention, and increase opportunities for parallelism relative to standard GPUs. The Larrabee native programming model supports a variety of highly parallel applications that use irregular data structures. Performance analysis on those applications demonstrates Larrabee's potential for a broad range of parallel computation.
---
paper_title: Scientific Computing Kernels on the Cell Processor
paper_content:
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix-vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly-level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2 GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
---
paper_title: Multicore system-on-chip architecture for MPEG-4 streaming video
paper_content:
The newly defined MPEG-4 Advanced Simple (AS) profile delivers single-layered streaming video in digital television (DTV) quality in the promising 1-2 Mbit/s range. However, the coding tools involved add significantly to the complexity of the decoding process, raising the need for further hardware acceleration. A programmable multicore system-on-chip (SOC) architecture is presented which targets MPEG-4 AS profile decoding of ITU-R 601 resolution streaming video. Based on a detailed analysis of corresponding bitstream statistics, the implementation of an optimized software video decoder for the proposed architecture is described. Results show that overall performance is sufficient for real-time AS profile decoding of ITU-R 601 resolution video.
---
paper_title: Niagara: a 32-way multithreaded Sparc processor
paper_content:
The Niagara processor implements a thread-rich architecture designed to provide a high-performance solution for commercial server applications. This is an entirely new implementation of the Sparc V9 architectural specification, which exploits large amounts of on-chip parallelism to provide high throughput. The hardware supports 32 threads with a memory subsystem consisting of an on-board crossbar, level-2 cache, and memory controllers for a highly integrated design that exploits the thread-level parallelism inherent to server applications, while targeting low levels of power consumption.
---
paper_title: DVFS Aware Techniques on Parallel Architecture Core (PAC) Platform
paper_content:
Rapid developments of multimedia and communication technologies enrich the applications of portable devices. However, design flexibility and low power are two important criteria for real-time system development. In this paper, a DVFS-aware implementation is introduced to demonstrate an intelligent dynamic voltage and frequency scaling (DVFS) technique on the dual-core PAC platform. The power management of the DVFS technique is verified with an H.264/AVC decoder example, which can save 46% of power consumption.
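A minimal sketch of the kind of slack-based policy a DVFS-aware decoder can apply: scale the clock so that the decode time predicted from the previous frame just fits the frame period with a guard band. The frequency levels and the setFrequencyMHz/decodeOneFrameMs hooks are hypothetical placeholders for illustration, not the PAC platform API.

    #include <array>
    #include <functional>

    // Pick the lowest frequency whose predicted decode time still meets the deadline.
    // 'lastDecodeMs' was measured while running at 'lastFreqMHz'; decode time is
    // assumed to scale roughly as 1/frequency.
    int chooseFrequencyMHz(double lastDecodeMs, int lastFreqMHz, double framePeriodMs)
    {
        static const std::array<int, 4> levels = {104, 208, 312, 416};  // hypothetical levels
        const double work = lastDecodeMs * lastFreqMHz;                 // frequency-independent work estimate
        for (int f : levels)
            if (work / f <= 0.9 * framePeriodMs)                        // keep a 10% guard band
                return f;
        return levels.back();                                           // worst case: run at maximum
    }

    // Per-frame control loop (setFrequencyMHz and decodeOneFrameMs are hypothetical hooks).
    void decodeFrames(int nFrames, double framePeriodMs,
                      const std::function<void(int)>& setFrequencyMHz,
                      const std::function<double()>& decodeOneFrameMs)
    {
        int f = 416;                                   // start at the highest level
        for (int i = 0; i < nFrames; ++i) {
            setFrequencyMHz(f);
            double ms = decodeOneFrameMs();            // decode one frame and measure the time
            f = chooseFrequencyMHz(ms, f, framePeriodMs);
        }
    }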
---
paper_title: An Architecture for Programmable Multi-core IP Accelerated Platform with an Advanced Application of H.264 Codec Implementation
paper_content:
A new integrated programmable platform architecture is presented, with the support of multiple accelerators and extensible processing cores. An advanced application for this architecture is to facilitate the implementation of an H.264 baseline profile video codec. The platform architecture employs the novel concept of a virtual socket and optimized memory access to increase the efficiency of video encoding. The proposed architecture is mapped onto an integrated FPGA device, Annapolis WildCard-II or WildCard-4, for verification. According to the evaluation under different configurations, the results show that the overall performance of the architecture, with the integrated accelerators, can sufficiently meet the real-time encoding requirement for H.264 BP at basic levels, and achieve about 2–5.5 and 1–3 dB improvement, in terms of PSNR, as compared with MPEG-2 MP and MPEG-4 SP, respectively. The architecture is highly extensible, and thus can be utilized to benefit the development of multi-standard video codecs beyond the description in this paper.
---
paper_title: A Simulation and Exploration Technology for Multimedia-Application-Driven Architectures
paper_content:
The increasing computational power requirements of DSP and multimedia applications and the need for easy-to-program development environments have driven recent programmable devices toward Very Long Instruction Word (VLIW) [1] architectures and Hw-Sw co-design environments [2]. VLIW architectures allow generating optimized machine code from high-level languages by exploiting Instruction Level Parallelism (ILP) [3]. Furthermore, application requirements and time-to-market constraints are growing dramatically, moving functionality toward System on Chip (SoC) solutions. This paper presents VLIW-SIM, an Application-Driven Architecture-design approach based on Instruction Set simulation. VLIW architectures and Instruction Set simulation were chosen to fulfill multimedia domain requirements and to implement an efficient Hw-Sw co-design environment. The VLIW-SIM simulation technology is based on pipeline status modeling, Simulation cache, and Simulation Oriented Hw description. Effective support for Hw-Sw co-design requires high simulation performance (in terms of Simulated Instructions per Second, SIPS), flexibility (the ability to represent a number of different architectures), and cycle accuracy. There is a strong trade-off between these features: cycle-accurate or near-cycle-accurate simulation usually has low performance [4, 5]. Good simulation performance can be obtained by sacrificing simulator flexibility. Moreover, SoC simulation requires a further degree of flexibility in simulating different components (core, co-processors, memories, buses). The proposed approach is focused on an interpretative (not compiled [6]) re-configurable Instruction Set Simulator (ISS) in order to support both application design and architecture exploration. The main features of VLIW-SIM are: efficient host resource allocation, Instruction Set and Architecture description flexibility (Instruction Set Dynamic Generation and Simulation Oriented Hardware Description), step-by-step pipeline status tracking, and simulation speed and accuracy. Performance results of simulation tests for three validation case studies (TI TMS320C62x, TI TMS320C64x, and ST200) are reported.
---
paper_title: A Platform-Independent Methodology for Performance Estimation of Multimedia Signal Processing Applications
paper_content:
A methodological framework for performance estimation of multimedia signal processing applications on different implementation platforms is presented. The methodology derives a complexity profile which is characteristic for an application, but completely platform-independent. By correlating the complexity profile with platform-specific data, performance estimation results for different platforms are obtained. The methodology is based on a reference software implementation of the targeted application, but is, in contrast to instruction-level profiling-based approaches, fully independent of its optimization degree. The proposed methodology is demonstrated using the example of an MPEG-4 Advanced Simple Profile (ASP) video decoder. Performance estimation results are presented for two different platforms, a specialized VLIW media processor and an embedded general-purpose RISC processor, showing a high accuracy of the methodology. The approach can be employed to assist in design decisions in the specification phase of new architectures, in the selection process of a suitable target platform for a multimedia application, or in the optimization stage of a software implementation on a specific platform.
---
paper_title: System-Level Performance Analysis for Designing On-Chip Communication Architectures
paper_content:
This paper presents a novel system-level performance analysis technique to support the design of custom communication architectures for system-on-chip integrated circuits. Our technique fills a gap in existing techniques for system-level performance analysis, which are either too slow to use in an iterative communication architecture design framework (e.g., simulation of the complete system) or are not accurate enough to drive the design of the communication architecture (e.g., techniques that perform a "static" analysis of the system performance). Our technique is based on a hybrid trace-based performance-analysis methodology in which an initial cosimulation of the system is performed with the communication described in an abstract manner (e.g., as events or abstract data transfers). An abstract set of traces are extracted from the initial cosimulation containing necessary and sufficient information about the computations and communications of the system components. The system designer then specifies a communication architecture by: 1) selecting a topology consisting of dedicated as well as shared communication channels (shared buses) interconnected by bridges; 2) mapping the abstract communications to paths in the communication architecture; and 3) customizing the protocol used for each channel. The traces extracted in the initial step are represented as a communication analysis graph (CAG) and an analysis of the CAG provides an estimate of the system performance as well as various statistics about the components and their communication. Experimental results indicate that our performance-analysis technique achieves accuracy comparable to complete system simulation (an average error of 1.88%) while being over two orders of magnitude faster.
---
paper_title: Efficient optimal design space characterization methodologies
paper_content:
One of the primary advantages of a high-level synthesis system is its ability to explore the design space. This paper presents several methodologies for design space exploration that compute all optimal tradeoff points for the combined problem of scheduling, clock-length determination, and module selection. We discuss how each methodology takes advantage of the structure within the design space itself as well as the structure of, and interactions among, each of the three subproblems.
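For reference, the "all optimal tradeoff points" computed here are the non-dominated (Pareto-optimal) designs in the cost/latency plane; below is a minimal sketch of that filtering step over an already-enumerated candidate set (the two Design fields are illustrative assumptions).

    #include <vector>

    struct Design { double area; double latency; };   // one scheduling/clock/module choice

    // Keep only designs that no other candidate beats in both area and latency.
    std::vector<Design> paretoFront(const std::vector<Design>& candidates)
    {
        std::vector<Design> front;
        for (const Design& d : candidates) {
            bool dominated = false;
            for (const Design& e : candidates)
                if ((e.area < d.area && e.latency <= d.latency) ||
                    (e.area <= d.area && e.latency < d.latency)) { dominated = true; break; }
            if (!dominated) front.push_back(d);
        }
        return front;
    }

The contribution of the paper lies in enumerating the candidate set efficiently by exploiting the structure of the three coupled subproblems, not in this final filtering step.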
---
paper_title: Comparing analytical modeling with simulation for network processors: a case study
paper_content:
Programming network processors remains an art due to the variety of different network processor architectures and due to little support to reason and explore implementations on such architectures. We present a case study of mapping an IPv4 forwarding switch application on the Intel IXP1200 network processor and we compare this implementation with an analytical model of both the application and architecture used to evaluate different design alternatives. Our results not only show that we are able to model the IXP1200 and our application within 15% of the accuracy compared to that of IXP1200 simulation, but also find closely matching trends for different workloads. This shows the clear potential of such analytical techniques for design space exploration.
---
paper_title: An integrated environment for HW/SW co-design based on a CAL specification and HW/SW code generators
paper_content:
This demonstration presents an integrated environment that translates a CAL-based dataflow specification [1] into a heterogeneous implementation, composed by HDL and C codes. The demonstration focuses on the capability of the co-design environment to automatically build an executable heterogeneous system implementation running on a platform composed of a processor and a FPGA from the annotation of the CAL specification. The possibility of direct synthesis from a high level specification is a crucial issue for enabling efficient re-design cycles that include rapid prototyping and validation of performances of the final implementation. The design approach enabled by such integrated environment is particularly suited for development of complex processing systems such as video codecs. As a case study, the demonstration provides the analysis and validation of different software and hardware partitioning of a MPEG-4 Simple Profile decoder.
---
paper_title: A hierarchical simulation framework for application development on system-on-chip architectures
paper_content:
We propose a hierarchical simulation methodology to assist application development on System-on-Chip architectures. Hierarchical simulation involves simulation of a SoC based system at different levels of abstraction. Thus, it enables a system designer to exploit simulation speed vs. accuracy of results trade-offs. Vertical simulation is a special case of hierarchical simulation, where a feedback mechanism between the different simulation levels helps in "interpreting" the results of stand-alone simulations in the system-wide context. The paper presents an approach to perform vertical simulation of a class of applications under a simplified scenario.
---
paper_title: A framework for evaluating design tradeoffs in packet processing architectures
paper_content:
We present an analytical method to evaluate embedded network packet processor architectures, and to explore their design space. Our approach is in contrast to those based on simulation, which tend to be infeasible when the design space is very large. We illustrate the feasibility of our method using a detailed case study.
---
paper_title: YAPI: application modeling for signal processing systems
paper_content:
We present a programming interface called YAPI to model signal processing applications as process networks. The purpose of YAPI is to enable the reuse of signal processing applications and the mapping of signal processing applications onto heterogeneous systems that contain hardware and software components. To this end, YAPI separates the concerns of the application programmer, who determines the functionality of the system, and the system designer, who determines the implementation of the functionality. The proposed model of computation extends the existing model of Kahn process networks with channel selection to support non-deterministic events. We provide an efficient implementation of YAPI in the form of a C++ run-time library to execute the applications on a workstation. Subsequently, the applications are used by the system designer as input for mapping and performance analysis in the design of complex signal processing systems. We evaluate this methodology on the design of a digital video broadcast system-on-chip.
---
paper_title: Algorithm/Architecture Co-Design of 3-D Spatio–Temporal Motion Estimation for Video Coding
paper_content:
This paper presents a new spatio-temporal motion estimation algorithm and its VLSI architecture for video coding based on an algorithm and architecture co-design methodology. The algorithm consists of the new strategies of spatio-temporal motion vector prediction, a modified one-at-a-time search scheme, and multiple update paths derived from optimization theory. The hardware specification targets high-definition video coding. We applied the ME algorithm to the H.264 reference software. Our algorithm surpasses recently published research and achieves performance close to full search. The VLSI implementation demonstrates the low-cost nature of our algorithm. The algorithm and architecture co-design concept is highly emphasized in this paper. We provide quantitative examples to show the necessity of algorithm and architecture co-design.
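A minimal sketch of the two ingredients combined above, a predicted starting vector followed by a one-at-a-time (axis-by-axis) SAD refinement; the 16x16 block size and SAD metric are standard, but the specific spatio-temporal predictor set and the multiple update paths of the paper are not reproduced here.

    #include <climits>
    #include <cstdint>
    #include <cstdlib>
    #include <initializer_list>

    struct MV { int x, y; };

    // Sum of absolute differences between the 16x16 block of 'cur' at (bx, by)
    // and the block of 'ref' displaced by (mx, my). Out-of-frame candidates return INT_MAX.
    int sad16(const uint8_t* cur, const uint8_t* ref, int w, int h,
              int bx, int by, int mx, int my)
    {
        if (bx + mx < 0 || by + my < 0 || bx + mx + 16 > w || by + my + 16 > h) return INT_MAX;
        int s = 0;
        for (int y = 0; y < 16; ++y)
            for (int x = 0; x < 16; ++x)
                s += std::abs(int(cur[(by + y) * w + bx + x]) -
                              int(ref[(by + my + y) * w + bx + mx + x]));
        return s;
    }

    // One-at-a-time search: starting from a spatio-temporally predicted vector,
    // step along x while the SAD improves, then along y, alternating until converged.
    MV oneAtATimeSearch(const uint8_t* cur, const uint8_t* ref, int w, int h,
                        int bx, int by, MV pred)
    {
        MV best = pred;
        int bestSad = sad16(cur, ref, w, h, bx, by, best.x, best.y);
        bool moved = true;
        while (moved) {
            moved = false;
            for (int axis = 0; axis < 2; ++axis)
                for (int dir : {-1, +1})
                    for (;;) {                               // keep stepping while it improves
                        MV cand = best;
                        (axis == 0 ? cand.x : cand.y) += dir;
                        int s = sad16(cur, ref, w, h, bx, by, cand.x, cand.y);
                        if (s < bestSad) { best = cand; bestSad = s; moved = true; }
                        else break;
                    }
        }
        return best;
    }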
---
paper_title: Parallel architecture overview
paper_content:
An increasing number of parallel computer products are appearing in the market place. Their design motivations and market areas cover a broad spectrum: (i) Transaction Processing Systems, such as Parallel UNIX systems (e.g. SEQUENT Balance), for data processing applications; (ii) Numeric Supercomputers, such as Hypercube systems (e.g. INTEL iPSC), for scientific and engineering applications; (iii) VLSI Architectures, such as parallel microcomputers (e.g. INMOS Transputer), for exploiting very large scales of integration; (iv) High-Level Language Computers, such as Logic machines (e.g. FUJITSU Kabu-Wake), for symbolic computation; and (v) Neurocomputers, such as Connectionist computers (e.g. THINKING MACHINES Connection Machine), for general-purpose pattern matching applications. This survey paper gives an overview of these novel parallel computers and discusses the likely commercial impact of parallel computers.
---
paper_title: VLSI Array processors
paper_content:
High speed signal processing depends critically on parallel processor technology. In most applications, general-purpose parallel computers cannot offer satisfactory real-time processing speed due to severe system overhead. Therefore, for real-time digital signal processing (DSP) systems, special-purpose array processors have become the only appealing alternative. In designing or using such array processors, one can exploit the fact that most signal processing algorithms share the critical attributes of regularity, recursiveness, and local communication. These properties are effectively exploited in innovative systolic and wavefront array processors. These arrays maximize the strength of very large scale integration (VLSI) in terms of intensive and pipelined computing, and yet circumvent its main limitation on communication. The application domain of such array processors covers a very broad range, including digital filtering, spectrum estimation, adaptive array processing, image/vision processing, and seismic and tomographic signal processing. This article provides a general overview of VLSI array processors and a unified treatment from algorithm, architecture, and application perspectives.
---
paper_title: High-abstraction level complexity analysis and memory architecture simulations of multimedia algorithms
paper_content:
An appropriate complexity analysis stage is the first and fundamental step for any methodology aiming at the implementation of today's (complex) multimedia algorithms. Such a stage may have different final implementation goals, such as defining a new architecture dedicated to the specific multimedia standard under study, defining an optimal instruction set for a selected processor architecture, or guiding the software optimization process in terms of control-flow and data-flow optimization targeting a specific architecture. The complexity of today's multimedia standards, in terms of the number of lines of code and the cross-relations among processing algorithms activated by specific input signals, goes far beyond what the designer can reasonably grasp from a "pencil and paper" analysis of the (software) specifications. Moreover, depending on the implementation goal, different measures and metrics are required at different steps of the implementation methodology or design flow. The process of extracting the desired measures needs to be supported by appropriate automatic tools, since code rewriting at each design stage may turn out to be resource consuming and error prone. This paper reviews the state of the art of complexity analysis methodologies oriented to the design of multimedia systems and presents an integrated tool for automatic analysis capable of producing complexity results based on rich and customizable metrics. The tool is based on a C virtual machine that allows extracting from any C program execution the operations and data-flow information, according to the defined metrics. The tool capabilities include the simulation of virtual memory architectures. This paper shows some examples of complexity analysis results that can be yielded with the tool and presents how the tool can be used at different stages of implementation methodologies.
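The core mechanism of such analysis tools can be pictured as executing the C code on instrumented data types that tally every operation as it happens; a compilable miniature of the idea is sketched below (the metric, here only additions and multiplications of an integer type, is an assumption for illustration and far poorer than the tool's customizable metrics).

    #include <cstdint>
    #include <iostream>

    struct OpCount { std::uint64_t add = 0, mul = 0; };
    static OpCount g_ops;                         // global complexity counters

    // Drop-in numeric type: behaves like int but tallies the data-flow operations.
    struct CountedInt {
        int v;
        CountedInt(int x = 0) : v(x) {}
        friend CountedInt operator+(CountedInt a, CountedInt b) { ++g_ops.add; return CountedInt(a.v + b.v); }
        friend CountedInt operator*(CountedInt a, CountedInt b) { ++g_ops.mul; return CountedInt(a.v * b.v); }
    };

    // Re-typing an algorithm with CountedInt yields its operation profile.
    CountedInt dot(const CountedInt* a, const CountedInt* b, int n)
    {
        CountedInt acc(0);
        for (int i = 0; i < n; ++i) acc = acc + a[i] * b[i];
        return acc;
    }

    int main()
    {
        CountedInt a[8], b[8];
        for (int i = 0; i < 8; ++i) { a[i] = CountedInt(i); b[i] = CountedInt(8 - i); }
        dot(a, b, 8);
        std::cout << g_ops.add << " additions, " << g_ops.mul << " multiplications\n";
        return 0;   // prints: 8 additions, 8 multiplications
    }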
---
paper_title: Overview of parallel processing approaches to image and video compression
paper_content:
In this paper we present an overview of techniques used to implement various image and video compression algorithms using parallel processing. Approaches used can largely be divided into four areas. The first is the use of special purpose architectures designed specifically for image and video compression. An example of this is the use of an array of DSP chips to implement a version of MPEG1. The second approach is the use of VLSI techniques. These include various chip sets for JPEG and MPEG1. The third approach is algorithm driven, in which the structure of the compression algorithm describes the architecture, e.g. pyramid algorithms. The fourth approach is the implementation of algorithms on high performance parallel computers. Examples of this approach are the use of a massively parallel computer such as the MasPar MP-1 or the use of a coarse-grained machine such as the Intel Touchstone Delta.
---
paper_title: Evaluation of the parallelization potential for efficient multimedia implementations: dynamic evaluation of algorithm critical path
paper_content:
This paper presents model metrics and a methodology for evaluating the critical path of the data-flow execution graph (DFEG) of multimedia algorithms specified as C programs. The paper describes an efficient dynamic critical path evaluation approach that generates no explicit execution graph. Such an approach includes two key stages: 1) the instrumentation of the C code and its mapping into a C++ version, and 2) the execution of the C++ code on real input data and the actual dynamic evaluation of the critical path. The model metrics and the software analysis methodologies aim at the estimation and at the increase of the upper bound of execution speed and parallelization potential. Both the metrics and the methodology are particularly tailored for applications with complex multimedia algorithms. Critical path analysis and the subsequent algorithmic development stage are a fundamental methodological preliminary step for the efficient definition of architectures when the objective is the implementation of the multimedia algorithm on systems-on-chip or heterogeneous platforms.
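Conceptually, the critical path being estimated is the longest weighted path through the DAG of executed operations; the sketch below shows that final evaluation given a recorded graph (node costs and predecessor lists are illustrative assumptions), while the point of the paper is precisely to obtain the same number without materializing the graph.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Longest weighted path through a DAG whose nodes are numbered in execution
    // (topological) order. cost[i] is the weight of operation i; preds[i] lists
    // the operations whose results operation i consumes.
    double criticalPath(const std::vector<double>& cost,
                        const std::vector<std::vector<int>>& preds)
    {
        std::vector<double> finish(cost.size(), 0.0);
        double longest = 0.0;
        for (std::size_t i = 0; i < cost.size(); ++i) {
            double start = 0.0;
            for (int p : preds[i]) start = std::max(start, finish[p]);
            finish[i] = start + cost[i];
            longest = std::max(longest, finish[i]);
        }
        return longest;
    }

The ratio of the total executed cost to this critical path length is the upper bound on the speedup an idealised parallel implementation could reach, which is the parallelization potential the metrics above target.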
---
paper_title: Profiling dataflow programs
paper_content:
As dataflow descriptions of media processing become popular, the techniques for analyzing and profiling the performance of sequential algorithms are no longer applicable. This paper describes some of the basic concepts and techniques for analyzing the computations described by dataflow programs, and illustrates them on an MPEG-4 decoder.
---
paper_title: Algorithm/Architecture Co-Design of 3-D Spatio–Temporal Motion Estimation for Video Coding
paper_content:
This paper presents a new spatio-temporal motion estimation algorithm and its VLSI architecture for video coding based on an algorithm and architecture co-design methodology. The algorithm consists of the new strategies of spatio-temporal motion vector prediction, a modified one-at-a-time search scheme, and multiple update paths derived from optimization theory. The hardware specification targets high-definition video coding. We applied the ME algorithm to the H.264 reference software. Our algorithm surpasses recently published research and achieves performance close to full search. The VLSI implementation demonstrates the low-cost nature of our algorithm. The algorithm and architecture co-design concept is highly emphasized in this paper. We provide quantitative examples to show the necessity of algorithm and architecture co-design.
---
paper_title: REALIZATION AND OPTIMIZATION OF H.264 DECODER FOR DUAL-CORE SOC
paper_content:
A filter for filtering micro-emboli from a patient's blood during an angioplasty procedure is disclosed which comprises a plurality of curved wires connected to a rod between a first connector fixed with respect to the rod and a second connector slidingly mounted on the rod. Two layers of filter material are connected to opposite sides of the wires, and each layer includes perforations which are offset from the perforations in the other layer. When the rod and the wires are disposed within a catheter, the inner wall of the catheter compresses the wires toward the rod and when the rod is extended from the catheter, the wires resume their curved shape and pull the sliding connector along the rod toward the fixed connector.
---
paper_title: Parallel Encoding - Decoding Operation for Multiview Video Coding with High Coding Efficiency
paper_content:
Multiview Video Coding (MVC) standardization is an ongoing effort aiming to extend H.264/AVC by developing novel tools optimized for 3D and multiview video use cases. One of the key identified requirements for the MVC standard is its ability to support parallel processing of different views. Parallel operation is known to be especially important for 3DTV applications, where the display needs to output many views simultaneously to support head-motion parallax. In this paper, we present a novel coding structure that enables parallel encoder/decoder operation for different views without compromising coding efficiency. This is achieved by systematically restricting the reference area of each view, so that encoding and decoding of macroblocks from different views can be efficiently pipelined and parallel operation of separate views becomes possible. As inter-view prediction is still used, the proposed structure achieves up to 0.9 dB gain compared to simulcast while maintaining the desired parallelism characteristics.
---
paper_title: Parallel Process of Hyper-Space-Based Multiview Video Compression
paper_content:
Multiview video coding (MVC) is a key technology in free-viewpoint television. MVC based on traditional existing codec systems has been studied widely, but all such approaches require powerful computational capacity. Parallel processing of MVC can facilitate efficient implementation of the encoder and decoder and has been required as a feature by MPEG. In this paper, a parallelization methodology for MVC based on hyper-space theory is presented and tested on the local area multi-computer - message passing interface (LAM-MPI) parallel platform and a modified H.264 codec. Experimental results show that the proposed method can speed up the processing of multiview video compression and obtain high rate-distortion results.
---
paper_title: Multi-pass algorithm of motion estimation in video encoding for generic GPU
paper_content:
The importance of video encoding has grown rapidly as video data communication has become widely needed. In this paper, we propose a multi-pass algorithm to accelerate motion estimation (ME), the dominant part of video encoding, on generic GPUs.
---
paper_title: Parallelization of AdaBoost algorithm on multi-core processors
paper_content:
This paper examines and extracts the parallelism in the AdaBoost person detection algorithm on multi-core processors. As multi-core processors become pervasive, effectively executing many threads simultaneously is crucial in harnessing the computation power. Although the application exposes many levels of parallelism, none of them delivers a satisfactory scaling performance on newest multi-core processors due to load imbalance and parallel overhead. This paper demonstrates how to analyze the thread-level parallelism, and how to choose appropriate one to utilize current 4-core and 8-core processors. With careful optimization and parallelization, the AdaBoost person detection algorithm can efficiently utilize the power of multi-core processors, and now it is 7 times faster than the serial version.
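A minimal sketch of the load-balance-aware, coarse-grained pattern discussed above: candidate detection windows are pulled dynamically from a shared counter by worker threads, so that windows rejected early by the cascade do not leave some cores idle. The cascade itself is abstracted behind a callback, and all names are illustrative assumptions.

    #include <atomic>
    #include <cstddef>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Window { int x, y, scale; };

    // Evaluate 'classify' on every candidate window, spreading work dynamically
    // over nThreads. Dynamic chunking keeps cores busy even though per-window cost
    // varies wildly (most windows are rejected by the first cascade stages).
    std::vector<char> detectParallel(const std::vector<Window>& windows,
                                     const std::function<bool(const Window&)>& classify,
                                     unsigned nThreads)
    {
        std::vector<char> hit(windows.size(), 0);
        std::atomic<std::size_t> next{0};
        auto worker = [&]() {
            for (;;) {
                std::size_t i = next.fetch_add(1);     // grab the next unprocessed window
                if (i >= windows.size()) break;
                hit[i] = classify(windows[i]) ? 1 : 0;
            }
        };
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < nThreads; ++t) pool.emplace_back(worker);
        for (auto& th : pool) th.join();
        return hit;
    }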
---
paper_title: Parallel Scalability of Video Decoders
paper_content:
An important question is whether emerging and future applications exhibit sufficient parallelism, in particular thread-level parallelism, to exploit the large numbers of cores future chip multiprocessors (CMPs) are expected to contain. As a case study we investigate the parallelism available in video decoders, an important application domain now and in the future. Specifically, we analyze the parallel scalability of the H.264 decoding process. First we discuss the data structures and dependencies of H.264 and show what types of parallelism it allows to be exploited. We also show that previously proposed parallelization strategies such as slice-level, frame-level, and intra-frame macroblock (MB) level parallelism, are not sufficiently scalable. Based on the observation that inter-frame dependencies have a limited spatial range we propose a new parallelization strategy, called Dynamic 3D-Wave. It allows certain MBs of consecutive frames to be decoded in parallel. Using this new strategy we analyze the limits to the available MB-level parallelism in H.264. Using real movie sequences we find a maximum MB parallelism ranging from 4000 to 7000. We also perform a case study to assess the practical value and possibilities of a highly parallelized H.264 application. The results show that H.264 exhibits sufficient parallelism to efficiently exploit the capabilities of future manycore CMPs.
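The intra-frame part of the dependence structure analyzed above yields the familiar 2D wavefront: macroblock (x, y) needs its left, top, and top-right neighbours, so every MB on the anti-diagonal k = x + 2*y is independent of the others on that diagonal. A minimal sketch of that schedule follows (per-MB decoding is abstracted behind a callback; the Dynamic 3D-Wave of the paper additionally overlaps consecutive frames and is not reproduced here).

    #include <algorithm>
    #include <functional>

    // Visit macroblocks in 2D-wavefront order. MB (x, y) becomes ready on
    // wavefront k = x + 2*y, which guarantees (x-1, y), (x, y-1) and (x+1, y-1)
    // lie on earlier wavefronts. All MBs of one wavefront may be decoded in parallel.
    void wavefrontSchedule(int mbWidth, int mbHeight,
                           const std::function<void(int x, int y)>& decodeMB)
    {
        const int lastWave = (mbWidth - 1) + 2 * (mbHeight - 1);
        for (int k = 0; k <= lastWave; ++k) {
            const int yMin = std::max(0, (k - (mbWidth - 1) + 1) / 2);
            const int yMax = std::min(mbHeight - 1, k / 2);
            // The iterations of this inner loop are mutually independent
            // (candidates for parallel threads or cores).
            for (int y = yMin; y <= yMax; ++y)
                decodeMB(k - 2 * y, y);
        }
    }

The limited number of MBs per diagonal is what caps intra-frame parallelism; the Dynamic 3D-Wave strategy raises the bound to the thousands of MBs reported above by also releasing MBs of later frames as soon as their bounded inter-frame reference areas are available.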
---
paper_title: REAL-TIME MUTUAL-INFORMATION-BASED LINEAR REGISTRATION ON THE CELL BROADBAND ENGINE PROCESSOR
paper_content:
Emerging multi-core processors are able to accelerate medical imaging applications by exploiting the parallelism available in their algorithms. We have implemented a mutual-information-based 3D linear registration algorithm on the Cell Broadband Engine™ (CBE) processor, which has nine processor cores on a chip and has a 4-way SIMD unit for each core. By exploiting the highly parallel architecture and its high memory bandwidth, our implementation with two CBE processors can compute mutual information for about 33 million pixel pairs in a second. This implementation is significantly faster than a conventional one on a traditional microprocessor or even faster than a previously reported custom-hardware implementation. As a result, it can register a pair of 256×256×30 3D images in one second by using a multi-resolution method. This paper describes our implementation with a focus on localized sampling and speculative packing techniques, which reduce the amount of the memory traffic by 82%.
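For reference, the kernel being accelerated is the mutual information of the joint intensity histogram of the two volumes; a minimal single-threaded sketch is given below (the 32-bin quantization and natural-log units are assumptions, and the paper's contribution is the SPE mapping and memory-traffic reduction rather than the formula itself).

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Mutual information of two equally sized 8-bit images/volumes, using 32x32 bins.
    double mutualInformation(const std::vector<uint8_t>& a,
                             const std::vector<uint8_t>& b)
    {
        const int bins = 32;
        std::vector<double> joint(bins * bins, 0.0), pa(bins, 0.0), pb(bins, 0.0);
        for (std::size_t i = 0; i < a.size(); ++i)
            joint[(a[i] / 8) * bins + (b[i] / 8)] += 1.0;      // 256/32 = 8 intensity levels per bin
        const double n = static_cast<double>(a.size());
        for (int i = 0; i < bins; ++i)
            for (int j = 0; j < bins; ++j) {
                joint[i * bins + j] /= n;                      // joint probability
                pa[i] += joint[i * bins + j];                  // marginal of image a
                pb[j] += joint[i * bins + j];                  // marginal of image b
            }
        double mi = 0.0;
        for (int i = 0; i < bins; ++i)
            for (int j = 0; j < bins; ++j) {
                double p = joint[i * bins + j];
                if (p > 0.0) mi += p * std::log(p / (pa[i] * pb[j]));
            }
        return mi;   // in nats; registration maximises this over the transform parameters
    }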
---
paper_title: A Multi-core Architecture Based Parallel Framework for H.264/AVC Deblocking Filters
paper_content:
The deblocking filter is one of the most time-consuming modules in the H.264/AVC decoder, as indicated in many studies. Therefore, accelerating the deblocking filter is critical for improving the overall decoding performance. This paper proposes a novel parallel algorithm for the H.264/AVC deblocking filter to speed up the decoder. We exploit pixel-level data parallelism among filtering steps, and observe that the results of each filtering step only affect a limited region of pixels. We call this "the limited propagation effect". Based on this observation, the proposed algorithm can partition a frame into multiple independent rectangles with arbitrary granularity. The proposed parallel deblocking filter algorithm requires very little synchronization overhead, and provides good scalability. Experimental results show that applying the proposed parallelization method to a SIMD-optimized sequential deblocking filter achieves up to 95.31% and 224.07% speedup on a two-core and four-core processor, respectively. We have also observed a significant speedup for H.264/AVC decoding, 21% and 34% on a two-core and four-core processor, respectively.
---
paper_title: On the efficient algorithm/architecture co-exploration for complex video processing
paper_content:
Targeting highly sophisticated visual signal processing, we introduce in this paper complexity metrics, or measures, of algorithms which, capturing architectural information, are fed back or back-annotated in early design stages to facilitate concurrent exploration of both algorithmic and architectural optimizations. With application to 3D spatio-temporal motion estimation for video coding, we have demonstrated significant reduction in design cost while the algorithmic performance still surpasses recently published works and even full search under many circumstances. Moreover, we have also shown the importance and substantiality of this complexity analysis technique in the extraction of features common to various de-interlacing algorithms adapted for versatile video content, in designing highly efficient reconfigurable video architectures. As such, this novel algorithm/architecture co-exploration methodology forms the basis for dataflow models with more accurate software/hardware partitioning, resulting in multi-million gate and/or instruction software simulation platforms and fast prototyping hardware platforms for next-generation electronic system-level design of SoCs.
---
paper_title: Real-time Visual Tracker by Stream Processing
paper_content:
In this work, we implement a real-time visual tracker that targets the position and 3D pose of objects in video sequences, specifically faces. The use of stream processors for the computations and efficient Sparse-Template-based particle filtering allows us to achieve real-time processing even when tracking multiple objects simultaneously in high-resolution video frames. Stream processing is a relatively new computing paradigm that permits the expression and execution of data-parallel algorithms with great efficiency and minimum effort. Using a GPU (graphics processing unit, a consumer-grade stream processor) and the NVIDIA CUDA technology, we can achieve performance improvements as large as ten times compared to a similar CPU-only tracker. At the same time, the stream processing approach opens the door to other computing devices, like the Cell/BE or other multicore CPUs.
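A minimal sketch of the particle-filter loop that makes such a tracker stream-friendly: the predict and weight steps are independent per particle and therefore map directly onto data-parallel threads. The likelihood is abstracted behind a callback, and the random-walk motion model and multinomial resampling are simplifying assumptions rather than the paper's sparse-template formulation.

    #include <cstddef>
    #include <functional>
    #include <random>
    #include <vector>

    struct State { float x, y, angle; };   // e.g. 2D position plus in-plane rotation

    // One sequential-importance-resampling step over 'particles'.
    void particleFilterStep(std::vector<State>& particles,
                            const std::function<float(const State&)>& likelihood,
                            std::mt19937& rng, float motionSigma)
    {
        const std::size_t n = particles.size();
        std::normal_distribution<float> noise(0.0f, motionSigma);
        std::vector<float> w(n);
        float wsum = 0.0f;
        // Predict + weight: each particle is processed independently, which is the
        // part that maps one-to-one onto GPU threads (each with its own RNG stream).
        for (std::size_t i = 0; i < n; ++i) {
            particles[i].x += noise(rng);              // simple random-walk motion model
            particles[i].y += noise(rng);
            particles[i].angle += 0.1f * noise(rng);
            w[i] = likelihood(particles[i]);           // e.g. a template-matching score
            wsum += w[i];
        }
        // Multinomial resampling: draw n particles proportionally to their weights.
        std::vector<State> resampled(n);
        std::uniform_real_distribution<float> u(0.0f, wsum);
        for (std::size_t i = 0; i < n; ++i) {
            float target = u(rng), acc = 0.0f;
            std::size_t j = 0;
            while (j + 1 < n && acc + w[j] < target) { acc += w[j]; ++j; }
            resampled[i] = particles[j];
        }
        particles.swap(resampled);
    }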
---
paper_title: Mapping of h.264 decoding on a multiprocessor architecture
paper_content:
Due to the increasing significance of development costs in the competitive domain of high-volume consumer electronics, generic solutions are required to enable reuse of the design effort and to increase the potential market volume. As a result of this, Systems-on-Chip (SoCs) contain a growing amount of fully programmable media processing devices as opposed to application-specific systems, which offered the most attractive solutions due to a high performance density. The following motivates this trend. First, SoCs are increasingly dominated by their communication infrastructure and embedded memory, thereby making the cost of the functional units less significant. Moreover, the continuously growing design costs require generic solutions that can be applied over a broad product range. Hence, powerful programmable SoCs are becoming increasingly attractive. However, to enable power-efficient designs that are also scalable over the advancing VLSI technology, parallelism should be fully exploited. Both task-level and instruction-level parallelism can be provided by means of e.g. a VLIW multiprocessor architecture. To provide the above-mentioned scalability, we propose to partition the data over the processors, instead of traditional functional partitioning. An advantage of this approach is the inherent locality of data, which is extremely important for communication-efficient software implementations. Consequently, a software implementation is discussed, enabling e.g. SD resolution H.264 decoding with a two-processor architecture, whereas High-Definition (HD) decoding can be achieved with an eight-processor system, executing the same software. Experimental results show that data communication is reduced by up to 65%, directly improving the overall performance. Apart from the considerable improvement in memory bandwidth, this novel concept of partitioning offers a natural approach for optimally balancing the load of all processors, thereby further improving the overall speedup.
---
paper_title: Parallelization strategies and performance analysis of media mining applications on multicore processors
paper_content:
This paper studies how to parallelize the emerging media mining workloads on existing small-scale multi-core processors and future large-scale platforms. Media mining is an emerging technology to extract meaningful knowledge from large amounts of multimedia data, aiming at helping end users search, browse, and manage multimedia data. Many of the media mining applications are very complicated and require a huge amount of computing power. The advent of multi-core architectures provides the acceleration opportunity for media mining. However, to efficiently utilize the multi-core processors, we must effectively execute many threads at the same time. In this paper, we present how to explore the multi-core processors to speed up the computation-intensive media mining applications. We first parallelize two media mining applications by extracting the coarse-grained parallelism and evaluate their parallel speedups on a small-scale multi-core system. Our experiment shows that the coarse-grained parallelization achieves good scaling performance, but not perfect. When examining the memory requirements, we find that these coarse-grained parallelized workloads expose high memory demand. Their working set sizes increase almost linearly with the degree of parallelism, and the instantaneous memory bandwidth usage prevents them from perfect scalability on the 8-core machine. To avoid the memory bandwidth bottleneck, we turn to exploit the fine-grained parallelism and evaluate the parallel performance on the 8-core machine and a simulated 64-core processor. Experimental data show that the fine-grained parallelization demonstrates much lower memory requirements than the coarse-grained one, but exhibits significant read-write data sharing behavior. Therefore, the expensive inter-thread communication limits the parallel speedup on the 8-core machine, while excellent speedup is observed on the large-scale processor as fast core-to-core communication is provided via a shared cache. Our study suggests that (1) extracting the coarse-grained parallelism scales well on small-scale platforms, but poorly on large-scale system; (2) exploiting the fine-grained parallelism is suitable to realize the power of large-scale platforms; (3) future many-core chips can provide shared cache and sufficient on-chip interconnect bandwidth to enable efficient inter-core communication for applications with significant amounts of shared data. In short, this work demonstrates proper parallelization techniques are critical to the performance of multi-core processors. We also demonstrate that one of the important factors in parallelization is the performance analysis. The parallelization principles, practice, and performance analysis methodology presented in this paper are also useful for everyone to exploit the thread-level parallelism in their applications.
---
paper_title: An efficient block motion estimation method on CELL BE
paper_content:
In order to take advantage of the byte-type data parallelism in the CELL BE's SIMD technique, this paper introduces a new algorithm which can achieve much higher prediction accuracy and lower computational complexity compared with the diamond search (DS) algorithm. Most conventional fast motion estimation algorithms focus only on reducing the number of search points. However, fewer search points do not necessarily mean less computational complexity on the CELL BE architecture. The introduced method uses a large rectangular search pattern including 35 search points at the first search step. By increasing the number of search points, it gains 4.32% in coding efficiency compared with DS. Through SAD reuse and reduced unaligned memory access, it needs 29.5% less computation than the DS method on the CELL BE. Our implementation of this algorithm on the CELL BE shows that we can satisfy the requirement of real-time HD H.264 encoding.
---
paper_title: A 3D Spatio-Temporal Motion Estimation Algorithm for Video Coding
paper_content:
This paper presents a new spatio-temporal motion estimation algorithm for video coding. The algorithm is based on optimization theory and consists of strategies including 3D spatio-temporal motion vector prediction, a modified one-at-a-time search scheme, and multiple update paths. The simulation results indicate that our algorithm outperforms other recently proposed ones under the same computational budget and comes very close to full search. The low cost and regular demand for computational resources make our algorithm suitable for VLSI implementation. The algorithm also makes a single-chip solution for high-definition coding feasible.
---
paper_title: Multicore system-on-chip architecture for MPEG-4 streaming video
paper_content:
The newly defined MPEG-4 Advanced Simple (AS) profile delivers single-layered streaming video in digital television (DTV) quality in the promising 1-2 Mbit/s range. However, the coding tools involved add significantly to the complexity of the decoding process, raising the need for further hardware acceleration. A programmable multicore system-on-chip (SOC) architecture is presented which targets MPEG-4 AS profile decoding of ITU-R 601 resolution streaming video. Based on a detailed analysis of corresponding bitstream statistics, the implementation of an optimized software video decoder for the proposed architecture is described. Results show that overall performance is sufficient for real-time AS profile decoding of ITU-R 601 resolution video.
---
paper_title: An Architecture for Programmable Multi-core IP Accelerated Platform with an Advanced Application of H.264 Codec Implementation
paper_content:
A new integrated programmable platform architecture is presented, with support for multiple accelerators and extensible processing cores. An advanced application of this architecture is to facilitate the implementation of an H.264 baseline profile video codec. The platform architecture employs the novel concept of a virtual socket and optimized memory access to increase the efficiency of video encoding. The proposed architecture is mapped onto an integrated FPGA device, Annapolis WildCard-II or WildCard-4, for verification. The evaluation under different configurations shows that the overall performance of the architecture, with the integrated accelerators, can sufficiently meet the real-time encoding requirement for H.264 BP at basic levels, and achieves about 2-5.5 dB and 1-3 dB improvement, in terms of PSNR, compared with MPEG-2 MP and MPEG-4 SP, respectively. The architecture is highly extensible, and thus can be utilized to benefit the development of multi-standard video codecs beyond the description in this paper.
---
paper_title: Analysis and Parallelization of H.264 decoder on Cell Broadband Engine Architecture
paper_content:
Emerging video coding technology like H.264/AVC achieves high compression efficiency, which enables high quality video at the same or lower bitrate. However, those advanced coding techniques come at the cost of more computational power. Developed with such multimedia applications in mind, the CELL broadband engine (BE) processor was designed as a heterogeneous on-chip multicore processor to meet the required high performance. In this paper, we analyze the computational requirements of the H.264 decoder on a per-module basis and implement a parallelized H.264 decoder on the CELL processor based on the profiling results. We propose and implement a hybrid partitioning technique that combines both functional and data partitioning to avoid the dependencies imposed by the H.264 decoder, and optimize it using SIMD instructions. In our experiments, the parallelized H.264 decoder runs about 3.5 times faster than the single-core (PPE-only) decoder when using 1 PPE and 4 SPEs.
---
| Title: Algorithm/Architecture Co-Exploration of Visual Computing on Emergent Platforms: Overview and Future Prospects
Section 1: Introduction
Description 1: This section introduces the concept of Algorithm/Architecture Co-Exploration, the challenges of traditional design methodologies, and the growing demands and challenges of video system design.
Section 2: Advanced Visual Computing Algorithms for Versatile Applications
Description 2: This section surveys advanced visual computing algorithms for emerging applications in four areas: video coding, video processing, computer vision, and computer graphics.
Section 3: Visual Computing on Multicore and Reconfigurable Architectures
Description 3: This section describes the spectrum of emerging architectures for visual computing, the challenges of mapping and porting algorithms onto these platforms, and the need for algorithm/architecture co-exploration.
Section 4: Algorithm/Architecture Co-Exploration
Description 4: This section outlines different design scenarios (architecture-oriented, algorithm-oriented, and algorithm/architecture design), and discusses the importance of concurrent exploration of both algorithmic and architecture optimizations.
Section 5: Dataflow Models for the Representation and Co-Design of Algorithms and Architectures
Description 5: This section discusses various dataflow representations that provide good models for the co-exploration of algorithms and architecture.
Section 6: Algorithmic Intrinsic Complexity Characterization
Description 6: This section elaborates on intrinsic complexity metrics or measures of algorithms that provide architectural information to facilitate concurrent exploration of both algorithmic and architectural optimizations.
Section 7: Dataflow Modeling and Complexity Characterization for Multicore and Reconfigurable Systems Design
Description 7: This section discusses the complexity measurements and dataflow modeling techniques used for designing and mapping algorithms onto multicore and reconfigurable architectures.
Section 8: Innovative Architectures with Multiple Processors and Reconfigurability for Video Coding and Processing
Description 8: This section surveys how visual computing algorithms are implemented on multicore platforms and how architecture-algorithm co-exploration can be used to map these applications effectively.
Section 9: Conclusion
Description 9: This section summarizes the importance of concurrently optimizing both algorithm and architecture, noting the advantages of advanced multicore platforms and reconfigurable architectures for future visual computing algorithms. |
A Survey on QoS in Next Generation Networks | 12 | ---
paper_title: QoS Over Heterogeneous Networks
paper_content:
The importance of quality of service (QoS) has risen with the recent evolution of telecommunication networks, which are characterised by great heterogeneity. While many applications require a specific level of assurance from the network, communication networks are characterised by different service providers, transmission means and implemented solutions such as asynchronous transfer mode (ATM), Internet protocol version 4 (IPv4), IPv6 and MPLS. Providing comprehensive coverage of QoS issues within heterogeneous network environments, QoS Over Heterogeneous Networks looks to answer questions such as whether QoS fits within heterogeneous networks and what the impact on performance is if information traverses different network portions that implement specific QoS schemes. Includes: a series of algorithms and protocols to help solve potential QoS problems; state-of-the-art case studies and operative examples to illustrate points made; information on QoS mapping in terms of service-level specification (SLS) and an in-depth discussion of related issues; chapters on end-to-end (E2E) QoS, QoS architecture, QoS over heterogeneous networks and QoS internetworking and mapping. An ideal book for graduate students, researchers and lecturers. System designers, developers and engineers will also find QoS Over Heterogeneous Networks a valuable reference.
---
paper_title: Cadenus: creation and deployment of end-user services in premium IP networks
paper_content:
Current trends in the information and communications technology industry clearly indicate the existence of a business requirement for a market-enabling technology that allows network operators to interact with users in a seamless, transparent manner for the sale and delivery of a wide range of services with guaranteed quality of service. In this context the need arises for the dynamic creation, configuration and delivery of services with QoS guarantees via the automated management of service level agreements. The aim of the Cadenus project is to bring theoretical and practical contributions to this area by defining a framework for the provisioning of advanced communication services in premium IP networks. Such networks might be characterized by a high degree of complexity, in terms not only of scale, but also of number of operators and technological heterogeneity. Our contribution is twofold, comprising both the design of the proposed framework and its actual implementation. An innovative approach was taken to framework design, based on the concept of mediation. With respect to the framework implementation, an example illustrating the realization of a virtual private network scenario is presented.
---
paper_title: Supporting IP on the ATM networks: an overview
paper_content:
In the past 10 years, asynchronous transfer mode (ATM) technology has emerged as a key component of next-generation networks. It can offer unprecedented scalability and performance/cost ratio, as well as the ability to reserve network resources for real-time traffic and support for multimedia and multipoint communications. Obviously, ATM will play an important role in the future information infrastructure. However, today's information infrastructure, e.g. the vast installed base of local area networks (LANs) and wide area networks (WANs), is constructed by using internetwork layer protocols such as IP, IPX and AppleTalk to internetwork the subnets. Therefore, a key to ATM's success and the Internet's further success will be the ability to allow for interoperation between existing network technologies and ATM. The key to such connectivity is the use of the same network layer protocols, such as IP and IPX, on both existing networks and on ATM, since it is the function of the network layer to provide a uniform network view to higher-level protocols and applications [RFC1943, IP over ATM: A Framework Document; A. Alles, ATM Internetworking, Cisco Systems, Inc., May, 1995]. Until now, there have been various ways of running IP across an ATM network, e.g. LAN Emulation and Multiprotocol over ATM standardized by the ATM Forum, classical IP over ATM and Next Hop Resolution Protocol proposed by the Internet Engineering Task Force, IP Switching implemented by Ipsilon Networks Inc., Tag Switching presented by Cisco Systems Inc., etc. This paper gives an overview and an assessment of these proposed schemes.
---
paper_title: QoS-based routing algorithms for ATM networks
paper_content:
This paper presents a planned routing algorithm (PRA) and a hierarchical routing algorithm (HRA) for ATM networks. The PRA can establish the multicast tree in the presence of bandwidth and delay constraints. The HRA is compliant with the PNNI specification from the ATM Forum. It uses an adaptive and iterative path search approach and takes advantage of the PNNI hierarchical network structure to reduce path computation complexity and maximize network throughput. The performance of the PRA and HRA is evaluated by simulations. The simulation results show that the PRA provides the best performance while its complexity remains acceptable, that the HRA reduces processing time and improves network utilization, and that both are suited to the QoS routing requirements of ATM networks.
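For readers unfamiliar with QoS routing under bandwidth and delay constraints, the following self-contained Python sketch shows one common strategy consistent with the constraints mentioned above (it is not the paper's PRA or HRA): links with insufficient bandwidth are pruned and Dijkstra's algorithm is then run on link delay, with the result accepted only if it meets the delay bound. The topology and thresholds are illustrative.

    import heapq

    def qos_path(links, src, dst, min_bw, max_delay):
        # links: {(u, v): (bandwidth, delay)} describing an undirected network
        adj = {}
        for (u, v), (bw, delay) in links.items():
            if bw >= min_bw:                      # bandwidth pruning
                adj.setdefault(u, []).append((v, delay))
                adj.setdefault(v, []).append((u, delay))
        queue, seen = [(0, src, [src])], set()
        while queue:
            dist, node, path = heapq.heappop(queue)
            if node in seen:
                continue
            seen.add(node)
            if node == dst:
                return (path, dist) if dist <= max_delay else None
            for nxt, delay in adj.get(node, []):
                if nxt not in seen:
                    heapq.heappush(queue, (dist + delay, nxt, path + [nxt]))
        return None

    print(qos_path({("a", "b"): (10, 2), ("b", "c"): (5, 1), ("a", "c"): (2, 1)},
                   "a", "c", min_bw=5, max_delay=10))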
---
paper_title: Efficient hierarchical QoS routing in ATM networks
paper_content:
To reduce network information and achieve scalability in large ATM networks, the ATM Private Network-to-Network Interface (PNNI) adopts hierarchical routing. Consequently, although routing complexity is significantly reduced, numerous issues in PNNI routing require further study to achieve more efficient, accurate, scalable, and QoS-aware routing. Several methods are adopted herein to achieve efficient, scalable, and QoS-aware ATM PNNI routing. First, an efficient aggregation scheme, referred to as Asymmetric Simple, is proposed. The aggregated routing information includes available bandwidth, delay and cost. Second, two approaches for defining link costs are investigated, namely, the Markov Decision Process (MDP) approach and the Competitive On-Line (COL) routing approach, and these are compared with the Widest Path (WP) approach. Finally, a dynamic update policy, referred to as the dynamic cost-based update (DCU) policy, is proposed to improve the accuracy of the aggregated information and the performance of hierarchical routing, while decreasing the frequency of re-aggregation and information distribution. Simulation results demonstrate that the proposed Asymmetric Simple aggregation scheme yields very good network utilization while significantly reducing the amount of advertised information. Between the two link cost functions, the MDP approach provides a systematic method of defining the call admission function and yields better network utilization than the COL approach. The proposed DCU policy also yields enhanced network utilization while significantly reducing the frequency of re-aggregation and the amount of distributed aggregation information.
---
paper_title: A role for ATM in telephony and IP networks
paper_content:
Public telecommunication network operators are now concentrating their attention on the Internet, not only for realising datacommunication services but also as a network infrastructure for providing telephony services. This has resulted in a reluctance to invest in existing circuit-switched technology and in the implementation of a number of Internet protocol (IP) telephony services. This paper concentrates on two topics: firstly, quality of service (QoS) in IP and asynchronous transfer mode (ATM) networks and, secondly, the use of ATM networks for providing large-scale telephony, or more generally "narrowband", services that are usually supported by public circuit-switched networks. The methods of providing a range of qualities of service in IP routers and ATM switches are reviewed and the applicability of these mechanisms to the support of telephony is considered. Although public telephony services over the Internet are offered today with reasonable quality (some network operators claim that voice quality in general is better than for a call involving a mobile phone), the volume of traffic is generally low and Internet service providers overprovision their networks to improve service quality. A number of scenarios in which ATM can be used as an infrastructure for narrowband services are presented. If bandwidth is neither abundant nor inexpensive, ATM can provide a cost-effective means of supporting both existing narrowband circuit-switched services and datacommunications. A possible migration path is outlined from the current circuit-switched networks, through use of an ATM network infrastructure, to a potential integrated IP network. This paper is tutorial in nature and represents the personal opinions of the author (rather than a corporate position).
---
paper_title: Survey: Flow control in ATM networks: a survey
paper_content:
In this paper, we present the results of a study of flow control in ATM networks. Two classes of flow control are described: preventive and reactive. Preventive control, which uses static resource allocation, seems inadequate for handling rough traffic in high-speed networks. On the other hand, reactive control schemes, which employ a closed-loop feedback mechanism, seem more attractive than preventive control schemes, especially in ATM networks. Four ATM service categories are provided to serve a variety of users. To enable each service class to function effectively, two closed-loop flow control mechanisms, rate-based and credit-based, have been introduced for ATM networks. Rate-based flow control schemes use the rate of traffic flowing from the source to control the transmission rate. In this paper, a number of variants of rate-based schemes are described and analysed. Credit-based schemes use window flow control, where a traffic source is allowed to transmit data only when there is available buffer space in the downstream node. In this paper, a number of credit-based schemes are also described and analysed.
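As a toy illustration of the credit-based idea described above (not any specific standardized scheme; the buffer size and drain rate are arbitrary), the following Python sketch has the sender spend one credit per transmitted cell and stall at zero credits, while the receiver returns credits as its buffer drains.

    def credit_based_transfer(n_cells, buffer_size=8, drain_per_tick=2):
        credits, buffered, sent, log = buffer_size, 0, 0, []
        while sent < n_cells:
            # sender side: transmit while credits remain
            while credits > 0 and sent < n_cells:
                credits -= 1
                buffered += 1
                sent += 1
            # receiver side: drain the buffer and return one credit per drained cell
            drained = min(drain_per_tick, buffered)
            buffered -= drained
            credits += drained
            log.append((sent, credits, buffered))
        return log

    for entry in credit_based_transfer(20):
        print(entry)   # (cells sent so far, credits held by sender, cells buffered)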
---
paper_title: Efficient simulation of QoS in ATM switches using connection traffic descriptors
paper_content:
High-speed networks using asynchronous transfer mode (ATM) will be able to carry a broad range of traffic classes and will be required to provide QoS measures, such as the cell loss and cell delay probabilities, to many of these traffic classes. The design and testing of ATM networks and the algorithms that perform connection admission control are difficult due to the rare-event nature of QoS measures and the unwieldiness of matching statistical models of the broad range of traffic classes entering the network to the connection traffic descriptors used by the connection admission control algorithms. In this paper, as an alternative to using statistical traffic models, we describe the traffic entering the network by the connection traffic descriptors standardized by the ATM Forum and used by the connection admission control algorithms. We present a Monte Carlo simulation model for estimating the cell loss and cell delay probabilities using a multinomial formulation to remove the correlation associated with estimating bursty events. We develop importance sampling techniques to increase the efficiency of the simulation for ATM networks with heterogeneous input traffic classes, namely constant bit rate and variable bit rate traffic. For the experimental examples considered here, the improvement in simulation efficiency compared to conventional Monte Carlo simulation is inversely proportional to the probability estimate. The efficient simulation methods developed here are suitable for the design and testing of the switches and connection admission control algorithms planned for use in ATM networks.
---
paper_title: Multicasting in MPLS domains
paper_content:
Explicit routing in MPLS is utilized in traffic engineering to maximize operational network performance and to provide Quality of Service (QoS). However, difficulties arise when integrating native IP multicasting with MPLS traffic engineering, such as point-to-multipoint or multipoint-to-multipoint LSP layout design and traffic aggregation. In this paper, we propose an edge router multicasting (ERM) scheme that limits the branching points of the multicast delivery tree to the edges of MPLS domains. As a result, multicast LSP setup, multicast flow assignment, and multicast traffic aggregation are reduced to unicast problems. We study two types of ERM routing protocols in the paper. The first approach is based on modifications to existing multicast protocols, while the second applies a Steiner tree-based heuristic routing algorithm in the edge router multicasting environment. The simulation results demonstrate that the ERM scheme based on the Steiner tree heuristic can provide near-optimal performance. The results also demonstrate that ERM provides a traffic engineering friendly approach without sacrificing the benefits of native IP multicasting.
---
paper_title: ATM: The broadband telecommunications solution
paper_content:
Since ATM was identified by the CCITT in 1988 as the target transfer mode for broadband communications, there has been considerable research activity on the topic world-wide. Within Europe, the RACE programme of the European Community has brought together experts from a wide variety of organisations to work on several projects. This book results from the work of one of those projects. Aimed at those interested in ATM generally, or those needing to understand the issues in designing or implementing broadband networks, the book draws on the results of the research project to present a description of ATM from a network point of view. Starting with the principles of ATM, it goes on to cover topics such as network performance, network structure, evolution and interworking. It also discusses more general issues including numbering, charging and the need for intelligence in the network. It concludes by explaining the current position on traffic engineering for broadband ATM networks.
---
paper_title: MPLS advantages for traffic engineering
paper_content:
This article discusses the architectural aspects of MPLS which enable it to address IP traffic management. Specific MPLS architectural features discussed are the separation of control and forwarding, the label stack, multiple control planes, and integrated IP and constraint-based routing. The article then discusses how these features address network scalability, simplify network service integration, offer integrated recovery, and simplify network management. Scalability is addressed through integrated routing, which enables a natural assignment of traffic to the appropriate traffic engineering tunnels without requiring special mechanisms for loop prevention, so that the amount of change is greatly reduced. The label stack enables an effective means of local tunnel repair, providing fast restoration. Feedback through the routing system permits fast and intelligent reaction to topology changes. Service integration is simplified through a unified QoS paradigm which makes it simple for services to request QoS and have it mapped through to traffic engineering.
---
paper_title: Frame Relay: Technology and Practice
paper_content:
Preface. Acknowledgments. 1. Introduction. Driving Forces for Frame Relay. The Need for Frame Relay. Accelerators for the Growth of Frame Relay. Frame Relay Network Basics. Benefits and Limitations of Frame Relay. Limitations. Frame Relay and Other Networking Technologies. Dial-Up Modem Lines. ISDN and Other Switched Digital Facilities. Leased Lines. X.25 Packet-Switching Services. Asynchronous Transfer Mode. Data-Oriented Virtual Private Networks. Switched Multimegabit Data Service. 2. Who's Who in Frame Relay. Standards Organizations. The American National Standards Institute. The Frame Relay Forum. The Internet Engineering Task Force. Other Standards Organization. Commercial Organizations. Frame Relay Service Providers. Frame Relay Vendors. 3. Frame Relay Architecture. Frame Relay Layers. Frame Relay and X.25 Packet Switching. Network Interfaces. User-Network Interface. Network-to-Network Interface. Local Management Interface. Frame Relay Layer 2 Formats. Frame Format. Header Format. Data Link Connection Identifiers. How DLCIs Identify Visual Circuits. Mapping DLCIs within a Network. Globally Significant DLCIs. Exercises. 4. Connecting to the Network. Access Circuits. Leased Access Circuits. Local Frame Relay Services. Dial-Up Access. Physical Connections to the Access Circuit. Physical Interfaces. Data Service Units/Channel Service Units. Port Connections. Network-to-Network Interfaces. Access Devices. Routers for Frame Relay Networks. Frame Relay Access Devices. Other Interfaces for Frame Relay Access. Recovery from Physical Circuit Failures. Failure of the Access Circuits. Failure of the Backbone Trunks. 5. Frame Relay Virtual Circuits. Virtual Circuits. Switches. Differences between PVCs and SVCs. Permanent Virtual Circuits. Switched Virtual Circuits. More on SVCs. SVC Signaling Specifications. Advantages and Disadvantages of SVCs. Switched Physical Access and SVCs. Recovery from Virtual Circuit Failures. 6. Traffic Management. Committed Information Rate. The User View of CIR. The Standards View of CIR. Capacity Allocation. Bursting. Dynamic Allocation. Oversubscription of Port Connections. Asymmetric PVCs. Congestion Management and Flow Control. Frame Discarding and the Discard-Eligible Bit. Explicit Congestion Notification Using the FECN and BECN Bits. Implicit Congestion Notification. Where Congestion Can Occur. Congestion across the Local Access Circuit. Congestion across the Provider's Network. Congestion across the Network-to-Network Interface. Congestion across the Remote Access Circuit. Limitations of Congestion Management. Proprietary Implementations of CIR and the DE, FECN, and BECN Bits. The Customer's Inability to Respond to the FECN and BECN Bits. Use and Misuse of the DE Bit. 7. Engineering of Frame Relay Networks. Frame Relay Switch Families. Public Service Provider Switches. The Non-CIR Approach. PVC Services and Bursting. Capacity Planning. Traffic Handling. Congestion Management. Summary. The Flow-Controlled Approach. PVC Services. Capacity Planning. Traffic and Burst Handling. Congestion Management. Summary. Comparison of Non-CIR and Flow-Controlled Approaches. Advantages of Non-CIR. Advantages of Flow-Controlled Networks. Second-Generation Frame Relay Switches. Quality of Service Support. Greater Speeds and Scalability. Improved Traffic Routing. The Zero CIR Controversy. 8. Network Management. Network Management System Functions. Management Data Sources. Data from Switches. Data from Routers. Data from Protocol Analzyers. Data from Enhanced DSU/CSUs. 
Frame Relay Standards versus Proprietary Network Management Systems. Frame Relay Standards in Network Management. The Local Management Interface. Consolidated Link Layer Management. Review of the Simple Network Management Protocol. The Management Information Base for Frame Relay Service. Frame Relay Management Approaches. User-Based Monitoring. Carrier-Based Monitoring. Managed Network Services. Open Network Management Systems. Frame Relay Network Management Functions. Configuration Management. Fault Management. Performance Management. Accounting Management. Security Management. Managed Network Services. 9. Frame Relay Pricing. Pricing Structures. PVC Pricing. Access Circuit Charges. Port Connection Charges. Permanent Virtual Circuit Changes. Variations in PVC Pricing. SVC Pricing. International Issues. Ancillary Carrier Services. 10. Procurement of Frame Relay Services. The RFP Process. Objectives. Evaluation Criteria. The RFP. Contract Negotiations. Monitoring the Contract. 11. Design of Frame Relay Networks. Overview. Physical Network Design. Backbone Network Design. Access Network Design. Virtual Circuit Network Design. The Access Network Design Process. Set Objectives. Inventory the Sites. Collect Traffic Statistics. Sketch the PVC Map. Consider Asymmetric PVCs. Determine CIR. Determine Port Connection Speed. Determine Access Circuit Speed. Decide on Backup Options. Plan for Implementation. Implement and Fine-Tune. Case Study: Redesigning a Private Line Network. Solution 1: Star Topology Frame Relay Network. Solution 2: Hybrid Frame Relay Network. Solution 3: Partial Mesh Topology Frame Relay Network. Other Design Issues. Designing for Performance. Designing for Switched Virtual Circuits. Designing for Disaster Recovery. Exercise. 12. Voice over Frame Relay. Advantages. Challenges. Measuring Voice Quality. Improving Voice Performance. Voice Compression. Silence Suppression. Voice-Engineering Techniques. Small Voice Frames. Fragmentation of Data Frames. Priority of Voice Frames. QoS in the Frame Relay Network. Fax over Frame Relay. Voice-Band Modem Data over Frame Relay. Video over Frame Relay. Considerations. Performance and Quality Issues. Technical Issues. Management and Administrative Issues. Perception Issues. Standards. FRF.11 Voice over Frame Relay Implementation Agreement. FRF.12 Frame Relay Fragmentation Implementation Agreement. 13. Internetworking with Frame Relay. Routing over Frame Relay Networks. Routing Protocols. Improved Routing Protocols. Frame Relay Interfaces for Routers. TCP and Congestion Control. Routers and Congestion. Prioritizing Traffic within Routers. Effects of Frame Relay on Router Interconnectivity. RFC 1490/2427 Multiprotocol Encapsulation. RFC 1490 Encapsulation Formats. Address Resolution with RFC 1490. Encapsulation of X.25. RFC 1490 Misunderstandings. Routable Protocols over Frame Relay. TCP/IP over Frame Relay. IPX over Frame Relay. IBM's SNA over Frame Relay. SNA Background. SNA over Frame Relay. IBM Hardware/Software Support for Frame Relay. SNA Gateways. Router-Based RFC 1490 Multiprotocol Encapsulation. Data Link Switching. Sending SNA over Frame Relay. Nonroutable Protocols over Frame Relay. 14. Frame Relay and ATM. Comparison. Network Interworking. Service Interworking. The ATM Frame User-Network Interface. Migration Path. Conclusion. Appendix A - Frame Relay Information Sources. Web Sites. Internet Newsgroups (Usenet). Books. Magazines (monthly). Periodicals (weekly). Newsletters, Mailing Lists. Frame Relay Vendors and Carriers. 
Appendix B - Answers to Exercises. Chapter 3 Exercises. Chapter 11 Exercises. Glossary. References. Index.
---
paper_title: MONTE: an implementation of an MPLS online traffic engineering tool
paper_content:
Multiservice networks require careful mapping of traffic in order to provide quality of service. Applying offline Traffic Engineering techniques leads to better usage of resources and makes it possible to assure some degree of quality of service. Even with those techniques applied, the initial quality can degrade as network and traffic conditions change dynamically. Online Traffic Engineering has a major role in addressing this problem. In the MONTE project, a solution for addressing this problem in Multiprotocol Label Switching networks was proposed and implemented in software. The solution involves network discovery and monitoring, congestion detection, a corrective algorithm, and a mechanism for signalling changes in the network. The entire solution was conceived to work in real time and to be vendor independent. This paper explains the details of the solution and its implementation. Results validating the correct operation of the tool are also shown. These results were obtained through tests in a live network.
---
paper_title: MPLS and traffic engineering in IP networks
paper_content:
Rapid growth and increasing requirements for service quality, reliability, and efficiency have made traffic engineering an essential consideration in the design and operation of large public Internet backbone networks. Internet traffic engineering addresses the issue of performance optimization of operational networks. A paramount objective of Internet traffic engineering is to facilitate the transport of IP traffic through a given network in the most efficient, reliable, and expeditious manner possible. Historically, traffic engineering in the Internet has been hampered by the limited functional capabilities of conventional IP technologies. Recent developments in multiprotocol label switching (MPLS) and differentiated services have opened up new possibilities to address some of the limitations of the conventional technologies. This article discusses the applications of MPLS to traffic engineering in IP networks.
---
paper_title: Many Sources Asymptotics for Networks with Small Buffers
paper_content:
In this paper, we obtain the overflow asymptotics in a network with small buffers when the resources are accessed by a large number of stationary independent sources. Under the assumption that the network is loop-free with respect to source–destination routes, we identify the precise large deviations rate functions for the buffer overflow at each node in terms of the external input characteristics. It is assumed that each type of source requires a Quality of Service (QoS) defined by bounds on the fraction of offered work lost. We then obtain the admissible region for sources which access the network based on these QoS requirements. When all the sources require the same QoS, we show that the admissible region asymptotically corresponds to that which is obtained by assuming that flows pass through each node unchanged.
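As a rough, illustrative statement of the kind of result involved (not quoted from the paper), in the many-sources, small-buffer regime the overflow probability at a link of capacity Nc shared by N independent, statistically identical sources is typically approximated by a Chernoff-type estimate of the form

    P(\text{overflow}) \approx e^{-N I(c)}, \qquad
    I(c) = \sup_{\theta \ge 0} \left[ \theta c - \log \mathbb{E}\, e^{\theta R} \right],

where R denotes the instantaneous rate of a single source and c the capacity per source; the paper itself derives the precise per-node rate functions for heterogeneous sources over loop-free routes.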
---
paper_title: Fast overflow probability estimation tool for MPLS networks
paper_content:
The constant growth of the Internet and the variety of services provided make the estimation of QoS parameters a fundamental need for every Internet Service Provider. The present work introduces a software tool that calculates the overflow probability on the core links of an MPLS network. The calculation is based on the statistical properties of the arriving traffic and the routing on the network. The procedure uses results from large deviations theory and the small-buffer work of Likhanov et al. [22]. The results obtained show a high degree of accuracy as well as very short processing times. This allows the user to determine the overflow status of the network without the need to use the traditional, highly time-consuming simulation techniques.
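A hedged numerical sketch of the small-buffer (Chernoff/Cramer) estimate underlying such a tool is shown below for N homogeneous on-off sources; the actual tool works from measured statistical properties of heterogeneous traffic and the network routing, and all parameter values here are illustrative.

    import math

    def overflow_probability(n_sources, peak_rate, p_on, capacity):
        # log moment generating function of one source's instantaneous rate
        def lmgf(theta):
            return math.log(1.0 - p_on + p_on * math.exp(theta * peak_rate))
        c = capacity / n_sources                  # capacity per source
        # maximize theta*c - lmgf(theta) over theta >= 0 with a simple grid search
        rate = max(theta * c - lmgf(theta) for theta in
                   [i * 0.01 for i in range(1, 2000)])
        return math.exp(-n_sources * max(rate, 0.0))

    # e.g. 1000 sources, peak 1 Mb/s, active 40% of the time, 500 Mb/s link
    print(overflow_probability(1000, 1.0, 0.4, 500.0))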
---
paper_title: Designing the simulative evaluation of an architecture for supporting QoS on a large scale
paper_content:
The EuQoS system is a complete QoS system, scalable to large dimensions and addressing QoS at all relevant layers, which has been developed within the framework of the IST-EuQoS project. Its design has been aided by a considerable amount of modeling and simulation work, aimed at testing the various QoS mechanisms devised and their interaction. This paper describes the modeling and simulation work done in the framework of the project. We describe the three simulation models which have been developed, based on the different timescales at which the QoS mechanisms have effect. Furthermore, as a sample case of performance evaluation, involving different simulators, we describe the performance evaluation of the EuQoS signaling subsystem.
---
paper_title: The EuQoS system: a solution for QoS routing in heterogeneous networks [Quality of Service based Routing Algorithms for Heterogeneous Networks]
paper_content:
EuQoS is the acronym for "end-to-end quality of service support over heterogeneous networks", a European research project aimed at building an entire QoS framework, addressing all the relevant network layers, protocols, and technologies. This framework, which includes the most common access networks (xDSL, UMTS, WiFi, and LAN), is being prototyped and tested in a multidomain scenario throughout Europe, composing what we call the EuQoS system. In this article we present the novel QoS routing mechanisms that are being developed and evaluated in the framework of this project. The preliminary performance results validate the design choices of the EuQoS system, and confirm the potential impact this project is likely to have in the near future.
---
paper_title: AN ARCHITECTURAL FRAMEWORK FOR HETEROGENEOUS NETWORKING
paper_content:
The growth over the last decade in the use of wireless networking devices has been explosive. Soon many devices will have multiple network interfaces, each with very different characteristics. We believe that a framework that encapsulates the key challenges of heterogeneous networking is required. Just as a map clearly helps one to plan a journey, a framework is needed to help us move forward in this unexplored area. The approach taken here is similar to the OSI model, in which tightly defined layers are used to specify functionality, allowing a modular approach to the extension of systems and the interchange of their components, whilst providing a model that is more oriented to heterogeneity and mobility.
---
paper_title: Y-Comm: a global architecture for heterogeneous networking
paper_content:
In the near future mobile devices with several interfaces will become commonplace. Most of the peripheral networks using the Internet will therefore employ wireless technology. To provide support for these devices, this paper proposes a new framework which encompasses the functions of both peripheral and core networks. The framework is called Y-Comm and is defined in a layered manner like the OSI model.
---
paper_title: Dynamic and automatic interworking between personal area networks using composition
paper_content:
Next generation communication networks will be characterized by the coexistence of multiple technologies and user devices in an integrated fashion. The increasing number of devices owned by a single user will lead to a new communication paradigm: users owning multiple devices that form cooperative networks, and networks of different users that communicate with each other, e.g., acquiring Internet access through each other. In this communication scenario no user intervention should be required, and technology should seamlessly adapt to the user's context, preferences, and needs. In this paper we address one of those scenarios, interworking between personal area networks, using legacy technologies together with the Ambient Network and network composition concepts explained herein. We argue that new functionalities should be introduced to enable effortless use of legacy technologies in such dynamic and heterogeneous environments.
---
paper_title: Context generation and structuralization for ambient networks
paper_content:
This paper presents a model of context information data source structuralization, having in mind requirements of Ambient Networks applications. The proposed solution extends the ContextWare architecture, which is a general framework for context information dissemination. It assumes that the transfer of context information which represents a temporal state of sensors is accompanied by semantic information. This is a key concept in achieving true interoperability between heterogeneous system domains in mobile applications. A system for semi-automatic generation of data sources in the form of Web Services wrapping the sensors is presented. The input data of this system consists of information about sensors and an ontology describing their semantics. A proposed notation, describing the mapping of values from sensors to the ontology used in the Ambient Networks project, was employed at this stage. The outcome of the system consists of a Web Service which exposes data and semantics of context information.
---
paper_title: Ambient networks: a framework for mobile network cooperation
paper_content:
Ambient Networks represents a new networking approach that aims at enabling the cooperation of heterogeneous networks and networking resources, on demand and transparently. This should happen without the need for pre-configuration or offline negotiation between network operators. Ambient Networks clearly separates the control space from the transport protocol, and introduces a common control space called the Ambient Control Space that comprises all control functionality within an Ambient Network. The Ambient Control Space is also responsible for the composition of different networks, with the control spaces of composing networks forming one Ambient Control Space. This paper discusses the justification for Ambient Networks, the main novelties introduced by Ambient Networks, and the requirements imposed on the Ambient Control Space. It also goes into more detail about the Ambient Networks architecture and draws initial conclusions.
---
paper_title: Names, addresses and identities in ambient networks
paper_content:
Ambient Networks interconnect independent realms that may use different local network technologies and may belong to different administrative or legal entities. At the core of these advanced internetworking concepts is a flexible naming architecture based on dynamic indirections between names, addresses and identities. This paper gives an overview of the connectivity abstractions of Ambient Networks and then describes its naming architecture in detail, comparing and contrasting it with other related next-generation network architectures.
---
paper_title: Towards modular mobility management in ambient networks
paper_content:
The overall goal of the Ambient Networks Integrated Project is to develop a vision for future wireless and mobile networks. Even though mobility management has been investigated within numerous research and standardisation projects, the novel approach of Ambient Networks to the distribution of control space functions opens up possibilities to tailor mobility support for specific environments such as 3GPP and IEEE 802 networks and, on the other hand, to efficiently manage the large variety of co-existing legacy and novel solutions foreseen in beyond-3G networks. The paper describes how modular design of mobility management functions can contribute to this objective. A closely related topic concerns the different types of communication endpoints. The paper elucidates how this discussion will help to coordinate different mobility solutions residing at different layers of the communication stack.
---
paper_title: Ambient networks: bridging heterogeneous network domains
paper_content:
Providing end-to-end communication in heterogeneous internetworking environments is a challenge. Two fundamental problems are bridging between different internetworking technologies and hiding network complexity and differences from both applications and application developers. This paper presents abstraction and naming mechanisms that address these challenges in the Ambient Networks project. Connectivity abstractions hide the differences between heterogeneous internetworking technologies and enable applications to operate across them. A common naming framework enables end-to-end communication across otherwise independent internetworks and supports advanced networking capabilities, such as indirection or delegation, through dynamic bindings between named entities.
---
paper_title: Ambient Networks - A Framework for Multi-Access Control in Heterogeneous Networks
paper_content:
In the Ambient Networks project, research is ongoing to define a new control space for future internetworking. This internetworking will be characterized by a high degree of network heterogeneity as well as nomadicity and mobility of both users and networks. The so-called Ambient Control Space (ACS) provides functionality to manage these traits of future internetworking. This paper provides a short presentation of the architecture of Ambient Networks and the ACS, and then focuses on how connectivity is handled from a multi-access and mobility perspective through the support of the ACS.
---
paper_title: Ambient Networks : Co-operative Mobile Networking For The Wireless World
paper_content:
Ambient Networks defines a new kind of network architecture, which embeds support for cooperation and competition between diverse network types within a common control layer. This unified networking concept can adapt to the heterogeneous environments of different radio technologies and service and network environments. Special focus is placed on facilitating both competition and co-operation of various market players, by defining interfaces which allow the instant negotiation of cooperation agreements. The Ambient Networking concept has been developed in the framework of the Ambient Networks project, which is co-sponsored by the European Union under the Information Society Technology (IST) priority of the 6th Framework Programme. The Ambient Networks project mobilised the work of researchers from over forty different organisations, both major industrial corporations and leading academic institutions, from Europe and worldwide. This book offers a complete and detailed overview of the Ambient Networking concept and its core technologies. The authors explain the problems with current mobile IP networks and the need for a new mobility-aware IP-based control architecture, before presenting the Ambient Networking concept itself and the business opportunities that it offers. The architecture, components, features and challenges of Ambient Networking are covered in depth, with comprehensive discussions of multi-radio access, generic Ambient Network signalling, mobility support, context and network management and built-in media delivery overlay control. Ambient Networks: Co-operative Mobile Networking for the Wireless World explains the need for Ambient Networking, discussing the limitations of today's proposed architectures and explaining the business potential of edge networks and network co-operation; describes Ambient Networking technology in detail and addresses the technical challenges for implementation; and includes practical user scenarios which are fully analysed and assessed through simulation studies. Including a complete examination of the research and technologies arising from the Ambient Networks concept, Ambient Networks will be invaluable for research and development teams in networking and communications technology, as well as advanced students in electrical engineering and computer science faculties. Standardisation specialists, research departments, and telecommunications analysts will also find this a helpful resource.
---
paper_title: Ambient networks: a framework for future wireless internetworking
paper_content:
An increasingly wireless world faces new challenges due to the dynamicity of interactions, the range of applications, and the multitude of available radio access technologies and network functionality. The Ambient Networks project recognizes these trends and enables the creation of innovative network solutions for mobile and wireless systems beyond 3G. These networks enable scalable and affordable wireless networking while providing pervasive, rich and easy-to-use communication. A specific focus lies on enabling advanced capabilities in environments with increased competition as well as cooperation, environments that are populated by a multitude of user devices, wireless technologies, network operators and business actors. The project adopts a modular architecture that enables plug-and-play control extensibility and supports a wide range of different applications and network technologies. Based on a small subset of common functionality, this approach supports the dynamic deployment of advanced internetworking capabilities, such as media and context-awareness or multi-radio access.
---
paper_title: Testbed infrastructure supporting pervasive services
paper_content:
Daidalos II [1] works towards its overall mission of providing a beyond-3G framework that is capable of supporting pervasive services built on network and service infrastructures for mobile end users. In order to validate such a framework, Daidalos worked towards the completion of a prototype demonstrator, which was verified on a Daidalos-specific testbed infrastructure, enabling end-user validation of the Daidalos framework. Having a Daidalos-specific testbed infrastructure up and running with Daidalos components installed provides a valuable opportunity for the Daidalos consortium to realistically debug and evaluate the Daidalos framework and to assess whether it lives up to its objectives. This paper provides an overview of the testbed layout and the evolution of this testbed. It also gives an insight into the validation aspects of work completed within this Daidalos testbed via a scenario-driven process, detailing some of the applications specifically designed for the Daidalos II demonstrator that support and help convey the pervasive aspects of the Daidalos framework.
---
paper_title: The DAIDALOS Architecture for QoS over Heterogeneous Wireless Networks
paper_content:
---
paper_title: Daidalos Framework for Successful Testbed Integration
paper_content:
The Daidalos Testbed integration framework required detailed planning and implementation, constantly adapting to the demanding changes of a research project as it advanced from the development phase to the integration phase. This paper describes the various integration and validation efforts required to deploy an operational Daidalos Testbed infrastructure, demonstrating the effort required to achieve a successful overall integration process. For such a large-scale project as Daidalos, with a consortium of 49 partners, testbed deployment, operation and management was indeed an immense task, requiring the creation and enforcement of testbed processes suitable for the efficient and effective operation of the Daidalos system during integration and validation.
---
paper_title: Daidalos II: Implementing a Scenario Driven Process
paper_content:
User-centric, scenario-based approaches provide us with an opportunity to analyse, investigate, implement and evaluate new and innovative technologies within a large-scale research project. They help consortium partners focus on the end user and their needs, and provide direction during the requirements and implementation phases of the project's lifecycle. The Daidalos II project actively uses this scenario-based approach to implement and validate its innovative and new architectural advancements. Based on our experience of using a scenario-based approach in Daidalos I, we have now reviewed our approach and refined it to work more effectively for a large-scale project, based on the recommendations and feedback obtained. This paper provides an overview of the various activities involved in the scenario design and implementation stage, discussing the scenario design methodology, the criteria for final scenario selection, and the selected scenarios.
---
paper_title: The SMART Project Exploiting the Heterogeneous Mobile World
paper_content:
The wide proliferation of wireless systems and the use of software radio technologies enable the use of a heterogeneous network. In this concept, services are delivered via the network that is most efficient for that service. Our solution is based on a common core network that interconnects the access points of various wireless access networks. A mobile host can use multiple different access networks simultaneously to increase capacity or efficiency. Furthermore, a basic access network, separated from the other wireless access networks, is used as a means for wireless system discovery, signaling and paging. Quality of Service is of prominent importance due to the heterogeneous environment and the characteristics of the wireless channel. This paper describes the concepts of our architecture and presents an overview of it.
---
paper_title: A QoS enabling queuing scheme for 4G wireless access networks
paper_content:
There will be heterogeneous wireless access networks in 4G systems, e.g., WLAN, UTRAN, etc. These networks will need to support QoS and full mobility. For each connection request by a mobile user in such networks, there will be a need for a scheme for selecting the best network from among the available networks. This paper proposes a scheme that enhances existing approaches to addressing this challenge. The scheme models the radio access network as a network of queuing nodes. With this model, and for each available path, we determine the statistical QoS parameters of the user traffic in the radio channel. We postulate that these statistics indicate the QoS capabilities of the network and can therefore be used to select the best network to serve the mobile user.
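The paper's queuing-network model is not reproduced here, but the following toy Python sketch (assuming simple M/M/1 nodes and illustrative rates) conveys the selection idea: compute a statistical delay estimate for the user's traffic on each candidate access network and pick the one that best satisfies the delay budget.

    def mm1_delay(arrival_rate, service_rate):
        # mean sojourn time of an M/M/1 queue; infinite if the queue is unstable
        return float("inf") if arrival_rate >= service_rate else 1.0 / (service_rate - arrival_rate)

    def select_network(candidates, flow_rate, delay_budget):
        # candidates: {name: (current_load, capacity)} in packets per second
        best, best_delay = None, float("inf")
        for name, (load, capacity) in candidates.items():
            delay = mm1_delay(load + flow_rate, capacity)
            if delay <= delay_budget and delay < best_delay:
                best, best_delay = name, delay
        return best, best_delay

    print(select_network({"WLAN": (800.0, 1000.0), "UTRAN": (200.0, 300.0)},
                         flow_rate=50.0, delay_budget=0.02))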
---
paper_title: On schemes for supporting QoS and mobility in heterogeneous networks
paper_content:
This paper proposes a scheme to provide service mobility with required QoS for different applications in heterogeneous access networks. Some existing schemes and proposed schemes are surveyed and compared with respect to key QoS-related mechanisms and capabilities. We then introduce a proposed scheme supporting QoS and mobility across different access networks. This utilizes the Media-Independent Handover (MIH) Services and follows principles of the Next Generation Network (NGN) architecture. Correspondence among different service classes and parameters is also provided for service mobility using Media-Independent Handover (MIH) Services.
---
| Title: A Survey on QoS in Next Generation Networks
Section 1: Introduction
Description 1: Introduce the topic of QoS in Next Generation Networks, explain the importance and relevance of the topic, and provide an overview of the paper structure.
Section 2: Description Framework
Description 2: Present the description framework, including the concepts surrounding QoS-based networks, the factors that may affect QoS, and the approaches used to provide it.
Section 3: Integrated Services (IntServ)
Description 3: Describe the IntServ model, its functioning, and discuss its scalability issues that hinder widespread implementation.
Section 4: Differentiated Services (DiffServ)
Description 4: Discuss the DiffServ paradigm, its architecture, and various projects that have employed DiffServ to ensure QoS.
Section 5: Asynchronous Transfer Mode (ATM)
Description 5: Outline ATM technology, its capabilities in providing high-speed transfer and QoS, and notable research activities regarding ATM and QoS.
Section 6: MultiProtocol Label Switching (MPLS)
Description 6: Describe MPLS technology, its combination of IP and ATM, and review some research activities focusing on MPLS's traffic engineering capabilities.
Section 7: EU-QoS
Description 7: Explain the EuQoS project, its aim to construct a QoS-based framework for heterogeneous networks, and significant findings from the project.
Section 8: Y-Comm
Description 8: Outline the Y-Comm architecture for heterogeneous networks, its components, and related research contributions and findings.
Section 9: Ambient Networks (AN)
Description 9: Discuss the Ambient Networks project, its motivation, features, and innovative solutions for network cooperation and heterogeneous network interaction.
Section 10: DAIDALOS
Description 10: Describe the DAIDALOS project, its goal for advanced network interfaces and communication services, and highlight various research outputs from the project.
Section 11: Smart
Description 11: Provide an overview of the Smart architecture, its components, and how it supports efficient use of wireless access networks for QoS provisioning.
Section 12: Methods Discussion & Comparison
Description 12: Compare and contrast the various QoS architectures discussed, listing their strengths and drawbacks, and discuss the evolving requirements for QoS in current networks.
Section 13: Conclusion
Description 13: Summarize the findings from the survey, emphasizing the advancements and ongoing challenges in achieving QoS over heterogeneous networks. |
A Survey of Point-of-interest Recommendation in Location-based Social Networks | 16 | ---
paper_title: On the Levy-Walk Nature of Human Mobility
paper_content:
We report that human walks performed in outdoor settings of tens of kilometers resemble a truncated form of Levy walks commonly observed in animals such as monkeys, birds and jackals. Our study is based on about one thousand hours of GPS traces involving 44 volunteers in various outdoor settings including two different college campuses, a metropolitan area, a theme park and a state fair. This paper shows that many statistical features of human walks follow truncated power-law, showing evidence of scale-freedom and do not conform to the central limit theorem. These traits are similar to those of Levy walks. It is conjectured that the truncation, which makes the mobility deviate from pure Levy walks, comes from geographical constraints including walk boundary, physical obstructions and traffic. None of the commonly used mobility models for mobile networks captures these properties. Based on these findings, we construct a simple Levy walk mobility model which is versatile enough in emulating diverse statistical patterns of human walks observed in our traces. The model is also used to recreate similar power-law inter-contact time distributions observed in previous human mobility studies. Our network simulation indicates that the Levy walk features are important in characterizing mobile network routing performance.
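As a rough illustration of the truncated power-law step lengths reported above, the sketch below simulates a simple Levy-style walk in Python; the exponent and cut-off values are placeholders, not the parameters fitted from the GPS traces.
# Illustrative sketch: sample 2-D walk steps whose lengths follow a truncated
# power law, the statistical signature of Levy walks noted above. The exponent
# and cut-offs are made-up values, not the paper's fitted parameters.
import math
import random

def truncated_power_law_sample(alpha=1.6, x_min=1.0, x_max=1000.0):
    """Inverse-CDF sampling of p(x) ~ x^(-alpha) truncated to [x_min, x_max]."""
    u = random.random()
    a = x_min ** (1.0 - alpha)
    b = x_max ** (1.0 - alpha)
    return (a + u * (b - a)) ** (1.0 / (1.0 - alpha))

def levy_walk(n_steps=1000):
    """Return a list of (x, y) positions of a walker taking power-law-length steps."""
    x, y = 0.0, 0.0
    trace = [(x, y)]
    for _ in range(n_steps):
        step = truncated_power_law_sample()
        angle = random.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        trace.append((x, y))
    return trace

if __name__ == "__main__":
    print(levy_walk(5))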
---
paper_title: Modeling User Activity Preference by Leveraging User Spatial Temporal Characteristics in LBSNs
paper_content:
With the recent surge of location based social networks (LBSNs), activity data of millions of users has become attainable. This data contains not only spatial and temporal stamps of user activity, but also its semantic information. LBSNs can help to understand mobile users’ spatial temporal activity preference (STAP), which can enable a wide range of ubiquitous applications, such as personalized context-aware location recommendation and group-oriented advertisement. However, modeling such user-specific STAP needs to tackle high-dimensional data, i.e., user-location-time-activity quadruples, which is complicated and usually suffers from a data sparsity problem. In order to address this problem, we propose a STAP model. It first models the spatial and temporal activity preference separately, and then uses a principled way to combine them for preference inference. In order to characterize the impact of spatial features on user activity preference, we propose the notion of personal functional region and related parameters to model and infer user spatial activity preference. In order to model the user temporal activity preference with sparse user activity data in LBSNs, we propose to exploit the temporal activity similarity among different users and apply nonnegative tensor factorization to collaboratively infer temporal activity preference. Finally, we put forward a context-aware fusion framework to combine the spatial and temporal activity preference models for preference inference. We evaluate our proposed approach on three real-world datasets collected from New York and Tokyo, and show that our STAP model consistently outperforms the baseline approaches in various settings.
---
paper_title: Who, What, When, and Where: Multi-Dimensional Collaborative Recommendations Using Tensor Factorization on Sparse User-Generated Data
paper_content:
Given the abundance of online information available to mobile users, particularly tourists and weekend travelers, recommender systems that effectively filter this information and suggest interesting participatory opportunities will become increasingly important. Previous work has explored recommending interesting locations; however, users would also benefit from recommendations for activities in which to participate at those locations along with suitable times and days. Thus, systems that provide collaborative recommendations involving multiple dimensions such as location, activities and time would enhance the overall experience of users. The relationship among these dimensions can be modeled by higher-order matrices called tensors which are then solved by tensor factorization. However, these tensors can be extremely sparse. In this paper, we present a system and an approach for performing multi-dimensional collaborative recommendations for Who (User), What (Activity), When (Time) and Where (Location), using tensor factorization on sparse user-generated data. We formulate an objective function which simultaneously factorizes coupled tensors and matrices constructed from heterogeneous data sources. We evaluate our system and approach on large-scale real world data sets consisting of 588,000 Flickr photos collected from three major metro regions in the USA. We compare our approach with several state-of-the-art baselines and demonstrate that it outperforms all of them.
---
paper_title: Activity Sensor: Check-In Usage Mining for Local Recommendation
paper_content:
While on the go, people are using their phones as a personal concierge discovering what is around and deciding what to do. The mobile phone has become a recommendation terminal customized for individuals, capable of recommending activities and simplifying the accomplishment of related tasks. In this article, we conduct usage mining on the check-in data, with summarized statistics identifying the local recommendation challenges of a huge solution space, sparse available data, and complicated user intent, and with observations that motivate a hierarchical, contextual, and sequential solution. We present a point-of-interest (POI) category-transition-based approach, with the goal of estimating the visiting probability of a series of successive POIs conditioned on the current user context and sensor context. A mobile local recommendation demo application is deployed. The objective and subjective evaluations validate the effectiveness in providing mobile users both accurate recommendations and a favorable user experience.
---
paper_title: STELLAR: spatial-temporal latent ranking for successive point-of-interest recommendation
paper_content:
Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different times. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms the state-of-the-art successive POI recommendation model by about 20% in Precision@5 and Recall@5.
---
paper_title: Joint Modeling of User Check-in Behaviors for Real-time Point-of-Interest Recommendation
paper_content:
Point-of-Interest (POI) recommendation has become an important means to help people discover attractive and interesting places, especially when users travel out of town. However, the extreme sparsity of a user-POI matrix creates a severe challenge. To cope with this challenge, we propose a unified probabilistic generative model, the Topic-Region Model (TRM), to simultaneously discover the semantic, temporal, and spatial patterns of users’ check-in activities, and to model their joint effect on users’ decision making for selection of POIs to visit. To demonstrate the applicability and flexibility of TRM, we investigate how it supports two recommendation scenarios in a unified way, that is, hometown recommendation and out-of-town recommendation. TRM effectively overcomes data sparsity by the complementarity and mutual enhancement of the diverse information associated with users’ check-in activities (e.g., check-in content, time, and location) in the processes of discovering heterogeneous patterns and producing recommendations. To support real-time POI recommendations, we further extend the TRM model to an online learning model, TRM-Online, to track changing user interests and speed up the model training. In addition, based on the learned model, we propose a clustering-based branch and bound algorithm (CBB) to prune the POI search space and facilitate fast retrieval of the top-k recommendations. We conduct extensive experiments to evaluate the performance of our proposals on two real-world datasets, including recommendation effectiveness, overcoming the cold-start problem, recommendation efficiency, and model-training efficiency. The experimental results demonstrate the superiority of our TRM models, especially TRM-Online, compared with state-of-the-art competitive methods, by making more effective and efficient mobile recommendations. In addition, we study the importance of each type of pattern in the two recommendation scenarios, respectively, and find that exploiting temporal patterns is most important for the hometown recommendation scenario, while the semantic patterns play a dominant role in improving the recommendation effectiveness for out-of-town users.
---
paper_title: SPORE: A sequential personalized spatial item recommender system
paper_content:
With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important way of helping users discover interesting locations to increase their engagement with location-based services. Although human movement exhibits sequential patterns in LBSNs, most current studies on spatial item recommendations do not consider the sequential influence of locations. Leveraging sequential patterns in spatial item recommendation is, however, very challenging, considering 1) users' check-in data in LBSNs has a low sampling rate in both space and time, which renders existing prediction techniques on GPS trajectories ineffective; 2) the prediction space is extremely large, with millions of distinct locations as the next prediction target, which impedes the application of classical Markov chain models; and 3) there is no existing framework that unifies users' personal interests and the sequential influence in a principled manner. In light of the above challenges, we propose a sequential personalized spatial item recommendation framework (SPORE) which introduces a novel latent variable topic-region to model and fuse sequential influence with personal interests in the latent and exponential space. The advantages of modeling the sequential effect at the topic-region level include a significantly reduced prediction space, an effective alleviation of data sparsity and a direct expression of the semantic meaning of users' spatial activities. Furthermore, we design an asymmetric Locality Sensitive Hashing (ALSH) technique to speed up the online top-k recommendation process by extending the traditional LSH. We evaluate the performance of SPORE on two real datasets and one large-scale synthetic dataset. The results demonstrate a significant improvement in SPORE's ability to recommend spatial items, in terms of both effectiveness and efficiency, compared with the state-of-the-art methods.
---
paper_title: Discovering interpretable geo-social communities for user behavior prediction
paper_content:
Social community detection is a growing field of interest in the area of social network applications, and many approaches have been developed, including graph partitioning, latent space model, block model and spectral clustering. Most existing work purely focuses on network structure information which is, however, often sparse, noisy and lacking in interpretability. To improve the accuracy and interpretability of community discovery, we propose to infer users' social communities by incorporating their spatiotemporal data and semantic information. Technically, we propose a unified probabilistic generative model, User-Community-Geo-Topic (UCGT), to simulate the generative process of communities as a result of network proximities, spatiotemporal co-occurrences and semantic similarity. With a well-designed multi-component model structure and a parallel inference implementation to leverage the power of multicores and clusters, our UCGT model is expressive while remaining efficient and scalable to growing large-scale geo-social networking data. We deploy UCGT to two application scenarios of user behavior predictions: check-in prediction and social interaction prediction. Extensive experiments on two large-scale geo-social networking datasets show that UCGT achieves better performance than existing state-of-the-art comparison methods.
---
paper_title: gSCorr: modeling geo-social correlations for new check-ins on location-based social networks
paper_content:
Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures.
---
paper_title: Content-Aware Collaborative Filtering for Location Recommendation Based on Human Mobility Data
paper_content:
Location recommendation plays an essential role in helping people find places they are likely to enjoy. Though some recent research has studied how to recommend locations with the presence of social network and geographical information, few of them addressed the cold-start problem, specifically, recommending locations for new users. Because the visits to locations are often shared on social networks, rich semantics (e.g., tweets) that reveal a person's interests can be leveraged to tackle this challenge. A typical way is to feed them into traditional explicit-feedback content-aware recommendation methods (e.g., LibFM). As a user's negative preferences are not explicitly observable in most human mobility data, these methods need to draw negative samples for better learning performance. However, prior studies have empirically shown that sampling-based methods don't perform as well as a method that considers all unvisited locations as negative but assigns them a lower confidence. To this end, we propose an Implicit-feedback based Content-aware Collaborative Filtering (ICCF) framework to incorporate semantic content and steer clear of negative sampling. For efficient parameter learning, we develop a scalable optimization algorithm, scaling linearly with the data size and the feature size. Furthermore, we offer a good explanation for ICCF, such that the semantic content is actually used to refine user similarity based on mobility. Finally, we evaluate ICCF with a large-scale LBSN dataset where users have profiles and text content. The results show that ICCF outperforms LibFM with its best configuration, and that user profiles and text content are not only effective at improving recommendation but also helpful for coping with the cold-start problem.
---
paper_title: Exploring millions of footprints in location sharing services
paper_content:
Location sharing services (LSS) like Foursquare, Gowalla, and Facebook Places support hundreds of millions of user-driven footprints (i.e., "checkins"). Those global-scale footprints provide a unique opportunity to study the social and temporal characteristics of how people use these services and to model patterns of human mobility, which are significant factors for the design of future mobile+location-based services, traffic forecasting, urban planning, as well as epidemiological models of disease spread. In this paper, we investigate 22 million checkins across 220,000 users and report a quantitative assessment of human mobility patterns by analyzing the spatial, temporal, social, and textual aspects associated with these footprints. We find that: (i) LSS users follow the “Levy Flight” mobility pattern and adopt periodic behaviors; (ii) While geographic and economic constraints affect mobility patterns, so does individual social status; and (iii) Content and sentiment-based analysis of posts associated with checkins can provide a rich source of context for better understanding how users engage with these services.
---
paper_title: Location-based and preference-aware recommendation using sparse geo-social networking data
paper_content:
The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations.
---
paper_title: ORec: An Opinion-Based Point-of-Interest Recommendation Framework
paper_content:
As location-based social networks (LBSNs) rapidly grow, it is a timely topic to study how to recommend users with interesting locations, known as points-of-interest (POIs). Most existing POI recommendation techniques only employ the check-in data of users in LBSNs to learn their preferences on POIs by assuming a user's check-in frequency to a POI explicitly reflects the level of her preference on the POI. However, in reality users usually visit POIs only once, so the users' check-ins may not be sufficient to derive their preferences using their check-in frequencies only. Actually, the preferences of users are exactly implied in their opinions in text-based tips commenting on POIs. In this paper, we propose an opinion-based POI recommendation framework called ORec to take full advantage of the user opinions on POIs expressed as tips. In ORec, there are two main challenges: (i) detecting the polarities of tips (positive, neutral or negative), and (ii) integrating them with check-in data including social links between users and geographical information of POIs. To address these two challenges, (1) we develop a supervised aspect-dependent approach to detect the polarity of a tip, and (2) we devise a method to fuse tip polarities with social links and geographical information into a unified POI recommendation framework. Finally, we conduct a comprehensive performance evaluation for ORec using two large-scale real data sets collected from Foursquare and Yelp. Experimental results show that ORec achieves significantly superior polarity detection and POI recommendation accuracy compared to other state-of-the-art polarity detection and POI recommendation techniques.
---
paper_title: Addressing the cold-start problem in location recommendation using geo-social correlations
paper_content:
Location-based social networks (LBSNs) have attracted an increasing number of users in recent years, resulting in large amounts of geographical and social data. Such LBSN data provide an unprecedented opportunity to study the human movement from their socio-spatial behavior, in order to improve location-based applications like location recommendation. As users can check-in at new places, traditional work on location prediction that relies on mining a user's historical moving trajectories fails as it is not designed for the cold-start problem of recommending new check-ins. While previous work on LBSNs attempting to utilize a user's social connections for location recommendation observed limited help from social network information. In this work, we propose to address the cold-start location recommendation problem by capturing the correlations between social networks and geographical distance on LBSNs with a geo-social correlation model. The experimental results on a real-world LBSN dataset demonstrate that our approach properly models the geo-social correlations of a user's cold-start check-ins and significantly improves the location recommendation performance.
---
paper_title: Location recommendation for location-based social networks
paper_content:
In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exist strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of the FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, GM-FCF provides additional flexibility in the tradeoff between recommendation effectiveness and computational overhead.
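A minimal sketch of the friend-based collaborative filtering idea described above, with a distance-decayed variant standing in loosely for GM-FCF; the data layout and the exponential decay are assumptions, not the paper's exact formulation.
# Illustrative sketch of friend-based collaborative filtering (FCF): a user's
# score for an unvisited POI aggregates the check-in evidence of her friends.
# The distance-decayed variant loosely mirrors the geo-measured idea; both the
# data layout and the exponential decay are assumptions for illustration.
import math

def fcf_score(user, poi, friends, checkins):
    """friends: dict user -> iterable of friends; checkins: dict user -> dict poi -> count."""
    return sum(checkins.get(f, {}).get(poi, 0) for f in friends.get(user, ()))

def gm_fcf_score(user, poi, friends, checkins, dist_km, scale_km=10.0):
    """dist_km: dict (friend, poi) -> distance in km between the friend's home area and the POI."""
    return sum(checkins.get(f, {}).get(poi, 0) * math.exp(-dist_km.get((f, poi), 0.0) / scale_km)
               for f in friends.get(user, ()))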
---
paper_title: Predicting POI visits with a heterogeneous information network
paper_content:
A point of interest (POI) is a specific location that people may find useful or interesting. Examples include restaurants, stores, attractions, and hotels. With recent proliferation of location-based social networks (LBSNs), numerous users are gathered to share information on various POIs and to interact with each other. POI recommendation is then a crucial issue because it not only helps users to explore potential places but also gives LBSN providers a chance to post POI advertisements. As we utilize a heterogeneous information network to represent a LBSN in this work, POI recommendation is remodeled as a link prediction problem, which is significant in the field of social network analysis. Moreover, we propose to utilize the meta-path-based approach to extract implicit (but potentially useful) relationships between a user and a POI. Then, the extracted topological features are used to construct a prediction model with appropriate data classification techniques. In our experimental studies, the Yelp dataset is utilized as our testbed for performance evaluation purposes. Results of the experiments show that our prediction model is of good prediction quality in practical applications.
---
paper_title: Exploring social-historical ties on location-based social networks
paper_content:
Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to “check-in” at geographical locations and share such experiences with their friends. Millions of “check-in” records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study user’s social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore user’s check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in user’s check-in behavior. In particular, our model captures the property of user’s check-in history in forms of power-law distribution and short-term effect, and helps in explaining user’s check-in behavior. The experimental results on a real world LBSN demonstrate that our approach properly models user’s checkins and shows how social and historical ties can help location prediction.
---
paper_title: Discovering regions of different functions in a city using human mobility and POIs
paper_content:
The development of a city gradually fosters different functional regions, such as educational areas and business districts. In this paper, we propose a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interests (POIs) located in a region. Specifically, we segment a city into disjointed regions according to major roads, such as highways and urban express ways. We infer the functions of each region using a topic-based inference model, which regards a region as a document, a function as a topic, categories of POIs (e.g., restaurants and shopping malls) as metadata (like authors, affiliations, and key words), and human mobility patterns (when people reach/leave a region and where people come from and leave for) as words. As a result, a region is represented by a distribution of functions, and a function is featured by a distribution of mobility patterns. We further identify the intensity of each function in different locations. The results generated by our framework can benefit a variety of applications, including urban planning, location choosing for a business, and social recommendations. We evaluated our method using large-scale and real-world datasets, consisting of two POI datasets of Beijing (in 2010 and 2011) and two 3-month GPS trajectory datasets (representing human mobility) generated by over 12,000 taxicabs in Beijing in 2010 and 2011 respectively. The results justify the advantages of our approach over baseline methods solely using POIs or human mobility.
---
paper_title: A Survey of Point-of-Interest Recommendation in Location-Based Social Networks
paper_content:
With the rapid development of mobile devices, global positioning system (GPS) and Web 2.0 technologies, location-based social networks (LBSNs) have attracted millions of users to share rich information, such as experiences and tips. Point-of-Interest (POI) recommender system plays an important role in LBSNs since it can help users explore attractive locations as well as help social network service providers design location-aware advertisements for Points-of-Interest. In this paper, we present a brief survey of the task of Point-of-Interest recommendation in LBSNs and discuss some research directions for Point-of-Interest recommendation. We first describe the unique characteristics of Point-of-Interest recommendation, which distinguish Point-of-Interest recommendation approaches from traditional recommendation approaches. Then, according to what type of additional information is integrated with check-in data by POI recommendation algorithms, we classify POI recommendation algorithms into four categories: pure check-in data based POI recommendation approaches, geographical influence enhanced POI recommendation approaches, social influence enhanced POI recommendation approaches and temporal influence enhanced POI recommendation approaches. Finally, we discuss future research directions for Point-of-Interest recommendation.
---
paper_title: Regularity and Conformity: Location Prediction Using Heterogeneous Mobility Data
paper_content:
Mobility prediction enables appealing proactive experiences for location-aware services and offers essential intelligence to business and governments. Recent studies suggest that human mobility is highly regular and predictable. Additionally, social conformity theory indicates that people's movements are influenced by others. However, existing approaches for location prediction fail to organically combine both the regularity and conformity of human mobility in a unified model, and lack the capacity to incorporate heterogeneous mobility datasets to boost prediction performance. To address these challenges, in this paper we propose a hybrid predictive model integrating both the regularity and conformity of human mobility as well as their mutual reinforcement. In addition, we further elevate the predictive power of our model by learning location profiles from heterogeneous mobility datasets based on a gravity model. We evaluate the proposed model using several city-scale mobility datasets including location check-ins, GPS trajectories of taxis, and public transit data. The experimental results validate that our model significantly outperforms state-of-the-art approaches for mobility prediction in terms of multiple metrics such as accuracy and percentile rank. The results also suggest that the predictability of human mobility is time-varying, e.g., the overall predictability is higher on workdays than holidays while predicting users' unvisited locations is more challenging for workdays than holidays.
---
paper_title: Exploiting place features in link prediction on location-based social networks
paper_content:
Link prediction systems have been largely adopted to recommend new friends in online social networks using data about social interactions. With the soaring adoption of location-based social services it becomes possible to take advantage of an additional source of information: the places people visit. In this paper we study the problem of designing a link prediction system for online location-based social networks. We have gathered extensive data about one of these services, Gowalla, with periodic snapshots to capture its temporal evolution. We study the link prediction space, finding that about 30% of new links are added among "place-friends", i.e., among users who visit the same places. We show how this prediction space can be made 15 times smaller, while still 66% of future connections can be discovered. Thus, we define new prediction features based on the properties of the places visited by users which are able to discriminate potential future links among them. Building on these findings, we describe a supervised learning framework which exploits these prediction features to predict new links among friends-of-friends and place-friends. Our evaluation shows how the inclusion of information about places and related user activity offers high link prediction performance. These results open new directions for real-world link recommendation systems on location-based social networks.
---
paper_title: Geo-activity recommendations by using improved feature combination
paper_content:
In this paper, we propose a new model to integrate additional data, obtained from geospatial resources other than the original data set, in order to improve location/activity recommendations. The data set used in this work is a GPS trajectory of some users, gathered over 2 years. In order to obtain more accurate predictions and recommendations, we present a model that injects additional information into the main data set, and we aim to apply a mathematical method to the merged data. On the merged data set, the singular value decomposition technique is applied to extract latent relations. Several tests have been conducted, and the results of our proposed method are compared with a similar work for the same data set.
---
paper_title: Mining correlation between locations using human location history
paper_content:
The advance of location-acquisition technologies enables people to record their location histories with spatio-temporal datasets, which imply the correlation between geographical regions. This correlation indicates the relationship between locations in the space of human behavior, and can enable many valuable services, such as sales promotion and location recommendation. In this paper, by taking into account a user's travel experience and the order in which locations have been visited, we propose an approach to mine the correlation between locations from a large number of users' location histories. We built a personalized location recommendation system using the location correlation, and evaluated this system with a large-scale real-world GPS dataset. As a result, our method outperforms the related work using the Pearson correlation.
---
paper_title: Distance Matters: Geo-social Metrics for Online Social Networks
paper_content:
Online Social Networks (OSNs) are increasingly becoming one of the key media of communication over the Internet. The potential of these services as the basis to gather statistics and exploit information about user behavior is appealing and, as a consequence, the number of applications developed for these purposes has been soaring. At the same time, users are now willing to share information about their location, allowing for the study of the role of geographic distance in social ties. In this paper we present a graph analysis based approach to study social networks with geographic information and new metrics to characterize how geographic distance affects social structure. We apply our analysis to four large-scale OSN datasets: our results show that there is a vast portion of users with short-distance links and that clusters of friends are often geographically close. In addition, we demonstrate that different social networking services exhibit different geo-social properties: OSNs based mainly on location-advertising largely foster local ties and clusters, while services used mainly for news and content sharing present more connections and clusters on longer distances. The results of this work can be exploited to improve many classes of systems and a potential vast number of applications, as we illustrate by means of some practical examples.
---
paper_title: Exploiting Geographical Neighborhood Characteristics for Location Recommendation
paper_content:
Geographical characteristics derived from the historical check-in data have been reported effective in improving location recommendation accuracy. However, previous studies mainly exploit geographical characteristics from a user's perspective, via modeling the geographical distribution of each individual user's check-ins. In this paper, we are interested in exploiting geographical characteristics from a location perspective, by modeling the geographical neighborhood of a location. The neighborhood is modeled at two levels: the instance-level neighborhood defined by a few nearest neighbors of the location, and the region-level neighborhood for the geographical region where the location exists. We propose a novel recommendation approach, namely Instance-Region Neighborhood Matrix Factorization (IRenMF), which exploits two levels of geographical neighborhood characteristics: a) instance-level characteristics, i.e., nearest neighboring locations tend to share more similar user preferences; and b) region-level characteristics, i.e., locations in the same geographical region may share similar user preferences. In IRenMF, the two levels of geographical characteristics are naturally incorporated into the learning of latent features of users and locations, so that IRenMF predicts users' preferences on locations more accurately. Extensive experiments on the real data collected from Gowalla, a popular LBSN, demonstrate the effectiveness and advantages of our approach.
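For readers who want a concrete picture, the following is a hedged LaTeX sketch of what an instance-level neighborhood-regularized matrix factorization objective can look like; the notation and weights are illustrative only, and IRenMF additionally imposes region-level structure not shown here.
% Illustrative objective only; s_{ij} is a geographical similarity between POI i
% and its nearest neighbors N(i); w_{ui}, alpha and lambda are trade-off weights.
\min_{U,V}\; \sum_{u,i} w_{ui}\,\bigl(r_{ui}-\mathbf{u}_u^{\top}\mathbf{v}_i\bigr)^2
  \;+\;\alpha \sum_{i}\Bigl\|\mathbf{v}_i-\sum_{j\in N(i)} s_{ij}\,\mathbf{v}_j\Bigr\|^2
  \;+\;\lambda\bigl(\|U\|_F^2+\|V\|_F^2\bigr)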
---
paper_title: Graph-based Point-of-interest Recommendation with Geographical and Temporal Influences
paper_content:
The availability of user check-in data in large volume from the rapidly growing location-based social networks (LBSNs) enables a number of important location-aware services. Point-of-interest (POI) recommendation is one such service, which is to recommend POIs that users have not visited before. It has been observed that: (i) users tend to visit nearby places, and (ii) users tend to visit different places in different time slots, and in the same time slot, users tend to periodically visit the same places. For example, users usually visit a restaurant during lunch hours, and visit a pub at night. In this paper, we focus on the problem of time-aware POI recommendation, which aims at recommending a list of POIs for a user to visit at a given time. To exploit both geographical and temporal influences in time-aware POI recommendation, we propose the Geographical-Temporal influences Aware Graph (GTAG) to model check-in records, geographical influence and temporal influence. For effective and efficient recommendation based on GTAG, we develop a preference propagation algorithm named Breadth-first Preference Propagation (BPP). The algorithm follows a relaxed breadth-first search strategy, and returns recommendation results within at most 6 propagation steps. Our experimental results on two real-world datasets show that the proposed graph-based approach outperforms state-of-the-art POI recommendation methods substantially.
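The sketch below illustrates the flavor of bounded breadth-first preference propagation over a check-in graph; the string-prefixed node encoding, the decay factor and the proportional edge weighting are assumptions rather than the paper's exact algorithm.
# Illustrative sketch: propagate a user's preference mass breadth-first through a
# graph whose nodes are strings such as "user:u1", "time:12" and "poi:p3", for a
# bounded number of steps, then rank unvisited POI nodes by accumulated mass.
# The decay factor and proportional weighting are assumptions for illustration.
from collections import defaultdict

def bpp_recommend(start_nodes, edges, visited_pois, max_steps=6, decay=0.5, top_k=10):
    """edges: dict node -> list of (neighbor, weight); start_nodes: seed nodes of the query user."""
    mass = defaultdict(float)
    frontier = {n: 1.0 for n in start_nodes}
    for _ in range(max_steps):
        nxt = defaultdict(float)
        for node, m in frontier.items():
            neighbors = edges.get(node, [])
            total_w = sum(w for _, w in neighbors) or 1.0
            for nb, w in neighbors:
                nxt[nb] += decay * m * w / total_w   # spread decayed mass proportionally
        for node, m in nxt.items():
            mass[node] += m
        frontier = nxt
    scores = {n: s for n, s in mass.items()
              if n.startswith("poi:") and n not in visited_pois}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]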
---
paper_title: Point-of-Interest Recommendation in Location Based Social Networks with Topic and Location Awareness.
paper_content:
The widespread use of location-based social networks (LBSNs) has enabled opportunities for better location-based services through Point-of-Interest (POI) recommendation. Indeed, the problem of POI recommendation is to provide personalized recommendations of places of interest. Unlike traditional recommendation tasks, POI recommendation is personalized, location-aware, and context-dependent. In light of this difference, this paper proposes a topic- and location-aware POI recommender system by exploiting associated textual and context information. Specifically, we first exploit an aggregated latent Dirichlet allocation (LDA) model to learn the interest topics of users and to infer the interesting POIs by mining textual information associated with POIs. Then, a Topic and Location-aware probabilistic matrix factorization (TL-PMF) method is proposed for POI recommendation. A unique perspective of TL-PMF is to consider both the extent to which a user interest matches the POI in terms of topic distribution and the word-of-mouth opinions of the POIs. Finally, experiments on real-world LBSN data show that the proposed recommendation method outperforms state-of-the-art probabilistic latent factor models by a significant margin. Also, we have studied the impact of personalized interest topics and word-of-mouth opinions on POI recommendations.
---
paper_title: Transferring heterogeneous links across location-based social networks
paper_content:
Location-based social networks (LBSNs) are one kind of online social networks offering geographic services and have been attracting much attention in recent years. LBSNs usually have complex structures, involving heterogeneous nodes and links. Many recommendation services in LBSNs (e.g., friend and location recommendation) can be cast as link prediction problems (e.g., social link and location link prediction). Traditional link prediction research on LBSNs mostly focuses on predicting either social links or location links, assuming the prediction tasks of different types of links to be independent. However, in many real-world LBSNs, the prediction tasks for social links and location links are strongly correlated and mutually influential. Another key challenge in link prediction on LBSNs is the data sparsity problem (i.e., the "new network" problem), which can be encountered when LBSNs branch into new geographic areas or social groups. Actually, nowadays, many users are involved in multiple networks simultaneously and users who just join one LBSN may have been using other LBSNs for a long time. In this paper, we study the problem of predicting multiple types of links simultaneously for a new LBSN across partially aligned LBSNs and propose a novel method TRAIL (TRAnsfer heterogeneous lInks across LBSNs). TRAIL can accumulate information for locations from online posts and extract heterogeneous features for both social links and location links. TRAIL can predict multiple types of links simultaneously. In addition, TRAIL can transfer information from other aligned networks to the new network to solve the problem of lacking information. Extensive experiments conducted on two real-world aligned LBSNs show that TRAIL can achieve very good performance and substantially outperform the baseline methods.
---
paper_title: Mining User Mobility Features for Next Place Prediction in Location-Based Services
paper_content:
Mobile location-based services are thriving, providing an unprecedented opportunity to collect fine grained spatio-temporal data about the places users visit. This multi-dimensional source of data offers new possibilities to tackle established research problems on human mobility, but it also opens avenues for the development of novel mobile applications and services. In this work we study the problem of predicting the next venue a mobile user will visit, by exploring the predictive power offered by different facets of user behavior. We first analyze about 35 million check-ins made by about 1 million Foursquare users in over 5 million venues across the globe, spanning a period of five months. We then propose a set of features that aim to capture the factors that may drive users' movements. Our features exploit information on transitions between types of places, mobility flows between venues, and spatio-temporal characteristics of user check-in patterns. We further extend our study combining all individual features in two supervised learning models, based on linear regression and M5 model trees, resulting in a higher overall prediction accuracy. We find that the supervised methodology based on the combination of multiple features offers the highest levels of prediction accuracy: M5 model trees are able to rank one in two user check-ins among the top fifty venues, out of thousands of candidate items in the prediction list.
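One of the signals the paper combines is the transition between place types; the following sketch shows how a category-transition probability feature could be estimated from check-in sequences and used to rank candidate venues (Laplace smoothing and the data layout are assumptions, not the paper's exact feature definition).
# Illustrative sketch of a single next-place feature: category-to-category
# transition probabilities estimated from historical check-in sequences, then
# used to rank candidate next venues. Smoothing and layout are assumptions.
from collections import Counter, defaultdict

def fit_category_transitions(sequences):
    """sequences: list of per-user lists of place categories, in check-in order."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev_cat, next_cat in zip(seq, seq[1:]):
            counts[prev_cat][next_cat] += 1
    return counts

def transition_prob(counts, prev_cat, next_cat, n_categories, alpha=1.0):
    """Laplace-smoothed probability of moving from prev_cat to next_cat."""
    total = sum(counts[prev_cat].values())
    return (counts[prev_cat][next_cat] + alpha) / (total + alpha * n_categories)

def rank_candidates(counts, current_cat, candidates, n_categories):
    """candidates: list of (venue_id, category); returns venues sorted by the feature."""
    return sorted(candidates,
                  key=lambda vc: transition_prob(counts, current_cat, vc[1], n_categories),
                  reverse=True)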
---
paper_title: Dynamic User Modeling in Social Media Systems
paper_content:
Social media provides valuable resources to analyze user behaviors and capture user preferences. This article focuses on analyzing user behaviors in social media systems and designing a latent class statistical mixture model, named temporal context-aware mixture model (TCAM), to account for the intentions and preferences behind user behaviors. Based on the observation that the behaviors of a user in social media systems are generally influenced by intrinsic interest as well as the temporal context (e.g., the public's attention at that time), TCAM simultaneously models the topics related to users' intrinsic interests and the topics related to temporal context and then combines the influences from the two factors to model user behaviors in a unified way. Considering that users' interests are not always stable and may change over time, we extend TCAM to a dynamic temporal context-aware mixture model (DTCAM) to capture users' changing interests. To alleviate the problem of data sparsity, we exploit the social and temporal correlation information by integrating a social-temporal regularization framework into the DTCAM model. To further improve the performance of our proposed models (TCAM and DTCAM), an item-weighting scheme is proposed to enable them to favor items that better represent topics related to user interests and topics related to temporal context, respectively. Based on our proposed models, we design a temporal context-aware recommender system (TCARS). To speed up the process of producing the top-k recommendations from large-scale social media data, we develop an efficient query-processing technique to support TCARS. Extensive experiments have been conducted to evaluate the performance of our models on four real-world datasets crawled from different social media sites. The experimental results demonstrate the superiority of our models, compared with the state-of-the-art competitor methods, by modeling user behaviors more precisely and making more effective and efficient recommendations.
---
paper_title: Using location for personalized POI recommendations in mobile environments
paper_content:
Internet-based recommender systems have traditionally employed collaborative filtering techniques to deliver relevant "digital" results to users. In the mobile Internet however, recommendations typically involve "physical" entities (e.g., restaurants), requiring additional user effort for fulfillment. Thus, in addition to the inherent requirements of high scalability and low latency, we must also take into account a "convenience" metric in making recommendations. In this paper, we propose an enhanced collaborative filtering solution that uses location as a key criterion for generating recommendations. We frame the discussion in the context of our "restaurant recommender" system, and describe preliminary results that indicate the utility of such an approach. We conclude with a look at open issues in this space, and motivate a future discussion on the business impact and implications of mining the data in such systems.
---
paper_title: Learning travel recommendations from user-generated GPS traces
paper_content:
The advance of GPS-enabled devices allows people to record their location histories with GPS traces, which imply human behaviors and preferences related to travel. In this article, we perform two types of travel recommendations by mining multiple users' GPS traces. The first is a generic one that recommends a user with top interesting locations and travel sequences in a given geospatial region. The second is a personalized recommendation that provides an individual with locations matching her travel preferences. To achieve the first recommendation, we model multiple users' location histories with a tree-based hierarchical graph (TBHG). Based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based model to infer the interest level of a location and a user's travel experience (knowledge). In the personalized recommendation, we first understand the correlation between locations, and then incorporate this correlation into a collaborative filtering (CF)-based model, which predicts a user's interests in an unvisited location based on her location histories and those of others. We evaluated our system based on a real-world GPS trace dataset collected by 107 users over a period of one year. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, we achieved a better performance in recommending travel sequences beyond baselines like rank-by-count. Regarding the personalized recommendation, our approach is more effective than the weighted Slope One algorithm with only slightly more computation, and is more efficient than the Pearson correlation-based CF model with similar effectiveness.
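A minimal HITS-style iteration capturing the mutual reinforcement between a user's travel experience (hub score) and a location's interest level (authority score) described above; the input format and the L2 normalization are assumptions.
# Illustrative sketch: HITS-style mutual reinforcement where a user's travel
# experience (hub) and a location's interest level (authority) reinforce each
# other through the visit matrix. Input layout and normalization are assumptions.
import math

def hits_locations(visits, n_iter=50):
    """visits: dict user -> dict location -> visit count; returns (interest, experience)."""
    locations = {l for locs in visits.values() for l in locs}
    experience = {u: 1.0 for u in visits}
    interest = {l: 1.0 for l in locations}
    for _ in range(n_iter):
        interest = {l: sum(visits[u].get(l, 0) * experience[u] for u in visits)
                    for l in locations}
        experience = {u: sum(c * interest[l] for l, c in visits[u].items())
                      for u in visits}
        interest = _normalize(interest)
        experience = _normalize(experience)
    return interest, experience

def _normalize(scores):
    norm = math.sqrt(sum(v * v for v in scores.values())) or 1.0
    return {k: v / norm for k, v in scores.items()}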
---
paper_title: A General Geographical Probabilistic Factor Model for Point of Interest Recommendation
paper_content:
The problem of point of interest (POI) recommendation is to provide personalized recommendations of places, such as restaurants and movie theaters. The increasing prevalence of mobile devices and of location based social networks (LBSNs) poses significant new opportunities as well as challenges, which we address. The decision process for a user to choose a POI is complex and can be influenced by numerous factors, such as personal preferences, geographical considerations, and user mobility behaviors. This is further complicated by the connection between LBSNs and mobile devices. While there are some studies on POI recommendations, they lack an integrated analysis of the joint effect of multiple factors. Meanwhile, although latent factor models have been proved effective and are thus widely used for recommendations, adopting them for POI recommendations requires delicate consideration of the unique characteristics of LBSNs. To this end, in this paper, we propose a general geographical probabilistic factor model (Geo-PFM) framework which strategically takes various factors into consideration. Specifically, this framework allows us to capture the geographical influences on a user’s check-in behavior. Also, user mobility behaviors can be effectively leveraged in the recommendation model. Moreover, based on our Geo-PFM framework, we further develop a Poisson Geo-PFM which provides a more rigorous probabilistic generative process for the entire model and is effective in modeling the skewed user check-in count data as implicit feedback for better POI recommendations. Finally, extensive experimental results on three real-world LBSN datasets (which differ in terms of user mobility, POI geographical distribution, implicit response data skewness, and user-POI observation sparsity) show that the proposed recommendation methods outperform state-of-the-art latent factor models by a significant margin.
---
paper_title: Friendship and mobility: user movement in location-based social networks
paper_content:
Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and not affected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility.
---
paper_title: Exploiting geographical influence for collaborative point-of-interest recommendation
paper_content:
In this paper, we aim to provide a point-of-interest (POI) recommendation service for the rapidly growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by a power-law distribution. Accordingly, we develop a collaborative recommendation algorithm for geographical influence based on naive Bayes. Furthermore, we propose a unified POI recommendation framework, which fuses a user's preference for a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.
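The geographical-influence idea above (a power law over the distances between a candidate POI and the user's visited POIs, combined under a naive-Bayes independence assumption) can be sketched as follows; the power-law parameters and the product fusion with a CF score are illustrative assumptions rather than the paper's fitted model.

```python
import numpy as np

def geo_probability(candidate, visited_coords, w0=0.5, w1=-1.0):
    """Power-law geographical influence sketch: multiply (naive-Bayes style) the
    power-law terms w0 * d^w1 over the distances d between the candidate POI and
    each POI the user has visited. w0, w1 are assumed, not fitted, values."""
    log_p = 0.0
    for lat, lon in visited_coords:
        d = np.hypot(candidate[0] - lat, candidate[1] - lon) + 1e-6  # avoid d = 0
        log_p += np.log(min(1.0, w0 * d ** w1))
    return float(np.exp(log_p))

def fuse(cf_score, geo_prob):
    """One simple way to combine a collaborative-filtering score with the
    geographical probability (the paper's framework fuses preference, social and
    geographical scores; this product rule is only an illustration)."""
    return cf_score * geo_prob

visited = [(0.0, 0.0), (5.0, 4.0)]                  # toy coordinates
print(fuse(cf_score=0.8, geo_prob=geo_probability((3.0, 2.0), visited)))
```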
---
paper_title: Time-aware point-of-interest recommendation
paper_content:
The availability of user check-in data in large volume from the rapidly growing location-based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one such service, which recommends places that users have not visited before. Several techniques have recently been proposed for this recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different times of the day, e.g., visiting a restaurant at noon and a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially.
---
paper_title: GeoSoCa: Exploiting Geographical, Social and Categorical Correlations for Point-of-Interest Recommendations
paper_content:
Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which helps people explore new places and helps businesses discover potential customers. However, because users check in at only a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which poses a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques.
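The kernel-estimation component described above can be illustrated with a small sketch that scores a candidate POI by a 2-D Gaussian kernel density estimate over the user's historical check-in coordinates; the rule-of-thumb bandwidth below stands in for the paper's adaptive bandwidth and is an assumption.

```python
import numpy as np

def kde_geo_score(candidate, user_checkins):
    """2-D Gaussian KDE over a user's historical check-in coordinates, evaluated
    at a candidate POI. The Silverman-style bandwidth is an illustrative stand-in
    for GeoSoCa's adaptive bandwidth."""
    pts = np.asarray(user_checkins, dtype=float)             # shape (n, 2)
    n = len(pts)
    h = 1.06 * pts.std(axis=0).mean() * n ** (-1 / 5) + 1e-6
    d2 = np.sum((pts - np.asarray(candidate)) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2 * h * h)) / (2 * np.pi * h * h)))

history = [(39.90, 116.40), (39.92, 116.41), (39.89, 116.39)]   # toy check-in coordinates
print(kde_geo_score((39.91, 116.40), history))
```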
---
paper_title: The scaling laws of human travel
paper_content:
The website wheresgeorge.com invites its users to enter the serial numbers of their US dollar bills and track them across America and beyond. Why? “For fun and because it had not been done yet”, they say. But the dataset accumulated since December 1998 has provided the ideal raw material to test the mathematical laws underlying human travel, and that has important implications for the epidemiology of infectious diseases. Analysis of the trajectories of over half a million dollar bills shows that human dispersal is described by a ‘two-parameter continuous-time random walk’ model: our travel habits conform to a type of random proliferation known as ‘superdiffusion’. And with that much established, it should soon be possible to develop a new class of models to account for the spread of human disease. The dynamic spatial redistribution of individuals is a key driving force of various spatiotemporal phenomena on geographical scales. It can synchronize populations of interacting species, stabilize them, and diversify gene pools. Human travel, for example, is responsible for the geographical spread of human infectious disease. In the light of increasing international trade, intensified human mobility and the imminent threat of an influenza A epidemic, the knowledge of dynamical and statistical properties of human travel is of fundamental importance. Despite its crucial role, a quantitative assessment of these properties on geographical scales remains elusive, and the assumption that humans disperse diffusively still prevails in models. Here we report on a solid and quantitative assessment of human travelling statistics by analysing the circulation of bank notes in the United States. Using a comprehensive data set of over a million individual displacements, we find that dispersal is anomalous in two ways. First, the distribution of travelling distances decays as a power law, indicating that trajectories of bank notes are reminiscent of scale-free random walks known as Levy flights. Second, the probability of remaining in a small, spatially confined region for a time T is dominated by algebraically long tails that attenuate the superdiffusive spread. We show that human travelling behaviour can be described mathematically on many spatiotemporal scales by a two-parameter continuous-time random walk model to a surprising accuracy, and conclude that human travel on geographical scales is an ambivalent and effectively superdiffusive process.
---
paper_title: iGSLR: personalized geo-social location recommendation: a kernel density estimation approach
paper_content:
With the rapidly growing location-based social networks (LBSNs), personalized geo-social recommendation becomes an important feature for LBSNs. Personalized geo-social recommendation not only helps users explore new places but also makes LBSNs more prevalent to users. In LBSNs, aside from user preference and social influence, geographical influence has also been intensively exploited in the process of location recommendation based on the fact that geographical proximity significantly affects users' check-in behaviors. Although geographical influence on users should be personalized, current studies only model the geographical influence on all users' check-in behaviors in a universal way. In this paper, we propose a new framework called iGSLR to exploit personalized social and geographical influence on location recommendation. iGSLR uses a kernel density estimation approach to personalize the geographical influence on users' check-in behaviors as individual distributions rather than a universal distribution for all users. Furthermore, user preference, social influence, and personalized geographical influence are integrated into a unified geo-social recommendation framework. We conduct a comprehensive performance evaluation for iGSLR using two large-scale real data sets collected from Foursquare and Gowalla which are two of the most popular LBSNs. Experimental results show that iGSLR provides significantly superior location recommendation compared to other state-of-the-art geo-social recommendation techniques.
---
paper_title: Exploiting Geographical Neighborhood Characteristics for Location Recommendation
paper_content:
Geographical characteristics derived from the historical check-in data have been reported effective in improving location recommendation accuracy. However, previous studies mainly exploit geographical characteristics from a user's perspective, via modeling the geographical distribution of each individual user's check-ins. In this paper, we are interested in exploiting geographical characteristics from a location perspective, by modeling the geographical neighborhood of a location. The neighborhood is modeled at two levels: the instance-level neighborhood defined by a few nearest neighbors of the location, and the region-level neighborhood for the geographical region where the location exists. We propose a novel recommendation approach, namely Instance-Region Neighborhood Matrix Factorization (IRenMF), which exploits two levels of geographical neighborhood characteristics: a) instance-level characteristics, i.e., nearest neighboring locations tend to share more similar user preferences; and b) region-level characteristics, i.e., locations in the same geographical region may share similar user preferences. In IRenMF, the two levels of geographical characteristics are naturally incorporated into the learning of latent features of users and locations, so that IRenMF predicts users' preferences on locations more accurately. Extensive experiments on the real data collected from Gowalla, a popular LBSN, demonstrate the effectiveness and advantages of our approach.
---
paper_title: GeoMF: joint geographical modeling and matrix factorization for point-of-interest recommendation
paper_content:
Point-of-Interest (POI) recommendation has become an important means to help people discover attractive locations. However, extreme sparsity of user-POI matrices creates a severe challenge. To cope with this challenge, viewing mobility records on location-based social networks (LBSNs) as implicit feedback for POI recommendation, we first propose to exploit weighted matrix factorization for this task since it usually serves collaborative filtering with implicit feedback better. Besides, researchers have recently discovered a spatial clustering phenomenon in human mobility behavior on the LBSNs, i.e., individual visiting locations tend to cluster together, and also demonstrated its effectiveness in POI recommendation, thus we incorporate it into the factorization model. Particularly, we augment users' and POIs' latent factors in the factorization model with activity area vectors of users and influence area vectors of POIs, respectively. Based on such an augmented model, we not only capture the spatial clustering phenomenon in terms of two-dimensional kernel density estimation, but we also explain why the introduction of such a phenomenon into matrix factorization helps to deal with the challenge from matrix sparsity. We then evaluate the proposed algorithm on a large-scale LBSN dataset. The results indicate that weighted matrix factorization is superior to other forms of factorization models and that incorporating the spatial clustering phenomenon into matrix factorization improves recommendation performance.
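A minimal sketch of the weighted matrix factorization backbone described above is given below: observed check-ins are binarized into an implicit preference matrix and given larger confidence weights than unobserved entries. The augmentation with activity/influence area vectors that defines GeoMF proper is omitted, and the weighting scheme and step sizes are assumptions.

```python
import numpy as np

def weighted_mf(C, k=8, lr=0.02, reg=0.05, w_pos=10.0, epochs=200, seed=0):
    """Weighted MF sketch for implicit check-in feedback: fit P ~ U V^T where P is
    the binarized check-in matrix and observed entries get larger confidence
    weights than unobserved ones (weights and hyperparameters are illustrative)."""
    P = (C > 0).astype(float)                       # implicit preference matrix
    W = 1.0 + w_pos * np.log1p(C)                   # confidence weights from counts
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((C.shape[0], k))
    V = 0.1 * rng.standard_normal((C.shape[1], k))
    for _ in range(epochs):
        E = W * (P - U @ V.T)                       # weighted residuals (dense, toy-sized)
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return U, V

C = np.array([[3, 0, 1], [0, 2, 0], [1, 0, 4]], dtype=float)   # user x POI check-in counts
U, V = weighted_mf(C)
print(np.round(U @ V.T, 2))
```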
---
paper_title: Fused matrix factorization with geographical and social influence in location-based social networks
paper_content:
Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly.
---
paper_title: Capturing Geographical Influence in POI Recommendations
paper_content:
Point-of-Interest (POI) recommendation is a significant service for location-based social networks (LBSNs). It recommends new places such as clubs, restaurants, and coffee bars to users. Whether recommended locations meet users' interests depends on three factors: user preference, social influence, and geographical influence. Hence extracting this information from users' check-in records is the key to POI recommendation in LBSNs. Capturing user preference and social influence is relatively easy since it is analogous to the methods used in a movie recommender system. However, capturing geographical influence is a new topic. Previous studies indicate that check-in locations disperse around several centers and that Gaussian distribution based models can be employed to approximate users' check-in behaviors. Yet existing center-discovery methods are unsatisfactory. In this paper, we propose two models -- the Gaussian mixture model (GMM) and the genetic algorithm based Gaussian mixture model (GA-GMM) -- to capture geographical influence. More specifically, we exploit GMM to automatically learn users' activity centers; further, we utilize GA-GMM to improve GMM by eliminating outliers. Experimental results on a real-world LBSN dataset show that GMM beats several popular geographical models in terms of POI recommendation, while GA-GMM excludes the effect of outliers and enhances GMM.
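As a sketch of the GMM step described above, the snippet below fits a Gaussian mixture (EM under the hood, via scikit-learn) to a user's check-in coordinates to recover activity centers; the genetic-algorithm outlier elimination of GA-GMM is not reproduced, and the synthetic data and number of centers are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def activity_centers(checkin_coords, n_centers=2, seed=0):
    """Fit a Gaussian mixture to a user's check-in coordinates and return the
    learned centers plus each check-in's center assignment."""
    X = np.asarray(checkin_coords, dtype=float)
    gmm = GaussianMixture(n_components=n_centers, covariance_type="full",
                          random_state=seed).fit(X)
    return gmm.means_, gmm.predict(X)

rng = np.random.default_rng(0)
home = rng.normal([39.9, 116.4], 0.01, size=(30, 2))    # synthetic "home" cluster
work = rng.normal([40.0, 116.5], 0.01, size=(30, 2))    # synthetic "work" cluster
centers, labels = activity_centers(np.vstack([home, work]))
print(centers)
```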
---
paper_title: Understanding individual human mobility patterns
paper_content:
Despite their importance for urban planning, traffic forecasting and the spread of biological and mobile viruses, our understanding of the basic laws governing human motion remains limited owing to the lack of tools to monitor the time-resolved location of individuals. Here we study the trajectory of 100,000 anonymized mobile phone users whose position is tracked for a six-month period. We find that, in contrast with the random trajectories predicted by the prevailing Lévy flight and random walk models, human trajectories show a high degree of temporal and spatial regularity, each individual being characterized by a time-independent characteristic travel distance and a significant probability to return to a few highly frequented locations. After correcting for differences in travel distances and the inherent anisotropy of each trajectory, the individual travel patterns collapse into a single spatial probability distribution, indicating that, despite the diversity of their travel history, humans follow simple reproducible patterns. This inherent similarity in travel patterns could impact all phenomena driven by human mobility, from epidemic prevention to emergency response, urban planning and agent-based modelling.
---
paper_title: Introduction to Recommender Systems Handbook
paper_content:
Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user. In this introductory chapter we briefly discuss basic RS ideas and concepts. Our main goal is to delineate, in a coherent and structured way, the chapters included in this handbook and to help the reader navigate the extremely rich and detailed content that the handbook offers.
---
paper_title: TrustWalker: a random walk model for combining trust-based and item-based recommendation
paper_content:
Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.
---
paper_title: Location recommendation for location-based social networks
paper_content:
In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis of a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exist strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of the FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques achieves recommendation effectiveness comparable to the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, GM-FCF provides additional flexibility in the tradeoff between recommendation effectiveness and computational overhead.
---
paper_title: A sentiment-enhanced personalized location recommendation system
paper_content:
Although online recommendation systems, such as those for movies or music, have been systematically studied in the past decade, location recommendation in Location-Based Social Networks (LBSNs) is not yet well investigated. In LBSNs, users can check in and leave tips commenting on a venue. These two heterogeneous data sources both describe users' preferences for venues. However, in current research, only users' check-in behavior is considered in the user location preference model; users' tips on venues are seldom investigated. Moreover, while existing work mainly considers social influence in recommendation, we argue that considering venue similarity can further improve the recommendation performance. In this research, we ameliorate location recommendation by enhancing not only the user location preference model but also the recommendation algorithm. First, we propose a hybrid user location preference model by combining the preferences extracted from check-ins and from text-based tips, which are processed using sentiment analysis techniques. Second, we develop a location-based social matrix factorization algorithm that takes both user social influence and venue similarity influence into account in location recommendation. Using two datasets extracted from the location-based social network Foursquare, experimental results demonstrate that the proposed hybrid preference model can better characterize user preference by maintaining preference consistency, and the proposed algorithm outperforms the state-of-the-art methods.
---
paper_title: Point-of-Interest Recommendations: Learning Potential Check-ins from Friends
paper_content:
The emergence of Location-based Social Network (LBSN) services provides a wonderful opportunity to build personalized Point-of-Interest (POI) recommender systems. Although a personalized POI recommender system can significantly facilitate users' outdoor activities, it faces many challenging problems, such as the hardness to model user's POI decision making process and the difficulty to address data sparsity and user/location cold-start problem. To cope with these challenges, we define three types of friends (i.e., social friends, location friends, and neighboring friends) in LBSN, and develop a two-step framework to leverage the information of friends to improve POI recommendation accuracy and address cold-start problem. Specifically, we first propose to learn a set of potential locations that each individual's friends have checked-in before and this individual is most interested in. Then we incorporate three types of check-ins (i.e., observed check-ins, potential check-ins and other unobserved check-ins) into matrix factorization model using two different loss functions (i.e., the square error based loss and the ranking error based loss). To evaluate the proposed model, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on two real-world data sets. The experimental results demonstrate the effectiveness of our methods.
---
paper_title: LORE: exploiting sequential influence for location recommendations
paper_content:
Providing location recommendations becomes an important feature for location-based social networks (LBSNs), since it helps users explore new places and makes LBSNs more prevalent to users. In LBSNs, geographical influence and social influence have been intensively used in location recommendations based on the facts that geographical proximity of locations significantly affects users' check-in behaviors and social friends often have common interests. Although human movement exhibits sequential patterns, most current studies on location recommendations do not consider any sequential influence of locations on users' check-in behaviors. In this paper, we propose a new approach called LORE to exploit sequential influence on location recommendations. First, LORE incrementally mines sequential patterns from location sequences and represents the sequential patterns as a dynamic Location-Location Transition Graph (L2TG). LORE then predicts the probability of a user visiting a location by Additive Markov Chain (AMC) with L2TG. Finally, LORE fuses sequential influence with geographical influence and social influence into a unified recommendation framework; in particular the geographical influence is modeled as two-dimensional check-in probability distributions rather than one-dimensional distance probability distributions in existing works. We conduct a comprehensive performance evaluation for LORE using two large-scale real data sets collected from Foursquare and Gowalla. Experimental results show that LORE achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques.
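The sequential component above can be sketched as follows: build a first-order location-location transition graph from check-in sequences and score a candidate with an additive Markov chain whose per-step weights decay with recency. The exponential decay and the toy sequences are assumptions, and the fusion with geographical and social influence is omitted.

```python
from collections import defaultdict

def build_transition_graph(sequences):
    """Count first-order location-to-location transitions from check-in sequences
    (a simplified Location-Location Transition Graph)."""
    graph = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            graph[a][b] += 1
    return graph

def additive_markov_score(graph, history, candidate, decay=0.5):
    """Additive Markov chain sketch: sum transition probabilities from each
    historical location to the candidate, weighted by an exponential recency
    decay (the decay form is an assumption, not the paper's exact weighting)."""
    score, weight = 0.0, 1.0
    for loc in reversed(history):                   # most recent location first
        out = graph.get(loc, {})
        total = sum(out.values())
        if total:
            score += weight * out.get(candidate, 0) / total
        weight *= decay
    return score

seqs = [["home", "cafe", "office"], ["home", "gym", "cafe", "office"]]
g = build_transition_graph(seqs)
print(additive_markov_score(g, history=["home", "cafe"], candidate="office"))
```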
---
paper_title: Exploring social-historical ties on location-based social networks
paper_content:
Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to “check-in” at geographical locations and share such experiences with their friends. Millions of “check-in” records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study users' social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore users' check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in users' check-in behavior. In particular, our model captures the property of a user's check-in history in the form of a power-law distribution and short-term effect, and helps in explaining the user's check-in behavior. The experimental results on a real world LBSN demonstrate that our approach properly models users' check-ins and show how social and historical ties can help location prediction.
---
paper_title: GeoSoCa: Exploiting Geographical, Social and Categorical Correlations for Point-of-Interest Recommendations
paper_content:
Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which helps people explore new places and helps businesses discover potential customers. However, because users check in at only a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which poses a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques.
---
paper_title: SoRec: social recommendation using probabilistic matrix factorization
paper_content:
Data sparsity, scalability and prediction quality have been recognized as the three most crucial challenges that every collaborative filtering algorithm or recommender system confronts. Many existing approaches to recommender systems can neither handle very large datasets nor easily deal with users who have made very few ratings or even none at all. Moreover, traditional recommender systems assume that all the users are independent and identically distributed; this assumption ignores the social interactions or connections among users. In view of the exponential growth of information generated by online social networks, social network analysis is becoming important for many Web applications. Following the intuition that a person's social network will affect personal behaviors on the Web, this paper proposes a factor analysis approach based on probabilistic matrix factorization to solve the data sparsity and poor prediction accuracy problems by employing both users' social network information and rating records. The complexity analysis indicates that our approach can be applied to very large datasets since it scales linearly with the number of observations, while the experimental results shows that our method performs much better than the state-of-the-art approaches, especially in the circumstance that users have made few or no ratings.
---
paper_title: Fused matrix factorization with geographical and social influence in location-based social networks
paper_content:
Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly.
---
paper_title: gSCorr: modeling geo-social correlations for new check-ins on location-based social networks
paper_content:
Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures.
---
paper_title: A matrix factorization technique with trust propagation for recommendation in social networks
paper_content:
Recommender systems are becoming tools of choice to select the online information relevant to a given user. Collaborative filtering is the most popular approach to building recommender systems and has been successfully employed in many applications. With the advent of online social networks, the social network based approach to recommendation has emerged. This approach assumes a social network among users and makes recommendations for a user based on the ratings of the users that have direct or indirect social relations with the given user. As one of their major benefits, social network based approaches have been shown to reduce the problems with cold start users. In this paper, we explore a model-based approach for recommendation in social networks, employing matrix factorization techniques. Advancing previous work, we incorporate the mechanism of trust propagation into the model. Trust propagation has been shown to be a crucial phenomenon in the social sciences, in social network analysis and in trust-based recommendation. We have conducted experiments on two real life data sets, the public domain Epinions.com dataset and a much larger dataset that we have recently crawled from Flixster.com. Our experiments demonstrate that modeling trust propagation leads to a substantial increase in recommendation accuracy, in particular for cold start users.
---
paper_title: Trust-aware recommender systems
paper_content:
Recommender Systems based on Collaborative Filtering suggest to users items they might like. However due to data sparsity of the input ratings matrix, the step of finding similar users often fails. We propose to replace this step with the use of a trust metric, an algorithm able to propagate trust over the trust network and to estimate a trust weight that can be used in place of the similarity weight. An empirical evaluation on Epinions.com dataset shows that Recommender Systems that make use of trust information are the most effective in term of accuracy while preserving a good coverage. This is especially evident on users who provided few ratings.
---
paper_title: Recommender systems with social regularization
paper_content:
Although Recommender Systems have been comprehensively analyzed in the past decade, the study of social-based recommender systems just started. In this paper, aiming at providing a general method for improving recommender systems by incorporating social network information, we propose a matrix factorization framework with social regularization. The contributions of this paper are four-fold: (1) We elaborate how social network information can benefit recommender systems; (2) We interpret the differences between social-based recommender systems and trust-aware recommender systems; (3) We coin the term Social Regularization to represent the social constraints on recommender systems, and we systematically illustrate how to design a matrix factorization objective function with social regularization; and (4) The proposed method is quite general, which can be easily extended to incorporate other contextual information, like social tags, etc. The empirical analysis on two large datasets demonstrates that our approaches outperform other state-of-the-art methods.
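For concreteness, one common average-based form of a socially regularized matrix factorization objective is written out below; the notation is an illustrative reconstruction (R the rating matrix, I the indicator of observed entries, U and V the user and item factors, F(i) user i's friends, sim a user-user similarity), not a verbatim copy of the paper's formulation.

```latex
% Illustrative average-based social regularization objective (notation assumed)
\min_{U,V}\;
\frac{1}{2}\sum_{i,j} I_{ij}\,\bigl(R_{ij}-U_i^{\top}V_j\bigr)^2
\;+\;\frac{\beta}{2}\sum_{i}\Bigl\|\,U_i-\frac{\sum_{f\in\mathcal{F}(i)}\mathrm{sim}(i,f)\,U_f}{\sum_{f\in\mathcal{F}(i)}\mathrm{sim}(i,f)}\Bigr\|_{2}^{2}
\;+\;\frac{\lambda_1}{2}\|U\|_F^2+\frac{\lambda_2}{2}\|V\|_F^2
```

The middle term is the social regularizer: it pulls each user's latent factor toward the similarity-weighted average of his or her friends' factors, while the first and last terms are the usual reconstruction error and L2 penalties.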
---
paper_title: Probabilistic Matrix Factorization
paper_content:
Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system.
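A minimal stochastic-gradient sketch of the MAP form of PMF (squared error on observed ratings plus L2 penalties on the factor matrices) is shown below; the hyperparameters and toy ratings are illustrative.

```python
import numpy as np

def pmf_sgd(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=200, seed=0):
    """Minimal SGD for the PMF MAP objective: squared error on observed ratings
    plus L2 penalties on the user and item factor matrices."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:                     # (user, item, rating) triples
            err = r - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
U, V = pmf_sgd(ratings, n_users=3, n_items=2)
print(np.round(U @ V.T, 2))                         # reconstructed rating matrix
```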
---
paper_title: LORE: exploiting sequential influence for location recommendations
paper_content:
Providing location recommendations becomes an important feature for location-based social networks (LBSNs), since it helps users explore new places and makes LBSNs more prevalent to users. In LBSNs, geographical influence and social influence have been intensively used in location recommendations based on the facts that geographical proximity of locations significantly affects users' check-in behaviors and social friends often have common interests. Although human movement exhibits sequential patterns, most current studies on location recommendations do not consider any sequential influence of locations on users' check-in behaviors. In this paper, we propose a new approach called LORE to exploit sequential influence on location recommendations. First, LORE incrementally mines sequential patterns from location sequences and represents the sequential patterns as a dynamic Location-Location Transition Graph (L2TG). LORE then predicts the probability of a user visiting a location by Additive Markov Chain (AMC) with L2TG. Finally, LORE fuses sequential influence with geographical influence and social influence into a unified recommendation framework; in particular the geographical influence is modeled as two-dimensional check-in probability distributions rather than one-dimensional distance probability distributions in existing works. We conduct a comprehensive performance evaluation for LORE using two large-scale real data sets collected from Foursquare and Gowalla. Experimental results show that LORE achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques.
---
paper_title: Friendship and mobility: user movement in location-based social networks
paper_content:
Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and is not affected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility.
---
paper_title: TICRec: A Probabilistic Framework to Utilize Temporal Influence Correlations for Time-Aware Location Recommendations
paper_content:
In location-based social networks (LBSNs), time significantly affects users’ check-in behaviors, for example, people usually visit different places at different times of weekdays and weekends, e.g., restaurants at noon on weekdays and bars at midnight on weekends. Current studies use the temporal influence to recommend locations through dividing users’ check-in locations into time slots based on their check-in time and learning their preferences to locations in each time slot separately. Unfortunately, these studies generally suffer from two major limitations: (1) the loss of time information because of dividing a day into time slots and (2) the lack of temporal influence correlations due to modeling users’ preferences to locations for each time slot separately. In this paper, we propose a probabilistic framework called TICRec that utilizes temporal influence correlations (TIC) of both weekdays and weekends for time-aware location recommendations. TICRec not only recommends locations to users, but it also suggests when a user should visit a recommended location. In TICRec, we estimate a time probability density of a user visiting a new location without splitting the continuous time into discrete time slots to avoid the time information loss. To leverage the TIC, TICRec considers both user-based TIC (i.e., different users’ check-in behaviors to the same location at different times ) and location-based TIC (i.e., the same user's check-in behaviors to different locations at different times ). Finally, we conduct a comprehensive performance evaluation for TICRec using two real data sets collected from Foursquare and Gowalla. Experimental results show that TICRec achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques with temporal influence.
---
paper_title: Time-aware point-of-interest recommendation
paper_content:
The availability of user check-in data in large volume from the rapidly growing location-based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one such service, which recommends places that users have not visited before. Several techniques have recently been proposed for this recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different times of the day, e.g., visiting a restaurant at noon and a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially.
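A simplified sketch of the time-aware idea above: split check-ins into hour-of-day slots and run user-based collaborative filtering within the slot of the query time. The 24-slot granularity, the cosine similarity, and the toy data are assumptions; the paper's smoothing across slots and its geographical enhancement are omitted.

```python
import numpy as np
from collections import defaultdict

def build_slot_matrices(checkins, n_slots=24):
    """Aggregate check-ins into per-time-slot user -> POI count tables.
    checkins: iterable of (user, poi, hour) tuples with hour in [0, 24)."""
    slots = [defaultdict(lambda: defaultdict(int)) for _ in range(n_slots)]
    for user, poi, hour in checkins:
        slots[int(hour) % n_slots][user][poi] += 1
    return slots

def time_aware_score(slots, target, poi, hour):
    """Score (target, poi) at a given hour: sum other users' check-in counts on
    the POI in that slot, weighted by their cosine similarity to the target
    computed within the same slot (a simplified time-aware CF rule)."""
    slot = slots[int(hour) % len(slots)]
    tu = slot.get(target, {})
    score = 0.0
    for other, counts in slot.items():
        if other == target:
            continue
        common = set(tu) & set(counts)
        num = sum(tu[p] * counts[p] for p in common)
        den = (np.sqrt(sum(c * c for c in tu.values())) *
               np.sqrt(sum(c * c for c in counts.values())))
        sim = num / den if den > 0 else 0.0
        score += sim * counts.get(poi, 0)
    return score

data = [("u1", "cafe", 9), ("u2", "cafe", 9), ("u2", "bakery", 9), ("u3", "bar", 22)]
slots = build_slot_matrices(data)
print(time_aware_score(slots, "u1", "bakery", hour=9))
```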
---
paper_title: Collaborative filtering with temporal dynamics
paper_content:
Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.
---
paper_title: STELLAR: spatial-temporal latent ranking for successive point-of-interest recommendation
paper_content:
Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) has become a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different times. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms the state-of-the-art successive POI recommendation model by about 20% in Precision@5 and Recall@5.
---
paper_title: Where You Like to Go Next: Successive Point-of-Interest Recommendation
paper_content:
Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide good recommendations, various research has been conducted in the literature. However, previous efforts mainly consider the "check-ins" as a whole and omit their temporal relation. They can only recommend POIs globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction. To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR.
---
paper_title: Temporal collaborative filtering with bayesian probabilistic tensor factorization
paper_content:
Real-world relational data are seldom stationary, yet traditional collaborative filtering algorithms generally rely on this assumption. Motivated by our sales prediction problem, we propose a factor-based algorithm that is able to take time into account. By introducing additional factors for time, we formalize this problem as a tensor factorization with a special constraint on the time dimension. Further, we provide a fully Bayesian treatment to avoid tuning parameters and achieve automatic model complexity control. To learn the model we develop an efficient sampling procedure that is capable of analyzing large-scale data sets. This new algorithm, called Bayesian Probabilistic Tensor Factorization (BPTF), is evaluated on several real-world problems including sales prediction and movie recommendation. Empirical results demonstrate the superiority of our temporal model.
---
paper_title: Unified Point-of-Interest Recommendation with Temporal Interval Assessment
paper_content:
Point-of-interest (POI) recommendation, which helps mobile users explore new places, has become an important location-based service. Existing approaches for POI recommendation have been mainly focused on exploiting the information about user preferences, social influence, and geographical influence. However, these approaches cannot handle the scenario where users are expecting to have POI recommendation for a specific time period. To this end, in this paper, we propose a unified recommender system, named the 'Where and When to gO' (WWO) recommender system, to integrate the user interests and their evolving sequential preferences with temporal interval assessment. As a result, the WWO system can make recommendations dynamically for a specific time period and the traditional POI recommender system can be treated as the special case of the WWO system by setting this time period long enough. Specifically, to quantify users' sequential preferences, we consider the distributions of the temporal intervals between dependent POIs in the historical check-in sequences. Then, to estimate the distributions with only sparse observations, we develop the low-rank graph construction model, which identifies a set of bi-weighted graph bases so as to learn the static user preferences and the dynamic sequential preferences in a coherent way. Finally, we evaluate the proposed approach using real-world data sets from several location-based social networks (LBSNs). The experimental results show that our method outperforms the state-of-the-art approaches for POI recommendation in terms of various metrics, such as F-measure and NDCG, with a significant margin.
---
paper_title: WSPred: A Time-Aware Personalized QoS Prediction Framework for Web Services
paper_content:
The exponential growth of Web service makes building high-quality service-oriented applications an urgent and crucial research problem. User-side QoS evaluations of Web services are critical for selecting the optimal Web service from a set of functionally equivalent service candidates. Since QoS performance of Web services is highly related to the service status and network environments which are variable against time, service invocations are required at different instances during a long time interval for making accurate Web service QoS evaluation. However, invoking a huge number of Web services from user-side for quality evaluation purpose is time-consuming, resource-consuming, and sometimes even impractical (e.g., service invocations are charged by service providers). To address this critical challenge, this paper proposes a Web service QoS prediction framework, called WSPred, to provide time-aware personalized QoS value prediction service for different service users. WSPred requires no additional invocation of Web services. Based on the past Web service usage experience from different service users, WSPred builds feature models and employs these models to make personalized QoS prediction for different users. The extensive experimental results show the effectiveness and efficiency of WSPred. Moreover, we publicly release our real-world time-aware Web service QoS dataset for future research, which makes our experiments verifiable and reproducible.
---
paper_title: Factorizing personalized Markov chains for next-basket recommendation
paper_content:
Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means a separate transition matrix is learned for each user - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaptation of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model, both learned with and without factorization.
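The FPMC scoring function can be sketched as below: a matrix-factorization term for general user taste plus an averaged factorized transition term from the items in the user's last basket. Factors are randomly initialized here purely for illustration; in the paper they are learned with a BPR-style pairwise ranking objective.

```python
import numpy as np

class FPMCScorer:
    """Sketch of FPMC scoring:
    x(u, i | basket) = <V_ui[u], V_iu[i]> + (1/|basket|) * sum_{l in basket} <V_il[i], V_li[l]>."""

    def __init__(self, n_users, n_items, k=8, seed=0):
        rng = np.random.default_rng(seed)
        self.V_ui = 0.1 * rng.standard_normal((n_users, k))   # user -> item taste factors
        self.V_iu = 0.1 * rng.standard_normal((n_items, k))
        self.V_il = 0.1 * rng.standard_normal((n_items, k))   # next item <- last items factors
        self.V_li = 0.1 * rng.standard_normal((n_items, k))

    def score(self, user, item, last_basket):
        mf = self.V_ui[user] @ self.V_iu[item]                 # general taste term
        fmc = np.mean([self.V_il[item] @ self.V_li[l] for l in last_basket])  # transition term
        return mf + fmc

scorer = FPMCScorer(n_users=5, n_items=10)
print(scorer.score(user=0, item=3, last_basket=[1, 7]))
```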
---
paper_title: gSCorr: modeling geo-social correlations for new check-ins on location-based social networks
paper_content:
Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures.
---
paper_title: Exploring temporal effects for location recommendation on location-based social networks
paper_content:
Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance.
---
paper_title: Personalized ranking metric embedding for next new POI recommendation
paper_content:
The rapid growth of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty of precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods.
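A rough sketch of a PRME-G-style scorer is given below: the candidate POI is scored by a weighted combination of its distance to the previous POI in a sequential-transition embedding space and its distance to the user in a preference embedding space, multiplied by a geographical-distance weight (lower is better). The combination weights, the geographical weighting form, and the random embeddings are assumptions; in practice the embeddings are learned by pairwise ranking.

```python
import numpy as np

def prme_g_score(user, prev_poi, cand_poi, seq_emb, user_emb, poi_pref_emb,
                 coords, alpha=0.2, tau=1.0):
    """Metric-embedding next-POI score sketch: weighted sum of (i) squared distance
    between previous and candidate POI in a sequential space and (ii) squared
    distance between user and candidate POI in a preference space, scaled by a
    geographical weight. Lower scores are better; the weighting is assumed."""
    d_seq = np.sum((seq_emb[prev_poi] - seq_emb[cand_poi]) ** 2)
    d_pref = np.sum((user_emb[user] - poi_pref_emb[cand_poi]) ** 2)
    geo = 1.0 + np.hypot(*(np.asarray(coords[prev_poi]) - np.asarray(coords[cand_poi]))) / tau
    return geo * (alpha * d_seq + (1 - alpha) * d_pref)

k = 8
rng = np.random.default_rng(0)
seq_emb = rng.standard_normal((4, k))               # toy embeddings for 4 POIs
poi_pref_emb = rng.standard_normal((4, k))
user_emb = rng.standard_normal((2, k))
coords = [(0, 0), (0.1, 0.1), (5, 5), (0.2, 0.0)]
scores = {p: prme_g_score(0, 1, p, seq_emb, user_emb, poi_pref_emb, coords) for p in [0, 2, 3]}
print(min(scores, key=scores.get))                  # recommend the lowest-score POI
```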
---
paper_title: Inferring a personalized next point-of-interest recommendation model with latent behavior patterns
paper_content:
In this paper, we address the problem of personalized next point-of-interest (POI) recommendation, which has become an important and very challenging task in location-based social networks (LBSNs) but has not been well studied yet. With the conjecture that, under different contextual scenarios, humans exhibit distinct mobility patterns, we attempt here to jointly model the next POI recommendation under the influence of the user's latent behavior pattern. We propose to adopt a third-rank tensor to model the successive check-in behaviors. By incorporating a softmax function to fuse the personalized Markov chain with the latent pattern, we furnish a Bayesian Personalized Ranking (BPR) approach and derive the optimization criterion accordingly. Expectation Maximization (EM) is then used to estimate the model parameters. Extensive experiments on two large-scale LBSN datasets demonstrate the significant improvements of our model over several state-of-the-art methods.
---
paper_title: A sentiment-enhanced personalized location recommendation system
paper_content:
Although online recommendation systems, such as those for movies or music, have been systematically studied in the past decade, location recommendation in Location-Based Social Networks (LBSNs) is not yet well investigated. In LBSNs, users can check in and leave tips commenting on a venue. These two heterogeneous data sources both describe users' preferences for venues. However, current research considers only users' check-in behavior in the location preference model; users' tips on venues have seldom been investigated. Moreover, while existing work mainly considers social influence in recommendation, we argue that considering venue similarity can further improve recommendation performance. In this research, we improve location recommendation by enhancing not only the user location preference model but also the recommendation algorithm. First, we propose a hybrid user location preference model that combines the preference extracted from check-ins with that extracted from text-based tips, which are processed using sentiment analysis techniques. Second, we develop a location-based social matrix factorization algorithm that takes both user social influence and venue similarity influence into account in location recommendation. Using two datasets extracted from the location-based social network Foursquare, experimental results demonstrate that the proposed hybrid preference model better characterizes user preference by maintaining preference consistency, and that the proposed algorithm outperforms the state-of-the-art methods.
---
paper_title: LCARS: A Spatial Item Recommender System
paper_content:
Newly emerging location-based and event-based social network services provide us with a new platform to understand users' preferences based on their activity history. A user can only visit a limited number of venues/events and most of them are within a limited distance range, so the user-item matrix is very sparse, which creates a big challenge to the traditional collaborative filtering-based recommender systems. The problem becomes even more challenging when people travel to a new city where they have no activity information. In this article, we propose LCARS, a location-content-aware recommender system that offers a particular user a set of venues (e.g., restaurants and shopping malls) or events (e.g., concerts and exhibitions) by giving consideration to both personal interest and local preference. This recommender system can facilitate people's travel not only near the area in which they live, but also in a city that is new to them. Specifically, LCARS consists of two components: offline modeling and online recommendation. The offline modeling part, called LCA-LDA, is designed to learn the interest of each individual user and the local preference of each individual city by capturing item cooccurrence patterns and exploiting item contents. The online recommendation part takes a querying user along with a querying city as input, and automatically combines the learned interest of the querying user and the local preference of the querying city to produce the top-k recommendations. To speed up the online process, a scalable query processing technique is developed by extending both the Threshold Algorithm (TA) and TA-approximation algorithm. We evaluate the performance of our recommender system on two real datasets, that is, DoubanEvent and Foursquare, and one large-scale synthetic dataset. The results show the superiority of LCARS in recommending spatial items for users, especially when traveling to new cities, in terms of both effectiveness and efficiency. Besides, the experimental analysis results also demonstrate the excellent interpretability of LCARS.
---
paper_title: Content-Aware Collaborative Filtering for Location Recommendation Based on Human Mobility Data
paper_content:
Location recommendation plays an essential role in helping people find places they are likely to enjoy. Though some recent research has studied how to recommend locations with the presence of social network and geographical information, few of them addressed the cold-start problem, specifically, recommending locations for new users. Because the visits to locations are often shared on social networks, rich semantics (e.g., tweets) that reveal a person's interests can be leveraged to tackle this challenge. A typical way is to feed them into traditional explicit-feedback content-aware recommendation methods (e.g., LibFM). As a user's negative preferences are not explicitly observable in most human mobility data, these methods need draw negative samples for better learning performance. However, prior studies have empirically shown that sampling-based methods don't perform as well as a method that considers all unvisited locations as negative but assigns them a lower confidence. To this end, we propose an Implicit-feedback based Content-aware Collaborative Filtering (ICCF) framework to incorporate semantic content and steer clear of negative sampling. For efficient parameter learning, we develop a scalable optimization algorithm, scaling linearly with the data size and the feature size. Furthermore, we offer a good explanation to ICCF, such that the semantic content is actually used to refine user similarity based on mobility. Finally, we evaluate ICCF with a large-scale LBSN dataset where users have profiles and text content. The results show that ICCF outperforms LibFM of the best configuration, and that user profiles and text content are not only effective at improving recommendation but also helpful for coping with the cold-start problem.
---
paper_title: Content-aware point of interest recommendation on location-based social networks
paper_content:
The rapid urban expansion has greatly extended the physical boundary of users' living area and developed a large number of POIs (points of interest). POI recommendation is a task that facilitates users' urban exploration and helps them filter uninteresting POIs for decision making. While existing work of POI recommendation on location-based social networks (LBSNs) discovers the spatial, temporal, and social patterns of user check-in behavior, the use of content information has not been systematically studied. The various types of content information available on LBSNs could be related to different aspects of a user's check-in action, providing a unique opportunity for POI recommendation. In this work, we study the content information on LBSNs w.r.t. POI properties, user interests, and sentiment indications. We model the three types of information under a unified POI recommendation framework with the consideration of their relationship to check-in actions. The experimental results exhibit the significance of content information in explaining user behavior, and demonstrate its power to improve POI recommendation performance on LBSNs.
---
paper_title: Social Topic Modeling for Point-of-Interest Recommendation in Location-Based Social Networks
paper_content:
In this paper, we address the problem of recommending Point-of-Interests (POIs) to users in a location-based social network. To the best of our knowledge, we are the first to propose the ST (Social Topic) model capturing both the social and topic aspects of user check-ins. We conduct experiments on real life data sets from Foursquare and Yelp. We evaluate the effectiveness of ST by evaluating the accuracy of top-k POI recommendation. The experimental results show that ST achieves better performance than the state-of-the-art models in the areas of social network-based recommender systems, and exploits the power of the location-based social network that has never been utilized before.
---
paper_title: Exploiting geographical influence for collaborative point-of-interest recommendation
paper_content:
In this paper, we aim to provide a point-of-interest (POI) recommendation service for the rapidly growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by a power-law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence, using a naive Bayes formulation. Furthermore, we propose a unified POI recommendation framework, which fuses user preference for a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.
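A hedged sketch of this geographical-influence component follows: fit a power law to the distances between a user's co-visited POIs and score an unvisited candidate with a naive-Bayes-style product over its distances to visited POIs. The toy coordinates, binning, and log-log least-squares fit are assumptions, not the paper's exact estimation procedure.

```python
import numpy as np

# Power-law geographical influence sketch: Pr(d) ~ a * d^b fitted on log-log scale.
rng = np.random.default_rng(2)
visited = rng.uniform(0, 10, (20, 2))          # a user's visited POIs (x, y in km, toy data)
candidates = rng.uniform(0, 10, (5, 2))        # unvisited candidate POIs

# 1) Pool pairwise distances among visited POIs and fit log Pr = log a + b * log d.
i, j = np.triu_indices(len(visited), k=1)
d_pairs = np.linalg.norm(visited[i] - visited[j], axis=1) + 1e-6
hist, edges = np.histogram(d_pairs, bins=10, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
b, log_a = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)

def geo_score(cand):
    """Naive-Bayes style log score: sum of fitted log power-law terms over visited POIs."""
    d = np.linalg.norm(visited - cand, axis=1) + 1e-6
    return np.sum(log_a + b * np.log(d))

scores = np.array([geo_score(c) for c in candidates])
print("best candidate:", int(np.argmax(scores)))
```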
---
paper_title: GeoSoCa: Exploiting Geographical, Social and Categorical Correlations for Point-of-Interest Recommendations
paper_content:
Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques.
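The kernel-estimation idea can be illustrated with the simplified sketch below, which builds a per-user two-dimensional Gaussian KDE and widens the kernel where check-ins are sparse. The pilot bandwidth, the local-bandwidth rule, and the data are assumptions and do not reproduce GeoSoCa's exact adaptive-bandwidth formula.

```python
import numpy as np

# Adaptive-bandwidth KDE sketch: sparser regions of a user's check-in history get wider kernels.
rng = np.random.default_rng(3)
checkins = rng.normal(0.0, 1.0, (30, 2))        # one user's check-in coordinates (toy data)

def gauss_kde(points, query, h):
    diff = points - query
    return np.mean(np.exp(-np.sum(diff ** 2, axis=1) / (2 * h ** 2)) / (2 * np.pi * h ** 2))

# 1) Pilot density with a fixed bandwidth.
h0 = 0.5
pilot = np.array([gauss_kde(checkins, p, h0) for p in checkins])

# 2) Local bandwidths: inflate the kernel where the pilot density is low.
lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** -0.5
h_local = h0 * lam

def adaptive_density(query):
    diff = checkins - query
    k = np.exp(-np.sum(diff ** 2, axis=1) / (2 * h_local ** 2)) / (2 * np.pi * h_local ** 2)
    return float(np.mean(k))

print(adaptive_density(np.array([0.3, -0.2])))   # geographical relevance of a candidate POI
```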
---
paper_title: Probabilistic factor models for web site recommendation
paper_content:
Due to the prevalence of personalization and information filtering applications, modeling users' interests on the Web has become increasingly important during the past few years. In this paper, aiming at providing accurate personalized Web site recommendations for Web users, we propose a novel probabilistic factor model based on dimensionality reduction techniques. We also extend the proposed method to collective probabilistic factor modeling, which further improves model performance by incorporating heterogeneous data sources. The proposed method is general, and can be applied to not only Web site recommendations, but also a wide range of Web applications, including behavioral targeting, sponsored search, etc. The experimental analysis on Web site recommendation shows that our method outperforms other traditional recommendation approaches. Moreover, the complexity analysis indicates that our approach can be applied to very large datasets since it scales linearly with the number of observations.
---
paper_title: Fused matrix factorization with geographical and social influence in location-based social networks
paper_content:
Recently, location-based social networks (LBSNs) such as Gowalla, Foursquare, Facebook, and Brightkite have attracted millions of users to share their social friendships and their locations via check-ins. The available check-in information makes it possible to mine users' preferences for locations and to provide location recommendations. Personalized point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., the multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence by modeling the probability of a user's check-in at a location with a Multi-center Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSN dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and significantly outperforms other state-of-the-art methods.
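As a rough illustration of the multi-center idea, the sketch below greedily groups a user's check-ins into geographic centers and scores a candidate POI with a frequency-weighted Gaussian mixture around those centers. The clustering radius, per-center variance, and normalization are simplifications of the paper's MGM, not its exact definition.

```python
import numpy as np

# Simplified multi-center Gaussian sketch for geographical influence.
rng = np.random.default_rng(4)
checkins = np.vstack([rng.normal([0, 0], 0.3, (25, 2)),     # "home" cluster (toy data)
                      rng.normal([5, 5], 0.3, (10, 2))])    # "work" cluster (toy data)

def find_centers(points, radius=1.0):
    """Greedily group check-ins: each center absorbs all points within the radius."""
    centers, counts, remaining = [], [], points.copy()
    while len(remaining) > 0:
        seed = remaining[0]
        near = np.linalg.norm(remaining - seed, axis=1) <= radius
        centers.append(remaining[near].mean(axis=0))
        counts.append(int(near.sum()))
        remaining = remaining[~near]
    return np.array(centers), np.array(counts, dtype=float)

centers, counts = find_centers(checkins)
weights = counts / counts.sum()
sigma = 0.5                                                  # assumed spread per center

def mgm_score(candidate):
    d2 = np.sum((centers - candidate) ** 2, axis=1)
    gauss = np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return float(np.dot(weights, gauss))

print(mgm_score(np.array([0.2, 0.1])), mgm_score(np.array([9.0, 9.0])))
```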
---
paper_title: Exploring millions of footprints in location sharing services
paper_content:
Location sharing services (LSS) like Foursquare, Gowalla, and Facebook Places support hundreds of millions of user-driven footprints (i.e., "checkins"). Those global-scale footprints provide a unique opportunity to study the social and temporal characteristics of how people use these services and to model patterns of human mobility, which are significant factors for the design of future mobile+location-based services, traffic forecasting, urban planning, as well as epidemiological models of disease spread. In this paper, we investigate 22 million checkins across 220,000 users and report a quantitative assessment of human mobility patterns by analyzing the spatial, temporal, social, and textual aspects associated with these footprints. We find that: (i) LSS users follow the “Levy Flight” mobility pattern and adopt periodic behaviors; (ii) While geographic and economic constraints affect mobility patterns, so does individual social status; and (iii) Content and sentiment-based analysis of posts associated with checkins can provide a rich source of context for better understanding how users engage with these services.
---
paper_title: Introduction to Recommender Systems Handbook
paper_content:
Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user. In this introductory chapter we briefly discuss basic RS ideas and concepts. Our main goal is to delineate, in a coherent and structured way, the chapters included in this handbook and to help the reader navigate the extremely rich and detailed content that the handbook offers.
---
paper_title: Probabilistic Matrix Factorization
paper_content:
Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system.
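A minimal sketch of the MAP view of this kind of model: Gaussian priors on the factor matrices reduce to L2 regularization, and the factors can be fit by SGD on the observed ratings. The toy data, dimensions, and hyperparameters below are assumptions, not the paper's setup.

```python
import numpy as np

# PMF-style matrix factorization fitted by SGD with L2 regularization (MAP estimation).
rng = np.random.default_rng(5)
n_users, n_items, k = 30, 40, 8
ratings = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6))
           for _ in range(300)]                      # (user, item, rating) triples (toy data)

U = rng.normal(0, 0.1, (n_users, k))
V = rng.normal(0, 0.1, (n_items, k))
lr, reg = 0.01, 0.05                                 # learning rate, prior strength (assumed)

for epoch in range(20):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]                        # prediction error on this observation
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

rmse = np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in ratings]))
print(f"training RMSE: {rmse:.3f}")
```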
---
paper_title: GeoMF: joint geographical modeling and matrix factorization for point-of-interest recommendation
paper_content:
Point-of-Interest (POI) recommendation has become an important means to help people discover attractive locations. However, extreme sparsity of user-POI matrices creates a severe challenge. To cope with this challenge, viewing mobility records on location-based social networks (LBSNs) as implicit feedback for POI recommendation, we first propose to exploit weighted matrix factorization for this task since it usually serves collaborative filtering with implicit feedback better. Besides, researchers have recently discovered a spatial clustering phenomenon in human mobility behavior on the LBSNs, i.e., individual visiting locations tend to cluster together, and also demonstrated its effectiveness in POI recommendation, thus we incorporate it into the factorization model. Particularly, we augment users' and POIs' latent factors in the factorization model with activity area vectors of users and influence area vectors of POIs, respectively. Based on such an augmented model, we not only capture the spatial clustering phenomenon in terms of two-dimensional kernel density estimation, but we also explain why the introduction of such a phenomenon into matrix factorization helps to deal with the challenge from matrix sparsity. We then evaluate the proposed algorithm on a large-scale LBSN dataset. The results indicate that weighted matrix factorization is superior to other forms of factorization models and that incorporating the spatial clustering phenomenon into matrix factorization improves recommendation performance.
---
paper_title: Exploring temporal effects for location recommendation on location-based social networks
paper_content:
Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance.
---
paper_title: Learning geographical preferences for point-of-interest recommendation
paper_content:
The problem of point-of-interest (POI) recommendation is to provide personalized recommendations of places of interest, such as restaurants, for mobile users. Due to its connection to location-based social networks (LBSNs), the decision process of a user choosing a POI is complex and can be influenced by various factors, such as user preferences, geographical influences, and user mobility behaviors. While there are some studies on POI recommendation, they lack an integrated analysis of the joint effect of multiple factors. To this end, in this paper we propose a novel geographical probabilistic factor analysis framework which strategically takes various factors into consideration. Specifically, this framework allows us to capture the geographical influences on a user's check-in behavior. Also, user mobility behaviors can be effectively exploited in the recommendation model. Moreover, the recommendation model can effectively make use of user check-in count data as implicit user feedback for modeling user preferences. Finally, experimental results on real-world LBSN data show that the proposed recommendation method outperforms state-of-the-art latent factor models by a significant margin.
---
paper_title: Location recommendation for location-based social networks
paper_content:
In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis of a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exist strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of the FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques achieves recommendation effectiveness comparable to the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, GM-FCF provides additional flexibility in the tradeoff between recommendation effectiveness and computational overhead.
---
paper_title: Location and Time Aware Social Collaborative Retrieval for New Successive Point-of-Interest Recommendation
paper_content:
In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task that regards the POI a user currently visits as a POI-related query and recommends new POIs the user has not visited before. While carefully designed methods have been proposed to solve this problem, they ignore the essence of the task, which involves a retrieval and a recommendation problem simultaneously, and fail to employ social relations or temporal information adequately to improve the results. To solve this problem, we propose a new model called the location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model, which leverages the weighted approximately ranked pairwise (WARP) loss to achieve better top-n ranking results, just as the new successive POI recommendation task needs. We conducted comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6% growth in Precision@5 and 47.3% improvement in Recall@5 over the best previous method.
---
paper_title: STELLAR: spatial-temporal latent ranking for successive point-of-interest recommendation
paper_content:
Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) has become a significant task since it helps users navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation focus only on modeling the correlation between POIs based on users' check-in sequences, and ignore an important fact: successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different times. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms the state-of-the-art successive POI recommendation model by about 20% in Precision@5 and Recall@5.
---
paper_title: Where You Like to Go Next: Successive Point-of-Interest Recommendation
paper_content:
Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide good recommendations, various research has been conducted in the literature. However, previous efforts mainly consider the check-ins as a whole and omit their temporal relation. They can only recommend POIs globally and cannot tell where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction. To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, by utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard noisy information to boost recommendation. Results on two real-world LBSN datasets demonstrate the merits of our proposed FPMC-LR.
---
paper_title: Personalized ranking metric embedding for next new POI recommendation
paper_content:
The rapid growth of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem, in which new POIs with respect to users' current location are to be recommended. The challenge lies in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods.
---
paper_title: Inferring a personalized next point-of-interest recommendation model with latent behavior patterns
paper_content:
In this paper, we address the problem of personalized next point-of-interest (POI) recommendation, which has become an important and very challenging task in location-based social networks (LBSNs) but has not been well studied yet. With the conjecture that, under different contextual scenarios, humans exhibit distinct mobility patterns, we attempt here to jointly model the next POI recommendation under the influence of the user's latent behavior pattern. We propose to adopt a third-rank tensor to model the successive check-in behaviors. By incorporating a softmax function to fuse the personalized Markov chain with the latent pattern, we furnish a Bayesian Personalized Ranking (BPR) approach and derive the optimization criterion accordingly. Expectation Maximization (EM) is then used to estimate the model parameters. Extensive experiments on two large-scale LBSN datasets demonstrate the significant improvements of our model over several state-of-the-art methods.
---
paper_title: Location-based and preference-aware recommendation using sparse geo-social networking data
paper_content:
The popularity of location-based social networks provides us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) user preferences, which are automatically learned from her location history, and 2) social opinions, which are mined from the location histories of local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge for traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different categories of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score for the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than the baselines, while efficiently providing location recommendations.
---
paper_title: TrustWalker: a random walk model for combining trust-based and item-based recommendation
paper_content:
Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.
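A hedged sketch of the random-walk idea follows: walk the trust network from the source user, return the target item's rating if the current user has rated it, and otherwise stop with a depth-dependent probability and fall back to the rating of a similar item. The stopping rule, toy network, ratings, and similarities are illustrative assumptions rather than the paper's exact formulas.

```python
import random

# TrustWalker-style sketch: average many random walks to predict a (user, item) rating.
random.seed(0)

trust = {"alice": ["bob", "carol"], "bob": ["dave"], "carol": ["dave"], "dave": []}
ratings = {"bob": {"item2": 4}, "carol": {"item1": 5}, "dave": {"item1": 3, "item2": 2}}
item_sim = {("item1", "item2"): 0.7}                  # symmetric item similarity (assumed)

def sim(a, b):
    return 1.0 if a == b else item_sim.get((a, b), item_sim.get((b, a), 0.0))

def single_walk(user, item, max_depth=6):
    for depth in range(max_depth):
        rated = ratings.get(user, {})
        if item in rated:
            return rated[item]                        # direct rating found on the walk
        stop_prob = depth / (depth + 1.0)             # stop more eagerly as the walk gets longer
        if rated and random.random() < stop_prob:
            best = max(rated, key=lambda j: sim(item, j))
            return rated[best]                        # fall back to the most similar rated item
        friends = trust.get(user, [])
        if not friends:
            return None
        user = random.choice(friends)                 # step to a randomly chosen trusted friend
    return None

walks = [single_walk("alice", "item1") for _ in range(500)]
walks = [w for w in walks if w is not None]
print(sum(walks) / len(walks))                        # aggregated prediction for (alice, item1)
```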
---
paper_title: Who, What, When, and Where: Multi-Dimensional Collaborative Recommendations Using Tensor Factorization on Sparse User-Generated Data
paper_content:
Given the abundance of online information available to mobile users, particularly tourists and weekend travelers, recommender systems that effectively filter this information and suggest interesting participatory opportunities will become increasingly important. Previous work has explored recommending interesting locations; however, users would also benefit from recommendations for activities in which to participate at those locations along with suitable times and days. Thus, systems that provide collaborative recommendations involving multiple dimensions such as location, activities and time would enhance the overall experience of users.The relationship among these dimensions can be modeled by higher-order matrices called tensors which are then solved by tensor factorization. However, these tensors can be extremely sparse. In this paper, we present a system and an approach for performing multi-dimensional collaborative recommendations for Who (User), What (Activity), When (Time) and Where (Location), using tensor factorization on sparse user-generated data. We formulate an objective function which simultaneously factorizes coupled tensors and matrices constructed from heterogeneous data sources. We evaluate our system and approach on large-scale real world data sets consisting of 588,000 Flickr photos collected from three major metro regions in USA. We compare our approach with several state-of-the-art baselines and demonstrate that it outperforms all of them.
---
paper_title: Exploiting geographical influence for collaborative point-of-interest recommendation
paper_content:
In this paper, we aim to provide a point-of-interest (POI) recommendation service for the rapidly growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by a power-law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence, using a naive Bayes formulation. Furthermore, we propose a unified POI recommendation framework, which fuses user preference for a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.
---
paper_title: Rank-GeoFM: A Ranking based Geographical Factorization Method for Point of Interest Recommendation
paper_content:
With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge. In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly. In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy.
---
paper_title: Collaborative filtering with temporal dynamics
paper_content:
Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.
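One common way to realize this kind of time-aware modeling is to let the biases drift with time rather than discard old data. The sketch below adds a per-user drift term to a plain factor model; the drift form (a sub-linear deviation from each user's mean rating day), the data, and the constants are simplified assumptions, not the paper's full model.

```python
import numpy as np

# Time-aware bias sketch: prediction = global mean + drifting user bias + item bias + factors.
rng = np.random.default_rng(6)
n_users, n_items, k = 20, 30, 8
mu = 3.5                                           # global mean rating (assumed)

U = rng.normal(0, 0.1, (n_users, k))
V = rng.normal(0, 0.1, (n_items, k))
b_u = np.zeros(n_users)                            # static user biases
b_i = np.zeros(n_items)                            # static item biases
alpha_u = rng.normal(0, 0.01, n_users)             # per-user drift rates
mean_day = rng.integers(0, 365, n_users).astype(float)

def dev(u, t, beta=0.4):
    """Signed, sub-linear deviation of day t from user u's mean rating day."""
    d = t - mean_day[u]
    return np.sign(d) * (abs(d) ** beta)

def predict(u, i, t):
    return mu + b_u[u] + alpha_u[u] * dev(u, t) + b_i[i] + U[u] @ V[i]

print(predict(u=2, i=5, t=200.0), predict(u=2, i=5, t=30.0))
```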
---
paper_title: Learning geographical preferences for point-of-interest recommendation
paper_content:
The problem of point-of-interest (POI) recommendation is to provide personalized recommendations of places of interest, such as restaurants, for mobile users. Due to its connection to location-based social networks (LBSNs), the decision process of a user choosing a POI is complex and can be influenced by various factors, such as user preferences, geographical influences, and user mobility behaviors. While there are some studies on POI recommendation, they lack an integrated analysis of the joint effect of multiple factors. To this end, in this paper we propose a novel geographical probabilistic factor analysis framework which strategically takes various factors into consideration. Specifically, this framework allows us to capture the geographical influences on a user's check-in behavior. Also, user mobility behaviors can be effectively exploited in the recommendation model. Moreover, the recommendation model can effectively make use of user check-in count data as implicit user feedback for modeling user preferences. Finally, experimental results on real-world LBSN data show that the proposed recommendation method outperforms state-of-the-art latent factor models by a significant margin.
---
paper_title: Exploring temporal effects for location recommendation on location-based social networks
paper_content:
Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance.
---
paper_title: Location-based and preference-aware recommendation using sparse geo-social networking data
paper_content:
The popularity of location-based social networks provides us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) user preferences, which are automatically learned from her location history, and 2) social opinions, which are mined from the location histories of local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge for traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different categories of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score for the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than the baselines, while efficiently providing location recommendations.
---
paper_title: Location and Time Aware Social Collaborative Retrieval for New Successive Point-of-Interest Recommendation
paper_content:
In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task that regards the POI a user currently visits as a POI-related query and recommends new POIs the user has not visited before. While carefully designed methods have been proposed to solve this problem, they ignore the essence of the task, which involves a retrieval and a recommendation problem simultaneously, and fail to employ social relations or temporal information adequately to improve the results. To solve this problem, we propose a new model called the location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model, which leverages the weighted approximately ranked pairwise (WARP) loss to achieve better top-n ranking results, just as the new successive POI recommendation task needs. We conducted comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6% growth in Precision@5 and 47.3% improvement in Recall@5 over the best previous method.
---
paper_title: STELLAR: spatial-temporal latent ranking for successive point-of-interest recommendation
paper_content:
Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) has become a significant task since it helps users navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation focus only on modeling the correlation between POIs based on users' check-in sequences, and ignore an important fact: successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different times. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms the state-of-the-art successive POI recommendation model by about 20% in Precision@5 and Recall@5.
---
paper_title: Fused matrix factorization with geographical and social influence in location-based social networks
paper_content:
Recently, location-based social networks (LBSNs) such as Gowalla, Foursquare, Facebook, and Brightkite have attracted millions of users to share their social friendships and their locations via check-ins. The available check-in information makes it possible to mine users' preferences for locations and to provide location recommendations. Personalized point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., the multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence by modeling the probability of a user's check-in at a location with a Multi-center Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSN dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and significantly outperforms other state-of-the-art methods.
---
paper_title: Personalized ranking metric embedding for next new POI recommendation
paper_content:
The rapid growth of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem, in which new POIs with respect to users' current location are to be recommended. The challenge lies in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods.
---
paper_title: Friendship and mobility: user movement in location-based social networks
paper_content:
Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-range travel is periodic both spatially and temporally and not affected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility.
---
paper_title: Exploring social-historical ties on location-based social networks
paper_content:
Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location-related services that allow users to “check in” at geographical locations and share such experiences with their friends. Millions of check-in records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study users' social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore users' check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in users' check-in behavior. In particular, our model captures the properties of users' check-in history in the form of a power-law distribution and a short-term effect, and helps in explaining users' check-in behavior. The experimental results on a real-world LBSN demonstrate that our approach properly models users' check-ins and shows how social and historical ties can help location prediction.
---
paper_title: Fused matrix factorization with geographical and social influence in location-based social networks
paper_content:
Recently, location-based social networks (LBSNs) such as Gowalla, Foursquare, Facebook, and Brightkite have attracted millions of users to share their social friendships and their locations via check-ins. The available check-in information makes it possible to mine users' preferences for locations and to provide location recommendations. Personalized point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., the multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence by modeling the probability of a user's check-in at a location with a Multi-center Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSN dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and significantly outperforms other state-of-the-art methods.
---
paper_title: gSCorr: modeling geo-social correlations for new check-ins on location-based social networks
paper_content:
Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information in online LBSNs provides an unprecedented opportunity to study human movement from users' socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; since users can check in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model that captures social correlations on LBSNs by considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures.
---
paper_title: Location-based and preference-aware recommendation using sparse geo-social networking data
paper_content:
The popularity of location-based social networks provides us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) user preferences, which are automatically learned from her location history, and 2) social opinions, which are mined from the location histories of local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge for traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different categories of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score for the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than the baselines, while efficiently providing location recommendations.
---
paper_title: Location recommendation for location-based social networks
paper_content:
In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exist strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of the FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in the tradeoff between recommendation effectiveness and computational overhead.
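To make the friend-based collaborative filtering (FCF) idea above concrete, the following is a minimal illustrative sketch (not the authors' implementation): a candidate place is scored for a user by aggregating the ratings of the user's social friends, weighted by a pluggable similarity function (in GM-FCF this weight would additionally reflect geospatial distance). All identifiers below are hypothetical.

    def fcf_score(user, place, friends, ratings, similarity):
        """Score `place` for `user` from friends' ratings (illustrative sketch).
        friends: dict user -> list of friends; ratings: dict (user, place) -> rating;
        similarity: callable returning a tie-strength weight for a pair of users."""
        num, den = 0.0, 0.0
        for f in friends.get(user, []):
            r = ratings.get((f, place))
            if r is None:
                continue                       # friend has no rating/check-in for this place
            w = similarity(user, f)            # e.g., social-tie or geo-distance based weight
            num += w * r
            den += abs(w)
        return num / den if den > 0 else 0.0

    # A recommender would rank the unvisited places of `user` by this score and return the top-k.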
---
paper_title: Learning geographical preferences for point-of-interest recommendation
paper_content:
The problem of point of interest (POI) recommendation is to provide personalized recommendations of places of interest, such as restaurants, for mobile users. Due to its complexity and its connection to location-based social networks (LBSNs), the decision process by which a user chooses a POI is complex and can be influenced by various factors, such as user preferences, geographical influences, and user mobility behaviors. While there are some studies on POI recommendations, they lack an integrated analysis of the joint effect of multiple factors. To this end, in this paper, we propose a novel geographical probabilistic factor analysis framework which strategically takes various factors into consideration. Specifically, this framework allows us to capture the geographical influences on a user's check-in behavior. Also, the user mobility behaviors can be effectively exploited in the recommendation model. Moreover, the recommendation model can effectively make use of user check-in count data as implicit user feedback for modeling user preferences. Finally, experimental results on real-world LBSNs data show that the proposed recommendation method outperforms state-of-the-art latent factor models by a significant margin.
---
paper_title: A probabilistic interpretation of precision, recall and F-score, with implication for evaluation
paper_content:
We address the problems of 1/ assessing the confidence of the standard point estimates, precision, recall and F-score, and 2/ comparing the results, in terms of precision, recall and F-score, obtained using two different methods. To do so, we use a probabilistic setting which allows us to obtain posterior distributions on these performance indicators, rather than point estimates. This framework is applied to the case where different methods are run on different datasets from the same source, as well as the standard situation where competing results are obtained on the same data.
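For reference, the standard point estimates discussed above are computed from true-positive, false-positive and false-negative counts as shown below; this is the textbook definition only, not the probabilistic posterior treatment proposed in the paper.

    def precision_recall_f(tp, fp, fn, beta=1.0):
        """Standard point estimates; beta trades off recall vs. precision in the F-score."""
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        b2 = beta * beta
        f = ((1 + b2) * precision * recall / (b2 * precision + recall)
             if (precision + recall) else 0.0)
        return precision, recall, f

    # Example: 80 true positives, 20 false positives, 40 false negatives
    # -> precision 0.80, recall ~0.667, F1 ~0.727
    print(precision_recall_f(80, 20, 40))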
---
paper_title: The relationship between Precision-Recall and ROC curves
paper_content:
Receiver Operator Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. We show that a deep connection exists between ROC space and PR space, such that a curve dominates in ROC space if and only if it dominates in PR space. A corollary is the notion of an achievable PR curve, which has properties much like the convex hull in ROC space; we show an efficient algorithm for computing this curve. Finally, we also note differences in the two types of curves are significant for algorithm design. For example, in PR space it is incorrect to linearly interpolate between points. Furthermore, algorithms that optimize the area under the ROC curve are not guaranteed to optimize the area under the PR curve.
---
paper_title: Ranking with ordered weighted pairwise classification
paper_content:
In ranking with the pairwise classification approach, the loss associated to a predicted ranked list is the mean of the pairwise classification losses. This loss is inadequate for tasks like information retrieval where we prefer ranked lists with high precision on the top of the list. We propose to optimize a larger class of loss functions for ranking, based on an ordered weighted average (OWA) (Yager, 1988) of the classification losses. Convex OWA aggregation operators range from the max to the mean depending on their weights, and can be used to focus on the top ranked elements as they give more weight to the largest losses. When aggregating hinge losses, the optimization problem is similar to the SVM for interdependent output spaces. Moreover, we show that OWA aggregates of margin-based classification losses have good generalization properties. Experiments on the Letor 3.0 benchmark dataset for information retrieval validate our approach.
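The ordered weighted average (OWA) aggregation described above can be sketched in a few lines: the pairwise losses are sorted in decreasing order and combined with non-increasing weights, so uniform weights recover the mean and a single unit weight on the first position recovers the max. This is an illustrative sketch of the aggregation step only, not the paper's full SVM-style optimization.

    def owa_aggregate(pairwise_losses, weights):
        """OWA of pairwise classification losses: weights should be non-negative,
        non-increasing and sum to 1; larger losses (top-of-list errors) get more weight."""
        ordered = sorted(pairwise_losses, reverse=True)
        return sum(w * l for w, l in zip(weights, ordered))

    losses = [0.1, 0.9, 0.4]
    print(owa_aggregate(losses, [1/3, 1/3, 1/3]))   # uniform weights -> mean (~0.467)
    print(owa_aggregate(losses, [1.0, 0.0, 0.0]))   # all weight on the largest loss -> max (0.9)
    print(owa_aggregate(losses, [0.6, 0.3, 0.1]))   # biased towards the largest losses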
---
paper_title: Large scale image annotation: learning to rank with joint word-image embeddings
paper_content:
Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced "sibling" precision metric, where our method also obtains excellent results.
---
paper_title: Local collaborative ranking
paper_content:
Personalized recommendation systems are used in a wide variety of applications such as electronic commerce, social networks, web search, and more. Collaborative filtering approaches to recommendation systems typically assume that the rating matrix (e.g., movie ratings by viewers) is low-rank. In this paper, we examine an alternative approach in which the rating matrix is locally low-rank. Concretely, we assume that the rating matrix is low-rank within certain neighborhoods of the metric space defined by (user, item) pairs. We combine a recent approach for local low-rank approximation based on the Frobenius norm with a general empirical risk minimization for ranking losses. Our experiments indicate that the combination of a mixture of local low-rank matrices each of which was trained to minimize a ranking loss outperforms many of the currently used state-of-the-art recommendation systems. Moreover, our method is easy to parallelize, making it a viable approach for large scale real-world rank-based recommendation systems.
---
paper_title: Rank-GeoFM: A Ranking based Geographical Factorization Method for Point of Interest Recommendation
paper_content:
With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge. In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly. In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy.
---
paper_title: STELLAR: spatial-temporal latent ranking for successive point-of-interest recommendation
paper_content:
Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) has become a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore the important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different times. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation models by about 20% in terms of precision and recall.
---
paper_title: Learning to rank: from pairwise approach to listwise approach
paper_content:
The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on list of objects. The paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. Specifically it introduces two probability models, respectively referred to as permutation probability and top k probability, to define a listwise loss function for learning. Neural Network and Gradient Descent are then employed as model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach.
---
paper_title: Fused matrix factorization with geographical and social influence in location-based social networks
paper_content:
Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly.
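A hedged sketch of the Multi-center Gaussian Model (MGM) idea above: a user's probability of checking in at a location is modeled as a mixture of Gaussians placed at that user's activity centers. The center-discovery procedure and the exact normalization used in the paper are not reproduced here; the center list, mixing weights and bandwidths below are illustrative assumptions.

    import math

    def mgm_checkin_probability(loc, centers):
        """Mixture-of-Gaussians check-in probability for a 2-D location `loc`.
        `centers` is a per-user list of ((cx, cy), weight, sigma) tuples; weights sum to 1.
        Illustrative sketch only -- the paper's MGM weighting differs in detail."""
        p = 0.0
        for (cx, cy), weight, sigma in centers:
            d2 = (loc[0] - cx) ** 2 + (loc[1] - cy) ** 2
            density = math.exp(-d2 / (2.0 * sigma ** 2)) / (2.0 * math.pi * sigma ** 2)
            p += weight * density
        return p

    # Example: a user with two activity centers (e.g., home and office)
    centers = [((0.0, 0.0), 0.7, 1.0), ((5.0, 5.0), 0.3, 2.0)]
    print(mgm_checkin_probability((0.5, 0.2), centers))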
---
paper_title: BPR: Bayesian Personalized Ranking from Implicit Feedback
paper_content:
Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.
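The following is a minimal sketch of one stochastic gradient step of BPR-Opt for a matrix-factorization model, as described above: for a sampled triple (user u, observed item i, unobserved item j), the step ascends ln sigma(x_ui - x_uj) minus an L2 penalty. The learning rate and regularization values are illustrative, and the bootstrap sampling of triples is omitted.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def bpr_step(P, Q, u, i, j, lr=0.05, reg=0.01):
        """One BPR-MF update for triple (u, i, j); P[u], Q[i], Q[j] are latent-factor lists.
        Sketch of the optimization criterion only, not the reference implementation."""
        x_uij = sum(pu * (qi - qj) for pu, qi, qj in zip(P[u], Q[i], Q[j]))
        g = 1.0 - sigmoid(x_uij)                 # d/dx ln(sigmoid(x)) evaluated at x_uij
        for k in range(len(P[u])):
            pu, qi, qj = P[u][k], Q[i][k], Q[j][k]
            P[u][k] += lr * (g * (qi - qj) - reg * pu)
            Q[i][k] += lr * (g * pu - reg * qi)
            Q[j][k] += lr * (-g * pu - reg * qj)

    # Training repeatedly samples (u, i, j) with i observed and j unobserved for user u.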
---
paper_title: Exploring temporal effects for location recommendation on location-based social networks
paper_content:
Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance.
---
paper_title: Learning to rank for information retrieval
paper_content:
This tutorial is concerned with a comprehensive introduction to the research area of learning to rank for information retrieval. In the first part of the tutorial, we will introduce three major approaches to learning to rank, i.e., the pointwise, pairwise, and listwise approaches, analyze the relationship between the loss functions used in these approaches and the widely-used IR evaluation measures, evaluate the performance of these approaches on the LETOR benchmark datasets, and demonstrate how to use these approaches to solve real ranking applications. In the second part of the tutorial, we will discuss some advanced topics regarding learning to rank, such as relational ranking, diverse ranking, semi-supervised ranking, transfer ranking, query-dependent ranking, and training data preprocessing. In the third part, we will briefly mention the recent advances on statistical learning theory for ranking, which explain the generalization ability and statistical consistency of different ranking methods. In the last part, we will conclude the tutorial and show several future research directions.
---
paper_title: Content-aware point of interest recommendation on location-based social networks
paper_content:
The rapid urban expansion has greatly extended the physical boundary of users' living area and developed a large number of POIs (points of interest). POI recommendation is a task that facilitates users' urban exploration and helps them filter uninteresting POIs for decision making. While existing work of POI recommendation on location-based social networks (LBSNs) discovers the spatial, temporal, and social patterns of user check-in behavior, the use of content information has not been systematically studied. The various types of content information available on LBSNs could be related to different aspects of a user's check-in action, providing a unique opportunity for POI recommendation. In this work, we study the content information on LBSNs w.r.t. POI properties, user interests, and sentiment indications. We model the three types of information under a unified POI recommendation framework with the consideration of their relationship to check-in actions. The experimental results exhibit the significance of content information in explaining user behavior, and demonstrate its power to improve POI recommendation performance on LBSNs.
---
paper_title: Personalized ranking metric embedding for next new POI recommendation
paper_content:
The rapid growth of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids the drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods.
---
paper_title: Latent Collaborative Retrieval
paper_content:
Retrieval tasks typically require a ranking of items given a query. Collaborative filtering tasks, on the other hand, learn to model user's preferences over items. In this paper we study the joint problem of recommending items to a user with respect to a given query, which is a surprisingly common task. This setup differs from the standard collaborative filtering one in that we are given a query x user x item tensor for training instead of the more traditional user x item matrix. Compared to document retrieval we do have a query, but we may or may not have content features (we will consider both cases) and we can also take account of the user's profile. We introduce a factorized model for this new task that optimizes the top-ranked items returned for the given query and user. We report empirical results where it outperforms several baselines.
---
| Title: A Survey of Point-of-interest Recommendation in Location-based Social Networks
Section 1: Introduction
Description 1: Write about the prevalence of LBSNs, significance of POI recommendations, and the challenges associated with them.
Section 2: Problem Definition
Description 2: Define the fundamental concepts and formalize the problem of POI recommendation in LBSNs.
Section 3: Taxonomy by Influential Factors
Description 3: Discuss the categorization of POI recommendation systems based on influential factors like geographical influence, social influence, temporal influence, and content indication.
Section 4: Geographical Influence
Description 4: Explain the geographical factors affecting POI recommendations and how different models capture these influences.
Section 5: Social Influence
Description 5: Elaborate on ways to incorporate social influence in POI recommendations, including memory-based and model-based approaches.
Section 6: Temporal Influence
Description 6: Detail the role of temporal factors in POI recommendations including periodicity, consecutiveness, and non-uniformness.
Section 7: Content Indication
Description 7: Describe the use of content like user comments and ratings to enhance POI recommendations.
Section 8: Taxonomy by Methodology
Description 8: Categorize POI recommendation systems based on the methodologies used, including fused models and joint models.
Section 9: Representative Work for MF-based Joint Model
Description 9: Present examples of matrix factorization-based joint models that incorporate temporal and geographical influences for POI recommendations.
Section 10: Representative Work for Generative Graphical Model
Description 10: Discuss representative work involving generative graphical models and their applications in POI recommendations.
Section 11: Taxonomy by Task
Description 11: Classify POI recommendation tasks into general POI recommendation and successive POI recommendation and describe their characteristics.
Section 12: General POI Recommendation
Description 12: Explain methods for general POI recommendation that suggest top-N POIs for users.
Section 13: Successive POI Recommendation
Description 13: Focus on methods for successive POI recommendation that take into account users' most recent check-ins to promptly provide recommendations.
Section 14: Performance Evaluation
Description 14: Discuss the data sources and metrics used to evaluate the performance of POI recommendation systems.
Section 15: Trends and New Directions
Description 15: Highlight the emerging trends and possible future directions in POI recommendation research.
Section 16: Conclusion
Description 16: Summarize the key findings and contributions of the survey, and provide final remarks on the state of POI recommendation in LBSNs. |
A Comprehensive Survey on Anomaly-Based Intrusion Detection in MANET | 7 | ---
paper_title: System Health and Intrusion Monitoring Using a Hierarchy of Constraints
paper_content:
This paper presents a new approach to run-time security monitoring that can detect system abnormalities including attacks, faults, or operational errors. The approach, System Health and Intrusion Monitoring (SHIM), employs a hierarchy of constraints to describe correct operation of a system at various levels of abstraction. The constraints capture static behavior, dynamic behavior, and time-critical behavior of a system. A system in execution will be monitored for violation of the constraints, which may indicate potential security problems in the system. SHIM is based on specification-based intrusion detection, but it attempts to provide a systematic framework for developing the specifications/ constraints. SHIM does not detect directly the intrusive actions in an attack, but their manifestations as violations of constraints. In this paper, we describe the constraint model and the methodology for developing the constraints. In addition, we present preliminary results on the constraints developed for host programs and network protocols. By bounding the behavior of various system components at different levels of abstraction, SHIM has a high chance of detecting different types of attacks and their variants.
---
paper_title: Real-time intrusion detection for ad hoc networks
paper_content:
A mobile ad hoc network is a collection of nodes that are connected through a wireless medium and form rapidly changing topologies. The widely accepted existing routing protocols designed to accommodate the needs of such self-organised networks do not address possible threats aimed at the disruption of the protocol itself. The assumption of a trusted environment is not one that can be realistically expected; hence several efforts have been made towards the design of a secure routing protocol for ad hoc networks. The main problems with this approach are that it requires changes to the underlying routing protocol and that manual configuration of the initial security associations cannot be completely avoided. We propose RIDAN, a novel architecture that uses knowledge-based intrusion detection techniques to detect, in real-time, attacks that an adversary can perform against the routing fabric of a mobile ad hoc network. Our system is designed to take countermeasures minimising the effectiveness of an attack and maintaining the performance of the network within acceptable limits. RIDAN does not introduce any changes to the underlying routing protocol since it operates as an intermediate component between the network traffic and the utilised protocol with minimum processing overhead. We have developed a prototype that was evaluated in AODV-enabled networks using the ns-2 network simulator.
---
paper_title: Comparative study of Distributed Intrusion Detection in Ad-hoc Networks
paper_content:
In recent years, ad hoc networks have been widely used because of their mobility and open-architecture nature, but new technology always comes with its own set of problems. The security of ad hoc networks has been an area of widespread research in recent years, and some unique characteristics of ad hoc networks themselves pose an immense dilemma for security. In this paper we present a study of the characteristics of ad hoc networks, how they complicate ad hoc network security, attacks on ad hoc networks, and a brief description of some existing intrusion detection systems. We also justify why distributed intrusion detection is better suited to ad hoc networks through a comparative study of existing intrusion detection systems for ad hoc networks.
---
paper_title: Multivariate Statistical Analysis of Audit Trails for Host-Based Intrusion Detection
paper_content:
Intrusion detection complements prevention mechanisms, such as firewalls, cryptography, and authentication, to capture intrusions into an information system while they are acting on the information system. Our study investigates a multivariate quality control technique to detect intrusions by building a long-term profile of normal activities in information systems (norm profile) and using the norm profile to detect anomalies. The multivariate quality control technique is based on Hotelling's T/sup 2/ test that detects both counterrelationship anomalies and mean-shift anomalies. The performance of the Hotelling's T/sup 2/ test is examined on two sets of computer audit data: a small data set and a large multiday data set. Both data sets contain sessions of normal and intrusive activities. For the small data set, the Hotelling's T/sup 2/ test signals all the intrusion sessions and produces no false alarms for the normal sessions. For the large data set, the Hotelling's T/sup 2/ test signals 92 percent of the intrusion sessions while producing no false alarms for the normal sessions. The performance of the Hotelling's T/sup 2/ test is also compared with the performance of a more scalable multivariate technique-a chi-squared distance test.
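A compact sketch of the statistic underlying the technique above: the T^2 (squared Mahalanobis) distance of a new audit-data vector from a norm profile, i.e., the mean vector and covariance matrix estimated from attack-free observations. The decision threshold and the chi-squared distance variant are omitted; the toy data are illustrative.

    import numpy as np

    def hotelling_t2(x, norm_profile):
        """T^2 of observation `x` (length-p vector) against `norm_profile`
        (n x p matrix of normal audit vectors). Large values indicate mean-shift
        or counter-relationship anomalies (illustrative sketch)."""
        mu = norm_profile.mean(axis=0)
        S = np.cov(norm_profile, rowvar=False)     # sample covariance of the norm profile
        diff = x - mu
        return float(diff @ np.linalg.inv(S) @ diff)

    # Example with synthetic 4-dimensional audit vectors
    rng = np.random.default_rng(0)
    normal = rng.normal(size=(200, 4))
    print(hotelling_t2(np.array([0.1, -0.2, 0.0, 0.3]), normal))   # small: looks normal
    print(hotelling_t2(np.array([4.0, 4.0, 4.0, 4.0]), normal))    # large: anomalous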
---
paper_title: Specification-based anomaly detection: a new approach for detecting network intrusions
paper_content:
Unlike signature or misuse based intrusion detection techniques, anomaly detection is capable of detecting novel attacks. However, the use of anomaly detection in practice is hampered by a high rate of false alarms. Specification-based techniques have been shown to produce a low rate of false alarms, but are not as effective as anomaly detection in detecting novel attacks, especially when it comes to network probing and denial-of-service attacks. This paper presents a new approach that combines specification-based and anomaly-based intrusion detection, mitigating the weaknesses of the two approaches while magnifying their strengths. Our approach begins with state-machine specifications of network protocols, and augments these state machines with information about statistics that need to be maintained to detect anomalies. We present a specification language in which all of this information can be captured in a succinct manner. We demonstrate the effectiveness of the approach on the 1999 Lincoln Labs intrusion detection evaluation data, where we are able to detect all of the probing and denial-of-service attacks with a low rate of false alarms (less than 10 per day). Whereas feature selection was a crucial step that required a great deal of expertise and insight in the case of previous anomaly detection approaches, we show that the use of protocol specifications in our approach simplifies this problem. Moreover, the machine learning component of our approach is robust enough to operate without human supervision, and fast enough that no sampling techniques need to be employed. As further evidence of effectiveness, we present results of applying our approach to detect stealthy email viruses in an intranet environment.
---
paper_title: Stochastic protocol modeling for anomaly based network intrusion detection
paper_content:
A new method for detecting anomalies in the usage of protocols in computer networks is presented. The proposed methodology is applied to TCP and carried out in two steps. First, a quantization of the TCP header space is accomplished, so that a unique symbol is associated with each TCP segment. TCP-based network traffic is thus captured, quantized and represented by a sequence of symbols. The second step in our approach is the modeling of these sequences by means of a Markov chain. The analysis of the model obtained for diverse TCP sources reveals that it adequately captures the essence of the protocol dynamics. Once the model is built, it is possible to use it as a representation of the normal usage of the protocol, so that deviations from the behavior provided by the model can be considered as a sign of protocol misuse.
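A hedged sketch of the two-step scheme above: after TCP segments have been quantized to symbols (quantization not shown), a first-order Markov chain is estimated from attack-free traffic, and new symbol sequences are scored by their average transition log-likelihood; scores far below those of normal traffic suggest protocol misuse. The smoothing constant and toy symbols are illustrative assumptions.

    import math
    from collections import defaultdict

    def train_markov(sequences, alpha=1.0):
        """Estimate first-order transition probabilities from normal symbol sequences,
        with additive smoothing `alpha` (illustrative sketch)."""
        counts = defaultdict(lambda: defaultdict(float))
        symbols = set()
        for seq in sequences:
            symbols.update(seq)
            for a, b in zip(seq, seq[1:]):
                counts[a][b] += 1.0
        model = {}
        for a in symbols:
            total = sum(counts[a].values()) + alpha * len(symbols)
            model[a] = {b: (counts[a][b] + alpha) / total for b in symbols}
        return model

    def avg_log_likelihood(model, seq, floor=1e-12):
        """Average per-transition log-likelihood of a sequence under the model."""
        ll, n = 0.0, 0
        for a, b in zip(seq, seq[1:]):
            ll += math.log(model.get(a, {}).get(b, floor))
            n += 1
        return ll / n if n else 0.0

    # Toy quantized symbols: S (SYN-like), A (ACK-like), D (data), F (FIN-like)
    model = train_markov([list("SADDDF"), list("SADDF"), list("SADDDDF")])
    print(avg_log_likelihood(model, list("SADDF")))    # close to normal traffic
    print(avg_log_likelihood(model, list("SFFFSS")))   # much lower: flagged as anomalous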
---
paper_title: Detection of Web-based attacks through Markovian protocol parsing
paper_content:
This paper presents a novel approach based on the monitoring of incoming HTTP requests to detect attacks against Web servers. The detection is accomplished through a Markovian model whose states and transitions between them are determined from the specification of the HTTP protocol while the probabilities of the symbols associated to the Markovian source are obtained during a training stage according to a set of attack-free requests for the target server. The experiments carried out show a high detection capability with low false positive rates at reasonable computation requirements.
---
paper_title: Host-Based Intrusion Detection Using Dynamic and Static Behavioral Models
paper_content:
Intrusion detection has emerged as an important approach to network security. In this paper, we adopt an anomaly detection approach by detecting possible intrusions based on program or user profiles built from normal usage data. In particular, program profiles based on Unix system calls and user profiles based on Unix shell commands are modeled using two different types of behavioral models for data mining. The dynamic modeling approach is based on hidden Markov models (HMM) and the principle of maximum likelihood, while the static modeling approach is based on event occurrence frequency distributions and the principle of minimum cross entropy. The novelty detection approach is adopted to estimate the model parameters using normal training data only, as opposed to the classification approach which has to use both normal and intrusion data for training. To determine whether or not a certain behavior is similar enough to the normal model and hence should be classified as normal, we use a scheme that can be justified from the perspective of hypothesis testing. Our experimental results show that the dynamic modeling approach is better than the static modeling approach for the system call datasets, while the dynamic modeling approach is worse for the shell command datasets. Moreover, the static modeling approach is similar in performance to instance-based learning reported previously by others for the same shell command database but with much higher computational and storage requirements than our method.
---
paper_title: Use of K-Nearest Neighbor Classifier for Intrusion Detection
paper_content:
A new approach, based on the k-Nearest Neighbor (kNN) classifier, is used to classify program behavior as normal or intrusive. Program behavior, in turn, is represented by frequencies of system calls. Each system call is treated as a word and the collection of system calls over each program execution as a document. These documents are then classified using kNN classifier, a popular method in text categorization. This method seems to offer some computational advantages over those that seek to characterize program behavior with short sequences of system calls and generate individual program profiles. Preliminary experiments with 1998 DARPA BSM audit data show that the kNN classifier can effectively detect intrusive attacks and achieve a low false positive rate.
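The approach above treats each program execution as a "document" of system-call "words"; a minimal sketch of that idea is given below, classifying a new trace by a k-nearest-neighbor vote over call-frequency vectors under cosine similarity. The toy traces, labels, and the choice of cosine similarity are illustrative assumptions, not the paper's exact procedure.

    import math
    from collections import Counter

    def cosine(a, b):
        dot = sum(a[k] * b.get(k, 0) for k in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def knn_classify(trace, training, k=3):
        """Label a system-call trace 'normal' or 'intrusive' by majority vote of its
        k nearest training traces (frequency vectors, cosine similarity)."""
        q = Counter(trace)
        ranked = sorted(training, key=lambda ex: cosine(q, Counter(ex[0])), reverse=True)
        votes = [label for _, label in ranked[:k]]
        return max(set(votes), key=votes.count)

    training = [
        (["open", "read", "read", "close"], "normal"),
        (["open", "read", "write", "close"], "normal"),
        (["open", "mmap", "read", "close"], "normal"),
        (["execve", "chmod", "execve", "socket"], "intrusive"),
        (["socket", "connect", "execve", "chmod"], "intrusive"),
    ]
    print(knn_classify(["open", "read", "close"], training))   # expected: 'normal'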
---
paper_title: Bayesian event classification for intrusion detection
paper_content:
Intrusion detection systems (IDSs) attempt to identify attacks by comparing collected data to predefined signatures known to be malicious (misuse-based IDSs) or to a model of legal behavior (anomaly-based IDSs). Anomaly-based approaches have the advantage of being able to detect previously unknown attacks, but they suffer from the difficulty of building robust models of acceptable behavior, which may result in a large number of false alarms. Almost all current anomaly-based intrusion detection systems classify an input event as normal or anomalous by analyzing its features, utilizing a number of different models. A decision for an input event is made by aggregating the results of all employed models. We have identified two reasons for the large number of false alarms, caused by incorrect classification of events in current systems. One is the simplistic aggregation of model outputs in the decision phase. Often, only the sum of the model results is calculated and compared to a threshold. The other reason is the lack of integration of additional information into the decision process. This additional information can be related to the models, such as the confidence in a model's output, or can be extracted from external sources. To mitigate these shortcomings, we propose an event classification scheme that is based on Bayesian networks. Bayesian networks improve the aggregation of different model outputs and allow one to seamlessly incorporate additional information. Experimental results show that the accuracy of the event classification process is significantly improved using our proposed approach.
---
paper_title: Learning nonstationary models of normal network traffic for detecting novel attacks
paper_content:
Traditional intrusion detection systems (IDS) detect attacks by comparing current behavior to signatures of known attacks. One main drawback is the inability of detecting new attacks which do not have known signatures. In this paper we propose a learning algorithm that constructs models of normal behavior from attack-free network traffic. Behavior that deviates from the learned normal model signals possible novel attacks. Our IDS is unique in two respects. First, it is nonstationary, modeling probabilities based on the time since the last event rather than on average rate. This prevents alarm floods. Second, the IDS learns protocol vocabularies (at the data link through application layers) in order to detect unknown attacks that attempt to exploit implementation errors in poorly tested features of the target software. On the 1999 DARPA IDS evaluation data set [9], we detect 70 of 180 attacks (with 100 false alarms), about evenly divided between user behavioral anomalies (IP addresses and ports, as modeled by most other systems) and protocol anomalies. Because our methods are unconventional there is a significant non-overlap of our IDS with the original DARPA participants, which implies that they could be combined to increase coverage.
---
paper_title: Fuzzy network profiling for intrusion detection
paper_content:
The Fuzzy Intrusion Recognition Engine (FIRE) is an anomaly-based intrusion detection system that uses fuzzy logic to assess whether malicious activity is taking place on a network. It uses simple data mining techniques to process the network input data and help expose metrics that are particularly significant to anomaly detection. These metrics are then evaluated as fuzzy sets. FIRE uses a fuzzy analysis engine to evaluate the fuzzy inputs and trigger alert levels for the security administrator. This paper describes the components in the FIRE architecture and explains their roles. Particular attention is given to explaining the benefits of data mining and how this can improve the meaningfulness of the fuzzy sets. Fuzzy rules are developed for some common intrusion detection scenarios. The results of tests with actual network data and actual malicious attacks are described. The FIRE IDS can detect a wide-range of common attack types.
---
paper_title: Fuzzy Data Mining And Genetic Algorithms Applied To Intrusion Detection
paper_content:
We are developing a prototype intelligent intrusion detection system (IIDS) to demonstrate the effectiveness of data mining techniques that utilize fuzzy logic and genetic algorithms. This system combines both anomaly based intrusion detection using fuzzy data mining techniques and misuse detection using traditional rule-based expert system techniques. The anomaly-based components are developed using fuzzy data mining techniques. They look for deviations from stored patterns of normal behavior. Genetic algorithms are used to tune the fuzzy membership functions and to select an appropriate set of features. The misuse detection components look for previously described patterns of behavior that are likely to indicate an intrusion. Both network traffic and system audit data are used as inputs for both components.
---
paper_title: Intrusion Detection in Wireless Ad Hoc Networks
paper_content:
Intrusion detection has, over the last few years, assumed paramount importance within the broad realm of network security, and it has a significant impact on wireless ad hoc networks. These networks do not have an underlying infrastructure and the network topology constantly changes. The inherently vulnerable characteristics of wireless ad hoc networks make them susceptible to attacks, and an attack may be difficult to contain before any prevention measure takes effect. Secondly, with so much advancement in hacking, if attackers use sophisticated technology, they will eventually succeed in infiltrating the system. This makes it important to constantly or periodically monitor what is taking place on a system and look for suspicious behavior. Intrusion detection systems (IDS) monitor audit data, look for intrusions into the system, and initiate a proper response. In this paper, we present a method for determining a critical path that uses a distributed security scheme to identify malicious nodes. With the help of the critical links, the nodes identify a malicious node using the distributed security scheme and inform all other nodes about it. The simulation results describe the details of the critical path test.
---
paper_title: Intrusion Detection Techniques for Mobile Wireless Networks
paper_content:
The rapid proliferation of wireless networks and mobile computing applications has changed the landscape of network security. The traditional way of protecting networks with firewalls and encryption software is no longer sufficient and effective. We need to search for new architecture and mechanisms to protect the wireless networks and mobile computing application. In this paper, we examine the vulnerabilities of wireless networks and argue that we must include intrusion detection in the security architecture for mobile computing environment. We have developed such an architecture and evaluated a key mechanism in this architecture, anomaly detection for mobile ad-hoc network, through simulation experiments.
---
paper_title: Alert aggregation in mobile ad hoc networks
paper_content:
In Intrusion Detection Systems (IDSs) for Mobile Ad hoc NETworks (MANETs), IDS agents using local detection engines alone may lead to undesirable performance due to the dynamic feature of MANETs. In this paper, we present a nonoverlapping Zone-based Intrusion Detection System (ZBIDS) for MANETs. Focusing on the protection of MANET routing protocols, we propose the collaboration mechanism of ZBIDS agents and an aggregation algorithm used by ZBIDS gateway nodes. The aggregation algorithm mainly utilizes the probability distribution of the $Source$ attribute in order to make the final decisions to generate alarms. We demonstrate that, by integrating the security related information from a wider area, the aggregation algorithm can reduce the false alarm ratio and improve the detection ratio. Also, the gateway nodes in ZBIDS can provide more diagnostic information by presenting a global view of attacks. We also present an alert data model conformed to Intrusion Detection Message Exchange Format (IDMEF) to facilitate the interoperability of IDS agents. Based on the routing disruption attack aimed at the Dynamic Source Routing protocol (DSR), we study the performance of ZBIDS at different mobility levels. Simulation results show that our system can achieve lower false positive ratio and higher detection ratio, compared to systems with local detection only.
---
paper_title: Effective intrusion detection using multiple sensors in wireless ad hoc networks
paper_content:
In this paper we propose a distributed intrusion detection system for ad hoc wireless networks based on mobile agent technology. Wireless networks are particularly vulnerable to intrusion, as they operate in open medium, and use cooperative strategies for network communications. By efficiently merging audit data from multiple network sensors, we analyze the entire ad hoc wireless network for intrusions and try to inhibit intrusion attempts. In contrast to many intrusion detection systems designed for wired networks, we implement an efficient and bandwidth-conscious framework that targets intrusion at multiple levels and takes into account distributed nature of ad hoc wireless network management and decision policies.
---
| Title: A Comprehensive Survey on Anomaly-Based Intrusion Detection in MANET
Section 1: Introduction
Description 1: This section introduces the concept of Mobile Ad Hoc Networks (MANETs) and discusses their advantages and inherent security vulnerabilities.
Section 2: Intrusion Detection Systems
Description 2: This section discusses the vital role of Intrusion Detection Systems (IDS) in securing MANETs and explains their multi-layer protection strategy.
Section 3: Classification of Intrusion Detection System
Description 3: This section classifies IDSs based on data collection mechanisms (Host-based and Network-based) and detection techniques (misuse-based, anomaly-based, and specification-based).
Section 4: Architecture of IDS
Description 4: This section presents different architectures for IDS in MANETs, including stand-alone, distributed and cooperative, hierarchical, and mobile agent-based systems.
Section 5: Anomaly-Based Detection Techniques
Description 5: This section categorizes anomaly-based detection techniques into statistical-based, knowledge-based, and machine learning-based methods, detailing the strengths and weaknesses of each.
Section 6: Analysis and Evaluation of Anomaly-Based Detection System
Description 6: This section evaluates various anomaly-based detection systems proposed for MANETs, highlighting their main capabilities, benefits, and limitations.
Section 7: Discussion and Summary
Description 7: This section summarizes the critical aspects of IDS and highlights the advantages and challenges of different architectures and anomaly-based detection techniques in MANETs.
Section 8: Conclusion and Further Guidelines
Description 8: This section concludes the survey by suggesting future research directions, such as using game theory and Bayesian networks, and emphasizes the need for combined detection techniques and protecting IDS themselves. |
A Survey on Low-Power Techniques with Emerging Technologies: From Devices to Systems | 7 | ---
paper_title: A 22nm SoC platform technology featuring 3-D tri-gate and high-k/metal gate, optimized for ultra low power, high performance and high density SoC applications
paper_content:
A leading edge 22nm 3-D tri-gate transistor technology has been optimized for low power SoC products for the first time. Low standby power and high voltage transistors exploiting the superior short channel control, < 65mV/dec subthreshold slope and <40mV DIBL, of the Tri-Gate architecture have been fabricated concurrently with high speed logic transistors in a single SoC chip to achieve industry leading drive currents at record low leakage levels. NMOS/PMOS Idsat=0.41/0.37mA/um at 30pA/um Ioff, 0.75V, were used to build a low standby power 380Mb SRAM capable of operating at 2.6GHz with 10pA/cell standby leakages. This technology offers mix-and-match flexibility of transistor types, high-density interconnect stacks, and RF/mixed-signal features for leadership in mobile, handheld, wireless and embedded SoC products.
---
paper_title: Polarity control in double-gate, gate-all-around vertically stacked silicon nanowire FETs
paper_content:
We fabricated and characterized new ambipolar silicon nanowire (SiNW) FET transistors featuring two independent gate-all-around electrodes and vertically stacked SiNW channels. One gate electrode enables dynamic configuration of the device polarity (n or p-type), while the other switches on/off the device. Measurement results on silicon show I_on/I_off > 10^6 and S ≈ 64mV/dec (70mV/dec) for p(n)-type operation in the same device. We show that XOR operation is embedded in the device characteristic, and we demonstrate for the first time a fully functional 2-transistor XOR gate.
---
paper_title: High performance and highly uniform gate-all-around silicon nanowire MOSFETs with wire size dependent scaling
paper_content:
We demonstrate undoped-body, gate-all-around (GAA) Si nanowire (NW) MOSFETs with excellent electrostatic scaling. These NW devices, with a TaN/Hf-based gate stack, have high drive-current performance with NFET/PFET I_DSAT = 825/950 µA/µm (circumference-normalized) or 2592/2985 µA/µm (diameter-normalized) at supply voltage V_DD = 1 V and off-current I_OFF = 15 nA/µm. Superior NW uniformity is obtained through the use of a combined hydrogen annealing and oxidation process. Clear scaling of short-channel effects versus NW size is observed.
---
paper_title: Ultra-Wide Voltage Range designs in Fully-Depleted Silicon-On-Insulator FETs
paper_content:
Today's MPSoC applications require a convergence between very high speed and ultra low power. Ultra Wide Voltage Range (UWVR) capability appears as a solution for high energy efficiency, with the objective of improving the speed at very low voltage and decreasing the power at high speed. Using Fully Depleted Silicon-On-Insulator (FDSOI) devices significantly improves the trade-off between leakage, variability and speed even at low voltage. A full design framework is presented for UWVR operation using FDSOI Ultra Thin Body and Box technology, considering power management, multi-VT enablement, standard cell design and SRAM bitcells. Technology performance is demonstrated on an ARM A9 critical path showing a speed increase from 40% to 200% without added energy cost. Conversely, when performance is not required, FDSOI enables leakage power to be reduced by up to 10X using Reverse Body Biasing.
---
paper_title: Energy-Aware System Design: Algorithms and Architectures
paper_content:
Power consumption has become the most important design goal in a wide range of electronic systems. There are two driving forces behind this trend: continuing device scaling and the ever-increasing demand for higher computing power. First, device scaling continues to satisfy Moore's law via a conventional way of scaling (More Moore) and a new way of exploiting vertical integration (More than Moore). Second, mobile and IT convergence requires more computing power on the silicon chip than ever. Cell phones are now evolving towards mobile PCs. PCs and data centers are becoming commodities in the home and a must in industry. Both the supply enabled by device scaling and the demand triggered by the convergence trend realize more computation on chip (via multi-core designs, integration of diverse functionalities on mobile SoCs, etc.) and finally more power consumption, incurring power-related issues and constraints. Energy-Aware System Design: Algorithms and Architectures provides state-of-the-art ideas for low power design methods from the circuit and architecture levels to the software level, and offers design case studies in three fast-growing areas: mobile storage, biomedical and security. Important topics and features: - Describes very recent advanced issues and methods for energy-aware design at each design level from circuit and architecture to algorithm level, also covering important blocks including the low power main memory subsystem and on-chip network at the architecture level; - Explains efficient power conversion and delivery, which is becoming important as heterogeneous power sources are adopted for digital and non-digital parts; - Investigates 3D die stacking, emphasizing temperature awareness for a better perspective on energy efficiency; - Presents three practical energy-aware design case studies: a novel storage device (e.g., solid state disk), biomedical electronics (e.g., cochlear and retina implants), and wireless surveillance camera systems. Researchers and engineers in the field of hardware and software design will find this book an excellent starting point to catch up with the state-of-the-art ideas of low power design.
---
paper_title: A 22nm high performance and low-power CMOS technology featuring fully-depleted tri-gate transistors, self-aligned contacts and high density MIM capacitors
paper_content:
A 22nm generation logic technology is described incorporating fully-depleted tri-gate transistors for the first time. These transistors feature a 3rd-generation high-k + metal-gate technology and a 5th generation of channel strain techniques resulting in the highest drive currents yet reported for NMOS and PMOS. The use of tri-gate transistors provides steep subthreshold slopes (∼70mV/dec) and very low DIBL (∼50mV/V). Self-aligned contacts are implemented to eliminate restrictive contact to gate registration requirements. Interconnects feature 9 metal layers with ultra-low-k dielectrics throughout the interconnect stack. High density MIM capacitors using a hafnium based high-k dielectric are provided. The technology is in high volume manufacturing.
---
paper_title: Analysis and future trend of short-circuit power
paper_content:
A closed-form expression for the short-circuit power dissipation of CMOS gates is presented which takes short-channel effects into consideration. The calculation results show good agreement with SPICE simulation results over a wide range of load capacitance and channel length. The change in the short-circuit power, P_S, caused by scaling relative to the charging and discharging power, P_D, is discussed, and it is shown that the power ratio P_S/(P_D + P_S) basically does not change with scaling if V_TH/V_DD is kept constant. The paper also handles the short-circuit power of series-connected MOSFET structures, which appear in NAND and other complex gates.
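As a point of reference (not the closed form derived in the paper, which additionally models short-channel effects), the widely used first-order approximation for short-circuit power, together with the switching power it is compared against, can be written as:
```latex
% First-order (Veendrick-style) short-circuit power for a symmetric inverter,
% with transistor gain factor \beta, input rise/fall time \tau and period T;
% P_D is the usual charging/discharging (switching) power with load C_L:
P_S \approx \frac{\beta}{12}\,\left(V_{DD} - 2V_{TH}\right)^{3}\,\frac{\tau}{T},
\qquad
P_D = C_L\,V_{DD}^{2}\,f .
% If V_{TH}/V_{DD} is held constant under scaling, P_S and P_D scale similarly,
% which is why the ratio P_S/(P_D + P_S) stays roughly unchanged.
```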
---
paper_title: Leakage Current Mechanisms and Leakage Reduction Techniques in Deep-Submicrometer CMOS Circuits
paper_content:
High leakage current in deep-submicrometer regimes is becoming a significant contributor to power dissipation of CMOS circuits as threshold voltage, channel length, and gate oxide thickness are reduced. Consequently, the identification and modeling of different leakage components is very important for estimation and reduction of leakage power, especially for low-power applications. This paper reviews various transistor intrinsic leakage mechanisms, including weak inversion, drain-induced barrier lowering, gate-induced drain leakage, and gate oxide tunneling. Channel engineering techniques including retrograde well and halo doping are explained as means to manage short-channel effects for continuous scaling of CMOS devices. Finally, the paper explores different circuit techniques to reduce the leakage power consumption.
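For orientation, the weak-inversion component that dominates subthreshold leakage is commonly written in the following textbook form with a first-order DIBL term; this is a generic expression, not one taken from the paper:
```latex
% Subthreshold (weak-inversion) leakage with a first-order DIBL term.
% I_0: process-dependent prefactor, n: subthreshold swing factor,
% v_T = kT/q: thermal voltage, \eta: DIBL coefficient.
I_{sub} \;\approx\; I_{0}\,
  \exp\!\left(\frac{V_{GS} - V_{TH} + \eta\,V_{DS}}{n\,v_{T}}\right)
  \left(1 - \exp\!\left(-\frac{V_{DS}}{v_{T}}\right)\right).
```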
---
paper_title: Programmable nanowire circuits for nanoprocessors
paper_content:
In a significant step forward in complexity and capability for bottom-up assembly of nanoelectronic circuits, this study demonstrates scalable and programmable logic tiles based on semiconductor nanowire transistor arrays. The same logic tile, consisting of 496 configurable transistor nodes in an area of about 960 μm², could be programmed and operated as, among other functions, a full-adder, a full-subtractor and a multiplexer. The promise is that these logic tiles can be cascaded to realize fully integrated nanoprocessors with computing, memory and addressing capabilities.
---
paper_title: Practical Strategies for Power-Efficient Computing Technologies
paper_content:
After decades of continuous scaling, further advancement of silicon microelectronics across the entire spectrum of computing applications is today limited by power dissipation. While the trade-off between power and performance is well-recognized, most recent studies focus on the extreme ends of this balance. By concentrating instead on an intermediate range, an ~ 8× improvement in power efficiency can be attained without system performance loss in parallelizable applications-those in which such efficiency is most critical. It is argued that power-efficient hardware is fundamentally limited by voltage scaling, which can be achieved only by blurring the boundaries between devices, circuits, and systems and cannot be realized by addressing any one area alone. By simultaneously considering all three perspectives, the major issues involved in improving power efficiency in light of performance and area constraints are identified. Solutions for the critical elements of a practical computing system are discussed, including the underlying logic device, associated cache memory, off-chip interconnect, and power delivery system. The IBM Blue Gene system is then presented as a case study to exemplify several proposed directions. Going forward, further power reduction may demand radical changes in device technologies and computer architecture; hence, a few such promising methods are briefly considered.
---
paper_title: Silicon Nanowire Tunnel FETs: Low-Temperature Operation and Influence of High- $k$ Gate Dielectric
paper_content:
In this paper, we demonstrate p-channel tunnel FETs based on silicon nanowires grown with an in situ p-i-n doping profile. The tunnel FETs were fabricated with three different gate dielectrics, SiO2, Al2O3, and HfO2, and show a performance enhancement when using high-k dielectric materials. The best performance is achieved for the devices using HfO2 as the gate dielectric, which reach an Ion of 0.1 μA/μm (VDS = -0.5 V, VGS = -2 V), combined with an average inverse subthreshold slope (SS) of ~120 mV/dec and an Ion/Ioff ratio of around 10^6. For the tunnel FETs with Al2O3 as the gate dielectric, different annealing steps were evaluated, and an activation anneal at only 700°C was found to yield the best results. Furthermore, we also investigated the temperature behavior of the tunnel FETs. Ideal tunnel FET behavior was observed for devices having ohmic Ni/Au contacts, and we demonstrate the invariance of both the SS and on-current with temperature, as expected for true tunnel FETs.
---
paper_title: Reconfigurable Silicon Nanowire Transistors
paper_content:
Over the past 30 years electronic applications have been dominated by complementary metal oxide semiconductor (CMOS) devices. These combine p- and n-type field effect transistors (FETs) to reduce static power consumption. However, CMOS transistors are limited to static electrical functions, i.e., electrical characteristics that cannot be changed. Here we present the concept and a demonstrator of a universal transistor that can be reversely configured as p-FET or n-FET simply by the application of an electric signal. This concept is enabled by employing an axial nanowire heterostructure (metal/intrinsic-silicon/metal) with independent gating of the Schottky junctions. In contrast to conventional FETs, charge carrier polarity and concentration are determined by selective and sensitive control of charge carrier injections at each Schottky junction, explicitly avoiding the use of dopants as shown by measurements and calculations. Besides the additional functionality, the fabricated nanoscale devices exhibit e...
---
paper_title: A 45nm Logic Technology with High-k+Metal Gate Transistors, Strained Silicon, 9 Cu Interconnect Layers, 193nm Dry Patterning, and 100% Pb-free Packaging
paper_content:
A 45 nm logic technology is described that for the first time incorporates high-k + metal gate transistors in a high volume manufacturing process. The transistors feature 1.0 nm EOT high-k gate dielectric, dual band edge workfunction metal gates and third generation strained silicon, resulting in the highest drive currents yet reported for NMOS and PMOS. The technology also features trench contact based local routing, 9 layers of copper interconnect with low-k ILD, low cost 193 nm dry patterning, and 100% Pb-free packaging. Process yield, performance and reliability are demonstrated on 153 Mb SRAM arrays with SRAM cell size of 0.346 µm², and on multiple microprocessors.
---
paper_title: A 22nm SoC platform technology featuring 3-D tri-gate and high-k/metal gate, optimized for ultra low power, high performance and high density SoC applications
paper_content:
A leading edge 22nm 3-D tri-gate transistor technology has been optimized for low power SoC products for the first time. Low standby power and high voltage transistors exploiting the superior short channel control, < 65mV/dec subthreshold slope and <40mV DIBL, of the Tri-Gate architecture have been fabricated concurrently with high speed logic transistors in a single SoC chip to achieve industry leading drive currents at record low leakage levels. NMOS/PMOS Idsat = 0.41/0.37 mA/µm at 30 pA/µm Ioff, 0.75V, were used to build a low standby power 380Mb SRAM capable of operating at 2.6GHz with 10pA/cell standby leakages. This technology offers mix-and-match flexibility of transistor types, high-density interconnect stacks, and RF/mixed-signal features for leadership in mobile, handheld, wireless and embedded SoC products.
---
paper_title: Ultra-thin-body and BOX (UTBB) fully depleted (FD) device integration for 22nm node and beyond
paper_content:
We present UTBB devices with a gate length (L G ) of 25nm and competitive drive currents. The process flow features conventional gate-first high-k/metal and raised source/drains (RSD). Back bias (V bb ) enables V t modulation of more than 125mV with a V bb of 0.9V and BOX thickness of 12nm. This demonstrates the importance and viability of the UTBB structure for multi-V t and power management applications. We explore the impact of GP, BOX thickness and V bb on local V t variability for the first time. Excellent A Vt of 1.27 mV·µm is achieved. We also present simulations results that suggest UTBB has improved scalability, reduced gate leakage (I g ) and lower external resistance (R ext ), thanks to a thicker inversion gate dielectric (T inv ) and body (T si ) thickness.
---
paper_title: Extremely thin SOI for system-on-chip applications
paper_content:
We review the basics of the extremely thin SOI (ETSOI) technology and how it addresses the main challenges of the CMOS scaling at the 20-nm technology node and beyond. The possibility of V T tuning with backbias, while keeping the channel undoped, opens up new opportunities that are unique to ETSOI. The main device characteristics with regard to low-power and high-performance logic, SRAM, analog and passive devices, and embedded memory are reviewed.
---
paper_title: Vertically Stacked SiGe Nanowire Array Channel CMOS Transistors
paper_content:
We demonstrate, for the first time, the fabrication of vertically stacked SiGe nanowire (NW) arrays with a fully CMOS compatible technique. Our method uses the phenomenon of Ge condensation onto Si and the faster oxidation rate of SiGe than Si to realize the vertical stacking of NWs. Gate-all-around n- and p-FETs, fabricated using these stacked NW arrays as the channel (Lg ≥ 0.35 µm), exhibit excellent device performance with high ION/IOFF ratio (~10^6), near ideal subthreshold slope (~62-75 mV/dec) and low drain induced barrier-lowering (~20 mV/V). The transconductance characteristics suggest quantum confinement of holes in the [Ge]-rich outer-surface of SiGe for p-FETs and confinement of electrons in the core Si with significantly less [Ge] for n-FETs. The presented device architecture can be a promising option to overcome the low drive current restriction of Si NW MOSFETs for a given planar estate.
---
paper_title: Polarity control in double-gate, gate-all-around vertically stacked silicon nanowire FETs
paper_content:
We fabricated and characterized new ambipolar silicon nanowire (SiNW) FET transistors featuring two independent gate-all-around electrodes and vertically stacked SiNW channels. One gate electrode enables dynamic configuration of the device polarity (n or p-type), while the other switches on/off the device. Measurement results on silicon show Ion/Ioff > 10^6 and S ≈ 64 mV/dec (70 mV/dec) for p(n)-type operation in the same device. We show that XOR operation is embedded in the device characteristic, and we demonstrate for the first time a fully functional 2-transistor XOR gate.
---
paper_title: Assembly and integration of semiconductor nanowires for functional nanosystems
paper_content:
Central to the bottom-up paradigm of nanoscience, which could lead to entirely new and highly integrated functional nanosystems, is the development of effective assembly methods that enable hierarchical organization of nanoscale building blocks over large areas. Semiconductor nanowires (NWs) represent one of the most powerful and versatile classes of synthetically tunable nanoscale building blocks for studies of the fundamental physical properties of nanostructures and the assembly of a wide range of functional nanoscale systems. In this article, we review several key advances in the recent development of general assembly approaches for organizing semiconductor NW building blocks into designed architectures, and the further integration of ordered structures to construct functional NW device arrays. We first introduce a series of rational assembly strategies to organize NWs into hierarchically ordered structures, with a focus on the blown bubble film (BBF) technique and chemically driven assembly. Next, we discuss significant advances in building integrated nanoelectronic systems based on the reproducible assembly of scalable NW crossbar arrays, such as high-density memory arrays and logic structures. Lastly, we describe unique applications of assembled NW device arrays for studying functional nanoelectronic-biological interfaces by building well-defined NW-cell/tissue hybrid junctions, including the highly integrated NW-neuron interface and the multiplexed, flexible NW-heart tissue interface.
---
paper_title: Controlling the Polarity of Silicon Nanowire Transistors
paper_content:
Each generation of integrated circuit (IC) technology has led to new applications. The most recent advances have enabled noninvasive surgery, three-dimensional (3D) games and movies, and intelligent cars, to name a few. A single chip can contain more than 1 billon elementary devices, and this gain in complexity has been achieved by fabricating nanometer-scale transistors used as switches or memories. Recent experimental work by De Marchi et al. ( 1 ) describes changes to the structure of one of the most basic bricks of ICs by controlling the type of conduction occurring in vertically stacked silicon (Si) nanowire transistors (see the figure, panels A and B), thus making a programmable transistor.
---
paper_title: Three Dimensionally Stacked SiGe Nanowire Array and Gate-All-Around p-MOSFETs
paper_content:
A novel method for realizing arrays of vertically stacked (e.g., ×3 wires stacked) laterally spread out nanowires is presented for the first time using a fully Si-CMOS compatible process. The gate-all-around (GAA) MOSFET devices using these nanowire arrays show excellent performance in terms of near ideal sub-threshold slope (<70 mV/dec), high Ion/Ioff ratio (~10^7), and low leakage current. Vertical stacking economizes on silicon estate and improves the on-state IDSAT at the same time. Both n- and p-FET devices are demonstrated.
---
paper_title: Process-Variation Effect, Metal-Gate Work-Function Fluctuation, and Random-Dopant Fluctuation in Emerging CMOS Technologies
paper_content:
This paper, for the first time, estimates the influences of the intrinsic-parameter fluctuations consisting of metal-gate work-function fluctuation (WKF), process-variation effect (PVE), and random-dopant fluctuation (RDF) on 16-nm-gate planar metal-oxide-semiconductor field-effect transistors (MOSFETs) and circuits. The WKF and RDF dominate the threshold-voltage fluctuation (σVth); however, the WKF brings less impact on the gate capacitance and the cutoff frequency due to the screening effect of the inversion layer. The fluctuation of timing characteristics depends on σVth and is therefore proportional to the trend of σVth. The power fluctuation consisting of the dynamic, short-circuit, and static powers is further investigated. The total power fluctuation for the planar MOSFET circuits is 15.2%, which is substantial in the reliability of circuits and systems. The static power is a minor part of the total power; however, its fluctuation is significant because of the serious fluctuation of the leakage current. For an amplifier circuit, the high-frequency characteristics, the circuit gain, the 3-dB bandwidth, the unity-gain bandwidth power, and the power-added efficiency are explored consequently. Similar to the trend of the cutoff frequency, the PVE and RDF dominate both the device and circuit characteristic fluctuations due to the significant gate-capacitance fluctuations, and the WKF is less important at this simulation scenario. The extensive study assesses the fluctuations on circuit performance and reliability, which can, in turn, be used to optimize nanoscale MOSFETs and circuits.
---
paper_title: FDSOI: From substrate to devices and circuit applications
paper_content:
Nanotechnology starts at the substrate level. SOI substrates enable performance improvement, area saving and power reduction for ICs through a combination of substrate design and device architecture that maximizes the benefits at the IC level. SOI substrates have made possible efficient PDSOI MOSFET optimization, increasing current drive while minimizing leakage and reducing parasitic elements. Further development of SOI substrate technology has made it possible to position ultra-thin silicon SOI (UTSOI) as an industrial option for the manufacturing of FDSOI device architectures, where the SOI film thickness uniformity is controlled to below ±5 Å across the wafer and from wafer to wafer. FDSOI enables the design of low-power and high-performance IC products. FDSOI circuit design does not have to take into account the history effect of PDSOI nor the high threshold-voltage variation due to random dopant fluctuation, given that the transistor channels are undoped. This makes porting designs from bulk to FDSOI much simpler. An overview of the advances in Smart Cut UTSOI and FDSOI devices and circuit applications is given.
---
paper_title: A 90nm high volume manufacturing logic technology featuring novel 45nm gate length strained silicon CMOS transistors
paper_content:
This paper describes the details of a novel strained transistor architecture which is incorporated into a 90nm logic technology on 300mm wafers. The unique strained PMOS transistor structure features an epitaxially grown strained SiGe film embedded in the source/drain regions. Dramatic performance enhancements relative to unstrained devices are reported. These transistors have gate lengths of 45nm and 50nm for NMOS and PMOS respectively, 1.2nm physical gate oxide and Ni salicide. World-record PMOS drive currents of 700 µA/µm (high VT) and 800 µA/µm (low VT) at 1.2V are demonstrated. NMOS devices exercise a highly tensile silicon nitride capping layer to induce tensile strain in the NMOS channel region. High NMOS drive currents of 1.26 mA/µm (high VT) and 1.45 mA/µm (low VT) at 1.2V are reported. The technology is mature and is being ramped into high volume manufacturing to fabricate the next generation Pentium® and Intel® Centrino™ processor families.
---
paper_title: 15nm-diameter 3D stacked nanowires with independent gates operation: ΦFET
paper_content:
For the first time, we report a 3D stacked sub-15 nm diameter NanoWire FinFET-like CMOS technology (3D-NWFET) with a new optional independent gate nanowire structure named ΦFET. Extremely high driving currents for 3D-NWFET (6.5 mA/µm for NMOS and 3.3 mA/µm for PMOS) are demonstrated thanks to the 3D configuration using a high-k/metal gate stack. Co-processed reference FinFETs with fin widths down to 6 nm are achieved with record aspect ratios of 23. We show experimentally that the 3D-NWFET, compared to a co-processed FinFET, relaxes by a factor of 2.5 the channel width requirement for a targeted DIBL and improves transport properties. ΦFET exhibits significant performance boosts compared to Independent-Gate FinFET (IG-FinFET): a 2-decade smaller IOFF current and a lower subthreshold slope (82 mV/dec. instead of 95 mV/dec.). This highlights the better scalability of 3D-NWFET and ΦFET compared to FinFET and IG-FinFET, respectively.
---
paper_title: High-performance carbon nanotube field-effect transistor with tunable polarities
paper_content:
State-of-the-art carbon nanotube field-effect transistors (CNFETs) behave as Schottky barrier (SB)-modulated transistors. It is known that vertical scaling of the gate oxide significantly improves the performance of these devices. However, decreasing the oxide thickness also results in pronounced ambipolar transistor characteristics and increased drain leakage currents. Using a novel device concept, we have fabricated high-performance, enhancement-mode CNFETs exhibiting n or p-type unipolar behavior, tunable by electrostatic and/or chemical doping, with excellent OFF-state performance and a steep subthreshold swing (S =63 mV/dec). The device design allows for aggressive oxide thickness and gate length scaling while maintaining the desired device characteristics.
---
paper_title: Device and Architecture Outlook for Beyond CMOS Switches
paper_content:
Sooner or later, fundamental limitations destine complementary metal-oxide-semiconductor (CMOS) scaling to a conclusion. A number of unique switches have been proposed as replacements, many of which do not even use electron charge as the state variable. Instead, these nanoscale structures pass tokens in the spin, excitonic, photonic, magnetic, quantum, or even heat domains. Emergent physical behaviors and idiosyncrasies of these novel switches can complement the execution of specific algorithms or workloads by enabling quite unique architectures. Ultimately, exploiting these unusual responses will extend throughput in high-performance computing. Alternative tokens also require new transport mechanisms to replace the conventional chip wire interconnect schemes of charge-based computing. New intrinsic limits to scaling in post-CMOS technologies are likely to be bounded ultimately by thermodynamic entropy and Shannon noise.
---
paper_title: A polarity-controllable graphene inverter
paper_content:
We propose and experimentally demonstrate a functional electron device, which is a polarity-controllable inverter constructed using a four-terminal ambipolar graphene field-effect transistor (FET). The FET has two input terminals, both a top gate and a back gate, and the polarity of the FET can be switched by switching the input to the back gate. The slope of the inverter transfer curves can be changed by changing the back-gate voltage. By adding binary digital data and sinusoidal carrier waves into the back gate and the top gate of the inverter, respectively, a one-transistor binary digital phase modulator can be constructed and operated.
---
paper_title: FinFET SONOS flash memory for embedded applications
paper_content:
FD-SOI (fully depleted silicon-on-insulator) FinFET SONOS flash memory devices are investigated for the first time, and they are found to be scalable to a gate length of 40 nm. Although the FinFET SONOS device does not have a body contact, excellent program/erase characteristics are achieved, together with high endurance, long retention time and low reading disturbance. Devices fabricated on [100] and [110] silicon surfaces are compared.
---
paper_title: 6T SRAM design for wide voltage range in 28nm FDSOI
paper_content:
Unique features of the 28nm ultra-thin body and buried oxide (UTBB) FDSOI technology enable the operation of SRAM in a wide voltage range. Minimum operating voltage limitations of a high-density (HD) 6-transistor (6T) SRAM can be overcome by using a single p-well (SPW) bitcell design in FDSOI. Transient simulations of dynamic failure metrics suggest that a HD 6T SPW array with 128 cells per bitline operates down to 0.65V in typical conditions with no assist techniques. In addition, a wide back-bias voltage range enables run-time tradeoffs between the low leakage current in the sleep mode and the short access time in the active mode, making it attractive for high-performance portable applications.
---
paper_title: Universal logic modules based on double-gate carbon nanotube transistors
paper_content:
Double-gate carbon nanotube field-effect transistors (DG-CNT-FETs) can be controlled in the field to be either n-type or p-type through an extra polarity gate. This results in an embedded XOR behavior, which has inspired several novel circuit designs and architectures. This work makes the following contributions. First, we propose an accurate and efficient semi-classical modeling approach to realize the first SPICE-compatible model for circuit design and optimization of DG-CNTFETs. Second, we design and optimize universal logic modules (ULMs) in two circuit styles based on DG-CNTFETs. The proposed ULMs can leverage the full potential of the embedded XOR through the FPGA-centric lookup table optimization flow. Further, we demonstrate that DG-CNTFET ULMs in the double pass-transistor logic style, which inherently produces dual-rail outputs with balanced delay, are faster than DG-CNTFET circuits in the conventional single-rail static logic style that relies on explicit input inversion. On average across 12 benchmarks, the proposed dual-rail ULMs outperform the best DG-CNTFET fabrics based on tiling patterns by 37%, 12%, and 33% in area, delay, and total power, respectively.
---
paper_title: Novel library of logic gates with ambipolar CNTFETs: Opportunities for multi-level logic synthesis
paper_content:
This paper exploits the unique in-field controllability of the device polarity of ambipolar carbon nanotube field effect transistors (CNT-FETs) to design a technology library with higher expressive power than conventional CMOS libraries. Based on generalized NOR-NAND-AOI-OAI primitives, the proposed library of static ambipolar CNTFET gates efficiently implements XOR functions, provides full-swing outputs, and is extensible to alternate forms with area-performance tradeoffs. Since the design of the gates can be regularized, the ability to functionalize them in-field opens opportunities for novel regular fabrics based on ambipolar CNTFETs. Technology mapping of several multi-level logic benchmarks --- including multipliers, adders, and linear circuits --- indicates that on average, it is possible to reduce both the number of gates and area by ~ 38% while also improving performance by 6.9x.
---
paper_title: Digital Integrated Circuits
paper_content:
Progressive in content and form, this practical book successfully bridges the gap between the circuit perspective and system perspective of digital integrated circuit design. Digital Integrated Circuits maintains a consistent, logical flow of subject matter throughout. Addresses today's most significant and compelling industry topics, including: the impact of interconnect, design for low power, issues in timing and clocking, design methodologies, and the tremendous effect of design automation on the digital design perspective. For readers interested in digital circuit design.
---
paper_title: Self-checking ripple-carry adder with Ambipolar Silicon NanoWire FET
paper_content:
For the rapid adoption of new and aggressive technologies such as ambipolar Silicon NanoWire (SiNW), addressing fault tolerance is necessary. Traditionally, transient fault detection implies a large hardware overhead or a performance decrease compared to permanent fault detection. In this paper, we focus on on-line testing and its application to ambipolar SiNW. We demonstrate on a self-checking ripple-carry adder how the ambipolar design style can help reduce the hardware overhead. Compared with an equivalent CMOS process, the ambipolar SiNW design shows an area reduction of at least 56% (28%) with a delay decrease of 62% (6%) for the Static (Transmission Gate) design style.
---
paper_title: New single-clock CMOS latches and flipflops with improved speed and power savings
paper_content:
New dynamic, semistatic, and fully static single-clock CMOS latches and flipflops are proposed. By removing the speed and power bottlenecks of the original true-single-phase clocking (TSPC) and the existing differential latches and flipflops, both delays and power consumptions are considerably reduced. For the nondifferential dynamic, the differential dynamic, the semistatic, and the fully static flipflops, the best reduction factors are 1.3, 2.1, 2.2, and 2.4 for delays and 1.9, 3.5, 3.4, and 6.5 for power-delay products with an average activity ratio (0.25), respectively. The total and the clocked transistor numbers are decreased. In the new differential flipflops, clock loads are minimized and logic-related transistors are purely n-type in both n- and p-latches, giving additional speed advantage to this kind of CMOS circuits.
---
paper_title: TSPC Flip-Flop circuit design with three-independent-gate silicon nanowire FETs
paper_content:
True Single-Phase Clock (TSPC) flip-flops, based on a dynamic logic implementation, are area-saving and high-speed compared to standard static flip-flops. Furthermore, logic gates can be embedded into TSPC flip-flops, which significantly improves performance. As a promising approach to keeping pace with Moore's Law, functionality-enhanced devices with multiple independent gates have drawn much recent interest. In particular, Three-Independent-Gate Silicon Nanowire FETs (TIG SiNWFETs) can realize the functionality of two serial transistors in a single device; therefore, they open new opportunities for compact designs in both arithmetic and control circuits. In this paper, we propose a TSPC flip-flop implementation with asynchronous set and reset that exploits the compactness of TIG SiNWFETs. Electrical simulations show that the TIG SiNWFET-based TSPC flip-flop improves area, delay and leakage power by nearly 20%, 30% and 7%, respectively, compared to its LSTP FinFET counterpart at 22nm.
---
paper_title: Dual metal gate FinFET integration by Ta/Mo diffusion technology for Vt reduction and multi-Vt CMOS application
paper_content:
Dual metal gate CMOS FinFETs have been integrated successfully using the Ta/Mo interdiffusion technology. For the first time, low-Vt CMOS FinFETs providing on-current enhancement and high-Vt CMOS FinFETs dramatically reducing stand-by power, namely multi-Vt CMOS FinFETs, are demonstrated by selecting Ta/Mo gates for n- or p-MOS FinFETs with undoped fin channels. A dual metal gate FinFET SRAM with a low-Vt configuration is demonstrated with excellent noise margins at a reduced supply voltage.
---
paper_title: CMOS scaling for the 22nm node and beyond: Device physics and technology
paper_content:
This paper reviews options for CMOS scaling for the 22nm node and beyond. Advanced transistor architectures such as ultra-thin body (UTB), FinFET, gate-all-around (GAA) and vertical options are discussed. Technology challenges faced by all architectures (such as variation, resistance, and capacitance) are analyzed in relation to recent research results. The impact on the CMOS scaling roadmap of system-on-chip (SOC) technologies is reviewed.
---
paper_title: A DC-DC converter for short-channel CMOS technologies
paper_content:
An integrated DC-DC converter with two passive external components was designed and fabricated in an advanced, short-channel (short L_eff) CMOS technology. High switching frequencies (>10 MHz) were used to minimize the size of the external components, and novel circuits were used to reduce the stress on the short-channel devices. Measured efficiencies for a 3.3 V to 1.65 V converter were approximately 75% for output currents from 15 to 40 mA.
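To make the reported numbers concrete, the sketch below estimates buck-converter efficiency from a first-order loss model (conduction, switch-transition and gate-drive losses). All component values are illustrative assumptions and are not taken from the paper.
```python
# Minimal first-order buck-converter efficiency estimate (illustrative only).

def buck_efficiency(v_in, v_out, i_out, f_sw, r_on, c_gate, v_drive, t_overlap=2e-9):
    """Efficiency from conduction, switch-transition and gate-drive losses."""
    p_out = v_out * i_out
    p_conduction = i_out ** 2 * r_on                      # lumped on-resistance loss
    p_switching = 0.5 * v_in * i_out * f_sw * t_overlap   # V-I overlap per transition (assumed)
    p_gate = c_gate * v_drive ** 2 * f_sw                 # gate-drive (C*V^2*f) loss
    return p_out / (p_out + p_conduction + p_switching + p_gate)

# Example in the spirit of the reported 3.3 V -> 1.65 V converter (placeholder values):
eff = buck_efficiency(v_in=3.3, v_out=1.65, i_out=0.03,
                      f_sw=20e6, r_on=0.5, c_gate=50e-12, v_drive=3.3)
print(f"estimated efficiency: {eff:.0%}")
```
With these placeholder values the estimate lands in the same range as the measured ~75%, but the point is only to show which loss terms the high switching frequency and the stress-reduction circuits are trading off.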
---
paper_title: Thermal coupling in integrated circuits: application to thermal testing
paper_content:
The power dissipated by the devices of a circuit can be construed as a signature of the circuit's performance and state. Without disturbing the circuit operation, this power consumption can be monitored by temperature measurements of the silicon die surface via built-in differential temperature sensors. In this paper, dynamic and spatial thermal behavioral characterization of VLSI MOS devices is presented using laser thermoreflectance measurements and on-chip differential temperature sensing circuits. A discussion of the application of built-in differential temperature measurements as an IC test strategy is also presented.
---
paper_title: Circuit techniques for suppression and measurement of on-chip inductive supply noise
paper_content:
Increasing power consumption and clock frequency have significantly exacerbated the Ldi/dt drop, which has emerged as the dominant fraction of the overall power supply drop in high performance designs. We present the design and validation of a high-voltage, charge-pump based active decoupling circuit for the suppression of on-chip inductive power-supply noise. We also propose a low-power, high-resolution, digital on-chip oscilloscope technique, based on repetitive sampling, for measurement of high-frequency supply noise. The proposed circuits were implemented and fabricated in a 0.13 µm CMOS process. Measurement results on the prototype demonstrate 48% and 53% reduction in power supply noise for rapidly switching current-loads and during resonance, respectively. On-chip supply noise is measured using the proposed on-chip oscilloscope and the noise waveforms are compared with those obtained from a traditional supply noise monitor and direct on-chip probing using probe pads.
---
paper_title: Power reduction techniques for microprocessor systems
paper_content:
Power consumption is a major factor that limits the performance of computers. We survey the “state of the art” in techniques that reduce the total power consumed by a microprocessor system over time. These techniques are applied at various levels ranging from circuits to architectures, architectures to system software, and system software to applications. They also include holistic approaches that will become more important over the next decade. We conclude that power management is a multifaceted discipline that is continually expanding with new techniques being developed at every level. These techniques may eventually allow computers to break through the “power wall” and achieve unprecedented levels of performance, versatility, and reliability. Yet it remains too early to tell which techniques will ultimately solve the power problem.
---
paper_title: Impact of die-to-die and within-die parameter fluctuations on the maximum clock frequency distribution for gigascale integration
paper_content:
A model describing the maximum clock frequency (FMAX) distribution of a microprocessor is derived and compared with wafer sort data for a recent 0.25 µm microprocessor. The model agrees closely with measured data in mean, variance, and shape. Results demonstrate that within-die fluctuations primarily impact the FMAX mean and die-to-die fluctuations determine the majority of the FMAX variance. Employing rigorously derived device and circuit models, the impact of die-to-die and within-die parameter fluctuations on future FMAX distributions is forecast for the 180, 130, 100, 70, and 50-nm technology generations. Model predictions reveal that systematic within-die fluctuations impose the largest performance degradation resulting from parameter fluctuations. Assuming a 3σ channel length deviation of 20%, projections for the 50-nm technology generation indicate that essentially a generation of performance gain can be lost due to systematic within-die fluctuations. Key insights from this work elucidate the recommendations that manufacturing process controls be targeted specifically toward sources of systematic within-die fluctuations, and the development of new circuit design methodologies be aimed at suppressing the effect of within-die parameter fluctuations.
---
paper_title: All-digital PLL array provides reliable distributed clock for SOCs
paper_content:
This brief addresses the problem of clock generation and distribution in globally synchronous locally synchronous chips. A novel clock generation architecture based on a network of coupled all-digital PLLs is proposed. Solutions are proposed to overcome the issues of stability and undesirable synchronized modes (modelocks) of high-order bidirectional PLL networks. The VLSI implementation of the network is discussed in a 65 nm CMOS technology, and the simulation results prove the reliability of the global synchronization achieved by the proposed method.
---
paper_title: Low-Cost and Robust Control of a DFLL for Multi-Processor System-On-Chip
paper_content:
Fine-grain Dynamic Voltage and Frequency Scaling (DVFS) is becoming a requirement in Globally-Asynchronous Locally-Synchronous (GALS) architectures to ensure low power consumption of the whole chip. Each voltage/frequency island is driven by a voltage actuator and a frequency actuator. However, due to the process variability that naturally appears with technology scaling, the actuator design must be robust. Moreover, due to area constraints, the control law must be as simple as possible. Last but not least, response-time constraints require these controllers to be implemented in hardware. In this paper, the design of a low-cost control law for a fully digital FLL in the context of a GALS architecture, in the presence of process variability and temperature variations, is proposed. The system is first modelled; in particular, the delay that naturally arises in the sensor is taken into account. The FLL control is designed with classical control tools. The control problem is original in its hardware implementation, which is realized in fixed-point arithmetic. The FLL has been implemented in a GALS chip.
---
paper_title: Dynamic frequency scaling algorithms for improving the CPU's energy efficiency
paper_content:
This paper approaches the problem of improving the energy efficiency of service-center server CPUs by executing dynamic frequency scaling actions and trading off the CPU's computational performance against its power consumption. Two different algorithms are designed and implemented: an immune-inspired algorithm and a fuzzy-logic-based algorithm. The immune-inspired algorithm uses the human antigen as a model to represent the server power/performance state. Using a set of detectors, the antigens are classified as self for an optimal power consumption state or non-self for a non-optimal power consumption state. For the non-self antigens, a biologically inspired clonal selection approach is used to determine the actions that need to be executed to bring the server's CPU into an optimal power consumption state. The fuzzy-logic-based algorithm adaptively changes the processor performance states according to the incoming workload. The algorithm also filters workload spikes, because frequent P-state transition costs can outweigh the benefit of adaptation.
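Neither the immune-inspired nor the fuzzy-logic controller is reproduced here, but the sketch below shows the general shape of such a governor: utilization is smoothed before any P-state change so that isolated spikes do not trigger costly transitions. The frequencies, thresholds and smoothing factor are assumptions.
```python
# Minimal DVFS governor sketch with spike filtering (illustrative only).

P_STATES = [0.8e9, 1.2e9, 1.6e9, 2.0e9]   # available frequencies in Hz (assumed)

def next_frequency(current_f, utilization, ema, alpha=0.3,
                   up_thresh=0.7, down_thresh=0.3):
    """Return (new_frequency, updated_ema) for one control interval."""
    ema = alpha * utilization + (1 - alpha) * ema   # exponential smoothing filters spikes
    idx = P_STATES.index(current_f)
    if ema > up_thresh and idx < len(P_STATES) - 1:
        idx += 1                                    # demand is sustained: speed up
    elif ema < down_thresh and idx > 0:
        idx -= 1                                    # sustained slack: slow down
    return P_STATES[idx], ema

# A single spike in the trace does not raise the frequency; sustained load does.
f, ema = P_STATES[0], 0.2
for u in [0.2, 0.95, 0.2, 0.2, 0.9, 0.9, 0.9, 0.9]:
    f, ema = next_frequency(f, u, ema)
print(f / 1e9, "GHz")   # ends at a higher P-state only after the sustained phase
```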
---
paper_title: A 32-bit PowerPC system-on-a-chip with support for dynamic voltage scaling and dynamic frequency scaling
paper_content:
A PowerPC system-on-a-chip processor which makes use of dynamic voltage scaling and on-the-fly frequency scaling to adapt to the dynamically changing performance demands and power consumption constraints of high-content, battery powered applications is described. The PowerPC core and caches achieve frequencies as high as 380 MHz at a supply of 1.8 V and active power consumption as low as 53 mW at a supply of 1.0 V. The system executes up to 500 MIPS and can achieve standby power as low as 54 µW. Logic supply changes as fast as 10 mV/µs are supported. A low-voltage PLL supplied by an on-chip regulator, which isolates the clock generator from the variable logic supply, allows the SOC to operate continuously while the logic supply voltage is modified. Hardware accelerators for speech recognition, instruction-stream decompression and cryptography are included in the SOC. The SOC occupies 36 mm² in a 0.18 µm, 1.8 V nominal supply, bulk CMOS process.
---
paper_title: All-digital PLL and transmitter for mobile phones
paper_content:
We present the first all-digital PLL and polar transmitter for mobile phones. They are part of a single-chip GSM/EDGE transceiver SoC fabricated in a 90 nm digital CMOS process. The circuits are architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrateable with a digital baseband and application processor. To achieve this, we exploit the new paradigm of a deep-submicron CMOS process environment by leveraging on the fast switching times of MOS transistors, the fine lithography and the precise device matching, while avoiding problems related to the limited voltage headroom. The transmitter architecture is fully digital and utilizes the wideband direct frequency modulation capability of the all-digital PLL. The amplitude modulation is realized digitally by regulating the number of active NMOS transistor switches in accordance with the instantaneous amplitude. The conventional RF frequency synthesizer architecture, based on a voltage-controlled oscillator and phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter. The transmitter performs GMSK modulation with less than 0.5° rms phase error, -165 dBc/Hz phase noise at 20 MHz offset, and 10 µs settling time. The 8-PSK EDGE spectral mask is met with 1.2% EVM. The transmitter occupies 1.5 mm² and consumes 42 mA at 1.2 V supply while producing 6 dBm RF output power.
---
paper_title: Enabling improved power management in multicore processors through clustered DVFS
paper_content:
In recent years, chip multiprocessors (CMP) have emerged as a solution for high-speed computing demands. However, power dissipation in CMPs can be high if numerous cores are simultaneously active. Dynamic voltage and frequency scaling (DVFS) is widely used to reduce the active power, but its effectiveness and cost depends on the granularity at which it is applied. Per-core DVFS allows the greatest flexibility in controlling power, but incurs the expense of an unrealistically large number of on-chip voltage regulators. Per-chip DVFS, where all cores are controlled by a single regulator overcomes this problem at the expense of greatly reduced flexibility. This work considers the problem of building an intermediate solution, clustering the cores of a multicore processor into DVFS domains and implementing DVFS on a per-cluster basis. Based on a typical workload, we propose a scheme to find similarity among the cores and cluster them based on this similarity. We also provide an algorithm to implement DVFS for the clusters, and evaluate the effectiveness of per-cluster DVFS in power reduction.
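The paper's similarity metric and clustering algorithm are not reproduced here; the toy sketch below only illustrates the idea of grouping cores with similar utilization traces into shared voltage/frequency domains using a greedy distance threshold.
```python
# Toy grouping of cores into DVFS clusters by trace similarity (illustrative only).

def cluster_cores(traces, max_distance=0.15):
    """traces: {core_id: [utilization samples]} -> list of clusters (lists of core ids)."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    clusters = []                       # first member of each cluster is its representative
    for core, trace in traces.items():
        for cluster in clusters:
            if distance(trace, traces[cluster[0]]) <= max_distance:
                cluster.append(core)
                break
        else:
            clusters.append([core])
    return clusters

cores = {0: [0.9, 0.8, 0.9], 1: [0.85, 0.8, 0.95],
         2: [0.2, 0.1, 0.3], 3: [0.15, 0.2, 0.25]}
print(cluster_cores(cores))             # -> [[0, 1], [2, 3]]: two shared DVFS domains
```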
---
paper_title: Low-Jitter Process-Independent DLL and PLL Based on Self-Biased Techniques
paper_content:
Delay-locked loop (DLL) and phase-locked loop (PLL) designs based upon self-biased techniques are presented. The DLL and PLL designs achieve process technology independence, fixed damping factor, fixed bandwidth to operating frequency ratio, broad frequency range, input phase offset cancellation, and, most importantly, low input tracking jitter. Both the damping factor and the bandwidth to operating frequency ratio are determined completely by a ratio of capacitances. Self-biasing avoids the necessity for external biasing, which can require special bandgap bias circuits, by generating all of the internal bias voltages and currents from each other so that the bias levels are completely determined by the operating conditions. Fabricated in a 0.5 µm N-well CMOS gate array process, the PLL achieves an operating frequency range of 0.0025 MHz to 550 MHz and input tracking jitter of 384 ps at 250 MHz with 500 mV of low frequency square wave supply noise.
---
paper_title: Ultra Low-Power Clocking Scheme Using Energy Recovery and Clock Gating
paper_content:
A significant fraction of the total power in highly synchronous systems is dissipated over clock networks. Hence, low-power clocking schemes are promising approaches for low-power design. We propose four novel energy recovery clocked flip-flops that enable energy recovery from the clock network, resulting in significant energy savings. The proposed flip-flops operate with a single-phase sinusoidal clock, which can be generated with high efficiency. In the TSMC 0.25 µm CMOS technology, we implemented 1024 proposed energy recovery clocked flip-flops through an H-tree clock network driven by a resonant clock-generator to generate a sinusoidal clock. Simulation results show a power reduction of 90% on the clock-tree and total power savings of up to 83% as compared to the same implementation using the conventional square-wave clocking scheme and flip-flops. Using a sinusoidal clock signal for energy recovery prevents application of existing clock gating solutions. In this paper, we also propose clock gating solutions for energy recovery clocking. Applying our clock gating to the energy recovery clocked flip-flops reduces their power by more than 1000× in the idle mode with negligible power and delay overhead in the active mode. Finally, a test chip containing two pipelined multipliers one designed with conventional square wave clocked flip-flops and the other one with the proposed energy recovery clocked flip-flops is fabricated and measured. Based on measurement results, the energy recovery clocking scheme and flip-flops show a power reduction of 71% on the clock-tree and 39% on flip-flops, resulting in an overall power savings of 25% for the multiplier chip.
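The reason a resonant sinusoidal clock can recover most of the clock-tree energy is the textbook contrast between conventional and adiabatic (slow, resonant) charging of the clock capacitance; the expressions below are this generic first-order argument, not the paper's measured results.
```latex
% Conventional square-wave driving dissipates C*V_DD^2 per charge/discharge cycle;
% adiabatic ramping of the same capacitance C over a time T through an effective
% resistance R dissipates only about (RC/T)*C*V_DD^2 per ramp:
E_{conv} = C\,V_{DD}^{2},
\qquad
E_{adiab} \;\approx\; \frac{RC}{T}\,C\,V_{DD}^{2},
% so for T >> RC (a slow, resonant sinusoidal edge) most of the C V_DD^2 energy
% is recycled rather than dissipated in the driver.
```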
---
paper_title: Variation-Aware Adaptive Voltage Scaling System
paper_content:
Conventional voltage scaling systems require a delay margin to maintain a certain level of robustness across all possible device and wire process variations and temperature fluctuations. This margin is required to cover for a possible change in the critical path due to such variations. Moreover, a slower interconnect delay scaling with voltage compared to logic delay can cause the critical path to change from one operating voltage to another. With technology scaling, both process variation and interconnect delay are growing and demanding more margin to guarantee an error-free operation. Such margin is translated into a voltage overhead and a corresponding energy inefficiency. In this paper, a critical path emulator architecture is shown to track the changing critical path at different process splits by probing the actual transistor and wire conditions. Furthermore, voltage scaling characteristics of the actual critical path is closely tracked by programming logic and interconnect delay lines to achieve the same delay combination as the actual critical path. Compared to conventional open-loop and closed-loop systems, the proposed system is up to 39% and 24% more energy efficient, respectively. A 0.18 µm technology test chip is designed to verify the functionality of the proposed system, showing critical-path tracking of a 16×16-bit multiplier.
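A minimal sketch of the closed-loop idea follows, assuming a critical-path emulator that reports an emulated delay each control interval; the margins, step sizes and voltage bounds are placeholders, and the emulator programming described in the paper is not modeled.
```python
# Closed-loop adaptive voltage scaling step driven by a critical-path-emulator
# delay reading (illustrative sketch; all constants are assumptions).

def avs_step(vdd, emulated_delay, clock_period,
             margin=0.03, step=0.01, vmin=0.7, vmax=1.2):
    """Return the supply voltage (V) for the next control interval."""
    if emulated_delay > clock_period * (1 - margin):
        return min(vdd + step, vmax)       # emulated path is too slow: raise VDD
    if emulated_delay < clock_period * (1 - 3 * margin):
        return max(vdd - step, vmin)       # comfortably fast: lower VDD to save energy
    return vdd                             # within the band: hold

# Example: a 1 ns clock with the emulator reporting 0.9 ns lets VDD step down.
print(avs_step(vdd=1.0, emulated_delay=0.9e-9, clock_period=1e-9))   # -> 0.99
```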
---
paper_title: An Asynchronous Power Aware and Adaptive NoC Based Circuit
paper_content:
A fully power aware globally asynchronous locally synchronous network-on-chip circuit is presented in this paper. The circuit is arranged around an asynchronous network-on-chip providing a 17 Gbits/s throughput and automatically reducing its power consumption by activity detection. Both dynamic and static power consumptions are globally reduced using adaptive design techniques applied locally for each NoC units. The dynamic power consumption can be reduced up to a factor of 8 while the static power consumption is reduced by 2 decades in stand-by mode.
---
paper_title: Temperature-Aware Distributed Run-Time Optimization on MP-SoC Using Game Theory
paper_content:
With hundreds of processing elements (PEs) forecast, future embedded systems will be able to handle multiple applications with very diverse running constraints. In order to avoid hot-spots and control the temperature of the tiles, dynamic voltage-frequency scaling (DVFS) can be applied at the PE level. At the system level, this implies dynamically managing the voltage-frequency pair of each PE in order to obtain a global optimization. In this article we present an approach based on game theory, which adjusts the frequency of each PE at run-time. It aims at reducing the tile temperature while maintaining the synchronization between the tasks of the application graph. A fully distributed scheme is assumed in order to build a scalable mechanism. Results show that the proposed run-time algorithm finds solutions in a few calculation cycles, achieving temperature reductions of about 23%.
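Below is a toy best-response iteration in the spirit of the distributed game described above; the local cost (a cubic frequency term as a temperature/power proxy plus a penalty for drifting from neighbouring tiles) is an assumption chosen for illustration, not the paper's cost function.
```python
# One round of distributed, game-theoretic frequency adaptation (toy sketch).

def best_response_round(freqs, neighbours, levels=(0.5, 0.75, 1.0),
                        w_temp=1.0, w_sync=2.0):
    """freqs: {tile: normalized frequency}; neighbours: {tile: [tile, ...]}."""
    new = {}
    for tile in freqs:
        avg_nb = sum(freqs[n] for n in neighbours[tile]) / len(neighbours[tile])
        # local cost: temperature/power proxy (~f^3) + desynchronization penalty
        cost = lambda f: w_temp * f ** 3 + w_sync * (f - avg_nb) ** 2
        new[tile] = min(levels, key=cost)
    return new

freqs = {0: 1.0, 1: 1.0, 2: 0.5}
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(best_response_round(freqs, neighbours))   # tiles drift toward a cooler consensus
```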
---
paper_title: RazorII: In Situ Error Detection and Correction for PVT and SER Tolerance
paper_content:
Traditional adaptive methods that compensate for PVT variations need safety margins and cannot respond to rapid environmental changes. In this paper, we present a design (RazorII) which implements a flip-flop with in situ detection and architectural correction of variation-induced delay errors. Error detection is based on flagging spurious transitions in the state-holding latch node. The RazorII flip-flop naturally detects logic and register SER. We implement a 64-bit processor in 0.13-µm technology which uses RazorII for SER tolerance and dynamic supply adaptation. RazorII-based DVS allows elimination of safety margins and operation at the point of first failure of the processor. We tested and measured 32 different dies and obtained 33% energy savings over traditional DVS using RazorII for supply voltage control. We demonstrate SER tolerance on the RazorII processor through radiation experiments.
---
paper_title: Multiprocessor System-on-Chip (MPSoC) Technology
paper_content:
The multiprocessor system-on-chip (MPSoC) uses multiple CPUs along with other hardware subsystems to implement a system. A wide range of MPSoC architectures have been developed over the past decade. This paper surveys the history of MPSoCs to argue that they represent an important and distinct category of computer architecture. We consider some of the technological trends that have driven the design of MPSoCs. We also survey computer-aided design problems relevant to the design of MPSoCs.
---
paper_title: System-level energy-efficient dynamic task scheduling
paper_content:
Dynamic voltage scaling (DVS) is a well-known low-power design technique that reduces processor energy by slowing down the DVS processor and stretching the task execution time. However, in a DVS system consisting of a DVS processor and multiple devices, slowing down the processor increases the device energy consumption and thereby the system-level energy consumption. In this paper, we present dynamic task scheduling algorithms for periodic tasks that minimize the system-level energy (CPU energy + device standby energy). The algorithms use a combination of (i) optimal speed setting, which is the speed that minimizes the system energy for a specific task, and (ii) limited preemption, which reduces the number of possible preemptions. For the case when the CPU power and device power are comparable, these algorithms achieve up to 43% energy savings compared to [1], but only up to 12% over non-DVS scheduling. If the device power is large compared to the CPU power, we show that DVS should not be employed.
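The "optimal speed setting" can be made concrete with a small worked example. Under an assumed quadratic per-cycle CPU energy and a device that must stay powered for the whole execution time, the total energy has a single minimum in the speed; the constants below are arbitrary, not the paper's values.

```python
# Worked example of the speed that minimizes CPU + device standby energy.
# Model (assumed): per-cycle CPU energy ~ a*s^2, device standby power P_dev
# is paid for the whole execution time W/s.  Then
#   E(s) = a*W*s^2 + P_dev*W/s,   dE/ds = 0  =>  s* = (P_dev / (2*a))**(1/3)

a = 1.0          # CPU energy coefficient (assumed, normalized units)
P_dev = 0.25     # device standby power (assumed, same normalized units)
W = 1e6          # task length in cycles

s_star = (P_dev / (2 * a)) ** (1.0 / 3.0)

def energy(s):
    return a * W * s ** 2 + P_dev * W / s

print(f"optimal normalized speed s* = {s_star:.3f}")
print(f"E(s*)     = {energy(s_star):.3e}")
print(f"E(s=1.0)  = {energy(1.0):.3e}   (running flat out)")
print(f"E(s=0.25) = {energy(0.25):.3e}   (over-slowing: device energy dominates)")
```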
---
paper_title: A 167-processor 65 nm computational platform with per-processor dynamic supply voltage and dynamic clock frequency scaling
paper_content:
A 167-processor 65 nm computational platform well suited for DSP, communication, and multimedia workloads contains 164 programmable processors with dynamic supply voltage and dynamic clock frequency circuits, three algorithm-specific processors, and three 16 KB shared memories, all clocked by independent oscillators and connected by configurable long-distance-capable links.
---
paper_title: Variation-aware dynamic voltage/frequency scaling
paper_content:
Fine-grained dynamic voltage/frequency scaling (DVFS) is an important tool in managing the balance between power and performance in chip-multiprocessors. Although manufacturing process variations are giving rise to significant core-to-core variations in power and performance, traditional DVFS controllers are unaware of these variations.
---
paper_title: A dynamic voltage scaled microprocessor system
paper_content:
The microprocessor system in portable electronic devices often has a time-varying computational load which is comprised of: (1) compute-intensive and low-latency processes, (2) background and high-latency processes, and (3) system idle. The key design objectives for the processor systems in these applications are providing the highest possible peak performance for the compute-intensive code (e.g., handwriting recognition, image decompression) while maximizing the battery life for the remaining low performance periods. If clock frequency and supply voltage are dynamically varied in response to computational load demands, then energy consumed per process can be reduced for the low computational periods, while retaining peak performance when required. This strategy, which achieves the highest possible energy efficiency for time-varying computational loads, is called dynamic voltage scaling (DVS).
---
paper_title: Polarity control in double-gate, gate-all-around vertically stacked silicon nanowire FETs
paper_content:
We fabricated and characterized new ambipolar silicon nanowire (SiNW) FET transistors featuring two independent gate-all-around electrodes and vertically stacked SiNW channels. One gate electrode enables dynamic configuration of the device polarity (n or p-type), while the other switches on/off the device. Measurement results on silicon show I_on/I_off > 10^6 and S ≈ 64 mV/dec (70 mV/dec) for p(n)-type operation in the same device. We show that XOR operation is embedded in the device characteristic, and we demonstrate for the first time a fully functional 2-transistor XOR gate.
---
paper_title: Embedding statistical tests for on-chip dynamic voltage and temperature monitoring
paper_content:
All mobile applications require high performance with very long battery life. The speed/power-consumption trade-off is therefore a prominent challenge in optimizing overall energy efficiency. In Multiprocessor System-on-Chip architectures, the trade-off is usually achieved by dynamically adapting the supply voltage and the operating frequency of a processor cluster, or of each processor at fine grain. This requires accurately monitoring, on-chip and at runtime, the supply voltage and temperature across the die. Within this context, this paper introduces a method to estimate, from on-chip measurements and using embedded statistical tests, the supply voltage and temperature of a small die area with low-cost digital sensors featuring only a set of ring oscillators. The results obtained for a 32 nm process demonstrate the efficiency of the proposed method: voltage and temperature measurement errors are kept, on average, below 5 mV and 7 °C, respectively.
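A minimal sketch of the estimation step, with the statistical machinery reduced to a least-squares inversion of an assumed linear ring-oscillator model; the sensitivity matrix, nominal counts and noise level are synthetic, not the paper's 32 nm data.

```python
import numpy as np

# Synthetic linear model of three ring-oscillator counters around a nominal
# operating point:  f = f0 + S @ [dV, dT] + noise.   S is assumed.
rng = np.random.default_rng(0)
f0 = np.array([1000.0, 850.0, 920.0])          # nominal counts (assumed)
S = np.array([[400.0, -1.5],                   # counts per volt, per kelvin
              [260.0, -0.9],
              [330.0, -2.1]])

true_dv, true_dt = 0.030, 12.0                 # +30 mV, +12 K excursion
meas = f0 + S @ np.array([true_dv, true_dt]) + rng.normal(0, 0.5, 3)

# Least-squares inversion of the sensor model stands in for the embedded
# statistical tests of the paper.
est, *_ = np.linalg.lstsq(S, meas - f0, rcond=None)
print(f"estimated dV = {est[0]*1e3:+.1f} mV  (true {true_dv*1e3:+.1f} mV)")
print(f"estimated dT = {est[1]:+.1f} K   (true {true_dt:+.1f} K)")
```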
---
paper_title: Timing slack monitoring under process and environmental variations: Application to a DSP performance optimization
paper_content:
To compensate for variability effects in advanced technologies, Process, Voltage, Temperature (PVT) monitors are mandatory when using Adaptive Voltage Scaling (AVS) or Adaptive Body Biasing (ABB) techniques. This paper describes a new monitoring system that allows failure anticipation in real time by looking at the timing slack of a pre-defined set of observable flip-flops. The system is made of dedicated sensor structures located near each monitored flip-flop, coupled with a specific timing detection window generator embedded within the clock tree. Validation and performance results simulated in a 45 nm low-power technology demonstrate a scalable, low-power and low-area system, and its compatibility with a standard CAD flow. Gains of an AVFS scheme based on these structures over a standard DVFS scheme are given for a 32-bit VLIW DSP.
---
| Title: A Survey on Low-Power Techniques with Emerging Technologies: From Devices to Systems
Section 1: INTRODUCTION
Description 1: Provide an overview of the significance of power consumption issues in electronic systems and the motivation for the survey.
Section 2: NECESSARY BACKGROUND
Description 2: Discuss the nature of power consumption in digital circuits and review the different types of power contributions, specifically dynamic power and static power.
Section 3: LOW-POWER HIGH-PERFORMANCE DEVICES: TOWARDS THIN DEVICES
Description 3: Review recent innovations in device-level technologies aimed at improving energy efficiency, such as fully depleted silicon-on-insulator (FDSOI) and fin-based FETs (FinFETs).
Section 4: Functionality-Enhanced Devices: An Alternative to Moore's Law
Description 4: Explore opportunities provided by emerging multiple-independent-gate (MIG) devices and their potential to enrich functionalities beyond traditional scaling.
Section 5: CIRCUIT-LEVEL OPPORTUNITIES FOR LOW-POWER SYSTEMS
Description 5: Discuss innovations in circuit-level design that leverage advanced devices to manage power more efficiently and improve performance.
Section 6: ARCHITECTURAL-LEVEL TECHNIQUES FOR LOW-POWER SYSTEMS
Description 6: Review architectural-level techniques for power management, including dynamic voltage and frequency scaling (DVFS) and adaptive voltage and frequency scaling (AVFS) in complex systems.
Section 7: Conclusions
Description 7: Summarize the key findings and implications of the survey, emphasizing the importance of a holistic approach in energy-aware design. |
A Survey on Model Based Approaches for 2D and 3D Visual Human Pose Recovery | 9 | ---
paper_title: Monocular 3D pose estimation and tracking by detection
paper_content:
Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions.
---
paper_title: A spatio-temporal 2D-models framework for human pose recovery in monocular sequences
paper_content:
This paper addresses the pose recovery problem of a particular articulated object: the human body. In this model-based approach, the 2D-shape is associated to the corresponding stick figure allowing the joint segmentation and pose recovery of the subject observed in the scene. The main disadvantage of 2D-models is their restriction to the viewpoint. To cope with this limitation, local spatio-temporal 2D-models corresponding to many views of the same sequences are trained, concatenated and sorted in a global framework. Temporal and spatial constraints are then considered to build the probabilistic transition matrix (PTM) that gives a frame to frame estimation of the most probable local models to use during the fitting procedure, thus limiting the feature space. This approach takes advantage of 3D information avoiding the use of a complex 3D human model. The experiments carried out on both indoor and outdoor sequences have demonstrated the ability of this approach to adequately segment pedestrians and estimate their poses independently of the direction of motion during the sequence.
---
paper_title: Recovering 3D human pose from monocular images
paper_content:
We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, relevance vector machine (RVM) regression, and support vector machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4–6° are obtained for a variety of walking motions.
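A small sketch of the core regression idea: direct mapping from a silhouette shape descriptor to a pose vector. Here scikit-learn ridge regression on synthetic descriptor/pose pairs stands in for the RVM trained on real histogram-of-shape-contexts data; the dimensions and the linear generative map are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Stand-ins: 100-D "shape descriptors" and 54-D pose vectors (e.g. joint angles).
# A random linear map plus noise replaces real silhouette data for this sketch.
n, d_desc, d_pose = 500, 100, 54
true_map = rng.normal(size=(d_desc, d_pose))
X = rng.normal(size=(n, d_desc))                         # descriptors
Y = X @ true_map + 0.05 * rng.normal(size=(n, d_pose))   # pose targets

# Ridge regression, the simplest of the regressors compared in the paper.
reg = Ridge(alpha=1.0).fit(X[:400], Y[:400])
pred = reg.predict(X[400:])
rmse = np.sqrt(np.mean((pred - Y[400:]) ** 2))
print(f"held-out RMSE on the synthetic pose vectors: {rmse:.3f}")
```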
---
paper_title: Head pose estimation using stereo vision for human-robot interaction
paper_content:
We present a method for estimating a person's head pose with a stereo camera. Our approach focuses on the application of human-robot interaction, where people may be further away from the camera and move freely around in a room. We show that depth information acquired from a stereo camera not only helps improve the accuracy of the pose estimation, but also improves the robustness of the system when the lighting conditions change. The estimation is based on neural networks, which are trained to compute the head pose from grayscale and disparity images of the stereo camera. It can handle pan and tilt rotations from −90° to +90°. Our system does not require any manual initialization and does not suffer from drift during an image sequence. Moreover, the system is capable of real-time processing.
---
paper_title: Action recognition in cluttered dynamic scenes using Pose-Specific Part Models
paper_content:
We present an approach to recognizing single actor human actions in complex backgrounds. We adopt a Joint Tracking and Recognition approach, which track the actor pose by sampling from 3D action models. Most existing such approaches require large training data or MoCAP to handle multiple viewpoints, and often rely on clean actor silhouettes. The action models in our approach are obtained by annotating keyposes in 2D, lifting them to 3D stick figures and then computing the transformation matrices between the 3D keypose figures. Poses sampled from coarse action models may not fit the observations well; to overcome this difficulty, we propose an approach for efficiently localizing a pose by generating a Pose-Specific Part Model (PSPM) which captures appropriate kinematic and occlusion constraints in a tree-structure. In addition, our approach also does not require pose silhouettes. We show improvements to previous results on two publicly available datasets as well as on a novel, augmented dataset with dynamic backgrounds.
---
paper_title: Pedestrian Detection: An Evaluation of the State of the Art
paper_content:
Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.
---
paper_title: Representation and recognition of the movements of shapes
paper_content:
The problems posed by the representation and recognition of the movements of 3-D shapes are analysed. A representation is proposed for the movements of shapes that lie within the scope of the Marr & Nishihara (1978) 3-D model representation of static shapes. The basic problem is how to segment a stream of movement into pieces, each of which can be described separately. The representation proposed here is based upon segmenting a movement at moments when a component axis, e.g. an arm, starts to move relative to its local coordinate frame (here the torso). For example, walking is divided into a segment of the stationary states between each swing of the arms and legs, and the actual motions between the stationary points (relative to the torso, not the ground). This representation is called the state-motion-state (SMS) moving shape representation, and several examples of its application are given.
---
paper_title: Automated human behavior analysis from surveillance videos: a survey
paper_content:
With increasing crime rates in today's world, there is a corresponding awareness of the necessity of detecting abnormal activity. Automation of abnormal human behavior analysis can play a significant role in security by decreasing the time taken to thwart unwanted events and picking them up during the suspicion stage itself. With advances in technology, surveillance systems can become more automated than manual. Human behavior analysis, although crucial, is highly challenging. Tracking and recognizing objects and human motion from surveillance videos, followed by automatic summarization of their content, has become a hot topic of research. Many researchers have contributed to the field of automated video surveillance through detection, classification and tracking algorithms. Earlier research work is insufficient for comprehensive analysis of human behavior. With the introduction of semantics, the context of a surveillance domain may be established. Such semantics may extend surveillance systems to perform event-based behavior analysis relevant to the domain. This paper presents a survey of research on human behavior analysis, with the scope of analyzing the capabilities of state-of-the-art methodologies and a special focus on semantically enhanced analysis.
---
paper_title: Visual recognition of pointing gestures for human – robot interaction
paper_content:
In this paper, we present an approach for recognizing pointing gestures in the context of human-robot interaction. In order to obtain input features for gesture recognition, we perform visual tracking of head, hands and head orientation. Given the images provided by a calibrated stereo camera, color and disparity information are integrated into a multi-hypothesis tracking framework in order to find the 3D-positions of the respective body parts. Based on the hands' motion, an HMM-based classifier is trained to detect pointing gestures. We show experimentally that the gesture recognition performance can be improved significantly by using information about head orientation as an additional feature. Our system aims at applications in the field of human-robot interaction, where it is important to do run-on recognition in real-time, to allow for robot egomotion and not to rely on manual initialization.
---
paper_title: Computer Vision Approaches to Pedestrian Detection: Visible Spectrum Survey
paper_content:
Pedestrian detection from images of the visible spectrum is a high relevant area of research given its potential impact in the design of pedestrian protection systems. There are many proposals in the literature but they lack a comparative viewpoint. According to this, in this paper we first propose a common framework where we fit the different approaches, and second we use this framework to provide a comparative point of view of the details of such different approaches, pointing out also the main challenges to be solved in the future. In summary, we expect this survey to be useful for both novel and experienced researchers in the field. In the first case, as a clarifying snapshot of the state of the art; in the second, as a way to unveil trends and to take conclusions from the comparative study.
---
paper_title: Monocular Pedestrian Detection: Survey and Experiments
paper_content:
Pedestrian detection is a rapidly evolving area in computer vision with key applications in intelligent vehicles, surveillance, and advanced robotics. The objective of this paper is to provide an overview of the current state of the art from both methodological and experimental perspectives. The first part of the paper consists of a survey. We cover the main components of a pedestrian detection system and the underlying models. The second (and larger) part of the paper contains a corresponding experimental study. We consider a diverse set of state-of-the-art systems: wavelet-based AdaBoost cascade, HOG/linSVM, NN/LRF, and combined shape-texture detection. Experiments are performed on an extensive data set captured onboard a vehicle driving through urban environment. The data set includes many thousands of training samples as well as a 27-minute test sequence involving more than 20,000 images with annotated pedestrian locations. We consider a generic evaluation setting and one specific to pedestrian detection onboard a vehicle. Results indicate a clear advantage of HOG/linSVM at higher image resolutions and lower processing speeds, and a superiority of the wavelet-based AdaBoost cascade approach at lower image resolutions and (near) real-time processing speeds. The data set (8.5 GB) is made public for benchmarking purposes.
---
paper_title: The Visual Analysis of Human Movement: A Survey
paper_content:
The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. Because of many potentially important applications, “looking at people” is currently one of the most active application domains in computer vision. This survey identifies a number of promising applications and provides an overview of recent developments in this domain. The scope of this survey is limited to work on whole-body or hand motion; it does not include work on human faces. The emphasis is on discussing the various methodologies; they are grouped in 2-D approaches with or without explicit shape models and 3-D approaches. Where appropriate, systems are reviewed. We conclude with some thoughts about future directions.
---
paper_title: Pedestrian Detection: An Evaluation of the State of the Art
paper_content:
Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.
---
paper_title: Part-based models for finding people and estimating their pose
paper_content:
This chapter will survey approaches to person detection and pose estimation with the use of part-based models. After a brief introduction/motivation for the need for parts, the bulk of the chapter will be split into three core sections on Representation, Inference, and Learning. We begin by describing various gradient-based and color descriptors for parts. We next focus on representations for encoding structural relations between parts, describing extensions of classic pictorial structures models to capture occlusion and appearance relations. We will use the formalism of probabilistic models to unify such representations and introduce the issues of inference and learning. We describe various efficient algorithms designed for tree-structured models, as well as focusing on discriminative formalisms for learning model parameters. We finally end with applications of pedestrian detection, human pose estimation, and people tracking.
---
paper_title: Spatio-Temporal GrabCut human segmentation for face and pose recovery
paper_content:
In this paper, we present a fully automatic Spatio-Temporal GrabCut human segmentation methodology. GrabCut initialization is performed by HOG-based subject detection, face detection, and a skin color model for seed initialization. Spatial information is included by means of Mean Shift clustering, whereas temporal coherence is considered through the history of Gaussian Mixture Models. Moreover, human segmentation is combined with Shape and Active Appearance Models to perform full face and pose recovery. Results over public data sets, as well as our own human action data set, show robust segmentation and recovery of both face and pose using the presented methodology.
---
paper_title: Steerable part models
paper_content:
We describe a method for learning steerable deformable part models. Our models exploit the fact that part templates can be written as linear filter banks. We demonstrate that one can enforce steerability and separability during learning by applying rank constraints. These constraints are enforced with a coordinate descent learning algorithm, where each step can be solved with an off-the-shelf structured SVM solver. The resulting models are orders of magnitude smaller than their counterparts, greatly simplifying learning and reducing run-time computation. Limiting the degrees of freedom also reduces overfitting, which is useful for learning large part vocabularies from limited training data. We learn steerable variants of several state-of-the-art models for object detection, human pose estimation, and facial landmark estimation. Our steerable models are smaller, faster, and often improve performance.
---
paper_title: Histograms of oriented gradients for human detection
paper_content:
We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
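The HOG + linear SVM pipeline is available off the shelf in OpenCV; the sketch below runs the pre-trained pedestrian detector over an image. The input path is a placeholder to be replaced with a real photograph.

```python
import cv2

# OpenCV's HOGDescriptor implements the pipeline above: 64x128 windows,
# 9-bin gradient histograms, block normalization, and a pre-trained linear
# SVM for pedestrians, applied in a multi-scale sliding-window search.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("street_scene.jpg")       # placeholder path (assumption)
if img is None:
    raise SystemExit("provide an input image to run this sketch")

rects, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"{len(rects)} pedestrian windows detected")
cv2.imwrite("detections.jpg", img)
```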
---
paper_title: Poselets: Body part detectors trained using 3D human pose annotations
paper_content:
We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers.
---
paper_title: Pictorial structures revisited: People detection and articulated pose estimation
paper_content:
Non-rigid object detection and articulated pose estimation are two related and challenging problems in computer vision. Numerous models have been proposed over the years and often address different special cases, such as pedestrian detection or upper body pose estimation in TV footage. This paper shows that such specialization may not be necessary, and proposes a generic approach based on the pictorial structures framework. We show that the right selection of components for both appearance and spatial modeling is crucial for general applicability and overall performance of the model. The appearance of body parts is modeled using densely sampled shape context descriptors and discriminatively trained AdaBoost classifiers. Furthermore, we interpret the normalized margin of each classifier as likelihood in a generative model. Non-Gaussian relationships between parts are represented as Gaussians in the coordinate system of the joint between parts. The marginal posterior of each part is inferred using belief propagation. We demonstrate that such a model is equally suitable for both detection and pose estimation tasks, outperforming the state of the art on three recently proposed datasets.
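The inference behind such tree-structured models can be illustrated with a tiny max-sum dynamic program on a three-part chain with one-dimensional candidate locations. The unary scores below are random stand-ins and the quadratic deformation term plays the role of the Gaussian part relations; the paper itself uses dense shape-context/AdaBoost unaries and sum-product belief propagation for marginals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chain of parts over L candidate 1-D locations; unaries and the quadratic
# deformation weight are stand-ins for learned appearance and prior terms.
parts = ["torso", "upper_arm", "lower_arm"]
L = 40
locs = np.arange(L)
unary = {p: rng.normal(size=L) for p in parts}    # appearance log-scores
w_def, rest_offset = 0.05, 3                      # deformation weight / rest length

def pairwise(parent_loc, child_locs):
    """Quadratic (Gaussian-like) deformation score of child given parent."""
    return -w_def * (child_locs - (parent_loc + rest_offset)) ** 2

# Max-sum dynamic programming from the leaf towards the root of the chain.
msg, back = np.zeros(L), []
for child in reversed(parts[1:]):                 # lower_arm, then upper_arm
    score = unary[child] + msg                    # child unary + message from below
    table = np.array([score + pairwise(pl, locs) for pl in locs])  # parent x child
    back.append(table.argmax(axis=1))
    msg = table.max(axis=1)

root_score = unary[parts[0]] + msg
best = [int(root_score.argmax())]
for bp in reversed(back):                         # backtrack root -> leaf
    best.append(int(bp[best[-1]]))
print("MAP part locations:", dict(zip(parts, best)))
print("MAP score:", float(root_score.max()))
```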
---
paper_title: Real-time human pose recognition in parts from single depth images
paper_content:
We propose a new method to quickly and accurately predict 3D positions of body joints from a single depth image, using no temporal information. We take an object recognition approach, designing an intermediate body parts representation that maps the difficult pose estimation problem into a simpler per-pixel classification problem. Our large and highly varied training dataset allows the classifier to estimate body parts invariant to pose, body shape, clothing, etc. Finally we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs at 200 frames per second on consumer hardware. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state of the art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.
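A toy version of the per-pixel classification stage: depth-difference features of the kind described above (two offsets, scaled by the inverse depth at the centre pixel) feed a random forest. The depth map, offsets and labels below are synthetic stand-ins for Kinect training data, so the classifier only demonstrates the pipeline, not real accuracy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def depth_features(depth, pixels, offsets):
    """Depth-difference features at two depth-normalized offsets per pair.
    Out-of-image reads are clamped; 'depth' is a single HxW depth map."""
    h, w = depth.shape
    feats = []
    for (y, x) in pixels:
        d = max(depth[y, x], 1e-3)
        row = []
        for (u, v) in offsets:
            y1 = int(np.clip(y + u[0] / d, 0, h - 1)); x1 = int(np.clip(x + u[1] / d, 0, w - 1))
            y2 = int(np.clip(y + v[0] / d, 0, h - 1)); x2 = int(np.clip(x + v[1] / d, 0, w - 1))
            row.append(depth[y1, x1] - depth[y2, x2])
        feats.append(row)
    return np.array(feats)

# Synthetic stand-in for labelled training depth data (labels 0..3 = 4 "parts").
depth = rng.uniform(1.0, 4.0, size=(120, 160)).astype(np.float32)
pixels = [(rng.integers(0, 120), rng.integers(0, 160)) for _ in range(2000)]
offsets = [((rng.normal(0, 30), rng.normal(0, 30)),
            (rng.normal(0, 30), rng.normal(0, 30))) for _ in range(32)]
labels = rng.integers(0, 4, size=len(pixels))      # random labels: pipeline demo only

X = depth_features(depth, pixels, offsets)
clf = RandomForestClassifier(n_estimators=3, max_depth=20, random_state=0).fit(X, labels)
print("per-pixel body-part classifier trained; example predictions:", clf.predict(X[:5]))
```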
---
paper_title: Learning hierarchical poselets for human parsing
paper_content:
We consider the problem of human parsing with part-based models. Most previous work in part-based models only considers rigid parts (e.g. torso, head, half limbs) guided by human anatomy. We argue that this representation of parts is not necessarily appropriate for human parsing. In this paper, we introduce hierarchical poselets–a new representation for human parsing. Hierarchical poselets can be rigid parts, but they can also be parts that cover large portions of human bodies (e.g. torso + left arm). In the extreme case, they can be the whole bodies. We develop a structured model to organize poselets in a hierarchical way and learn the model parameters in a max-margin framework. We demonstrate the superior performance of our proposed approach on two datasets with aggressive pose variations.
---
paper_title: Graph cuts optimization for multi-limb human segmentation in depth maps
paper_content:
We present a generic framework for object segmentation using depth maps based on Random Forest and Graph-cuts theory, and apply it to the segmentation of human limbs in depth maps. First, from a set of random depth features, Random Forest is used to infer a set of label probabilities for each data sample. This vector of probabilities is used as unary term in α-β swap Graph-cuts algorithm. Moreover, depth of spatio-temporal neighboring data points are used as boundary potentials. Results on a new multi-label human depth data set show high performance in terms of segmentation overlapping of the novel methodology compared to classical approaches.
---
paper_title: Learning realistic human actions from movies
paper_content:
The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8% accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.
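A compact sketch of the bag-of-video-words stage described above: local space-time descriptors (random stand-ins here) are quantized with k-means into a visual vocabulary, each clip becomes a word histogram, and a χ²-kernel SVM classifies the clips. All numeric data is synthetic.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, words, dim = 60, 50, 72                   # clips, vocabulary size, descriptor dim

# Stand-in local descriptors: each clip contributes ~200 random 72-D vectors.
clip_descs = [rng.normal(loc=c % 2, size=(200, dim)) for c in range(n_clips)]
labels = np.array([c % 2 for c in range(n_clips)])  # two action classes

vocab = MiniBatchKMeans(n_clusters=words, n_init=3, random_state=0)
vocab.fit(np.vstack(clip_descs))

def bov_histogram(descs):
    h = np.bincount(vocab.predict(descs), minlength=words).astype(float)
    return h / h.sum()

H = np.array([bov_histogram(d) for d in clip_descs])

# Chi-square kernel SVM over the word histograms.
K = chi2_kernel(H, gamma=0.5)
svm = SVC(kernel="precomputed").fit(K, labels)
print(f"training accuracy on the synthetic clips: {svm.score(K, labels):.2f}")
```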
---
paper_title: Spelling it out: Real-time ASL fingerspelling recognition
paper_content:
This article presents an interactive hand shape recognition user interface for American Sign Language (ASL) finger-spelling. The system makes use of a Microsoft Kinect device to collect appearance and depth images, and of the OpenNI+NITE framework for hand detection and tracking. Hand-shapes corresponding to letters of the alphabet are characterized using appearance and depth images and classified using random forests. We compare classification using appearance and depth images, and show a combination of both lead to best results, and validate on a dataset of four different users. This hand shape detection works in real-time and is integrated in an interactive user interface allowing the signer to select between ambiguous detections and integrated with an English dictionary for efficient writing.
---
paper_title: Monocular 3D pose estimation and tracking by detection
paper_content:
Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions.
---
paper_title: Fourier Active Appearance Models
paper_content:
Gaining invariance to camera and illumination variations has been a well investigated topic in Active Appearance Model (AAM) fitting literature. The major problem lies in the inability of the appearance parameters of the AAM to generalize to unseen conditions. An attractive approach for gaining invariance is to fit an AAM to a multiple filter response (e.g. Gabor) representation of the input image. Naively applying this concept with a traditional AAM is computationally prohibitive, especially as the number of filter responses increase. In this paper, we present a computationally efficient AAM fitting algorithm based on the Lucas-Kanade (LK) algorithm posed in the Fourier domain that affords invariance to both expression and illumination. We refer to this as a Fourier AAM (FAAM), and show that this method gives substantial improvement in person specific AAM fitting performance over traditional AAM fitting methods.
---
paper_title: Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters.
paper_content:
Two-dimensional spatial linear filters are constrained by general uncertainty relations that limit their attainable information resolution for orientation, spatial frequency, and two-dimensional (2D) spatial position. The theoretical lower limit for the joint entropy, or uncertainty, of these variables is achieved by an optimal 2D filter family whose spatial weighting functions are generated by exponentiated bivariate second-order polynomials with complex coefficients, the elliptic generalization of the one-dimensional elementary functions proposed in Gabor's famous theory of communication [J. Inst. Electr. Eng. 93, 429 (1946)]. The set includes filters with various orientation bandwidths, spatial-frequency bandwidths, and spatial dimensions, favoring the extraction of various kinds of information from an image. Each such filter occupies an irreducible quantal volume (corresponding to an independent datum) in a four-dimensional information hyperspace whose axes are interpretable as 2D visual space, orientation, and spatial frequency, and thus such a filter set could subserve an optimally efficient sampling of these variables. Evidence is presented that the 2D receptive-field profiles of simple cells in mammalian visual cortex are well described by members of this optimal 2D filter family, and thus such visual neurons could be said to optimize the general uncertainty relations for joint 2D-spatial-2D-spectral information resolution. The variety of their receptive-field dimensions and orientation and spatial-frequency bandwidths, and the correlations among these, reveal several underlying constraints, particularly in width/length aspect ratio and principal axis organization, suggesting a polar division of labor in occupying the quantal volumes of information hyperspace.(ABSTRACT TRUNCATED AT 250 WORDS)
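A short sketch that builds a small bank of 2D Gabor filters of the kind analysed above (four orientations, two spatial frequencies) with OpenCV and applies it to a stand-in image; the kernel size, sigma, wavelength and aspect ratio are illustrative choices, not the cortical fits discussed in the paper.

```python
import cv2
import numpy as np

# Build a small bank of 2D Gabor filters: 4 orientations x 2 wavelengths.
kernels = []
for theta in np.arange(0, np.pi, np.pi / 4):
    for lam in (8.0, 16.0):
        k = cv2.getGaborKernel(ksize=(31, 31), sigma=4.0, theta=theta,
                               lambd=lam, gamma=0.5, psi=0.0)
        kernels.append((theta, lam, k / np.abs(k).sum()))

img = np.float32(np.random.rand(128, 128))     # stand-in grayscale image
responses = [cv2.filter2D(img, cv2.CV_32F, k) for _, _, k in kernels]

for (theta, lam, _), r in zip(kernels, responses):
    print(f"theta={theta:.2f} rad  lambda={lam:4.1f}px  "
          f"mean|response|={np.abs(r).mean():.4f}")
```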
---
paper_title: Real-time identification and localization of body parts from depth images
paper_content:
We deal with the problem of detecting and identifying body parts in depth images at video frame rates. Our solution involves a novel interest point detector for mesh and range data that is particularly well suited for analyzing human shape. The interest points, which are based on identifying geodesic extrema on the surface mesh, coincide with salient points of the body, which can be classified as, e.g., hand, foot or head using local shape descriptors. Our approach also provides a natural way of estimating a 3D orientation vector for a given interest point. This can be used to normalize the local shape descriptors to simplify the classification problem as well as to directly estimate the orientation of body parts in space. Experiments involving ground truth labels acquired via an active motion capture system show that our interest points in conjunction with a boosted patch classifier are significantly better in detecting body parts in depth images than state-of-the-art sliding-window based detectors.
---
paper_title: Selective spatio-temporal interest points
paper_content:
Recent progress in the field of human action recognition points towards the use of Spatio-Temporal Interest Points (STIPs) for local descriptor-based recognition strategies. In this paper, we present a novel approach for robust and selective STIP detection, by applying surround suppression combined with local and temporal constraints. This new method is significantly different from existing STIP detection techniques and improves the performance by detecting more repeatable, stable and distinctive STIPs for human actors, while suppressing unwanted background STIPs. For action representation we use a bag-of-video words (BoV) model of local N-jet features to build a vocabulary of visual-words. To this end, we introduce a novel vocabulary building strategy by combining spatial pyramid and vocabulary compression techniques, resulting in improved performance and efficiency. Action class specific Support Vector Machine (SVM) classifiers are trained for categorization of human actions. A comprehensive set of experiments on popular benchmark datasets (KTH and Weizmann), more challenging datasets of complex scenes with background clutter and camera motion (CVC and CMU), movie and YouTube video clips (Hollywood 2 and YouTube), and complex scenes with multiple actors (MSR I and Multi-KTH), validates our approach and show state-of-the-art performance. Due to the unavailability of ground truth action annotation data for the Multi-KTH dataset, we introduce an actor specific spatio-temporal clustering of STIPs to address the problem of automatic action annotation of multiple simultaneous actors. Additionally, we perform cross-data action recognition by training on source datasets (KTH and Weizmann) and testing on completely different and more challenging target datasets (CVC, CMU, MSR I and Multi-KTH). This documents the robustness of our proposed approach in the realistic scenario, using separate training and test datasets, which in general has been a shortcoming in the performance evaluation of human action recognition techniques.
---
paper_title: Human body pose estimation using silhouette shape analysis
paper_content:
We describe a system for human body pose estimation from multiple views that is fast and completely automatic. The algorithm works in the presence of multiple people by decoupling the problems of pose estimation of different people. The pose is estimated based on a likelihood function that integrates information from multiple views and thus obtains a globally optimal solution. Other characteristics that make our method more general than previous work include: (1) no manual initialization; (2) no specification of the dimensions of the 3D structure; (3) no reliance on some learned poses or patterns of activity; (4) insensitivity to edges and clutter in the background and within the foreground. The algorithm has applications in surveillance and promising results have been obtained.
---
paper_title: Histograms of oriented gradients for human detection
paper_content:
We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
---
paper_title: Evaluating Color Descriptors for Object and Scene Recognition
paper_content:
Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.
---
paper_title: Grouplet: A structured image representation for recognizing human and object interactions
paper_content:
Psychologists have proposed that many human-object interaction activities form unique classes of scenes. Recognizing these scenes is important for many social functions. To enable a computer to do this is however a challenging task. Take people-playing-musical-instrument (PPMI) as an example; to distinguish a person playing violin from a person just holding a violin requires subtle distinction of characteristic image features and feature arrangements that differentiate these two scenes. Most of the existing image representation methods are either too coarse (e.g. BoW) or too sparse (e.g. constellation models) for performing this task. In this paper, we propose a new image feature representation called “grouplet”. The grouplet captures the structured information of an image by encoding a number of discriminative visual features and their spatial configurations. Using a dataset of 7 different PPMI activities, we show that grouplets are more effective in classifying and detecting human-object interactions than other state-of-the-art methods. In particular, our method can make a robust distinction between humans playing the instruments and humans co-occurring with the instruments without playing.
---
paper_title: Performance of optical flow techniques
paper_content:
The performance of six optical flow techniques is compared, emphasizing measurement accuracy. The most accurate methods are found to be the local differential approaches, where v is computed explicitly in terms of a locally constant or linear model. Techniques using global smoothness constraints appear to produce visually attractive flow fields, but in general seem to be accurate enough for qualitative use only and insufficient as precursors to the computations of egomotion and 3D structures. It is found that some form of confidence measure/threshold is crucial for all techniques in order to separate the inaccurate from the accurate. Drawbacks of the six techniques are discussed.
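A usage sketch of one local differential technique of the family this study ranks highest: pyramidal Lucas-Kanade as implemented in OpenCV, run on two synthetic frames containing a square shifted by (3, 2) pixels.

```python
import cv2
import numpy as np

# Two synthetic frames: a bright square shifted by (3, 2) pixels.
prev = np.zeros((120, 160), np.uint8); prev[40:70, 50:80] = 255
curr = np.zeros((120, 160), np.uint8); curr[42:72, 53:83] = 255

# Points to track: corners of the square found in the first frame.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=20, qualityLevel=0.01, minDistance=5)

# Pyramidal Lucas-Kanade: a local differential technique with an explicit local model.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                           winSize=(21, 21), maxLevel=2)
flow = (p1 - p0).reshape(-1, 2)[status.ravel() == 1]
print("median flow vector (x, y):", np.median(flow, axis=0))   # expect roughly (3, 2)
```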
---
paper_title: On Space-Time Interest Points
paper_content:
Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. ::: ::: To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.
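A simplified NumPy rendition of the space-time Harris idea: build the 3×3 spatio-temporal structure tensor from smoothed gradients and keep local maxima of the Harris-style cornerness. The smoothing scales, the k constant and the relative threshold are illustrative, and the scale selection of the paper is left out.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def stip(video, sigma=1.5, tau=1.5, k=0.005):
    """Simplified space-time Harris detector; video is a (T, H, W) float array."""
    L = gaussian_filter(video, sigma=(tau, sigma, sigma))
    Lt, Ly, Lx = np.gradient(L)
    smooth = lambda a: gaussian_filter(a, sigma=(2 * tau, 2 * sigma, 2 * sigma))
    Mxx, Myy, Mtt = smooth(Lx * Lx), smooth(Ly * Ly), smooth(Lt * Lt)
    Mxy, Mxt, Myt = smooth(Lx * Ly), smooth(Lx * Lt), smooth(Ly * Lt)
    det = (Mxx * (Myy * Mtt - Myt ** 2)
           - Mxy * (Mxy * Mtt - Myt * Mxt)
           + Mxt * (Mxy * Myt - Myy * Mxt))
    trace = Mxx + Myy + Mtt
    H = det - k * trace ** 3                      # 3D Harris-style cornerness
    peaks = (H == maximum_filter(H, size=5)) & (H > 0.5 * H.max())
    return np.argwhere(peaks)                     # (t, y, x) interest points

# Synthetic clip: a bright blob that jumps to a new position halfway through,
# creating strong spatio-temporal intensity variation around frame 8.
clip = np.zeros((16, 64, 64), np.float32)
clip[:8, 20:28, 20:28] = 1.0
clip[8:, 36:44, 36:44] = 1.0
print("space-time interest points (t, y, x):")
print(stip(clip)[:10])
```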
---
paper_title: Object Detection with Discriminatively Trained Part-Based Models
paper_content:
We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.
---
paper_title: Monocular 3D pose estimation and tracking by detection
paper_content:
Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions.
---
paper_title: Histograms of oriented gradients for human detection
paper_content:
We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
---
paper_title: Poselets: Body part detectors trained using 3D human pose annotations
paper_content:
We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers.
---
paper_title: "GrabCut": interactive foreground extraction using iterated graph cuts
paper_content:
The problem of efficient, interactive foreground/background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for "border matting" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools.
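A minimal usage sketch of the iterated graph-cut segmentation through OpenCV's implementation; the image path and the initial rectangle are placeholders for a real photograph and a rough box around the foreground subject (border matting is not part of this API call).

```python
import cv2
import numpy as np

img = cv2.imread("person.jpg")                 # placeholder path (assumption)
if img is None:
    raise SystemExit("provide an input image to run this sketch")

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)      # internal GMM state
fgd_model = np.zeros((1, 65), np.float64)
rect = (30, 30, img.shape[1] - 60, img.shape[0] - 60)   # rough foreground box

# Five iterations of the GrabCut energy minimization initialized from the box.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (probably) foreground form the extracted object.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
cv2.imwrite("foreground.png", img * fg[:, :, None])
```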
---
paper_title: A Comparison of Affine Region Detectors
paper_content:
The paper gives a snapshot of the state of the art in affine covariant region detectors, and compares their performance on a set of test images under varying imaging conditions. Six types of detectors are included: detectors based on affine normalization around Harris (Mikolajczyk and Schmid, 2002; Schaffalitzky and Zisserman, 2002) and Hessian points (Mikolajczyk and Schmid, 2002), a detector of `maximally stable extremal regions', proposed by Matas et al. (2002); an edge-based region detector (Tuytelaars and Van Gool, 1999) and a detector based on intensity extrema (Tuytelaars and Van Gool, 2000), and a detector of `salient regions', proposed by Kadir, Zisserman and Brady (2004). The performance is measured against changes in viewpoint, scale, illumination, defocus and image compression. ::: ::: The objective of this paper is also to establish a reference test set of images and performance software, so that future detectors can be evaluated in the same framework.
---
paper_title: On Space-Time Interest Points
paper_content:
Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Förstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.
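
A compact numpy/scipy sketch of the underlying idea: build a 3D (x, y, t) structure tensor from smoothed spatio-temporal gradients and score every voxel with a Harris-style function. The scale-selection and N-jet descriptor steps of the paper are omitted, and the constant k is an arbitrary choice.

import numpy as np
from scipy.ndimage import gaussian_filter

def space_time_interest(volume, sigma=1.5, tau=1.5, k=0.005):
    """volume: video as a (T, H, W) float array; returns a per-voxel cornerness score."""
    v = gaussian_filter(volume.astype(float), sigma=(tau, sigma, sigma))
    gt, gy, gx = np.gradient(v)

    def w(a):  # integrate tensor entries over a larger spatio-temporal window
        return gaussian_filter(a, sigma=(2 * tau, 2 * sigma, 2 * sigma))

    xx, yy, tt = w(gx * gx), w(gy * gy), w(gt * gt)
    xy, xt, yt = w(gx * gy), w(gx * gt), w(gy * gt)

    det = (xx * (yy * tt - yt * yt)
           - xy * (xy * tt - yt * xt)
           + xt * (xy * yt - yy * xt))
    trace = xx + yy + tt
    return det - k * trace ** 3            # Harris-style response extended to space-time

# toy usage: location of the strongest response in a random video volume
rng = np.random.default_rng(0)
video = rng.random((20, 64, 64))
H = space_time_interest(video)
print(np.unravel_index(np.argmax(H), H.shape))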
---
paper_title: Monocular 3D pose estimation and tracking by detection
paper_content:
Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions.
---
paper_title: Pictorial structures revisited: People detection and articulated pose estimation
paper_content:
Non-rigid object detection and articulated pose estimation are two related and challenging problems in computer vision. Numerous models have been proposed over the years and often address different special cases, such as pedestrian detection or upper body pose estimation in TV footage. This paper shows that such specialization may not be necessary, and proposes a generic approach based on the pictorial structures framework. We show that the right selection of components for both appearance and spatial modeling is crucial for general applicability and overall performance of the model. The appearance of body parts is modeled using densely sampled shape context descriptors and discriminatively trained AdaBoost classifiers. Furthermore, we interpret the normalized margin of each classifier as likelihood in a generative model. Non-Gaussian relationships between parts are represented as Gaussians in the coordinate system of the joint between parts. The marginal posterior of each part is inferred using belief propagation. We demonstrate that such a model is equally suitable for both detection and pose estimation tasks, outperforming the state of the art on three recently proposed datasets.
---
paper_title: A Hierarchical Model of Dynamics for Tracking People with a Single Video Camera
paper_content:
We propose a novel hierarchical model of human dynamics for view independent tracking of the human body in monocular video sequences. The model is trained using real data from a collection of people. Kinematics are encoded using Hierarchical Principal Component Analysis, and dynamics are encoded using Hidden Markov Models. The top of the hierarchy contains information about the whole body. The lower levels of the hierarchy contain more detailed information about possible poses of some subpart of the body. When tracking, the lower levels of the hierarchy are shown to improve accuracy. In this article we describe our model and present experiments that show we can recover 3D skeletons from 2D images in a view independent manner, and also track people the system was not trained on.
---
paper_title: 3D generic object categorization, localization and pose estimation
paper_content:
We propose a novel and robust model to represent and learn generic 3D object categories. We aim to solve the problem of true 3D object categorization for handling arbitrary rotations and scale changes. Our approach is to capture a compact model of an object category by linking together diagnostic parts of the objects from different viewing points. We emphasize that our "parts" are large and discriminative regions of the objects that are composed of many local invariant features. Instead of recovering a full 3D geometry, we connect these parts through their mutual homographic transformation. The resulting model is a compact summarization of both the appearance and geometry information of the object class. We propose a framework in which learning is done via minimal supervision compared to previous works. Our results on categorization show superior performance to state-of-the-art algorithms such as (Thomas et al., 2006). Furthermore, we have compiled a new 3D object dataset that consists of 10 different object categories. We have tested our algorithm on this dataset and have obtained highly promising results.
---
paper_title: A multi-view probabilistic model for 3D object classes
paper_content:
We propose a novel probabilistic framework for learning visual models of 3D object categories by combining appearance information and geometric constraints. Objects are represented as a coherent ensemble of parts that are consistent under 3D viewpoint transformations. Each part is a collection of salient image features. A generative framework is used for learning a model that captures the relative position of parts within each of the discretized viewpoints. Contrary to most of the existing mixture of viewpoints models, our model establishes explicit correspondences of parts across different viewpoints of the object class. Given a new image, detection and classification are achieved by determining the position and viewpoint of the model that maximize recognition scores of the candidate objects. Our approach is among the first to propose a generative probabilistic framework for 3D object categorization. We test our algorithm on the detection task and the viewpoint classification task by using “car” category from both the Savarese et al. 2007 and PASCAL VOC 2006 datasets. We show promising results in both the detection and viewpoint classification tasks on these two challenging datasets.
---
paper_title: Learning a dense multi-view representation for detection, viewpoint classification and synthesis of object categories
paper_content:
Recognizing object classes and their 3D viewpoints is an important problem in computer vision. Based on a part-based probabilistic representation [31], we propose a new 3D object class model that is capable of recognizing unseen views by pose estimation and synthesis. We achieve this by using a dense, multiview representation of the viewing sphere parameterized by a triangular mesh of viewpoints. Each triangle of viewpoints can be morphed to synthesize new viewpoints. By incorporating 3D geometrical constraints, our model establishes explicit correspondences among object parts across viewpoints. We propose an incremental learning algorithm to train the generative model. A cellphone video clip of an object is first used to initialize model learning. Then the model is updated by a set of unsorted training images without viewpoint labels. We demonstrate the robustness of our model on object detection, viewpoint classification and synthesis tasks. Our model performs superiorly to and on par with state-of-the-art algorithms on the Savarese et al. 2007 and PASCAL datasets in object detection. It outperforms all previous work in viewpoint classification and offers promising results in viewpoint synthesis.
---
paper_title: Simultaneous pose, correspondence and non-rigid shape
paper_content:
Recent works have shown that the 3D shape of non-rigid surfaces can be accurately retrieved from a single image given a set of 3D-to-2D correspondences between that image and another one for which the shape is known. However, existing approaches assume that such correspondences can be readily established, which is not necessarily true when large deformations produce significant appearance changes between the input and the reference images. Furthermore, it is either assumed that the pose of the camera is known, or the estimated solution is pose-ambiguous. In this paper we relax all these assumptions and, given a set of 3D and 2D unmatched points, we present an approach to simultaneously solve their correspondences, compute the camera pose and retrieve the shape of the surface in the input image. This is achieved by introducing weak priors on the pose and shape that we model as Gaussian Mixtures. By combining them into a Kalman filter we can progressively reduce the number of 2D candidates that can be potentially matched to each 3D point, while pose and shape are refined. This lets us perform a complete and efficient exploration of the solution space and retain the best solution.
---
paper_title: Pose Priors for Simultaneously Solving Alignment and Correspondence
paper_content:
Estimating a camera pose given a set of 3D-object and 2D-image feature points is a well understood problem when correspondences are given. However, when such correspondences cannot be established a priori, one must simultaneously compute them along with the pose. Most current approaches to solving this problem are too computationally intensive to be practical. An interesting exception is the SoftPosit algorithm, that looks for the solution as the minimum of a suitable objective function. It is arguably one of the best algorithms but its iterative nature means it can fail in the presence of clutter, occlusions, or repetitive patterns. In this paper, we propose an approach that overcomes this limitation by taking advantage of the fact that, in practice, some prior on the camera pose is often available. We model it as a Gaussian Mixture Model that we progressively refine by hypothesizing new correspondences. This rapidly reduces the number of potential matches for each 3D point and lets us explore the pose space more thoroughly than SoftPosit at a similar computational cost. We will demonstrate the superior performance of our approach on both synthetic and real data.
---
paper_title: Single image 3D human pose estimation from noisy observations
paper_content:
Markerless 3D human pose detection from a single image is a severely underconstrained problem because different 3D poses can have similar image projections. In order to handle this ambiguity, current approaches rely on prior shape models that can only be correctly adjusted if 2D image features are accurately detected. Unfortunately, although current 2D part detector algorithms have shown promising results, they are not yet accurate enough to guarantee a complete disambiguation of the 3D inferred shape. In this paper, we introduce a novel approach for estimating 3D human pose even when observations are noisy. We propose a stochastic sampling strategy to propagate the noise from the image plane to the shape space. This provides a set of ambiguous 3D shapes, which are virtually indistinguishable from their image projections. Disambiguation is then achieved by imposing kinematic constraints that guarantee the resulting pose resembles a 3D human shape. We validate the method on a variety of situations in which state-of-the-art 2D detectors yield either inaccurate estimations or partly miss some of the body parts.
---
paper_title: Parsing human motion with stretchable models
paper_content:
We address the problem of articulated human pose estimation in videos using an ensemble of tractable models with rich appearance, shape, contour and motion cues. In previous articulated pose estimation work on unconstrained videos, using temporal coupling of limb positions has made little to no difference in performance over parsing frames individually [8, 28]. One crucial reason for this is that joint parsing of multiple articulated parts over time involves intractable inference and learning problems, and previous work has resorted to approximate inference and simplified models. We overcome these computational and modeling limitations using an ensemble of tractable submodels which couple locations of body joints within and across frames using expressive cues. Each submodel is responsible for tracking a single joint through time (e.g., left elbow) and also models the spatial arrangement of all joints in a single frame. Because of the tree structure of each submodel, we can perform efficient exact inference and use rich temporal features that depend on image appearance, e.g., color tracking and optical flow contours. We propose and experimentally investigate a hierarchy of submodel combination methods, and we find that a highly efficient max-marginal combination method outperforms much slower (by orders of magnitude) approximate inference using dual decomposition. We apply our pose model on a new video dataset of highly varied and articulated poses from TV shows. We show significant quantitative and qualitative improvements over state-of-the-art single-frame pose estimation approaches.
---
paper_title: 2D Articulated Human Pose Estimation and Retrieval in (Almost) Unconstrained Still Images
paper_content:
We present a technique for estimating the spatial layout of humans in still images—the position of the head, torso and arms. The theme we explore is that once a person is localized using an upper body detector, the search for their body parts can be considerably simplified using weak constraints on position and appearance arising from that detection. Our approach is capable of estimating upper body pose in highly challenging uncontrolled images, without prior knowledge of background, clothing, lighting, or the location and scale of the person in the image. People are only required to be upright and seen from the front or the back (not side). We evaluate the stages of our approach experimentally using ground truth layout annotation on a variety of challenging material, such as images from the PASCAL VOC 2008 challenge and video frames from TV shows and feature films. We also propose and evaluate techniques for searching a video dataset for people in a specific pose. To this end, we develop three new pose descriptors and compare their classification and retrieval performance to two baselines built on state-of-the-art object detection models.
---
paper_title: Object Detection with Discriminatively Trained Part-Based Models
paper_content:
We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.
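
The alternation described in the last sentence can be sketched generically: fix the latent value of each positive example at its current best score, then solve the resulting convex hinge-loss problem (here with plain subgradient descent). The scalar examples, feature map and latent set below are toy placeholders, not the released detector code.

import numpy as np

def train_latent_svm(pos, neg, phi, latents, dim, C=1.0, rounds=5, epochs=200, lr=1e-2):
    """pos/neg: raw examples; phi(x, z) -> feature vector of length dim;
    latents(x) -> candidate latent values (e.g. part placements) for example x."""
    w = np.zeros(dim)
    for _ in range(rounds):
        # step 1: fix latent values for positives at their current best score
        zpos = [max(latents(x), key=lambda z: w @ phi(x, z)) for x in pos]
        # step 2: the now-convex problem (negatives keep the max over latents)
        for _ in range(epochs):
            grad = w.copy()                                   # L2 regulariser
            for x, z in zip(pos, zpos):
                if w @ phi(x, z) < 1:
                    grad -= C * phi(x, z)
            for x in neg:
                z = max(latents(x), key=lambda z: w @ phi(x, z))
                if w @ phi(x, z) > -1:
                    grad += C * phi(x, z)
            w -= lr * grad
    return w

# toy usage: scalar examples with a latent shift in {-1, 0, 1}
phi = lambda x, z: np.array([x + z, 1.0])
latents = lambda x: (-1.0, 0.0, 1.0)
print(train_latent_svm(pos=[2.0, 3.0], neg=[-2.0, -3.0], phi=phi, latents=latents, dim=2))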
---
paper_title: Toward Real-Time Pedestrian Detection Based on a Deformable Template Model
paper_content:
Most advanced driving assistance systems already include pedestrian detection systems. Unfortunately, there is still a tradeoff between precision and real time. For reliable detection, an excellent precision-recall tradeoff is needed to detect as many pedestrians as possible while, at the same time, avoiding too many false alarms; in addition, very fast computation is needed for fast reactions to dangerous situations. Recently, novel approaches based on deformable templates have been proposed since these show a reasonable detection performance although they are computationally too expensive for real-time performance. In this paper, we present a system for pedestrian detection based on a hierarchical multiresolution part-based model. The proposed system is able to achieve state-of-the-art detection accuracy due to the local deformations of the parts while exhibiting a speedup of more than one order of magnitude due to a fast coarse-to-fine inference technique. Moreover, our system explicitly infers the level of resolution available so that the detection of small examples is feasible with a very reduced computational cost. We conclude this contribution by presenting how a graphics processing unit-optimized implementation of our proposed system is suitable for real-time pedestrian detection in terms of both accuracy and speed.
---
paper_title: Object detection with grammar models
paper_content:
Compositional models provide an elegant formalism for representing the visual appearance of highly variable objects. While such models are appealing from a theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for partially visible objects. To train the model, we introduce a new discriminative framework for learning structured prediction models from weakly-labeled data.
---
paper_title: Tracking loose-limbed people
paper_content:
We pose the problem of 3D human tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely-connected limbs. Conditional probabilities relating the 3D pose of connected limbs are learned from motion-captured training data. Similarly, we learn probabilistic models for the temporal evolution of each limb (forward and backward in time). Human pose and motion estimation is then solved with non-parametric belief propagation using a variation of particle filtering that can be applied over a general loopy graph. The loose-limbed model and decentralized graph structure facilitate the use of low-level visual cues. We adopt simple limb and head detectors to provide "bottom-up" information that is incorporated into the inference process at every time-step; these detectors permit automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking a walking person in video imagery using four calibrated cameras. Our experimental apparatus includes a marker-based motion capture system aligned with the coordinate frame of the calibrated cameras with which we quantitatively evaluate the accuracy of our 3D person tracker.
---
paper_title: Pictorial Structures for Object Recognition
paper_content:
In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.
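
The matching step, minimising appearance costs plus spring-like deformation costs over a tree by dynamic programming, fits in a short sketch. The brute-force inner minimisation below is quadratic in the number of candidate locations per part; the paper's generalised distance transforms make it linear. Part names, 1-D locations and costs are toy placeholders.

import numpy as np

def min_tree_config(unary, edges, spring):
    """unary: {part: cost over L candidate 1-D locations}; edges: (parent, child) pairs
    in root-down order, rooted at the first parent; spring: squared-distance weight."""
    locs = {p: np.arange(len(c), dtype=float) for p, c in unary.items()}
    msg, best = {}, {}

    # leaves-to-root: each child reports its best cost for every parent location
    for parent, child in reversed(edges):
        cost = unary[child] + sum(msg.get((child, g), 0.0) for q, g in edges if q == child)
        deform = (locs[parent][:, None] - locs[child][None, :]) ** 2
        table = cost[None, :] + spring * deform
        msg[(parent, child)] = table.min(axis=1)
        best[(parent, child)] = table.argmin(axis=1)

    # pick the root location, then backtrack root-to-leaves
    root = edges[0][0]
    root_cost = unary[root] + sum(msg[(root, c)] for p, c in edges if p == root)
    sol = {root: int(np.argmin(root_cost))}
    for parent, child in edges:
        sol[child] = int(best[(parent, child)][sol[parent]])
    return sol

# toy usage: a torso with two limbs, five candidate locations per part
rng = np.random.default_rng(1)
unary = {p: rng.random(5) for p in ("torso", "arm", "leg")}
print(min_tree_config(unary, edges=[("torso", "arm"), ("torso", "leg")], spring=0.1))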
---
paper_title: Articulated pose estimation with flexible mixtures-of-parts
paper_content:
We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster.
---
paper_title: Beyond trees: common-factor models for 2D human pose recovery
paper_content:
Tree structured models have been widely used for determining the pose of a human body, from either 2D or 3D data. While such models can effectively represent the kinematic constraints of the skeletal structure, they do not capture additional constraints such as coordination of the limbs. Tree structured models thus miss an important source of information about human body pose, as limb coordination is necessary for balance while standing, walking, or running, as well as being evident in other activities such as dancing and throwing. In this paper, we consider the use of undirected graphical models that augment a tree structure with latent variables in order to account for coordination between limbs. We refer to these as common-factor models, since they are constructed by using factor analysis to identify additional correlations in limb position that are not accounted for by the kinematic tree structure. These common-factor models have an underlying tree structure and thus a variant of the standard Viterbi algorithm for a tree can be applied for efficient estimation. We present some experimental results contrasting common-factor models with tree models, and quantify the improvement in pose estimation for 2D image data.
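
The "common factor" idea, a few shared latent variables explaining limb correlations that the kinematic tree misses, can be sketched with off-the-shelf factor analysis. The pose data below is synthetic and the choice of two factors is arbitrary; this is not the paper's model, only the statistical ingredient it builds on.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# synthetic poses: 200 frames x 12 joint coordinates, driven by one hidden
# "coordination" variable that moves several limbs together
rng = np.random.default_rng(0)
coordination = rng.standard_normal((200, 1))
loading = rng.standard_normal((1, 12))
poses = coordination @ loading + 0.1 * rng.standard_normal((200, 12))

# fit a small number of common factors on top of per-joint noise
fa = FactorAnalysis(n_components=2, random_state=0)
latent = fa.fit_transform(poses)               # per-frame common-factor activations
print(fa.components_.shape, latent.shape)      # (2, 12) loadings, (200, 2) factors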
---
paper_title: Max Margin AND/OR Graph learning for parsing the human body
paper_content:
We present a novel structure learning method, Max Margin AND/OR graph (MM-AOG), for parsing the human body into parts and recovering their poses. Our method represents the human body and its parts by an AND/OR graph, which is a multi-level mixture of Markov random fields (MRFs). Max-margin learning, which is a generalization of the training algorithm for support vector machines (SVMs), is used to learn the parameters of the AND/OR graph model discriminatively. There are four advantages from this combination of AND/OR graphs and max-margin learning. Firstly, the AND/OR graph allows us to handle enormous articulated poses with a compact graphical model. Secondly, max-margin learning has more discriminative power than the traditional maximum likelihood approach. Thirdly, the parameters of the AND/OR graph model are optimized globally. In particular, the weights of the appearance model for individual nodes and the relative importance of spatial relationships between nodes are learnt simultaneously. Finally, the kernel trick can be used to handle high dimensional features and to enable complex similarity measures of shapes. We perform comparison experiments on the baseball datasets, showing significant improvements over state-of-the-art methods.
---
paper_title: Object detection grammars
paper_content:
In this talk I will discuss various aspects of object detection using compositional models, focusing on the framework of object detection grammars, discriminative training and efficient computation.
---
paper_title: Rapid inference on a novel and/or graph for object detection, segmentation and parsing
paper_content:
In this paper we formulate a novel AND/OR graph representation capable of describing the different configurations of deformable articulated objects such as horses. The representation makes use of the summarization principle so that lower level nodes in the graph only pass on summary statistics to the higher level nodes. The probability distributions are invariant to position, orientation, and scale. We develop a novel inference algorithm that combines a bottom-up process for proposing configurations for horses together with a top-down process for refining and validating these proposals. The strategy of surround suppression is applied to ensure that the inference time is polynomial in the size of input data. The algorithm was applied to the tasks of detecting, segmenting and parsing horses. We demonstrate that the algorithm is fast and comparable with state-of-the-art approaches.
---
paper_title: Kinematic jump processes for monocular 3D human tracking
paper_content:
A major difficulty for 3D (three-dimensional) human body tracking from monocular image sequences is the near nonobservability of kinematic degrees of freedom that generate motion in depth. For known link (body segment) lengths, the strict nonobservabilities reduce to twofold 'forwards/backwards flipping' ambiguities for each link. These imply 2^(#links) formal inverse kinematics solutions for the full model, and hence linked groups of O(2^(#links)) local minima in the model-image matching cost function. Choosing the wrong minimum leads to rapid mistracking, so for reliable tracking, rapid methods of investigating alternative minima within a group are needed. Previous approaches to this have used generic search methods that do not exploit the specific problem structure. Here, we complement these by using simple kinematic reasoning to enumerate the tree of possible forwards/backwards flips, thus greatly speeding the search within each linked group of minima. Our methods can be used either deterministically, or within stochastic 'jump-diffusion' style search processes. We give experimental results on some challenging monocular human tracking sequences, showing how the new kinematic-flipping based sampling method improves and complements existing ones.
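
The combinatorics are easy to make concrete: with known link lengths, each link admits a forwards or backwards interpretation in depth, so the hypothesis set is the Cartesian product of per-link sign choices, 2^(#links) in total. A tiny sketch with illustrative link names:

from itertools import product

links = ["upper_arm_l", "lower_arm_l", "upper_arm_r", "lower_arm_r"]

# every assignment of forwards (+1) / backwards (-1) flips, 2**len(links) of them
flip_hypotheses = list(product((+1, -1), repeat=len(links)))
print(len(flip_hypotheses))                   # 16 = 2**4
print(dict(zip(links, flip_hypotheses[5])))   # one concrete hypothesis to re-evaluate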
---
paper_title: Learning hierarchical poselets for human parsing
paper_content:
We consider the problem of human parsing with part-based models. Most previous work in part-based models only considers rigid parts (e.g. torso, head, half limbs) guided by human anatomy. We argue that this representation of parts is not necessarily appropriate for human parsing. In this paper, we introduce hierarchical poselets–a new representation for human parsing. Hierarchical poselets can be rigid parts, but they can also be parts that cover large portions of human bodies (e.g. torso + left arm). In the extreme case, they can be the whole bodies. We develop a structured model to organize poselets in a hierarchical way and learn the model parameters in a max-margin framework. We demonstrate the superior performance of our proposed approach on two datasets with aggressive pose variations.
---
paper_title: Loose-limbed People: Estimating 3D Human Pose and Motion Using Non-parametric Belief Propagation
paper_content:
We formulate the problem of 3D human pose estimation and tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely-connected body-parts. In particular, we model the body using an undirected graphical model in which nodes correspond to parts and edges to kinematic, penetration, and temporal constraints imposed by the joints and the world. These constraints are encoded using pair-wise statistical distributions, that are learned from motion-capture training data. Human pose and motion estimation is formulated as inference in this graphical model and is solved using Particle Message Passing (PaMPas). PaMPas is a form of non-parametric belief propagation that uses a variation of particle filtering that can be applied over a general graphical model with loops. The loose-limbed model and decentralized graph structure allow us to incorporate information from "bottom-up" visual cues, such as limb and head detectors, into the inference process. These detectors enable automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking people in multi-view imagery using a set of calibrated cameras and present quantitative evaluation using the HumanEva dataset.
---
paper_title: The Representation and Matching of Pictorial Structures
paper_content:
The primary problem dealt with in this paper is the following. Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of "goodness" of matching or detection.
---
paper_title: Efficient inference with multiple heterogeneous part detectors for human pose estimation
paper_content:
We address the problem of estimating human pose in a single image using a part based approach. Pose accuracy is directly affected by the accuracy of the part detectors but more accurate detectors are likely to be also more computationally expensive. We propose to use multiple, heterogeneous part detectors with varying accuracy and computation requirements, ordered in a hierarchy, to achieve more accurate and efficient pose estimation. For inference, we propose an algorithm to localize articulated objects by exploiting an ordered hierarchy of detectors with increasing accuracy. The inference uses a branch-and-bound method to search for each part and uses kinematics from neighboring parts to guide the branching behavior and compute bounds on the best part estimate. We demonstrate our approach on a publicly available People dataset and outperform state-of-the-art methods. Our inference is 3 times faster than one based on using a single, highly accurate detector.
---
paper_title: Bayesian Reconstruction of 3D Human Motion from Single-Camera Video
paper_content:
The three-dimensional motion of humans is underdetermined when the observation is limited to a single camera, due to the inherent 3D ambiguity of 2D video. We present a system that reconstructs the 3D motion of human subjects from single-camera video, relying on prior knowledge about human motion, learned from training data, to resolve those ambiguities. After initialization in 2D, the tracking and 3D reconstruction is automatic; we show results for several video sequences. The results show the power of treating 3D body tracking as an inference problem.
---
paper_title: Deterministic 3D Human Pose Estimation Using Rigid Structure
paper_content:
This paper explores a method, first proposed by Wei and Chai [1], for estimating 3D human pose from several frames of uncalibrated 2D point correspondences containing projected body joint locations. In their work Wei and Chai boldly claimed that, through the introduction of rigid constraints to the torso and hip, camera scales, bone lengths and absolute depths could be estimated from a finite number of frames (i.e. ≥ 5). In this paper we show this claim to be false, demonstrating in principle one can never estimate these parameters in a finite number of frames. Further, we demonstrate their approach is only valid for rigid sub-structures of the body (e.g. torso). Based on this analysis we propose a novel approach using deterministic structure from motion based on assumptions of rigidity in the body's torso. Our approach provides notably more accurate estimates and is substantially faster than Wei and Chai's approach, and unlike the original, can be solved as a deterministic least-squares problem.
---
paper_title: Twist Based Acquisition and Tracking of Animal and Human Kinematics
paper_content:
This paper demonstrates a new visual motion estimation technique that is able to recover high degree-of-freedom articulated human body configurations in complex video sequences. We introduce the use and integration of a mathematical technique, the product of exponential maps and twist motions, into a differential motion estimation. This results in solving simple linear systems, and enables us to recover robustly the kinematic degrees-of-freedom in noisy and complex self-occluded configurations. A new factorization technique lets us also recover the kinematic chain model itself. We are able to track several human walk cycles, several wallaby hop cycles, and two walk cycles of the famous movements of Eadweard Muybridge's motion studies from the last century. To the best of our knowledge, this is the first computer vision based system that is able to process such challenging footage.
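
The product-of-exponentials parameterisation rests on the exponential map of a twist (v, omega) in se(3), which is short to write down with Rodrigues' formula. This is the standard kinematic building block, not the paper's estimation code.

import numpy as np

def hat(w):
    """3-vector -> skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def twist_exp(v, w, theta):
    """Exponential map of the twist (v, w) scaled by theta, as a 4x4 rigid motion."""
    T = np.eye(4)
    v = np.asarray(v, float)
    w = np.asarray(w, float)
    if np.allclose(w, 0):                          # pure translation
        T[:3, 3] = v * theta
        return T
    w = w / np.linalg.norm(w)                      # unit rotation axis
    W = hat(w)
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * (W @ W)   # Rodrigues
    T[:3, :3] = R
    T[:3, 3] = (np.eye(3) - R) @ np.cross(w, v) + np.outer(w, w) @ v * theta
    return T

# usage: rotate a point by 90 degrees about the z-axis through the origin
g = twist_exp(v=[0, 0, 0], w=[0, 0, 1], theta=np.pi / 2)
print(np.round(g @ np.array([1.0, 0.0, 0.0, 1.0]), 3))   # -> [0., 1., 0., 1.]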
---
paper_title: Tracking Articulated Motion with Piecewise Learned Dynamical Models
paper_content:
We present a novel approach to modelling the non-linear and time-varying dynamics of human motion, using statistical methods to capture the characteristic motion patterns that exist in typical human activities. Our method is based on automatically clustering the body pose space into connected regions exhibiting similar dynamical characteristics, modelling the dynamics in each region as a Gaussian autoregressive process. Activities that would require large numbers of exemplars in example based methods are covered by comparatively few motion models. Different regions correspond roughly to different action-fragments and our class inference scheme allows for smooth transitions between these, thus making it useful for activity recognition tasks. The method is used to track activities including walking, running, etc., using a planar 2D body model. Its effectiveness is demonstrated by its success in tracking complicated motions like turns, without any key frames or 3D information.
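
A hedged sketch of the modelling recipe: cluster pose vectors into regions, then fit a first-order linear (Gaussian) autoregressive model per region by least squares. The trajectory is synthetic and the class-inference machinery for switching between regions is omitted.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
poses = np.cumsum(rng.standard_normal((500, 6)), axis=0)   # fake 6-D pose trajectory

# 1. partition pose space into regions with similar dynamics (plain k-means here)
k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(poses[:-1])

# 2. per region, fit x_{t+1} ~ A x_t + b plus a Gaussian residual covariance
models = {}
for c in range(k):
    idx = np.where(labels == c)[0]
    X = np.hstack([poses[idx], np.ones((len(idx), 1))])     # append a bias column
    Y = poses[idx + 1]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ W
    models[c] = (W[:-1].T, W[-1], np.cov(resid.T))          # A, b, noise covariance

# one-step prediction using the model of the current region
x_t = poses[100]
A, b, S = models[labels[100]]
print(A @ x_t + b)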
---
paper_title: Motion capture using joint skeleton tracking and surface estimation
paper_content:
This paper proposes a method for capturing the performance of a human or an animal from a multi-view video sequence. Given an articulated template model and silhouettes from a multi-view image sequence, our approach recovers not only the movement of the skeleton, but also the possibly non-rigid temporal deformation of the 3D surface. While large scale deformations or fast movements are captured by the skeleton pose and approximate surface skinning, true small scale deformations or non-rigid garment motion are captured by fitting the surface to the silhouette. We further propose a novel optimization scheme for skeleton-based pose estimation that exploits the skeleton's tree structure to split the optimization problem into a local one and a lower dimensional global one. We show on various sequences that our approach can capture the 3D motion of animals and humans accurately even in the case of rapid movements and wide apparel like skirts.
---
paper_title: Gait recognition using active shape model and motion prediction
paper_content:
This study presents a novel, robust gait recognition algorithm for human identification from a sequence of segmented noisy silhouettes in a low-resolution video. The proposed recognition algorithm enables automatic human recognition from model-based gait cycle extraction based on the prediction-based hierarchical active shape model (ASM). The proposed algorithm overcomes drawbacks of existing works by extracting a set of relative model parameters instead of directly analysing the gait pattern. The feature extraction function in the proposed algorithm consists of motion detection, object region detection and ASM, which alleviate problems in the baseline algorithm such as background generation, shadow removal and higher recognition rate. Performance of the proposed algorithm has been evaluated by using the HumanID Gait Challenge data set, which is the largest gait benchmarking data set with 122 objects with different realistic parameters including viewpoint, shoe, surface, carrying condition and time.
---
paper_title: Head Pose Estimation in Computer Vision: A Survey
paper_content:
The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.
---
paper_title: Modeling 3D human poses from uncalibrated monocular images
paper_content:
This paper introduces an efficient algorithm that reconstructs 3D human poses as well as camera parameters from a small number of 2D point correspondences obtained from uncalibrated monocular images. This problem is challenging because 2D image constraints (e.g. 2D point correspondences) are often not sufficient to determine 3D poses of an articulated object. The key idea of this paper is to identify a set of new constraints and use them to eliminate the ambiguity of 3D pose reconstruction. We also develop an optimization process to simultaneously reconstruct both human poses and camera parameters from various forms of reconstruction constraints. We demonstrate the power and effectiveness of our system by evaluating the performance of the algorithm on both real and synthetic data. We show the algorithm can accurately reconstruct 3D poses and camera parameters from a wide variety of real images, including internet photos and key frames extracted from monocular video sequences.
---
paper_title: Active Appearance Models
paper_content:
We demonstrate a novel method of interpreting images using an Active Appearance Model (AAM). An AAM contains a statistical model of the shape and grey-level appearance of the object of interest which can generalise to almost any valid example. During a training phase we learn the relationship between model parameter displacements and the residual errors induced between a training image and a synthesised model example. To match to an image we measure the current residuals and use the model to predict changes to the current parameters, leading to a better fit. A good overall match is obtained in a few iterations, even from poor starting estimates. We describe the technique in detail and give results of quantitative performance tests. We anticipate that the AAM algorithm will be an important method for locating deformable objects in many applications.
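
The training trick of regressing parameter corrections from residual images generated by known perturbations, then iterating "measure residual, apply predicted correction" at test time, can be sketched generically. The linear appearance generator below is a toy stand-in for the real shape and grey-level model.

import numpy as np

rng = np.random.default_rng(0)
n_params, n_pixels = 4, 100
B = rng.standard_normal((n_pixels, n_params))            # toy linear appearance model
render = lambda p: B @ p                                 # parameters -> synthesised image

# training: perturb known parameters, record (residual, displacement) pairs, fit R
p_true = rng.standard_normal(n_params)
target = render(p_true)
disps = 0.1 * rng.standard_normal((500, n_params))
resids = np.array([render(p_true + d) - target for d in disps])
R, *_ = np.linalg.lstsq(resids, disps, rcond=None)       # residual -> predicted displacement

# fitting: start from a poor estimate and iterate the learned update
p = p_true + 0.3 * rng.standard_normal(n_params)
for _ in range(10):
    residual = render(p) - target
    p = p - residual @ R
print(np.abs(p - p_true).max())                          # error shrinks towards zero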
---
paper_title: Probabilistic simultaneous pose and non-rigid shape recovery
paper_content:
We present an algorithm to simultaneously recover non-rigid shape and camera poses from point correspondences between a reference shape and a sequence of input images. The key novel contribution of our approach is in bringing the tools of the probabilistic SLAM methodology from a rigid to a deformable domain. Under the assumption that the shape may be represented as a weighted sum of deformation modes, we show that the problem of estimating the modal weights along with the camera poses may be probabilistically formulated as a maximum a posteriori estimate and solved using an iterative least squares optimization. An extensive evaluation on synthetic and real data shows that our approach has several significant advantages over current approaches, such as performing robustly under large amounts of noise and outliers, and neither requiring points to be tracked over the whole sequence nor initializations close to the ground-truth solution.
---
paper_title: Temporal motion models for monocular and multiview 3d human body tracking
paper_content:
We explore an approach to 3D people tracking with learned motion models and deterministic optimization. The tracking problem is formulated as the minimization of a differentiable criterion whose differential structure is rich enough for optimization to be accomplished via hill-climbing. This avoids the computational expense of Monte Carlo methods, while yielding good results under challenging conditions. To demonstrate the generality of the approach we show that we can learn and track cyclic motions such as walking and running, as well as acyclic motions such as a golf swing. We also show results from both monocular and multi-camera tracking. Finally, we provide results with a motion model learned from multiple activities, and show how this model might be used for recognition.
---
paper_title: Trajectory Space: A Dual Representation for Nonrigid Structure from Motion
paper_content:
Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes. These bases are object dependent and therefore have to be estimated anew for each video sequence. In contrast, we propose a dual approach to describe the evolving 3D structure in trajectory space by a linear combination of basis trajectories. We describe the dual relationship between the two approaches, showing that they both have equal power for representing 3D structure. We further show that the temporal smoothness in 3D trajectories alone can be used for recovering nonrigid structure from a moving camera. The principal advantage of expressing deforming 3D structure in trajectory space is that we can define an object independent basis. This results in a significant reduction in unknowns and corresponding stability in estimation. We propose the use of the Discrete Cosine Transform (DCT) as the object independent basis and empirically demonstrate that it approaches Principal Component Analysis (PCA) for natural motions. We report the performance of the proposed method, quantitatively using motion capture data, and qualitatively on several video sequences exhibiting nonrigid motions, including piecewise rigid motion, partially nonrigid motion (such as a facial expressions), and highly nonrigid motion (such as a person walking or dancing).
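
The object-independent basis is easy to construct explicitly: sample the first K DCT basis vectors over F frames and represent each joint trajectory by K coefficients. The sketch below only compresses and reconstructs a synthetic trajectory; it illustrates the representation, not the full structure-from-motion recovery.

import numpy as np

def dct_basis(F, K):
    """First K DCT-II basis trajectories sampled at F frames (orthonormal columns)."""
    t = np.arange(F)
    theta = np.stack([np.cos(np.pi * (2 * t + 1) * k / (2 * F)) for k in range(K)], axis=1)
    theta[:, 0] *= np.sqrt(1.0 / F)
    theta[:, 1:] *= np.sqrt(2.0 / F)
    return theta                                   # shape (F, K)

F, K = 120, 10
Theta = dct_basis(F, K)

# synthetic smooth 3-D trajectory of one point, F x 3
t = np.linspace(0, 2 * np.pi, F)
traj = np.stack([np.sin(t), np.cos(2 * t), 0.1 * t], axis=1)

coeffs = Theta.T @ traj                            # K coefficients per coordinate axis
recon = Theta @ coeffs                             # low-dimensional reconstruction
print("max reconstruction error with", K, "coefficients per axis:", np.abs(recon - traj).max())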
---
paper_title: Nonrigid structure from motion in trajectory space
paper_content:
Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes, which have to be estimated anew for each video sequence. In contrast, we propose that the evolving 3D structure be described by a linear combination of basis trajectories. The principal advantage of this approach is that we do not need to estimate any basis vectors during computation. We show that generic bases over trajectories, such as the Discrete Cosine Transform (DCT) basis, can be used to compactly describe most real motions. This results in a significant reduction in unknowns, and corresponding stability in estimation. We report empirical performance, quantitatively using motion capture data, and qualitatively on several video sequences exhibiting nonrigid motions including piece-wise rigid motion, partially nonrigid motion (such as a facial expression), and highly nonrigid motion (such as a person dancing).
---
paper_title: 3D Human Body Tracking Using Deterministic Temporal Motion Models
paper_content:
There has been much effort invested in increasing the robustness of human body tracking by incorporating motion models. Most approaches are probabilistic in nature and seek to avoid becoming trapped into local minima by considering multiple hypotheses, which typically requires exponentially large amounts of computation as the number of degrees of freedom increases.
---
paper_title: Monocular 3D tracking of the golf swing
paper_content:
We propose an approach to incorporating dynamic models into the human body tracking process that yields full 3D reconstructions from monocular sequences. We formulate the tracking problem in terms of minimizing a differentiable criterion whose differential structure is rich enough for successful optimization using a simple hill-climbing approach as opposed to a multihypotheses probabilistic one. In other words, we avoid the computational complexity of multihypotheses algorithms while obtaining excellent results under challenging conditions. To demonstrate this, we focus on monocular tracking of a golf swing from ordinary video. It involves both dealing with potentially very different swing styles, recovering arm motions that are perpendicular to the camera plane and handling strong self-occlusions.
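To make the "hill-climbing on a differentiable criterion" idea concrete, here is a minimal Python sketch that refines a single hypothesis by gradient descent; the quadratic objective, target values, step size and iteration count are stand-ins invented for illustration, not the paper's image-based criterion.

import numpy as np

def objective(theta):
    # Stand-in differentiable criterion; in the paper this would measure how well
    # the pose/motion parameters theta explain the image data.
    return np.sum((theta - np.array([1.0, -2.0, 0.5])) ** 2)

def numerical_grad(f, theta, eps=1e-6):
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        step = np.zeros_like(theta)
        step[i] = eps
        g[i] = (f(theta + step) - f(theta - step)) / (2 * eps)
    return g

theta = np.zeros(3)                   # a single hypothesis, refined by hill-climbing
for _ in range(200):
    theta -= 0.1 * numerical_grad(objective, theta)
print(theta)                          # converges close to [1, -2, 0.5]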
---
paper_title: Tracking Articulated Motion with Piecewise Learned Dynamical Models
paper_content:
We present a novel approach to modelling the non-linear and time-varying dynamics of human motion, using statistical methods to capture the characteristic motion patterns that exist in typical human activities. Our method is based on automatically clustering the body pose space into connected regions exhibiting similar dynamical characteristics, modelling the dynamics in each region as a Gaussian autoregressive process. Activities that would require large numbers of exemplars in example-based methods are covered by comparatively few motion models. Different regions correspond roughly to different action-fragments and our class inference scheme allows for smooth transitions between these, thus making it useful for activity recognition tasks. The method is used to track activities including walking, running, etc., using a planar 2D body model. Its effectiveness is demonstrated by its success in tracking complicated motions like turns, without any key frames or 3D information.
---
paper_title: 3D reconstruction of a smooth articulated trajectory from a monocular image sequence
paper_content:
An articulated trajectory is defined as a trajectory that remains at a fixed distance with respect to a parent trajectory. In this paper, we present a method to reconstruct an articulated trajectory in three dimensions given the two dimensional projection of the articulated trajectory, the 3D parent trajectory, and the camera pose at each time instant. This is a core challenge in reconstructing the 3D motion of articulated structures such as the human body because endpoints of each limb form articulated trajectories. We simultaneously apply activity-independent spatial and temporal constraints, in the form of fixed 3D distance to the parent trajectory and smooth 3D motion. There exist two solutions that satisfy each instantaneous 2D projection and articulation constraint (a ray intersects a sphere at up to two locations) and we show that resolving this ambiguity by enforcing smoothness is equivalent to solving a binary quadratic programming problem. A geometric analysis of the reconstruction of articulated trajectories is also presented and a measure of the reconstructibility of an articulated trajectory is proposed.
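The two-solution ambiguity mentioned above comes from intersecting a viewing ray with a sphere centred on the parent joint. The Python sketch below illustrates only that geometric step; the camera centre, ray direction, parent position and limb length are made-up values.

import numpy as np

def ray_sphere_intersections(cam_center, ray_dir, parent, radius):
    """Return the (up to two) 3D points on the viewing ray that lie at a fixed
    distance `radius` from the parent joint -- the two candidate reconstructions
    described in the abstract above."""
    d = ray_dir / np.linalg.norm(ray_dir)
    oc = cam_center - parent
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return []                                   # the ray misses the sphere
    roots = [(-b - np.sqrt(disc)) / 2.0, (-b + np.sqrt(disc)) / 2.0]
    return [cam_center + t * d for t in roots if t > 0]

# Hypothetical numbers, for illustration only.
print(ray_sphere_intersections(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                               np.array([0.1, 0.0, 2.0]), 0.3))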
---
paper_title: Priors for people tracking from small training sets
paper_content:
We advocate the use of scaled Gaussian process latent variable models (SGPLVM) to learn prior models of 3D human pose for 3D people tracking. The SGPLVM simultaneously optimizes a low-dimensional embedding of the high-dimensional pose data and a density function that both gives higher probability to points close to training data and provides a nonlinear probabilistic mapping from the low-dimensional latent space to the full-dimensional pose space. The SGPLVM is a natural choice when only small amounts of training data are available. We demonstrate our approach with two distinct motions, golfing and walking. We show that the SGPLVM sufficiently constrains the problem such that tracking can be accomplished with straightforward deterministic optimization.
---
paper_title: 3d reconstruction of a moving point from a series of 2d projections
paper_content:
This paper presents a linear solution for reconstructing the 3D trajectory of a moving point from its correspondence in a collection of 2D perspective images, given the 3D spatial pose and time of capture of the cameras that produced each image. Triangulation-based solutions do not apply, as multiple views of the point may not exist at each instant in time. A geometric analysis of the problem is presented and a criterion, called reconstructibility, is defined to precisely characterize the cases when reconstruction is possible, and how accurate it can be. We apply the linear reconstruction algorithm to reconstruct the time evolving 3D structure of several real-world scenes, given a collection of non-coincidental 2D images.
---
paper_title: Observable subspaces for 3D human motion recovery
paper_content:
The articulated body models used to represent human motion typically have many degrees of freedom, usually expressed as joint angles that are highly correlated. The true range of motion can therefore be represented by latent variables that span a low-dimensional space. This has often been used to make motion tracking easier. However, learning the latent space in a problem-independent way makes it non-trivial to initialize the tracking process by picking appropriate initial values for the latent variables, and thus for the pose. In this paper, we show that by directly using observable quantities as our latent variables, we eliminate this problem and achieve full automation given only modest amounts of training data. More specifically, we exploit the fact that the trajectory of a person's feet or hands strongly constrains body pose in motions such as skating, skiing, or golfing. These trajectories are easy to compute and to parameterize using a few variables. We treat these as our latent variables and learn a mapping between them and sequences of body poses. In this manner, by simply tracking the feet or the hands, we can reliably guess initial poses over whole sequences and, then, refine them.
---
paper_title: HumanEva: Synchronized video and Motion Capture Dataset for Evaluation of Articulated Human Motion
paper_content:
While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. Current algorithms make many different choices about how to model the human body, how to exploit image evidence and how to approach the inference problem. We argue that there is a need for common datasets that allow fair comparison between different methods and their design choices. Until recently gathering ground-truth data for evaluation of results (especially in 3D) was challenging. In this report we present a novel dataset obtained using a unique setup for capturing synchronized video and ground-truth 3D motion. Data was captured simultaneously using a calibrated marker-based motion capture system and multiple high-speed video capture systems. The video and motion capture streams were synchronized in software using a direct optimization method. The resulting HumanEvaI dataset contains multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 50,000 frames of synchronized motion capture and video was collected at 60 Hz with an additional 37,000 frames of pure motion capture data. The data is partitioned into training, validation, and testing sub-sets. A standard set of error metrics is defined that can be used for evaluation of both 2D and 3D pose estimation and tracking algorithms. Support software and an on-line evaluation system for quantifying results using the test data is being made available to the community. This report provides an overview of the dataset and evaluation metrics and provides pointers into the dataset for additional details. It is our hope that HumanEva-I will become a standard dataset for the evaluation of articulated human motion and pose estimation.
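A typical error measure of the kind such benchmarks define (though not necessarily the exact HumanEva formulation) is the mean per-joint 3D position error; a small Python sketch with made-up joint arrays:

import numpy as np

def mean_per_joint_error(pred, gt):
    """Average Euclidean distance between predicted and ground-truth joints.
    pred, gt: arrays of shape (num_joints, 3), e.g. recovered pose vs. motion capture."""
    return np.mean(np.linalg.norm(pred - gt, axis=1))

# Toy example with 15 joints (values are arbitrary).
gt = np.random.rand(15, 3)
pred = gt + 0.01 * np.random.randn(15, 3)
print("error (same units as the data):", mean_per_joint_error(pred, gt))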
---
paper_title: Monocular 3D pose estimation and tracking by detection
paper_content:
Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions.
---
paper_title: Monocular 3D tracking of the golf swing
paper_content:
We propose an approach to incorporating dynamic models into the human body tracking process that yields full 3D reconstructions from monocular sequences. We formulate the tracking problem in terms of minimizing a differentiable criterion whose differential structure is rich enough for successful optimization using a simple hill-climbing approach as opposed to a multihypotheses probabilistic one. In other words, we avoid the computational complexity of multihypotheses algorithms while obtaining excellent results under challenging conditions. To demonstrate this, we focus on monocular tracking of a golf swing from ordinary video. It involves both dealing with potentially very different swing styles, recovering arm motions that are perpendicular to the camera plane and handling strong self-occlusions.
---
paper_title: Tracking Articulated Motion with Piecewise Learned Dynamical Models
paper_content:
We present a novel approach to modelling the non-linear and time-varying dynamics of human motion, using statistical methods to capture the characteristic motion patterns that exist in typical human activities. Our method is based on automatically clustering the body pose space into connected regions exhibiting similar dynamical characteristics, modelling the dynamics in each region as a Gaussian autoregressive process. Activities that would require large numbers of exemplars in example-based methods are covered by comparatively few motion models. Different regions correspond roughly to different action-fragments and our class inference scheme allows for smooth transitions between these, thus making it useful for activity recognition tasks. The method is used to track activities including walking, running, etc., using a planar 2D body model. Its effectiveness is demonstrated by its success in tracking complicated motions like turns, without any key frames or 3D information.
---
paper_title: Modeling mutual context of object and human pose in human-object interaction activities
paper_content:
Detecting objects in cluttered scenes and estimating articulated human body parts are two challenging problems in computer vision. The difficulty is particularly pronounced in activities involving human-object interactions (e.g. playing tennis), where the relevant object tends to be small or only partially visible, and the human body parts are often self-occluded. We observe, however, that objects and human poses can serve as mutual context to each other – recognizing one facilitates the recognition of the other. In this paper we propose a new random field model to encode the mutual context of objects and human poses in human-object interaction activities. We then cast the model learning task as a structure learning problem, of which the structural connectivity between the object, the overall human pose, and different body parts are estimated through a structure search approach, and the parameters of the model are estimated by a new max-margin algorithm. On a sports data set of six classes of human-object interactions [12], we show that our mutual context model significantly outperforms state-of-the-art in detecting very difficult objects and human poses.
---
paper_title: Human Context: Modeling human-human interactions for monocular 3D pose estimation
paper_content:
Automatic recovery of the 3D pose of multiple interacting subjects from an unconstrained monocular image sequence is a challenging and largely unaddressed problem. We observe, however, that by taking the interactions explicitly into account, treating individual subjects as mutual "context" for one another, performance on this challenging problem can be improved. Building on this observation, in this paper we develop an approach that first jointly estimates 2D poses of people using a multi-person extension of the pictorial structures model and then lifts them to 3D. We illustrate the effectiveness of our method on a new dataset of dancing couples and challenging videos from dance competitions.
---
paper_title: Action recognition in cluttered dynamic scenes using Pose-Specific Part Models
paper_content:
We present an approach to recognizing single-actor human actions in complex backgrounds. We adopt a Joint Tracking and Recognition approach, which tracks the actor pose by sampling from 3D action models. Most existing approaches of this kind require large training data or MoCAP to handle multiple viewpoints, and often rely on clean actor silhouettes. The action models in our approach are obtained by annotating keyposes in 2D, lifting them to 3D stick figures and then computing the transformation matrices between the 3D keypose figures. Poses sampled from coarse action models may not fit the observations well; to overcome this difficulty, we propose an approach for efficiently localizing a pose by generating a Pose-Specific Part Model (PSPM) which captures appropriate kinematic and occlusion constraints in a tree structure. In addition, our approach also does not require pose silhouettes. We show improvements to previous results on two publicly available datasets as well as on a novel, augmented dataset with dynamic backgrounds.
---
paper_title: Semantics of Human Behavior in Image Sequences
paper_content:
Human behavior is contextualized and understanding the scene of an action is crucial for giving proper semantics to behavior. In this chapter we present a novel approach for scene understanding. The emphasis of this work is on the particular case of Human Event Understanding. We introduce a new taxonomy to organize the different semantic levels of the Human Event Understanding framework proposed. Such a framework particularly contributes to the scene understanding domain by (i) extracting behavioral patterns from the integrative analysis of spatial, temporal, and contextual evidence and (ii) integrative analysis of bottom-up and top-down approaches in Human Event Understanding. We will explore how the information about interactions between humans and their environment influences the performance of activity recognition, and how this can be extrapolated to the temporal domain in order to extract higher inferences from human events observed in sequences of images.
---
paper_title: Object Detection with Discriminatively Trained Part-Based Models
paper_content:
We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.
---
paper_title: Monocular 3D pose estimation and tracking by detection
paper_content:
Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions.
---
paper_title: Articulated pose estimation with flexible mixtures-of-parts
paper_content:
We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster.
---
paper_title: Modeling mutual context of object and human pose in human-object interaction activities
paper_content:
Detecting objects in cluttered scenes and estimating articulated human body parts are two challenging problems in computer vision. The difficulty is particularly pronounced in activities involving human-object interactions (e.g. playing tennis), where the relevant object tends to be small or only partially visible, and the human body parts are often self-occluded. We observe, however, that objects and human poses can serve as mutual context to each other – recognizing one facilitates the recognition of the other. In this paper we propose a new random field model to encode the mutual context of objects and human poses in human-object interaction activities. We then cast the model learning task as a structure learning problem, of which the structural connectivity between the object, the overall human pose, and different body parts are estimated through a structure search approach, and the parameters of the model are estimated by a new max-margin algorithm. On a sports data set of six classes of human-object interactions [12], we show that our mutual context model significantly outperforms state-of-the-art in detecting very difficult objects and human poses.
---
paper_title: 3D reconstruction of a smooth articulated trajectory from a monocular image sequence
paper_content:
An articulated trajectory is defined as a trajectory that remains at a fixed distance with respect to a parent trajectory. In this paper, we present a method to reconstruct an articulated trajectory in three dimensions given the two dimensional projection of the articulated trajectory, the 3D parent trajectory, and the camera pose at each time instant. This is a core challenge in reconstructing the 3D motion of articulated structures such as the human body because endpoints of each limb form articulated trajectories. We simultaneously apply activity-independent spatial and temporal constraints, in the form of fixed 3D distance to the parent trajectory and smooth 3D motion. There exist two solutions that satisfy each instantaneous 2D projection and articulation constraint (a ray intersects a sphere at up to two locations) and we show that resolving this ambiguity by enforcing smoothness is equivalent to solving a binary quadratic programming problem. A geometric analysis of the reconstruction of articulated trajectories is also presented and a measure of the reconstructibility of an articulated trajectory is proposed.
---
paper_title: Simultaneous pose, correspondence and non-rigid shape
paper_content:
Recent works have shown that the 3D shape of non-rigid surfaces can be accurately retrieved from a single image given a set of 3D-to-2D correspondences between that image and another one for which the shape is known. However, existing approaches assume that such correspondences can be readily established, which is not necessarily true when large deformations produce significant appearance changes between the input and the reference images. Furthermore, it is either assumed that the pose of the camera is known, or the estimated solution is pose-ambiguous. In this paper we relax all these assumptions and, given a set of 3D and 2D unmatched points, we present an approach to simultaneously solve their correspondences, compute the camera pose and retrieve the shape of the surface in the input image. This is achieved by introducing weak priors on the pose and shape that we model as Gaussian Mixtures. By combining them into a Kalman filter we can progressively reduce the number of 2D candidates that can be potentially matched to each 3D point, while pose and shape are refined. This lets us perform a complete and efficient exploration of the solution space and retain the best solution.
---
paper_title: Single image 3D human pose estimation from noisy observations
paper_content:
Markerless 3D human pose detection from a single image is a severely underconstrained problem because different 3D poses can have similar image projections. In order to handle this ambiguity, current approaches rely on prior shape models that can only be correctly adjusted if 2D image features are accurately detected. Unfortunately, although current 2D part detector algorithms have shown promising results, they are not yet accurate enough to guarantee a complete disambiguation of the 3D inferred shape. In this paper, we introduce a novel approach for estimating 3D human pose even when observations are noisy. We propose a stochastic sampling strategy to propagate the noise from the image plane to the shape space. This provides a set of ambiguous 3D shapes, which are virtually undistinguishable from their image projections. Disambiguation is then achieved by imposing kinematic constraints that guarantee the resulting pose resembles a 3D human shape. We validate the method on a variety of situations in which state-of-the-art 2D detectors yield either inaccurate estimations or partly miss some of the body parts.
---
| Title: A Survey on Model Based Approaches for 2D and 3D Visual Human Pose Recovery
Section 1: Introduction
Description 1: Introduce the concept of human pose recovery, its challenges, and the importance of model-based approaches.
Section 2: State of the Art
Description 2: Review and categorize the state-of-the-art methods in human pose recovery, based on the proposed taxonomy.
Section 3: Appearance
Description 3: Discuss the appearance module, including image features, descriptors, and detection stages at different levels (pixel, local, global).
Section 4: Viewpoint
Description 4: Describe methods for viewpoint estimation, both discrete and continuous, and their relevance in pose recovery.
Section 5: Spatial Models
Description 5: Explain the spatial models used in human pose recovery, differentiating between ensembles of parts and structure models.
Section 6: Temporal Models
Description 6: Discuss temporal consistency and tracking methods, as well as motion models that incorporate body movements over time.
Section 7: Behavior
Description 7: Explore how behavior and context information are integrated into pose recovery to improve accuracy and robustness.
Section 8: Discussion
Description 8: Analyze the trends and methodologies in the field, comparing the approaches based on the taxonomy and discussing their strengths and weaknesses.
Section 9: Conclusions
Description 9: Summarize the findings of the survey, highlighting the most successful strategies and potential directions for future research. |
Deliberation on Design Strategies of Automatic Harvesting Systems: A Survey | 8 | ---
paper_title: Robotics for Plant Production
paper_content:
Applying robotics in plant production requires the integration of robot capabilities, plant culture, and the work environment. Commercial plant production requires certain cultural practices to be performed on the plants under certain environmental conditions. Some of the environmental conditions are mostly natural and some are modified or controlled. In many cases, the required cultural practices dictate the layout and materials flow of the production system. Both the cultural and environmental factors significantly affect when, where and how the plants are manipulated. Several cultural practices are commonly known in the plant production industry. The ones which have been the subject of robotics research include division and transfer of plant materials in micropropagation, transplanting of seedlings, sticking of cuttings, grafting, pruning, and harvesting of fruit and vegetables. The plants are expected to change their shape and size during growth and development. Robotics technology includes many sub-topics including the manipulator mechanism and its control, end-effector design, sensing techniques, mobility, and workcell development. The robots which are to be used for performing plant cultural tasks must recognize and understand the physical properties of each unique object and must be able to work under various environmental conditions in fields or controlled environments. This article will present some considerations and examples of robotics development for plant production followed by a description of the key components of plant production robots. A case study on developing a harvesting robot for an up-side-down single truss tomato production system will also be described.
---
paper_title: Mechanical harvesting of California table and oil olives
paper_content:
Mechanical harvesting must be developed for successful table and olive oil production in California. Both canopy contact shaking head and trunk shaking harvesters can produce processed black ripe olives that trained sensory panels and consumer panels cannot distinguish from hand-harvested olives. However, both types of harvesters remove and capture less than the 80% efficiency required for economically feasible mechanical table olive harvesting. The harvesters differ in their removal patterns, efficiency, and types of tree damage. No successful abscission compounds to decrease fruit removal force have been identified. Therefore, as with oil olives, the tree shape must be modified for successful mechanical table olive harvesting. Recent results demonstrate that training to an espalier shape, with and without a trellis, in high density hedgerows does not decrease yield. These espaliered hedgerow orchards can be harvested with both canopy contact and trunk shakers. Therefore, the traditional California table olive industry must adapt a modified version of the high density and super high density orchards, designed specifically for mechanical harvesting, now being developed for olive oil production in California.
---
paper_title: Image change detection algorithms: a systematic survey
paper_content:
Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer.
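The simplest member of the family of detectors surveyed above is plain frame differencing with a fixed threshold. The Python sketch below uses toy frames and an arbitrary threshold purely to illustrate the basic decision rule.

import numpy as np

def change_mask(frame_t, frame_t1, tau=25):
    """Flag pixels whose absolute intensity change between two frames exceeds tau."""
    diff = np.abs(frame_t.astype(np.int16) - frame_t1.astype(np.int16))
    return diff > tau

# Two toy 8-bit frames; exactly one pixel "changes".
a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[2, 2] = 180
print(change_mask(a, b).sum())   # -> 1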
---
paper_title: Automatic fruit recognition: a survey and new results using Range/Attenuation images
paper_content:
An automatic fruit recognition system and a review of previous fruit detection work are reported. The methodology presented is able to recognize spherical fruits in natural conditions facing difficult situations: shadows, bright areas, occlusions and overlapping fruits. The sensor used is a laser range-finder giving range/attenuation data of the sensed surface. The recognition system uses a laser range-finder model and a dual color/shape analysis algorithm to locate the fruit. The three-dimensional position of the fruit, radius and the reflectance are obtained after the recognition stages. Results for a set of artificial orange tree images and real-time considerations are presented.
---
paper_title: Stability tests of two-finger tomato grasping for harvesting robots
paper_content:
In this study, the theories of spatial and contact grasp stability were extended and integrated into a whole system, and then a vision processing approach that extracts the relevant information for synthesising plate and curved finger grasps for unknown tomato fruits from tomato images was presented. Finally, stability tests involving grasping tomatoes with two parallel fingers were performed using two types of fingers (plate and curved fingers). Existing theories of grasp stability related to rigid objects could be integrated and extended to analyse the grasping stability for half-ripe tomatoes. Curved fingers were more suitable for stably grasping tomatoes than were plate fingers. The prediction method of stable grasp regions can be regarded as a potential strategy (algorithm) for achieving a programmed control of two-fingered tomato grasp stability based on vision feedback. Visual perception is used to reduce the uncertainty and obtain relevant geometric information about the tomatoes during harvesting.
---
paper_title: Interactive teaching of task-oriented robot grasps
paper_content:
This paper focuses on the problem of grasp stability and grasp quality analysis. An elegant way to evaluate the stability of a grasp is to model its wrench space. However, classical grasp quality measures suffer from several disadvantages, the main drawback being that they are not task related. Indeed, constructive approaches for approximating the wrench space including also task information have been rarely considered. This work presents an effective method for task-oriented grasp quality evaluation based on a novel grasp quality measure. We address the general case of multifingered grasps with point contacts with friction. The proposed approach is based on the concept of programming by demonstration and interactive teaching, wherein an expert user provides in a teaching phase a set of exemplar grasps appropriate for the task. Following this phase, a representation of task-related grasps is built. During task planning and execution, a grasp could be either submitted interactively for evaluation by a non-expert user or synthesized by an automatic planning system. Grasp quality is then assessed based on the proposed measure, which takes into account grasp stability along with its suitability for the task. To enable real-time evaluation of grasps, a fast algorithm for computing an approximation of the quality measure is also proposed. Finally, a local grasp optimization technique is described which can amend uncertainties arising in supplied grasps by non-expert users or assist in planning more valuable grasps in the neighborhood of candidate ones. The paper reports experiments performed in virtual reality with both an anthropomorphic virtual hand and a three-fingered robot hand. These experiments suggest the effectiveness and task relevance of the proposed grasp quality measure.
---
paper_title: Sensing and End-Effector for a Robotic Tomato Harvester
paper_content:
Fresh produce is important for long-term space missions. It provides valuable nutritional needs and psychological boost for mission crews. Labor requirements to grow and harvest the crops, however, must be reduced through automation to allow the crew to perform other tasks. A robotic tomato harvester was developed for continuous, selective picking of mature tomatoes. The goal of this project was to develop a sensing unit and a robotic hand unit that could be integrated with a commercial robotic manipulator for the automated tomato harvesting task. Image processing algorithms were developed to determine sizes and locations of mature tomatoes including the ones that are partially occluded by leaves and/or branches. An end-effector subsystem, including a four-finger prosthetic hand and an embedded hand controller, was designed and assembled for the tomato picking, holding and placing task. Improvement of a previously designed robotic hand resulted in a 50% weight reduction. The sensing and picking capability of the units has been demonstrated in laboratory and commercial greenhouse environments. Success rates of tomato fruit sensing and picking were better than 95% and 85%, respectively.
---
paper_title: On the Influence of Contact Geometry on Grasp Stability
paper_content:
This paper demonstrates that the predicted grasp stability is highly sensitive to only small changes in the character of the contact forces. The contribution of the geometry and stiffness at the contact points to the grasp stability is investigated by a planar grasp with three contact points. Limit cases of zero and infinite contact curvatures, and finite to infinite contact stiffnesses are considered. The stability is predicted based on the approach of Howard and Kumar (1), and verified with multibody dynamic simulations. For rigid objects and fingers with only normal contact stiffness, the grasp stability is dominated by the contact geometry, whereas the local contact stiffness and preload have a minor effect. Furthermore, grasps with pointed finger tips are more likely to be stable than grasps with flat finger tips.
---
paper_title: Grasp-state plane analysis of two-phalanx underactuated fingers
paper_content:
Abstract This paper presents a new technique to analyze the grasp stability of two-phalanx underactuated fingers of general architecture using a grasp-state plane approach. Similarly to the state plane analysis for dynamical systems, a grasp-state plane technique is very elegant and efficient to study the grasp stability of two-phalanx underactuated fingers. The concept of underactuation in robotic fingers—with fewer actuators than degrees of freedom—allows the hand to adjust itself to an irregularly shaped object without complex control strategy and numerous sensors. However, in some configurations, the force distribution of an underactuated finger can degenerate, i.e. the finger can no longer apply forces on the object. This situation leads in some cases to the ejection of the object from the hand. This paper focuses on two-phalanx fingers and studies their ability to seize objects with a secure grasp, considering practical issues on the grasp, namely the local geometry of the contact, the influence of design parameters and friction. A grasp-state representation which allows one to accurately visualize the contact state trajectory as well as equilibrium and unstable situations is presented.
---
paper_title: Heavy material handling manipulator for agricultural robot
paper_content:
This paper presents a manipulator which is able to handle heavy materials for agricultural applications. The characteristics of agricultural operation are discussed and extracted. As the manipulator for handling heavy materials is analyzed using kinematic indices, the parallel type manipulator is shown to be superior to the other manipulators (i.e. the polar coordinate type, articulated type and cylindrical coordinate type manipulators). A parallel type manipulator has therefore been designed and developed. The robotic harvesting experiment was carried out using the parallel type manipulator in a watermelon field.
---
paper_title: Control of Grasp Stability When Humans Lift Objects With Different Surface Curvatures
paper_content:
Jenmalm, Per, Antony W. Goodwin, and Roland S. Johansson. Control of grasp stability when humans lift objects with different surface curvatures. J. Neurophysiol. 79: 1643–1652, 1998. In previous investigations of the control of grasp stability, humans manipulated test objects with flat grasp surfaces. The surfaces of most objects that we handle in everyday activities, however, are curved. In the present study, we examined the influence of surface curvature on the fingertip forces used when humans lifted and held objects of various weights. Subjects grasped the test object between the thumb and the index finger. The matching pair of grasped surfaces were spherically curved with one of six different curvatures (concave with radius 20 or 40 mm; flat; convex with radius 20, 10, or 5 mm) and the object had one of five different weights ranging from 168 to 705 g. The grip force used by subjects (force along the axis between the 2 grasped surfaces) increased with increasing weight of the object but was modified inconsistently and incompletely by surface curvature. Similarly, the duration and rate of force generation, when the grip and load forces increased isometrically in the load phase before object lift-off, were not influenced by surface curvature. In contrast, surface curvature did affect the minimum grip forces required to prevent frictional slips (the slip force). The slip force was smaller for larger curvatures (both concave and convex) than for flatter surfaces. Therefore the force safety margin against slips (difference between the employed grip force and the slip force) was higher for the higher curvatures. We conclude that surface curvature has little influence on grip force regulation during this type of manipulation; the moderate changes in slip force resulting from changes in curvature are not fully compensated for by changes in grip force.
---
paper_title: A Multi-Sensory End-Effector for Spherical Fruit Harvesting Robot
paper_content:
An end-effector for a spherical fruit harvesting robot was developed. This end-effector is a multi-sensory one that is universal for spherical fruit such as tomatoes, apples and citrus. It performs fruit singulation with a vacuum suction pad device, fruit gripping and peduncle locating with a two-finger (an upper finger and a lower finger) gripper, and peduncle cutting with a laser cutting device. In order to perceive sufficient information about the internal state, the harvested object and the environment, different types of sensors are configured, including a vacuum pressure sensor, distance sensors, proximity sensors and force sensors. An open control system architecture based on IL+DSP is adopted, which is more open, flexible, universal and lighter, making it more suitable for a mobile harvesting robot and end-effector.
---
paper_title: Grasp stability analysis considering the curvatures at contact points
paper_content:
This paper proposes a method for analyzing the stability of grasps. Its characteristic points are the consideration of the curvature of both hand and object at the contact points, and of grasps with friction and frictionless contact. From this analysis, it is shown that a grasp using round fingers is more stable than one using sharp fingers. Moreover, we establish the condition on the fingers' stiffness required to stabilize the grasp with friction. It is proved that the required stiffness of the fingers is decreased by considering the curvature. The stability analysis is greatly simplified by using the potential energy of the grasp system and is of practical use.
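One common way to operationalize a potential-energy stability test of this general kind (a generic sketch, not the paper's derivation) is to check that the grasp stiffness matrix, i.e. the Hessian of the elastic potential with respect to small object displacements, is positive definite. The 2x2 matrix below is a made-up example.

import numpy as np

# Hypothetical grasp stiffness matrix assembled from contact spring models.
K = np.array([[40.0, 5.0],
              [5.0, 25.0]])
eigvals = np.linalg.eigvalsh(K)           # all eigenvalues positive => local stability
print("stable:", bool(np.all(eigvals > 0)), eigvals)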
---
paper_title: Robot Gripper Analysis: Finite Element Modeling and Optimization
paper_content:
A procedure for analysis and optimization of robot gripper designs has been developed. The procedure was applied to the specific problem of optimizing the weight of two different robot grippers for harvesting melons. The initial designs were modeled and analyzed using the finite element method to determine the critical static stresses. In one design, the stresses obtained were compared to analytical results and the finite element results were within 7% on average, which is sufficient for this preliminary design. In the second design, the trend and location of the maximum stresses agreed with a previously published photoelasticity study. The initial designs were then improved by applying optimization techniques to minimize the weight of the gripper while ensuring that the gripper was strong enough to withstand the load of the melon. The final designs were significantly lighter than the initial ones which reduce the required inertia forces. Based on the results of the proposed approach, an optimized gripper could then be constructed.
---
paper_title: Contact stability for two-fingered grasps
paper_content:
Two types of grasp stability, spatial grasp stability and contact grasp stability, each with a different concept of the state of a grasp, are distinguished and characterized. Examples are presented to show that spatial stability cannot capture certain intuitive concepts of grasp stability and hence that any full understanding of grasp stability must include contact stability. A model of how the positions of the points of contact evolve in time on the surface of a grasped object in the absence of any external force or active feedback is then derived. From the model, a general measure of the contact stability of any two-fingered grasp is obtained. Finally, the consequences of this stability measure and a related measure of contact manipulability on strategies for grasp selection are discussed.
---
paper_title: Internal forces and stability in multi-finger grasps
paper_content:
Abstract This paper deals with the rotational stability of a rigid body under constant contact forces. For this system, the stiffness tensor is derived, and its basic properties are analyzed. For the gravity-induced stiffness, one condition for stability, formulated in terms of geometric and gravity centers, is obtained. The internal forces are introduced with the use of a virtual linkage model. Within this representation, two conditions for stability under internal force loading are formulated in an analytical form. The conditions obtained are applied to the synthesis of a three-fingered grasp.
---
paper_title: Vision-based three-finger grasp synthesis constrained by hand geometry
paper_content:
Abstract This paper addresses the problem of designing a practical system able to grasp real objects with a three-fingered robot hand. A general approach for synthesizing two- and three-finger grasps on planar unknown objects is presented. Visual perception is used to reduce the uncertainty and to obtain relevant information about the objects. We focus on non-modeled planar extruded objects, which can represent many real-world objects. In addition, particular mechanical constraints of the robot hand are considered. First, a vision processing module that extracts from object images the relevant information for the grasp synthesis is developed. This is completed with a set of algorithms for synthesizing two- and three-finger grasps taking into account force-closure and contact stability conditions, with a low computational effort. Finally, a procedure for constraining these results to the kinematics of the particular hand, is also developed. In addition, a set of heuristic metrics for assessing the quality of the computed grasps is described. All these components are integrated in a complete system. Experimental results using the Barrett hand are shown and discussed.
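For intuition about the force-closure condition used above, here is a much simpler planar two-finger antipodal test (not the paper's three-finger algorithm): the grasp achieves force closure if the line joining the two contacts lies inside both friction cones. The contact positions, inward normals and friction coefficient below are hypothetical.

import numpy as np

def two_finger_force_closure(p1, n1, p2, n2, mu):
    """Planar antipodal test for two frictional point contacts.
    p1, p2: 2D contact points; n1, n2: inward unit normals; mu: friction coefficient."""
    half_angle = np.arctan(mu)                               # friction cone half-angle
    d = p2 - p1
    d = d / np.linalg.norm(d)
    ang1 = np.arccos(np.clip(np.dot(d, n1), -1.0, 1.0))      # line vs. cone at contact 1
    ang2 = np.arccos(np.clip(np.dot(-d, n2), -1.0, 1.0))     # line vs. cone at contact 2
    return ang1 <= half_angle and ang2 <= half_angle

# Hypothetical antipodal grasp on a disc of radius 1, friction coefficient 0.5.
print(two_finger_force_closure(np.array([-1.0, 0.0]), np.array([1.0, 0.0]),
                               np.array([1.0, 0.0]), np.array([-1.0, 0.0]), 0.5))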
---
paper_title: Manipulation of polygonal objects with two wheeled-tip fingers: Planning in the presence of contact position error
paper_content:
This paper addresses the planning problem of object manipulation using wheeled-tip robots, considering the wheel-object contact positioning error. The term wheeled-tip refers to a new mechanism that incorporates active wheels at the robot's fingertips and allows the grasp contact point to move along the object's surface. The benefits of unlimited rolling contact are achieved at the cost of contact positioning error that may cause the manipulation to fail. We propose a probabilistic algorithm for robot motion planning that, in addition to being collision free, guarantees the stability of the grasp throughout the planned path. To do so, we first introduce an algorithm that ensures the kinematic stability of the grasp during manipulation by respecting the force closure constraint. We further extend the algorithm to address the practical uncertainties involved in the position of wheel-object contact points. The proposed algorithms can be employed for manipulators with limited rolling contacts as well. The algorithms have been tested and the results prove that the planned path can be trusted in uncertain situations.
---
paper_title: Optimal Thresholding for the Automatic Recognition of Apple Fruits
paper_content:
The aging and the decreasing number of farm workers in Japan have been a potential problem. That is why research on the automation of agricultural operations has been conducted in recent years. One of these operations is the harvesting of fruit trees such as apples. A robotic hand that can harvest apple fruit in a manner similar to a human picker has been developed; however, visual guidance of the developed hand has not yet been realized. In this paper, a machine vision system that would guide the robotic harvesting hand was studied. The machine vision system consisted of a digital video camera and a personal computer. Images of a Fuji apple tree were analyzed and histograms of their luminance and color difference of red were developed. The threshold for segmenting the images to recognize the fruit portion was estimated from the histograms using the optimal thresholding method. The estimated threshold effectively recognized the fruit portion. The threshold calculated from the luminance histogram using the optimal thresholding method was not effective in recognizing the Fuji apple, while the threshold selected from the color-difference-of-red histogram was effective.
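One common formulation of such an optimal thresholding step is Otsu's between-class-variance criterion; the Python sketch below applies it to a synthetic 8-bit channel standing in for the colour-difference-of-red image (the data and class proportions are invented, not taken from the study).

import numpy as np

def otsu_threshold(channel):
    """Optimal threshold maximizing the between-class variance of an 8-bit channel."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic bimodal channel standing in for background vs. fruit pixels.
channel = np.concatenate([np.random.normal(60, 10, 5000),
                          np.random.normal(180, 15, 1000)]).clip(0, 255)
print("threshold:", otsu_threshold(channel))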
---
paper_title: Recognition and cutting system of sweet pepper for picking robot in greenhouse horticulture
paper_content:
This paper describes a recognition and cutting system for sweet peppers for picking robots in greenhouse horticulture. The picking robot has an image processing system with parallel stereovision, a camera positioning system that follows the sweet pepper by visual feedback control, and a cutting device. A prototype robot system has been built and is introduced. Experiments with the prototype show that the performance of the cutting system depends on the recognition of sweet pepper fruits. Consequently, the robot has the ability to pick sweet peppers.
---
paper_title: A Machine Vision for Tomato Cluster Harvesting Robot
paper_content:
Dutch style greenhouses for tomato production are recently becoming popular in many countries, while fruit cluster harvesting is also becoming popular in the Netherlands and other countries where the Dutch system is introduced, due to higher workability and fruit freshness. In the large scale Dutch production system, it is desirable to replace human operations with automated machines. In this paper, a machine vision system for a tomato fruit cluster harvesting robot is described. This machine vision system consisted of two identical color TV cameras (VGA class), four lighting devices with PL filters, and two image capture boards. Two images were acquired at a time and RGB color component images were converted into HSI images. Using colors in the HSI images, main stems, peduncles, and fruits were discriminated, and an end-effector grasping point on the main stem was recognized based on physical properties of the tomato plant. Since the difficulty of recognizing the grasping point depended on the exposure of plant parts and on the robot access angle, acquired images were classified into three groups: Group A consisted of images in which the fruit cluster, the stem, and the peduncle were isolated from the other plant parts; Group B consisted of images in which they existed with adjacent plant parts; Group C consisted of images in which some of them were occluded. From an experiment, results showed that 73% of grasping points on main stems were successfully recognized, excluding Group C, which could not be recognized even by human eyes.
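A sketch of the kind of RGB-to-HSI conversion and hue-based discrimination implied above; the hue band used for "ripe" pixels and the toy pixel values are assumptions made for illustration, not the system's actual thresholds.

import numpy as np

def rgb_to_hsi(rgb):
    """Classic RGB -> HSI conversion for values in [0, 1]; hue returned in degrees."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)
    return h, s, i

# Toy "pixels": ripe-tomato red vs. leaf green.
pixels = np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2]])
h, s, i = rgb_to_hsi(pixels)
ripe_mask = (h < 30) | (h > 330)          # assumed red-hue band for fruit pixels
print(h, ripe_mask)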
---
paper_title: Improvement of the Ability to Recognize Sweet Peppers for Picking Robot in Greenhouse Horticulture
paper_content:
This paper describes improvement of the ability to recognize sweet peppers for a picking robot in greenhouse horticulture. A prototype of the picking robot for sweet peppers has been manufactured. This robot has an image processing system, a camera positioning system, and a cutting device. However, because the ability to recognize sweet peppers was low, the picking ability was also low. The color of sweet pepper fruits is almost the same as that of the leaves, so the system needs a lighting system to distinguish a fruit from leaves. At first we used a fluorescent lamp in the lighting system. With this system, it was possible to identify fruits even when fruits and leaves were mixed. However, when a fruit overlapped with several other fruits, the shapes of these fruits were not recognized exactly, and the right and left cameras could recognize two different fruits. In this case, because the stereovision system could not measure exactly, the picking ability became low. We therefore adopted a new lighting system with LEDs. Experimental results show that this new LED lighting system improves the ability to recognize sweet peppers.
---
paper_title: Fruit harvesting robots in Japan
paper_content:
Abstract We have developed harvesting robots for tomato /1/, petty-tomato, cucumber /2/ and grape /3/ in Japan. These robots mainly consist of manipulators, end-effectors, visual sensors and traveling devices. The mechanisms of the robot components were developed based on the physical properties of the work objects. The robots must work automatically by themselves in greenhouses or fields, since we are considering having one operator tend several robots in the production system. The system is modeled after Japanese agriculture, in which many kinds of crops are commonly produced intensively in greenhouses and in many small fields. Bioproduction in space is somewhat similar to the agricultural system in Japan, because few operators have to work in a small space. Employing robots for bioproduction in space is considered desirable in the near future. The following is a description of the harvesting robots.
---
paper_title: Measurement of 3-D Locations of Ripe Tomato by Binocular Stereo Vision for Tomato Harvesting
paper_content:
A method for measuring the 3-D locations of ripe tomatoes by binocular stereo vision was developed for tomato harvesting in greenhouses. In this method, a pair of stereo images was obtained by stereo cameras and transformed to grey-scale images. Corresponding points of the stereo images were searched for according to grey-level correlation, and a depth image was obtained by calculating the distances between tomatoes and the stereo cameras based on the triangulation principle. The center of the tomato was extracted by distinguishing the tomato from the background with image processing. The 3-D locations of ripe tomatoes were obtained by comparing the coordinates of the tomato center with the depth image. The depth error was within ±20 mm when the distance was less than 1000 mm.
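For a rectified stereo pair, the triangulation step referred to above reduces to depth = focal length x baseline / disparity; a minimal sketch with assumed camera parameters (not the parameters used in the study):

# Assumed camera parameters, for illustration only.
focal_px = 800.0       # focal length in pixels
baseline_mm = 60.0     # distance between the two cameras

def depth_from_disparity(x_left, x_right):
    disparity = x_left - x_right               # pixels
    return focal_px * baseline_mm / disparity  # depth in mm

print(depth_from_disparity(420.0, 372.0))      # ~1000 mm for a 48-pixel disparity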
---
paper_title: Current Developments in Automated Citrus Harvesting
paper_content:
The area of intelligent automated citrus harvesting has become a renewed area of research in recent years. The renewed interest is a result of increased economic demand for better solutions for selective automated citrus harvesting than are currently available by purely mechanical harvesters. Throughout this paper the main challenges facing intelligent automated citrus harvesting are addressed: fruit detection and robotic harvesting. The area of fruit detection is discussed, and incorporates the important properties of citrus that can be used for detection. Robotic harvesting is covered, and involves the discussion of the main mechanical design needs as well as the use of visual servoing for the control of the robotic harvester. A description of our proposed intelligent citrus harvesting system as well as our current prototype is presented.
---
paper_title: Image analysis for agricultural processes: a review of potential opportunities
paper_content:
Abstract Image analysis and interpretation by computers has many potential applications for guiding or controlling agricultural processes. However, research from industrial applications cannot be applied directly to agriculture because of problems such as the biological variability of objects or workpieces, and difficulty in interpreting unstructured environments. This paper describes the process of image analysis, its application to agriculture, and the generic problems particularly relevant to agricultural processes. Priority areas for further research are suggested. Ways of including knowledge in algorithms should be studied. Specific applications should also be studied, including the practical problems of integrating image processing into larger systems.
---
paper_title: Machine Vision Algorithm for Robots to Harvest Strawberries in Tabletop Culture Greenhouses
paper_content:
A strawberry harvesting robot consisting of a four DOF manipulator, an end-effector with suction pad, a three camera vision system and a rail type traveling device was developed as a trial to conduct experiments in a tabletop culture greenhouse. In order to harvest the strawberries with curved or inclined peduncles, a wrist joint which can rotate 15 degrees to the left or right from its base position was added. On the algorithm side, peduncle inclination angle was measured by the center camera. Harvesting experiments show that it was possible to precisely harvest more than 75% of fruits which were not occluded by other fruits with the developed robot. Experimental data also show that peduncle length, color and inclination pattern change with the seasons. Complex situations often exist in the real field conditions such as limited visibility of back end strawberries, occluded fruits, obstructions and complex peduncle patterns. Further studies are desirable to automate the harvesting task using a robot.
---
paper_title: THE FLORIDA ROBOTIC GROVE-LAB
paper_content:
A mobile grove-lab was developed to study the use of robotic technology for picking oranges under actual production conditions. The design and operation of the citrus picking robot developed for this facility is described. The sensor system developed to identify and locate fruit in real-time and the state network programming technique used to develop a task-level control program for the citrus picking robot are discussed. The suitability of the vision system and state network programming technique for real-time, vision-servo robotic picking is demonstrated. It was concluded that the technical and economic practicality of robotic citrus harvesting can only be demonstrated with an operational multiple-arm harvesting system. Multiple usage of robotic harvesting equipment and acquisition of detailed production data by a robotic harvester were identified as intangible benefits of robotic harvesting which should encourage the commercial development of a multiple-arm machine.
---
paper_title: Collision-free Motion Planning for a Cucumber Picking Robot
paper_content:
One of the most challenging aspects of the development, at the Institute of Agricultural and Environmental Engineering (IMAG B.V.), of an automatic harvesting machine for cucumbers was to achieve a fast and accurate eye-hand co-ordination during the picking operation. This paper presents a procedure developed for the cucumber harvesting robot to pursue this objective. The procedure contains two main components. First of all acquisition of sensory information about the working environment of the robot and, secondly, a program to generate collision-free manipulator motions to direct the end-effector to and from the cucumber. This paper elaborates on the latter. Collision-free manipulator motions were generated with a so-called path search algorithm. In this research the A*-search algorithm was used. With some numerical examples the search procedure is illustrated and analysed in view of application to cucumber harvesting. It is concluded that collision-free motions can be calculated for the seven degrees-of-freedom manipulator used in the cucumber picking device. The A*-search algorithm is easy to implement and robust. This algorithm either produces a solution or stops when a solution cannot be found. This favourable property, however, makes the algorithm prohibitively slow. The results showed that the algorithm does not include much intelligence in the search procedure. It is concluded that to meet the required 10 s for a single harvest cycle, further research is needed to find fast algorithms that produce solutions using as much information about the particular structure of the problem as possible and give a clear message if such a solution can not be found
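The abstract names the A*-search algorithm but gives no implementation detail. The sketch below shows a generic grid-based A* search of the kind described; the 2-D occupancy grid and 4-connected moves are simplifying assumptions and do not reflect the seven degrees-of-freedom configuration space of the actual cucumber-picking manipulator.

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2-D occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if no path exists."""
    def h(a, b):                                  # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:                          # reconstruct the path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if not (0 <= nr < len(grid) and 0 <= nc < len(grid[0])):
                continue
            if grid[nr][nc] == 1:                 # occupied cell: collision
                continue
            new_g = g_cost[node] + 1
            if new_g < g_cost.get(nxt, float('inf')):
                g_cost[nxt] = new_g
                came_from[nxt] = node
                heapq.heappush(open_set, (new_g + h(nxt, goal), nxt))
    return None                                   # no collision-free path found
```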
---
paper_title: Automatic recognition vision system guided for apple harvesting robot
paper_content:
In an apple harvesting robot, the first key part is the machine vision system, which is used to recognize and locate the apples. In this paper, a procedure for developing an automatic recognition vision system to guide an apple harvesting robot is proposed. We first use a color charge-coupled device camera to capture apple images, and then utilize an industrial computer to process the images for recognising fruit. A vector median filter is applied to remove noise from the color apple images, and an image segmentation method based on region growing and color features is investigated. After the color and shape features of the image are extracted, a new classification algorithm based on a support vector machine for apple recognition is introduced to improve recognition accuracy and efficiency. Finally, the proposed procedures were tested on an apple harvesting robot under natural conditions in September 2009, and showed a recognition success rate of approximately 89% and an average recognition time of 352 ms.
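To illustrate the kind of support-vector-machine classification step the abstract refers to, here is a minimal scikit-learn sketch operating on hand-crafted colour and shape features. The feature layout, parameter values and training data are assumptions for demonstration only and are not taken from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: illustrative colour/shape features of one candidate region,
# e.g. [mean_R, mean_G, mean_B, circularity, area]; label 1 = apple, 0 = background.
X_train = np.array([[190, 60, 55, 0.88, 1500],
                    [80, 120, 60, 0.35, 4000],
                    [200, 70, 65, 0.91, 1700],
                    [70, 110, 50, 0.30, 3500]], dtype=float)
y_train = np.array([1, 0, 1, 0])

# Standardize the features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0, gamma='scale'))
clf.fit(X_train, y_train)

candidate = np.array([[185, 65, 60, 0.86, 1600]], dtype=float)
print(clf.predict(candidate))        # -> [1], region classified as an apple
```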
---
paper_title: Computer vision for fruit harvesting robots – state of the art and challenges ahead
paper_content:
Despite extensive research conducted in machine vision for harvesting robots, practical success in this field of agrobotics is still limited. This article presents a comprehensive review of classical and state-of-the-art machine vision solutions employed in such systems, with special emphasis on the visual cues and machine vision algorithms used. We discuss the advantages and limitations of each approach and we examine these capacities in light of the challenges ahead. We conclude with suggested directions from the general computer vision literature which could assist our research community meet these challenges and bring us closer to the goal of practical selective fruit harvesting robots.
---
paper_title: Measurement of 3-D Locations of Fruit by Binocular Stereo Vision for Apple Harvesting in an Orchard
paper_content:
This paper describes the results of measurement of 3-D locations of fruit by binocular stereo vision for apple harvesting in an orchard. In the method of image processing, a 3-D space is divided into a number of cross sections at an interval based on disparity that is calculated from a gaze distance, and is reconstructed by integrating central composite images owing to stereo pairs. Three measures to restrict false images were proposed: (1) a set of narrow searching range values, (2) comparison of an amount of color featured on the half side in a common area, and (3) the central composition of the half side. Experiments with a trial stereo system were conducted on ripe apples in red (search distance ranging from 1.5m to 5.5m) and in yellow-green (search range of 2m to 4m) in an orchard. The results showed that two measures of (1) and (3) were effective, whereas the other was effective if there was little influence of background color similar to that of the objects. The rate of fruit discrimination was about 90% or higher in the images with 20 to 30 red fruits, and from 65% to 70% in images dense with red fruit and in the images of yellow-green apples. The errors of distance measurement were about ±5%.
---
paper_title: Robotic melon harvesting
paper_content:
Intelligent sensing, planning, and control of a prototype robotic melon harvester is described. The robot consists of a Cartesian manipulator mounted on a mobile platform pulled by a tractor. Black and white image processing is used to detect and locate the melons. Incorporation of knowledge-based rules adapted to the specific melon variety reduces false detections. Task, motion and trajectory planning algorithms and their integration are described. The intelligent control system consists of a distributed blackboard system with autonomous modules for sensing, planning and control. Procedures for evaluating performance of the robot performing in an unstructured and changing environment are described. The robot was tested in the field on two different melon cultivars during two different seasons. Over 85% of the fruit were successfully detected and picked.
---
paper_title: Apple Fruits Recognition Under Natural Luminance Using Machine Vision
paper_content:
In this study, edge detection and a combination of color and shape analyses were utilized to segment images of red apples obtained under natural lighting. Thirty images were acquired from an orchard in order to find an apple in each image and to determine its location. Two algorithms (edge-detection based and color-shape based) were developed to process the images. They were filtered, converted to binary images, and noise-reduced. The edge-detection based algorithm was not successful, while the color-shape based algorithm could detect apple fruits in 83.33% of the images.
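The colour-shape based algorithm is described only at the level of filtering, binarisation and noise reduction. A rough OpenCV equivalent is sketched below, assuming red apples can be separated with a red-minus-green difference threshold followed by a morphological opening; the threshold value, kernel size and file name are arbitrary assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def segment_red_apples(bgr_image, diff_threshold=40):
    """Binarise candidate apple pixels with a red-minus-green colour difference,
    then remove small noise blobs with a morphological opening."""
    b, g, r = cv2.split(bgr_image)
    diff = cv2.subtract(r, g)                           # red fruit stands out from green foliage
    _, binary = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    return cleaned

# Usage (hypothetical file name):
# mask = segment_red_apples(cv2.imread("orchard_scene.jpg"))
```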
---
paper_title: DETERMINING THE 3-D LOCATION OF THE APPLE FRUIT DURING HARVEST
paper_content:
The objective of this research is to develop an apple harvesting robot. One of the engineering challenges is the real-time determination of the 3-D location of the fruit. In this paper, the development of a sensor to determine the 3-D location of Fuji apples is discussed. Two methods were considered in determining the 3-D location of the fruit: a machine vision system and a laser ranging system. Both of these systems would be mounted on the end effector. The machine vision system consisted of a color CCD video camera and a PC for the image processing. The distance from end effector to fruit can be determined using machine vision only, by either the differential object size method or the binocular stereo image method. However, the laser ranging system was considered because distance measurement using machine vision only is computationally expensive and time consuming. The laser ranging system needs the aid of the machine vision system to determine the two-dimensional location of the fruit; it then measures the distance from the end effector to the fruit. The result obtained from this measurement system will be used as input to guide the motion of the manipulator.
---
paper_title: Detection and three-dimensional localization by stereoscopic visual sensor and its application to a robot for picking asparagus
paper_content:
This paper describes a three-dimensional localization process used on an agricultural robot designed to pick white asparagus. The system uses two cameras (CCD and Newvicon). Thanks to diascopic lighting, the images can easily be binarized. The threshold of digitalization is determined automatically by the system. A statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localization is done stereometrically with two cameras. As the robot carrying the system moves, the images are altered and decision criteria modified. A study of the images from mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon has been done to determine the modifications concerning object shapes, thresholding levels and decision parameters as a function of robot speed.
---
paper_title: Applied machine vision of plants: a review with implications for field deployment in automated farming operations
paper_content:
Automated visual assessment of plant condition, specifically foliage wilting, reflectance and growth parameters, using machine vision has potential use as input for real-time variable-rate irrigation and fertigation systems in precision agriculture. This paper reviews the research literature for both outdoor and indoor applications of machine vision of plants, which reveals that different environments necessitate varying levels of complexity in both apparatus and nature of plant measurement which can be achieved. Deployment of systems to the field environment in precision agriculture applications presents the challenge of overcoming image variation caused by the diurnal and seasonal variation of sunlight. From the literature reviewed, it is argued that augmenting a monocular RGB vision system with additional sensing techniques potentially reduces image analysis complexity while enhancing system robustness to environmental variables. Therefore, machine vision systems with a foundation in optical and lighting design may potentially expedite the transition from laboratory and research prototype to robust field tool.
---
paper_title: A Harvesting Robot for Small Fruit in Bunches Based on 3-D Stereoscopic Vision
paper_content:
This paper describes the concept of a harvesting robotic system for small and delicate fruit that may grow in bunches. The main component of the robot is the vision system that, unlike traditional vision methods, is able to distinguish and locate each individual fruit in a bunch. It combines a passive and an active 3-D reconstruction technique: stereoscopic vision and structured lighting. It is composed of two stereoscopic cameras and a matrix of laser diodes, which is used to project a reticule of luminous spots when a bunch is detected in order that the stereoscopic camera system can perform a 3-D reconstruction of their position. A prototype of a harvesting robot based on it has been developed and tested with real strawberry crops in hydroponic greenhouses. Some experimental results of these tests are included.
---
paper_title: Design of an Agricultural Robot for Harvesting Melons
paper_content:
The performance of an agricultural robot has been evaluated through simulation to determine design parameters for a robotic melon harvester. Animated, visual simulation provided a powerful tool to initiate the evaluation of alternative designs. To quantify the many, closely-related design parameters, numerical simulation tools were developed and applied. Simulations using measured cantaloupe locations revealed the effect of design parameters (configuration, number of arms, and actuator speeds) on the average cycle time. Simulation results predicted that a Cartesian robot would perform faster than a cylindrical robot for the melon harvesting task. Activating two arms in tandem was the fastest configuration evaluated. Additional sets of melon locations were stochastically generated from distributions of the field data to determine performance for planting distances between 25 and 125 cm. The fastest cycle time was achieved for an experimental cultural practice that consisted of one plant on each half row in an alternating sequence with 125 cm planting distance. The performance of the robotic melon harvester was found to be highly dependent on the picking time, actuator speeds and planting distance.
---
paper_title: Olive Fruits Recognition Using Neural Networks
paper_content:
A new method for olive fruit recognition is presented. Olive fruit size and weight are used for estimating the best harvesting moment of olive trees. Olive fruit recognition is performed by analyzing RGB images taken of olive trees. The harvesting decision comprises two stages: the first stage is focused on deciding whether or not a candidate identified in the picture corresponds to an olive fruit, and the second stage is focused on olives that overlap in the pictures. The analyses required in these two stages are performed by implementing a neural network approach.
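As a minimal sketch of the neural-network decision stage described above, the following code trains a small multilayer perceptron on illustrative blob descriptors. The feature set, network size and training data are assumptions; the paper's two-stage scheme (candidate acceptance followed by overlap handling) is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative training set: each row holds simple descriptors of a candidate blob
# (mean R, mean G, mean B, eccentricity); label 1 = olive fruit, 0 = not an olive.
X = np.array([[60, 80, 40, 0.30],
              [120, 130, 90, 0.85],
              [55, 75, 38, 0.25],
              [110, 140, 95, 0.90]], dtype=float)
y = np.array([1, 0, 1, 0])

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([[58, 78, 42, 0.28]]))   # -> [1], candidate accepted as an olive
```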
---
paper_title: Visual feedback guided robotic cherry tomato harvesting
paper_content:
Harvesting cherry tomatoes is more laborious than harvesting larger size tomatoes because of the high fruit density in every cluster. To save labor costs, robotic harvesting of cherry tomatoes has been studied in Japan. An effective vision algorithm, to detect positions of many small fruits, was developed for guidance of robotically harvested cherry tomatoes. A spectral reflectance in the visible region was identified and extracted to provide high contrast images for the fruit cluster identification. The 3-D position of each fruit cluster was determined using a binocular stereo vision technique. The robot harvested one fruit at a time and the position of the next target fruit was updated based on a newly acquired image and the latest manipulator position. The experimental results showed that this visual feedback control based harvesting method was effective, with a success rate of 70%.
---
paper_title: Detection of Green Apples in Hyperspectral Images of Apple-Tree Foliage Using Machine Vision
paper_content:
It is important for orchard owners to be able to estimate the quantity of fruit on the trees at the various growth stages, because a tree that bears too many fruits will yield small fruits. Thus, if growers are interested in controlling the fruit size, knowing in advance that there are too many developing fruits will give them the opportunity to treat the tree. This study proposes a machine vision-based method of automating the yield estimation of apples on trees at different stages of their growth. Since one of the most difficult aspects of apple yield estimation is distinguishing between green varieties of apples or those that are green in the first stages of growth, and the green leaves that surround them, this investigation concentrates on estimating the yield of green varieties of apples. Hyperspectral imaging was used, because it is capable of giving a wealth of information both in the visible and the near-infrared (NIR) regions and thus offers the potential to provide useful results. A multistage algorithm was developed that uses several techniques, such as principal components analysis (PCA) and extraction and classification of homogenous objects (ECHO) for analyzing hyperspectral data, as well as machine vision techniques such as morphological operations, watershed, and blob analysis. The method developed was tested on images taken in a Golden Delicious apple orchard in the Golan Heights, Israel, in two sessions: one during the first stages of growth, and the second just before harvest. The overall correct detection rate was 88.1%, with an overall error rate of 14.1%.
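One of the techniques named above, PCA on hyperspectral data, can be sketched as follows: every pixel spectrum of the cube is projected onto a few principal components before further segmentation. The cube shape and component count are assumptions, and the remaining stages (ECHO, watershed, blob analysis) are not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_score_image(cube, n_components=3):
    """Project every pixel spectrum of a hyperspectral cube (rows, cols, bands)
    onto its first principal components, returning (rows, cols, n_components)."""
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands).astype(float)
    scores = PCA(n_components=n_components).fit_transform(spectra)
    return scores.reshape(rows, cols, n_components)

# Usage with a synthetic cube standing in for real hyperspectral data:
cube = np.random.rand(64, 64, 120)
scores = pca_score_image(cube)
print(scores.shape)        # (64, 64, 3)
```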
---
paper_title: Robust pixel-based classification of obstacles for robotic harvesting of sweet-pepper
paper_content:
Sweet-pepper plant parts should be distinguished to construct an obstacle map to plan collision-free motion for a harvesting manipulator. Objectives were to segment vegetation from the background; to segment non-vegetation objects; to construct a classifier robust to variation among scenes; and to classify vegetation primarily into soft (top of a leaf, bottom of leaf and petiole) and hard obstacles (stem and fruit) and secondarily into five plant parts: stem, top of a leaf, bottom of a leaf, fruit and petiole. A multi-spectral system with artificial lighting was developed to mitigate disturbances caused by natural lighting conditions. The background was successfully segmented from vegetation using a threshold in a near-infrared wavelength (>900nm). Non-vegetation objects occurring in the scene, including drippers, pots, sticks, construction elements and support wires, were removed using a threshold in the blue wavelength (447nm). Vegetation was classified using a Classification and Regression Trees (CART) classifier trained with 46 pixel-based features. The Normalized Difference Index features were the strongest as selected by a Sequential Floating Forward Selection algorithm. A new robust-and-balanced accuracy performance measure P_Rob was introduced for CART pruning and feature selection. Use of P_Rob rendered the classifier more robust to variation among scenes because standard deviation among scenes reduced 59% for hard obstacles and 43% for soft obstacles compared with balanced accuracy. Two approaches were derived to classify vegetation: Approach A was based on hard vs. soft obstacle classification and Approach B was based on separability of classes. Approach A (P_Rob=58.9) performed slightly better than Approach B (P_Rob=56.1). For Approach A, mean true-positive detection rate (standard deviation) among scenes was 59.2 (7.1)% for hard obstacles, 91.5 (4.0)% for soft obstacles, 40.0 (12.4)% for stems, 78.7 (16.0)% for top of a leaf, 68.5 (11.4)% for bottom of a leaf, 54.5 (9.9)% for fruit and 49.5 (13.6)% for petiole. These results are insufficient to construct an accurate obstacle map and suggestions for improvements are described. Nevertheless, this is the first study that reports quantitative performance for classification of several plant parts under varying lighting conditions.
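A minimal sketch of the Normalized Difference Index features and a CART-style classifier mentioned above is given below, using scikit-learn's decision tree as a stand-in. The band choices, synthetic data and labels are placeholders, not the 46-feature set or training data of the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ndi(band_a, band_b):
    """Normalized Difference Index: (a - b) / (a + b), computed per pixel."""
    a = band_a.astype(float)
    b = band_b.astype(float)
    return (a - b) / (a + b + 1e-9)

# Two illustrative spectral bands and synthetic pixel labels (1 = hard obstacle, 0 = soft).
band_nir = np.random.rand(100)
band_blue = np.random.rand(100)
features = np.column_stack([ndi(band_nir, band_blue), band_nir, band_blue])
labels = (features[:, 0] > 0.2).astype(int)     # synthetic ground truth for the sketch only

cart = DecisionTreeClassifier(max_depth=4).fit(features, labels)
print(cart.predict(features[:5]))
```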
---
paper_title: Spectral Imaging for Greenhouse Cucumber Fruit Detection Based on Binocular Stereovision
paper_content:
For a greenhouse cucumber-harvesting robot, two major challenges in automated fruit-picking are to identify the target and to measure its position in three dimensions. In this paper, a machine vision algorithm for recognition and location of cucumber fruits is presented. Firstly, a stereovision imaging system was established to capture monochrome near-infrared images, which are beneficial for dealing with the similar-color segmentation problem in a complex environment. Secondly, an image processing algorithm for cucumber recognition was proposed with the following steps: region partition according to the 1-dimensional gray histogram distribution, adaptive threshold processing of each divided local image, noise elimination using morphological analysis, and feature extraction with texture detection. Thirdly, the 3-D spatial position of the fruit was calculated on the basis of the standard triangulation model, while an auxiliary camera on the end-effector was introduced to improve the cutting position. The experimental results from 120 cucumber image pairs taken in a greenhouse showed that the proposed algorithm could detect fruit with a recognition rate of 86%. Also, the distance error of the grasping position (less than 8.6 mm) and the maximum deviation of the cutting position (1.64 mm in the x-axis, 1.41 mm in the y-axis, 3.9 mm in the z-axis) were satisfactory for robotic operation.
---
paper_title: Nondestructive measurement of fruit and vegetable quality by means of NIR spectroscopy: A review
paper_content:
An overview is given of near infrared (NIR) spectroscopy for use in measuring quality attributes of horticultural produce. Different spectrophotometer designs and measurement principles are compared, and novel techniques, such as time and spatially resolved spectroscopy for the estimation of light absorption and scattering properties of vegetable tissue, as well as NIR multi- and hyperspectral imaging techniques are reviewed. Special attention is paid to recent developments in portable systems. Chemometrics is an essential part of NIR spectroscopy, and the available preprocessing and regression techniques, including nonlinear ones, such as kernel-based methods, are discussed. Robustness issues due to orchard and species effects and fluctuating temperatures are addressed. The problem of calibration transfer from one spectrophotometer to another is introduced, as well as techniques for calibration transfer. Most applications of NIR spectroscopy have focussed on the nondestructive measurement of soluble solids content of fruit where typically a root mean square error of prediction of 1° Brix can be achieved, but also other applications involving texture, dry matter, acidity or disorders of fruit and vegetables have been reported. Areas where more research is required are identified.
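Chemometric calibration of the kind reviewed above is often built with partial least squares regression. The sketch below fits a PLS model on synthetic spectra and reports a root-mean-square error of prediction; scikit-learn's PLSRegression is used here as a stand-in for dedicated chemometrics software, and the spectra and Brix values are synthetic placeholders, not data from the review.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

# Synthetic stand-in data: 80 fruit spectra (200 wavelengths) and their soluble solids (Brix).
rng = np.random.default_rng(0)
spectra = rng.normal(size=(80, 200))
brix = 10 + spectra[:, 50] * 1.5 + rng.normal(scale=0.3, size=80)

# Calibrate on the first 60 samples, predict the remaining 20.
pls = PLSRegression(n_components=5).fit(spectra[:60], brix[:60])
pred = pls.predict(spectra[60:]).ravel()
rmsep = mean_squared_error(brix[60:], pred) ** 0.5    # root-mean-square error of prediction
print(round(rmsep, 2))
```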
---
paper_title: Robotic harvesting of Gerbera Jamesonii based on detection and three-dimensional modeling of cut flower pedicels
paper_content:
Within the present study, a system for the automated harvest of Gerbera jamesonii pedicels with the help of image analytic methods was developed. The study can be divided mainly into two parts: the development of algorithms for the identification of pedicels in digital images and the development of procedures for harvesting these pedicels with a robot. Images of plants were taken with a stereo camera system, which consisted of two high-resolution CCD-cameras with near-infrared filters. The plant was positioned on a rotatable working desk and images of eight different positions were shot. The developed image processing algorithm segmented the potential pedicel regions in the images, removed noise, differentiated overlapping pedicels by using different algorithms and combined the remaining regions to pedicel objects. From the data of both images and eight plant positions three-dimensional models of the pedicels were created by triangulation. The remaining parts of the plants were modeled in a simple fashion. The evaluated 3D model is used to calculate spatial coordinates for the applied robot control. For harvesting the pedicels, an industrial robot with six axes (plus an additional linear axis) was used. A pneumatic harvest grabber was developed, which harvested the pedicels by cutting them off. In order to guarantee the collision free path of the robot, a path planning module was integrated, which includes the three-dimensional model of the plant and the test facility. With the applied techniques it was possible to correctly detect all pedicels on about 72% of the images. Regarding the whole image series of the respective plant, all pedicels could be detected in at least one photographing position in 97% of all cases. In the harvest experiments 80% of all pedicels could be harvested. The harvest rates decreased with increasing numbers of pedicels on a plant. Therefore, 98% of the pedicels could be harvested of plants with one or two pedicels, but only 51% were harvested of plants with five or more pedicels. In horticultural practice, an identification system for evaluating the stage of maturity should be included. An implementation for harvesting pedicels of different species with similar basic characteristics is imaginable.
---
paper_title: An Autonomous Robot for Harvesting Cucumbers in Greenhouses
paper_content:
This paper describes the concept of an autonomous robot for harvesting cucumbers in greenhouses. A description is given of the working environment of the robot and the logistics of harvesting. It is stated that for a 2 ha Dutch nursery, 4 harvesting robots and one docking station are needed during the peak season. Based on these preliminaries, the design specifications of the harvest robot are defined. The main requirement is that a single harvest operation may take at most 10 s. Then, the paper focuses on the individual hardware and software components of the robot. These include the autonomous vehicle, the manipulator, the end-effector, the two computer vision systems for detection and 3D imaging of the fruit and the environment and, finally, a control scheme that generates collision-free motions for the manipulator during harvesting. The manipulator has seven degrees-of-freedom (DOF). This is sufficient for the harvesting task. The end-effector is designed such that it handles the soft fruit without loss of quality. The thermal cutting device included in the end-effector prevents the spreading of viruses through the greenhouse. The computer vision system is able to detect more than 95% of the cucumbers in a greenhouse. Using geometric models the ripeness of the cucumbers is determined. A motion planner based on the A*-search algorithm assures collision-free eye-hand co-ordination. In autumn 2001 system integration took place and the harvesting robot was tested in a greenhouse. With a success rate of 80%, field tests confirmed the ability of the robot to pick cucumbers without human interference. On average the robot needed 45 s to pick one cucumber. Future research focuses on hardware and software solutions to improve the picking speed and accuracy of the eye-hand co-ordination of the robot.
---
paper_title: Robot design and testing for greenhouse applications
paper_content:
The latest results of technology and research are increasingly used in agriculture, especially in intensive cultures that ensure remunerative returns. Most cultures in greenhouses are in this category where, despite the large use of technology, human operators still manually perform most operations on the crop although they are often highly repetitive and sometimes even dangerous. This fact greatly impacts on the quality of the product, on the production costs and on collateral issues, such as pollution and safety. In this paper, the state of research in robotic automation in agriculture is considered, outlining the characteristics that robots should have to allow their profitable use. A multi-purpose low-cost robot prototype, designed and built according to such characteristics, is then presented together with the results of some preliminary experimentation with it. Although more research is needed, the results prove to be promising and show some advantages that can be achieved with robotic automation. In particular, precision spraying and precision fertilisation applications have been developed and tested. Although the productivity of the prototype is quite low (in the range of 400–500 plants/h), experiments conducted continuously for several hours show that the robot can perform tasks unaffordable by human operators.
---
paper_title: AN AUTONOMOUS ROBOT FOR DE-LEAFING CUCUMBER PLANTS GROWN IN A HIGH-WIRE CULTIVATION SYSTEM
paper_content:
The paper presents an autonomous robot for removing the leaves from cucumber plants grown in a high-wire cultivation system. Leaves at the lower end of the plants are removed because of their reduced vitality, their negligible contribution to canopy photosynthesis and their increased sensitivity for fungal diseases. Consuming 19% of the total labour input, leaf removal is considered by the growers and their staff as a tedious, repetitive and costly task. Automation alleviates their job and results in a significant cost reduction. Additionally, removal of the leaves results in an open structure of the canopy in which the fruit is clearly visible and accessible which is an advantage for automatic cucumber harvesting. The paper describes a functional model and results of a field test of the de-leafing robot. The field test confirmed the feasibility of the concept of the de-leafing robot.
---
paper_title: Optimal manipulator design for a cucumber harvesting robot
paper_content:
This paper presents a procedure and the results of an optimal design of the kinematic structure of a manipulator to be used for autonomous cucumber harvesting in greenhouses. The design objective included the time needed to perform a collision-free motion from an initial position to the target position as well as a dexterity measure to allow for motion corrections in the neighborhood of the fruits. The optimisation problem was solved using the DIRECT algorithm implemented in the Tomlab package. A four link PPRR type manipulator was found to be most suitable. For cucumber harvesting four degrees-of-freedom, i.e. three translations and one rotation around the vertical axis, are sufficient. The PPRR manipulator described in this paper meets this requirement. Although computationally expensive, the methodology used in this research was found to be powerful and offered an objective way to evaluate and optimise the kinematic structure of a robot to be used for cucumber harvesting.
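The paper solves the kinematic design problem with the DIRECT algorithm in the Tomlab package. Purely as a schematic illustration of how such a design optimisation can be formulated, the sketch below uses SciPy's differential_evolution as a substitute global optimiser with a toy objective combining a motion-time term and a dexterity penalty; the objective, variables and bounds are invented placeholders, not the paper's criterion.

```python
import numpy as np
from scipy.optimize import differential_evolution

def design_objective(x):
    """Toy stand-in for a kinematic design criterion: a motion-time term that
    shrinks with total link length plus a penalty for very unequal links."""
    link1, link2 = x
    motion_time = 1.0 / (link1 + link2)          # longer reach -> shorter travel (toy model)
    dexterity_penalty = (link1 - link2) ** 2     # prefer balanced links (toy model)
    return motion_time + 0.5 * dexterity_penalty

bounds = [(0.2, 1.0), (0.2, 1.0)]                # link lengths in metres (assumed range)
result = differential_evolution(design_objective, bounds, seed=1)
print(result.x, round(result.fun, 4))
```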
---
paper_title: Recognition and cutting system of sweet pepper for picking robot in greenhouse horticulture
paper_content:
This paper describes a recognition and cutting system for sweet peppers for picking robots in greenhouse horticulture. The picking robot has an image processing system with parallel stereovision, a camera positioning system that follows the sweet pepper by visual feedback control, and a cutting device. A prototype robot system has been made and is introduced. Experiments with the prototype show that the performance of the cutting system depends on the recognition of the sweet pepper fruits. Consequently, the robot is capable of picking sweet peppers.
---
paper_title: Agricultural robot for radicchio harvesting
paper_content:
In the last few years, robotics has been increasingly adopted in agriculture to improve productivity and efficiency. This paper presents recent and current work at the Politecnico of Bari, in collaboration with the University of Lecce, in the field of agricultural robotics. A cost-effective robotic arm is introduced for the harvesting of radicchio, which employs visual localization of the plants in the field. The proposed harvester is composed of a double four-bar linkage manipulator and a special gripper, which fulfills the requirement for a plant cut approximately 10 mm underground. Both the manipulator and the end-effector are pneumatically actuated, and the gripper works with flexible pneumatic muscles. The system employs computer vision to localize the plants in the field based on intelligent color filtering and morphological operations; we call this algorithm the radicchio visual localization (RVL). Details are provided for the functional and executive design of the robotic arm and its control system. Experimental results obtained with a prototype operating in a laboratory testbed are discussed, showing the feasibility of the system in localizing and harvesting radicchio plants. The performance of the RVL is analyzed in terms of accuracy, robustness to noise, and variations in lighting, and is also validated in the field.
---
paper_title: Ergonomics of manual harvesting
paper_content:
Manual harvesting has many advantages compared with the mechanical harvesting of most fruit crops. The most important advantage is visual image processing ability, which enables workers rapidly to detect fruit suitable for harvest and direct their hand to the fruit selected for detachment. Lacking the necessary computer based image processing equipment, designers of mechanical harvesters have settled for mass removal approaches that typically result in more damage than normal when fruit is harvested individually. Although manual harvesting has the disadvantage of low capacity, it is expected that much of the world's fruit will continue to be harvested by hand for the foreseeable future. Several ergonomics principles that relate to manual harvesting are discussed. Methods for improving worker conditions and productivity are presented. Worker positioners increase productivity by 20 to 40% and enable use of sun shades, fans, conveyors and other devices that increase comfort and reduce fatigue. Testing and training can yield substantial benefits from small inputs. Tests for visual acuity, colour sensitivity, strength, etc, can help managers assign tasks to the most suitable workers. Training programmes help workers to have a clear mental picture of acceptable fruit and encourage compliance with handling, safety and other procedures. Satisfaction of human drives such as thirst, hunger, thermal comfort and avoidance of pain results in long-range benefits.
---
paper_title: Harvesting Robots for High-value Crops: State-of-the-art Review and Challenges Ahead
paper_content:
This review article analyzes state-of-the-art and future perspectives for harvesting robots in high-value crops. The objectives were to characterize the crop environment relevant for robotic harvesting, to perform a literature review on the state-of-the-art of harvesting robots using quantitative measures, and to reflect on the crop environment and literature review to formulate challenges and directions for future research and development. Harvesting robots were reviewed regarding the crop harvested in a production environment, performance indicators, design process techniques used, hardware design decisions, and algorithm characteristics. On average, localization success was 85%, detachment success was 75%, harvest success was 66%, fruit damage was 5%, peduncle damage was 45%, and cycle time was 33 s. A kiwi harvesting robot achieved the shortest cycle time of 1 s. Moreover, the performance of harvesting robots did not improve in the past three decades, and none of these 50 robots was commercialized. Four future challenges with R&D directions were identified to realize a positive trend in performance and to successfully implement harvesting robots in practice: (1) simplifying the task, (2) enhancing the robot, (3) defining requirements and measuring performance, and (4) considering additional requirements for successful implementation. This review article may provide new directions for future automation projects in high-value crops.
---
paper_title: A setup of mobile robotic unit for fruit harvesting
paper_content:
A mobile robotic unit for fruit harvesting is described in this paper. The setup of the system was developed to harvest date palm fruit, which is the most common fruit in Saudi Arabia. The system was based on a ready-made industrial robotic arm.
---
paper_title: Fruit harvesting robots in Japan
paper_content:
We have developed harvesting robots for tomato /1/, petty-tomato, cucumber /2/ and grape /3/ in Japan. These robots mainly consist of manipulators, end-effectors, visual sensors and traveling devices. The mechanisms of the robot components were developed based on the physical properties of the work objects. The robots must work automatically by themselves in greenhouses or fields, since we envisage one operator tending several robots in the production system. The system is modeled after Japanese agriculture, in which many kinds of crops are commonly produced intensively in greenhouses and in many small fields. Bioproduction in space is somewhat similar to the agricultural system in Japan, because few operators have to work in a small space. Employing robots for bioproduction in space is considered desirable in the near future. The following is a description of the harvesting robots.
---
paper_title: Status of citrus harvesting in Florida
paper_content:
Florida citrus production peaked at 11.2 Mt prior to several damaging freezes of the early 1980s. Currently, production is seven Mt, slowly increasing, and approximately 83% is utilized for processing purposes. The picking of citrus is still a manual task and currently requires 20 000 labourers. The labour supply appears adequate with a reasonably good profit margin above harvesting costs. Although 30 years of research and development have not yielded a feasible alternative to manual picking, considerable technology has been developed to evaluate the merits of various approaches. Picking aids have increased picker productivity, but their economic justification under Florida labour conditions continues to be questionable. Mass removal mechanical systems could only compete on a small scale to traditional citrus harvesting methods. The freezes of the early 1980s eliminated many of the old, larger producing trees in the north central area of the state. New plantings are being spaced closer together to achieve high yields in hedgerows at an early age. Hand harvesters generally dislike hedgerows because across-row movement with ladders and containers is more difficult. Also, mechanical placement and retrieval of fruit containers in the rows are difficult. The application of robotic principles favours hedgerows because positioning of a robotic arm would be much simpler than with individual trees. A robotic arm under development in Florida has demonstrated a picking rate of one fruit every five seconds from the outer canopy. Research is underway to develop smaller and more productive hedgerow trees which would be easier to harvest by hand or machine.
---
paper_title: Design of an autonomous agricultural robot
paper_content:
This paper presents a state-of-the-art review in the development of autonomous agricultural robots including guidance systems, greenhouse autonomous systems and fruit-harvesting robots. A general concept for a field crops robotic machine to selectively harvest easily bruised fruit and vegetables is designed. Future trends that must be pursued in order to make robots a viable option for agricultural operations are focused upon. A prototype machine which includes part of this design has been implemented for melon harvesting. The machine consists of a Cartesian manipulator mounted on a mobile chassis pulled by a tractor. Two vision sensors are used to locate the fruit and guide the robotic arm toward it. A gripper grasps the melon and detaches it from the vine. The real-time control hardware architecture consists of a blackboard system, with autonomous modules for sensing, planning and control connected through a PC bus. Approximately 85% of the fruit are successfully located and harvested.
---
paper_title: Design and implementation of an aided fruit‐harvesting robot (Agribot)
paper_content:
This work presents a robot prototype designed and built for a new aided fruit‐harvesting strategy in highly unstructured environments, involving human‐machine task distribution. The operator drives the robotic harvester and performs the detection of fruits by means of a laser range‐finder, the computer performs the precise location of the fruits, computes adequate picking sequences and controls the motion of all the mechanical components (picking arm and gripper‐cutter). Throughout this work, the specific design of every module of the robotized fruit harvester is presented. The harvester has been built and laboratory tests with artificial trees were conducted to check range‐finder’s localization accuracy and dependence on external conditions, harvesting arm’s velocity, positioning accuracy and repeatability; and gripper‐cutter performance. Results show excellent range‐finder and harvesting arm operation, while a bottleneck is detected in gripper‐cutter performance. Some figures showing overall performance are given.
---
paper_title: Review on fruit harvesting method for potential use of automatic fruit harvesting systems
paper_content:
In the horticultural industry, conventional harvesting is done by 'handpicking' to remove hundreds of fruits, such as citrus, in random spatial locations on individual fruit trees. It is well known that harvesting fruits on a large scale is still inefficient and not cost-effective. To solve this challenging task, mechanical harvesting systems have been investigated and practiced to enhance the profitability and efficiency of horticultural businesses. However, they often damage fruits in the harvesting process. Development of efficient fruit removal methods is required to maintain fruit quality. This paper reviews fruit harvesting systems ranging from purely mechanical systems, in which operator involvement is still required, to automatic robotic harvesting systems, which require minimal or no human intervention in their operation. Research on machine vision methodologies used for the automatic detection, inspection and location of fruits for harvesting is also included. The review is focused on citrus fruits because research on citrus fruit harvesting mechanisms is somewhat more advanced than for other crops. Major issues are addressed in the camera sensor and filter designs and the image segmentation methods used to identify the fruits within the image. From this review, the major research issues are identified as future research directions.
---
paper_title: Development of an autonomous kiwifruit picking robot
paper_content:
The design concept and development status of an autonomous kiwifruit-picking robot is presented. The robot has an intelligent vision system that ensures that only “good” fruit is picked. The robot receives instruction by radio link and operates autonomously as it navigates through the orchard, picking fruit, unloading full bins of fruit, fetching empty bins and protecting the picked fruit from rain. The robot has four picking arms, each of which will pick one fruit per second. To extend the useful annual work period of the robot, it is envisaged that it will also be used to pollinate kiwifruit flowers.
---
paper_title: On the future of automated selective asparagus harvesting technology
paper_content:
This paper assesses the current state and likely future developments of automated selective asparagus harvesting. It reports the current stage of a research and development programme, based on a selective asparagus harvester concept embodying electronic technology, that has been underway since 1989 at the Centre for Advanced Manufacturing and Industrial Automation (CAMIA) in the University of Wollongong. The findings of some seminal field trials of current asparagus harvesting technologies conducted in the US in May 1995 by the Washington Asparagus Commission in which the CAMIA machine was included, are discussed. The implications of these results are examined in regard to the future development of automated selective asparagus harvesters. It is concluded that further research and development is needed to balance the technological and financial factors before the economic viability of mechanised selective asparagus harvesting is assured. In this context, the CAMIA concept of a selective asparagus harvester employing electronic technology provides a sound platform for exploiting on-going rapid developments in control and sensors.
---
paper_title: Robotic Harvesting System for Eggplants
paper_content:
The harvesting operation for eggplants is complicated and accounts for a little less than 40% of the total number of working hours. For automating the harvesting operation, an intelligent robot that can emulate the judgment of human labor is necessary. This study was conducted with a view to developing a robotic harvesting system that performs recognition, approach, and picking tasks. In order to accomplish these tasks, 3 essential components were developed. First, a machine vision algorithm combining a color segment operation and a vertical dividing operation was developed. The algorithm could detect the fruit even under different light conditions. Next, a visual feedback fuzzy control model to actuate a manipulator was designed. The control model enabled the manipulator end to approach the fruit from a distance of 300 mm. Furthermore, an end-effector composed of fruit-grasping mechanism, a size-judging mechanism, and a peduncle-cutting mechanism was developed. It produced enough force for grasping the fruit and cutting the tough peduncle. Finally, the 3 essential components were functionally combined, and a basic harvesting experiment was conducted in the laboratory to evaluate the performance of the system. The system showed a successful harvesting rate of 62.5%, although the end-effector cut the peduncle at a slightly higher position from the fruit base. The execution time for harvesting of an eggplant was 64.1 s.
---
paper_title: An End-Effector for Robotic Removal of Citrus from the tree
paper_content:
The design of a robotic end-effector for picking citrus fruit is presented and its performance evaluated. The end-effector utilized a rotating-lip mechanism to capture a fruit. Incorporated into the end-effector were a color CCD camera and an ultrasonic transducer for determining the location of a fruit in three dimensions. End-effector performance was assessed by quantifying its capture envelope, fruit removal success rate, and damage inflicted to fruit and tree. The capture envelope was determined with laboratory tests while success and damage rates were quantified through field trials. The end-effector successfully removed fruit in 69% of the pick attempts and caused damage on 37% of the pick attempts. It was concluded that the rotating-lip approach to citrus removal was appropriate but refinement of the end-effector was needed to improve its success rate and to reduce damage rates.
---
paper_title: Autonomous Fruit Picking Machine: A Robotic Apple Harvester
paper_content:
This paper describes the construction and functionality of an Autonomous Fruit Picking Machine (AFPM) for robotic apple harvesting. The key element for the success of the AFPM is the integrated approach which combines state-of-the-art industrial components with the newly designed flexible gripper. The gripper consists of a silicone funnel with a camera mounted inside. The proposed concepts guarantee adequate control of the autonomous fruit harvesting operation globally and of the fruit picking cycle particularly. Extensive experiments in the field validate the functionality of the AFPM.
---
paper_title: Design and Experiment of Intelligent Monorail Cucumbers Harvester System
paper_content:
Cucumber, which is planted over a wide area, has a large market share. In order to achieve high harvesting efficiency under the existing planting patterns, an intelligent monorail cucumber harvester system has been designed. The intelligent harvester, which runs on a monorail, is mainly composed of a harvest box, a monorail assembly bracket system, and a control system. Its harvest box is placed on the hooks of the harvester. Image capture with a combination of an infrared sensor and a smart camera is feasible. The mature cucumbers can be distinguished by using a gray transformation algorithm, an image-edge trimming algorithm, and a locally maximal between-class variance threshold algorithm. The end-effector can be controlled to cut or avoid the cucumbers which hang on the hooks under the monorail while the vines are being lowered. The results show that the average harvesting success rate is 97.28% when the running speed is 0.6 m/s. This meets harvesting requirements and shows good prospects.
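The "locally maximal variance between-class threshold algorithm" mentioned above reads like an Otsu-style criterion. A plain NumPy implementation of the between-class-variance maximisation (shown here in its global form, not applied region by region) is sketched below as an illustration; it is not the authors' code.

```python
import numpy as np

def between_class_variance_threshold(gray):
    """Return the grey level that maximises the between-class variance
    (Otsu's criterion) for a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    total_mean = np.dot(np.arange(256), prob)
    best_t, best_var = 0, -1.0
    w0, mu0 = 0.0, 0.0
    for t in range(255):
        w0 += prob[t]                      # weight of the lower class
        mu0 += t * prob[t]                 # unnormalised mean of the lower class
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mean0 = mu0 / w0
        mean1 = (total_mean - mu0) / w1
        var_between = w0 * w1 * (mean0 - mean1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Usage: mature_mask = gray_image > between_class_variance_threshold(gray_image)
```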
---
paper_title: Robotic Harvesting of Rosa Damascena Using Stereoscopic Machine Vision
paper_content:
In this paper, we propose a system for the automated harvest of Rosa Damascena with the aid of computer vision techniques. The three-dimensional positions of the flowers are obtained by a stereo vision technique. For harvesting the flowers a four-DOF manipulator is used. Also, an end-effector is designed to harvest the flowers by cutting them. To analyze the stereoscopic error, a factorial experiment in the form of a completely randomized design with two factors was conducted. The first factor was the distance between the cameras at the three levels of 5, 10 and 20 cm, and the second factor was the distance between the camera and the flower at the four levels of 50, 75, 100 and 125 cm. The analysis was done using Duncan's Multiple Range Test at the 1% level. We concluded that increasing the distance between the cameras reduces the stereoscopic error, while increasing the distance between the cameras and the flowers increases the error. Finally, the manipulator and the vision system were evaluated together. The best results were obtained when the distance between the cameras was 100 mm. In this case, 82.22 percent of the flowers were successfully harvested.
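The experimental finding that the stereoscopic error falls with camera separation and grows with object distance follows from the first-order triangulation error model dZ = Z^2 * delta_d / (f * B). The short calculation below illustrates this trend; the focal length and disparity matching error are assumed values not given in the abstract.

```python
def depth_error_mm(depth_mm, baseline_mm, focal_px, disparity_error_px=1.0):
    """First-order stereo depth error: dZ = Z^2 * delta_d / (f * B)."""
    return (depth_mm ** 2) * disparity_error_px / (focal_px * baseline_mm)

# Illustrative numbers (focal length and matching error are assumptions):
for baseline in (50, 100, 200):                 # mm, cf. the 5/10/20 cm factor levels
    print(baseline, round(depth_error_mm(1000, baseline, focal_px=800), 1))
```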
---
paper_title: Strawberry Harvesting Robot on Table-top Culture
paper_content:
This paper reports the development of a robot for harvesting strawberries grown in table-top culture. The robot mainly consisted of a 4-DOF manipulator, a harvesting end-effector using suction force, and a visual sensor. A Cartesian coordinate type manipulator was adopted and it was suspended under the planting bed of the strawberries. The robot was capable of moving along the planting bed without a traveling device because one prismatic joint of the manipulator played the role of a traveling device. The end-effector could suck a fruit using a vacuum device and it could compensate for detection errors caused by the visual sensor. The visual sensor gave the robot two-dimensional information based on an acquired image, and fruit depth was calculated as the average of previously harvested fruit depths obtained from the end-effector positions when the robot actually harvested. The end-effector moved toward a target fruit based on the three-dimensional position of the target fruit until the fruit was detected by three pairs of photo-interrupters on the sucking head. After cutting the peduncle by using the robot's wrist joint, the fruits passed through the tube and were transported to the tray. From the results of the harvesting experiments, it was observed that the robot could harvest all target fruits without injury, and that depth measurement by the visual sensor was simplified because the distance between the robot and the fruits was kept approximately constant by suspending the robot under the planting bed.
---
paper_title: Automatic harvesting of asparagus: an application of robot vision to agriculture
paper_content:
This work presents a system for the automatic selective harvesting of asparagus in open field being developed in the framework of the Italian National Project on Robotics. It is composed of a mobile robot, equipped with a suitable manipulator, and driven by a stereo-vision module. In this paper we discuss in detail the problems related to the vision module.
---
paper_title: Robotics and intelligent machines in agriculture
paper_content:
From the prehistoric times of the hunter/gatherers until the present, man has been the sole source of intelligence in his food production system. The combined factors of increased international competition in the agricultural sector, advances in computer technology, and the rapidly decreasing costs of new technology have now brought us to the time when the widespread application of intelligent machines in agriculture is imminent. Thus practical agricultural robots now seem possible. Agricultural robots and intelligent machines will increase and become commonplace in the developed countries during the next decade. The status of intelligent machines and robotics in agriculture as it stands at present is reviewed; i.e., where it is going and some of the obstacles that must be overcome.
---
paper_title: Initial experiments in robotic mushroom harvesting
paper_content:
The locating and picking performance of a robotic mushroom harvesting rig is assessed. The robot system has three main elements: a black and white vision system incorporating a mushroom locating image analysis algorithm, a computer-controlled Cartesian robot and a specialised mushroom picking end-effector. In 18 experiments, on 815 mushroom targets, 689 (84%) were located by the image analysis algorithm and 465 (57%) were picked successfully. Overlapped, small mushrooms and closely packed, touching mushrooms proved most difficult to pick using twist as the primary detachment method. The paper concludes that considerable improvements to picking performance could be achieved by using bend as the primary detachment method and by developing a suitable picking strategy to predict bend direction and picking order.
---
paper_title: MECHANICAL HARVESTING SYSTEMS FOR THE FLORIDA CITRUS JUICE INDUSTRY
paper_content:
Florida harvests 95% of its 245,000 ha of oranges for processing into juice. Until recently, all fruit were hand harvested. The supply of workers is decreasing and harvest cost is increasing. Harvest cost must be reduced about 50% to effectively compete in free-trade markets. The Florida growers are funding a Harvesting Program to develop new harvesting technologies. Eight mechanical harvesting systems that are emerging for commercial use are described. About 6,000 ha were mechanically harvested during the 2001-2002 season, and the total should increase every year.
---
paper_title: An End-Effector and Manipulator Control for Tomato Cluster Harvesting Robot
paper_content:
An end-effector and a control method for a tomato-cluster harvesting manipulator are proposed in this study. When fruit clusters are harvested, the peduncle direction is needed for cutting, but it is not easy to detect because peduncles are often occluded by leaves, stems and fruits. The end-effector therefore needs to grasp the peduncle without information on its direction. A trial end-effector was built that can surround the main stem and grasp and cut the peduncle with its fingers. When a tomato cluster is transported into a container by the manipulator, both high transportation speed and vibration damping are required. Such a control problem is generally called motion and vibration control (MOVIC). The input shaping method is one of the representative control methods for MOVIC. It requires accurate natural frequencies of the manipulated fruit cluster in order to damp the flexible vibration while the robot accelerates the target. Tomato clusters, however, show individual variation in their natural frequencies, so the input shaping method cannot be applied directly. To overcome this problem, a natural-frequency identification method was combined with input shaping in the proposed approach. The identification was based on real-time sensing data from a machine-vision system and a force sensor, together with a database of physical properties of tomato clusters. The usefulness of the proposed method was verified through both numerical simulations and hardware experiments.
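For illustration, the zero-vibration (ZV) input shaper at the heart of such a scheme can be written in a few lines. The sketch below is a generic ZV shaper, not the authors' implementation, and the 3 Hz natural frequency and 2% damping ratio in the example are assumed placeholder values that, in the proposed method, would come from the on-line identification step.

```python
import numpy as np

def zv_shaper(f_n, zeta):
    """Zero-Vibration (ZV) input shaper for a lightly damped mode.

    f_n  : identified natural frequency of the fruit cluster [Hz]
    zeta : damping ratio (0 < zeta < 1)
    Returns impulse times [s] and amplitudes (summing to 1).
    """
    wd = 2.0 * np.pi * f_n * np.sqrt(1.0 - zeta**2)   # damped angular frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    times = np.array([0.0, np.pi / wd])               # second impulse half a damped period later
    amps = np.array([1.0, K]) / (1.0 + K)
    return times, amps

def shape_command(u, dt, times, amps):
    """Convolve a reference command sampled every dt seconds with the shaper impulses."""
    n = len(u) + int(round(times[-1] / dt))
    shaped = np.zeros(n)
    for t_i, a_i in zip(times, amps):
        k = int(round(t_i / dt))
        shaped[k:k + len(u)] += a_i * np.asarray(u)
    return shaped

# Example: shape a trapezoidal velocity profile for an assumed 3 Hz, 2% damped cluster mode.
dt = 0.001
u = np.concatenate([np.linspace(0, 1, 200), np.ones(400), np.linspace(1, 0, 200)])
t_imp, a_imp = zv_shaper(f_n=3.0, zeta=0.02)
u_shaped = shape_command(u, dt, t_imp, a_imp)
```

The shaped command reaches the same final state slightly later (by half a damped period) but leaves far less residual oscillation in the grasped cluster.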
---
paper_title: Design of an advanced prototype robot for white asparagus harvesting
paper_content:
Agricultural workplaces are a prototypical example of unstructured and variant environments, offering a novel challenge to robotic research and automation. In this paper, a robot prototype is presented for white asparagus harvesting. White asparagus is a rather delicate vegetable with unique cultivation characteristics, which is exceptionally tiring and laborious to collect and requires specialized workers for harvesting. In this paper, we propose an integrated robotic system able to move in the field, identify white asparagus stems and collect them without damaging them. We highlight the design decisions for every module of the harvester and outline the overall architecture of the system.
---
paper_title: Development of a cucumber leaf picking device for greenhouse production
paper_content:
A leaf picking device for cucumbers was designed and evaluated. The picking device is manually operated but can be used as a picking tool for a robot. The device consisted of a picking rotor composed from knives and brushes, a motor and a vacuum cleaner. The performance of removal, cutting, torque and shredding was investigated in the laboratory experiments. In the greenhouse experiments, the performance of picking and cutting was investigated. The results were as follows: (1) the highest removal success rate was achieved at a rotation speed of 1000 min⁻¹, a rotor configuration of ‘two knives and two brushes’ and an insertion speed of 50 mm s⁻¹; (2) in this mechanical setting, the percentage of the summation of the smooth cut surface of the leaf stalk and the smooth cut surface with small skin was 90%; (3) the required torque was 0.09–0.96 N m and the average particle size of the shredded leaves was 7.3–21.2 cm²; (4) the percentages of area of the dropped particles to the average area of leaves was 10–16%; (5) the average execution time per leaf was 1.1–2.3 s.
---
paper_title: Design and co-simulation for tomato harvesting robots
paper_content:
A tomato harvesting robot consisting of a mobile vehicle, a manipulator, a vision system, an end-effector, a cutter and a control system is designed. Each module is described in detail. The manipulator has 4 DOF with one parallel linkage, which generates motion in two directions. The harvesting robot performed the harvesting task based on a machine-vision servo system. A personal computer acting as the host computer controlled the manipulator, the end-effector and the cutter. The viability and validity of the tomato harvesting robot were preliminarily confirmed by co-simulation of the electromechanical system.
---
paper_title: Robotics of Fruit Harvesting: A State-of-the-art Review
paper_content:
Abstract Mechanization of the harvesting of fruits, and primarily of those that are destined for the fresh market, is highly desirable in many countries due to the decrease in seasonal labour availability. Some of the technology exists for harvesting fruit intended for processing, but its utilization for soft, fresh fruit is limited, because of the excessive mechanical damage to the fruit during mechanical harvesting. An alternative to the current mechanical harvesting systems, superior from the point of view of fruit quality, but far more ambitious, is automated fruit picking with a robotic system which emulates the human picker. The challenge of developing a cost-effective robotic system for fruit picking has been taken up by researchers at several places in the world. The major problems that have to be solved with a robotic picking system are recognizing and locating the fruit and detaching it according to prescribed criteria, without damaging either the fruit or the tree. In addition, the robotic system needs to be economically sound to warrant its use as an alternative method to hand picking. This paper reviews the work carried out during the past 10 years in several countries, in developing a robot for picking fruit. Its major objective is to focus on the technological progress made so far, point out the problems still to be solved, and outline the conditions, technological and socio-economic, under which the robotic method will be accepted.
---
paper_title: A prototype of an orange picking robot: past history, the new robot and experimental results
paper_content:
Purpose – To construct a commercial agricultural manipulator for fruit picking and handling without human intervention. Design/methodology/approach – Describes a research activity involving a totally autonomous robot for fruit picking and handling crates. Findings – Picking time for the robotic fruit picker at 8.7 s per orange is longer than the evaluated cited time of 6 s per orange. Research limitations/implications – The final system, recently tested, has not yet achieved a level of productivity capable of replacing human pickers. Further mechanical modifications and more robust and adaptive algorithms are needed to achieve a stronger robot system. Practical implications – Experimental results and new simulations look very promising. Originality/value – Will help to limit costs and guarantee a high degree of reliability.
---
paper_title: THE FLORIDA ROBOTIC GROVE-LAB
paper_content:
A mobile grove-lab was developed to study the use of robotic technology for picking oranges under actual production conditions. The design and operation of the citrus picking robot developed for this facility is described. The sensor system developed to identify and locate fruit in real-time and the state network programming technique used to develop a task-level control program for the citrus picking robot are discussed. The suitability of the vision system and state network programming technique for real-time, vision-servo robotic picking is demonstrated. It was concluded that the technical and economic practicality of robotic citrus harvesting can only be demonstrated with an operational multiple-arm harvesting system. Multiple usage of robotic harvesting equipment and acquisition of detailed production data by a robotic harvester were identified as intangible benefits of robotic harvesting which should encourage the commercial development of a multiple-arm machine.
---
paper_title: Robotic manipulators in horticulture : a review
paper_content:
Abstract This paper covers the use of jointed manipulators in handling biological materials within the horticultural industry. Potential applications are reviewed by looking at previous research work and the scope for future development based on economic as well as technical considerations. It is concluded that applications with long seasons are the most viable, with flexibility of use between tasks an important consideration for applications with short seasons. A general specification highlighting the differences between horticultural and manufacturing industry requirements has been derived and some engineering proposals put forward to meet it. The use of computer vision is thought to be the best choice of primary sensor system. A geometrical layout which allows the manipulator to approach the target along a "line of sight" is favoured, as it simplifies collision avoidance in an unstructured environment and might allow recalibration of manipulator position during extension. Pneumatic actuators are thought to offer many advantages over electric or hydraulic drives provided that adequate dynamic control can be achieved. It is concluded that the generic differences in specification between industrial and horticultural manipulators provide an opportunity to take a new approach in manipulator design. In order to provide the analytical tools necessary for this new approach further research is needed in the interrelated areas of actuator choice, geometrical configuration and control strategy.
---
paper_title: New strawberry harvesting robot for elevated-trough culture
paper_content:
Feng Qingchun, Wang Xiu, Zheng Wengang, Qiu Quan, Jiang Kai (National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China). Abstract: In order to improve robotic harvesting and reduce production cost, a harvesting robot system for strawberries grown in elevated-trough culture was designed. It was intended to serve sightseeing agriculture and technological education. Based on a sonar-camera sensor, an autonomous navigation system was built so that the harvesting robot could move along the trough lines independently. Mature fruits were recognized according to the H (Hue) and S (Saturation) color features, and the picking points were located by the binocular-vision unit. A nondestructive end-effector, used to suck the fruit and to hold and cut the fruit stem, was designed to prevent pericarp damage and disease infection. A joint-type industrial manipulator with six degrees-of-freedom (DOF) was used to carry the end-effector. The key points and time steps for collision-free and rapid motion of the manipulator were planned. Experimental results showed that all 100 mature strawberry targets were recognized automatically in the harvesting test. The harvesting success rate was 86%, and a successful harvesting operation took 31.3 seconds on average, including a single harvest action of 10 seconds. The average fruit-location error was less than 4.6 mm. Keywords: strawberry harvesting robot, elevated-trough culture, machine vision, nondestructive end-effector, autonomous navigation system, manipulator, sensor. DOI: 10.3965/j.ijabe.20120502.001. Citation: Feng Q C, Wang X, Zheng W G, Qiu Q, Jiang K. New strawberry harvesting robot for elevated-trough culture. Int J Agric & Biol Eng, 2012; 5(2): 1
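The hue/saturation ripeness test mentioned above can be sketched with OpenCV as follows; the red-hue and saturation thresholds are illustrative assumptions, not the calibration used by the authors.

```python
import cv2
import numpy as np

def ripe_fruit_mask(bgr_image):
    """Return a binary mask of candidate ripe-strawberry pixels based on Hue and Saturation.

    Red hues wrap around 0 in OpenCV's 0-179 hue scale, so two ranges are combined.
    The threshold values below are illustrative placeholders.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower_red1 = np.array([0, 100, 60]);   upper_red1 = np.array([10, 255, 255])
    lower_red2 = np.array([170, 100, 60]); upper_red2 = np.array([179, 255, 255])
    mask = cv2.bitwise_or(cv2.inRange(hsv, lower_red1, upper_red1),
                          cv2.inRange(hsv, lower_red2, upper_red2))
    # Clean up speckle and fill small holes before blob extraction.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Candidate fruit regions can then be taken as the large connected components of the mask:
# n, labels, stats, centroids = cv2.connectedComponentsWithStats(ripe_fruit_mask(img))
```

Stereo matching of the surviving blobs in the two binocular views would then provide the 3-D picking points.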
---
paper_title: Evaluation of a strawberry-harvesting robot in a field test
paper_content:
We developed a strawberry-harvesting robot, consisting of a cylindrical manipulator, end-effector, machine vision unit, storage unit and travelling unit, for application to an elevated substrate culture. The robot was based on the development concepts of night operation, peduncle handling and task sharing with workers, to overcome the robotic harvesting problems identified by previous studies, such as low work efficiency, low success rate, fruit damage, difficulty of detection in unstable illumination and high cost. In functional tests, the machine vision assessments of fruit maturity agreed with human assessments for the Amaotome and Beni-hoppe cultivars, but the performance for Amaotome was significantly better. Moreover, the machine vision unit correctly detected a peduncle of the target fruit at a rate of 60%. In harvesting tests conducted throughout the harvest season on target fruits with a maturity of 80% or more, the successful harvesting rate of the system was 41.3% when fruits were picked using a suction device before cutting the peduncle, while the rate was 34.9% when fruits were picked without suction. There were no significant differences between the two picking methods in terms of unsuccessful picking rates. The execution time for the successful harvest of a single fruit, including the time taken to transfer the harvested fruit to a tray, was 11.5 s.
---
paper_title: Agricultural robot in grape production system
paper_content:
A multipurpose agricultural robot that works in a vineyard has been studied. The robot, which consists of a manipulator, a visual sensor, a travelling device and end-effectors, is able to perform several tasks by changing the end-effector. Four end-effectors, for harvesting, berry thinning, spraying and bagging, have been developed for this robot system. The harvesting end-effector, which grasps and cuts the rachis, was able to harvest bunches with no damage. The berry-thinning end-effector consists of three unified bunch-shaped parts. The spraying end-effector sprays the target uniformly, and the bagging end-effector is able to put bags on growing bunches one by one continuously. Field and laboratory experiments showed that each end-effector could perform its task efficiently.
---
paper_title: An Autonomous Robot for Harvesting Cucumbers in Greenhouses
paper_content:
This paper describes the concept of an autonomous robot for harvesting cucumbers in greenhouses. A description is given of the working environment of the robot and the logistics of harvesting. It is stated that for a 2 ha Dutch nursery, 4 harvesting robots and one docking station are needed during the peak season. Based on these preliminaries, the design specifications of the harvest robot are defined. The main requirement is that a single harvest operation may take at most 10 s. Then, the paper focuses on the individual hardware and software components of the robot. These include the autonomous vehicle, the manipulator, the end-effector, the two computer vision systems for detection and 3D imaging of the fruit and the environment and, finally, a control scheme that generates collision-free motions for the manipulator during harvesting. The manipulator has seven degrees-of-freedom (DOF). This is sufficient for the harvesting task. The end-effector is designed such that it handles the soft fruit without loss of quality. The thermal cutting device included in the end-effector prevents the spreading of viruses through the greenhouse. The computer vision system is able to detect more than 95% of the cucumbers in a greenhouse. Using geometric models the ripeness of the cucumbers is determined. A motion planner based on the A*-search algorithm assures collision-free eye-hand co-ordination. In autumn 2001 system integration took place and the harvesting robot was tested in a greenhouse. With a success rate of 80%, field tests confirmed the ability of the robot to pick cucumbers without human interference. On average the robot needed 45 s to pick one cucumber. Future research focuses on hardware and software solutions to improve the picking speed and accuracy of the eye-hand co-ordination of the robot.
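The motion-planning step can be illustrated with a generic A*-search sketch on a 2-D occupancy grid; the robot's actual planner searches the manipulator's configuration space, so this is only a minimal analogue.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2-D occupancy grid (0 = free, 1 = obstacle).

    start, goal: (row, col) tuples. Returns the path as a list of cells, or None.
    """
    def h(a, b):                       # Manhattan heuristic, admissible for 4-connectivity
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start, goal), 0, start)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, current = heapq.heappop(open_heap)
        if current == goal:            # reconstruct the path back to the start
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if g > g_cost.get(current, float("inf")):
            continue                   # stale heap entry, a cheaper route was found later
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (current[0] + dr, current[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = current
                    heapq.heappush(open_heap, (ng + h(nb, goal), ng, nb))
    return None

# Example: plan around a small obstacle block.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))
```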
---
paper_title: Combination of RGB and Multispectral Imagery for Discrimination of Cabernet Sauvignon Grapevine Elements
paper_content:
This paper proposes a sequential masking algorithm based on the K-means method that combines RGB and multispectral imagery for discrimination of Cabernet Sauvignon grapevine elements in unstructured natural environments, without placing any screen behind the canopy and without any previous preparation of the vineyard. In this way, image pixels are classified into five clusters corresponding to leaves, stems, branches, fruit and background. A custom-made sensory rig that integrates a CCD camera and a servo-controlled filter wheel has been specially designed and manufactured for the acquisition of images during the experimental stage. The proposed algorithm is extremely simple, efficient, and provides a satisfactory rate of classification success. All these features make the proposed algorithm an appropriate candidate for numerous precision-viticulture tasks, such as yield estimation, estimation of water and nutrient needs, spraying and harvesting.
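A minimal sketch of the pixel-clustering idea, using scikit-learn K-means on stacked RGB and multispectral channels with five clusters; the band layout and the per-channel normalisation are assumptions, the sequential masking details are not reproduced, and assigning cluster labels to the five classes still needs a separate (e.g. manual) step.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_vine_pixels(rgb, multispectral, n_clusters=5, seed=0):
    """Cluster co-registered RGB + multispectral pixels into n_clusters groups.

    rgb           : (H, W, 3) array
    multispectral : (H, W, B) array of additional bands, registered to the RGB image
    Returns an (H, W) label map; mapping labels to {leaf, stem, branch, fruit, background}
    requires a further assignment step.
    """
    h, w, _ = rgb.shape
    features = np.concatenate([rgb.reshape(h * w, -1),
                               multispectral.reshape(h * w, -1)], axis=1).astype(float)
    # Normalise each channel so no single band dominates the Euclidean distance.
    features = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-9)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(features)
    return labels.reshape(h, w)
```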
---
| Title: Deliberation on Design Strategies of Automatic Harvesting Systems: A Survey
Section 1: Introduction
Description 1: Introduce the importance of agriculture and the necessity for mechanization and automation in modern agriculture.
Section 2: Design Strategies
Description 2: Discuss the components and design strategies involved in creating fruit harvesting robots, including recognition, picking, and moving systems.
Section 3: Picking System
Description 3: Detail the requirements and strategies for designing the picking or gripping systems of fruit harvesting robots.
Section 4: Color Camera Recognition System
Description 4: Explain the use of color camera systems for fruit recognition and location in agricultural settings.
Section 5: Multispectral Recognition System
Description 5: Discuss the application and benefits of multispectral recognition systems in detecting and analyzing agricultural products.
Section 6: Fruit Harvesting Robots
Description 6: Provide a historical overview and discuss various developments and advancements in fruit harvesting robots over the years.
Section 7: Discussion
Description 7: Analyze the current state, challenges, and future prospects for commercializing fruit harvesting robots.
Section 8: Conflicts of Interest
Description 8: State any potential conflicts of interest related to the survey. |
Information Retrieval (IR) through Semantic Web (SW): An Overview | 14 | ---
paper_title: Information retrieval on the Semantic Web:
paper_content:
The vision of the Semantic Web is that it will be much like the Web we know today, except that documents will be enriched by annotations in machine understandable markup. These annotations will provide metadata about the documents as well as machine interpretable statements capturing some of the meaning of document content. We discuss how the information retrieval paradigm might be recast in such an environment. We suggest that retrieval can be tightly bound to inference. Doing so makes today's Web search engines useful to Semantic Web inference engines, and causes improvements in either retrieval or inference to lead directly to improvements in the other.
---
paper_title: An Ontology-Based Information Retrieval Model
paper_content:
Semantic search has been one of the motivations of the Semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based KBs to improve search over large document repositories. Our approach includes an ontology-based scheme for the semi-automatic annotation of documents, and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with keyword-based search to achieve tolerance to KB incompleteness. Our proposal is illustrated with sample experiments showing improvements with respect to keyword-based search, and providing ground for further research and discussion.
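As a baseline for the adapted model described above, the classic keyword vector-space ranking can be sketched as follows; this is plain TF-IDF with cosine similarity and deliberately omits the paper's ontology-based annotation weighting (the example documents and query are made up).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "semantic web ontologies describe documents with machine-readable annotations",
    "classic keyword retrieval ranks documents by term frequency",
    "ontology-based annotation improves retrieval over large repositories",
]
query = ["ontology annotation retrieval"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)          # documents -> TF-IDF vectors
query_vector = vectorizer.transform(query)                 # query mapped into the same space
scores = cosine_similarity(query_vector, doc_vectors)[0]   # cosine similarity = ranking score
ranking = sorted(zip(scores, range(len(documents))), reverse=True)
print(ranking)
```

In the ontology-based scheme, the document vectors would additionally carry weighted annotation terms, but the ranking step remains this cosine comparison.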
---
| Title: Information Retrieval (IR) through Semantic Web (SW): An Overview
Section 1: INTRODUCTION
Description 1: Introduce the concept of combining text documents with Semantic Web technologies and the limitations of current IR techniques.
Section 2: SEMANTIC WEB (SW)
Description 2: Discuss the current state and philosophy of the Semantic Web, including its definition and reasons for its limited use in practice.
Section 3: PRINCIPLE OF SW
Description 3: Explain the differences between the Semantic Web and the World Wide Web, highlighting how SW is machine understandable.
Section 4: SEMANTIC WEB ARCHITECTURE
Description 4: Detail the components and structure of Semantic Web architecture, including URIs, UNICODE, RDF, and inference mechanisms.
Section 5: INFORMATION RETRIEVAL (IR)
Description 5: Describe the processes and techniques involved in Information Retrieval, focusing on keyword extraction and document ranking.
Section 6: IR PROCESS and ARCHITECTURE
Description 6: Outline the IR process and its architecture, emphasizing the role of ontology in storing background knowledge and aiding query transformation.
Section 7: HIR (HYBRID INFORMATION RETRIEVAL)
Description 7: Introduce the Hybrid Information Retrieval approach, explaining how it integrates standard IR methods to avoid document differences.
Section 8: Components of HIR
Description 8: List and describe the specific components involved in the Hybrid Information Retrieval system.
Section 9: PROTOTYPE SYSTEMS
Description 9: Present the three prototype systems developed for IR within the Semantic Web: OWLIR, SWANGLER, and SWOOGLE, explaining their functionalities and use cases.
Section 10: OWLIR
Description 10: Dive into the details of OWLIR, its architecture, and its components like HAIRCUT and WONDIR, describing how it retrieves text and semantic documents.
Section 11: Swangler
Description 11: Explain the SWANGLER prototype system, its purpose in enriching RDF documents, and how it interacts with search engines like Google.
Section 12: Swoogle
Description 12: Describe the SWOOGLE prototype search engine, its mechanism for indexing and retrieving Semantic Web documents, and its analysis activities.
Section 13: Analysis
Description 13: Summarize the analytical outcomes of using Swoogle, focusing on its ability to search appropriate ontologies, data instances, and characterizing the Semantic Web.
Section 14: CONCLUSIONS
Description 14: Conclude with a summary emphasizing the importance of the Semantic Web in IR, the effectiveness of the described prototype systems, and future opportunities. |
Quartz-Enhanced Photoacoustic Spectroscopy: A Review | 19 | ---
paper_title: High-Performance InP-Based Mid-IR Quantum Cascade Lasers
paper_content:
Quantum cascade lasers (QCLs) were once considered as inefficient devices, as the wall-plug efficiency (WPE) was merely a few percent at room temperature. But this situation has changed in the past few years, as dramatic enhancements to the output power and WPE have been made for InP-based mid-IR QCLs. Room temperature continuous-wave (CW) output power as high as 2.8 W and WPE as high as 15% have now been demonstrated for individual devices. Along with the fundamental exploration of refining the design and improving the material quality, a consistent determination of important device performance parameters allows for strategically addressing each component that can be improved potentially. In this paper, we present quantitative experimental evidence backing up the strategies we have adopted to improve the WPE for QCLs with room temperature CW operation.
---
paper_title: External cavity quantum cascade laser
paper_content:
In this paper we review the progress of the development of mid-infrared quantum cascade lasers (QCLs) operated in an external cavity configuration. We concentrate on QCLs based on the bound-to-continuum design, since this design is especially suitable for broadband applications. Since they were first demonstrated, these laser-based tunable sources have improved in performance in terms of output power, duty cycle, operation temperature and tuneability. Nowadays they are an interesting alternative to FTIRs for some applications. They operate at room temperature, feature a high spectral resolution while being small in size. They were successfully used in different absorption spectroscopy techniques. Due to their vast potential for applications in industry, medicine, security and research, these sources enjoy increasing interest within the research community as well as in industry.
---
paper_title: Cavity Ring-Down Spectroscopy: Techniques and Applications
paper_content:
Preface List of contributors Glossary Chapter 1 - An introduction to cavity ring-down spectroscopy 1.1 Introduction 1.2 Direct absorption spectroscopy 1.3 Basic cavity ring down spectroscopy setup 1.4 A more refined picture 1.5 Fitting of cavity ring down transients 1.6 A few examples 1.7 Going beyond the standard pulsed CRDS experiment 1.8 Summary 1.9 References Chapter 2 - Cavity enhanced techniques using continuous wave lasers 2.1 Introduction 2.1 Properties of optical cavities and cw lasers relevant to cavity enhanced spectroscopy 2.3 Experimental methods for cw laser cavity enhanced spectroscopy 2.4 Spectroscopy with resonant cavities 2.5 Summary Chapter 3 - Broadband cavity ring-down spectroscopy 3.1 Introduction. 3.2 The time and wavelength evolution of a single ringdown event. 3.3 Two dimensional techniques: resolving broadband cavity output in time and wavelength. 3.4 One dimensional techniques: time or wavelength. 3.5 How to extract quantitative information from broadband spectra. 3.6 Optimising the sensitivity of a broadband measurement. 3.7 Applications of broadband cavity methods. 3.8 References . Chapter 4 - Cavity ring-down spectroscopy in analytical chemistry 4.1 Introduction 4.2 Condensed media CRDS 4.3 Evanescent-wave CRDS 4.4 Future trends and perspectives Chapter 5 - Cavity ring-down spectroscopy using waveguides 5.1. Introduction 5.2. The basic experiments 5.3. Optics and Instrumentation 5.4. Review of waveguide CRD literature 5.5. Conclusion and outlook 5.6. Acknowledgements Chapter 6 - Cavity ring down spectroscopy of molecular transients of astrophysical interest 6.1. Introduction 6.2. Experimental 6.3. Astronomical considerations 6.4. Results 6.5. Outlook Acknowledgements References Chapter 7 - Applications of cavity ring-down spectroscopy in atmospheric chemistry 7.1. Brief overview 7.2. Measurement of trace atmospheric species by CRDS 7.3. Laboratory based studies of atmospheric interest 7.4. Optical properties of atmospheric aerosol particles 7.5. Future developments Chapter 8 - Cavity ring-down spectroscopy for medical applications 8.1. Introduction 8.2. Trace gases in medicine and biology 8.3. Instrumentation for laser analytics of breath and other biological gas samples 8.4. Applications to life sciences 8.5. Conclusion and Perspectives 8.6. References Chapter 9: Studies into the growth mechanism of a-Si:H using in situ cavity ring-down techniques 9.1. Introduction 9.2. Gas phase CRDS on SiH x radicals 9.3. Thin film CRDS on dangling bonds in a-Si:H films (ex situ) 9.4. Evanescent wave CRDS on dangling bonds during a-Si:H film growth Chapter 10 - Cavity ring down spectroscopy for combustion studies 10.1. Introduction 10.2. General description of cavity ring down spectroscopy in flames 10.3. Experimental set-up 10.4. Quantitative concentration measurements in flames 10.5. Concentration profile determination 10.6. Specific difficulties in combustion studies 10.7. Case of particles: soot volume fraction determination 10.8. Conclusion and prospective References Appendix A Literature
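For orientation, the quantitative core of the ring-down technique treated in this book is the relation between decay time and intracavity absorption, α = (1/c)(1/τ − 1/τ0); the sketch below uses this standard expression with illustrative (assumed) numbers, not data from the book.

```python
C_LIGHT = 2.9979e10          # speed of light [cm/s]

def absorption_from_ringdown(tau, tau0):
    """Absorption coefficient alpha [cm^-1] from ring-down times measured with (tau)
    and without (tau0) the absorber: alpha = (1/c) * (1/tau - 1/tau0)."""
    return (1.0 / C_LIGHT) * (1.0 / tau - 1.0 / tau0)

# Illustrative numbers: 40 us empty-cavity decay, 38 us with the sample,
# and an assumed absorption cross-section of 1e-19 cm^2 per molecule.
alpha = absorption_from_ringdown(tau=38e-6, tau0=40e-6)
number_density = alpha / 1e-19          # molecules per cm^3
print(f"alpha = {alpha:.2e} cm^-1, N = {number_density:.2e} cm^-3")
```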
---
paper_title: Molecular gas sensing below parts per trillion: radiocarbon-dioxide optical detection.
paper_content:
Radiocarbon (14C) concentrations at a 43 parts-per-quadrillion level are measured by using saturated-absorption cavity ring-down spectroscopy, exciting radiocarbon-dioxide (14CO2) molecules at the 4.5 μm wavelength. The ultimate sensitivity limits of molecular trace gas sensing are pushed down to attobar pressures using a comb-assisted absorption spectroscopy setup. Such a result represents the lowest pressure ever detected for a gas of simple molecules. The unique sensitivity, the wide dynamic range, the compactness, and the relatively low cost of this table-top setup open new perspectives for 14C-tracing applications, such as radiocarbon dating, biomedicine, or environmental and earth sciences. The detection of other very rare molecules can be pursued as well thanks to the wide and continuous mid-IR spectral coverage of the described setup.
---
paper_title: Photoacoustic Techniques for Trace Gas Sensing Based on Semiconductor Laser Sources
paper_content:
The paper provides an overview on the use of photoacoustic sensors based on semiconductor laser sources for the detection of trace gases. We review the results obtained using standard, differential and quartz enhanced photoacoustic techniques.
---
paper_title: Terahertz quantum cascade lasers
paper_content:
Six years after their birth, terahertz quantum-cascade lasers can now deliver milliwatts or more of continuous-wave coherent radiation throughout the terahertz range — the spectral regime between millimetre and infrared wavelengths, which has long resisted development. This paper reviews the state-of-the-art and future prospects for these lasers, including efforts to increase their operating temperatures, deliver higher output powers and emit longer wavelengths.
---
paper_title: Cavity-Enhanced Direct Frequency Comb Spectroscopy: Technology and Applications
paper_content:
Cavity-enhanced direct frequency comb spectroscopy combines broad bandwidth, high spectral resolution, and ultrahigh detection sensitivity in one experimental platform based on an optical frequency comb efficiently coupled to a high-finesse cavity. The effective interaction length between light and matter is increased by the cavity, massively enhancing the sensitivity for measurement of optical losses. Individual comb components act as independent detection channels across a broad spectral window, providing rapid parallel processing. In this review we discuss the principles, the technology, and the first applications that demonstrate the enormous potential of this spectroscopic method. In particular, we describe various frequency comb sources, techniques for efficient coupling between comb and cavity, and detection schemes that utilize the technique's high-resolution, wide-bandwidth, and fast data-acquisition capabilities. We discuss a range of applications, including breath analysis for medical diagnosis, trace-impurity detection in specialty gases, and characterization of a supersonic jet of cold molecules.
---
paper_title: Quantum cascade laser-based integrated cavity output spectroscopy of exhaled nitric oxide
paper_content:
A nitric oxide (NO) sensor employing a thermoelectrically cooled, continuous-wave, distributed feedback quantum cascade laser operating at 5.47 µm (1828 cm⁻¹) and off-axis integrated cavity output spectroscopy was used to measure NO concentrations in exhaled breath. A minimum measurable concentration (3σ) of 3.6 parts-per-billion by volume (ppbv) of NO with a data-acquisition time of 4 s was demonstrated. Five prepared gas mixtures and 15 exhaled breath samples were measured with both the NO sensor and, for intercomparison, a chemiluminescence-based NO analyzer and were found to be in agreement within 0.6 ppbv. Exhaled NO flow-independent parameters, which may provide diagnostic and therapeutic information in respiratory diseases where single-breath measurements are equivocal, were estimated from end-tidal NO concentration measurements collected at various flow rates. The results of this work indicate that a laser-based exhaled NO sensor can be used to measure exhaled nitric oxide at a range of exhalation flow rates to determine flow-independent parameters in human clinical trials.
---
paper_title: Geometrical optimization of a longitudinal resonant photoacoustic cell for sensitive and fast trace gas detection
paper_content:
We present a quantitative discussion of the acoustic transmission line theory pertaining to experimental results from a resonant photoacoustic cell excited in its first longitudinal mode. Window absorption is optimally suppressed by buffer volumes and tunable air columns. The acoustic behavior of an ultrasensitive one inch condenser microphone is quantitatively described. A small and sensitive photoacoustic cell has been developed for intracavity use in a CO2 waveguide laser permitting measurements of ethylene down to 6 pptv (long term stability 20 pptv) with a time response of 2 s at a trace gas flow of 6 l/h. To demonstrate the fast time response within a biological application the instant ethylene release of a single tomato is measured.
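To first order, the operating frequency of such a cell follows the half-wavelength condition for the first longitudinal mode, f1 ≈ c/(2L); the sketch below gives this standard estimate for an assumed 10 cm resonator in air — end corrections and the buffer volumes discussed in the paper shift the real value.

```python
import math

def speed_of_sound(T=293.15, gamma=1.4, M=0.029):
    """Ideal-gas speed of sound [m/s]; defaults approximate air at 20 degC."""
    R = 8.314
    return math.sqrt(gamma * R * T / M)

def first_longitudinal_mode(L_resonator, T=293.15):
    """First longitudinal resonance f1 ~ c / (2 L) of an open-open resonator tube [Hz]."""
    return speed_of_sound(T) / (2.0 * L_resonator)

# Illustrative 10 cm resonator in air: roughly 1.7 kHz, i.e. in the usual kHz modulation range.
print(f"{first_longitudinal_mode(0.10):.0f} Hz")
```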
---
paper_title: On-line laser photoacoustic detection of ethene in exhaled air as biomarker of ultraviolet radiation damage of the human skin
paper_content:
The exhaled air and volatile emission by the skin of human subjects were analyzed for traces of ethene (C2H4) by means of CO2 laser photoacoustic trace gas detection. Due to the extreme sensitivity of the detection system (6 parts per trillion by volume, 6:10¹²), these measurements could be performed on-line and noninvasively. Exhaled ethene was used as a biomarker for lipid peroxidation in the skin of human subjects exposed to ultraviolet (UV) radiation from a solarium. A change in the ethene concentration was already observed in the exhaled air after 2 min. Adaptation of the skin to UV exposure and direct skin emission could also be observed.
---
paper_title: Dual cantilever enhanced photoacoustic detector with pulsed broadband IR-source
paper_content:
The cantilever enhanced photoacoustic (CEPA) trace gas detection was combined with an electrically modulated broadband infrared (EMBIR) source. The high sensitivity of the detection method was further improved by using two cantilever sensors on the opposite walls of the photoacoustic cell in order to suppress noise. Methane (CH4) gas was used to demonstrate the sensitivity of the method yielding a detection limit of 0.5 ppm with 5 s sample integration time employing an optical filter with a center wavenumber 2950 cm⁻¹. The achieved result is very good with a low power infrared source and enables the utilization of the benefits of such a source: low price and power consumption, easily controlled and simple electrical modulation of the radiation intensity and absence of the mechanical chopper and its noise. The structure of the source enables a relatively high pulse rate to be used as the modulating frequency. Optical filters can be used to select the wavenumber region for the selective detection of different gases.
---
paper_title: High-resolution photoacoustic and direct absorption spectroscopy of main greenhouse gases by use of a pulsed entangled cavity doubly resonant OPO
paper_content:
An entangled cavity doubly resonant optical parametric oscillator (ECOPO) has been developed to provide tunable narrow line width (<100 MHz) pulsed (8 ns) radiation over the 3.8–4.3 μm spectral range at a multi-kilohertz repetition rate with up to 100-W peak power. We demonstrate that coarse single mode tuning is obtained over the full spectral range of oscillation (300 cm−1), while automated mode-hop-free fine tuning is carried out over more than 100 GHz. High-resolution spectra of main greenhouse gases (CO2, N2O, SO2 and CH4) have been obtained in good agreement with calculated spectra from the HITRAN database. These experiments outline the unique capabilities of the ECOPO for multi-gas sensing based on direct absorption as well as photoacoustic spectroscopy.
---
paper_title: The potential of mid-infrared photoacoustic spectroscopy for the detection of various doping agents used by athletes
paper_content:
The feasibility of laser-photoacoustic measurements for the detection and the analysis of different isolated doping agents in the vapour phase is discussed. To the best of our knowledge, this is the first time that photoacoustic vapour-phase measurements of doping substances have been presented. Spectra of different doping classes (stimulants, anabolica, diuretica, and beta blockers) are shown and discussed in terms of their detection sensitivity and selectivity. The potential of laser spectroscopy for detecting the intake of prohibited substances by athletes is explored.
---
paper_title: Improved photoacoustic detector for monitoring polar molecules such as ammonia with a 1.53 μm DFB diode laser
paper_content:
A new differential photoacoustic (PA) detector equipped with band-rejecting acoustic filters has been developed for diode laser photoacoustics. The differential design provides good flow and electric noise suppression. The detector was used for monitoring ammonia in continuous flow operation. The first test measurements of synthetic air-ammonia mixtures were carried out using a 5 mW distributed feedback (DFB) diode laser at 1.53 μm, providing a sensitivity limit of about 300 ppbV. Because of the adsorption of ammonia molecules on the walls of the detector and the gas tubing, high flow rates were needed for the measurements in the 1–100 ppmV concentration range. The smallest concentration detected was 1 ppmV.
---
paper_title: Fiber-amplifier-enhanced photoacoustic spectroscopy with near-infrared tunable diode lasers
paper_content:
A new approach to wavelength-modulation photoacoustic spectroscopy is reported, which incorporates diode lasers in the near infrared and optical fiber amplifiers to enhance sensitivity. We demonstrate the technique with ammonia detection, yielding a sensitivity limit less than 6 parts in 10⁹, by interrogating a transition near 1532 nm with 500 mW of output power from the fiber amplifier, an optical pathlength of 18.4 cm, and an integration time constant of 10 s. This sensitivity is 15 times better than in prior published results for detecting ammonia with near-infrared diode lasers. The normalized minimum detectable fractional optical density, αmin·l, is 1.8 × 10⁻⁸; the minimum detectable absorption coefficient, αmin, is 9.5 × 10⁻¹⁰ cm⁻¹; and the minimum detectable absorption coefficient normalized by power and bandwidth is 1.5 × 10⁻⁹ W cm⁻¹/√Hz. These measurements represent what we believe to be the first use of fiber amplifiers to enhance photoacoustic spectroscopy, and this technique is applicable to all other species that fall within the gain curves of optical fiber amplifiers.
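The power- and bandwidth-normalised figure quoted above follows from the standard definition NNEA = αmin·P/√Δf. In the sketch below the ~0.1 Hz equivalent noise bandwidth is an assumption (the abstract only states a 10 s lock-in time constant), chosen to show how the quoted 1.5 × 10⁻⁹ W cm⁻¹/√Hz relates to αmin and P.

```python
import math

def nnea(alpha_min, power_w, bandwidth_hz):
    """Normalised noise-equivalent absorption [W cm^-1 Hz^-1/2]:
    NNEA = alpha_min * P / sqrt(detection bandwidth)."""
    return alpha_min * power_w / math.sqrt(bandwidth_hz)

# Numbers from the abstract: alpha_min = 9.5e-10 cm^-1 at P = 0.5 W.
# The 0.1 Hz equivalent noise bandwidth is an assumed value consistent with the
# 10 s time constant; with it one recovers roughly 1.5e-9 W cm^-1 / sqrt(Hz).
print(f"{nnea(9.5e-10, 0.5, 0.1):.2e}")
```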
---
paper_title: Applications of Photoacoustic Spectroscopy to Problems in Dermatology Research
paper_content:
The technique of photoacoustic spectroscopy (PAS) was applied in two areas of dermatology research: 1) drug detection and drug diffusion rates in skin, and 2) thermal properties and water content of skin. The drug studies involved detection of the drug tetracycline in the skin and determination of the diffusion rate of the drug through the skin. The water content studies involved determining the thermal properties of the epidermis as a function of water content and the effect of the water concentration gradient across the epidermis. A multilayer model for the photoacoustic effect was developed to account for the nonuniform thermal properties of the intact skin arising from the water concentration gradient. This model was used to determine the width of the region comprising the diffusional barrier in skin. The width of the barrier region was found to correspond to that of the outermost layer of the epidermis, the stratum corneum. This finding coincides with previous research indicating that the stratum corneum comprises the primary barrier to the diffusion of water through the epidermis.
---
paper_title: Applications of quartz tuning forks in spectroscopic gas sensing
paper_content:
A recently introduced approach to photoacoustic detection of trace gases utilizing a quartz tuning fork (TF) as a resonant acoustic transducer is described in detail. Advantages of the technique called quartz-enhanced photoacoustic spectroscopy (QEPAS) compared to conventional resonant photoacoustic spectroscopy include QEPAS sensor immunity to environmental acoustic noise, a simple absorption detection module design, and its capability to analyze gas samples ∼1 mm³ in volume. Noise sources and the TF properties as a function of the sampled gas pressure, temperature and chemical composition are analyzed. Previously published results for QEPAS based chemical gas sensing are summarized. The achieved sensitivity of 5.4 × 10⁻⁹ cm⁻¹ W/√Hz is compared to recent published results of photoacoustic gas sensing by other research groups. An experimental study of the long-term stability of a QEPAS-based ammonia sensor is presented. The results of this study indicate that the sensor exhibits very low drift, which allows da...
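The tuning-fork quality factor tracked in such an analysis is commonly obtained from the amplitude ring-down after the excitation is switched off, via Q = π·f0·τ; the sketch below applies this generic procedure to synthetic data (the 80 ms decay time is an assumed, ambient-pressure-like value) and is not the authors' measurement code.

```python
import numpy as np

def q_from_ringdown(t, amplitude, f0):
    """Quality factor from an exponential amplitude decay A(t) = A0 * exp(-t / tau):
    fit ln(A) versus t to get tau, then Q = pi * f0 * tau."""
    slope, _ = np.polyfit(t, np.log(amplitude), 1)
    tau = -1.0 / slope
    return np.pi * f0 * tau, tau

# Synthetic example: a 32.76 kHz fork decaying with tau = 80 ms (Q of order 8000).
f0, tau_true = 32760.0, 0.080
t = np.linspace(0, 0.4, 2000)
amp = np.exp(-t / tau_true) * (1 + 0.01 * np.random.randn(t.size))   # 1% multiplicative noise
Q, tau = q_from_ringdown(t, amp, f0)
print(f"Q ~ {Q:.0f}, tau ~ {tau*1e3:.1f} ms")
```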
---
paper_title: QEPAS based ppb-level detection of CO and N_2O using a high power CW DFB-QCL
paper_content:
An ultra-sensitive and selective quartz-enhanced photoacoustic spectroscopy (QEPAS) sensor platform was demonstrated for detection of carbon monoxide (CO) and nitrous oxide (N2O). This sensor used a state-of-the art 4.61 μm high power, continuous wave (CW), distributed feedback quantum cascade laser (DFB-QCL) operating at 10°C as the excitation source. For the R(6) CO absorption line, located at 2169.2 cm−1, a minimum detection limit (MDL) of 1.5 parts per billion by volume (ppbv) at atmospheric pressure was achieved with a 1 sec acquisition time and the addition of 2.6% water vapor concentration in the analyzed gas mixture. For the N2O detection, a MDL of 23 ppbv was obtained at an optimum gas pressure of 100 Torr and with the same water vapor content of 2.6%. In both cases the presence of water vapor increases the detected CO and N2O QEPAS signal levels as a result of enhancing the vibrational-translational relaxation rate of both target gases. Allan deviation analyses were performed to investigate the long term performance of the CO and N2O QEPAS sensor systems. For the optimum data acquisition time of 500 sec a MDL of 340 pptv and 4 ppbv was obtained for CO and N2O detection, respectively. To demonstrate reliable and robust operation of the QEPAS sensor a continuous monitoring of atmospheric CO and N2O concentration levels for a period of 5 hours were performed.
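The Allan-deviation analysis mentioned above can be reproduced for any concentration time series with the standard non-overlapping estimator; the sketch below uses synthetic white-noise data rather than the sensor record.

```python
import numpy as np

def allan_deviation(y, dt, m):
    """Non-overlapping Allan deviation of a series y sampled every dt seconds,
    for an averaging time tau = m * dt."""
    n_blocks = len(y) // m
    block_means = np.mean(np.asarray(y)[:n_blocks * m].reshape(n_blocks, m), axis=1)
    avar = 0.5 * np.mean(np.diff(block_means) ** 2)   # 0.5 * <(y_{k+1} - y_k)^2>
    return m * dt, np.sqrt(avar)

# Example: white-noise-dominated data should average down roughly as 1/sqrt(tau).
rng = np.random.default_rng(1)
series = 100.0 + 5.0 * rng.standard_normal(36000)      # e.g. ppb readings at 1 Hz
for m in (1, 10, 100, 1000):
    tau, adev = allan_deviation(series, dt=1.0, m=m)
    print(f"tau = {tau:6.0f} s   sigma_Allan = {adev:.3f}")
```

The optimum averaging time quoted in such studies is the tau at which the measured Allan deviation stops decreasing and drift begins to dominate.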
---
paper_title: Quartz-enhanced photoacoustic spectroscopy
paper_content:
A new approach to detecting a weak photoacoustic signal in a gas medium is described. Instead of a gas-filled resonant acoustic cavity, the sound energy is accumulated in a high-Q crystal element. Feasibility experiments utilizing a quartz-watch tuning fork demonstrate a sensitivity of 1.2 × 10⁻⁷ cm⁻¹ W/√Hz. Potential further developments and applications of this technique are discussed.
---
paper_title: Air and gas damping of quartz tuning forks
paper_content:
The effects of air and gas pressure on the vibration of miniature quartz tuning forks are investigated. Besides damping, which decreases the Q factor of the resonator, the resonant frequency appears to vary quite linearly with the gas pressure. This basic effect could lead to the realization of a simple pressure sensing element. Analysis of the experimental data for the damping, 1/Q, and the relative frequency shift, Δf/f0, of the resonant frequency is made in terms of acoustic loss and fluid loading of the tuning fork. Direct correlation between Δf/f0 and the gas mass carried along is clearly shown, so that the expected sensitivity of the quartz tuning fork as a pressure sensor and its useful range of operation can be well defined.
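Using the fork as the pressure sensing element suggested above amounts to a straight-line calibration of Δf/f0 against pressure; the data points in the sketch below are hypothetical, only the fitting and inversion procedure is meant.

```python
import numpy as np

# Hypothetical calibration data: pressure [mbar] and measured relative frequency shift.
pressure = np.array([10, 100, 300, 500, 700, 1000], dtype=float)
df_over_f0 = np.array([-0.2e-6, -2.1e-6, -6.0e-6, -10.2e-6, -14.1e-6, -20.3e-6])

# Straight-line model df/f0 = a * p + b; the slope a is the pressure sensitivity.
a, b = np.polyfit(pressure, df_over_f0, 1)
print(f"sensitivity = {a:.2e} per mbar")

def pressure_from_shift(shift):
    """Invert the calibration to read back pressure from a measured df/f0."""
    return (shift - b) / a
```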
---
paper_title: On-line laser photoacoustic detection of ethene in exhaled air as biomarker of ultraviolet radiation damage of the human skin
paper_content:
The exhaled air and volatile emission by the skin of human subjects were analyzed for traces of ethene (C2H4) by means of CO2 laser photoacoustic trace gas detection. Due to the extreme sensitivity of the detection system (6 part per trillion volume, 6:1012), these measurements could be performed on-line and noninvasively. Exhaled ethene was used as a biomarker for lipid peroxidation in the skin of human subjects exposed to ultraviolet (UV) radiation from a solarium. A change in the ethene concentration was already observed in the exhaled air after 2 min. Adaptation of the skin to UV exposure and direct skin emission could also be observed.
---
paper_title: Dual cantilever enhanced photoacoustic detector with pulsed broadband IR-source
paper_content:
Abstract The cantilever enhanced photoacoustic (CEPA) trace gas detection was combined with an electrically modulated broadband infrared (EMBIR) source. The high sensitivity of the detection method was further improved by using two cantilever sensors on the opposite walls of the photoacoustic cell in order to suppress noise. Methane (CH 4 ) gas was used to demonstrate the sensitivity of the method yielding a detection limit of 0.5 ppm with 5 s sample integration time employing an optical filter with a center wavenumber 2950 cm −1 . The achieved result is very good with a low power infrared source and enables the utilization of the benefits of such a source: low price and power consumption, easily controlled and simple electrical modulation of the radiation intensity and absence of the mechanical chopper and its noise. The structure of the source enables a relatively high pulse rate to be used as the modulating frequency. Optical filters can be used to select the wavenumber region for the selective detection of different gases.
---
paper_title: Cavity Ring-Down Spectroscopy: Techniques and Applications
paper_content:
Preface List of contributors Glossary Chapter 1 - An introduction to cavity ring-down spectroscopy 1.1 Introduction 1.2 Direct absorption spectroscopy 1.3 Basic cavity ring down spectroscopy setup 1.4 A more refined picture 1.5 Fitting of cavity ring down transients 1.6 A few examples 1.7 Going beyond the standard pulsed CRDS experiment 1.8 Summary 1.9 References Chapter 2 - Cavity enhanced techniques using continuous wave lasers 2.1 Introduction 2.1 Properties of optical cavities and cw lasers relevant to cavity enhanced spectroscopy 2.3 Experimental methods for cw laser cavity enhanced spectroscopy 2.4 Spectroscopy with resonant cavities 2.5 Summary Chapter 3 - Broadband cavity ring-down spectroscopy 3.1 Introduction. 3.2 The time and wavelength evolution of a single ringdown event. 3.3 Two dimensional techniques: resolving broadband cavity output in time and wavelength. 3.4 One dimensional techniques: time or wavelength. 3.5 How to extract quantitative information from broadband spectra. 3.6 Optimising the sensitivity of a broadband measurement. 3.7 Applications of broadband cavity methods. 3.8 References . Chapter 4 - Cavity ring-down spectroscopy in analytical chemistry 4.1 Introduction 4.2 Condensed media CRDS 4.3 Evanescent-wave CRDS 4.4 Future trends and perspectives Chapter 5 - Cavity ring-down spectroscopy using waveguides 5.1. Introduction 5.2. The basic experiments 5.3. Optics and Instrumentation 5.4. Review of waveguide CRD literature 5.5. Conclusion and outlook 5.6. Acknowledgements Chapter 6 - Cavity ring down spectroscopy of molecular transients of astrophysical interest 6.1. Introduction 6.2. Experimental 6.3. Astronomical considerations 6.4. Results 6.5. Outlook Acknowledgements References Chapter 7 - Applications of cavity ring-down spectroscopy in atmospheric chemistry 7.1. Brief overview 7.2. Measurement of trace atmospheric species by CRDS 7.3. Laboratory based studies of atmospheric interest 7.4. Optical properties of atmospheric aerosol particles 7.5. Future developments Chapter 8 - Cavity ring-down spectroscopy for medical applications 8.1. Introduction 8.2. Trace gases in medicine and biology 8.3. Instrumentation for laser analytics of breath and other biological gas samples 8.4. Applications to life sciences 8.5. Conclusion and Perspectives 8.6. References Chapter 9: Studies into the growth mechanism of a-Si:H using in situ cavity ring-down techniques 9.1. Introduction 9.2. Gas phase CRDS on SiH x radicals 9.3. Thin film CRDS on dangling bonds in a-Si:H films (ex situ) 9.4. Evanescent wave CRDS on dangling bonds during a-Si:H film growth Chapter 10 - Cavity ring down spectroscopy for combustion studies 10.1. Introduction 10.2. General description of cavity ring down spectroscopy in flames 10.3. Experimental set-up 10.4. Quantitative concentration measurements in flames 10.5. Concentration profile determination 10.6. Specific difficulties in combustion studies 10.7. Case of particles: soot volume fraction determination 10.8. Conclusion and prospective References Appendix A Literature
---
paper_title: High-resolution photoacoustic and direct absorption spectroscopy of main greenhouse gases by use of a pulsed entangled cavity doubly resonant OPO
paper_content:
An entangled cavity doubly resonant optical parametric oscillator (ECOPO) has been developed to provide tunable narrow line width (<100 MHz) pulsed (8 ns) radiation over the 3.8–4.3 μm spectral range at a multi-kilohertz repetition rate with up to 100-W peak power. We demonstrate that coarse single mode tuning is obtained over the full spectral range of oscillation (300 cm−1), while automated mode-hop-free fine tuning is carried out over more than 100 GHz. High-resolution spectra of main greenhouse gases (CO2, N2O, SO2 and CH4) have been obtained in good agreement with calculated spectra from the HITRAN database. These experiments outline the unique capabilities of the ECOPO for multi-gas sensing based on direct absorption as well as photoacoustic spectroscopy.
---
paper_title: The potential of mid-infrared photoacoustic spectroscopy for the detection of various doping agents used by athletes
paper_content:
The feasibility of laser-photoacoustic measurements for the detection and the analysis of different isolated doping agents in the vapour phase is discussed. To the best of our knowledge, this is the first time that photoacoustic vapour-phase measurements of doping substances have been presented. Spectra of different doping classes (stimulants, anabolic agents, diuretics, and beta blockers) are shown and discussed in terms of their detection sensitivity and selectivity. The potential of laser spectroscopy for detecting the intake of prohibited substances by athletes is explored.
---
paper_title: Improved photoacoustic detector for monitoring polar molecules such as ammonia with a 1.53 μm DFB diode laser
paper_content:
A new differential photoacoustic (PA) detector equipped with band-rejecting acoustic filters has been developed for diode laser photoacoustics. The differential design provides good flow and electric noise suppression. The detector was used for monitoring ammonia in continuous flow operation. The first test measurements of synthetic air-ammonia mixtures were carried out using a 5 mW distributed feedback (DFB) diode laser at 1.53 μm, providing a sensitivity limit of about 300 ppbV. Because of the adsorption of ammonia molecules on the walls of the detector and the gas tubing, high flow rates were needed for the measurements in the 1–100 ppmV concentration range. The smallest concentration detected was 1 ppmV.
---
paper_title: Fiber-amplifier-enhanced photoacoustic spectroscopy with near-infrared tunable diode lasers
paper_content:
A new approach to wavelength-modulation photoacoustic spectroscopy is reported, which incorporates diode lasers in the near infrared and optical fiber amplifiers to enhance sensitivity. We demonstrate the technique with ammonia detection, yielding a sensitivity limit of less than 6 parts in 10⁹, by interrogating a transition near 1532 nm with 500 mW of output power from the fiber amplifier, an optical pathlength of 18.4 cm, and an integration time constant of 10 s. This sensitivity is 15 times better than in prior published results for detecting ammonia with near-infrared diode lasers. The normalized minimum detectable fractional optical density, α_min·l, is 1.8×10⁻⁸; the minimum detectable absorption coefficient, α_min, is 9.5×10⁻¹⁰ cm⁻¹; and the minimum detectable absorption coefficient normalized by power and bandwidth is 1.5×10⁻⁹ W cm⁻¹/√Hz. These measurements represent what we believe to be the first use of fiber amplifiers to enhance photoacoustic spectroscopy, and this technique is applicable to all other species that fall within the gain curves of optical fiber amplifiers.
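The power- and bandwidth-normalization quoted above is a common figure of merit; a minimal sketch of the conversion, with the equivalent noise bandwidth chosen as an illustrative assumption rather than taken from the paper, is:

```python
import math

def normalized_neas(alpha_min_cm, power_w, bandwidth_hz):
    """Power- and bandwidth-normalized noise-equivalent absorption,
    NNEA = alpha_min * P / sqrt(delta_f), in W cm^-1 / sqrt(Hz)."""
    return alpha_min_cm * power_w / math.sqrt(bandwidth_hz)

alpha_min = 9.5e-10   # minimum detectable absorption coefficient, cm^-1 (from the abstract)
power = 0.5           # optical power in the cell, W (from the abstract)
bandwidth = 0.1       # assumed equivalent noise bandwidth, Hz (illustrative)
print(f"NNEA ~ {normalized_neas(alpha_min, power, bandwidth):.2e} W cm^-1 Hz^-1/2")
```

With these illustrative inputs the result lands close to the 1.5×10⁻⁹ W cm⁻¹/√Hz figure quoted in the abstract, but the exact bandwidth used in the paper is not reproduced here.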
---
paper_title: Frequency dependent fluid damping of micro/nano flexural resonators: Experiment, model and analysis
paper_content:
This research systematically investigates fluid damping for micromachined beam-type resonators with high resonant frequencies. The work aims to find a general fluid damping law that can be used quantitatively as a design tool for resonance-based micro sensors by accurately predicting the quality factor of resonators operated in air or liquid. Micro-cantilevers with different dimensions are fabricated and tested in order to extract the damping characteristics. Combined with dimensional analysis, a novel linear fluid damping model is proposed that accounts for the effects of both the device's dimensions and the resonant frequency. By numerical analysis, the model is further generalized to resonators with differently shaped cross-sections. The proposed fluid damping model also provides an attractive way to use simple beam-type resonators as fluid viscosity sensors and air pressure sensors.
---
paper_title: Applications of Photoacoustic Spectroscopy to Problems in Dermatology Research
paper_content:
The technique of photoacoustic spectroscopy (PAS) was applied in two areas of dermatology research: 1) drug detection and drug diffusion rates in skin, and 2) thermal properties and water content of skin. The drug studies involved detection of the drug tetracycline in the skin and determination of the diffusion rate of the drug through the skin. The water content studies involved determining the thermal properties of the epidermis as a function of water content and the effect of the water concentration gradient across the epidermis. A multilayer model for the photoacoustic effect was developed to account for the nonuniform thermal properties of the intact skin arising from the water concentration gradient. This model was used to determine the width of the region comprising the diffusional barrier in skin. The width of the barrier region was found to correspond to that of the outermost layer of the epidermis, the stratum corneum. This finding coincides with previous research indicating that the stratum corneum comprises the primary barrier to the diffusion of water through the epidermis.
---
paper_title: Quantum cascade laser-based integrated cavity output spectroscopy of exhaled nitric oxide
paper_content:
A nitric oxide (NO) sensor employing a thermoelectrically cooled, continuous-wave, distributed feedback quantum cascade laser operating at 5.47 µm (1828 cm⁻¹) and off-axis integrated cavity output spectroscopy was used to measure NO concentrations in exhaled breath. A minimum measurable concentration (3σ) of 3.6 parts-per-billion by volume (ppbv) of NO with a data-acquisition time of 4 s was demonstrated. Five prepared gas mixtures and 15 exhaled breath samples were measured with both the NO sensor and, for intercomparison, a chemiluminescence-based NO analyzer, and were found to be in agreement within 0.6 ppbv. Exhaled NO flow-independent parameters, which may provide diagnostic and therapeutic information in respiratory diseases where single-breath measurements are equivocal, were estimated from end-tidal NO concentration measurements collected at various flow rates. The results of this work indicate that a laser-based exhaled NO sensor can be used to measure exhaled nitric oxide at a range of exhalation flow rates to determine flow-independent parameters in human clinical trials.
---
paper_title: Applications of quartz tuning forks in spectroscopic gas sensing
paper_content:
A recently introduced approach to photoacoustic detection of trace gases utilizing a quartz tuning fork (TF) as a resonant acoustic transducer is described in detail. Advantages of the technique, called quartz-enhanced photoacoustic spectroscopy (QEPAS), compared to conventional resonant photoacoustic spectroscopy include QEPAS sensor immunity to environmental acoustic noise, a simple absorption detection module design, and its capability to analyze gas samples of ∼1 mm³ in volume. Noise sources and the TF properties as a function of the sampled gas pressure, temperature and chemical composition are analyzed. Previously published results for QEPAS-based chemical gas sensing are summarized. The achieved sensitivity of 5.4×10⁻⁹ cm⁻¹ W/√Hz is compared to recently published results of photoacoustic gas sensing by other research groups. An experimental study of the long-term stability of a QEPAS-based ammonia sensor is presented. The results of this study indicate that the sensor exhibits very low drift, which allows data averaging over extended periods.
---
paper_title: QEPAS based detection of broadband absorbing molecules using a widely tunable, cw quantum cascade laser at 8.4 μm.
paper_content:
Detection of molecules with wide unresolved rotational-vibrational absorption bands is demonstrated by using quartz-enhanced photoacoustic spectroscopy and an amplitude-modulated, high power, thermoelectrically cooled quantum cascade laser operating at 8.4 μm in an external cavity configuration. The laser source exhibits single frequency tuning over 135 cm⁻¹ with a maximum optical output power of 50 mW. For trace-gas detection of Freon 125 (pentafluoroethane) at 1208.62 cm⁻¹ a normalized noise equivalent absorption coefficient of NNEA = 2.64×10⁻⁹ cm⁻¹ W/√Hz was obtained. Noise equivalent sensitivity at the ppbv level as well as spectroscopic chemical analysis of a mixture of two broadband absorbers (Freon 125 and acetone) with overlapping absorption spectra were demonstrated.
---
paper_title: Air and gas damping of quartz tuning forks
paper_content:
The effects of air and gas pressure on the vibration of miniature quartz tuning forks are investigated. Besides damping, which decreases the Q factor of the resonator, the resonant frequency appears to vary quite linearly with the gas pressure. This basic effect could lead to the realization of a simple pressure sensing element. Analysis of the experimental data for the damping, 1/Q, and the relative frequency shift, Δf/f₀, of the resonant frequency is made in terms of acoustic loss and fluid loading of the tuning fork. A direct correlation between Δf/f₀ and the gas mass carried along is clearly shown, so that the expected sensitivity of the quartz tuning fork as a pressure sensor and its useful range of operation can be well defined.
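As a rough quantitative companion to the abstract above, the relative frequency shift expected from gas mass loading can be sketched with the standard harmonic-oscillator relation Δf/f₀ ≈ −Δm/(2m_eff); the effective prong mass and the entrained gas mass below are illustrative assumptions, not values from the paper.

```python
def mass_loading_shift(delta_m_kg, m_eff_kg):
    """Relative frequency shift of a resonator when a small mass delta_m
    is added to its effective moving mass m_eff: df/f0 ~ -delta_m / (2 * m_eff)."""
    return -delta_m_kg / (2.0 * m_eff_kg)

m_eff = 3e-6      # assumed effective moving mass of the fork, kg (illustrative)
delta_m = 1e-10   # assumed mass of gas carried along by the prongs, kg (illustrative)
print(f"df/f0 ~ {mass_loading_shift(delta_m, m_eff):.2e}")
```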
---
paper_title: QEPAS based ppb-level detection of CO and N_2O using a high power CW DFB-QCL
paper_content:
An ultra-sensitive and selective quartz-enhanced photoacoustic spectroscopy (QEPAS) sensor platform was demonstrated for detection of carbon monoxide (CO) and nitrous oxide (N2O). This sensor used a state-of-the-art 4.61 μm high power, continuous wave (CW), distributed feedback quantum cascade laser (DFB-QCL) operating at 10°C as the excitation source. For the R(6) CO absorption line, located at 2169.2 cm⁻¹, a minimum detection limit (MDL) of 1.5 parts per billion by volume (ppbv) at atmospheric pressure was achieved with a 1 s acquisition time and the addition of a 2.6% water vapor concentration in the analyzed gas mixture. For N2O detection, a MDL of 23 ppbv was obtained at an optimum gas pressure of 100 Torr and with the same water vapor content of 2.6%. In both cases the presence of water vapor increases the detected CO and N2O QEPAS signal levels as a result of enhancing the vibrational-translational relaxation rate of both target gases. Allan deviation analyses were performed to investigate the long-term performance of the CO and N2O QEPAS sensor systems. For the optimum data acquisition time of 500 s, MDLs of 340 pptv and 4 ppbv were obtained for CO and N2O detection, respectively. To demonstrate reliable and robust operation of the QEPAS sensor, continuous monitoring of atmospheric CO and N2O concentration levels was performed over a period of 5 hours.
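The Allan deviation analysis mentioned above can be reproduced on any time series of concentration readings; a minimal non-overlapping implementation, with a synthetic white-noise series standing in for real sensor data, is:

```python
import numpy as np

def allan_deviation(y, taus, dt=1.0):
    """Non-overlapping Allan deviation of a time series y sampled every dt
    seconds, evaluated at the averaging times listed in taus (seconds)."""
    y = np.asarray(y, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau / dt))          # samples per averaging bin
        if m < 1 or 2 * m > len(y):
            out.append(np.nan)
            continue
        n_bins = len(y) // m
        means = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

# Synthetic 1 Hz data: white noise averages down roughly as 1/sqrt(tau)
rng = np.random.default_rng(0)
data = 1.5 + 0.5 * rng.standard_normal(7200)   # e.g. simulated ppbv readings
taus = [1, 10, 100, 500]
for tau, adev in zip(taus, allan_deviation(data, taus)):
    print(f"tau = {tau:4d} s  Allan deviation = {adev:.3f}")
```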
---
paper_title: Frequency dependent fluid damping of micro/nano flexural resonators: Experiment, model and analysis
paper_content:
This research systematically investigates fluid damping for micromachined beam-type resonators with high resonant frequencies. The work aims to find a general fluid damping law that can be used quantitatively as a design tool for resonance-based micro sensors by accurately predicting the quality factor of resonators operated in air or liquid. Micro-cantilevers with different dimensions are fabricated and tested in order to extract the damping characteristics. Combined with dimensional analysis, a novel linear fluid damping model is proposed that accounts for the effects of both the device's dimensions and the resonant frequency. By numerical analysis, the model is further generalized to resonators with differently shaped cross-sections. The proposed fluid damping model also provides an attractive way to use simple beam-type resonators as fluid viscosity sensors and air pressure sensors.
---
paper_title: Quartz-enhanced photoacoustic spectroscopy
paper_content:
A new approach to detecting a weak photoacoustic signal in a gas medium is described. Instead of a gas-filled resonant acoustic cavity, the sound energy is accumulated in a high-Q crystal element. Feasibility experiments utilizing a quartz-watch tuning fork demonstrate a sensitivity of 1.2×10⁻⁷ cm⁻¹ W/√Hz. Potential further developments and applications of this technique are discussed.
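To put the quoted normalized sensitivity in context, a minimal sketch inverting it to a minimum detectable absorption coefficient for an assumed laser power and detection bandwidth (both illustrative, not taken from the paper):

```python
import math

def alpha_min(nnea, power_w, bandwidth_hz):
    """Minimum detectable absorption coefficient (cm^-1) from a normalized
    noise-equivalent absorption NNEA given in W cm^-1 / sqrt(Hz)."""
    return nnea * math.sqrt(bandwidth_hz) / power_w

nnea = 1.2e-7      # W cm^-1 Hz^-1/2, figure quoted above
power = 0.01       # assumed 10 mW excitation power (illustrative)
bandwidth = 1.0    # assumed 1 Hz equivalent noise bandwidth (illustrative)
print(f"alpha_min ~ {alpha_min(nnea, power, bandwidth):.1e} cm^-1")
```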
---
paper_title: QEPAS detector for rapid spectral measurements
paper_content:
A quartz enhanced photoacoustic spectroscopy sensor designed for fast response was used in combination with a pulsed external cavity quantum cascade laser to rapidly acquire gas absorption data over the 1196–1281 cm−1 spectral range. The system was used to measure concentrations of water vapor, pentafluoroethane (freon-125), acetone, and ethanol both individually and in combined mixtures. The precision achieved for freon-125 concentration in a single 1.1 s long spectral scan is 13 ppbv.
---
paper_title: QEPAS based detection of broadband absorbing molecules using a widely tunable, cw quantum cascade laser at 8.4 μm.
paper_content:
Detection of molecules with wide unresolved rotational-vibrational absorption bands is demonstrated by using quartz-enhanced photoacoustic spectroscopy and an amplitude-modulated, high power, thermoelectrically cooled quantum cascade laser operating at 8.4 μm in an external cavity configuration. The laser source exhibits single frequency tuning over 135 cm⁻¹ with a maximum optical output power of 50 mW. For trace-gas detection of Freon 125 (pentafluoroethane) at 1208.62 cm⁻¹ a normalized noise equivalent absorption coefficient of NNEA = 2.64×10⁻⁹ cm⁻¹ W/√Hz was obtained. Noise equivalent sensitivity at the ppbv level as well as spectroscopic chemical analysis of a mixture of two broadband absorbers (Freon 125 and acetone) with overlapping absorption spectra were demonstrated.
---
paper_title: Frequency dependent fluid damping of micro/nano flexural resonators: Experiment, model and analysis
paper_content:
This research systematically investigates fluid damping for micromachined beam-type resonators with high resonant frequencies. The work aims to find a general fluid damping law that can be used quantitatively as a design tool for resonance-based micro sensors by accurately predicting the quality factor of resonators operated in air or liquid. Micro-cantilevers with different dimensions are fabricated and tested in order to extract the damping characteristics. Combined with dimensional analysis, a novel linear fluid damping model is proposed that accounts for the effects of both the device's dimensions and the resonant frequency. By numerical analysis, the model is further generalized to resonators with differently shaped cross-sections. The proposed fluid damping model also provides an attractive way to use simple beam-type resonators as fluid viscosity sensors and air pressure sensors.
---
paper_title: Gas-phase photoacoustic sensor at 8.41 μm using quartz tuning forks and amplitude-modulated quantum cascade lasers
paper_content:
We demonstrate the performance of a novel infrared photoacoustic laser absorbance sensor for gas-phase species using an amplitude-modulated quantum cascade (QC) laser and a quartz tuning fork microphone. The photoacoustic signal was generated by focusing 5.3 mW of a Fabry–Perot QC laser operating at 8.41 μm between the tines of a quartz tuning fork which served as a transducer for the transient acoustic pressure wave. The sensitivity of this sensor was calibrated using the infrared absorber Freon 134a by performing a simultaneous absorption measurement using a 31-cm absorption cell. The power and bandwidth normalized noise equivalent absorption sensitivity (NEAS) of this sensor was determined to be D = 2.0×10⁻⁸ W cm⁻¹/√Hz. A corresponding theoretical analysis of the instrument sensitivity is presented and is capable of quantitatively reproducing the experimental NEAS, indicating that the fundamental sensitivity of this technique is limited by the noise floor of the tuning fork itself.
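A minimal estimate of that tuning-fork noise floor can be made by treating the fork near resonance as an equivalent series resistance R and computing its thermal (Johnson) noise current; the resistance value below is a typical order of magnitude for 32.768 kHz watch forks and is an assumption, not a number from this paper.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def thermal_noise_current(r_ohm, temp_k=300.0, bandwidth_hz=1.0):
    """RMS thermally induced noise current of a resonator modeled as an
    equivalent series resistance R: i_n = sqrt(4 k_B T df / R)."""
    return math.sqrt(4.0 * K_B * temp_k * bandwidth_hz / r_ohm)

r_equiv = 100e3   # assumed equivalent resistance at resonance, ohms (illustrative)
print(f"thermal noise current in a 1 Hz bandwidth ~ {thermal_noise_current(r_equiv):.2e} A")
```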
---
paper_title: Laser microphotoacoustic sensor of ammonia traces in the atmosphere
paper_content:
A highly selective microphotoacoustic sensor of ammonia is built. Main attention is paid to the operation mechanism of the acoustic sensor based on a quartz tuning fork. The optimal dimensions and configuration of the acoustic resonator are determined, which made it possible to increase the sensor sensitivity by a factor of two to three compared to the sensitivity of existing devices. The detector sensitivity for ammonia was 60 ppb (0.05 mg m⁻³) for a measurement time of 10 s and a 25-mW, 1.53-μm laser beam in the acoustic resonator.
---
paper_title: QEPAS detector for rapid spectral measurements
paper_content:
A quartz enhanced photoacoustic spectroscopy sensor designed for fast response was used in combination with a pulsed external cavity quantum cascade laser to rapidly acquire gas absorption data over the 1196–1281 cm−1 spectral range. The system was used to measure concentrations of water vapor, pentafluoroethane (freon-125), acetone, and ethanol both individually and in combined mixtures. The precision achieved for freon-125 concentration in a single 1.1 s long spectral scan is 13 ppbv.
---
paper_title: Applications of quartz tuning forks in spectroscopic gas sensing
paper_content:
A recently introduced approach to photoacoustic detection of trace gases utilizing a quartz tuning fork (TF) as a resonant acoustic transducer is described in detail. Advantages of the technique, called quartz-enhanced photoacoustic spectroscopy (QEPAS), compared to conventional resonant photoacoustic spectroscopy include QEPAS sensor immunity to environmental acoustic noise, a simple absorption detection module design, and its capability to analyze gas samples of ∼1 mm³ in volume. Noise sources and the TF properties as a function of the sampled gas pressure, temperature and chemical composition are analyzed. Previously published results for QEPAS-based chemical gas sensing are summarized. The achieved sensitivity of 5.4×10⁻⁹ cm⁻¹ W/√Hz is compared to recently published results of photoacoustic gas sensing by other research groups. An experimental study of the long-term stability of a QEPAS-based ammonia sensor is presented. The results of this study indicate that the sensor exhibits very low drift, which allows data averaging over extended periods.
---
paper_title: Application of acoustic resonators in photoacoustic trace gas analysis and metrology
paper_content:
The application of different types of acoustic resonators such as pipes, cylinders, and spheres in photoacoustics is considered. This includes a discussion of the fundamental properties of these resonant cavities. Modulated and pulsed laser excitation of acoustic modes is discussed. The theoretical and practical aspects of high-Q and low-Q resonators and their integration into complete photoacoustic detection systems for trace gas monitoring and metrology are covered in detail. The characteristics of the available laser sources and the performance of the photoacoustic resonators, such as signal amplification, are discussed. Setup properties and noise features are considered in detail. This review is intended to give newcomers the information needed to design and construct state-of-the-art photoacoustic detectors for specific purposes such as trace gas analysis, spectroscopy, and metrology.
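For the pipe-type resonators discussed above, the first longitudinal resonance of a tube open at both ends is, to lowest order, f = c/(2L); a quick sketch (end corrections neglected, speed of sound for room-temperature air assumed):

```python
def half_wave_length(freq_hz, sound_speed=343.0):
    """Tube length whose fundamental open-open longitudinal resonance falls
    at freq_hz, using f = c / (2 L) and ignoring end corrections."""
    return sound_speed / (2.0 * freq_hz)

# Example: a resonator tuned to a 32.768 kHz quartz tuning fork
f0 = 32768.0
print(f"L ~ {half_wave_length(f0) * 1e3:.2f} mm")
```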
---
paper_title: Theoretical analysis of a quartz-enhanced photoacoustic spectroscopy sensor
paper_content:
Quartz-enhanced photoacoustic spectroscopy (QEPAS) sensors are based on a recent approach to photoacoustic detection which employs a quartz tuning fork as an acoustic transducer. These sensors enable detection of trace gases for air quality monitoring, industrial process control, and medical diagnostics. To detect a trace gas, modulated laser radiation is directed between the tines of a tuning fork. The optical energy absorbed by the gas results in a periodic thermal expansion which gives rise to a weak acoustic pressure wave. This pressure wave excites a resonant vibration of the tuning fork thereby generating an electrical signal via the piezoelectric effect. This paper describes a theoretical model of a QEPAS sensor. By deriving analytical solutions for the partial differential equations in the model, we obtain a formula for the piezoelectric current in terms of the optical, mechanical, and electrical parameters of the system. We use the model to calculate the optimal position of the laser beam with respect to the tuning fork and the phase of the piezoelectric current. We also show that a QEPAS transducer with a particular 32.8 kHz tuning fork is 2–3 times as sensitive as one with a 4.25 kHz tuning fork. These simulation results closely match experimental data.
---
paper_title: Optimization of resonator radial dimensions for quartz enhanced photoacoustic spectroscopy systems
paper_content:
A finite element model for QEPAS systems has been developed that can apply to both on-axis and off-axis systems. The model includes the viscous and thermal loss on the acoustic resonator sidewalls, and these factors are found to significantly affect the signal to noise ratio. The model results are compared to experimental data and it is found that the model correctly predicts the optimal radial dimensions for resonator tubes of a given length. The model is applied to examine the dependence of signal-to-noise ratio on resonator diameter and sidewall thickness. The model is also applied to off-axis systems.
---
paper_title: QEPAS based ppb-level detection of CO and N_2O using a high power CW DFB-QCL
paper_content:
An ultra-sensitive and selective quartz-enhanced photoacoustic spectroscopy (QEPAS) sensor platform was demonstrated for detection of carbon monoxide (CO) and nitrous oxide (N2O). This sensor used a state-of-the-art 4.61 μm high power, continuous wave (CW), distributed feedback quantum cascade laser (DFB-QCL) operating at 10°C as the excitation source. For the R(6) CO absorption line, located at 2169.2 cm⁻¹, a minimum detection limit (MDL) of 1.5 parts per billion by volume (ppbv) at atmospheric pressure was achieved with a 1 s acquisition time and the addition of a 2.6% water vapor concentration in the analyzed gas mixture. For N2O detection, a MDL of 23 ppbv was obtained at an optimum gas pressure of 100 Torr and with the same water vapor content of 2.6%. In both cases the presence of water vapor increases the detected CO and N2O QEPAS signal levels as a result of enhancing the vibrational-translational relaxation rate of both target gases. Allan deviation analyses were performed to investigate the long-term performance of the CO and N2O QEPAS sensor systems. For the optimum data acquisition time of 500 s, MDLs of 340 pptv and 4 ppbv were obtained for CO and N2O detection, respectively. To demonstrate reliable and robust operation of the QEPAS sensor, continuous monitoring of atmospheric CO and N2O concentration levels was performed over a period of 5 hours.
---
paper_title: An acoustic model for microresonator in on-beam quartz-enhanced photoacoustic spectroscopy
paper_content:
Based on a new spectrophone configuration using a single microresonator (mR) in "on-beam" quartz-enhanced photoacoustic spectroscopy (QEPAS), referred to as "half on-beam QEPAS", a classical acoustic model derived from the "orifice-ended tube" was introduced to model and optimize the mR geometrical parameters. The calculated optimum mR parameters were in good agreement with the experimental results obtained in the "half on-beam" as well as the conventional "on-beam" QEPAS approaches through monitoring of atmospheric H2O vapor absorption. In addition, spectrophone performances of different QEPAS configurations (off-beam, on-beam and half on-beam) were compared in terms of signal-to-noise ratio (SNR) gain.
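The "orifice-ended tube" picture invoked above amounts to adding an end correction to the geometric tube length before applying the half-wave condition; a sketch using the textbook unflanged-end correction of roughly 0.6 times the tube radius (the tube dimensions below are illustrative, not the optimized values of the paper):

```python
def corrected_resonance(length_m, radius_m, sound_speed=343.0, end_corr=0.6):
    """Fundamental open-open resonance of a tube including an end
    correction of end_corr * radius at each open end."""
    effective_length = length_m + 2.0 * end_corr * radius_m
    return sound_speed / (2.0 * effective_length)

# Illustrative microresonator dimensions
L, r = 4.4e-3, 0.3e-3   # 4.4 mm long, 0.6 mm inner diameter
print(f"f ~ {corrected_resonance(L, r) / 1e3:.1f} kHz")
```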
---
paper_title: Gas-phase photoacoustic sensor at 8.41 μm using quartz tuning forks and amplitude-modulated quantum cascade lasers
paper_content:
We demonstrate the performance of a novel infrared photoacoustic laser absorbance sensor for gas-phase species using an amplitude-modulated quantum cascade (QC) laser and a quartz tuning fork microphone. The photoacoustic signal was generated by focusing 5.3 mW of a Fabry–Perot QC laser operating at 8.41 μm between the tines of a quartz tuning fork which served as a transducer for the transient acoustic pressure wave. The sensitivity of this sensor was calibrated using the infrared absorber Freon 134a by performing a simultaneous absorption measurement using a 31-cm absorption cell. The power and bandwidth normalized noise equivalent absorption sensitivity (NEAS) of this sensor was determined to be D = 2.0×10⁻⁸ W cm⁻¹/√Hz. A corresponding theoretical analysis of the instrument sensitivity is presented and is capable of quantitatively reproducing the experimental NEAS, indicating that the fundamental sensitivity of this technique is limited by the noise floor of the tuning fork itself.
---
paper_title: QEPAS spectrophones: design, optimization, and performance
paper_content:
The impact of design parameters of a spectrophone for quartz-enhanced photoacoustic spectroscopy on its performance was investigated. The microresonator of the spectrophone is optimized based on an experimental study. The results show that a 4.4 mm-long tube with 0.6 mm inner diameter yields the highest signal-to-noise ratio, which is ∼30 times higher than that of a bare QTF at gas pressures between 400 and 800 Torr. The optimized configuration demonstrates a normalized noise-equivalent absorption coefficient (1σ) of 3.3×10⁻⁹ cm⁻¹ W/√Hz for C2H2 detection at atmospheric pressure. The effect of a changing carrier gas composition is studied. A side-by-side sensitivity comparison between QEPAS and the conventional photoacoustic spectroscopy technique is reported.
---
paper_title: Quartz-enhanced photoacoustic spectroscopy
paper_content:
A new approach to detecting a weak photoacoustic signal in a gas medium is described. Instead of a gas-filled resonant acoustic cavity, the sound energy is accumulated in a high-Q crystal element. Feasibility experiments utilizing a quartz-watch tuning fork demonstrate a sensitivity of 1.2×10⁻⁷ cm⁻¹ W/√Hz. Potential further developments and applications of this technique are discussed.
---
paper_title: Laser microphotoacoustic sensor of ammonia traces in the atmosphere
paper_content:
A highly selective microphotoacoustic sensor of ammonia is built. Main attention is paid to the operation mechanism of the acoustic sensor based on a quartz tuning fork. The optimal dimensions and configuration of the acoustic resonator are determined, which made it possible to increase the sensor sensitivity by a factor of two to three compared to the sensitivity of existing devices. The detector sensitivity for ammonia was 60 ppb (0.05 mg m⁻³) for a measurement time of 10 s and a 25-mW, 1.53-μm laser beam in the acoustic resonator.
---
paper_title: Trace gas detection based on off-beam quartz enhanced photoacoustic spectroscopy: Optimization and performance evaluation
paper_content:
A gas sensor based on off-beam quartz enhanced photoacoustic spectroscopy was developed and optimized. Specifically, the length and diameter of the microresonator tube were optimized, and the outer tube shape was modified to enhance the trace gas detection sensitivity. The impact of the distance between the quartz tuning fork and the acoustic microresonator on the sensor performance was experimentally investigated. The sensor performance was evaluated by determining the detection sensitivity to H2O vapor in ambient air at normal atmospheric pressure. A normalized noise equivalent absorption coefficient (1σ) of 6.2×10⁻⁹ cm⁻¹ W/√Hz was achieved.
---
paper_title: Application of a widely electrically tunable diode laser to chemical gas sensing with quartz-enhanced photoacoustic spectroscopy.
paper_content:
A near-infrared diode laser with sample-grating distributed Bragg reflectors was used as a widely tunable spectroscopic source for multispecies chemical sensing. Quartz-enhanced photoacoustic spectroscopy was utilized to obtain high absorption sensitivity in a compact gas cell. CO2, H2O, C2H2, and NH3 were monitored. A noise equivalent sensitivity of 8×10⁻⁹ cm⁻¹ W⁻¹ Hz⁻¹/² for NH3 detection was achieved, which corresponds to a NH3 mixing ratio of 4.4 parts in 10⁶ by volume (ppmv) with a 1-s time constant and an available 5.2-mW optical power in the gas cell.
---
paper_title: Laser-based systems for trace gas detection in life sciences
paper_content:
Infrared gas phase spectroscopy is becoming very common in many life science applications. Here we present three types of trace gas detection systems based on a CO2 laser and a continuous wave (cw) optical parametric oscillator (OPO) in combination with photoacoustic spectroscopy, and a cw quantum cascade laser (QCL) in combination with wavelength modulation spectroscopy. Examples are included to illustrate the suitability of the CO2 laser system to monitor in real time ethylene emission from various dynamic processes in plants and microorganisms as well as from car exhausts. The versatility of an OPO-based detector is demonstrated by simultaneous detection of 13C-methane and 12C-methane (at 3240 nm) at similar detection limits of 0.1 parts per billion by volume. Recent progress on a QCL-based spectrometer using a continuous wave QCL (output power 25 mW, tuning range of 1891–1908 cm⁻¹) is presented and a comparison is made to a standard chemiluminescence instrument for analysis of NO in exhaled breath.
---
paper_title: Off-beam quartz-enhanced photoacoustic spectroscopy
paper_content:
An off-beam (OB) detection approach is suggested, experimentally investigated and optimized for quartz-enhanced photoacoustic spectroscopy (QEPAS). This OB-QEPAS configuration, very simple in assembly, not only allows the use of larger excitation optical beams and facilitates optical alignment, but also provides higher enhancement of photoacoustic signals than previously published results based on the common on-beam QEPAS under the same experimental conditions. A normalized noise equivalent absorption coefficient (1σ) of 5.9×10⁻⁹ cm⁻¹ W/√Hz was obtained for water vapor detection at normal atmospheric pressure.
---
paper_title: NO trace gas sensor based on quartz-enhanced photoacoustic spectroscopy and external cavity quantum cascade laser
paper_content:
A gas sensor based on quartz-enhanced photoacoustic detection and an external cavity quantum cascade laser was realized and characterized for trace nitric oxide monitoring using the NO R(6.5) absorption doublet at 1900.075 cm⁻¹. Signal and noise dependence on gas pressure were studied to optimize sensor performance. The NO concentration resulting in a noise-equivalent signal was found to be 15 parts per billion by volume, with 100 mW optical excitation power and a data acquisition time of 5 s.
---
paper_title: Coupling external cavity mid-IR quantum cascade lasers with low loss hollow metallic/dielectric waveguides
paper_content:
We report on the optical coupling between hollow core waveguides and external cavity mid-IR quantum cascade lasers (QCLs). Waveguides with 1000 μm bore size and lengths ranging from 2 to 14 cm, with metallic (Ag)/dielectric (AgI or polystyrene) circular cross-section internal coatings, have been employed. Our results show that the QCL mode is perfectly matched to the hybrid HE11 waveguide mode, demonstrating that the internal dielectric coating thickness is effective in suppressing the higher-loss TE-like modes. Optical losses down to 0.44 dB/m at 5.27 μm were measured in an Ag/polystyrene-coated waveguide with an almost unitary coupling efficiency.
---
paper_title: Low-Loss Hollow Waveguide Fibers for Mid-Infrared Quantum Cascade Laser Sensing Applications
paper_content:
We report on single mode optical transmission of hollow core glass waveguides (HWGs) coupled with external cavity mid-IR quantum cascade lasers (QCLs). The QCL mode is perfectly matched to the hybrid HE11 waveguide mode, and the higher-loss TE-like modes are efficiently suppressed by the deposited inner dielectric coating. Optical losses down to 0.44 dB/m and an output beam divergence of ~5 mrad were measured. Using an HWG fiber with an internal core size of 300 µm, we obtained single mode laser transmission at 10.54 µm and successfully employed it in a quartz enhanced photoacoustic gas sensor setup.
---
paper_title: Ppb-level detection of nitric oxide using an external cavity quantum cascade laser based QEPAS sensor
paper_content:
Geometrical parameters of the micro-resonator for a quartz enhanced photoacoustic spectroscopy sensor are optimized to perform sensitive and background-free spectroscopic measurements using mid-IR quantum cascade laser (QCL) excitation sources. Such an optimized configuration is applied to nitric oxide (NO) detection at 1900.08 cm⁻¹ (5.26 µm) with a widely tunable, mode-hop-free external cavity QCL. For a selected NO absorption line that is free from H2O and CO2 interference, a NO detection sensitivity of 4.9 parts per billion by volume is achieved with a 1-s averaging time and 66 mW optical excitation power. This NO detection limit is determined at an optimal gas pressure of 210 Torr and a 2.5% water vapor concentration. Water is added to the analyzed mixture in order to improve the NO vibrational-translational relaxation process.
---
paper_title: Mid-infrared fiber-coupled QCL-QEPAS sensor
paper_content:
An innovative spectroscopic system based on an external cavity quantum cascade laser (EC-QCL) coupled with a mid-infrared (mid-IR) fiber and quartz enhanced photoacoustic spectroscopy (QEPAS) is described. SF6 has been selected as the target gas in a demonstration of the system for trace gas sensing. Single mode laser delivery through the prongs of the quartz tuning fork has been obtained employing a hollow waveguide fiber with an inner silver–silver iodide (Ag–AgI) coating and an internal core diameter of 300 μm. A detailed design and realization of the QCL fiber coupling and output collimator system allowed practically all (99.4%) of the laser beam to be transmitted through the spectrophone module. The achieved sensitivity of the system is 50 parts per trillion in 1 s, corresponding to a record QEPAS normalized noise-equivalent absorption of 2.7×10⁻¹⁰ W cm⁻¹ Hz⁻¹/².
---
paper_title: Part-per-trillion level detection of SF6 using a single-mode fiber-coupled quantum cascade laser and a quartz enhanced photoacoustic sensor
paper_content:
We report here on the design and realization of optoacoustic sensors based on an external cavity QCL laser source emitting at 10.54 μm, fiber-coupled with a QEPAS spectrophone module. SF6 has been selected as the target gas. Single mode laser delivery through the prongs of the quartz tuning fork has been realized using a hollow waveguide fiber with an internal core size of 300 μm. The achieved sensitivity of the system was 50 parts per trillion in 1 s, corresponding to a record QEPAS normalized noise-equivalent absorption of 2.7×10⁻¹⁰ W cm⁻¹ Hz⁻¹/².
---
paper_title: Part-per-trillion level SF6 detection using a quartz enhanced photoacoustic spectroscopy-based sensor with single-mode fiber-coupled quantum cascade laser excitation.
paper_content:
A sensitive spectroscopic sensor based on a hollow-core fiber-coupled quantum cascade laser (QCL) emitting at 10.54 μm and the quartz enhanced photoacoustic spectroscopy (QEPAS) technique is reported. The design and realization of mid-IR fiber and coupler optics has ensured single-mode QCL beam delivery to the QEPAS sensor. The collimation optics was designed to produce a laser beam of significantly reduced beam size and waist so as to prevent illumination of the quartz tuning fork and microresonator tubes. SF6 was selected as the target gas. A minimum detection sensitivity of 50 parts per trillion in 1 s was achieved with a QCL power of 18 mW, corresponding to a normalized noise-equivalent absorption of 2.7×10⁻¹⁰ W cm⁻¹/√Hz.
---
paper_title: Modulation cancellation method for measurements of small temperature differences in a gas
paper_content:
An innovative spectroscopic technique based on balancing and cancellation of modulated signals induced by two excitation sources is reported. For its practical implementation, we used quartz-enhanced photoacoustic spectroscopy as an absorption-sensing technique and applied the new approach to measure small temperature differences between two gas samples. The achieved sensitivity was 30 mK in 17 s. A theoretical sensitivity analysis is presented, and the applicability of this method to isotopic measurements is discussed.
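The sensitivity of such a temperature measurement ultimately rests on how strongly a line intensity depends on temperature; a simplified sketch of the Boltzmann-factor part of that dependence (the partition-function term is neglected here, and the lower-state energy is an illustrative value, not one from the paper):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e10    # speed of light, cm/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def rel_intensity_change_per_kelvin(lower_state_energy_cm, temp_k=296.0):
    """Approximate d(ln S)/dT from the lower-state Boltzmann factor
    exp(-h c E'' / (k T)); partition-function variation is ignored."""
    return H * C * lower_state_energy_cm / (K_B * temp_k ** 2)

e_lower = 500.0   # assumed lower-state energy, cm^-1 (illustrative)
dlnS_dT = rel_intensity_change_per_kelvin(e_lower)
print(f"d(ln S)/dT ~ {dlnS_dT:.2e} per K  ->  {dlnS_dT * 0.03 * 100:.3f}% for a 30 mK difference")
```

For this assumed line, a 30 mK temperature difference corresponds to a relative intensity change of only a few parts in 10⁴, which is why a balancing scheme rather than a direct intensity comparison is attractive.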
---
paper_title: Spectroscopic measurements of isotopic water composition using a new modulation cancellation method
paper_content:
We report on the application of an innovative spectroscopic balancing technique for isotopologue abundance quantification. We employ quartz enhanced photoacoustic spectroscopy in a 2f wavelength modulation mode as an absorption sensing technique and water vapor as a test analyte. Isotope absorption lines with very close lower energy levels and with the same quantum numbers have been selected to limit the sensitivity to temperature variations and guarantee identical broadening relaxation properties. A detection sensitivity in measuring the deviation from a standard sample, δ¹⁸O, of 1.4‰ in 200 s of integration time was achieved.
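For readers unfamiliar with the δ notation used above, the deviation of an isotope ratio from a reference standard is defined as δ = (R_sample/R_standard − 1) × 1000‰; a one-line sketch with an illustrative, made-up sample:

```python
def delta_permil(r_sample, r_standard):
    """Isotope delta value in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_VSMOW = 2005.20e-6               # 18O/16O ratio of the VSMOW reference water
r_sample = R_VSMOW * (1 - 0.010)   # a sample depleted by 1% (illustrative)
print(f"delta-18O = {delta_permil(r_sample, R_VSMOW):.1f} per mil")
```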
---
paper_title: Modulation cancellation method in laser spectroscopy
paper_content:
A novel spectroscopic technique based on modulation spectroscopy with two excitation sources and quartz enhanced photoacoustic spectroscopy is described. We demonstrated two potential applications of this detection technique. First, we investigated the measurement of small temperature differences in a gas mixture. In this case, a sensitivity of 30 mK in 17 sec was achieved for a C2H2/N2 gas mixture with a 0.5% C2H2 concentration. Second, we demonstrated the detection of broadband absorbing chemical species, for which we selected hydrazine as the target molecule and achieved a detection limit of ∼1 part per million in 1 sec. In both cases, the measurements were performed with near-IR laser diodes and overtone transitions.
---
paper_title: Modulation cancellation method (MOCAM) in modulation spectroscopy
paper_content:
An innovative spectroscopic technique based on balancing and cancellation of modulated signals induced by two excitation sources is described. We used quartz enhanced photoacoustic spectroscopy (QEPAS) in a 2f wavelength modulation mode as an absorption sensing technique and employed a modulation cancellation approach for spectroscopic measurements of small temperature differences in a gas mixture and detection of broadband absorbers. We demonstrated measurement of small temperature differences in a C2H2/N2 gas mixture with a sensitivity of 30 mK in 17 s and detection of hydrazine, a broadband absorbing chemical species, down to a concentration of 1 part per million by volume in 1 s. In both cases we used near-infrared laser diodes and selected overtone transitions.
---
paper_title: Modulation cancellation method for laser spectroscopy
paper_content:
We report on novel methods employing a modulation cancellation technique which result in a significant increase in the sensitivity and accuracy of trace gas detectors. This method can be applied for isotopomer abundance quantification, temperature measurements and the detection of large molecules.
---
paper_title: Modulation cancellation method for measurements of small temperature differences in a gas
paper_content:
An innovative spectroscopic technique based on balancing and cancellation of modulated signals induced by two excitation sources is reported. For its practical implementation, we used quartz-enhanced photoacoustic spectroscopy as an absorption-sensing technique and applied the new approach to measure small temperature differences between two gas samples. The achieved sensitivity was 30 mK in 17 s. A theoretical sensitivity analysis is presented, and the applicability of this method to isotopic measurements is discussed.
---
paper_title: Ammonia detection by use of quartz-enhanced photoacoustic spectroscopy with a near-IR telecommunication diode laser
paper_content:
A gas sensor based on quartz-enhanced photoacoustic detection and a fiber-coupled telecommunication distributed-feedback diode laser was designed and characterized for trace NH3 monitoring at a 1.53-μm wavelength (overtone absorption region). Signal and noise dependence on gas pressure were studied to optimize sensor performance. The ammonia concentration resulting in a noise-equivalent signal was found to be 0.65 parts per million by volume with 38-mW optical excitation power and a lock-in amplifier time constant of 1 s. This corresponds to a normalized absorption sensitivity of 7.2×10⁻⁹ cm⁻¹ W/√Hz, comparable with detection sensitivity achieved in conventional photoacoustic spectroscopy. The sensor architecture can be the basis for a portable gas analyzer.
---
paper_title: Impact of humidity on quartz-enhanced photoacoustic spectroscopy based detection of HCN
paper_content:
The architecture and operation of a trace hydrogen cyanide (HCN) gas sensor based on quartz-enhanced photoacoustic spectroscopy and using a λ=1.53 μm telecommunication diode laser are described. The influence of humidity content in the analyzed gas on the sensor performance is investigated. A kinetic model describing the vibrational to translational (V–T) energy transfer following the laser excitation of a HCN molecule is developed. Based on this model and the experimental data, the V–T relaxation time of HCN was found to be (1.91±0.07)×10⁻³ s·Torr in collisions with N2 molecules and (2.1±0.2)×10⁻⁶ s·Torr in collisions with H2O molecules. The noise-equivalent concentration of HCN in air at normal indoor conditions was determined to be at the 155-ppbv level with a 1-s sensor time constant.
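The s·Torr constants quoted above can be combined into an effective relaxation time for a given gas composition via 1/τ = Σ p_i/C_i; a sketch using the paper's two constants and an assumed humid-air composition (the partial pressures are illustrative):

```python
def effective_vt_time(partial_pressures_torr, constants_s_torr):
    """Effective V-T relaxation time when each collision partner i
    contributes a rate p_i / C_i, with C_i the tau*p constant in s*Torr."""
    rate = sum(p / c for p, c in zip(partial_pressures_torr, constants_s_torr))
    return 1.0 / rate

C_N2 = 1.91e-3    # s*Torr, HCN-N2 (from the abstract)
C_H2O = 2.1e-6    # s*Torr, HCN-H2O (from the abstract)
p_N2, p_H2O = 745.0, 15.0   # assumed humid air near 760 Torr with ~2% water
tau = effective_vt_time([p_N2, p_H2O], [C_N2, C_H2O])
print(f"effective V-T relaxation time ~ {tau * 1e9:.1f} ns")
```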
---
paper_title: Modulation cancellation method for measurements of small temperature differences in a gas
paper_content:
An innovative spectroscopic technique based on balancing and cancellation of modulated signals induced by two excitation sources is reported. For its practical implementation, we used quartz-enhanced photoacoustic spectroscopy as an absorption-sensing technique and applied the new approach to measure small temperature differences between two gas samples. The achieved sensitivity was 30 mK in 17 s. A theoretical sensitivity analysis is presented, and the applicability of this method to isotopic measurements is discussed.
---
paper_title: On-line laser photoacoustic detection of ethene in exhaled air as biomarker of ultraviolet radiation damage of the human skin
paper_content:
The exhaled air and volatile emission by the skin of human subjects were analyzed for traces of ethene (C2H4) by means of CO2 laser photoacoustic trace gas detection. Due to the extreme sensitivity of the detection system (6 parts per trillion by volume, i.e. 6 in 10¹²), these measurements could be performed on-line and noninvasively. Exhaled ethene was used as a biomarker for lipid peroxidation in the skin of human subjects exposed to ultraviolet (UV) radiation from a solarium. A change in the ethene concentration was already observed in the exhaled air after 2 min. Adaptation of the skin to UV exposure and direct skin emission could also be observed.
---
paper_title: Influence of molecular relaxation dynamics on quartz-enhanced photoacoustic detection of CO2 at λ=2 μm
paper_content:
Carbon dioxide (CO2) trace gas detection based on quartz enhanced photoacoustic spectroscopy (QEPAS) using a distributed feedback diode laser operating at λ=2 μm is performed, with a primary purpose of studying vibrational relaxation processes in the CO2-N2-H2O system. A simple model is developed and used to explain the experimentally observed dependence of amplitude and phase of the photoacoustic signal on pressure and gas humidity. A (1σ) sensitivity of 110 parts-per-million (with a 1 s lock-in time constant) was obtained for CO2 concentrations measured in humid gas samples.
---
paper_title: Ultrasensitive gas detection by quartz-enhanced photoacoustic spectroscopy in the fundamental molecular absorption bands region
paper_content:
A trace gas sensor based on quartz-enhanced photoacoustic spectroscopy with a quantum cascade laser operating at 4.55 μm as an excitation source was developed. The sensor performance was evaluated for the detection of N2O and CO. A noise-equivalent (1σ) sensitivity of 4 ppbv N2O with 3 s response time to (1-1/e) of the steady-state level was demonstrated. The influence of the relevant energy transfer processes on the detection limits was analyzed. Approaches to improve the current sensor performance are also discussed.
---
paper_title: Photoacoustic phase shift as a chemically selective spectroscopic parameter
paper_content:
The phase information obtained in photoacoustic experiments can be used to separate the signals originating from chemical species with overlapping absorption spectra. This approach was applied to quantify parts per million CO levels in propylene using quartz-enhanced photoacoustic spectroscopy and a quantum cascade laser as an excitation source. The experimental data were used to evaluate V–T relaxation rates of CO and N2O in propylene.
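The chemical selectivity of the phase exploited above comes from the finite V–T relaxation time: for modulation at angular frequency ω, the photoacoustic response lags the optical excitation by roughly arctan(ωτ) and is attenuated by 1/√(1+(ωτ)²). A sketch with assumed relaxation times (illustrative, not values from the paper):

```python
import math

def pa_phase_and_amplitude(f_mod_hz, tau_s):
    """Phase lag (degrees) and relative amplitude of a photoacoustic signal
    for a single-exponential V-T relaxation time tau at modulation f_mod."""
    omega_tau = 2.0 * math.pi * f_mod_hz * tau_s
    phase_deg = math.degrees(math.atan(omega_tau))
    amplitude = 1.0 / math.sqrt(1.0 + omega_tau ** 2)
    return phase_deg, amplitude

f_mod = 32768.0           # tuning-fork resonance frequency, Hz
for tau in (1e-7, 1e-5):  # assumed fast- and slow-relaxing species
    phi, amp = pa_phase_and_amplitude(f_mod, tau)
    print(f"tau = {tau:.0e} s  phase lag = {phi:5.1f} deg  relative amplitude = {amp:.3f}")
```

A slowly relaxing species therefore acquires a large, characteristic phase shift, which is what allows signals from overlapping absorbers to be separated.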
---
paper_title: Quartz-enhanced photoacoustic spectroscopy
paper_content:
A new approach to detecting a weak photoacoustic signal in a gas medium is described. Instead of a gas-filled resonant acoustic cavity, the sound energy is accumulated in a high-Q crystal element. Feasibility experiments utilizing a quartz-watch tuning fork demonstrate a sensitivity of 1.2×10⁻⁷ cm⁻¹ W/√Hz. Potential further developments and applications of this technique are discussed.
---
paper_title: Spectroscopic measurements of isotopic water composition using a new modulation cancellation method
paper_content:
We report on the application of an innovative spectroscopic balancing technique for isotopologue abundance quantification. We employ quartz enhanced photoacoustic spectroscopy in a 2f wavelength modulation mode as an absorption sensing technique and water vapor as a test analyte. Isotope absorption lines with very close lower energy levels and with the same quantum numbers have been selected to limit the sensitivity to temperature variations and guarantee identical broadening relaxation properties. A detection sensitivity in measuring the deviation from a standard sample, δ¹⁸O, of 1.4‰ in 200 s of integration time was achieved.
---
paper_title: Quartz-enhanced photoacoustic spectroscopy
paper_content:
A new approach to detecting a weak photoacoustic signal in a gas medium is described. Instead of a gas-filled resonant acoustic cavity, the sound energy is accumulated in a high-Q crystal element. Feasibility experiments utilizing a quartz-watch tuning fork demonstrate a sensitivity of 1.2×10⁻⁷ cm⁻¹ W/√Hz. Potential further developments and applications of this technique are discussed.
---
paper_title: Modulation cancellation method in laser spectroscopy
paper_content:
A novel spectroscopic technique based on modulation spectroscopy with two excitation sources and quartz enhanced photoacoustic spectroscopy is described. We demonstrated two potential applications of this detection technique. First, we investigated the measurement of small temperature differences in a gas mixture. In this case, a sensitivity of 30 mK in 17 sec was achieved for a C2H2/N2 gas mixture with a 0.5% C2H2 concentration. Second, we demonstrated the detection of broadband absorbing chemical species, for which we selected hydrazine as the target molecule and achieved a detection limit of ∼1 part per million in 1 sec. In both cases, the measurements were performed with near-IR laser diodes and overtone transitions.
---
paper_title: Modulation cancellation method (MOCAM) in modulation spectroscopy
paper_content:
An innovative spectroscopic technique based on balancing and cancellation of modulated signals induced by two excitation sources is described. We used quartz enhanced photoacoustic spectroscopy (QEPAS) in a 2f wavelength modulation mode as an absorption sensing technique and employed a modulation cancellation approach for spectroscopic measurements of small temperature differences in a gas mixture and detection of broadband absorbers. We demonstrated measurement of small temperature differences in a C2H2/N2 gas mixture with a sensitivity of 30 mK in 17 s and detection of hydrazine, a broadband absorbing chemical species, down to a concentration of 1 part per million by volume in 1 s. In both cases we used near-infrared laser diodes and selected overtone transitions.
---
paper_title: Modulation cancellation method for laser spectroscopy
paper_content:
We report on novel methods employing a modulation cancellation technique which result in a significant increase in the sensitivity and accuracy of trace gas detectors. This method can be applied for isotopomer abundance quantification, temperature measurements and the detection of large molecules.
---
paper_title: Gas detection with evanescent-wave quartz-enhanced photoacoustic spectroscopy
paper_content:
Evanescent-wave gas sensing with tapered optical fibers (TOFs) and quartz-enhanced photoacoustic spectroscopy (QEPAS) is reported. The evanescent field of TOFs with diameters down to the sub-wavelength scale is utilized for photoacoustic excitation in photoacoustic spectroscopy. A quartz tuning fork (QTF) with a resonant frequency of ~32.75 kHz is used to detect the generated pressure wave. A normalized noise equivalent absorption coefficient of 1.5×10⁻⁶ cm⁻¹ W/√Hz is achieved for acetylene detection with a fiber taper with a waist diameter of 1.1 μm. It is found that QEPAS with TOFs of sub-wavelength diameters exhibits sensitivities comparable with open-path QEPAS but with the additional advantages of lower insertion loss, easier alignment, and multiplexing capability.
---
paper_title: Long-period gratings in wavelength-scale microfibers.
paper_content:
We report the fabrication of long-period gratings (LPGs) in wavelength-scale microfibers with diameters from 1.5 to 3 μm. The LPGs were fabricated by use of a femtosecond IR laser to periodically modify the surface of the fibers. These LPGs have grating periods of a few tens of micrometers, much smaller than those in conventional optical fibers. A compact 10-period LPG with a device length of only approximately 150 μm demonstrated a strong resonant dip of >20 dB around 1330 nm. These microfiber LPGs would be useful in-fiber components for microfiber-based devices, circuits, and sensors.
---
paper_title: Evanescent-wave photoacoustic spectroscopy with optical micro/nano fibers
paper_content:
We demonstrate gas detection based on evanescent-wave photoacoustic (PA) spectroscopy with tapered optical fibers. Evanescent-field instead of open-path absorption is exploited for PA generation, and a quartz tuning fork is used for PA detection. A tapered optical fiber with a diameter down to the wavelength scale demonstrates detection sensitivity similar to an open-path system but with the advantages of easier optical alignment, smaller insertion loss, and multiplexing capability.
---
paper_title: Cutting-edge terahertz technology
paper_content:
Research into terahertz technology is now receiving increasing attention around the world, and devices exploiting this waveband are set to become increasingly important in a very diverse range of applications. Here, an overview of the status of the technology, its uses and its future prospects is presented.
---
paper_title: Rotational Spectroscopy of Diatomic Molecules
paper_content:
1. General introduction
2. The separation of nuclear and electronic motion
3. The electronic Hamiltonian
4. Interactions arising from nuclear magnetic and electric moments
5. Angular momentum theory and spherical tensor algebra
6. Electronic and vibrational states
7. Derivation of the effective Hamiltonian
8. Molecular beam magnetic and electric resonance
9. Microwave and far-infrared magnetic resonance
10. Pure rotational spectroscopy
11. Double resonance spectroscopy
Appendices
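As a compact reminder of the material covered by the monograph above, the pure rotational transition frequencies of a rigid diatomic follow ν(J→J+1) = 2B(J+1); a sketch using the approximate ground-state rotational constant of CO (literature value quoted from memory, centrifugal distortion neglected):

```python
B_CO_GHZ = 57.636   # approximate ground-state rotational constant of CO, GHz

def rotational_line_ghz(j_lower, b_ghz=B_CO_GHZ):
    """Rigid-rotor frequency of the J -> J+1 pure rotational transition,
    nu = 2 B (J + 1); centrifugal distortion is neglected."""
    return 2.0 * b_ghz * (j_lower + 1)

# A few CO rotational lines, marching up into the THz range
for j in range(0, 30, 5):
    print(f"J = {j:2d} -> {j + 1:2d}:  {rotational_line_ghz(j) / 1000:6.3f} THz")
```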
---
paper_title: Frequency and amplitude stabilized terahertz quantum cascade laser as local oscillator
paper_content:
We demonstrate an experimental scheme to simultaneously stabilize the frequency and amplitude of a 3.5 THz third-order distributed feedback quantum cascade laser as a local oscillator. The frequency stabilization has been realized using a methanol absorption line, a power detector, and a proportional-integral-derivative (PID) loop. The amplitude stabilization of the incident power has been achieved using a swing-arm voice coil actuator as a fast optical attenuator, using the direct detection output of a superconducting mixer in combination with a 2nd PID loop. Improved Allan variance times of the entire receiver, as well as the heterodyne molecular spectra, are demonstrated.
---
paper_title: Terahertz spectroscopy: a powerful new tool for the chemical sciences?
paper_content:
Terahertz spectroscopy is only now beginning to make its transition from initial development by physicists and engineers to broader use by chemists, materials scientists and biologists, thanks to the increasing availability of commercial terahertz spectrometers. With the unique insights that terahertz spectroscopy can provide into intermolecular bonding and crystalline matter, it could prove to be an invaluable addition to the chemist's analytical toolset. This tutorial review aims to give an introduction to terahertz spectroscopy, its techniques, equipment, current applications and potential for the chemical sciences to a broad readership.
---
paper_title: Chemical recognition in terahertz time-domain spectroscopy and imaging
paper_content:
In this paper, we present an overview of chemical recognition with ultrashort THz pulses. We describe the experimental technique and demonstrate how signals for chemical recognition of substances in sealed containers can be obtained, based on the broadband absorption spectra of the substances. We then discuss chemical recognition in combination with THz imaging and show that certain groups of biological substances may give rise to characteristic recognition signals. Finally, we explore the power of numerical prediction of absorption spectra of molecular crystals and illuminate some of the challenges facing state-of-the-art computational chemistry software.
---
paper_title: Molecular Spectroscopy with TeraHertz Quantum Cascade Lasers
paper_content:
We have implemented quantum cascade lasers (QCLs) operating at about 2.5 THz in a spectrometer for high resolution molecular spectroscopy. One QCL has a Fabry-Perot resonator while the other is a distributed feedback laser. Linewidth and frequency tunability of both QCLs were investigated by mixing the radiation from the QCL with that from a 2.5 THz gas laser. Both were found sufficient for Doppler-limited spectroscopy. Rotational transitions of methanol were detected in absorption using a QCL as the radiation source. Amplitude as well as frequency modulation of the output power of the QCL were used. The absolute frequency was determined simultaneously with the absorption signal by mixing a small part of the radiation from the QCL with that from a gas laser. The pressure broadening and the pressure shift of a rotational transition of methanol at 2.519 THz were measured in order to demonstrate the performance of the spectrometer.
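The pressure broadening and shift measurements mentioned above reduce, in the simplest picture, to linear relations Δν_HWHM = γ·p and ν₀(p) = ν₀ + δ·p; a sketch with illustrative coefficients (typical orders of magnitude, not the methanol values determined in the paper):

```python
def lorentzian_hwhm_mhz(pressure_torr, gamma_mhz_per_torr):
    """Collisional (Lorentzian) half-width at half-maximum, HWHM = gamma * p."""
    return gamma_mhz_per_torr * pressure_torr

def shifted_center_ghz(nu0_ghz, pressure_torr, delta_mhz_per_torr):
    """Pressure-shifted line center, nu(p) = nu0 + delta * p."""
    return nu0_ghz + delta_mhz_per_torr * pressure_torr / 1000.0

nu0 = 2519.0   # line center, GHz (the 2.519 THz methanol line discussed above)
gamma = 5.0    # assumed broadening coefficient, MHz/Torr (illustrative)
delta = -0.3   # assumed shift coefficient, MHz/Torr (illustrative)
for p in (0.1, 0.5, 1.0, 5.0):
    print(f"p = {p:4.1f} Torr  HWHM = {lorentzian_hwhm_mhz(p, gamma):5.2f} MHz  "
          f"center = {shifted_center_ghz(nu0, p, delta):.6f} GHz")
```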
---
paper_title: Terahertz Spectroscopy: System and Sensitivity Considerations
paper_content:
Terahertz spectroscopy is a backbone method in many areas of research. We have analyzed typically employed THz spectroscopy systems and their sensitivity in a general comparative approach. Recent progress to reduce the data acquisition time by frequency multiplexing using a spectrometer with a THz quantum cascade laser is described. The performance of a spectrometer using a pulsed Ge THz laser with a few μs long integration time and recent progress to modulate the laser current within such a short pulse are presented. We also investigate the origin of random errors in intensity spectra of a THz TDS with the goal to identify common error sources in TDS systems to allow reduction of the total measurement time.
---
paper_title: THz QCL-Based Cryogen-Free Spectrometer for in Situ Trace Gas Sensing
paper_content:
We report on a set of high-sensitivity terahertz spectroscopy experiments making use of QCLs to detect rotational molecular transitions in the far-infrared. We demonstrate that using a compact and transportable cryogen-free setup, based on a quantum cascade laser in a closed-cycle Stirling cryostat, and pyroelectric detectors, a considerable improvement in sensitivity can be obtained by implementing a wavelength modulation spectroscopy technique. Indeed, we show that the sensitivity of methanol vapour detection can be improved by a factor ≈ 4 with respect to standard direct absorption approaches, offering perspectives for high sensitivity detection of a number of chemical compounds across the far-infrared spectral range.
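The wavelength modulation scheme referred to above detects a harmonic of a sinusoidally dithered laser frequency, moving detection away from low-frequency noise. A minimal numerical sketch of how a 2f lock-in component is obtained from a Lorentzian absorption line (all parameters illustrative, not taken from the paper):

```python
import numpy as np

def lorentzian(nu, nu0, hwhm):
    """Peak-normalized Lorentzian absorption profile."""
    return hwhm ** 2 / ((nu - nu0) ** 2 + hwhm ** 2)

def wms_2f_signal(center_detunings, mod_depth, hwhm, n_samples=2048):
    """Second-harmonic (2f) lock-in component of the absorbance when the
    laser frequency is modulated as nu(t) = nu_c + a * cos(theta)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    out = []
    for nu_c in center_detunings:
        absorbance = lorentzian(nu_c + mod_depth * np.cos(theta), 0.0, hwhm)
        # project onto cos(2*theta); factor 2/N gives the Fourier coefficient
        out.append(2.0 * np.mean(absorbance * np.cos(2.0 * theta)))
    return np.array(out)

detunings = np.linspace(-5.0, 5.0, 11)   # laser detuning in units of the HWHM
signal = wms_2f_signal(detunings, mod_depth=2.2, hwhm=1.0)
for d, s in zip(detunings, signal):
    print(f"detuning = {d:5.1f}  2f signal = {s:+.3f}")
```

The sign convention of the central lobe depends on the lock-in phase; the point of the sketch is only the shape of the 2f response as the line center is scanned.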
---
paper_title: Nanowire-based field effect transistors for terahertz detection and imaging systems.
paper_content:
The development of self-assembled nanostructure technologies has recently opened the way towards a wide class of semiconductor integrated devices, with progressively optimized performances and the potential for a widespread range of electronic and photonic applications. Here we report on the development of field effect transistors (FETs) based on semiconductor nanowires (NWs) as highly-sensitive room-temperature plasma-wave broadband terahertz (THz) detectors. The electromagnetic radiation at 0.3 THz is funneled onto a broadband bow-tie antenna, whose lobes are connected to the source and gate FET electrodes. The oscillating electric field experienced by the channel electrons, combined with the charge density modulation by the gate electrode, results in a source-drain signal rectification, which can be read as a DC signal output. We investigated the influence of Se-doping concentration of InAs NWs on the detection performances, reaching responsivity values higher than 100 V W−1, with noise-equivalent power of ∼10−9 W Hz−1/2. Transmission imaging experiments at 0.3 THz show the good reliability and sensitivity of the devices in a real practical application.
---
paper_title: High-resolution gas phase spectroscopy with a distributed feedback terahertz quantum cascade laser
paper_content:
The quantum cascade laser is a powerful, narrow linewidth, and continuous wave source of terahertz radiation. The authors have implemented a distributed feedback device in a spectrometer for high-resolution gas phase spectroscopy. Amplitude as well as frequency modulation schemes have been realized. The absolute frequency was determined by mixing the radiation from the quantum cascade laser with that from a gas laser. The pressure broadening and the pressure shift of a rotational transition of methanol at 2.519 THz were measured in order to demonstrate the performance of the spectrometer.
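As a rough sanity check on why a narrow-linewidth QCL suffices for Doppler-limited work at this frequency, the sketch below estimates the Doppler FWHM of a methanol line near 2.519 THz using the standard formula; the temperature and molar mass values are assumptions for illustration, not numbers taken from the paper.

```python
import math

# Doppler-limited FWHM: dnu = nu0 * sqrt(8*ln2*kB*T / (m*c^2))
k_B = 1.380649e-23        # J/K
c = 2.99792458e8          # m/s
amu = 1.66053907e-27      # kg

nu0 = 2.519e12            # Hz, methanol rotational line discussed above
T = 300.0                 # K, assumed gas temperature
m = 32.04 * amu           # kg, assumed molar mass of methanol (CH3OH)

fwhm = nu0 * math.sqrt(8 * math.log(2) * k_B * T / (m * c ** 2))
print(f"Doppler FWHM ~ {fwhm / 1e6:.1f} MHz")   # a few MHz at room temperature
```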
---
paper_title: A quartz enhanced photo-acoustic gas sensor based on a custom tuning fork and a terahertz quantum cascade laser
paper_content:
An innovative quartz enhanced photoacoustic (QEPAS) gas sensing system operating in the THz spectral range and employing a custom quartz tuning fork (QTF) is described. The QTF dimensions are 3.3 cm × 0.4 cm × 0.8 cm, with the two prongs spaced by ∼800 μm. To test our sensor we used a quantum cascade laser as the light source and selected a methanol rotational absorption line at 131.054 cm−1 (∼3.93 THz), with line-strength S = 4.28 × 10−21 cm mol−1. The sensor was operated at 10 Torr pressure on the first flexion QTF resonance frequency of 4245 Hz. The corresponding Q-factor was 74 760. Stepwise concentration measurements were performed to verify the linearity of the QEPAS signal as a function of the methanol concentration. The achieved sensitivity of the system is 7 parts per million in 4 seconds, corresponding to a QEPAS normalized noise-equivalent absorption of 2 × 10−10 W cm−1 Hz−1/2, comparable with the best result of mid-IR QEPAS systems.
---
paper_title: QEPAS detector for rapid spectral measurements
paper_content:
A quartz enhanced photoacoustic spectroscopy sensor designed for fast response was used in combination with a pulsed external cavity quantum cascade laser to rapidly acquire gas absorption data over the 1196–1281 cm−1 spectral range. The system was used to measure concentrations of water vapor, pentafluoroethane (freon-125), acetone, and ethanol both individually and in combined mixtures. The precision achieved for freon-125 concentration in a single 1.1 s long spectral scan is 13 ppbv.
---
paper_title: Terahertz quartz enhanced photo-acoustic sensor
paper_content:
A quartz enhanced photo-acoustic sensor employing a single-mode quantum cascade laser emitting at 3.93 Terahertz (THz) is reported. A custom tuning fork with a 1 mm spatial separation between the prongs allows the focusing of the THz laser beam between them, while preventing the prongs illumination. A methanol transition with line-strength of 4.28 × 10−21 cm has been selected as target spectroscopic line. At a laser optical power of ∼40 μW, we reach a sensitivity of 7 parts per million in a 4 s integration time, corresponding to a 1σ normalized noise-equivalent absorption of 2 × 10−10 W cm−1 Hz−1/2.
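The normalized noise-equivalent absorption (NNEA) quoted above is, by the usual convention, the minimum detectable absorption coefficient scaled by the optical power and divided by the square root of the detection bandwidth. The sketch below only illustrates that bookkeeping; the absorption coefficient and the equivalent noise bandwidth assumed for the 4 s integration are hypothetical placeholders, not values stated in the abstract.

```python
import math

P0 = 40e-6          # W, optical power reported above
delta_f = 0.25      # Hz, assumed equivalent noise bandwidth for a ~4 s integration
alpha_min = 2.5e-6  # cm^-1, hypothetical 1-sigma minimum detectable absorption coefficient

nnea = alpha_min * P0 / math.sqrt(delta_f)   # W cm^-1 Hz^-1/2
print(f"NNEA ~ {nnea:.1e} W cm^-1 Hz^-1/2")  # ~2e-10 with these placeholder numbers
```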
---
paper_title: High-resolution photoacoustic and direct absorption spectroscopy of main greenhouse gases by use of a pulsed entangled cavity doubly resonant OPO
paper_content:
An entangled cavity doubly resonant optical parametric oscillator (ECOPO) has been developed to provide tunable narrow line width (<100 MHz) pulsed (8 ns) radiation over the 3.8–4.3 μm spectral range at a multi-kilohertz repetition rate with up to 100-W peak power. We demonstrate that coarse single mode tuning is obtained over the full spectral range of oscillation (300 cm−1), while automated mode-hop-free fine tuning is carried out over more than 100 GHz. High-resolution spectra of main greenhouse gases (CO2, N2O, SO2 and CH4) have been obtained in good agreement with calculated spectra from the HITRAN database. These experiments outline the unique capabilities of the ECOPO for multi-gas sensing based on direct absorption as well as photoacoustic spectroscopy.
---
paper_title: The potential of mid-infrared photoacoustic spectroscopy for the detection of various doping agents used by athletes
paper_content:
The feasibility of laser-photoacoustic measurements for the detection and the analysis of different isolated doping agents in the vapour phase is discussed. To the best of our knowledge, this is the first time that photoacoustic vapour-phase measurements of doping substances have been presented. Spectra of different doping classes (stimulants, anabolica, diuretica, and beta blockers) are shown and discussed in terms of their detection sensitivity and selectivity. The potential of laser spectroscopy for detecting the intake of prohibited substances by athletes is explored.
---
paper_title: External cavity quantum cascade laser
paper_content:
In this paper we review the progress of the development of mid-infrared quantum cascade lasers (QCLs) operated in an external cavity configuration. We concentrate on QCLs based on the bound-to-continuum design, since this design is especially suitable for broadband applications. Since they were first demonstrated, these laser-based tunable sources have improved in performance in terms of output power, duty cycle, operation temperature and tuneability. Nowadays they are an interesting alternative to FTIRs for some applications. They operate at room temperature, feature a high spectral resolution while being small in size. They were successfully used in different absorption spectroscopy techniques. Due to their vast potential for applications in industry, medicine, security and research, these sources enjoy increasing interest within the research community as well as in industry.
---
paper_title: THz quantum cascade designs for optimized injection
paper_content:
We report two different terahertz (THz) quantum cascade lasers based on bound-to-continuum transitions emitting at 2.8 and 3.8 THz. Novel design approaches have been explored with the purpose of optimizing the electron injection process. Both devices show large current dynamic ranges with output powers of several tens of mW, offering ample scope for future improvements.
---
paper_title: Part-per-trillion level detection of SF6 using a single-mode fiber-coupled quantum cascade laser and a quartz enhanced photoacoustic sensor
paper_content:
We will report here on the design and realization of optoacoustic sensors based on an external cavity QCL laser source emitting at 10.54 μm, fiber-coupled with a QEPAS spectrophone module. SF6 has been selected as the target gas. Single-mode laser delivery through the prongs of the quartz tuning fork has been realized using a hollow waveguide fiber with an internal core size of 300 μm. The achieved sensitivity of the system was 50 parts per trillion in 1 s, corresponding to a record QEPAS normalized noise-equivalent absorption of 2.7×10−10 W cm−1 Hz−1/2.
---
paper_title: Submillimeter, millimeter, and microwave spectral line catalog.
paper_content:
This paper describes a computer accessible catalog of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 10,000 GHz (i.e., wavelengths longer than 30 μm). The catalog can be used as a planning guide or as an aid in identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalog has been constructed by using theoretical least-squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based on the resultant fitted parameters and their covariances.
---
paper_title: Part-per-trillion level SF6 detection using a quartz enhanced photoacoustic spectroscopy-based sensor with single-mode fiber-coupled quantum cascade laser excitation.
paper_content:
A sensitive spectroscopic sensor based on a hollow-core fiber-coupled quantum cascade laser (QCL) emitting at 10.54 μm and quartz enhanced photoacoustic spectroscopy (QEPAS) technique is reported. The design and realization of mid-IR fiber and coupler optics has ensured single-mode QCL beam delivery to the QEPAS sensor. The collimation optics was designed to produce a laser beam of significantly reduced beam size and waist so as to prevent illumination of the quartz tuning fork and microresonator tubes. SF6 was selected as the target gas. A minimum detection sensitivity of 50 parts per trillion in 1 s was achieved with a QCL power of 18 mW, corresponding to a normalized noise-equivalent absorption of 2.7×10−10 W·cm−1/Hz1/2.
---
paper_title: High power terahertz quantum cascade laser
paper_content:
We present a high-power terahertz quantum cascade laser at about 3 THz based on a bound-to-continuum active region design. At 10 K, corrected for the collection efficiency, a maximum peak power of 137 mW is obtained in pulsed mode. Moreover, we introduce a monolithically integrated THz quantum cascade laser (QCL) array, for which the maximum peak power increases to 218 mW after correction. Overall, the array shows better performance than a single device, indicating a promising outlook.
---
paper_title: Mid-infrared fiber-coupled QCL-QEPAS sensor
paper_content:
An innovative spectroscopic system based on an external cavity quantum cascade laser (EC-QCL) coupled with a mid-infrared (mid-IR) fiber and quartz enhanced photoacoustic spectroscopy (QEPAS) is described. SF6 has been selected as a target gas in demonstration of the system for trace gas sensing. Single mode laser delivery through the prongs of the quartz tuning fork has been obtained employing a hollow waveguide fiber with inner silver–silver iodide (Ag–AgI) coatings and an internal core diameter of 300 μm. A detailed design and realization of the QCL fiber coupling and output collimator system allowed practically all (99.4%) of the laser beam to be transmitted through the spectrophone module. The achieved sensitivity of the system is 50 parts per trillion in 1 s, corresponding to a record for QEPAS normalized noise-equivalent absorption of 2.7 × 10−10 W cm−1 Hz−1/2.
---
paper_title: Influence of molecular relaxation dynamics on quartz-enhanced photoacoustic detection of CO2 at λ=2 μm
paper_content:
Carbon dioxide (CO2) trace gas detection based on quartz enhanced photoacoustic spectroscopy (QEPAS) using a distributed feedback diode laser operating at λ=2 μm is performed, with a primary purpose of studying vibrational relaxation processes in the CO2-N2-H2O system. A simple model is developed and used to explain the experimentally observed dependence of amplitude and phase of the photoacoustic signal on pressure and gas humidity. A (1σ) sensitivity of 110 parts-per-million (with a 1 s lock-in time constant) was obtained for CO2 concentrations measured in humid gas samples.
---
paper_title: NO trace gas sensor based on quartz-enhanced photoacoustic spectroscopy and external cavity quantum cascade laser
paper_content:
A gas sensor based on quartz-enhanced photoacoustic detection and an external cavity quantum cascade laser was realized and characterized for trace nitric oxide monitoring using the NO R(6.5) absorption doublet at 1900.075 cm−1. Signal and noise dependence on gas pressure were studied to optimize sensor performance. The NO concentration resulting in a noise-equivalent signal was found to be 15 parts per billion by volume, with 100 mW optical excitation power and a data acquisition time of 5 s.
---
paper_title: Ultrasensitive gas detection by quartz-enhanced photoacoustic spectroscopy in the fundamental molecular absorption bands region
paper_content:
A trace gas sensor based on quartz-enhanced photoacoustic spectroscopy with a quantum cascade laser operating at 4.55 μm as an excitation source was developed. The sensor performance was evaluated for the detection of N2O and CO. A noise-equivalent (1σ) sensitivity of 4 ppbv N2O with 3 s response time to (1-1/e) of the steady-state level was demonstrated. The influence of the relevant energy transfer processes on the detection limits was analyzed. Approaches to improve the current sensor performance are also discussed.
---
paper_title: Photoacoustic phase shift as a chemically selective spectroscopic parameter
paper_content:
The phase information obtained in photoacoustic experiments can be used to separate the signals originating from chemical species with overlapping absorption spectra. This approach was applied to quantify parts per million CO levels in propylene using quartz-enhanced photoacoustic spectroscopy and a quantum cascade laser as an excitation source. The experimental data were used to evaluate V–T relaxation rates of CO and N2O in propylene.
---
paper_title: SEMICONDUCTOR LASER BASED TRACE GAS SENSOR TECHNOLOGY: RECENT ADVANCES AND APPLICATIONS
paper_content:
Recent advances in the development of sensors based on infrared diode and quantum cascade lasers for the detection of trace gas species are reported. Several examples of applications in environmental and industrial process monitoring as well as in medical diagnostics using quartz enhanced photoacoustic spectroscopy and laser absorption spectroscopy will be described.
---
paper_title: Applications of quartz tuning forks in spectroscopic gas sensing
paper_content:
A recently introduced approach to photoacoustic detection of trace gases utilizing a quartz tuning fork (TF) as a resonant acoustic transducer is described in detail. Advantages of the technique called quartz-enhanced photoacoustic spectroscopy (QEPAS) compared to conventional resonant photoacoustic spectroscopy include QEPAS sensor immunity to environmental acoustic noise, a simple absorption detection module design, and its capability to analyze gas samples ∼1 mm3 in volume. Noise sources and the TF properties as a function of the sampled gas pressure, temperature and chemical composition are analyzed. Previously published results for QEPAS based chemical gas sensing are summarized. The achieved sensitivity of 5.4×10−9 cm−1 W/√Hz is compared to recently published results of photoacoustic gas sensing by other research groups. An experimental study of the long-term stability of a QEPAS-based ammonia sensor is presented. The results of this study indicate that the sensor exhibits very low drift, which allows da...
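The TF quality factors central to QEPAS are usually obtained either from the width of the resonance profile or from the ring-down of the fork after excitation. The sketch below shows the two standard textbook conversions with made-up numbers; it is not taken from this paper.

```python
import math

f0 = 32768.0   # Hz, nominal resonance of a standard watch-crystal tuning fork (assumed)
fwhm = 4.0     # Hz, hypothetical width (FWHM) of the measured resonance profile
tau = 0.08     # s, hypothetical 1/e amplitude ring-down time

Q_from_width = f0 / fwhm               # ~8.2e3
Q_from_ringdown = math.pi * f0 * tau   # ~8.2e3, consistent with the width estimate
print(Q_from_width, Q_from_ringdown)
```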
---
paper_title: Diode laser-based photoacoustic spectroscopy with interferometrically-enhanced cantilever detection.
paper_content:
A novel sensitive approach to detect weak pressure variations has been applied to tunable diode laser-based photoacoustic spectroscopy. The sensing device consists of a miniature silicon cantilever, the deflection of which is detected with a compact Michelson-type interferometer. The photoacoustic system has been applied to the detection of carbon dioxide (CO2) at 1572 nm with a distributed feedback diode laser. A noise equivalent sensitivity of 2.8 × 10−10 cm−1 W Hz−1/2 was demonstrated. Potential improvements of the technique are discussed.
---
paper_title: Optical gas sensing: a review
paper_content:
The detection and measurement of gas concentrations using the characteristic optical absorption of the gas species is important for both understanding and monitoring a variety of phenomena from industrial processes to environmental change. This study reviews the field, covering several individual gas detection techniques including non-dispersive infrared, spectrophotometry, tunable diode laser spectroscopy and photoacoustic spectroscopy. We present the basis for each technique, recent developments in methods and performance limitations. The technology available to support this field, in terms of key components such as light sources and gas cells, has advanced rapidly in recent years and we discuss these new developments. Finally, we present a performance comparison of different techniques, taking data reported over the preceding decade, and draw conclusions from this benchmarking.
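Most direct-absorption methods compared in such benchmarks reduce, at their core, to the Beer–Lambert law. The sketch below converts an absorption coefficient and path length into a fractional transmission change; the numbers are arbitrary illustrations, not benchmark values from the review.

```python
import math

alpha = 1e-6   # cm^-1, hypothetical gas absorption coefficient at line centre
L = 100.0      # cm, hypothetical optical path length

transmission = math.exp(-alpha * L)
fractional_absorption = 1.0 - transmission
print(f"Fractional absorption ~ {fractional_absorption:.2e}")  # ~1e-4 for these numbers
```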
---
paper_title: Intracavity quartz-enhanced photoacoustic sensor
paper_content:
We report on a spectroscopic technique named intracavity quartz-enhanced photoacoustic spectroscopy (I-QEPAS) employed for sensitive trace-gas detection in the mid-infrared spectral region. It is based on a combination of QEPAS with a buildup optical cavity. The sensor includes a distributed feedback quantum cascade laser emitting at 4.33 μm. We achieved a laser optical power buildup factor of ∼500, which corresponds to an intracavity laser power of ∼0.75 W. CO2 has been selected as the target molecule for the I-QEPAS demonstration. We achieved a detection sensitivity of 300 parts per trillion for 4 s integration time, corresponding to a noise equivalent absorption coefficient of 1.4 × 10−8 cm−1 and a normalized noise-equivalent absorption of 3.2 × 10−10 W cm−1 Hz−1/2.
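The buildup factor and intracavity power quoted above are related by a simple multiplication; the sketch below back-calculates the implied power coupled into the cavity, which is an inference from the two quoted numbers rather than a value stated explicitly in the abstract.

```python
buildup_factor = 500     # optical power buildup factor reported above
P_intracavity = 0.75     # W, intracavity power reported above

P_coupled = P_intracavity / buildup_factor
print(f"Implied coupled input power ~ {P_coupled * 1e3:.1f} mW")  # ~1.5 mW
```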
---
paper_title: High-Performance InP-Based Mid-IR Quantum Cascade Lasers
paper_content:
Quantum cascade lasers (QCLs) were once considered as inefficient devices, as the wall-plug efficiency (WPE) was merely a few percent at room temperature. But this situation has changed in the past few years, as dramatic enhancements to the output power and WPE have been made for InP-based mid-IR QCLs. Room temperature continuous-wave (CW) output power as high as 2.8 W and WPE as high as 15% have now been demonstrated for individual devices. Along with the fundamental exploration of refining the design and improving the material quality, a consistent determination of important device performance parameters allows for strategically addressing each component that can be improved potentially. In this paper, we present quantitative experimental evidence backing up the strategies we have adopted to improve the WPE for QCLs with room temperature CW operation.
---
paper_title: Atmospheric CH4 and N2O measurements near Greater Houston area landfills using a QCL-based QEPAS sensor system during DISCOVER-AQ 2013
paper_content:
A quartz-enhanced photoacoustic absorption spectroscopy (QEPAS)-based gas sensor was developed for methane (CH4) and nitrous-oxide (N2O) detection. The QEPAS-based sensor was installed in a mobile laboratory operated by Aerodyne Research, Inc. to perform atmospheric CH4 and N2O detection around two urban waste-disposal sites located in the northeastern part of the Greater Houston area, during DISCOVER-AQ, a NASA Earth Venture during September 2013. A continuous wave, thermoelectrically cooled, 158 mW distributed feedback quantum cascade laser emitting at 7.83 μm was used as the excitation source in the QEPAS gas sensor system. Compared to typical ambient atmospheric mixing ratios of CH4 and N2O of 1.8 ppmv and 323 ppbv, respectively, significant increases in mixing ratios were observed when the mobile laboratory was circling two waste-disposal sites in Harris County and when waste disposal trucks were encountered.
---
paper_title: Aircraft Emissions of Methane and Nitrous Oxide during the Alternative Aviation Fuel Experiment
paper_content:
Given the predicted growth of aviation and the recent developments of alternative aviation fuels, quantifying methane (CH4) and nitrous oxide (N2O) emission ratios for various aircraft engines and fuels can help constrain projected impacts of aviation on the Earth’s radiative balance. Fuel-based emission indices for CH4 and N2O were quantified from CFM56–2C1 engines aboard the NASA DC-8 aircraft during the first Alternative Aviation Fuel Experiment (AAFEX-I) in 2009. The measurements of JP-8 fuel combustion products indicate that at low thrust engine states (idle and taxi, or 4% and 7% maximum rated thrusts, respectively) the engines emit both CH4 and N2O at a mean ±1σ rate of 170 ± 160 mg CH4 (kg Fuel)−1 and 110 ± 50 mg N2O (kg Fuel)−1, respectively. At higher thrust levels corresponding to greater fuel flow and higher engine temperatures, CH4 concentrations in engine exhaust were lower than ambient concentrations. Average emission indices for JP-8 fuel combusted at engine thrusts between 30% and 100% of...
---
| Title: Quartz-Enhanced Photoacoustic Spectroscopy: A Review
Section 1: Introduction
Description 1: This section outlines the context and importance of trace gas detection, introduces different gas sensor types, and discusses the advantages of laser absorption spectroscopy over other techniques.
Section 2: Photoacoustic Spectroscopy
Description 2: This section describes the fundamentals of photoacoustic spectroscopy, the underlying optical absorption process, various noise sources, and the application areas of PAS.
Section 3: Quartz-Enhanced Photoacoustic Spectroscopy
Description 3: This section introduces quartz-enhanced photoacoustic spectroscopy (QEPAS), its benefits over regular PAS, and the principles of operation using quartz tuning forks.
Section 4: QEPAS Sensor
Description 4: This section details the configuration and working of a typical QEPAS sensor, including the optical components and methods for laser wavelength modulation.
Section 5: Quartz Tuning Fork: Resonant Properties and Noise
Description 5: This section explains the resonant properties and noise characteristics of quartz tuning forks used in QEPAS, discussing their mechanical and electrical modeling.
Section 6: Wavelength Modulation Detection
Description 6: This section elucidates the wavelength modulation technique used to improve the QEPAS signal-to-noise ratio and minimize external acoustic noise.
Section 7: Amplitude Modulation Detection for Broadband Absorbers
Description 7: This section describes the amplitude modulation technique applicable for the detection of broadband absorbers and the associated challenges and optimizations.
Section 8: On-Beam QEPAS
Description 8: This section discusses the design and functioning of on-beam QEPAS configurations, including the use of micro-resonators to enhance the photoacoustic signal.
Section 9: Off-Beam QEPAS
Description 9: This section introduces the off-beam QEPAS configuration, its advantages over on-beam setups, and details the optimization for different applications.
Section 10: Fiber-coupled QCL-QEPAS
Description 10: This section covers the integration of fiber-coupled quantum cascade lasers with QEPAS for flexible and compact sensor designs, briefing on hollow core waveguides.
Section 11: MOCAM Technique Combined with QEPAS
Description 11: This section explains the Modulation Cancellation Method (MOCAM) combined with QEPAS for applications like temperature and isotopic composition measurements of gas mixtures.
Section 12: Quartz-Enhanced Evanescent-Wave PAS
Description 12: This section introduces quartz-enhanced evanescent-wave PAS using tapered optical fibers, its configuration, and potential applications.
Section 13: Terahertz Spectroscopy for Gas Sensing
Description 13: This section discusses the application of terahertz spectroscopy for gas sensing, highlighting recent advancements and potential for high-resolution molecular spectroscopy.
Section 14: Extension of QEPAS Technique in THz Range: Custom-Made Tuning Fork
Description 14: This section elaborates on the extension of QEPAS to the THz range using a custom-made tuning fork with larger prong spacing to accommodate the THz laser beam.
Section 15: THz QEPAS Sensor for Methanol Detection
Description 15: This section presents a case study of a THz QCL-based QEPAS sensor for detecting methanol, discussing experimental setup, optimization, and results.
Section 16: Review of QEPAS-Based Trace Gas Detection
Description 16: This section reviews the performance of QEPAS sensors for various gases, summarizing the minimum detection limits and comparing it with other techniques.
Section 17: Long Term Stability of a QEPAS Sensor
Description 17: This section assesses the long-term stability and performance of QEPAS sensors using Allan variance analysis to quantify sensitivity drift and signal averaging limits.
Section 18: Comparison with Existing Optical Techniques and Perspective
Description 18: This section compares QEPAS with other optical detection techniques, discusses future prospects, and identifies key factors for achieving high sensitivity in optical gas sensors.
Section 19: Conclusions
Description 19: This section summarizes the key advancements and benefits of quartz-enhanced photoacoustic spectroscopy and its applications across various fields. |
ElectroMagnetoEncephalography Software: Overview and Integration with Other EEG/MEG Toolboxes | 16 | ---
paper_title: Keep it simple: a case for using classical minimum norm estimation in the analysis of EEG and MEG data
paper_content:
The present study aims at finding the optimal inverse solution for the bioelectromagnetic inverse problem in the absence of reliable a priori information about the generating sources. Three approaches to tackle this problem are compared theoretically: the maximum-likelihood approach, the minimum norm approach, and the resolution optimization approach. It is shown that in all three of these frameworks, it is possible to make use of the same kind of a priori information if available, and the same solutions are obtained if the same a priori information is implemented. In particular, they all yield the minimum norm pseudoinverse (MNP) in the complete absence of such information. This indicates that the properties of the MNP, and in particular, its limitations like the inability to localize sources in depth, are not specific to this method but are fundamental limitations of the recording modalities. The minimum norm solution provides the amount of information that is actually present in the data themselves, and is therefore optimally suited to investigate the general resolution and accuracy limits of EEG and MEG measurement configurations. Furthermore, this strongly suggests that the classical minimum norm solution is a valuable method whenever no reliable a priori information about source generators is available, that is, when complex cognitive tasks are employed or when very noisy data (e.g., single-trial data) are analyzed. For that purpose, an efficient and practical implementation of this method will be suggested and illustrated with simulations using a realistic head geometry.
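A minimal numerical sketch of the regularized minimum norm pseudoinverse discussed above is given below; the lead-field matrix, data vector, and regularization value are random placeholders, and the implementation is a generic textbook form rather than the specific one proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))   # hypothetical lead-field matrix
d = rng.standard_normal(n_sensors)                # hypothetical sensor data at one time point

lam = 0.1  # Tikhonov regularization parameter (arbitrary choice)
# Minimum norm estimate: j = L^T (L L^T + lam * I)^(-1) d
gram = L @ L.T + lam * np.eye(n_sensors)
j_hat = L.T @ np.linalg.solve(gram, d)
print(j_hat.shape)   # (500,) estimated source amplitudes
```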
---
paper_title: A new method for off-line removal of ocular artifact
paper_content:
A new off-line procedure for dealing with ocular artifacts in ERP recording is described. The procedure (EMCP) uses EOG and EEG records for individual trials in an experimental session to estimate a propagation factor which describes the relationship between the EOG and EEG traces. The propagation factor is computed after stimulus-linked variability in both traces has been removed. Different propagation factors are computed for blinks and eye movements. Tests are presented which demonstrate the validity and reliability of the procedure. ERPs derived from trials corrected by EMCP are more similar to a 'true' ERP than are ERPs derived from either uncorrected or randomly corrected trials. The procedure also reduces the difference between ERPs which are based on trials with different degrees of EOG variance. Furthermore, variability at each time point, across trials, is reduced following correction. The propagation factor decreases from frontal to parietal electrodes, and is larger for saccades than blinks. It is more consistent within experimental sessions than between sessions. The major advantage of the procedure is that it permits retention of all trials in an ERP experiment, irrespective of ocular artifact. Thus, studies of populations characterized by a high degree of artifact, and those requiring eye movements as part of the experimental task, are made possible. Furthermore, there is no need to require subjects to restrict eye movement activity. In comparison to procedures suggested by others, EMCP also has the advantage that separate correction factors are computed for blinks and movements and that these factors are based on data from the experimental session itself rather than from a separate calibration session.
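A stripped-down sketch of this style of regression-based correction is shown below: estimate a per-channel propagation factor between EOG and EEG and subtract the scaled EOG. The removal of stimulus-locked variability and the separate handling of blinks and saccades, which are essential parts of EMCP, are omitted, so this is only a simplified illustration.

```python
import numpy as np

def correct_ocular(eeg, eog):
    """eeg: (n_channels, n_samples); eog: (n_samples,). Returns EOG-corrected EEG."""
    eog_c = eog - eog.mean()
    corrected = np.empty_like(eeg)
    for ch in range(eeg.shape[0]):
        x = eeg[ch] - eeg[ch].mean()
        b = np.dot(x, eog_c) / np.dot(eog_c, eog_c)   # propagation factor for this channel
        corrected[ch] = eeg[ch] - b * eog_c
    return corrected

# toy usage: ocular activity propagates into four EEG channels with factor 0.3
rng = np.random.default_rng(1)
eog = rng.standard_normal(2000)
eeg = 0.3 * eog + 0.1 * rng.standard_normal((4, 2000))
clean = correct_ocular(eeg, eog)
```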
---
paper_title: The polar average reference effect: a bias in estimating the head surface integral in EEG recording
paper_content:
A reference-independent measure of potential is helpful for studying the multichannel EEG. The potential integrated over the surface of the body is a constant, i.e. inactive across time, regardless of the activity and distribution of brain electric sources. Therefore, the average reference, the mean of all recording channels at each time point, may be used to approximate an inactive reference. However, this approximation is valid only with accurate spatial sampling of the scalp fields. Accurate sampling requires a sufficient electrode density and full coverage of the head's surface. If electrodes are concentrated in one region of the surface, such as just on the scalp, then the average is biased toward that region. Differences from the average will then be smaller in the center of the region, e.g. the vertex, than at the periphery. In this paper, we illustrate how this polar average reference effect (PARE) may be created by both the inadequate density and the uneven distribution of EEG electrodes. The greater the coverage of the surface of the volume conductor, the more the average reference approaches the ideal inactive reference.
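Computing the average reference described above is a one-line operation, subtracting the instantaneous mean over all recorded channels; the sketch below makes the point that the mean is necessarily taken only over the electrodes actually present, which is exactly where the PARE bias enters.

```python
import numpy as np

def average_reference(data):
    """data: (n_channels, n_samples) EEG array. Subtract the channel mean at each sample."""
    return data - data.mean(axis=0, keepdims=True)
```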
---
paper_title: Statistical control of artifacts in dense array EEG/MEG studies
paper_content:
With the advent of dense sensor arrays (64-256 channels) in electroencephalography and magnetoencephalography studies, the probability increases that some recording channels are contaminated by artifact. If all channels are required to be artifact free, the number of acceptable trials may be unacceptably low. Precise artifact screening is necessary for accurate spatial mapping, for current density measures, for source analysis, and for accurate temporal analysis based on single-trial methods. Precise screening presents a number of problems given the large datasets. We propose a procedure for statistical correction of artifacts in dense array studies (SCADS), which (1) detects individual channel artifacts using the recording reference, (2) detects global artifacts using the average reference, (3) replaces artifact-contaminated sensors with spherical interpolation statistically weighted on the basis of all sensors, and (4) computes the variance of the signal across trials to document the stability of the averaged waveform. Examples from 128-channel recordings and from numerical simulations illustrate the importance of careful artifact review in the avoidance of analysis errors.
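A crude sketch in the spirit of the per-channel screening step is given below: flag channels whose trial-wise amplitude range is a statistical outlier and replace them by a distance-weighted combination of the remaining sensors. The threshold and the inverse-distance weighting are simplifications; SCADS itself uses statistically weighted spherical interpolation.

```python
import numpy as np

def screen_and_interpolate(trial, positions, z_thresh=3.0):
    """trial: (n_channels, n_samples); positions: (n_channels, 3) sensor coordinates."""
    amp_range = trial.max(axis=1) - trial.min(axis=1)
    z = (amp_range - amp_range.mean()) / amp_range.std()
    bad = np.where(z > z_thresh)[0]
    good = np.setdiff1d(np.arange(trial.shape[0]), bad)
    fixed = trial.copy()
    for b in bad:
        d = np.linalg.norm(positions[good] - positions[b], axis=1)
        w = 1.0 / np.maximum(d, 1e-6)                 # inverse-distance weights
        fixed[b] = (w[:, None] * trial[good]).sum(axis=0) / w.sum()
    return fixed, bad
```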
---
paper_title: Interpreting magnetic fields of the brain: minimum norm estimates
paper_content:
The authors have applied estimation theory to the problem of determining primary current distributions from measured neuromagnetic fields. In this procedure, essentially nothing is assumed about the source currents, except that they are spatially restricted to a certain region. Simulation experiments show that the results can describe the structure of the current flow fairly well. By increasing the number of measurements, the estimate can be made more localised. The current distributions may be also used as an interpolation and an extrapolation for the measured field patterns.
---
paper_title: Electromagnetic brain mapping
paper_content:
There have been tremendous advances in our ability to produce images of human brain function. Applications of functional brain imaging extend from improving our understanding of the basic mechanisms of cognitive processes to better characterization of pathologies that impair normal function. Magnetoencephalography (MEG) and electroencephalography (EEG) (MEG/EEG) localize neural electrical activity using noninvasive measurements of external electromagnetic signals. Among the available functional imaging techniques, MEG and EEG uniquely have temporal resolutions below 100 ms. This temporal precision allows us to explore the timing of basic neural processes at the level of cell assemblies. MEG/EEG source localization draws on a wide range of signal processing techniques including digital filtering, three-dimensional image analysis, array signal processing, image modeling and reconstruction, and blind source separation and phase synchrony estimation. We describe the underlying models currently used in MEG/EEG source estimation and describe the various signal processing steps required to compute these sources. In particular we describe methods for computing the forward fields for known source distributions and parametric and imaging-based approaches to the inverse problem.
---
paper_title: Mapping EEG-potentials on the surface of the brain: A strategy for uncovering cortical sources
paper_content:
This paper describes a uniform method for calculating the interpolation of scalp EEG potential distribution, the current source density (CSD), the cortical potential distribution (cortical mapping) and the CSD of the cortical potential distribution. It will be shown that interpolation and deblurring methods such as CSD or cortical mapping are not independent of the inverse problem in potential theory. Not only the resolution but also the accuracy of these techniques, especially those of deblurring, depend greatly on the spatial sampling rate (i.e., the number of electrodes). Using examples from simulated and real (64 channels) data it can be shown that the application of more than 100 EEG channels is not only favourable but necessary to guarantee a reasonable accuracy in the calculations of CSD or cortical mapping. Likewise, it can be shown that using more than 250 electrodes does not improve the resolution.
---
paper_title: EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis
paper_content:
We have developed a toolbox and graphic user interface, EEGLAB, running under the cross-platform MATLAB environment (The Mathworks, Inc.) for processing collections of single-trial and/or averaged EEG data of any number of channels. Available functions include EEG data, channel and event information importing, data visualization (scrolling, scalp map and dipole model plotting, plus multi-trial ERP-image plots), preprocessing (including artifact rejection, filtering, epoch selection, and averaging), Independent Component Analysis (ICA) and time/frequency decompositions including channel and component cross-coherence supported by bootstrap statistical methods based on data resampling. EEGLAB functions are organized into three layers. Top-layer functions allow users to interact with the data through the graphic interface without needing to use MATLAB syntax. Menu options allow users to tune the behavior of EEGLAB to available memory. Middle-layer functions allow users to customize data processing using command history and interactive 'pop' functions. Experienced MATLAB users can use EEGLAB data structures and stand-alone signal processing functions to write custom and/or batch analysis scripts. Extensive function help and tutorial information are included. A 'plug-in' facility allows easy incorporation of new EEG modules into the main menu. EEGLAB is freely available (http://www.sccn.ucsd.edu/eeglab/) under the GNU public license for noncommercial use and open source development, together with sample data, user tutorial and extensive documentation.
---
paper_title: Oscillatory γ-Band (30–70 Hz) Activity Induced by a Visual Search Task in Humans
paper_content:
The coherent representation of an object in the visual system has been suggested to be achieved by the synchronization in the γ-band (30–70 Hz) of a distributed neuronal assembly. Here we measure variations of high-frequency activity on the human scalp. The experiment is designed to allow the comparison of two different perceptions of the same picture. In the first condition, an apparently meaningless picture that contained a hidden Dalmatian, a neutral stimulus, and a target stimulus (twirled blobs) are presented. After the subject has been trained to perceive the hidden dog and its mirror image, the second part of the recordings is performed (condition 2). The same neutral stimulus is presented, intermixed with the picture of the dog and its mirror image (target stimulus). Early (95 msec) phase-locked (or stimulus-locked) γ-band oscillations do not vary with stimulus type but can be subdivided into an anterior component (38 Hz) and a posterior component (35 Hz). Non-phase-locked γ-band oscillations appear with a latency jitter around 280 msec after stimulus onset and disappear in averaged data. They increase in amplitude in response to both target stimuli. They also globally increase in the second condition compared with the first one. It is suggested that this γ-band energy increase reflects both bottom-up (binding of elementary features) and top-down (search for the hidden dog) activation of the same neural assembly coding for the Dalmatian. The relationships between high- and low-frequency components of the response are discussed, and a possible functional role of each component is suggested.
---
paper_title: Difference formulas for the surface Laplacian on a triangulated surface
paper_content:
Different approximating expressions for the surface Laplacian operator on a triangulated surface are derived. They are evaluated on a triangulated spherical surface for which the analytical expression of the surface Laplacian is known. It is shown that in order to obtain accurate results, due care has to be taken of irregularities present in the triangulation grid. If this is done, the approximation will equal the performance of an expression based on least squares which can be derived. Next the different approximations obtained are used as a regularization operator in the solution of an ill-posed inverse problem in electrical volume conduction. It is shown that in this application a crude approximation to the surface Laplacian suffices.
---
paper_title: Spherical splines for scalp potential and current density mapping
paper_content:
Description of mapping methods using spherical splines, both to interpolate scalp potentials (SPs), and to approximate scalp current densities (SCDs). Compared to a previously published method using thin plate splines, the advantages are a very simple derivation of the SCD approximation, faster computing times, and greater accuracy in areas with few electrodes.
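The spherical-spline approach rests on a kernel built from a weighted sum of Legendre polynomials of the cosine of the angle between electrode sites. A truncated implementation of that kernel is sketched below; the spline order m and the truncation length are arbitrary choices, and the surrounding interpolation machinery is omitted.

```python
import numpy as np
from scipy.special import eval_legendre

def spline_kernel(x, m=4, n_terms=50):
    """g(x) for spherical-spline interpolation; x = cos(angle between electrode sites)."""
    n = np.arange(1, n_terms + 1)
    coeff = (2 * n + 1) / (n ** m * (n + 1.0) ** m)
    legendre = eval_legendre(n[:, None], np.atleast_1d(x))   # shape (n_terms, len(x))
    return (coeff[:, None] * legendre).sum(axis=0) / (4 * np.pi)
```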
---
paper_title: EEG oscillations and wavelet analysis
paper_content:
Electroencephalographic recordings are analyzed in an event-related fashion when we want to gain insights into the relation of the electroencephalogram (EEG) and experimental events. The standard analysis method is to focus on event-related potentials (ERPs) by averaging. However, another approach is to concentrate on event-related oscillations (EROs). This chapter will introduce the notion of EEG oscillations and a method suited to analyze the temporal and spatial characteristics of EROs at the same time, namely the wavelet analysis. At first an introduction to oscillatory EEG activity will be given, followed by details of the wavelet analysis. Some general prerequisites of recording EROs will be reviewed and finally, recently introduced wavelet-based methods for studying dynamical interrelations between brain signals will be discussed.
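A minimal sketch of the wavelet analysis referred to above: convolve the signal with complex Morlet wavelets and take squared magnitudes to obtain time-frequency power. The number of cycles, the normalization, and the frequency grid are arbitrary choices made for illustration.

```python
import numpy as np

def morlet_power(signal, sfreq, freqs, n_cycles=7):
    """signal: 1-D array; returns power of shape (len(freqs), len(signal))."""
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)
        t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # crude energy normalization
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power
```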
---
paper_title: Event-Related EEG Time-Frequency Analysis: An Overview of Measures and An Analysis of Early Gamma Band Phase Locking in Schizophrenia
paper_content:
An increasing number of schizophrenia studies have been examining electroencephalography (EEG) data using time-frequency analysis, documenting illness-related abnormalities in neuronal oscillations and their synchronization, particularly in the gamma band. In this article, we review common methods of spectral decomposition of EEG, time-frequency analyses, types of measures that separately quantify magnitude and phase information from the EEG, and the influence of parameter choices on the analysis results. We then compare the degree of phase locking (ie, phase-locking factor) of the gamma band (36–50 Hz) response evoked about 50 milliseconds following the presentation of standard tones in 22 healthy controls and 21 medicated patients with schizophrenia. These tones were presented as part of an auditory oddball task performed by subjects while EEG was recorded from their scalps. The results showed prominent gamma band phase locking at frontal electrodes between 20 and 60 milliseconds following tone onset in healthy controls that was significantly reduced in patients with schizophrenia (P = .03). The finding suggests that the early-evoked gamma band response to auditory stimuli is deficiently synchronized in schizophrenia. We discuss the results in terms of pathophysiological mechanisms compromising event-related gamma phase synchrony in schizophrenia and further attempt to reconcile this finding with prior studies that failed to find this effect.
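The phase-locking factor analysed above is the length of the mean resultant vector of single-trial phases; a minimal sketch using a band-pass filter and the analytic signal (rather than the wavelet decomposition used in the article) is given below, with an arbitrary filter order.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_factor(trials, sfreq, band=(36.0, 50.0)):
    """trials: (n_trials, n_samples). Returns PLF over time for the given band, in [0, 1]."""
    b, a = butter(4, [band[0] / (sfreq / 2), band[1] / (sfreq / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    phases = np.angle(hilbert(filtered, axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))
```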
---
| Title: ElectroMagnetoEncephalography Software: Overview and Integration with Other EEG/MEG Toolboxes
Section 1: Introduction
Description 1: Introduce the EMEGS software, its purpose, and the structure of the paper.
Section 2: Main Field of Software Application
Description 2: Describe the primary applications of EMEGS in academic and research settings, including the distinction between basic and expert modes.
Section 3: Availability, License, and Support
Description 3: Provide details on how to access EMEGS, its licensing under GNU GPL, and the available support and documentation.
Section 4: System Requirements
Description 4: Outline the system requirements for running EMEGS, including necessary software and hardware specifications.
Section 5: Features and Implementation
Description 5: Provide an overview of the features offered by EMEGS and how various tasks are implemented in the software.
Section 6: Preprocessing
Description 6: Detail the preprocessing steps in EMEGS to prepare raw EEG or MEG data for analysis.
Section 7: Artifact Detection and Averaging
Description 7: Explain the process of artifact detection, sensor interpolation, and averaging in EMEGS.
Section 8: Interpolation, Current Source Density, and Source Localization
Description 8: Discuss the methods used in EMEGS for interpolation, current source density calculation, and source localization.
Section 9: Statistical and Exploratory Analysis of Evoked Brain Signals
Description 9: Describe the statistical and exploratory analysis capabilities within EMEGS for analyzing evoked brain signals.
Section 10: Data Display and Visualization
Description 10: Explain the various data visualization tools and techniques provided by EMEGS.
Section 11: Generation of Synthetic Data
Description 11: Discuss how synthetic EEG/MEG data can be generated in EMEGS for educational and testing purposes.
Section 12: Extending EMEGS Capabilities: Exemplified Time Frequency Analysis Using FieldTrip
Description 12: Describe the integration of EMEGS with FieldTrip for time-frequency analysis and how it extends the software’s capabilities.
Section 13: Future Directions
Description 13: Provide insights into the future development plans for EMEGS and potential new features.
Section 14: Conclusion
Description 14: Summarize the key points of the paper and the advantages of using EMEGS for EEG/MEG data analysis.
Section 15: Appendix A - EMEGS Analysis Tutorial
Description 15: Provide brief analysis tutorials to illustrate the EMEGS processing facilities.
Section 16: Appendix B - EMEGS Plug-ins
Description 16: Explain the plug-in facility in EMEGS and how users can create and integrate their own analysis functions. |
K3M: A universal algorithm for image skeletonization and a review of thinning techniques | 5 | ---
paper_title: Experiments in processing pictorial information with a digital computer
paper_content:
In almost all digital data processing machine applications, the input data made available to the machine are the result of some prior processing. This processing is done manually in many applications. Thus, such inputs as punched cards, magnetic tape, and punched paper tape often are the result of a manual processing operation in which a human being is required to inspect visually an array of printed characters and to describe these data in a form capable of being processed by machine. In recognition of the importance of automating such operations, many investigations have been undertaken to devise automatic character sensing equipment. Suppose, however, that we attempt to view such efforts in proper perspective. We find a more fundamental problem that has, heretofore, failed to receive the attention that it warrants. The problem is one of making directly available to a computer pictorial information which would ordinarily be visually processed by human beings before being fed to a data processing system. This pictorial information may range from such highly stylized forms as printed characters, diagrams, schematic drawings, emblems, and designs through less stylized forms in cartoons and handwritten characters to such highly amorphous forms as photographs of real objects, e.g., people, aerial views, and microscopic and telescopic images.
---
paper_title: Quantitative performance evaluation of thinning algorithms under noisy conditions
paper_content:
Thinning algorithms are an important sub-component in the construction of computer vision (especially for optical character recognition (OCR)) systems. Important criteria for the choice of a thinning algorithm include the sensitivity of the algorithms to input shape complexity and to the amount of noise. In previous work, we introduced a methodology to quantitatively analyse the performance of thinning algorithms. The methodology uses an ideal world model for thinning based on the concept of Blum ribbons. In this paper we extend upon this methodology to answer these and other experimental questions of interest. We contaminate the noise-free images using a noise model that simulates the degradation introduced by the process of xerographic copying and laser printing. We then design experiments that study how each of 16 popular thinning algorithms performs relative to the Blum ribbon gold standard and relative to itself as the amount of noise varies. We design statistical data analysis procedures for various performance comparisons. We present the results obtained from these comparisons and a discussion of their implications in this paper.
---
paper_title: One-pass parallel thinning: analysis, properties, and quantitative evaluation
paper_content:
A one-pass parallel thinning algorithm based on a number of criteria, including connectivity, unit-width convergence, medial axis approximation, noise immunity, and efficiency, is proposed. A pipeline processing model is assumed for the development. Precise analysis of the thinning process is presented to show its properties, and proofs of skeletal connectivity and convergence are provided. The proposed algorithm is further extended to the derived-grid to attain an isotropic medial axis representation. A set of measures based on the desired properties of thinning is used for quantitative evaluation of various algorithms. Image reconstruction from connected skeletons is also discussed. Evaluation shows that the procedures compare favorably to others.
---
paper_title: Programming pattern recognition
paper_content:
Everyone likes to speculate, and recently there has been a lot of talk about reading machines and hearing machines. We know it is possible to simulate speech. This raises lots of interesting questions such as: If the machines can speak, will they squawk when you ask them to divide by zero? And can two machines carry on an intelligent conversation, say in Gaelic? And, of course, there is the expression "electronic brain" and the question, Do machines think? These questions are more philosophical than technical and I am going to duck them.
---
paper_title: Thinning methodologies−a comprehensive survey
paper_content:
A comprehensive survey of thinning methodologies is presented. A wide range of thinning algorithms, including iterative deletion of pixels and nonpixel-based methods, is covered. Skeletonization algorithms based on medial axis and other distance transforms are not considered. An overview of the iterative thinning process and the pixel-deletion criteria needed to preserve the connectivity of the image pattern is given first. Thinning algorithms are then considered in terms of these criteria and their modes of operation. Nonpixel-based methods that usually produce a center line of the pattern directly in one pass without examining all the individual pixels are discussed. The algorithms are considered in great detail and scope, and the relationships among them are explored. >
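As an illustrative aside (not drawn from any single algorithm in the survey), most of the pixel-deletion criteria discussed above reduce to two quantities computed on the 3x3 neighbourhood of a candidate pixel: the number of object neighbours B(p) and the number of 0-to-1 transitions A(p) around the neighbourhood. A minimal Python sketch, assuming a zero-padded binary NumPy image; the helper names are illustrative:

import numpy as np

def neighbourhood_counts(img, r, c):
    # Neighbours of (r, c) in circular order N, NE, E, SE, S, SW, W, NW.
    # Assumes img is a 0/1 array padded with zeros, so (r, c) is interior.
    n = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
         img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
    B = int(sum(n))                                   # object neighbours
    A = sum(1 for i in range(8)                       # 0 -> 1 transitions
            if n[i] == 0 and n[(i + 1) % 8] == 1)
    return B, A

def looks_deletable(img, r, c):
    # A typical connectivity-preserving test: border pixel, 2..6 object
    # neighbours, and exactly one transition, so deleting the pixel cannot
    # split its local neighbourhood into separate components.
    if img[r, c] != 1:
        return False
    B, A = neighbourhood_counts(img, r, c)
    return 2 <= B <= 6 and A == 1

Iterative algorithms differ mainly in how such a test is combined with directional sub-iterations or templates so that pixels can be removed in parallel without breaking connectivity.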
---
paper_title: Algorithms for Graphics and Image Processing
paper_content:
1: Introduction.- 1.1 Graphics, Image Processing, and Pattern Recognition.- 1.2 Forms of Pictorial Data.- 1.2.1 Class 1: Full Gray Scale and Color Pictures.- 1.2.2 Class 2: Bilevel or "Few Color" pictures.- 1.2.3 Class 3: Continuous Curves and Lines.- 1.2.4 Class 4: Points or Polygons.- 1.3 Pictorial Input.- 1.4 Display Devices.- 1.5 Vector Graphics.- 1.6 Raster Graphics.- 1.7 Common Primitive Graphic Instructions.- 1.8 Comparison of Vector and Raster Graphics.- 1.9 Pictorial Editor.- 1.10 Pictorial Transformations.- 1.11 Algorithm Notation.- 1.12 A Few Words on Complexity.- 1.13 Bibliographical Notes.- 1.14 Relevant Literature.- 1.15 Problems.- 2: Digitization of Gray Scale Images.- 2.1 Introduction.- 2.2 A Review of Fourier and other Transforms.- 2.3 Sampling.- 2.3.1 One-dimensional Sampling.- 2.3.2 Two-dimensional Sampling.- 2.4 Aliasing.- 2.5 Quantization.- 2.6 Bibliographical Notes.- 2.7 Relevant Literature.- 2.8 Problems.- Appendix 2.A: Fast Fourier Transform.- 3: Processing of Gray Scale Images.- 3.1 Introduction.- 3.2 Histogram and Histogram Equalization.- 3.3 Co-occurrence Matrices.- 3.4 Linear Image Filtering.- 3.5 Nonlinear Image Filtering.- 3.5.1 Directional Filters.- 3.5.2 Two-part Filters.- 3.5.3 Functional Approximation Filters.- 3.6 Bibliographical Notes.- 3.7 Relevant Literature.- 3.8 Problems.- 4: Segmentation.- 4.1 Introduction.- 4.2 Thresholding.- 4.3 Edge Detection.- 4.4 Segmentation by Region Growing.- 4.4.1 Segmentation by Average Brightness Level.- 4.4.2 Other Uniformity Criteria.- 4.5 Bibliographical Notes.- 4.6 Relevant Literature.- 4.7 Problems.- 5: Projections.- 5.1 Introduction.- 5.2 Introduction to Reconstruction Techniques.- 5.3 A Class of Reconstruction Algorithms.- 5.4 Projections for Shape Analysis.- 5.5 Bibliographical Notes.- 5.6 Relevant Literature.- 5.7 Problems.- Appendix 5.A: An Elementary Reconstruction Program.- 6: Data Structures.- 6.1 Introduction.- 6.2 Graph Traversal Algorithms.- 6.3 Paging.- 6.4 Pyramids or Quad Trees.- 6.4.1 Creating a Quad Tree.- 6.4.2 Reconstructing an Image from a Quad Tree.- 6.4.3 Image Compaction with a Quad Tree.- 6.5 Binary Image Trees.- 6.6 Split-and-Merge Algorithms.- 6.7 Line Encodings and the Line Adjacency Graph.- 6.8 Region Encodings and the Region Adjacency Graph.- 6.9 Iconic Representations.- 6.10 Data Structures for Displays.- 6.11 Bibliographical Notes.- 6.12 Relevant Literature.- 6.13 Problems.- Appendix 6.A: Introduction to Graphs.- 7: Bilevel Pictures.- 7.1 Introduction.- 7.2 Sampling and Topology.- 7.3 Elements of Discrete Geometry.- 7.4 A Sampling Theorem for Class 2 Pictures.- 7.5 Contour Tracing.- 7.5.1 Tracing of a Single Contour.- 7.5.2 Traversal of All the Contours of a Region.- 7.6 Curves and Lines on a Discrete Grid.- 7.6.1 When a Set of Pixels is not a Curve.- 7.6.2 When a Set of Pixels is a Curve.- 7.7 Multiple Pixels.- 7.8 An Introduction to Shape Analysis.- 7.9 Bibliographical Notes.- 7.10 Relevant Literature.- 7.11 Problems.- 8: Contour Filling.- 8.1 Introduction.- 8.2 Edge Filling.- 8.3 Contour Filling by Parity Check.- 8.3.1 Proof of Correctness of Algorithm 8.3.- 8.3.2 Implementation of a Parity Check Algorithm.- 8.4 Contour Filling by Connectivity.- 8.4.1 Recursive Connectivity Filling.- 8.4.2 Nonrecursive Connectivity Filling.- 8.4.3 Procedures used for Connectivity Filling.- 8.4.4 Description of the Main Algorithm.- 8.5 Comparisons and Combinations.- 8.6 Bibliographical Notes.- 8.7 Relevant Literature.- 8.8 Problems.- 9: Thinning Algorithms.- 9.1 Introduction.- 9.2 Classical Thinning 
Algorithms.- 9.3 Asynchronous Thinning Algorithms.- 9.4 Implementation of an Asynchronous Thinning Algorithm.- 9.5 A Quick Thinning Algorithm.- 9.6 Structural Shape Analysis.- 9.7 Transformation of Bilevel Images into Line Drawings.- 9.8 Bibliographical Notes.- 9.9 Relevant Literature.- 9.10 Problems.- 10: Curve Fitting and Curve Displaying.- 10.1 Introduction.- 10.2 Polynomial Interpolation.- 10.3 Bezier Polynomials.- 10.4 Computation of Bezier Polynomials.- 10.5 Some Properties of Bezier Polynomials.- 10.6 Circular Arcs.- 10.7 Display of Lines and Curves.- 10.7.1 Display of Curves through Differential Equations.- 10.7.2 Effect of Round-off Errors in Displays.- 10.8 A Point Editor.- 10.8.1 A Data Structure for a Point Editor.- 10.8.2 Input and Output for a Point Editor.- 10.9 Bibliographical Notes.- 10.10 Relevant Literature.- 10.11 Problems.- 11: Curve Fitting with Splines.- 11.1 Introduction.- 11.2 Fundamental Definitions.- 11.3 B-Splines.- 11.4 Computation with B-Splines.- 11.5 Interpolating B-Splines.- 11.6 B-Splines in Graphics.- 11.7 Shape Description and B-splines.- 11.8 Bibliographical Notes.- 11.9 Relevant Literature.- 11.10 Problems.- 12: Approximation of Curves.- 12.1 Introduction.- 12.2 Integral Square Error Approximation.- 12.3 Approximation Using B-Splines.- 12.4 Approximation by Splines with Variable Breakpoints.- 12.5 Polygonal Approximations.- 12.5.1 A Suboptimal Line Fitting Algorithm.- 12.5.2 A Simple Polygon Fitting Algorithm.- 12.5.3 Properties of Algorithm 12.2.- 12.6 Applications of Curve Approximation in Graphics.- 12.6.1 Handling of Groups of Points by a Point Editor.- 12.6.2 Finding Some Simple Approximating Curves.- 12.7 Bibliographical Notes.- 12.8 Relevant Literature.- 12.9 Problems.- 13: Surface Fitting and Surface Displaying.- 13.1 Introduction.- 13.2 Some Simple Properties of Surfaces.- 13.3 Singular Points of a Surface.- 13.4 Linear and Bilinear Interpolating Surface Patches.- 13.5 Lofted Surfaces.- 13.6 Coons Surfaces.- 13.7 Guided Surfaces.- 13.7.1 Bezier Surfaces.- 13.7.2 B-Spline Surfaces.- 13.8 The Choice of a Surface Partition.- 13.9 Display of Surfaces and Shading.- 13.10 Bibliographical Notes.- 13.11 Relevant Literature.- 13.12 Problems.- 14: The Mathematics of Two-Dimensional Graphics.- 14.1 Introduction.- 14.2 Two-Dimensional Transformations.- 14.3 Homogeneous Coordinates.- 14.3.1 Equation of a Line Defined by Two Points.- 14.3.2 Coordinates of a Point Defined as the Intersection of Two Lines.- 14.3.3 Duality.- 14.4 Line Segment Problems.- 14.4.1 Position of a Point with respect to a Line.- 14.4.2 Intersection of Line Segments.- 14.4.3 Position of a Point with respect to a Polygon.- 14.4.4 Segment Shadow.- 14.5 Bibliographical Notes.- 14.6 Relevant Literature.- 14.7 Problems.- 15: Polygon Clipping.- 15.1 Introduction.- 15.2 Clipping a Line Segment by a Convex Polygon.- 15.3 Clipping a Line Segment by a Regular Rectangle.- 15.4 Clipping an Arbitrary Polygon by a Line.- 15.5 Intersection of Two Polygons.- 15.6 Efficient Polygon Intersection.- 15.7 Bibliographical Notes.- 15.8 Relevant Literature.- 15.9 Problems.- 16: The Mathematics of Three-Dimensional Graphics.- 16.1 Introduction.- 16.2 Homogeneous Coordinates.- 16.2.1 Position of a Point with respect to a Plane.- 16.2.2 Intersection of Triangles.- 16.3 Three-Dimensional Transformations.- 16.3.1 Mathematical Preliminaries.- 16.3.2 Rotation around an Axis through the Origin.- 16.4 Orthogonal Projections.- 16.5 Perspective Projections.- 16.6 Bibliographical Notes.- 16.7 Relevant Literature.- 16.8 
Problems.- 17: Creating Three-Dimensional Graphic Displays.- 17.1 Introduction.- 17.2 The Hidden Line and Hidden Surface Problems.- 17.2.1 Surface Shadow.- 17.2.2 Approaches to the Visibility Problem.- 17.2.3 Single Convex Object Visibility.- 17.3 A Quad Tree Visibility Algorithm.- 17.4 A Raster Line Scan Visibility Algorithm.- 17.5 Coherence.- 17.6 Nonlinear Object Descriptions.- 17.7 Making a Natural Looking Display.- 17.8 Bibliographical Notes.- 17.9 Relevant Literature.- 17.10 Problems.- Author Index.- Algorithm Index.
---
paper_title: The use of hidden deletable pixel detection to obtain bias-reduced skeletons in parallel thinning
paper_content:
Biased skeletons yielded by thinning, which usually degrade the preservation of significant geometric features of patterns, are addressed in this paper. Building on the considerable performance of our pseudo 1-subcycle parallel thinning algorithm, we adopt it to develop an intermediate vector analysis of the pixels removed in each normal iteration, in order to obtain bias-reduced skeletons. The hidden deletable pixel, which is a major cause of biased skeletons, is newly defined and can be detected by the presented vector-analysis algorithms.
---
paper_title: A systematic approach for designing 2-subcycle and pseudo 1-subcycle parallel thinning algorithms
paper_content:
Abstract This paper describes a systematic approach for designing parallel thinning algorithms, in which three new functions, named local connecting, extended local connecting and erosive direction number, are introduced. With these functions as well as two properties of shape invariance of local edges and local straight lines, all the possible cases of 2-subcycle parallel thinning algorithm are constructed and all the corresponding removing conditions are generated and assigned automatically. In addition, the pseudo 1-subcycle parallel thinning algorithm is also presented. Finally, the effects and efficiency of the above proposed algorithms are analyzed and compared with those of some presently well-known algorithms. Experimental results confirm this new approach, and an efficient and effective algorithm has been built for practical applications.
---
paper_title: Thinning methodologies−a comprehensive survey
paper_content:
A comprehensive survey of thinning methodologies is presented. A wide range of thinning algorithms, including iterative deletion of pixels and nonpixel-based methods, is covered. Skeletonization algorithms based on medial axis and other distance transforms are not considered. An overview of the iterative thinning process and the pixel-deletion criteria needed to preserve the connectivity of the image pattern is given first. Thinning algorithms are then considered in terms of these criteria and their modes of operation. Nonpixel-based methods that usually produce a center line of the pattern directly in one pass without examining all the individual pixels are discussed. The algorithms are considered in great detail and scope, and the relationships among them are explored. >
---
paper_title: Fast fully parallel thinning algorithms
paper_content:
Abstract Three new fast fully parallel 2-D thinning algorithms using reduction operators with 11-pixel supports are presented and evaluated. These are compared to earlier fully parallel thinning algorithms in tests on artificial and natural images; the new algorithms produce either superior parallel computation time (number of parallel iterations) or thinner medial curve results with comparable parallel computation time. Further, estimates of the best possible parallel computation time are developed which are applied to the specific test sets used. The parallel computation times of the new algorithms and one earlier algorithm are shown to approach closely or surpass these estimates and are in this sense near optimally fast.
---
paper_title: A rotation invariant rule-based thinning algorithm for character recognition
paper_content:
This paper presents a novel rule-based system for thinning. The unique feature that distinguishes our thinning system is that it thins symbols to their central lines. This means that the shape of the symbol is preserved. It also means that the method is rotation invariant. The system has 20 rules in its inference engine. These rules are applied simultaneously to each pixel in the image. Therefore, the system has the advantages of symmetrical thinning and speed. The results show that the system is very efficient in preserving the topology of symbols and letters written in any language.
---
paper_title: Thinning Algorithms for Gray-Scale Pictures
paper_content:
Elongated black objects in black-and-white pictures can be ``thinned'' to arcs and curves, without changing their connectedness, by (repeatedly) deleting black border points whose deletion does not locally disconnect the black points in their neighborhoods. This technique generalizes to gray-scale pictures if we use a weighted definition of connectedness: two points are ``connected'' if there is a path joining them on which no point is lighter than either of them. We can then ``thin'' dark objects by changing each point's gray level to the minimum of its neighbors' gray levels, provided this does not disconnect any pair of points in its neighborhood. Examples illustrating the performance of this technique are given.
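The basic local operation described here, replacing each pixel by the minimum grey level among its eight neighbours, is a grey-scale erosion. The fragment below is only a sketch (SciPy assumed, helper name illustrative); the weighted-connectedness check that turns this operation into a genuine thinning is deliberately omitted.

import numpy as np
from scipy.ndimage import grey_erosion

# Footprint covering the 8 neighbours but not the centre pixel.
NEIGHBOURS = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=bool)

def lower_to_neighbour_minimum(gray):
    # One unguarded pass: each pixel takes the minimum grey level of its
    # neighbours, which shrinks high-valued (dense) elongated regions.
    # A faithful implementation must additionally verify that the change
    # does not disconnect any pair of points in the 3x3 neighbourhood.
    return grey_erosion(gray, footprint=NEIGHBOURS)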
---
paper_title: Algorithms for Graphics and Image Processing
paper_content:
1: Introduction.- 1.1 Graphics, Image Processing, and Pattern Recognition.- 1.2 Forms of Pictorial Data.- 1.2.1 Class 1: Full Gray Scale and Color Pictures.- 1.2.2 Class 2: Bilevel or "Few Color" pictures.- 1.2.3 Class 3: Continuous Curves and Lines.- 1.2.4 Class 4: Points or Polygons.- 1.3 Pictorial Input.- 1.4 Display Devices.- 1.5 Vector Graphics.- 1.6 Raster Graphics.- 1.7 Common Primitive Graphic Instructions.- 1.8 Comparison of Vector and Raster Graphics.- 1.9 Pictorial Editor.- 1.10 Pictorial Transformations.- 1.11 Algorithm Notation.- 1.12 A Few Words on Complexity.- 1.13 Bibliographical Notes.- 1.14 Relevant Literature.- 1.15 Problems.- 2: Digitization of Gray Scale Images.- 2.1 Introduction.- 2.2 A Review of Fourier and other Transforms.- 2.3 Sampling.- 2.3.1 One-dimensional Sampling.- 2.3.2 Two-dimensional Sampling.- 2.4 Aliasing.- 2.5 Quantization.- 2.6 Bibliographical Notes.- 2.7 Relevant Literature.- 2.8 Problems.- Appendix 2.A: Fast Fourier Transform.- 3: Processing of Gray Scale Images.- 3.1 Introduction.- 3.2 Histogram and Histogram Equalization.- 3.3 Co-occurrence Matrices.- 3.4 Linear Image Filtering.- 3.5 Nonlinear Image Filtering.- 3.5.1 Directional Filters.- 3.5.2 Two-part Filters.- 3.5.3 Functional Approximation Filters.- 3.6 Bibliographical Notes.- 3.7 Relevant Literature.- 3.8 Problems.- 4: Segmentation.- 4.1 Introduction.- 4.2 Thresholding.- 4.3 Edge Detection.- 4.4 Segmentation by Region Growing.- 4.4.1 Segmentation by Average Brightness Level.- 4.4.2 Other Uniformity Criteria.- 4.5 Bibliographical Notes.- 4.6 Relevant Literature.- 4.7 Problems.- 5: Projections.- 5.1 Introduction.- 5.2 Introduction to Reconstruction Techniques.- 5.3 A Class of Reconstruction Algorithms.- 5.4 Projections for Shape Analysis.- 5.5 Bibliographical Notes.- 5.6 Relevant Literature.- 5.7 Problems.- Appendix 5.A: An Elementary Reconstruction Program.- 6: Data Structures.- 6.1 Introduction.- 6.2 Graph Traversal Algorithms.- 6.3 Paging.- 6.4 Pyramids or Quad Trees.- 6.4.1 Creating a Quad Tree.- 6.4.2 Reconstructing an Image from a Quad Tree.- 6.4.3 Image Compaction with a Quad Tree.- 6.5 Binary Image Trees.- 6.6 Split-and-Merge Algorithms.- 6.7 Line Encodings and the Line Adjacency Graph.- 6.8 Region Encodings and the Region Adjacency Graph.- 6.9 Iconic Representations.- 6.10 Data Structures for Displays.- 6.11 Bibliographical Notes.- 6.12 Relevant Literature.- 6.13 Problems.- Appendix 6.A: Introduction to Graphs.- 7: Bilevel Pictures.- 7.1 Introduction.- 7.2 Sampling and Topology.- 7.3 Elements of Discrete Geometry.- 7.4 A Sampling Theorem for Class 2 Pictures.- 7.5 Contour Tracing.- 7.5.1 Tracing of a Single Contour.- 7.5.2 Traversal of All the Contours of a Region.- 7.6 Curves and Lines on a Discrete Grid.- 7.6.1 When a Set of Pixels is not a Curve.- 7.6.2 When a Set of Pixels is a Curve.- 7.7 Multiple Pixels.- 7.8 An Introduction to Shape Analysis.- 7.9 Bibliographical Notes.- 7.10 Relevant Literature.- 7.11 Problems.- 8: Contour Filling.- 8.1 Introduction.- 8.2 Edge Filling.- 8.3 Contour Filling by Parity Check.- 8.3.1 Proof of Correctness of Algorithm 8.3.- 8.3.2 Implementation of a Parity Check Algorithm.- 8.4 Contour Filling by Connectivity.- 8.4.1 Recursive Connectivity Filling.- 8.4.2 Nonrecursive Connectivity Filling.- 8.4.3 Procedures used for Connectivity Filling.- 8.4.4 Description of the Main Algorithm.- 8.5 Comparisons and Combinations.- 8.6 Bibliographical Notes.- 8.7 Relevant Literature.- 8.8 Problems.- 9: Thinning Algorithms.- 9.1 Introduction.- 9.2 Classical Thinning 
Algorithms.- 9.3 Asynchronous Thinning Algorithms.- 9.4 Implementation of an Asynchronous Thinning Algorithm.- 9.5 A Quick Thinning Algorithm.- 9.6 Structural Shape Analysis.- 9.7 Transformation of Bilevel Images into Line Drawings.- 9.8 Bibliographical Notes.- 9.9 Relevant Literature.- 9.10 Problems.- 10: Curve Fitting and Curve Displaying.- 10.1 Introduction.- 10.2 Polynomial Interpolation.- 10.3 Bezier Polynomials.- 10.4 Computation of Bezier Polynomials.- 10.5 Some Properties of Bezier Polynomials.- 10.6 Circular Arcs.- 10.7 Display of Lines and Curves.- 10.7.1 Display of Curves through Differential Equations.- 10.7.2 Effect of Round-off Errors in Displays.- 10.8 A Point Editor.- 10.8.1 A Data Structure for a Point Editor.- 10.8.2 Input and Output for a Point Editor.- 10.9 Bibliographical Notes.- 10.10 Relevant Literature.- 10.11 Problems.- 11: Curve Fitting with Splines.- 11.1 Introduction.- 11.2 Fundamental Definitions.- 11.3 B-Splines.- 11.4 Computation with B-Splines.- 11.5 Interpolating B-Splines.- 11.6 B-Splines in Graphics.- 11.7 Shape Description and B-splines.- 11.8 Bibliographical Notes.- 11.9 Relevant Literature.- 11.10 Problems.- 12: Approximation of Curves.- 12.1 Introduction.- 12.2 Integral Square Error Approximation.- 12.3 Approximation Using B-Splines.- 12.4 Approximation by Splines with Variable Breakpoints.- 12.5 Polygonal Approximations.- 12.5.1 A Suboptimal Line Fitting Algorithm.- 12.5.2 A Simple Polygon Fitting Algorithm.- 12.5.3 Properties of Algorithm 12.2.- 12.6 Applications of Curve Approximation in Graphics.- 12.6.1 Handling of Groups of Points by a Point Editor.- 12.6.2 Finding Some Simple Approximating Curves.- 12.7 Bibliographical Notes.- 12.8 Relevant Literature.- 12.9 Problems.- 13: Surface Fitting and Surface Displaying.- 13.1 Introduction.- 13.2 Some Simple Properties of Surfaces.- 13.3 Singular Points of a Surface.- 13.4 Linear and Bilinear Interpolating Surface Patches.- 13.5 Lofted Surfaces.- 13.6 Coons Surfaces.- 13.7 Guided Surfaces.- 13.7.1 Bezier Surfaces.- 13.7.2 B-Spline Surfaces.- 13.8 The Choice of a Surface Partition.- 13.9 Display of Surfaces and Shading.- 13.10 Bibliographical Notes.- 13.11 Relevant Literature.- 13.12 Problems.- 14: The Mathematics of Two-Dimensional Graphics.- 14.1 Introduction.- 14.2 Two-Dimensional Transformations.- 14.3 Homogeneous Coordinates.- 14.3.1 Equation of a Line Defined by Two Points.- 14.3.2 Coordinates of a Point Defined as the Intersection of Two Lines.- 14.3.3 Duality.- 14.4 Line Segment Problems.- 14.4.1 Position of a Point with respect to a Line.- 14.4.2 Intersection of Line Segments.- 14.4.3 Position of a Point with respect to a Polygon.- 14.4.4 Segment Shadow.- 14.5 Bibliographical Notes.- 14.6 Relevant Literature.- 14.7 Problems.- 15: Polygon Clipping.- 15.1 Introduction.- 15.2 Clipping a Line Segment by a Convex Polygon.- 15.3 Clipping a Line Segment by a Regular Rectangle.- 15.4 Clipping an Arbitrary Polygon by a Line.- 15.5 Intersection of Two Polygons.- 15.6 Efficient Polygon Intersection.- 15.7 Bibliographical Notes.- 15.8 Relevant Literature.- 15.9 Problems.- 16: The Mathematics of Three-Dimensional Graphics.- 16.1 Introduction.- 16.2 Homogeneous Coordinates.- 16.2.1 Position of a Point with respect to a Plane.- 16.2.2 Intersection of Triangles.- 16.3 Three-Dimensional Transformations.- 16.3.1 Mathematical Preliminaries.- 16.3.2 Rotation around an Axis through the Origin.- 16.4 Orthogonal Projections.- 16.5 Perspective Projections.- 16.6 Bibliographical Notes.- 16.7 Relevant Literature.- 16.8 
Problems.- 17: Creating Three-Dimensional Graphic Displays.- 17.1 Introduction.- 17.2 The Hidden Line and Hidden Surface Problems.- 17.2.1 Surface Shadow.- 17.2.2 Approaches to the Visibility Problem.- 17.2.3 Single Convex Object Visibility.- 17.3 A Quad Tree Visibility Algorithm.- 17.4 A Raster Line Scan Visibility Algorithm.- 17.5 Coherence.- 17.6 Nonlinear Object Descriptions.- 17.7 Making a Natural Looking Display.- 17.8 Bibliographical Notes.- 17.9 Relevant Literature.- 17.10 Problems.- Author Index.- Algorithm Index.
---
paper_title: Fast thinning algorithm for binary images
paper_content:
Abstract A fast thinning algorithm is proposed which achieves its increase in speed by applying any existing thinning algorithm to a greatly reduced amount of image information. The procedure compacts the image, applies an optimal thresholding routine, thins the result, and then expands the skeleton to its original scale. Results of testing the algorithm on a number of images are shown.
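The same reduce-thin-expand idea can be prototyped with off-the-shelf routines; the sketch below is only a loose analogue of the procedure in the abstract (scikit-image is assumed, and the function name and the 0.5 reduction factor are illustrative choices, not the paper's).

import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize
from skimage.transform import rescale, resize

def fast_thin(gray, factor=0.5):
    # 1) compact the image, 2) threshold, 3) thin at reduced resolution,
    # 4) expand back to the original scale.  A final thinning pass restores
    # unit width, which plain nearest-neighbour expansion does not preserve.
    small = rescale(gray, factor, anti_aliasing=True)
    binary_small = small > threshold_otsu(small)
    skeleton_small = skeletonize(binary_small)
    expanded = resize(skeleton_small.astype(float), gray.shape, order=0) > 0
    return skeletonize(expanded)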
---
paper_title: Colour image skeletonisation
paper_content:
In this paper a new morphological technique suitable for colour image skeleton extraction is presented. Vector morphological operations are defined by means of a new ordering of vectors of the HSV colour space, which is a combination of conditional and partial sub-ordering. Then, these are used to extract skeletons of colour images in terms of erosions and openings. The proposed method was tested with a variety of images and such experimental results are provided. Its applications include image compression and recognition problems.
---
paper_title: Computing a family of skeletons of volumetric models for shape description
paper_content:
Skeletons are important shape descriptors in object representation and recognition. Typically, skeletons of volumetric models are computed via an iterative thinning process. However, traditional thinning methods often generate skeletons with complex structures that are unsuitable for shape description, and appropriate pruning methods are lacking. In this paper, we present a new method for computing skeletons on volumes by alternating thinning and a novel skeleton pruning routine. Our method creates a family of skeletons parameterized by two user-specified numbers that determine respectively the size of curve and surface features on the skeleton. As demonstrated on both real-world models and medical images, our method generates skeletons with simple and meaningful structures that are particularly suitable for describing cylindrical and plate-like shapes.
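Pruning is what turns a raw thinning result into a usable shape descriptor. As a hedged two-dimensional analogue of that idea (the paper itself works on volumes and uses a more principled, parameterized criterion), the sketch below repeatedly deletes end points of a one-pixel-wide skeleton, which removes branches shorter than a chosen length and shortens surviving branches by the same amount.

import numpy as np
from scipy.ndimage import convolve

EIGHT = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]], dtype=np.uint8)

def prune_skeleton(skeleton, length):
    # Iteratively remove end points: skeleton pixels with exactly one
    # 8-connected skeleton neighbour.  `length` iterations delete spurious
    # branches up to `length` pixels long.
    skel = skeleton.astype(np.uint8).copy()
    for _ in range(length):
        neighbour_count = convolve(skel, EIGHT, mode='constant', cval=0)
        endpoints = (skel == 1) & (neighbour_count == 1)
        if not endpoints.any():
            break
        skel[endpoints] = 0
    return skel.astype(bool)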
---
paper_title: A thinning algorithm for discrete binary images
paper_content:
The paper discusses thinning algorithms and introduces a characterization of skeletal pixels in terms of how many arcs of the boundary pass through a pixel. A new algorithm is proposed which proceeds by peeling off successive contours of the set to be thinned while identifying pixels where disjoint parts of the boundary have been mapped. The union of these pixels (plus a few others with similar properties) forms the skeleton. The algorithm can be implemented in such a way as to have time complexity which is a linear function of the area.
---
paper_title: An improved parallel thinning algorithm
paper_content:
This paper describes an improved thinning algorithm for binary images. The algorithm is improved with respect to fundamental properties such as connectivity, one-pixel width, robustness to noise, and speed. In addition, in order to overcome information loss, the contour and the skeleton of the pattern are integrated and a thresholding approach is proposed. The fundamental requirements of thinning and the shape of the pattern are preserved very well. The algorithm is very robust to noise and eliminates some spurious branches. Above all, it can overcome the loss of information in the pattern. Experimental results show the performance of the proposed algorithm.
---
paper_title: The use of hidden deletable pixel detection to obtain bias-reduced skeletons in parallel thinning
paper_content:
Biased skeletons yielded by thinning, which usually degrade the preservation of significant geometric features of patterns, are addressed in this paper. Building on the considerable performance of our pseudo 1-subcycle parallel thinning algorithm, we adopt it to develop an intermediate vector analysis of the pixels removed in each normal iteration, in order to obtain bias-reduced skeletons. The hidden deletable pixel, which is a major cause of biased skeletons, is newly defined and can be detected by the presented vector-analysis algorithms.
---
paper_title: 3D discrete skeleton generation by wave propagation on PR-octree for finite element mesh sizing
paper_content:
This paper proposes a new algorithm to generate a disconnected, three-dimensional (3D) skeleton and an application of such a skeleton to generate a finite element (FE) mesh sizing function of a solid. The mesh sizing function controls the element size and the gradient, and it is crucial in generating a desired FE mesh. Here, a geometry-based mesh sizing function is generated using a skeleton. A discrete skeleton is generated by propagating a wave from the boundary towards the interior on an octree lattice of an input solid model. As the wave propagates, the distance from the boundary and direction of the wave front are calculated at the lattice-nodes (vertices) of the new front. An approximate Euclidean distance metric is used to calculate the distance traveled by the wave. Skeleton points are generated at the region where the opposing fronts meet. The distance at these skeleton points is used to measure both proximity between geometric entities and feature size, and is utilized to generate the mesh size at the lattice-nodes. The proposed octree-based skeleton is more accurate and efficient than traditional voxel-based skeleton and proves to be great tool for mesh sizing function generation.
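A much-simplified stand-in for the distance information that the wave propagation computes: on a voxelisation of the solid, the Euclidean distance to the boundary already approximates half the local thickness, and a crude sizing field can be derived from it. The sketch below assumes SciPy and uses illustrative parameter names; the octree lattice, front directions, and skeleton-point detection of the paper are not reproduced.

import numpy as np
from scipy.ndimage import distance_transform_edt

def mesh_size_field(solid, voxel_size=1.0, factor=2.0,
                    min_size=1.0, max_size=20.0):
    # Distance to the boundary ~ half the local feature size, so a simple
    # sizing rule is size = factor * distance, clamped to sensible bounds.
    distance = distance_transform_edt(solid) * voxel_size
    size = np.clip(factor * distance, min_size, max_size)
    size[~solid.astype(bool)] = 0.0          # defined only inside the solid
    return size

# Example: a 64^3 block with a cylindrical hole along the z axis.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
solid = ((x - 32) ** 2 + (y - 32) ** 2) > 10 ** 2
field = mesh_size_field(solid)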
---
paper_title: Implementation and Advanced Results on the Non-Interrupted Skeletonization Algorithm
paper_content:
This paper is a continuation of the work in [1], in which a new algorithm for skeletonization is introduced. The algorithm presented there and implemented for script and text is applied here to images such as pictures, medical organs, and signatures. This is important for many applications in pattern recognition, for example data compression, transmission, or storage. Some interesting results have been obtained and are presented in this article. Comparing our results with others, we conclude that for the thinning of scripts, words, or sentences our method is as good as some of the latest approaches to cursive script. However, when it comes to pictures, signatures, or other more complicated images, our algorithm showed better and more precise results [6].
---
paper_title: Pattern thinning by contour tracing
paper_content:
Abstract The possibility of applying thinning algorithms to digital figures not already nearly thin is discussed in the framework of using contour tracing to inspect the pixels currently candidates for removal. The proposed algorithm takes advantage of this approach both to reduce the computation time needed to obtain the skeleton and to perform contour analysis in order to find the contour regions to be represented by the skeleton branches. Since different contour descriptions can be found, various skeleton structures can alternatively be generated depending on the requirements of the problem domain. The skeleton found may also be regarded as a discrete version of that obtained by using the medial axis transformation.
---
paper_title: A thinning algorithm for Arabic characters using ART2 neural network
paper_content:
The authors propose a thinning algorithm based on clustering the image data. They employ the ART2 network which is a self-organizing neural network for the clustering of Arabic characters. The skeleton is generated by plotting the cluster centers and connecting adjacent clusters by straight lines. This algorithm produces skeletons which are superior to the outputs of the conventional algorithms. It achieves a higher data-reduction efficiency and much simpler skeletons with less noise spurs. Moreover, to make the algorithm appropriate for real-time applications, an optimization technique is developed to reduce the time complexity of the algorithm. The developed algorithm is not limited to Arabic characters, and it can also be used to skeletonize characters of other languages.
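The clustering-based view of skeletonization can be prototyped without an ART2 network. The hedged sketch below substitutes k-means (scikit-learn) for ART2 and joins cluster centres with a minimum spanning tree rather than the adjacency rule of the paper, so it illustrates only the general approach; the function name and cluster count are arbitrary.

import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def cluster_skeleton(binary_img, n_clusters=30, random_state=0):
    # Approximate the skeleton as a polyline graph: cluster the object
    # pixels, then connect the cluster centres by a minimum spanning tree.
    pts = np.column_stack(np.nonzero(binary_img)).astype(float)  # (row, col)
    km = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=random_state).fit(pts)
    centres = km.cluster_centers_
    # Zero-distance pairs (coincident centres) would be treated as missing
    # edges by the sparse-graph routine; generically this does not occur.
    mst = minimum_spanning_tree(distance_matrix(centres, centres))
    edges = list(zip(*mst.nonzero()))        # pairs of centre indices
    return centres, edges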
---
paper_title: A parallel thinning algorithm with two-subiteration that generates one-pixel-wide skeletons
paper_content:
Many algorithms for vectorization by thinning have been devised and applied to a great variety of pictures and drawings for data compression, pattern recognition and raster-to-vector conversion. But parallel thinning algorithms which generate one-pixel-wide skeletons can have difficulty preserving the connectivity of an image. In this paper, we propose a 2-subiteration parallel thinning algorithm with template matching (PTA2T) which preserves image connectivity, produces thinner results, maintains very fast speed and generates one-pixel-wide skeletons.
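For orientation, a classic member of the same two-subiteration family, the well-known Zhang-Suen procedure rather than the PTA2T templates of this paper, can be written compactly; each subiteration flags deletable border pixels over the whole image and then removes them all at once (written for clarity, not speed).

import numpy as np

def zhang_suen_thin(image):
    # Binary 0/1 input; returns a connected, roughly one-pixel-wide skeleton.
    img = np.pad(image.astype(np.uint8), 1)           # zero border
    def P(r, c):                                       # neighbours P2..P9
        return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r, c in zip(*np.nonzero(img)):
                n = P(r, c)
                B = sum(n)
                A = sum(1 for i in range(8) if n[i] == 0 and n[(i+1) % 8] == 1)
                if step == 0:   # removes south-east border / north-west corners
                    ok = n[0]*n[2]*n[4] == 0 and n[2]*n[4]*n[6] == 0
                else:           # removes north-west border / south-east corners
                    ok = n[0]*n[2]*n[6] == 0 and n[0]*n[4]*n[6] == 0
                if 2 <= B <= 6 and A == 1 and ok:
                    to_delete.append((r, c))
            for r, c in to_delete:
                img[r, c] = 0
            changed = changed or bool(to_delete)
    return img[1:-1, 1:-1].astype(bool)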
---
paper_title: A distance map based skeletonization algorithm and its application in fiber recognition
paper_content:
The conversion of two-dimensional objects into a skeletal representation is a fundamental calculation in image processing and pattern recognition, because the topological structure of an object can be preserved in its skeleton. For the purpose of shaped fiber recognition, the number of a shaped fiber's branches needs to be calculated. In this paper, a skeletonization algorithm is proposed in which the fiber's topology information is well preserved. The algorithm also shows good noise tolerance. At the end of this paper, experimental results of shaped fiber recognition based on the proposed algorithm are given.
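A hedged sketch of the same general pipeline (not the algorithm of this paper): compute a skeleton together with its distance map, then derive simple topological statistics, branch points and end points, from the 8-neighbour counts of skeleton pixels, which is the kind of information used to classify shaped fibres. scikit-image and SciPy are assumed, and the helper name is illustrative.

import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import medial_axis

EIGHT = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]], dtype=np.uint8)

def fibre_skeleton_stats(binary_fibre):
    # Skeleton plus distance-to-boundary map in one call.
    skeleton, distance = medial_axis(binary_fibre, return_distance=True)
    neighbour_count = convolve(skeleton.astype(np.uint8), EIGHT,
                               mode='constant', cval=0)
    # Junction pixels have 3+ skeleton neighbours, tips exactly one.
    # Counting junction pixels slightly overestimates the number of
    # distinct branches, since one junction may span several pixels.
    n_branch_pixels = int((skeleton & (neighbour_count >= 3)).sum())
    n_end_points = int((skeleton & (neighbour_count == 1)).sum())
    return n_branch_pixels, n_end_points, skeleton, distance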
---
paper_title: An asynchronous thinning algorithm
paper_content:
Abstract A problem common to many areas of pictorial information processing is the transformation of a bilevel (two-color) image into a line drawing. The first step in such a process is discussed: transformation of the bilevel image into another bilevel image that is “thin.” An algorithm is proposed that can be implemented in either parallel or sequential fashion, and therefore is suitable for a mixed operation where a group of processors operate in parallel with each one examining sequentially the pixels of a part of the image assigned to it. It is possible to process images in pieces and thin correctly parts that are intersected by the dividing lines. Therefore, the method can be used on large images, such as maps and engineering drawings. The algorithm may also be run with certain options that label the thinned image so that exact reconstruction of the original is possible.
---
paper_title: A theory for multiresolution signal decomposition: the wavelet representation
paper_content:
Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2/sup j+1/ and 2/sup j/ (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L/sup 2/(R/sup n/), the vector space of measurable, square-integrable n-dimensional functions. In L/sup 2/(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function psi (x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. Wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed. >
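One level of the dyadic decomposition described above can be written out directly for the Haar wavelet: the image splits into a half-resolution approximation plus three orientation-selective detail subbands that carry exactly the information lost between resolutions 2^(j+1) and 2^j. A plain NumPy sketch in averaging/differencing form, which differs from the orthonormal convention by a factor of sqrt(2):

import numpy as np

def haar_dwt2(image):
    # Returns (approx, (detail_h, detail_v, detail_d)) for one level.
    x = image.astype(float)
    x = x[: x.shape[0] // 2 * 2, : x.shape[1] // 2 * 2]   # force even size
    a, b = x[0::2, :], x[1::2, :]                          # row pairs
    low_r, high_r = (a + b) / 2.0, (a - b) / 2.0
    def split_cols(m):
        c, d = m[:, 0::2], m[:, 1::2]                      # column pairs
        return (c + d) / 2.0, (c - d) / 2.0
    approx, detail_h = split_cols(low_r)
    detail_v, detail_d = split_cols(high_r)
    return approx, (detail_h, detail_v, detail_d)

# Recursing on `approx` produces the pyramid used for multiresolution analysis.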
---
paper_title: An improved rotation-invariant thinning algorithm
paper_content:
Ahmed and Ward [Sept. 1995] have recently presented an elegant, rule-based rotation-invariant thinning algorithm to produce a single-pixel wide skeleton from a binary image. We show examples where this algorithm fails on two-pixel wide lines and propose a modified method which corrects this shortcoming based on graph connectivity.
---
paper_title: A systematic approach for designing 2-subcycle and pseudo 1-subcycle parallel thinning algorithms
paper_content:
Abstract This paper describes a systematic approach for designing parallel thinning algorithms, in which three new functions, named local connecting, extended local connecting and erosive direction number, are introduced. With these functions as well as two properties of shape invariance of local edges and local straight lines, all the possible cases of 2-subcycle parallel thinning algorithm are constructed and all the corresponding removing conditions are generated and assigned automatically. In addition, the pseudo 1-subcycle parallel thinning algorithm is also presented. Finally, the effects and efficiency of the above proposed algorithms are analyzed and compared with those of some presently well-known algorithms. Experimental results confirm this new approach, and an efficient and effective algorithm has been built for practical applications.
---
paper_title: Line thinning by line following
paper_content:
Abstract A line following algorithm is presented as a means to perform line thinning on elongated figures. The method is faster than conventional thinning algorithms and not as sensitive to noise. It can be useful for other applications also.
---
paper_title: Document Examiner Feature Extraction: Thinned vs. Skeletonised Handwriting Images
paper_content:
This paper describes two approaches to the approximation of handwriting strokes for use in writer identification. One approach is based on a thinning method and produces a raster skeleton, whereas the other approximates handwriting strokes by cubic splines and produces a vector skeleton. The vector skeletonisation method is designed to preserve the individual features that can distinguish one writer from another. Extraction of structural character-level features of handwriting is performed using both skeletonisation methods and the results are compared. Use of the vector skeletonisation method resulted in a lower error rate during the feature extraction stage. It also made it possible to extract more structural features and improved the accuracy of writer identification from 78% to 98% in an experiment with 100 samples of the grapheme "th" collected from 20 writers.
---
paper_title: Fast fully parallel thinning algorithms
paper_content:
Abstract Three new fast fully parallel 2-D thinning algorithms using reduction operators with 11-pixel supports are presented and evaluated. These are compared to earlier fully parallel thinning algorithms in tests on artificial and natural images; the new algorithms produce either superior parallel computation time (number of parallel iterations) or thinner medial curve results with comparable parallel computation time. Further, estimates of the best possible parallel computation time are developed which are applied to the specific test sets used. The parallel computation times of the new algorithms and one earlier algorithm are shown to approach closely or surpass these estimates and are in this sense near optimally fast.
---
paper_title: Wavelet-Based Approach to Character Skeleton
paper_content:
Character skeletons play a significant role in character recognition. The strokes of a character may consist of two kinds of regions, i.e., singular and regular regions. The intersections and junctions of the strokes belong to the singular region, while the straight and smooth parts of the strokes are categorized as the regular region. Therefore, a skeletonization method requires two different processes to treat the skeletons in these two different regions. All traditional skeletonization algorithms are based on the symmetry analysis technique. The major problems of these methods are as follows. 1) The computation of the primary skeleton in the regular region is indirect, so that its implementation is sophisticated and costly. 2) The extracted skeleton cannot be exactly located on the central line of the stroke. 3) The captured skeleton in the singular region may be distorted by artifacts and branches. To overcome these problems, a novel scheme for extracting the skeleton of a character based on the wavelet transform is presented in this paper. This scheme consists of two main steps, namely: a) extraction of the primary skeleton in the regular region and b) amendment processing of the primary skeletons and their connection in the singular region. A direct technique is used in the first step, where a new wavelet-based symmetry analysis is developed for finding the central line of the stroke directly. A novel method called smooth interpolation is designed in the second step, where a smoothing operation is applied to the primary skeleton and, thereafter, an interpolation compensation technique is proposed to link the primary skeletons, so that the skeleton in the singular region can be produced. Experiments are conducted and positive results are achieved, which show that the proposed skeletonization scheme is applicable not only to binary images but also to gray-level images, and that the skeleton is robust against noise and affine transforms.
---
paper_title: A thinning algorithm based on prominence detection
paper_content:
Abstract Binary pictures are considered, and an algorithm is illustrated which transforms a digital figure into a set of 8-simple digital arcs and curves by employing local sequential operations. The adopted procedure removes suitable contour elements of the given figure and is repeated on every current set of 1-elements until the final thinned figure is obtained. The notion of local elongation is employed to find, at every step of the process, those contour regions which can be regarded as significant protrusions. Such regions are then detected before applying the removal operations, hence contributing to the isotropic behavior of the proposed thinning transformation. Further refinements of the algorithm are also discussed.
---
paper_title: Parallel syntactic thinning by recoding of binary pictures
paper_content:
Abstract By recoding the pixels of objects in binary images with three bits, according to some measure of distance to the background, parallel thinning is achieved using syntactic rules. The codes in a 3 × 2 window are searched for interrelations which determine the new code of the central pixel. The local structure of the codes is determined through a binary cascade of tables; to each structure corresponds a rewriting rule, and the new code is the output of the cascade. A two-scan postprocessing step is needed to obtain a skeleton with a linear structure.
---
paper_title: A contour characterization for multiply connected figures
paper_content:
Abstract The characterization of a digital figure in terms of the multiple pixels, i.e., the pixels placed where the contour selfinteracts, can provide useful shape cues. These pixels can satisfactorily be detected if a suitable definition of contour simplicity is available. Such a definition is given in this paper, for the case of multiply connected figures. The multiple pixels are those where the contour fails to be simple, and they can be identified by using 3 × 3 local operations.
---
paper_title: A thinning algorithm for Arabic characters using ART2 neural network
paper_content:
The authors propose a thinning algorithm based on clustering the image data. They employ the ART2 network which is a self-organizing neural network for the clustering of Arabic characters. The skeleton is generated by plotting the cluster centers and connecting adjacent clusters by straight lines. This algorithm produces skeletons which are superior to the outputs of the conventional algorithms. It achieves a higher data-reduction efficiency and much simpler skeletons with less noise spurs. Moreover, to make the algorithm appropriate for real-time applications, an optimization technique is developed to reduce the time complexity of the algorithm. The developed algorithm is not limited to Arabic characters, and it can also be used to skeletonize characters of other languages.
---
paper_title: A parallel thinning algorithm with two-subiteration that generates one-pixel-wide skeletons
paper_content:
Many algorithms for vectorization by thinning have been devised and applied to a great variety of pictures and drawings for data compression, pattern recognition and raster-to-vector conversion. But parallel thinning algorithms which generate one-pixel-wide skeletons can have difficulty preserving the connectivity of an image. In this paper, we propose a 2-subiteration parallel thinning algorithm with template matching (PTA2T) which preserves image connectivity, produces thinner results, maintains very fast speed and generates one-pixel-wide skeletons.
---
paper_title: A systematic approach for designing 2-subcycle and pseudo 1-subcycle parallel thinning algorithms
paper_content:
Abstract This paper describes a systematic approach for designing parallel thinning algorithms, in which three new functions, named local connecting, extended local connecting and erosive direction number, are introduced. With these functions as well as two properties of shape invariance of local edges and local straight lines, all the possible cases of 2-subcycle parallel thinning algorithm are constructed and all the corresponding removing conditions are generated and assigned automatically. In addition, the pseudo 1-subcycle parallel thinning algorithm is also presented. Finally, the effects and efficiency of the above proposed algorithms are analyzed and compared with those of some presently well-known algorithms. Experimental results confirm this new approach, and an efficient and effective algorithm has been built for practical applications.
---
paper_title: A 1-subcycle parallel thinning algorithm for producing perfect 8-curves and obtaining isotropic skeleton of an L-shape pattern
paper_content:
The authors review their pseudo-one-subcycle parallel thinning algorithm. They present two improved versions of this algorithm, and describe a two-stage structure to realize the one-subcycle parallel algorithm. The first stage is to produce a perfect 8-curve excluding T-junction thin line. The second is to obtain the isotropic skeleton of an L-shaped pattern. The two stage structure consists of a thinning table and a control unit. The thinning table is used to provide the attributions for an input 3*3 local pattern. The control unit is used to check the removal of the center pixel of this local pattern, and the inputs of the control unit also comprise the outputs of other neighboring thinning tables. This structure can exactly implement the proposed one-subcycle parallel algorithms. The two improved algorithms have been implemented. Experiments confirm that the improved algorithms can produce the desired effective thin line and also show that the structure realized is feasible and practicable. >
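The "thinning table" idea, precompute the decision for every possible 3x3 neighbourhood so that the run-time work per pixel is a single lookup, can be sketched generically. The deletion predicate below (2-6 object neighbours and exactly one 0-to-1 transition) is a common textbook criterion, not the tables of this paper, and the sub-cycle control unit the authors describe is omitted.

import numpy as np

def build_thinning_table():
    # 256 entries: index = 8-bit code of the neighbours in circular order
    # N, NE, E, SE, S, SW, W, NW; value = "centre pixel may be removed".
    table = np.zeros(256, dtype=bool)
    for code in range(256):
        n = [(code >> i) & 1 for i in range(8)]
        B = sum(n)
        A = sum(1 for i in range(8) if n[i] == 0 and n[(i + 1) % 8] == 1)
        table[code] = (2 <= B <= 6) and (A == 1)
    return table

TABLE = build_thinning_table()

def neighbourhood_code(img, r, c):
    # Pack the 8 neighbours of an interior pixel into one table index.
    n = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
         img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
    return sum(int(bit) << i for i, bit in enumerate(n))

# Run-time test for one object pixel: TABLE[neighbourhood_code(img, r, c)]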
---
paper_title: A rotation invariant rule-based thinning algorithm for character recognition
paper_content:
This paper presents a novel rule-based system for thinning. The unique feature that distinguishes our thinning system is that it thins symbols to their central lines. This means that the shape of the symbol is preserved. It also means that the method is rotation invariant. The system has 20 rules in its inference engine. These rules are applied simultaneously to each pixel in the image. Therefore, the system has the advantages of symmetrical thinning and speed. The results show that the system is very efficient in preserving the topology of symbols and letters written in any language.
---
paper_title: A parallel thinning algorithm with two-subiteration that generates one-pixel-wide skeletons
paper_content:
Many algorithms for vectorization by thinning have been devised and applied to a great variety of pictures and drawings for data compression, pattern recognition and raster-to-vector conversion. But parallel thinning algorithms which generate one-pixel-wide skeletons can have difficulty preserving the connectivity of an image. In this paper, we propose a 2-subiteration parallel thinning algorithm with template matching (PTA2T) which preserves image connectivity, produces thinner results, maintains very fast speed and generates one-pixel-wide skeletons.
---
paper_title: Fast fully parallel thinning algorithms
paper_content:
Abstract Three new fast fully parallel 2-D thinning algorithms using reduction operators with 11-pixel supports are presented and evaluated. These are compared to earlier fully parallel thinning algorithms in tests on artificial and natural images; the new algorithms produce either superior parallel computation time (number of parallel iterations) or thinner medial curve results with comparable parallel computation time. Further, estimates of the best possible parallel computation time are developed which are applied to the specific test sets used. The parallel computation times of the new algorithms and one earlier algorithm are shown to approach closely or surpass these estimates and are in this sense near optimally fast.
---
| Title: K3M: A universal algorithm for image skeletonization and a review of thinning techniques
Section 1: Introduction
Description 1: Provide an overview of the importance of thinning in image processing, its history, and the motivation behind the proposed K3M algorithm.
Section 2: State of the art
Description 2: Present a historical review and taxonomy of thinning algorithms, along with descriptions of significant algorithms and their comparative efficiency.
Section 3: K3M: A modified KMM algorithm
Description 3: Detail the K3M algorithm, including its assumptions and definitions, the iterative nature of its phases, and the modifications made to improve upon the KMM algorithm.
Section 4: K3M results and comparison
Description 4: Showcase the thinning results of the K3M algorithm on various inputs such as isolated letters, handwritten words, and graphical symbols, and compare its performance with other known algorithms.
Section 5: Parallelizing sequential algorithms
Description 5: Discuss the challenges and strategies for parallelizing sequential thinning algorithms to utilize modern parallel processing techniques effectively.
Section 6: Conclusions and future work
Description 6: Summarize the findings of the paper, highlighting the advantages of the K3M algorithm, and outline future research directions to enhance the algorithm further. |
Survey of semi-regular multiresolution models for interactive terrain rendering | 10 | ---
paper_title: Multiresolution models for topographic surface description
paper_content:
Multiresolution terrain models describe a topographic surface at various levels of resolution. Besides providing a data compression mechanism for dense topographic data, such models enable us to analyze and visualize surfaces at a variable resolution. This paper provides a critical survey of multiresolution terrain models. Formal definitions of hierarchical and pyramidal models are presented. Multiresolution models proposed in the literature (namely, surface quadtree, restricted quadtree, quaternary triangulation, ternary triangulation, adaptive hierarchical triangulation, hierarchical Delaunay triangulation, and Delaunay pyramid) are described and discussed within such frameworks. Construction algorithms for all such models are given, together with an analysis of their time and space complexities.
---
paper_title: Multiresolution Modeling: Survey and Future Opportunities
paper_content:
For twenty years, it has been clear that many datasets are excessively complex for applications such as real-time display, and that techniques for controlling the level of detail of models are crucial. More recently, there has been considerable interest in techniques for the automatic simplification of highly detailed polygonal models into faithful approximations using fewer polygons. Several effective techniques for the automatic simplification of polygonal models have been developed in recent years. This report begins with a survey of the most notable available algorithms. Iterative edge contraction algorithms are of particular interest because they induce a certain hierarchical structure on the surface. An overview of this hierarchical structure is presented,including a formulation relating it to minimum spanning tree construction algorithms. Finally, we will consider the most significant directions in which existing simplification methods can be improved, and a summary of other potential applications for the hierarchies resulting from simplification.
---
paper_title: Level of Detail for 3D Graphics
paper_content:
From the Publisher:
Level of detail (LOD) techniques are increasingly used by professional real-time developers to strike the balance between breathtaking virtual worlds and smooth, flowing animation. Level of Detail for 3D Graphics brings together, for the first time, the mechanisms, principles, practices, and theory needed by every graphics developer seeking to apply LOD methods.
Continuing advances in level of detail management have brought this powerful technology to the forefront of 3D graphics optimization research. This book, written by the very researchers and developers who have built LOD technology, is both a state-of-the-art chronicle of LOD advances and a practical sourcebook, which will enable graphics developers from all disciplines to apply these formidable techniques to their own work.
Features
Is a complete, practical resource for programmers wishing to incorporate LOD technology into their own systems.
Is an important reference for professionals in game development, computer animation, information visualization, real-time graphics and simulation, data capture and preview, CAD display, and virtual worlds.
Is accessible to anyone familiar with the essentials of computer science and interactive computer graphics.
Covers the full range of LOD methods from mesh simplification to error metrics, as well as advanced issues of human perception, temporal detail, and visual fidelity measurement.
Includes an accompanying Web site rich in supplementary material including source code, tools, 3D models, public domain software, documentation, LOD updates, and more.
Author Biography: David Luebke
David is an Assistant Professor in the Department of Computer Science at the University of Virginia. His principal research interest is the problem of rendering very complex scenes at interactive rates. His research focuses on software techniques such as polygonal simplification and occlusion culling to reduce the complexity of such scenes to manageable levels. Luebke's dissertation research, summarized in a SIGGRAPH '97 paper, introduced a dynamic, view-dependent approach to polygonal simplification for interactive rendering of extremely complex CAD models. He earned his Ph.D. at the University of North Carolina, and his Bachelors degree at the Colorado College.
Martin Reddy
Martin is a Senior Computer Scientist at SRI International where he works in the area of terrain visualization. This work involves the real-time display of massive terrain databases that are distributed over wide-area networks. His research interests include level of detail, visual perception, and computer graphics. His doctoral research involved the application of models of visual perception to real-time computer graphics systems, enabling the selection of level of detail based upon measures of human perception. He received his B.Sc. from the University of Strathclyde and his Ph.D. from the University of Edinburgh, UK. He is on the Board of Directors of the Web3D Consortium and chair of the GeoVRML Working Group.
Jonathan D. Cohen
Jon is an Assistant Professor in the Department of Computer Science at The Johns Hopkins University. He earned his Doctoral and Masters degrees from The University of North Carolina at Chapel Hill and earned his Bachelors degree from Duke University. His interests include polygonal simplification and other software acceleration techniques, parallel rendering architectures, collision detection, and high-quality interactive computer graphics.
Amitabh Varshney
Amitabh is an Associate Professor in the Department of Computer Science at the University of Maryland. His research interests lie in interactive computer graphics, scientific visualization, molecular graphics, and CAD. Varshney has worked on several aspects of level-of-detail simplifications including topology-preserving and topology-reducing simplifications, view-dependent simplifications, parallelization of simplification computation, as well as using triangle strips in multiresolution rendering. Varshney received his PhD and MS from the University of North Carolina at Chapel Hill in 1994 and 1991 respectively. He received his B. Tech. in Computer Science from the Indian Institute of Technology at Delhi in 1989.
Benjamin Watson
Ben is an Assistant Professor in Computer Science at Northwestern University. He earned his doctoral and Masters degrees at Georgia Tech's GVU Center, and his Bachelors degree at the University of California, Irvine. His dissertation focused on user performance effects of dynamic level of detail management. His other research interests include object simplification, medical applications of virtual reality, and 3D user interfaces.
Robert Huebner
Robert is the Director of Technology at Nihilistic Software, an independent development studio located in Marin County, California. Prior to co-founding Nihilistic, Robert has worked on a number of successful game titles including "Jedi Knight: Dark Forces 2" for LucasArts Entertainment, "Descent" for Parallax Software, and "Starcraft" for Blizzard Entertainment. Nihilistic's first title, "Vampire The Masquerade: Redemption" was released for the PC in 2000 and sold over 500,000 copies worldwide. Nihilistic's second project will be released in the Winter of 2002 on next-generation game consoles. Robert has spoken on game technology topics at SIGGRAPH, the Game Developer's Conference (GDC), and Electronic Entertainment Expo (E3). He also serves on the advisory board for the Game Developer's Conference and the International Game Developer's Association (IGDA). Robert's e-mail address is .
---
paper_title: A Developer's Survey of Polygonal Simplification Algorithms
paper_content:
Polygonal models currently dominate interactive computer graphics. This is chiefly because of their mathematical simplicity: polygonal models lend themselves to simple, regular rendering algorithms that embed well in hardware, which has in turn led to widely available polygon rendering accelerators for every platform. Unfortunately, the complexity of these models, which is measured by the number of polygons, seems to grow faster than the ability of our graphics hardware to render them interactively. Put another way, the number of polygons we want always seems to exceed the number of polygons we can afford. Polygonal simplification techniques offer one solution for developers grappling with complex models. These methods simplify the polygonal geometry of small, distant, or otherwise unimportant portions of the model, seeking to reduce the rendering cost without a significant loss in the scene's visual content. The article surveys polygonal simplification algorithms, identifies the issues in picking an algorithm, relates the strengths and weaknesses of different approaches, and describes several published algorithms.
---
paper_title: A Comparison of mesh simplification algorithms
paper_content:
Abstract In many applications the need for an accurate simplification of surface meshes is becoming more and more urgent. This need is not only due to rendering speed reasons, but also to allow fast transmission of 3D models in network-based applications. Many different approaches and algorithms for mesh simplification have been proposed in the last few years. We present a survey and a characterization of the fundamental methods. Moreover, the results of an empirical comparison of the simplification codes available in the public domain are discussed. Five implementations, chosen to give a wide spectrum of different topology preserving methods, were run on a set of sample surfaces. We compared empirical computational complexities and the approximation accuracy of the resulting output meshes.
---
paper_title: Survey of Polygonal Surface Simplification Algorithms
paper_content:
This paper surveys methods for simplifying and approximating polygonal surfaces. A polygonal surface is a piecewise-linear surface in 3-D defined by a set of polygons; typically a set of triangles. Methods from computer graphics, computer vision, cartography, computational geometry, and other fields are classified, summarized, and compared both practically and theoretically. The surface types range from height fields (bivariate functions), to manifolds, to non-manifold self-intersecting surfaces. Piecewise-linear curve simplification is also briefly surveyed.
---
paper_title: Smooth view-dependent level-of-detail control and its application to terrain rendering
paper_content:
The key to real-time rendering of large-scale surfaces is to locally adapt surface geometric complexity to changing view parameters. Several schemes have been developed to address this problem of view-dependent level-of-detail control. Among these, the view-dependent progressive mesh (VDPM) framework represents an arbitrary triangle mesh as a hierarchy of geometrically optimized refinement transformations, from which accurate approximating meshes can be efficiently retrieved. In this paper we extend the general VDPM framework to provide temporal coherence through the run-time creation of geomorphs. These geomorphs eliminate "popping" artifacts by smoothly interpolating geometry. Their implementation requires new output-sensitive data structures, which have the added benefit of reducing memory use. We specialize the VDPM framework to the important case of terrain rendering. To handle huge terrain grids, we introduce a block-based simplification scheme that constructs a progressive mesh as a hierarchy of block refinements. We demonstrate the need for an accurate approximation metric during simplification. Our contributions are highlighted in a real-time flyover of a large, rugged terrain. Notably, the use of geomorphs results in visually smooth rendering even at 72 frames/sec on a graphics workstation.
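The geomorphs described above come down to linearly blending each refined vertex from the position it inherits from the coarser mesh toward its true fine position over a short time window, which is what hides the "popping". The sketch below is not the paper's VDPM implementation, only a minimal illustration of that interpolation; the vertex structure, the blend duration, and the demo values are assumptions.

```cpp
#include <algorithm>
#include <cstdio>

// Minimal geomorph sketch: a refined vertex is blended from the position it
// had in the coarser mesh toward its true fine position, hiding popping.
struct GeomorphVertex {
    float coarse[3]; // position inherited from the parent (coarse) vertex
    float fine[3];   // true position at the finer level of detail
    float startTime; // time at which the refinement happened (seconds)
};

// Returns the blended position at time 'now'; 'duration' controls how long
// the transition lasts (an assumed parameter, not a value from the paper).
void geomorphPosition(const GeomorphVertex& v, float now, float duration, float out[3]) {
    float t = std::max(0.0f, std::min(1.0f, (now - v.startTime) / duration));
    for (int i = 0; i < 3; ++i)
        out[i] = (1.0f - t) * v.coarse[i] + t * v.fine[i];
}

int main() {
    GeomorphVertex v{{0, 10, 0}, {0, 12, 0}, 1.0f}; // illustrative vertex
    float p[3];
    for (float now = 1.0f; now <= 1.5f; now += 0.25f) {
        geomorphPosition(v, now, 0.5f, p);
        std::printf("t=%.2f  y=%.2f\n", now, p[1]);
    }
    return 0;
}
```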
---
paper_title: Building and traversing a surface at variable resolution
paper_content:
The authors consider the multi-triangulation, a general model for representing surfaces at variable resolution based on triangle meshes. They analyse characteristics of the model that make it effective for supporting basic operations such as extraction of a surface approximation, and point location. An interruptible algorithm for extracting a representation at a resolution variable over the surface is presented. Different heuristics for building the model are considered and compared. Results on both the construction and the extraction algorithm are presented.
---
paper_title: A hybrid, hierarchical data structure for real-time terrain visualization
paper_content:
The approximation tree is a hybrid, hierarchical data structure for real-time terrain visualization which represents both geometry data and texture data of a terrain in a hierarchical manner. This framework can integrate different multiresolution modeling techniques operating on different types of data sets such as TINs, regular grids, and non-regular grids. An approximation tree recursively aggregates terrain patches which reference geometry data and texture data. The rendering algorithm selects patches based on a geometric approximation error and a texture approximation error. Terrain shading and thematic texturing, which can be generated in a preprocessing step, improve the visual quality of level of detail models and eliminate the defects resulting from a Gouraud shaded geometric model since they do not depend on the current (probably reduced) geometry. The approximation tree can be implemented efficiently using object-oriented design principles. A case study for cartographic landscape visualization illustrates the use of approximation trees.
---
paper_title: Geometry clipmaps: terrain rendering using nested regular grids
paper_content:
Rendering throughput has reached a level that enables a novel approach to level-of-detail (LOD) control in terrain rendering. We introduce the geometry clipmap, which caches the terrain in a set of nested regular grids centered about the viewer. The grids are stored as vertex buffers in fast video memory, and are incrementally refilled as the viewpoint moves. This simple framework provides visual continuity, uniform frame rate, complexity throttling, and graceful degradation. Moreover it allows two new exciting real-time functionalities: decompression and synthesis. Our main dataset is a 40GB height map of the United States. A compressed image pyramid reduces the size by a remarkable factor of 100, so that it fits entirely in memory. This compressed data also contributes normal maps for shading. As the viewer approaches the surface, we synthesize grid levels finer than the stored terrain using fractal noise displacement. Decompression, synthesis, and normal-map computations are incremental, thereby allowing interactive flight at 60 frames/sec.
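A clipmap caches a fixed-size window of each power-of-two terrain level around the viewer, so coarser levels cover progressively larger rings. The sketch below only computes the world-space extent of each nested grid centered on the camera, snapped to that level's sample grid; it illustrates the nesting, not the paper's incremental refill, compression, or synthesis machinery, and the grid size and sample spacing are assumed values.

```cpp
#include <cmath>
#include <cstdio>

// Each clipmap level is an n x n window of samples; level L has sample
// spacing baseSpacing * 2^L. Values here are illustrative assumptions.
struct ClipmapLevel {
    int level;
    double spacing;    // distance between samples at this level
    double minX, minZ; // world-space corner of the cached window
    double extent;     // world-space size of the window
};

ClipmapLevel levelExtent(int level, double viewX, double viewZ,
                         int n = 255, double baseSpacing = 1.0) {
    double spacing = baseSpacing * std::ldexp(1.0, level); // baseSpacing * 2^level
    double extent = n * spacing;
    // Snap the window origin to the level's grid so samples stay aligned as
    // the viewer moves (this is what makes incremental refills possible).
    double minX = std::floor((viewX - extent / 2) / spacing) * spacing;
    double minZ = std::floor((viewZ - extent / 2) / spacing) * spacing;
    return {level, spacing, minX, minZ, extent};
}

int main() {
    for (int l = 0; l < 4; ++l) {
        ClipmapLevel c = levelExtent(l, 1000.0, 2000.0);
        std::printf("level %d: spacing %.1f, window %.1f x %.1f at (%.1f, %.1f)\n",
                    c.level, c.spacing, c.extent, c.extent, c.minX, c.minZ);
    }
    return 0;
}
```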
---
paper_title: TerraVision II: Visualizing Massive Terrain Databases in VRML
paper_content:
To disseminate 3D maps and spatial data over the Web, we designed massive terrain data sets accessible through either a VRML browser or the customized TerraVision II browser. Although not required to view the content, TerraVision II lets the user perform specialized browser level optimizations that offer increased efficiency and seamless interaction with the terrain data. We designed our framework to simplify terrain data maintenance and to let users dynamically select particular sets of geo-referenced data. Our implementation uses Java scripting to extend VRML's base functionality and the External Authoring Interface to offer application-specific management of the virtual geographic environment.
---
paper_title: Real-time, continuous level of detail rendering of height fields
paper_content:
We present an algorithm for real-time level of detail reduction and display of high-complexity polygonal surface data. The algorithm uses a compact and efficient regular grid representation, and employs a variable screen-space threshold to bound the maximum error of the projected image. A coarse level of simplification is performed to select discrete levels of detail for blocks of the surface mesh, followed by further simplification through repolygonalization in which individual mesh vertices are considered for removal. These steps compute and generate the appropriate level of detail dynamically in real-time, minimizing the number of rendered polygons and allowing for smooth changes in resolution across areas of the surface. The algorithm has been implemented for approximating and rendering digital terrain models and other height fields, and consistently performs at interactive frame rates with high image quality.
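The key test in this and several later algorithms is whether a vertex's (or block's) world-space geometric error, projected to the screen, stays below a pixel threshold. A common form of that projection is sketched below; the viewport and field-of-view parameters are assumptions, and the isotropic formula is a simplification of the paper's own distance- and direction-dependent bound.

```cpp
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Projects a world-space geometric error 'delta' at eye distance 'dist' into
// an approximate screen-space error in pixels for a symmetric perspective
// projection. This isotropic form is a common simplification of such bounds.
double screenSpaceError(double delta, double dist,
                        double viewportWidthPx, double horizontalFovRad) {
    double pixelsPerUnit = viewportWidthPx / (2.0 * dist * std::tan(horizontalFovRad / 2.0));
    return delta * pixelsPerUnit;
}

int main() {
    const double widthPx = 1024.0, fov = 60.0 * kPi / 180.0, tauPx = 1.0; // assumed setup
    const double dists[] = {50.0, 200.0, 1000.0};
    for (double dist : dists) {
        double err = screenSpaceError(0.5, dist, widthPx, fov);
        std::printf("delta=0.5m at %6.0fm -> %5.2f px -> %s\n",
                    dist, err, err <= tauPx ? "coarsen" : "refine");
    }
    return 0;
}
```

The same distant feature that needs refinement up close falls below one pixel of error far away, which is exactly why the screen-space test lets the mesh coarsen with distance.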
---
paper_title: Multiresolution Compression and Visualization of Global Topographic Data
paper_content:
We present a multiresolution model for terrain surfaces which is able to handle large-scale global topographic data. It is based on a hierarchical decomposition of the sphere by a recursive bisection triangulation in geographic coordinates. Error indicators allow the representation of the data at various levels of detail and enable data compression by local omission of data values. The resulting adaptive hierarchical triangulation is stored using a bit code of the underlying binary tree and additionally, relative pointers which allow a selective tree traversal. This way, it is possible to work directly on the compressed data. We show that significant compression rates can be obtained already for small threshold values. In a visualization application, adaptive triangulations which consist of hundreds of thousands of shaded triangles are extracted and drawn at interactive rates.
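Storing an adaptive bisection hierarchy as a bit code is straightforward: a pre-order traversal writes 1 for a refined node and 0 for a leaf whose error indicator fell below the threshold. The sketch below shows only that encode/decode idea on a plain binary tree; the relative pointers and the spherical triangulation of the paper are omitted, and the example tree is made up.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

// Pre-order bit coding of an adaptive binary refinement tree:
// 1 = node is refined (has two children), 0 = node is a leaf.
struct Node {
    std::unique_ptr<Node> left, right;
    bool refined() const { return left && right; }
};

void encode(const Node* n, std::vector<bool>& bits) {
    bits.push_back(n->refined());
    if (n->refined()) { encode(n->left.get(), bits); encode(n->right.get(), bits); }
}

std::unique_ptr<Node> decode(const std::vector<bool>& bits, size_t& pos) {
    auto n = std::make_unique<Node>();
    if (bits[pos++]) { n->left = decode(bits, pos); n->right = decode(bits, pos); }
    return n;
}

int main() {
    // Small example tree: root refined, its left child refined, the rest leaves.
    Node root;
    root.left = std::make_unique<Node>();
    root.right = std::make_unique<Node>();
    root.left->left = std::make_unique<Node>();
    root.left->right = std::make_unique<Node>();

    std::vector<bool> bits;
    encode(&root, bits);
    std::printf("bit code: ");
    for (bool b : bits) std::printf("%d", b ? 1 : 0);
    std::printf(" (%zu bits)\n", bits.size());

    size_t pos = 0;
    std::unique_ptr<Node> copy = decode(bits, pos);
    std::printf("decoded root is refined: %s\n", copy->refined() ? "yes" : "no");
    return 0;
}
```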
---
paper_title: Real-time optimal adaptation for planetary geometry and texture: 4-8 tile hierarchies
paper_content:
The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through the use of precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, which is a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the payload data per diamond for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions that are twice as gradual as traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared to subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed similar to the realtime optimally adapting meshes (ROAM) algorithm, as well as an adaptation of the ROAM frustum culling technique. Example applications of lake-detection and procedural terrain generation demonstrate the flexibility of the tile processing framework.
---
paper_title: Top–Down View–Dependent Terrain Triangulation using the Octagon Metric
---
paper_title: BDAM - Batched Dynamic Adaptive Meshes for High Performance Terrain Visualization
paper_content:
This paper describes an efficient technique for out-of-core rendering and management of large textured terrain surfaces. The technique, called Batched Dynamic Adaptive Meshes (BDAM), is based on a paired tree structure: a tiled quadtree for texture data and a pair of bintrees of small triangular patches for the geometry. These small patches are TINs and are constructed and optimized off-line with high quality simplification and tristripping algorithms. Hierarchical view frustum culling and view-dependent texture and geometry refinement is performed at each frame through a stateless traversal algorithm. Thanks to the batched CPU/GPU communication model, the proposed technique is not processor intensive and fully harnesses the power of current graphics hardware. Both preprocessing and rendering exploit out-of-core techniques to be fully scalable and to manage large terrain datasets.
---
paper_title: Visualization of large terrains made easy
paper_content:
We present an elegant and simple to implement framework for performing out-of-core visualization and view-dependent refinement of large terrain surfaces. Contrary to the recent trend of increasingly elaborate algorithms for large-scale terrain visualization, our algorithms and data structures have been designed with the primary goal of simplicity and efficiency of implementation. Our approach to managing large terrain data also departs from more conventional strategies based on data tiling. Rather than emphasizing how to segment and efficiently bring data in and out of memory, we focus on the manner in which the data is laid out to achieve good memory coherency for data accesses made in a top-down (coarse-to-fine) refinement of the terrain. We present and compare the results of using several different data indexing schemes, and propose a simple to compute index that yields substantial improvements in locality and speed over more commonly used data layouts.Our second contribution is a new and simple, yet easy to generalize method for view-dependent refinement. Similar to several published methods in this area, we use longest edge bisection in a top-down traversal of the mesh hierarchy to produce a continuous surface with subdivision connectivity. In tandem with the refinement, we perform view frustum culling and triangle stripping. These three components are done together in a single pass over the mesh. We show how this framework supports virtually any error metric, while still being highly memory and compute efficient.
---
paper_title: Terrain Simplification Simplified : A General Framework for View-Dependent Out-of-Core Visualization
paper_content:
We describe a general framework for out-of-core rendering and management of massive terrain surfaces. The two key components of this framework are: view-dependent refinement of the terrain mesh and a simple scheme for organizing the terrain data to improve coherence and reduce the number of paging events from external storage to main memory. Similar to several previously proposed methods for view-dependent refinement, we recursively subdivide a triangle mesh defined over regularly gridded data using longest-edge bisection. As part of this single, per-frame refinement pass, we perform triangle stripping, view frustum culling, and smooth blending of geometry using geomorphing. Meanwhile, our refinement framework supports a large class of error metrics, is highly competitive in terms of rendering performance, and is surprisingly simple to implement. Independent of our refinement algorithm, we also describe several data layout techniques for providing coherent access to the terrain data. By reordering the data in a manner that is more consistent with our recursive access pattern, we show that visualization of gigabyte-size data sets can be realized even on low-end, commodity PCs without the need for complicated and explicit data paging techniques. Rather, by virtue of dramatic improvements in multilevel cache coherence, we rely on the built-in paging mechanisms of the operating system to perform this task. The end result is a straightforward, simple-to-implement, pointerless indexing scheme that dramatically improves the data locality and paging performance over conventional matrix-based layouts.
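Longest-edge bisection refines a right triangle by splitting it at the midpoint of its hypotenuse, producing two children that are again right triangles. The recursive sketch below emits an adaptive triangulation driven by a caller-supplied refinement predicate; the flat "refine near the origin" test in main is an assumption standing in for the paper's view-dependent error metric, and crack prevention between neighbors is not handled here.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

// A triangle given by its three 2D vertices; (a, b) is the hypotenuse and
// c is the right-angle ("apex") vertex, as in longest-edge bisection.
struct Tri { double ax, ay, bx, by, cx, cy; };

using RefinePredicate = std::function<bool(const Tri&, int depth)>;

// Recursively bisect 'tri' at the midpoint of its hypotenuse until the
// predicate says stop (or maxDepth is reached), collecting leaf triangles.
void refine(const Tri& tri, int depth, int maxDepth,
            const RefinePredicate& shouldRefine, std::vector<Tri>& out) {
    if (depth >= maxDepth || !shouldRefine(tri, depth)) { out.push_back(tri); return; }
    double mx = (tri.ax + tri.bx) / 2, my = (tri.ay + tri.by) / 2; // hypotenuse midpoint
    // Children: the apex becomes a hypotenuse endpoint, the midpoint the new apex.
    Tri left  {tri.cx, tri.cy, tri.ax, tri.ay, mx, my};
    Tri right {tri.bx, tri.by, tri.cx, tri.cy, mx, my};
    refine(left,  depth + 1, maxDepth, shouldRefine, out);
    refine(right, depth + 1, maxDepth, shouldRefine, out);
}

int main() {
    // One half of the square [0,100]^2; refine more near the origin as a
    // stand-in for a real view-dependent error test.
    Tri root{0, 0, 100, 100, 0, 100};
    std::vector<Tri> leaves;
    refine(root, 0, 8, [](const Tri& t, int) {
        double cx = (t.ax + t.bx + t.cx) / 3, cy = (t.ay + t.by + t.cy) / 3;
        return cx + cy < 80.0; // assumed "high detail near (0,0)" rule
    }, leaves);
    std::printf("adaptive mesh has %zu leaf triangles\n", leaves.size());
    return 0;
}
```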
---
paper_title: Scalable compression and rendering of textured terrain data
paper_content:
Several sophisticated methods are available for efficient rendering of out-of-core terrain data sets. For huge data sets the use of preprocessed tiles has proven to be more efficient than continuous levels of detail, since in the latter case the screen space error has to be verified for individual triangles. There are some prevailing problems of these approaches: i) the partitioning and simplification of the original data set and ii) the accurate rendering of these data sets. Current approaches still trade the approximation error in image space for increased frame rates. To overcome these problems we propose a data structure and LOD scheme. These enable the real-time rendering of out-of-core data sets while guaranteeing geometric and texture accuracy of one pixel between original and rendered mesh in image space. To accomplish this, we utilize novel scalable techniques for integrated simplification, compression, and rendering. The combination of these techniques with impostors and occlusion culling yields a truly output sensitive algorithm for terrain data sets. We demonstrate the potential of our approach by presenting results for several terrain data sets with sizes up to 16k x 16k. The results show the unprecedented fidelity of the visualization, which is maintained even during real-time exploration of the data sets.
---
paper_title: ROAMing terrain: Real-time Optimally Adapting Meshes
paper_content:
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor simulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM's execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM's performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
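ROAM drives refinement with a priority queue of candidate splits (and a second queue of candidate merges for frame-to-frame coherence). The sketch below shows only the split side over an abstract priority value, to make the "split the worst triangle until the budget is hit" control loop concrete; the real priorities, the bintree neighbor links needed for forced splits, and the merge queue are all left out, so this is an illustration rather than the paper's algorithm.

```cpp
#include <cstdio>
#include <queue>
#include <vector>

// Abstract bintree triangle: we track only a priority (e.g. projected error)
// and a depth; real ROAM also tracks neighbors to perform forced splits.
struct TriNode {
    double priority; // higher = more urgent to split (assumed metric)
    int depth;
};

struct ByPriority {
    bool operator()(const TriNode& a, const TriNode& b) const {
        return a.priority < b.priority; // max-heap on priority
    }
};

int main() {
    std::priority_queue<TriNode, std::vector<TriNode>, ByPriority> splitQueue;
    splitQueue.push({8.0, 0});
    splitQueue.push({5.0, 0});

    const int triangleBudget = 12;
    int triangles = 2; // the two base bintree triangles

    // Greedily split the worst triangle until the budget is reached.
    while (triangles < triangleBudget && !splitQueue.empty()) {
        TriNode worst = splitQueue.top();
        splitQueue.pop();
        // Splitting yields two children; assume each child's priority halves
        // (a stand-in for re-evaluating the error metric after the split).
        splitQueue.push({worst.priority * 0.5, worst.depth + 1});
        splitQueue.push({worst.priority * 0.5, worst.depth + 1});
        ++triangles; // net gain of one triangle per split of one bintree node
        std::printf("split p=%.2f depth=%d -> %d triangles\n",
                    worst.priority, worst.depth, triangles);
    }
    return 0;
}
```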
---
paper_title: Right-Triangulated Irregular Networks
paper_content:
We describe a hierarchical data structure for representing a digital terrain (height field) which contains approximations of the terrain at different levels of detail. The approximations are based on triangulations of the underlying two-dimensional space using right-angled triangles. The methods we discuss permit a single approximation to have a varying level of approximation accuracy across the surface. Thus, for example, the area close to an observer may be represented with greater detail than areas which lie outside their field of view. We discuss the application of this hierarchical data structure to the problem of interactive terrain visualization. We point out some of the advantages of this method in terms of memory usage and speed.
---
paper_title: Surface Modeling Using Quadtrees
paper_content:
Two quadtree variants effective in modeling 2 1/2-d surfaces are presented. The restricted quadtree can handle regularly sampled data. For irregular data, embedding a TIN inside a PMR quadtree is suggested. Together, these schemes facilitate the handling of most types of input within a single framework. Algorithms for the construction of both data structures from their respective data formats are described and analyzed. The possible application of each of the models to the problem of visibility determination is considered and theoretical evaluation of its performance is made.
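The defining property of the restricted quadtree is its 2:1 balance constraint: a cell may only be subdivided if its edge-adjacent neighbors are at most one level coarser, so a split can force neighbor splits to propagate outward. The sketch below enforces that rule on a quadtree over the unit square; the cell keying, function names, and the point-free neighbor lookup are simplifications chosen for brevity, not the book-keeping a production implementation would use.

```cpp
#include <cstdio>
#include <set>
#include <tuple>

// A cell is identified by (level, i, j): at level L the domain is a
// 2^L x 2^L grid and (i, j) indexes the cell. 'interior' holds every cell
// that has been subdivided; all other reachable cells are leaves.
using Cell = std::tuple<int, int, int>;
static std::set<Cell> interior;

// Returns the leaf that covers the region of cell (level, i, j) by walking
// down from the root through subdivided cells only.
Cell leafAt(int level, int i, int j) {
    Cell c{0, 0, 0};
    while (std::get<0>(c) < level && interior.count(c)) {
        int L = std::get<0>(c) + 1;
        int shift = level - L;
        c = Cell{L, i >> shift, j >> shift};
    }
    return c;
}

// Split a leaf, first forcing coarser edge neighbors to split so that no two
// edge-adjacent leaves ever differ by more than one level (2:1 balance).
void split(const Cell& c) {
    if (interior.count(c)) return; // already subdivided
    int L = std::get<0>(c), i = std::get<1>(c), j = std::get<2>(c);
    const int di[4] = {1, -1, 0, 0}, dj[4] = {0, 0, 1, -1};
    for (int d = 0; d < 4; ++d) {
        int ni = i + di[d], nj = j + dj[d];
        if (ni < 0 || nj < 0 || ni >= (1 << L) || nj >= (1 << L)) continue;
        // Keep splitting the leaf covering the neighbor region until it is
        // at least as fine as this cell's level.
        for (Cell n = leafAt(L, ni, nj); std::get<0>(n) < L; n = leafAt(L, ni, nj))
            split(n);
    }
    interior.insert(c);
}

int main() {
    split(Cell{0, 0, 0});                     // root -> four level-1 leaves
    while (std::get<0>(leafAt(4, 7, 7)) < 4)  // drive one interior cell to level 4;
        split(leafAt(4, 7, 7));               // balance splits propagate outward
    std::printf("subdivided (interior) cells: %zu\n", interior.size());
    return 0;
}
```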
---
paper_title: Variable Resolution Triangulations
paper_content:
A comprehensive study of multiresolution decompositions of planar domains into triangles is given. A general model is introduced, called a Multi-Triangulation (MT), which is based on a collection of fragments of triangulations arranged into a directed acyclic graph. Different decompositions of a domain can be obtained by combining different fragments of the model. Theoretical results on the expressive power of the MT are given. An efficient algorithm is proposed that can extract a triangulation from the MT, whose level of detail is variable over the domain according to a given threshold function. The algorithm works in linear time, and the extracted representation has minimum size among all possible triangulations that can be built from triangles in the MT, and that satisfy the given level of detail. Major applications of these results are in real-time rendering of complex surfaces, such as topographic surfaces in flight simulation.
---
paper_title: Accurate triangulations of deformed, intersecting surfaces
paper_content:
A quadtree algorithm is developed to triangulate deformed, intersecting parametric surfaces. The biggest problem with adaptive sampling is to guarantee that the triangulation is accurate within a given tolerance. A new method guarantees the accuracy of the triangulation, given a "Lipschitz" condition on the surface definition. The method constructs a hierarchical set of bounding volumes for the surface, useful for ray tracing and solid modeling operations. The task of adaptively sampling a surface is broken into two parts: a subdivision mechanism for recursively subdividing a surface, and a set of subdivision criteria for controlling the subdivision process.An adaptive sampling technique is said to be robust if it accurately represents the surface being sampled. A new type of quadtree, called a restricted quadtree, is more robust than the traditional unrestricted quadtree at adaptive sampling of parametric surfaces. Each sub-region in the quadtree is half the width of the previous region. The restricted quadtree requires that adjacent regions be the same width within a factor of two, while the traditional quadtree makes no restriction on neighbor width. Restricted surface quadtrees are effective at recursively sampling a parametric surface. Quadtree samples are concentrated in regions of high curvature, and along intersection boundaries, using several subdivision criteria. Silhouette subdivision improves the accuracy of the silhouette boundary when a viewing transformation is available at sampling time. The adaptive sampling method is more robust than uniform sampling, and can be more efficient at rendering deformed, intersecting parametric surfaces.
---
paper_title: Variable Resolution 4-k Meshes: Concepts and Applications
paper_content:
In this paper we introduce variable resolution 4-k meshes, a powerful structure for the representation of geometric objects at multiple levels of detail. It combines most properties of other related descriptions with several advantages, such as more flexibility and greater expressive power. The main unique feature of the 4-k mesh structure lies in its variable resolution capability, which is crucial for adaptive computation. We also give an overview of the different methods for constructing the 4-k mesh representation, as well as the basic algorithms necessary to incorporate it in modeling and graphics applications.
---
paper_title: Finding neighbors of equal size in linear quadtrees and octrees in constant time
paper_content:
Linear quadtrees and octrees are data structures which are of interest in image processing, computer graphics, and solid modeling. Their representation involves spatial addresses called location codes. For many of the operations on objects in linear quadtree and octree representation, finding neighbors is a basic operation. By considering the components of a location code, named dilated integers, a representation and associated addition and subtraction operations may be defined which are efficient in execution. The operations form the basis for the definition of location code addition and subtraction, with which finding neighbors of equal size is accomplished in constant time. The translation of pixels is a related operation. The results for linear quadtrees can be generalized without difficulty to linear octrees.
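The constant-time neighbor computation rests on dilated integers: the x and y parts of an interleaved location code occupy the even and odd bit positions, and addition can be carried out directly in that form by temporarily filling the unused bits with ones so carries propagate across them. The sketch below shows dilation and the dilated increment/decrement used to step to an equal-size neighbor; the 16-bit-per-axis limit and the helper names are assumptions for brevity.

```cpp
#include <cstdint>
#include <cstdio>

// Spread the 16 bits of x so they occupy the even bit positions (bit i -> bit 2i).
uint32_t dilate(uint32_t x) {
    x &= 0x0000FFFF;
    x = (x | (x << 8)) & 0x00FF00FF;
    x = (x | (x << 4)) & 0x0F0F0F0F;
    x = (x | (x << 2)) & 0x33333333;
    x = (x | (x << 1)) & 0x55555555;
    return x;
}

// Inverse of dilate: collect the even bits back into a compact integer.
uint32_t undilate(uint32_t x) {
    x &= 0x55555555;
    x = (x | (x >> 1)) & 0x33333333;
    x = (x | (x >> 2)) & 0x0F0F0F0F;
    x = (x | (x >> 4)) & 0x00FF00FF;
    x = (x | (x >> 8)) & 0x0000FFFF;
    return x;
}

// Add/subtract 1 in dilated form: setting the unused (odd) bits to 1 before the
// add lets the carry ripple across them; borrows already propagate for subtract.
uint32_t dilatedIncrement(uint32_t d) { return ((d | ~0x55555555u) + 1) & 0x55555555u; }
uint32_t dilatedDecrement(uint32_t d) { return ((d & 0x55555555u) - 1) & 0x55555555u; }

int main() {
    uint32_t x = 5, y = 10;                       // cell coordinates (illustrative)
    uint32_t code = dilate(x) | (dilate(y) << 1); // x in even bits, y in odd bits

    // Equal-size east / west neighbors: step the x component in dilated form.
    uint32_t east = dilatedIncrement(code & 0x55555555u) | (code & 0xAAAAAAAAu);
    uint32_t west = dilatedDecrement(code & 0x55555555u) | (code & 0xAAAAAAAAu);
    std::printf("cell (%u,%u): east (%u,%u), west (%u,%u)\n",
                x, y,
                undilate(east), undilate(east >> 1),
                undilate(west), undilate(west >> 1));
    return 0;
}
```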
---
paper_title: Computational analysis of 4-8 meshes with application to surface simplification using global error
paper_content:
We present computational results when computing approximations of a class of meshes with subdivision connectivity, known as 4-8 meshes. We consider algorithms using vertex decimation or vertex insertion. We explain that a full decomposition of a 4-8 mesh using global error can be obtained with a decimation algorithm. Our algorithm produces progressive and adaptive representations of terrain data or subdivision surfaces having arbitrary topology.
---
paper_title: Image encoding with triangulation wavelets
paper_content:
We demonstrate some wavelet-based image processing applications of a class of simplicial grids arising in finite element computations and computer graphics. The cells of a triangular grid form the set of leaves of a binary tree and the nodes of a directed graph consisting of a single cycle. The leaf cycle of a uniform grid forms a pattern for pixel image scanning and for coherent computation of coefficients of splines and wavelets. A simple form of image encoding is accomplished with a 1D quadrature mirror filter whose coefficients represent an expansion of the image in terms of 2D Haar wavelets with triangular support. A combination of the leaf cycle and an inherent quadtree structure allows efficient neighbor finding, grid refinement, tree pruning, and storage. Pruning of the simplex tree yields a partially compressed image which requires no decoding, but rather may be rendered as a shaded triangulation. This structure and its generalization to n dimensions form a convenient setting for wavelet analysis and computations based on simplicial grids.
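The encoding pairs a scan order over the triangle leaves with a one-dimensional Haar transform: each pass replaces adjacent sample pairs by their scaled average and difference, and small differences can then be pruned. The sketch below is just the standard orthonormal 1D Haar step applied to a toy signal, as a stand-in for the filter run along the leaf cycle; it is not the paper's triangular-support implementation.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// One level of the orthonormal 1D Haar transform: pairs (a, b) become the
// scaled average (a+b)/sqrt(2) and detail (a-b)/sqrt(2). Repeating on the
// averages gives the full multi-level decomposition.
std::vector<double> haarLevel(const std::vector<double>& s) {
    size_t half = s.size() / 2;
    std::vector<double> out(s.size());
    const double r = std::sqrt(2.0);
    for (size_t i = 0; i < half; ++i) {
        out[i] = (s[2 * i] + s[2 * i + 1]) / r;        // approximation
        out[half + i] = (s[2 * i] - s[2 * i + 1]) / r; // detail
    }
    return out;
}

int main() {
    // Toy "scanline" of 8 samples, e.g. values read off along a leaf cycle.
    std::vector<double> signal{9, 7, 3, 5, 6, 10, 2, 6};
    std::vector<double> t = haarLevel(signal);
    std::printf("approx: ");
    for (size_t i = 0; i < 4; ++i) std::printf("%6.2f ", t[i]);
    std::printf("\ndetail: ");
    for (size_t i = 4; i < 8; ++i) std::printf("%6.2f ", t[i]);
    std::printf("\n");
    return 0;
}
```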
---
paper_title: Using Semi-Regular 4-8 Meshes for Subdivision Surfaces
paper_content:
Semi-regular 4–8 meshes are refinable triangulated quadrangulations. They provide a powerful hierarchical structure for multiresolution applications. In this paper, we show how to decompose the Doo-Sabin and Catmull-Clark subdivision schemes using 4–8 refinement. The described technique makes it possible to use these classical subdivision surfaces with semi-regular 4–8 meshes.
---
paper_title: Space Filling Curves and Their Use in the Design of Geometric Data Structures
paper_content:
We are given a two-dimensional square grid of size N×N, where N := 2^n and n ≥ 0. A space filling curve (SFC) is a numbering of the cells of this grid with numbers from c+1 to c+N^2, for some c ≥ 0. We call an SFC recursive (RSFC) if it can be recursively divided into four square RSFCs of equal size. Examples of well-known RSFCs include the Hilbert curve, the z-curve, and the Gray code.
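A recursive space-filling curve assigns each grid cell a single index whose locality makes hierarchical traversals and disk layouts cache-friendly. The z-curve mentioned above is the simplest RSFC: its index is just the bit-interleaving of the cell coordinates, as sketched below. (The Hilbert curve needs an extra per-quadrant rotation and is not shown.)

```cpp
#include <cstdint>
#include <cstdio>

// Z-curve (Morton) index of cell (x, y) in a 2^n x 2^n grid: interleave the
// bits of x and y, with x occupying the even bit positions.
uint64_t zIndex(uint32_t x, uint32_t y, int n) {
    uint64_t idx = 0;
    for (int b = 0; b < n; ++b) {
        idx |= (uint64_t)((x >> b) & 1u) << (2 * b);
        idx |= (uint64_t)((y >> b) & 1u) << (2 * b + 1);
    }
    return idx;
}

int main() {
    const int n = 2; // 4 x 4 grid
    std::printf("Z-curve ordering of a 4x4 grid:\n");
    for (uint32_t y = 0; y < 4; ++y) {
        for (uint32_t x = 0; x < 4; ++x)
            std::printf("%3llu", (unsigned long long)zIndex(x, y, n));
        std::printf("\n");
    }
    return 0;
}
```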
---
paper_title: QuadTIN: quadtree based triangulated irregular networks
paper_content:
Interactive visualization of large digital elevation models is of continuing interest in scientific visualization, GIS, and virtual reality applications. Taking advantage of the regular structure of grid digital elevation models, efficient hierarchical multiresolution triangulation and adaptive level-of-detail (LOD) rendering algorithms have been developed for interactive terrain visualization. Despite the higher triangle count, these approaches generally outperform mesh simplification methods that produce irregular triangulated network (TIN) based LOD representations. In this project we combine the advantage of a TIN based mesh simplification preprocess with high-performance quadtree based LOD triangulation and rendering at run-time. This approach, called QuadTIN, generates an efficient quadtree triangulation hierarchy over any irregular point set that may originate from irregular terrain sampling or from reducing oversampling in high-resolution grid digital elevation models.
---
paper_title: Efficient algorithms for embedded rendering of terrain models
paper_content:
Digital terrains are generally large files and need to be simplified to be rendered efficiently. We propose to build an adaptive embedded triangulation based on a binary tree structure to generate multiple levels of detail. We present an O(n log n) decimation algorithm and an O(n log n) refinement algorithm, where n is the number of elevation points. We compare them in a rate-distortion (RD) framework. The algorithms are based on an improved version of the optimal tree pruning algorithm G-BFOS, allowing one to deal with constrained tree structures and non-monotonic tree functionals.
---
paper_title: Methods for user-based reduction of model complexity for virtual planetary exploration
paper_content:
We have described a model for analysis of object geometric complexity and model display in visual simulation systems that more fully exploits the power of adaptive methods than has been used previously. Our model is strongly oriented towards user and application needs. A general version of the model that uses multiple analysis criteria, adapts to system load feedback, and is applicable to nearly any real-time feedback control system was presented first. We then specialized that model for the case of virtual environment systems to one that includes three criteria: an application task-dependent one analyzed in Environment Coordinate Space, a view-dependent criterion analyzed in 3D View Coordinate Space, and a visual perception-dependent criterion analyzed in 2D Display Coordinate Space. The latter criterion incorporates both static and dynamic perceptual effects. Finally, we described a further specialization and implementation of our model in the NASA Ames Virtual Planetary Exploration Testbed. The VPE Testbed utilizes hierarchical spatial subdivision of regularly gridded polygon mesh models of terrain surfaces for computational efficiency. Although portions of our model implementation take advantage of current system hardware and software architecture, the general principles are independent of these. Thus, with any performance advances in future architectures, our methods will always be able to enhance that performance. The results of our work can provide either a significant increase in visual appearance for a given frame update rate, or a higher update rate for comparable visual appearance than possible without using our methods. We address the "many polygons, little time" problem by showing the user many "important" polygons and fewer "unimportant" polygons.
---
paper_title: Fast view-dependent level-of-detail rendering using cached geometry
paper_content:
Level-of-detail rendering is essential for rendering very large, detailed worlds in real time. Unfortunately, level-of-detail computations can be expensive, creating a bottleneck at the CPU. This paper presents the CABTT algorithm, an extension to existing binary-triangle-tree-based level-of-detail algorithms. Instead of manipulating triangles, the CABTT algorithm operates on clusters of geometry called aggregate triangles. This reduces CPU overhead, eliminating a bottleneck common to level-of-detail algorithms. Since aggregate triangles stay fixed over several frames, they may be cached on the video card. This further reduces CPU load and fully utilizes the hardware-accelerated rendering pipeline on modern video cards. These improvements result in a fourfold increase in frame rate over ROAM [7] at high detail levels. Our implementation renders an approximation of an 8 million triangle heightfield at 42 frames per second with a maximum error of 1 pixel on consumer hardware.
---
paper_title: Hyperblock-QuadTIN: Hyper-block quadtree based triangulated irregular networks
paper_content:
Terrain rendering has always been an expensive task due to large input data models. Hierarchical multiresolution triangulation and level-of-detail rendering algorithms over regular structures of grid digital elevation models have been widely used for interactive terrain visualization. The main drawbacks of these are the large cost of memory storage required and the possible over-sampling of high-resolution terrain models. Triangulated irregular networks (TIN) can reduce the number of vertices at the expense of more complex and slower memory data access. We present a hyper-block quadtree based triangulated irregular networks approach, where the notion of vertex selection is extended to block selection. The hyper-block structure allows different pre-calculated triangulations to be stored. This reduces the vertex selection time per frame and removes the calculations needed to build the geometric rendering primitives (triangle strips) of the scene, at the expense of a larger number of selected vertices. The presented approach shows a speed increase of 20% for high-quality terrain rendering with small screen-projection error thresholds.
---
paper_title: Planet-sized batched dynamic adaptive meshes (P-BDAM)
paper_content:
We describe an efficient technique for out-of-core management and interactive rendering of planet sized textured terrain surfaces. The technique, called planet-sized batched dynamic adaptive meshes (P-BDAM), extends the BDAM approach by using as basic primitive a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single precision floating points; we exploit a compressed out of core representation and speculative prefetching for hiding disk latency during rendering of out-of-core data; we efficiently construct high quality simplified representations with a novel distributed out of core simplification algorithm working on a standard PC network.
---
paper_title: Batched multi triangulation
paper_content:
The multi triangulation framework (MT) is a very general approach for managing adaptive resolution in triangle meshes. The key idea is arranging mesh fragments at different resolution in a directed acyclic graph (DAG) which encodes the dependencies between fragments, thereby encompassing a wide class of multiresolution approaches that use hierarchies or DAGs with predefined topology. On current architectures, the classic MT is however unfit for real-time rendering, since DAG traversal costs vastly dominate raw rendering costs. In this paper, we redesign the MT framework in a GPU friendly fashion, moving its granularity from triangles to precomputed optimized triangle patches. The patches can be conveniently tri-stripped and stored in secondary memory to be loaded on demand, ready to be sent to the GPU using preferential paths. In this manner, central memory only contains the DAG structure and CPU workload becomes negligible. The major contributions of this work are: a new out-of-core multiresolution framework, that, just like the MT, encompasses a wide class of multiresolution structures; a robust and elegant way to build a well conditioned MT DAG by introducing the concept of V-partitions, that can encompass various state of the art multiresolution algorithms; an efficient multithreaded rendering engine and a general subsystem for the external memory processing and simplification of huge meshes.
---
paper_title: Planet-sized batched dynamic adaptive meshes (P-BDAM)
paper_content:
We describe an efficient technique for out-of-core management and interactive rendering of planet sized textured terrain surfaces. The technique, called planet-sized batched dynamic adaptive meshes (P-BDAM), extends the BDAM approach by using as basic primitive a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single precision floating points; we exploit a compressed out of core representation and speculative prefetching for hiding disk latency during rendering of out-of-core data; we efficiently construct high quality simplified representations with a novel distributed out of core simplification algorithm working on a standard PC network.
---
paper_title: Real-time optimal adaptation for planetary geometry and texture: 4-8 tile hierarchies
paper_content:
The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through the use of precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, which is a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the payload data per diamond for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions that are twice as gradual as traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared to subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed similar to the realtime optimally adapting meshes (ROAM) algorithm, as well as an adaptation of the ROAM frustum culling technique. Example applications of lake-detection and procedural terrain generation demonstrate the flexibility of the tile processing framework.
---
paper_title: Accurate triangulations of deformed, intersecting surfaces
paper_content:
A quadtree algorithm is developed to triangulate deformed, intersecting parametric surfaces. The biggest problem with adaptive sampling is to guarantee that the triangulation is accurate within a given tolerance. A new method guarantees the accuracy of the triangulation, given a "Lipschitz" condition on the surface definition. The method constructs a hierarchical set of bounding volumes for the surface, useful for ray tracing and solid modeling operations. The task of adaptively sampling a surface is broken into two parts: a subdivision mechanism for recursively subdividing a surface, and a set of subdivision criteria for controlling the subdivision process.An adaptive sampling technique is said to be robust if it accurately represents the surface being sampled. A new type of quadtree, called a restricted quadtree, is more robust than the traditional unrestricted quadtree at adaptive sampling of parametric surfaces. Each sub-region in the quadtree is half the width of the previous region. The restricted quadtree requires that adjacent regions be the same width within a factor of two, while the traditional quadtree makes no restriction on neighbor width. Restricted surface quadtrees are effective at recursively sampling a parametric surface. Quadtree samples are concentrated in regions of high curvature, and along intersection boundaries, using several subdivision criteria. Silhouette subdivision improves the accuracy of the silhouette boundary when a viewing transformation is available at sampling time. The adaptive sampling method is more robust than uniform sampling, and can be more efficient at rendering deformed, intersecting parametric surfaces.
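The restriction described above, that edge-adjacent regions differ by at most one subdivision level, can be enforced by propagating forced splits to coarser neighbours whenever a cell is refined. The following is a minimal sketch with hypothetical cell keys (level, i, j); it is not the paper's surface-sampling algorithm, only the 2:1 balancing step.

```python
# Cells are keyed as (level, i, j): at `level` the domain is a 2**level x 2**level grid,
# and `leaves` is the set of current leaf cells (a hypothetical representation).

def children(cell):
    level, i, j = cell
    return [(level + 1, 2 * i + di, 2 * j + dj) for di in (0, 1) for dj in (0, 1)]

def edge_neighbours(cell):
    level, i, j = cell
    n = 2 ** level
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield (level, i + di, j + dj)

def ancestor(cell, level):
    l, i, j = cell
    shift = l - level
    return (level, i >> shift, j >> shift)

def split(cell, leaves):
    """Split `cell`, first forcing any coarser leaf covering an edge-neighbour to split."""
    for nb in edge_neighbours(cell):
        for coarse_level in range(cell[0]):
            anc = ancestor(nb, coarse_level)
            if anc in leaves:          # a leaf coarser than `cell` touches it: the 2:1 rule would break
                split(anc, leaves)
    leaves.remove(cell)
    leaves.update(children(cell))

# usage: refining one interior cell drags its coarser neighbours along
leaves = {(0, 0, 0)}
split((0, 0, 0), leaves)
split((1, 0, 0), leaves)
split((2, 1, 1), leaves)   # forces the level-1 neighbours (1, 1, 0) and (1, 0, 1) to split first
print(sorted(leaves))
```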
---
paper_title: Topology preserving and controlled topology simplifying multiresolution isosurface extraction
paper_content:
Multiresolution methods are becoming increasingly important tools for the interactive visualization of very large data sets. Multiresolution isosurface visualization allows the user to explore volume data using simplified and coarse representations of the isosurface for overview images, and finer resolution in areas of high interest or when zooming into the data. Ideally, a coarse isosurface should have the same topological structure as the original. The topological genus of the isosurface is one important property which is often neglected in multiresolution algorithms. This results in uncontrolled topological changes which can occur whenever the level-of-detail is changed. The scope of this paper is to propose an efficient technique which allows preservation of topology as well as controlled topology simplification in multiresolution isosurface extraction.
---
paper_title: Error Indicators for Multilevel Visualization and Computing on Nested Grids
paper_content:
Nowadays computing and post-processing of simulation data is often based on efficient hierarchical methods. While multigrid methods are already established standards for fast simulation codes, multiresolution visualization methods have only recently become an important ingredient of real-time interactive post-processing. Both methodologies use local error indicators which serve as criteria for where to refine the data representation on the physical domain. In this article we give an overview of different types of error measurement on nested grids and compare them for selected applications in 2D as well as in 3D. Furthermore, it is pointed out that a certain saturation of the considered error indicator plays an important role in multilevel visualization and computing on implicitly defined adaptive grids.
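The saturation property mentioned at the end, that the error indicator of a coarse node must dominate the indicators of all its descendants so that adaptive traversal can safely stop early, amounts to a single bottom-up pass. A minimal sketch, assuming a generic tree node rather than the paper's grid hierarchy:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    error: float
    children: List["Node"] = field(default_factory=list)

def saturate(node: Node) -> float:
    """Bottom-up pass: make each error indicator >= the indicators of all descendants."""
    for child in node.children:
        node.error = max(node.error, saturate(child))
    return node.error

# usage: the indicators become monotone along every root-to-leaf path
root = Node(0.2, [Node(0.5, [Node(0.1), Node(0.7)]), Node(0.3)])
saturate(root)
print(root.error)               # 0.7
print(root.children[0].error)   # 0.7
```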
---
paper_title: ROAMing terrain: Real-time Optimally Adapting Meshes
paper_content:
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor simulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM's execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM's performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
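The split half of the dual-queue strategy can be sketched as a greedy loop over a max-priority queue keyed by screen-space error; the merge queue, the forced splits that keep the triangulation crack-free, and frame-to-frame reuse are omitted here, and the error-decay factor is made up purely for illustration.

```python
import heapq
import itertools

counter = itertools.count()  # tie-breaker so heapq never has to compare triangle dicts

def make_triangle(error, level):
    """A stand-in for a bintree triangle; `error` is its screen-space priority."""
    return {"error": error, "level": level}

def split(tri):
    """Longest-edge bisection: two children, with a made-up error decay of one half."""
    child_error = tri["error"] * 0.5
    return [make_triangle(child_error, tri["level"] + 1) for _ in range(2)]

def refine(base_triangles, triangle_budget):
    """Greedy refinement: always split the triangle whose error is currently largest."""
    heap = [(-t["error"], next(counter), t) for t in base_triangles]
    heapq.heapify(heap)
    while len(heap) < triangle_budget:
        _, _, tri = heapq.heappop(heap)
        for child in split(tri):
            heapq.heappush(heap, (-child["error"], next(counter), child))
    return [t for _, _, t in heap]

# usage: refine two base triangles up to a budget of seven triangles
mesh = refine([make_triangle(8.0, 0), make_triangle(2.0, 0)], triangle_budget=7)
print(sorted((t["level"], round(t["error"], 2)) for t in mesh))
```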
---
paper_title: Right-Triangulated Irregular Networks
paper_content:
We describe a hierarchical data structure for representing a digital terrain (height field) which contains approximations of the terrain at different levels of detail. The approximations are based on triangulations of the underlying two-dimensional space using right-angled triangles. The methods we discuss permit a single approximation to have a varying level of approximation accuracy across the surface. Thus, for example, the area close to an observer may be represented with greater detail than areas which lie outside their field of view. ::: ::: We discuss the application of this hierarchical data structure to the problem of interactive terrain visualization. We point out some of the advantages of this method in terms of memory usage and speed.
---
paper_title: Real-time, continuous level of detail rendering of height fields
paper_content:
We present an algorithm for real-time level of detail reduction and display of high-complexity polygonal surface data. The algorithm uses a compact and efficient regular grid representation, and employs a variable screen-space threshold to bound the maximum error of the projected image. A coarse level of simplification is performed to select discrete levels of detail for blocks of the surface mesh, followed by further simplification through repolygonalization in which individual mesh vertices are considered for removal. These steps compute and generate the appropriate level of detail dynamically in real-time, minimizing the number of rendered polygons and allowing for smooth changes in resolution across areas of the surface. The algorithm has been implemented for approximating and rendering digital terrain models and other height fields, and consistently performs at interactive frame rates with high image quality.
---
paper_title: Top–Down View–Dependent Terrain Triangulation using the Octagon Metric
paper_content:
---
paper_title: Visualization of large terrains made easy
paper_content:
We present an elegant and simple to implement framework for performing out-of-core visualization and view-dependent refinement of large terrain surfaces. Contrary to the recent trend of increasingly elaborate algorithms for large-scale terrain visualization, our algorithms and data structures have been designed with the primary goal of simplicity and efficiency of implementation. Our approach to managing large terrain data also departs from more conventional strategies based on data tiling. Rather than emphasizing how to segment and efficiently bring data in and out of memory, we focus on the manner in which the data is laid out to achieve good memory coherency for data accesses made in a top-down (coarse-to-fine) refinement of the terrain. We present and compare the results of using several different data indexing schemes, and propose a simple to compute index that yields substantial improvements in locality and speed over more commonly used data layouts.Our second contribution is a new and simple, yet easy to generalize method for view-dependent refinement. Similar to several published methods in this area, we use longest edge bisection in a top-down traversal of the mesh hierarchy to produce a continuous surface with subdivision connectivity. In tandem with the refinement, we perform view frustum culling and triangle stripping. These three components are done together in a single pass over the mesh. We show how this framework supports virtually any error metric, while still being highly memory and compute efficient.
---
paper_title: Terrain Simplification Simplified : A General Framework for View-Dependent Out-of-Core Visualization
paper_content:
We describe a general framework for out-of-core rendering and management of massive terrain surfaces. The two key components of this framework are: view-dependent refinement of the terrain mesh and a simple scheme for organizing the terrain data to improve coherence and reduce the number of paging events from external storage to main memory. Similar to several previously proposed methods for view-dependent refinement, we recursively subdivide a triangle mesh defined over regularly gridded data using longest-edge bisection. As part of this single, per-frame refinement pass, we perform triangle stripping, view frustum culling, and smooth blending of geometry using geomorphing. Meanwhile, our refinement framework supports a large class of error metrics, is highly competitive in terms of rendering performance, and is surprisingly simple to implement. Independent of our refinement algorithm, we also describe several data layout techniques for providing coherent access to the terrain data. By reordering the data in a manner that is more consistent with our recursive access pattern, we show that visualization of gigabyte-size data sets can be realized even on low-end, commodity PCs without the need for complicated and explicit data paging techniques. Rather, by virtue of dramatic improvements in multilevel cache coherence, we rely on the built-in paging mechanisms of the operating system to perform this task. The end result is a straightforward, simple-to-implement, pointerless indexing scheme that dramatically improves the data locality and paging performance over conventional matrix-based layouts.
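As one concrete example of what a pointerless hierarchical indexing scheme looks like (a generic level-order quadtree layout, not necessarily the specific layout evaluated in the paper), node addresses can be computed purely arithmetically from (level, i, j), so children are found without storing any pointers and coarse levels occupy a contiguous prefix of the array:

```python
def level_offset(level: int) -> int:
    """Number of nodes in all levels above `level` of a complete quadtree: (4**level - 1) // 3."""
    return (4 ** level - 1) // 3

def node_index(level: int, i: int, j: int) -> int:
    """Pointerless index of quadtree node (level, i, j), stored level by level in row-major order."""
    return level_offset(level) + i * (2 ** level) + j

def child_index(level: int, i: int, j: int, di: int, dj: int) -> int:
    """Index of one of the four children; no pointers needed, just arithmetic."""
    return node_index(level + 1, 2 * i + di, 2 * j + dj)

# usage: the root is index 0, its four children are 1..4, and so on
print(node_index(0, 0, 0))                                                # 0
print([child_index(0, 0, 0, di, dj) for di in (0, 1) for dj in (0, 1)])   # [1, 2, 3, 4]
print(node_index(2, 3, 1))                                                # 5 + 3*4 + 1 = 18
```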
---
paper_title: ROAMing terrain: Real-time Optimally Adapting Meshes
paper_content:
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor simulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM's execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM's performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
---
paper_title: Real-time, continuous level of detail rendering of height fields
paper_content:
We present an algorithm for real-time level of detail reduction and display of high-complexity polygonal surface data. The algorithm uses a compact and efficient regular grid representation, and employs a variable screen-space threshold to bound the maximum error of the projected image. A coarse level of simplification is performed to select discrete levels of detail for blocks of the surface mesh, followed by further simplification through repolygonalization in which individual mesh vertices are considered for removal. These steps compute and generate the appropriate level of detail dynamically in real-time, minimizing the number of rendered polygons and allowing for smooth changes in resolution across areas of the surface. The algorithm has been implemented for approximating and rendering digital terrain models and other height fields, and consistently performs at interactive frame rates with high image quality.
---
paper_title: Real-time optimal adaptation for planetary geometry and texture: 4-8 tile hierarchies
paper_content:
The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through the use of precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, which is a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the payload data per diamond for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions that are twice as gradual as traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared to subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed similar to the realtime optimally adapting meshes (ROAM) algorithm, as well as an adaptation of the ROAM frustum culling technique. Example applications of lake-detection and procedural terrain generation demonstrate the flexibility of the tile processing framework.
---
paper_title: BDAM - Batched Dynamic Adaptive Meshes for High Performance Terrain Visualization
paper_content:
This paper describes an efficient technique for out-of-core rendering and management of large textured terrain surfaces. The technique, called Batched Dynamic Adaptive Meshes (BDAM) , is based on a paired tree structure: a tiled quadtree for texture data and a pair of bintrees of small triangular patches for the geometry. These small patches are TINs and are constructed and optimized off-line with high quality simplification and tristripping algorithms. Hierarchical view frustum culling and view-dependent texture and geometry refinement is performed at each frame through a stateless traversal algorithm. Thanks to the batched CPU/GPU communication model, the proposed technique is not processor intensive and fully harnesses the power of current graphics hardware. Both preprocessing and rendering exploit out-of-core techniques to be fully scalable and to manage large terrain datasets.
---
paper_title: Visualization of large terrains made easy
paper_content:
We present an elegant and simple to implement framework for performing out-of-core visualization and view-dependent refinement of large terrain surfaces. Contrary to the recent trend of increasingly elaborate algorithms for large-scale terrain visualization, our algorithms and data structures have been designed with the primary goal of simplicity and efficiency of implementation. Our approach to managing large terrain data also departs from more conventional strategies based on data tiling. Rather than emphasizing how to segment and efficiently bring data in and out of memory, we focus on the manner in which the data is laid out to achieve good memory coherency for data accesses made in a top-down (coarse-to-fine) refinement of the terrain. We present and compare the results of using several different data indexing schemes, and propose a simple to compute index that yields substantial improvements in locality and speed over more commonly used data layouts.Our second contribution is a new and simple, yet easy to generalize method for view-dependent refinement. Similar to several published methods in this area, we use longest edge bisection in a top-down traversal of the mesh hierarchy to produce a continuous surface with subdivision connectivity. In tandem with the refinement, we perform view frustum culling and triangle stripping. These three components are done together in a single pass over the mesh. We show how this framework supports virtually any error metric, while still being highly memory and compute efficient.
---
paper_title: Terrain Simplification Simplified : A General Framework for View-Dependent Out-of-Core Visualization
paper_content:
We describe a general framework for out-of-core rendering and management of massive terrain surfaces. The two key components of this framework are: view-dependent refinement of the terrain mesh and a simple scheme for organizing the terrain data to improve coherence and reduce the number of paging events from external storage to main memory. Similar to several previously proposed methods for view-dependent refinement, we recursively subdivide a triangle mesh defined over regularly gridded data using longest-edge bisection. As part of this single, per-frame refinement pass, we perform triangle stripping, view frustum culling, and smooth blending of geometry using geomorphing. Meanwhile, our refinement framework supports a large class of error metrics, is highly competitive in terms of rendering performance, and is surprisingly simple to implement. Independent of our refinement algorithm, we also describe several data layout techniques for providing coherent access to the terrain data. By reordering the data in a manner that is more consistent with our recursive access pattern, we show that visualization of gigabyte-size data sets can be realized even on low-end, commodity PCs without the need for complicated and explicit data paging techniques. Rather, by virtue of dramatic improvements in multilevel cache coherence, we rely on the built-in paging mechanisms of the operating system to perform this task. The end result is a straightforward, simple-to-implement, pointerless indexing scheme that dramatically improves the data locality and paging performance over conventional matrix-based layouts.
---
paper_title: ROAMing terrain: Real-time Optimally Adapting Meshes
paper_content:
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor simulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM's execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM's performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
---
paper_title: Right-Triangulated Irregular Networks
paper_content:
We describe a hierarchical data structure for representing a digital terrain (height field) which contains approximations of the terrain at different levels of detail. The approximations are based on triangulations of the underlying two-dimensional space using right-angled triangles. The methods we discuss permit a single approximation to have a varying level of approximation accuracy across the surface. Thus, for example, the area close to an observer may be represented with greater detail than areas which lie outside their field of view. ::: ::: We discuss the application of this hierarchical data structure to the problem of interactive terrain visualization. We point out some of the advantages of this method in terms of memory usage and speed.
---
paper_title: Planet-sized batched dynamic adaptive meshes (P-BDAM)
paper_content:
We describe an efficient technique for out-of-core management and interactive rendering of planet sized textured terrain surfaces. The technique, called planet-sized batched dynamic adaptive meshes (P-BDAM), extends the BDAM approach by using as basic primitive a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single precision floating points; we exploit a compressed out of core representation and speculative prefetching for hiding disk latency during rendering of out-of-core data; we efficiently construct high quality simplified representations with a novel distributed out of core simplification algorithm working on a standard PC network.
---
paper_title: Real-time, continuous level of detail rendering of height fields
paper_content:
We present an algorithm for real-time level of detail reduction and display of high-complexity polygonal surface data. The algorithm uses a compact and efficient regular grid representation, and employs a variable screen-space threshold to bound the maximum error of the projected image. A coarse level of simplification is performed to select discrete levels of detail for blocks of the surface mesh, followed by further simplification through repolygonalization in which individual mesh vertices are considered for removal. These steps compute and generate the appropriate level of detail dynamically in real-time, minimizing the number of rendered polygons and allowing for smooth changes in resolution across areas of the surface. The algorithm has been implemented for approximating and rendering digital terrain models and other height fields, and consistently performs at interactive frame rates with high image quality.
---
paper_title: An Integrated Global GIS and Visual Simulation System
paper_content:
This paper reports on an integrated visual simulation system supporting visualization of global multiresolution terrain elevation and imagery data, static and dynamic 3D objects with multiple levels of detail, non-protrusive features such as roads and rivers, distributed simulation and real-time sensor input, and an embedded geographic information system. The requirements of real-time rendering, very large datasets, and heterogeneous detail management strongly affect the structure of this system. Use of hierarchical spatial data structures and multiple coordinate systems allow for visualization and manipulation of huge terrain datasets spanning the entire surface of the Earth at resolutions well below one meter. The multithreaded nature of the system supports multiple windows with independent, stereoscopic views. The system is portable, built on OpenGL, POSIX threads, and X11/Motif windowed interface. It has been tested and evaluated in the field with a variety of terrain data, updates due to real-time sensor input, and display of networked DIS simulations.
---
paper_title: The Alps at your fingertips: virtual reality and geoinformation systems
paper_content:
We advocate a desktop virtual reality (VR) interface to a geographic information system (GIS). The navigational capability to explore large topographic scenes is a powerful metaphor and a natural way of interacting with a GIS. VR systems succeed in providing visual realism and real-time navigation and interaction, but fail to cope with very large amounts of data and to provide the general functionality of information systems. We suggest a way to overcome these problems. We describe a prototype system, called ViRGIS (Virtual Reality GIS), that integrates two system platforms: a client that runs the VR component interacts via a (local or wide area) network with a server that runs an object-oriented database containing geographic data. For the purpose of accessing data efficiently, we describe how to integrate a geometric index into the database, and how to perform the operations that are requested in a real-time trip through the virtual world.
---
paper_title: Real-time optimal adaptation for planetary geometry and texture: 4-8 tile hierarchies
paper_content:
The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through the use of precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, which is a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the payload data per diamond for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions that are twice as gradual as traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared to subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed similar to the realtime optimally adapting meshes (ROAM) algorithm, as well as an adaptation of the ROAM frustum culling technique. Example applications of lake-detection and procedural terrain generation demonstrate the flexibility of the tile processing framework.
---
paper_title: TerraVision II: Visualizing Massive Terrain Databases in VRML
paper_content:
To disseminate 3D maps and spatial data over the Web, we designed massive terrain data sets accessible through either a VRML browser or the customized TerraVision II browser. Although not required to view the content, TerraVision II lets the user perform specialized browser level optimizations that offer increased efficiency and seamless interaction with the terrain data. We designed our framework to simplify terrain data maintenance and to let users dynamically select particular sets of geo-referenced data. Our implementation uses Java scripting to extend VRML's base functionality and the External Authoring Interface to offer application-specific management of the virtual geographic environment.
---
paper_title: BDAM - Batched Dynamic Adaptive Meshes for High Performance Terrain Visualization
paper_content:
This paper describes an efficient technique for out-of-core rendering and management of large textured terrain surfaces. The technique, called Batched Dynamic Adaptive Meshes (BDAM) , is based on a paired tree structure: a tiled quadtree for texture data and a pair of bintrees of small triangular patches for the geometry. These small patches are TINs and are constructed and optimized off-line with high quality simplification and tristripping algorithms. Hierarchical view frustum culling and view-dependent texture and geometry refinement is performed at each frame through a stateless traversal algorithm. Thanks to the batched CPU/GPU communication model, the proposed technique is not processor intensive and fully harnesses the power of current graphics hardware. Both preprocessing and rendering exploit out-of-core techniques to be fully scalable and to manage large terrain datasets.
---
paper_title: Visualization of large terrains made easy
paper_content:
We present an elegant and simple to implement framework for performing out-of-core visualization and view-dependent refinement of large terrain surfaces. Contrary to the recent trend of increasingly elaborate algorithms for large-scale terrain visualization, our algorithms and data structures have been designed with the primary goal of simplicity and efficiency of implementation. Our approach to managing large terrain data also departs from more conventional strategies based on data tiling. Rather than emphasizing how to segment and efficiently bring data in and out of memory, we focus on the manner in which the data is laid out to achieve good memory coherency for data accesses made in a top-down (coarse-to-fine) refinement of the terrain. We present and compare the results of using several different data indexing schemes, and propose a simple to compute index that yields substantial improvements in locality and speed over more commonly used data layouts.Our second contribution is a new and simple, yet easy to generalize method for view-dependent refinement. Similar to several published methods in this area, we use longest edge bisection in a top-down traversal of the mesh hierarchy to produce a continuous surface with subdivision connectivity. In tandem with the refinement, we perform view frustum culling and triangle stripping. These three components are done together in a single pass over the mesh. We show how this framework supports virtually any error metric, while still being highly memory and compute efficient.
---
paper_title: Smooth view-dependent level-of-detail control and its application to terrain rendering
paper_content:
The key to real-time rendering of large-scale surfaces is to locally adapt surface geometric complexity to changing view parameters. Several schemes have been developed to address this problem of view-dependent level-of-detail control. Among these, the view-dependent progressive mesh (VDPM) framework represents an arbitrary triangle mesh as a hierarchy of geometrically optimized refinement transformations, from which accurate approximating meshes can be efficiently retrieved. In this paper we extend the general VDPM framework to provide temporal coherence through the run-time creation of geomorphs. These geomorphs eliminate "popping" artifacts by smoothly interpolating geometry. Their implementation requires new output-sensitive data structures, which have the added benefit of reducing memory use. We specialize the VDPM framework to the important case of terrain rendering. To handle huge terrain grids, we introduce a block-based simplification scheme that constructs a progressive mesh as a hierarchy of block refinements. We demonstrate the need for an accurate approximation metric during simplification. Our contributions are highlighted in a real-time flyover of a large, rugged terrain. Notably, the use of geomorphs results in visually smooth rendering even at 72 frames/sec on a graphics workstation.
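The geomorphs referred to above are just per-vertex interpolations between the position a vertex has in the coarse mesh and its true position in the refined mesh, advanced over a short time window; a minimal sketch with made-up coordinates:

```python
def geomorph(coarse_pos, fine_pos, t):
    """Linearly blend a vertex from its coarse position (t = 0) to its fine position (t = 1)."""
    t = max(0.0, min(1.0, t))
    return tuple(c + (f - c) * t for c, f in zip(coarse_pos, fine_pos))

# usage: over a few frames the newly refined vertex glides up to its true height
coarse = (10.0, 20.0, 105.0)   # position predicted by the coarser mesh
fine = (10.0, 20.0, 112.5)     # true position stored at the finer level
for frame, t in enumerate((0.0, 0.25, 0.5, 0.75, 1.0)):
    print(frame, geomorph(coarse, fine, t))
```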
---
paper_title: C-BDAM – compressed batched dynamic adaptive meshes for terrain rendering
paper_content:
We describe a compressed multiresolution representation for supporting interactive rendering of very large planar and spherical terrain surfaces. The technique, called Compressed Batched Dynamic Adaptive Meshes (C-BDAM), is an extension of the BDAM and P-BDAM chunked level-of-detail hierarchy. In the C-BDAM approach, all patches share the same regular triangulation connectivity and incrementally encode their vertex attributes using a quantized representation of the difference with respect to values predicted from the coarser level. The structure provides a number of benefits: simplicity of data structures, overall geometric continuity for planar and spherical domains, support for variable resolution input data, management of multiple vertex attributes, efficient compression and fast construction times, ability to support maximum-error metrics, real-time decompression and shaded rendering with configurable variable level-of-detail extraction, and runtime detail synthesis. The efficiency of the approach and the achieved compression rates are demonstrated on a number of test cases, including the interactive visualization of a 29 gigasample reconstruction of the whole planet Earth created from high resolution SRTM data.
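The predict-then-encode step can be illustrated on a 1-D signal (a toy sketch; the actual codec works on regular 2-D patches with a wavelet-like scheme): coarse samples are interpolated to the fine resolution, the residual is quantized with a step that bounds the maximum reconstruction error, and decoding adds the dequantized residual back to the prediction.

```python
import numpy as np

def predict_from_coarse(coarse: np.ndarray) -> np.ndarray:
    """Upsample a coarse 1-D signal to 2*len - 1 samples by linear interpolation."""
    fine_len = 2 * len(coarse) - 1
    x_fine = np.linspace(0, len(coarse) - 1, fine_len)
    return np.interp(x_fine, np.arange(len(coarse)), coarse)

def encode(fine: np.ndarray, coarse: np.ndarray, step: float) -> np.ndarray:
    """Quantized residual between the true fine data and its prediction."""
    return np.round((fine - predict_from_coarse(coarse)) / step).astype(np.int32)

def decode(residual_q: np.ndarray, coarse: np.ndarray, step: float) -> np.ndarray:
    return predict_from_coarse(coarse) + residual_q * step

# usage with made-up samples: small integer residuals, error bounded by step / 2
coarse = np.array([100.0, 110.0, 95.0])
fine = np.array([100.0, 108.0, 110.0, 101.0, 95.0])
step = 0.5
q = encode(fine, coarse, step)
print(q)
print(np.max(np.abs(decode(q, coarse, step) - fine)))
```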
---
paper_title: Terrain Simplification Simplified : A General Framework for View-Dependent Out-of-Core Visualization
paper_content:
We describe a general framework for out-of-core rendering and management of massive terrain surfaces. The two key components of this framework are: view-dependent refinement of the terrain mesh and a simple scheme for organizing the terrain data to improve coherence and reduce the number of paging events from external storage to main memory. Similar to several previously proposed methods for view-dependent refinement, we recursively subdivide a triangle mesh defined over regularly gridded data using longest-edge bisection. As part of this single, per-frame refinement pass, we perform triangle stripping, view frustum culling, and smooth blending of geometry using geomorphing. Meanwhile, our refinement framework supports a large class of error metrics, is highly competitive in terms of rendering performance, and is surprisingly simple to implement. Independent of our refinement algorithm, we also describe several data layout techniques for providing coherent access to the terrain data. By reordering the data in a manner that is more consistent with our recursive access pattern, we show that visualization of gigabyte-size data sets can be realized even on low-end, commodity PCs without the need for complicated and explicit data paging techniques. Rather, by virtue of dramatic improvements in multilevel cache coherence, we rely on the built-in paging mechanisms of the operating system to perform this task. The end result is a straightforward, simple-to-implement, pointerless indexing scheme that dramatically improves the data locality and paging performance over conventional matrix-based layouts.
---
paper_title: Space Filling Curves and Their Use in the Design of Geometric Data Structures
paper_content:
We are given a two-dimensional square grid of size N×N, where N := 2^n and n ≥ 0. A space filling curve (SFC) is a numbering of the cells of this grid with numbers from c+1 to c+N^2, for some c ≥ 0. We call a SFC recursive (RSFC) if it can be recursively divided into four square RSFCs of equal size. Examples of well-known RSFCs include the Hilbert curve, the z-curve, and the Gray code.
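Of the RSFCs listed above, the z-curve is the simplest to compute: the cell number is obtained by interleaving the bits of the row and column indices, which is why spatially nearby cells tend to receive nearby numbers. A small sketch:

```python
def morton_index(i: int, j: int, n_bits: int) -> int:
    """Z-curve (Morton) number of cell (i, j) in a 2**n_bits x 2**n_bits grid."""
    index = 0
    for bit in range(n_bits):
        index |= ((i >> bit) & 1) << (2 * bit + 1)
        index |= ((j >> bit) & 1) << (2 * bit)
    return index

# usage: visit a 4x4 grid in z-order
n = 2
order = sorted((morton_index(i, j, n), (i, j)) for i in range(2 ** n) for j in range(2 ** n))
print([cell for _, cell in order])
```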
---
paper_title: Virtual GIS: a real-time 3D geographic information system
paper_content:
Advances in computer graphics hardware and algorithms, visualization, and interactive techniques for analysis offer the components for a highly integrated, efficient real-time 3D Geographic Information System. We have developed "Virtual GIS", a system with truly immersive capability for navigating and understanding complex and dynamic terrain-based databases. The system provides the means for visualizing terrain models consisting of elevation and imagery data, along with GIS raster layers, protruding features, buildings, vehicles, and other objects. We have implemented window-based and virtual reality versions and in both cases provide a direct manipulation, visual interface for accessing the GIS data. Unique terrain data structures and algorithms allow rendering of large, high resolution datasets at interactive rates.
---
paper_title: Planet-sized batched dynamic adaptive meshes (P-BDAM)
paper_content:
We describe an efficient technique for out-of-core management and interactive rendering of planet sized textured terrain surfaces. The technique, called planet-sized batched dynamic adaptive meshes (P-BDAM), extends the BDAM approach by using as basic primitive a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single precision floating points; we exploit a compressed out of core representation and speculative prefetching for hiding disk latency during rendering of out-of-core data; we efficiently construct high quality simplified representations with a novel distributed out of core simplification algorithm working on a standard PC network.
---
paper_title: Geometry clipmaps: terrain rendering using nested regular grids
paper_content:
Rendering throughput has reached a level that enables a novel approach to level-of-detail (LOD) control in terrain rendering. We introduce the geometry clipmap, which caches the terrain in a set of nested regular grids centered about the viewer. The grids are stored as vertex buffers in fast video memory, and are incrementally refilled as the viewpoint moves. This simple framework provides visual continuity, uniform frame rate, complexity throttling, and graceful degradation. Moreover it allows two new exciting real-time functionalities: decompression and synthesis. Our main dataset is a 40GB height map of the United States. A compressed image pyramid reduces the size by a remarkable factor of 100, so that it fits entirely in memory. This compressed data also contributes normal maps for shading. As the viewer approaches the surface, we synthesize grid levels finer than the stored terrain using fractal noise displacement. Decompression, synthesis, and normal-map computations are incremental, thereby allowing interactive flight at 60 frames/sec.
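The nesting can be sketched by computing, for each level, a fixed-size window of samples centred on the viewer and snapped to that level's sample spacing, so the window always moves by whole samples and can be refilled incrementally. This is only a schematic helper with made-up parameters, not the paper's GPU implementation:

```python
def clipmap_windows(viewer_xy, num_levels, window_size, finest_spacing):
    """For each level, return (spacing, (x0, y0, x1, y1)) of the nested active region."""
    windows = []
    for level in range(num_levels):
        spacing = finest_spacing * (2 ** level)        # coarser levels cover more ground
        half = window_size * spacing / 2.0
        # snap the window origin to the level's grid so updates shift by whole samples
        x0 = round((viewer_xy[0] - half) / spacing) * spacing
        y0 = round((viewer_xy[1] - half) / spacing) * spacing
        windows.append((spacing, (x0, y0, x0 + window_size * spacing, y0 + window_size * spacing)))
    return windows

# usage: three nested windows of 255 x 255 samples around the viewer
for spacing, box in clipmap_windows((1234.5, 987.6), num_levels=3, window_size=255, finest_spacing=1.0):
    print(spacing, box)
```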
---
paper_title: Real-time optimal adaptation for planetary geometry and texture: 4-8 tile hierarchies
paper_content:
The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through the use of precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, which is a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the payload data per diamond for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions that are twice as gradual as traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared to subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed similar to the realtime optimally adapting meshes (ROAM) algorithm, as well as an adaptation of the ROAM frustum culling technique. Example applications of lake-detection and procedural terrain generation demonstrate the flexibility of the tile processing framework.
---
paper_title: C-BDAM – compressed batched dynamic adaptive meshes for terrain rendering
paper_content:
We describe a compressed multiresolution representation for supporting interactive rendering of very large planar and spherical terrain surfaces. The technique, called Compressed Batched Dynamic Adaptive Meshes (C-BDAM), is an extension of the BDAM and P-BDAM chunked level-of-detail hierarchy. In the C-BDAM approach, all patches share the same regular triangulation connectivity and incrementally encode their vertex attributes using a quantized representation of the difference with respect to values predicted from the coarser level. The structure provides a number of benefits: simplicity of data structures, overall geometric continuity for planar and spherical domains, support for variable resolution input data, management of multiple vertex attributes, efficient compression and fast construction times, ability to support maximum-error metrics, real-time decompression and shaded rendering with configurable variable level-of-detail extraction, and runtime detail synthesis. The efficiency of the approach and the achieved compression rates are demonstrated on a number of test cases, including the interactive visualization of a 29 gigasample reconstruction of the whole planet Earth created from high resolution SRTM data.
---
paper_title: Fast progressive image coding without wavelets
paper_content:
We introduce a new image compression algorithm that allows progressive image reconstruction - both in resolution and in fidelity, with a fully embedded bit-stream. The algorithm is based on bit-plane entropy coding of reordered transform coefficients, similar to the progressive wavelet codec (PWC) previously introduced. Unlike PWC, however, our new progressive transform coder (PTC) does not use wavelets; it performs the space-frequency decomposition step via a new lapped biorthogonal transform (LBT). PTC achieves a rate distortion performance that is comparable (within 2%) to that of the state-of-the-art SPIHT (set partitioning in hierarchical trees) codec. However, thanks to the use of the LBT, the space-frequency decomposition step in PTC reduces the number of multiplications per pixel by a factor of 2.7, and the number of additions by about 15%, when compared to the fastest possible implementation of the "9/7" wavelet transform via lifting. Furthermore, since most of the computation in the LBT is in fact performed by a DCT, our PTC codec can make full use of fast software and hardware modules for 1D and 2D DCT.
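The embedded, progressive property described above ultimately comes from bit-plane coding: quantized coefficients are emitted one bit plane at a time, most significant first, so the stream can be truncated anywhere and still decoded to a coarser-fidelity approximation. The toy sketch below shows only that bit-plane idea, with no transform or entropy coder:

```python
def to_bit_planes(coeffs, num_planes):
    """Sign flags plus magnitude bit planes, most significant plane first."""
    signs = [c < 0 for c in coeffs]
    mags = [abs(c) for c in coeffs]
    planes = [[(m >> p) & 1 for m in mags] for p in range(num_planes - 1, -1, -1)]
    return signs, planes

def from_bit_planes(signs, planes, planes_kept):
    """Reconstruct using only the first `planes_kept` planes (progressive fidelity)."""
    num_planes = len(planes)
    mags = [0] * len(signs)
    for k in range(planes_kept):
        shift = num_planes - 1 - k
        for i, bit in enumerate(planes[k]):
            mags[i] |= bit << shift
    return [-m if s else m for s, m in zip(signs, mags)]

# usage: the approximation sharpens as more planes are kept
coeffs = [37, -5, 12, 0, -22]
signs, planes = to_bit_planes(coeffs, num_planes=6)
for kept in (2, 4, 6):
    print(kept, from_bit_planes(signs, planes, kept))
```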
---
paper_title: An Integrated Global GIS and Visual Simulation System
paper_content:
This paper reports on an integrated visual simulation system supporting visualization of global multiresolution terrain elevation and imagery data, static and dynamic 3D objects with multiple levels of detail, non-protrusive features such as roads and rivers, distributed simulation and real-time sensor input, and an embedded geographic information system. The requirements of real-time rendering, very large datasets, and heterogeneous detail management strongly affect the structure of this system. Use of hierarchical spatial data structures and multiple coordinate systems allow for visualization and manipulation of huge terrain datasets spanning the entire surface of the Earth at resolutions well below one meter. The multithreaded nature of the system supports multiple windows with independent, stereoscopic views. The system is portable, built on OpenGL, POSIX threads, and X11/Motif windowed interface. It has been tested and evaluated in the field with a variety of terrain data, updates due to real-time sensor input, and display of networked DIS simulations.
---
paper_title: TerraVision II: Visualizing Massive Terrain Databases in VRML
paper_content:
To disseminate 3D maps and spatial data over the Web, we designed massive terrain data sets accessible through either a VRML browser or the customized TerraVision II browser. Although not required to view the content, TerraVision II lets the user perform specialized browser level optimizations that offer increased efficiency and seamless interaction with the terrain data. We designed our framework to simplify terrain data maintenance and to let users dynamically select particular sets of geo-referenced data. Our implementation uses Java scripting to extend VRML's base functionality and the External Authoring Interface to offer application-specific management of the virtual geographic environment.
---
paper_title: Planet-sized batched dynamic adaptive meshes (P-BDAM)
paper_content:
We describe an efficient technique for out-of-core management and interactive rendering of planet sized textured terrain surfaces. The technique, called planet-sized batched dynamic adaptive meshes (P-BDAM), extends the BDAM approach by using as basic primitive a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single precision floating points; we exploit a compressed out of core representation and speculative prefetching for hiding disk latency during rendering of out-of-core data; we efficiently construct high quality simplified representations with a novel distributed out of core simplification algorithm working on a standard PC network.
---
| Title: Survey of semi-regular multiresolution models for interactive terrain rendering
Section 1: Introduction
Description 1: This section introduces the significance and challenges of efficient interactive visualization of large digital elevation models (DEMs) and overviews the multiresolution methods for interactive terrain rendering.
Section 2: Background and motivation
Description 2: This section provides foundational concepts and motivations behind multiresolution terrain models, discussing the encoding, extraction, and rendering processes.
Section 3: Non-conforming and limited adaptivity techniques: tiled blocks and nested regular grids
Description 3: This section explores techniques based on non-conforming representations, focusing on tiled blocks and nested regular grids, and their implementation.
Section 4: Variable resolution triangulation using quadtree and triangle bin-tree subdivision
Description 4: This section examines methods based on quadtree and triangle bin-tree triangulation, emphasizing their ability to create continuous variable resolution surfaces.
Section 5: Quadtree triangulation
Description 5: This section details algorithms and data structures associated with quadtree-based adaptive triangulation, including restricted quadtrees and continuous LOD quadtree methods.
Section 6: Triangle bin-trees
Description 6: This section discusses triangle bin-tree-based triangulation methods, such as ROAM, RTIN, and related efficient data storage and processing approaches.
Section 7: Cluster triangulations
Description 7: This section describes recent techniques that use cluster-based triangulation approaches to optimize performance, covering various methods like tiled blocks, cached triangle bin-trees, and combining regular and irregular triangulations.
Section 8: LOD error metric
Description 8: This section reviews the major error metrics proposed for terrain triangulation algorithms, including object-space and image-space approximation errors.
Section 9: System issues
Description 9: This section addresses system-level aspects of terrain visualization, such as dynamic scene management, out-of-core data organization, and client-server architecture.
Section 10: Conclusions
Description 10: This section summarizes the survey, highlighting contributions, current trends, and open research problems in the field of multiresolution terrain rendering. |
A Survey on Ambient Intelligence in Healthcare | 10 | ---
paper_title: Ambient intelligence: A survey
paper_content:
In this article we survey ambient intelligence (AmI), including its applications, some of the technologies it uses, and its social and ethical implications. The applications include AmI at home, care of the elderly, healthcare, commerce, and business, recommender systems, museums and tourist scenarios, and group decision making. Among technologies, we focus on ambient data management and artificial intelligence; for example planning, learning, event-condition-action rules, temporal reasoning, and agent-oriented technologies. The survey is not intended to be exhaustive, but to convey a broad range of applications, technologies, and technical, social, and ethical challenges.
---
paper_title: Ambient Intelligence, Wireless Networking, And Ubiquitous Computing
paper_content:
Topics covered include: Networking in Miniature Computers and Embedded Systems; Ubiquitous Computing; Context Transparency and Awareness; Computational Intelligence; Computer Architectures for Ambient Intelligence; 3G and Beyond Services and Architectures for Mobile User Support; Semantic Web, Knowledge Management, and Discovery for Network Services; Middleware and Distributed Computing Issues for Ubiquitous Computing Using Wireless and Mobile Technologies; Polymorphic, Programmable/Active Networks for Mobile and Wireless Environments; Dynamic Data- and Model-Driven Network Architectures and Services; Nomadic User Support and Management; Service Provisioning for Mobile/Wireless Users; Security and Privacy in Mobile and Wireless World; Peer-to-Peer and Grid-Based Services for Mobile Users.
---
paper_title: Ambient Intelligence: Technologies, Applications, and Opportunities
paper_content:
Ambient intelligence is an emerging discipline that brings intelligence to our everyday environments and makes those environments sensitive to us. Ambient intelligence (AmI) research builds upon advances in sensors and sensor networks, pervasive computing, and artificial intelligence. Because these contributing fields have experienced tremendous growth in the last few years, AmI research has strengthened and expanded. Because AmI research is maturing, the resulting technologies promise to revolutionize daily human life by making people's surroundings flexible and adaptive. In this paper, we provide a survey of the technologies that comprise ambient intelligence and of the applications that are dramatically affected by it. In particular, we specifically focus on the research that makes AmI technologies "intelligent". We also highlight challenges and opportunities that AmI researchers will face in the coming years.
---
paper_title: Applications of Smartphones for Ubiquitous Health Monitoring and Wellbeing Management
paper_content:
Advances in smartphone technology and data communications facilitate the use of ubiquitous health monitoring and mobile health applications as a solution of choice for the overwhelming problems of the healthcare system. In addition to easier management and seamless access to historical records, ubiquitous technology has the potential to motivate users to take an active role and manage their own conditions. In this paper we present the capabilities of the current generation of smartphones and possible applications for ubiquitous health monitoring and wellness management. We describe the architecture and organization of ubiquitous health monitoring systems, Body Sensor Networks, and the integration of wearable and environmental sensors. We also describe mainstream mobile health-related applications in today's mobile marketplaces such as the Apple App Store and Google Android Marketplace. Finally, we present the development of UAHealth - our integrated mobile health monitoring system for wellness management, designed to monitor physical activity, weight, and heart activity.
---
paper_title: Pervasive Computing in Health Care: Smart Spaces and Enterprise Information Systems
paper_content:
Middleware for pervasive computing is an active area of research that is now starting to mature. Prototypical systems have been developed and used for demonstration applications in a number of domains. However, realistic pervasive environments of the future will need to integrate with enterprise systems, including software, hardware, databases, standards, and life cycle. Designing and implementing "enterprise-strength" pervasive applications today is not an easy task, as we illustrate with a simple case study of a healthcare scenario. We use the case study to draw conclusions about requirements placed on future pervasive systems. We present a scenario where context middleware, within a few years, might assist in clinical-care business practices. Patients are instrumented with vital-sign monitors and with a means of determining their location. Physicians and nurses have wireless PDAs, also instrumented with a means of determining their location. Context-aware applications help optimize physician rounds, support nurse triage, simplify the user interface to pervasive devices, provide additional data for billing reconciliation, and provide clinical communications. A physician, Dr. Able, arrives on the second floor of the hospital. A graphic appears on his PDA showing the rooms assigned to his patients. Rooms are highlighted depending on whether the patient is currently in the room, and ambulating patients are notified that the doctor is making rounds. Relatives waiting elsewhere in the hospital are also notified so that they
---
paper_title: Development and Validation of a Smartphone Heart Rate Acquisition Application for Health Promotion and Wellness Telehealth Applications
paper_content:
Objective. Current generation smartphones' video camera technologies enable photoplethysmographic (PPG) acquisition and heart rate (HR) measurement. The study objective was to develop an Android application and compare HRs derived from a Motorola Droid to electrocardiograph (ECG) and Nonin 9560BT pulse oximeter readings during various movement-free tasks. Materials and Methods. HRs were collected simultaneously from 14 subjects, ages 20 to 58, healthy or with clinical conditions, using the 3 devices during 5-minute periods while at rest, reading aloud under observation, and playing a video game. Correlation between the 3 devices was determined, and Bland-Altman plots for all possible pairs of devices across all conditions assessed agreement. Results. Across conditions, all device pairs showed high correlations. Bland-Altman plots further revealed the Droid as a valid measure for HR acquisition. Across all conditions, when the Droid was compared to ECG, 95% of the data points (differences between devices) fell within the limits of agreement. Conclusion. The Android application provides valid HRs at varying levels of movement-free mental/perceptual motor exertion. The lack of electrode patches or wireless sensor telemetry straps makes it advantageous for use in mobile-phone-delivered health promotion and wellness programs. Further validation is needed to determine its applicability while engaging in physical movement-related activities.
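The Bland-Altman agreement analysis used above can be illustrated with a short sketch: per-sample differences between the two devices are compared against the bias and the 95% limits of agreement (bias ± 1.96·SD). The heart-rate values below are made up for illustration and are not the study's data.

```python
# Illustrative sketch (not from the paper): Bland-Altman agreement analysis
# between two heart-rate measurement devices. Data values are made up.
import numpy as np

hr_ecg = np.array([62, 71, 80, 95, 67, 74, 88, 59, 102, 77], dtype=float)   # reference device
hr_phone = np.array([63, 70, 82, 93, 68, 75, 86, 60, 104, 78], dtype=float) # smartphone PPG

diff = hr_phone - hr_ecg          # per-sample difference between devices
mean = (hr_phone + hr_ecg) / 2.0  # per-sample mean of the two devices

bias = diff.mean()                 # systematic offset between devices
loa = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits of agreement
print(f"bias = {bias:.2f} bpm, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] bpm")

# Agreement is judged by how many differences fall inside the limits.
inside = np.mean(np.abs(diff - bias) <= loa)
print(f"{inside:.0%} of samples fall within the limits of agreement")
```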
---
paper_title: A Survey on Wireless Body Area Networks
paper_content:
The increasing use of wireless networks and the constant miniaturization of electrical devices have enabled the development of Wireless Body Area Networks (WBANs). In these networks, various sensors are attached to clothing or the body, or even implanted under the skin. The wireless nature of the network and the wide variety of sensors offer numerous new, practical, and innovative applications to improve health care and quality of life. The sensors of a WBAN measure, for example, the heartbeat or body temperature, or record a prolonged electrocardiogram. Using a WBAN, the patient experiences greater physical mobility and is no longer compelled to stay in the hospital. This paper offers a survey of the concept of Wireless Body Area Networks. First, we focus on some applications, with special interest in patient monitoring. Then the communication in a WBAN and its positioning among related technologies is discussed. An overview of current research on the physical layer and existing MAC and network protocols is given. Further, cross-layer design and quality of service are discussed. As WBANs are placed on the human body and often transport private data, security is also considered. An overview of current and past projects is given. Finally, the open research issues and challenges are pointed out.
---
paper_title: Body Area Networks: A Survey
paper_content:
Advances in wireless communication technologies, such as wearable and implantable biosensors, along with recent developments in the embedded computing area are enabling the design, development, and implementation of body area networks. This class of networks is paving the way for the deployment of innovative healthcare monitoring applications. In the past few years, much of the research in the area of body area networks has focused on issues related to wireless sensor designs, sensor miniaturization, low-power sensor circuitry, signal processing, and communications protocols. In this paper, we present an overview of body area networks, and a discussion of BAN communications types and their related issues. We provide a detailed investigation of sensor devices, physical layer, data link layer, and radio technology aspects of BAN research. We also present a taxonomy of BAN projects that have been introduced/proposed to date. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make BANs truly ubiquitous for a wide range of applications.
---
paper_title: HomeMesh: a low-cost indoor wireless mesh for home networking
paper_content:
Wi-Fi access technology has become popular in recent years. Many users nowadays use Wi-Fi to gain wireless access to the Internet from offices, public libraries, shopping malls, homes, and other places. However, current Wi-Fi deployment is limited to areas where wired LAN is available. Due to its relatively short transmission range in indoor environments (typically several tens of meters), Wi-Fi coverage needs to be extended significantly to provide full coverage of a given area. The wireless mesh network (WMN) is a practical and effective solution. In this article we present HomeMesh, an off-the-shelf, simple, and cost-effective WMN for the indoor home environment. HomeMesh is based on simple protocols, implementable on ordinary notebooks or PCs, and is compatible with existing Wi-Fi APs and clients (i.e., no AP or client modifications). To achieve better end-to-end delay and throughput, HomeMesh dynamically selects its access path based on the ETX metric. We have implemented HomeMesh and conducted proof-of-concept experiments in an indoor environment. Our mesh solution is shown to be effective in improving Wi-Fi services.
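The ETX-based path selection mentioned above can be sketched as follows. ETX of a single link is commonly estimated as 1/(df·dr), where df and dr are the forward and reverse probe delivery ratios, and a path's ETX is the sum of its links' ETX values; the sketch below uses hypothetical numbers and is not the HomeMesh implementation.

```python
# Illustrative sketch (not the HomeMesh implementation): selecting an access path
# by the ETX metric. Link ETX = 1 / (df * dr), path ETX = sum of link ETX values.
# Delivery ratios below are hypothetical probe statistics.

def link_etx(df: float, dr: float) -> float:
    """Expected transmission count for one link, given forward/reverse delivery ratios."""
    return 1.0 / (df * dr)

def path_etx(links):
    """Total expected transmissions along a path (list of (df, dr) pairs)."""
    return sum(link_etx(df, dr) for df, dr in links)

candidate_paths = {
    "client -> AP":          [(0.6, 0.7)],                # direct but lossy link
    "client -> relay -> AP": [(0.95, 0.9), (0.9, 0.92)],  # two good links
}

best = min(candidate_paths, key=lambda name: path_etx(candidate_paths[name]))
for name, links in candidate_paths.items():
    print(f"{name}: ETX = {path_etx(links):.2f}")
print(f"selected path: {best}")
```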
---
paper_title: Sensor Networks for Ambient Intelligence
paper_content:
Due to rapid advances in networking and sensing technology we are witnessing a growing interest in sensor networks, in which a variety of sensors are connected to each other and to computational devices capable of multimodal signal processing and data analysis. Such networks are seen to play an increasingly important role as key enablers in emerging pervasive computing technologies. In the first part of this paper we give an overview of recent developments in the area of multimodal sensor networks, paying special attention to ambient intelligence applications. In the second part, we discuss how the time series generated by data streams emanating from the sensors can be mined for temporal patterns, indicating cross-sensor signal correlations.
---
paper_title: ReTrust: Attack-Resistant and Lightweight Trust Management for Medical Sensor Networks
paper_content:
Wireless medical sensor networks (MSNs) enable ubiquitous health monitoring of users during their everyday lives, at health sites, without restricting their freedom. Establishing trust among distributed network entities has been recognized as a powerful tool to improve the security and performance of distributed networks such as mobile ad hoc networks and sensor networks. However, most existing trust systems are not well suited for MSNs due to the unique operational and security requirements of MSNs. Moreover, similar to most security schemes, trust management methods themselves can be vulnerable to attacks. Unfortunately, this issue is often ignored in existing trust systems. In this paper, we identify the security and performance challenges facing a sensor network for wireless medical monitoring and suggest it should follow a two-tier architecture. Based on such an architecture, we develop an attack-resistant and lightweight trust management scheme named ReTrust. This paper also reports the experimental results of the Collection Tree Protocol using our proposed system in a network of TelosB motes, which show that ReTrust not only can efficiently detect malicious/faulty behaviors, but can also significantly improve the network performance in practice.
---
paper_title: Wireless mesh networks: a survey
paper_content:
Wireless mesh networks (WMNs) consist of mesh routers and mesh clients, where mesh routers have minimal mobility and form the backbone of WMNs. They provide network access for both mesh and conventional clients. The integration of WMNs with other networks such as the Internet, cellular, IEEE 802.11, IEEE 802.15, IEEE 802.16, sensor networks, etc., can be accomplished through the gateway and bridging functions in the mesh routers. Mesh clients can be either stationary or mobile, and can form a client mesh network among themselves and with mesh routers. WMNs are anticipated to resolve the limitations and to significantly improve the performance of ad hoc networks, wireless local area networks (WLANs), wireless personal area networks (WPANs), and wireless metropolitan area networks (WMANs). They are undergoing rapid progress and inspiring numerous deployments. WMNs will deliver wireless services for a large variety of applications in personal, local, campus, and metropolitan areas. Despite recent advances in wireless mesh networking, many research challenges remain in all protocol layers. This paper presents a detailed study on recent advances and open research issues in WMNs. System architectures and applications of WMNs are described, followed by discussing the critical factors influencing protocol design. Theoretical network capacity and the state-of-the-art protocols for WMNs are explored with an objective to point out a number of open research issues. Finally, testbeds, industrial practice, and current standard activities related to WMNs are highlighted.
---
paper_title: Sensor-Based Activity Recognition
paper_content:
Research on sensor-based activity recognition has, recently, made significant progress and is attracting growing attention in a number of disciplines and application domains. However, there is a lack of high-level overview on this topic that can inform related communities of the research state of the art. In this paper, we present a comprehensive survey to examine the development and current status of various aspects of sensor-based activity recognition. We first discuss the general rationale and distinctions of vision-based and sensor-based activity recognition. Then, we review the major approaches and methods associated with sensor-based activity monitoring, modeling, and recognition from which strengths and weaknesses of those approaches are highlighted. We make a primary distinction in this paper between data-driven and knowledge-driven approaches, and use this distinction to structure our survey. We also discuss some promising directions for future research.
---
paper_title: Recognizing independent and joint activities among multiple residents in smart environments
paper_content:
The pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. A primary challenge that needs to be tackled to meet this need is the ability to recognize and track functional activities that people perform in their own homes and everyday settings. In this paper, we look at approaches to perform real-time recognition of Activities of Daily Living. We enhance other related research efforts to develop approaches that are effective when activities are interrupted and interleaved. To evaluate the accuracy of our recognition algorithms we assess them using real data collected from participants performing activities in our on-campus smart apartment testbed.
---
paper_title: Discovering Activities to Recognize and Track in a Smart Environment
paper_content:
The machine learning and pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to monitor the functional health of smart home residents, we need to design technologies that recognize and track activities that people normally perform as part of their daily routines. Although approaches do exist for recognizing activities, the approaches are applied to activities that have been preselected and for which labeled training data are available. In contrast, we introduce an automated approach to activity tracking that identifies frequent activities that naturally occur in an individual's routine. With this capability, we can then track the occurrence of regular activities to monitor functional health and to detect changes in an individual's patterns and lifestyle. In this paper, we describe our activity mining and tracking approach, and validate our algorithms on data collected in physical smart environments.
---
paper_title: Experiences in the development of a Smart Lab
paper_content:
There is now a growing demand to provide improved delivery of health and social care due to changes in the age profile of our population. One area where these services may be improved is through the development of smart living environments. Within this paper we provide an overview of the drivers behind the development of such environments along with details of the different ways in which they may exist. Finally, we provide details of our initial experiences in the establishment of a Smart Living Environment for the development of assistive technologies to support independent living.
---
paper_title: Activity Recognition using Actigraph Sensor
paper_content:
Accelerometers are being extensively used in the recognition of simple ambulatory activities. Using wearable sensors for activity recognition is the latest topic of interest in smart home research. We use an Actigraph watch with an embedded accelerometer sensor to recognize real-life activities done in a home. Real-life activities include the set of Activities of Daily Living (ADLs). ADLs are the crucial activities we perform every day in our homes. Actigraph watches have been widely used in sleep studies to determine sleep/wake cycles and the quality of sleep. In this paper, we investigate the possibility of using Actigraph watches to recognize activities. The data collected from an Actigraph watch was analyzed to predict ADLs (Activities of Daily Living). We apply machine learning algorithms to the Actigraph data to predict the ADLs. Also, a comparative study of activity prediction accuracy obtained from four machine learning algorithms is discussed.
---
paper_title: Conditional random fields for activity recognition
paper_content:
Activity recognition is a key component for creating intelligent, multi-agent systems. Intrinsically, activity recognition is a temporal classification problem. In this paper, we compare two models for temporal classification: hidden Markov models (HMMs), which have long been applied to the activity recognition problem, and conditional random fields (CRFs). CRFs are discriminative models for labeling sequences. They condition on the entire observation sequence, which avoids the need for independence assumptions between observations. Conditioning on the observations vastly expands the set of features that can be incorporated into the model without violating its assumptions. Using data from a simulated robot tag domain, chosen because it is multi-agent and produces complex interactions between observations, we explore the differences in performance between the discriminatively trained CRF and the generative HMM. Additionally, we examine the effect of incorporating features which violate independence assumptions between observations; such features are typically necessary for high classification accuracy. We find that the discriminatively trained CRF performs as well as or better than an HMM even when the model features do not violate the independence assumptions of the HMM. In cases where features depend on observations from many time steps, we confirm that CRFs are robust against any degradation in performance.
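As a minimal illustration of the generative baseline discussed above, the sketch below implements Viterbi decoding for a tiny two-state HMM over discretized sensor observations; all states, symbols, and probabilities are toy values, not the paper's model.

```python
# Illustrative sketch (not the paper's code): Viterbi decoding for a small HMM,
# the generative baseline that the paper compares CRFs against.
import numpy as np

states = ["walking", "sitting"]
obs_symbols = ["low_accel", "high_accel"]

pi = np.array([0.5, 0.5])          # initial state distribution
A = np.array([[0.8, 0.2],          # state transition probabilities A[i, j] = P(j | i)
              [0.3, 0.7]])
B = np.array([[0.2, 0.8],          # emission probabilities B[state, obs]
              [0.9, 0.1]])

obs = [1, 1, 0, 0, 1]              # observed symbol indices over time

T, N = len(obs), len(states)
delta = np.zeros((T, N))           # best path probabilities
psi = np.zeros((T, N), dtype=int)  # backpointers

delta[0] = pi * B[:, obs[0]]
for t in range(1, T):
    for j in range(N):
        scores = delta[t - 1] * A[:, j]
        psi[t, j] = scores.argmax()
        delta[t, j] = scores.max() * B[j, obs[t]]

# Backtrack the most likely state sequence.
path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(psi[t, path[-1]]))
path.reverse()
print([states[s] for s in path])
```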
---
paper_title: Human motion detection using Markov random fields
paper_content:
In this paper, we propose Markov random fields (MRFs) to automatically detect a moving human body through minimizing the joint energy of the MRF for the velocity and relative position of body parts. The relaxation labeling algorithm is employed to find the best body part labeling configuration between MRFs and observed data. We detect a walking motion viewed monocularly based on point features, where some points are from the unoccluded body parts and some belong to the background. The results show that MRFs can detect human motions robustly and accurately.
---
paper_title: Object relevance weight pattern mining for activity recognition and segmentation
paper_content:
Monitoring daily activities of a person has many potential benefits in pervasive computing. These include providing proactive support for the elderly and monitoring anomalous behaviors. A typical approach in existing research on activity detection is to construct sequence-based models of low-level activity features based on the order of object usage. However, these models have poor accuracy, require many parameters to estimate, and demand excessive computational effort. Many other supervised learning approaches have been proposed but they all suffer from poor scalability due to the manual labeling involved in the training process. In this paper, we simplify the activity modeling process by relying on the relevance weights of objects as the basis of activity discrimination rather than on sequence information. For each activity, we mine the web to extract the most relevant objects according to their normalized usage frequency. We develop a KeyExtract algorithm for activity recognition and two algorithms, MaxGap and MaxGain, for activity segmentation with linear time complexities. Simulation results indicate that our proposed algorithms achieve high accuracy in the presence of different noise levels indicating their good potential in real-world deployment.
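A simplified illustration of relevance-weight-based recognition (not the paper's KeyExtract, MaxGap, or MaxGain algorithms) is sketched below: each activity is scored by summing the normalized relevance weights of the observed objects, and the highest-scoring activity is chosen. The weights stand in for web-mined usage frequencies and are hypothetical.

```python
# Illustrative sketch: scoring candidate activities by object relevance weights,
# i.e. normalized object-usage frequencies. Weights below are hypothetical.

relevance = {
    "make_tea":   {"kettle": 0.5, "cup": 0.3, "tea_bag": 0.2},
    "make_pasta": {"pot": 0.4, "stove": 0.3, "colander": 0.2, "cup": 0.1},
}

def score(activity: str, observed_objects) -> float:
    """Sum the relevance weights of the observed objects for one activity."""
    weights = relevance[activity]
    return sum(weights.get(obj, 0.0) for obj in observed_objects)

observed = ["kettle", "cup"]  # trace of objects touched by the resident
best = max(relevance, key=lambda a: score(a, observed))
for a in relevance:
    print(f"{a}: score = {score(a, observed):.2f}")
print(f"recognized activity: {best}")
```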
---
paper_title: SMASH: a distributed sensing and processing garment for the classification of upper body postures
paper_content:
This paper introduces a smart textile for posture classification. A distributed sensing and processing architecture is implemented in a loose-fitting, long-sleeve shirt. Standardized interfaces to remote periphery support the variable placement of different sensor modalities at any location of the textile. The shirt is equipped with acceleration sensors in order to determine the postural resolution and the system's feasibility for applications in movement rehabilitation. For garment characterization, an arm posture measurement method is proposed and applied in a study with 5 users. The classification performance is analyzed on data from a total of 8 users performing 12 posture types relevant to shoulder and elbow joint rehabilitation. We present results for different user modes, with classification rates of 89% for a user-independent evaluation. Moreover, the influence of body dimensions on posture classification performance is analyzed.
---
paper_title: Mining models of human activities from the web
paper_content:
The ability to determine what day-to-day activity (such as cooking pasta, taking a pill, or watching a video) a person is performing is of interest in many application domains. A system that can do this requires models of the activities of interest, but model construction does not scale well: humans must specify low-level details, such as segmentation and feature selection of sensor data, and high-level structure, such as spatio-temporal relations between states of the model, for each and every activity. As a result, previous practical activity recognition systems have been content to model a tiny fraction of the thousands of human activities that are potentially useful to detect. In this paper, we present an approach to sensing and modeling activities that scales to a much larger class of activities than before. We show how a new class of sensors, based on Radio Frequency Identification (RFID) tags, can directly yield semantic terms that describe the state of the physical world. These sensors allow us to formulate activity models by translating labeled activities, such as 'cooking pasta', into probabilistic collections of object terms, such as 'pot'. Given this view of activity models as text translations, we show how to mine definitions of activities in an unsupervised manner from the web. We have used our technique to mine definitions for over 20,000 activities. We experimentally validate our approach using data gathered from actual human activity as well as simulated data.
---
paper_title: Strategies for inference mechanism of conditional random fields for multiple-resident activity recognition in a smart home
paper_content:
Multiple-resident activity recognition is a major challenge for building a smart-home system. In this paper, conditional random fields (CRFs) are chosen as our activity recognition models for overcoming this challenge. We evaluate our proposed approach with several strategies, including conditional random field with iterative inference and the one with decomposition inference, to enhance the commonly used CRFs so that they can be applied to a multipleresident environment. We use the multi-resident CASAS data collected at WSU (Washington State University) to validate these strategies. The results show that data association of non-obstructive sensor data is of vital importance to improve the performance of activity recognition in a multiple-resident environment. Furthermore, the study also suggests that human interaction be taken into consideration for further accuracy improvement.
---
paper_title: Recognizing human motion with multiple acceleration sensors
paper_content:
In this paper, experiments with acceleration sensors are described for recognizing the activities of a wearable-device user. The use of principal component analysis and independent component analysis with a wavelet transform is tested for feature generation. Recognition of human activity is examined with a multilayer perceptron classifier. The best classification results for recognizing different human motions were 83-90%, and they were achieved by utilizing independent component analysis and principal component analysis. The difference between these methods turned out to be negligible.
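A minimal sketch in the spirit of the pipeline described above (PCA-based feature reduction followed by a multilayer perceptron) is shown below using scikit-learn; the data is synthetic and the ICA/wavelet variants are omitted, so this is not the paper's setup.

```python
# Illustrative sketch: PCA feature reduction followed by an MLP classifier on
# windowed accelerometer features. Data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# 200 windows x 30 raw features (e.g. per-axis statistics), 3 motion classes.
X = rng.normal(size=(200, 30))
y = rng.integers(0, 3, size=200)
X += y[:, None] * 0.8  # inject class-dependent structure so learning is possible

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(PCA(n_components=10),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```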
---
paper_title: Recognizing daily activities with RFID-based sensors
paper_content:
We explore a dense sensing approach that uses RFID sensor network technology to recognize human activities. In our setting, everyday objects are instrumented with UHF RFID tags called WISPs that are equipped with accelerometers. RFID readers detect when the objects are used by examining this sensor data, and daily activities are then inferred from the traces of object use via a Hidden Markov Model. In a study of 10 participants performing 14 activities in a model apartment, our approach yielded recognition rates with precision and recall both in the 90% range. This compares well to recognition with a more intrusive short-range RFID bracelet that detects objects in the proximity of the user; this approach saw roughly 95% precision and 60% recall in the same study. We conclude that RFID sensor networks are a promising approach for indoor activity monitoring.
---
paper_title: Voice activity detection driven acoustic event classification for monitoring in smart homes
paper_content:
This contribution focuses on acoustic event detection and classification for monitoring of elderly people in ambient assisted living environments such as smart homes or nursing homes. We describe an autonomous system for robust detection of acoustic events in various practically relevant acoustic situations that benefits from a voice-activity-detection-inspired preprocessing mechanism. Therefore, various already established voice activity detection schemes have been evaluated beforehand. As a specific use case, we address coughing as an acoustic event of interest, which can be interpreted as an indicator of a potentially upcoming illness. After the detection of such events using a psychoacoustically motivated spectro-temporal representation (the so-called cochleogram), we forward its output to a statistical event modeling stage for automatic instantaneous emergency classification and long-term monitoring. The parameters derived by this procedure can then be used to inform medical or care-service personnel.
---
paper_title: Activity and location recognition using wearable sensors
paper_content:
Using measured acceleration and angular velocity data gathered through inexpensive, wearable sensors, this dead-reckoning method can determine a user's location, detect transitions between preselected locations, and recognize and classify sitting, standing, and walking behaviors. Experiments demonstrate the proposed method's effectiveness.
---
paper_title: Inferring High-Level Behavior from Low-Level Sensors
paper_content:
We present a method of learning a Bayesian model of a traveler moving through an urban environment. This technique is novel in that it simultaneously learns a unified model of the traveler’s current mode of transportation as well as his most likely route, in an unsupervised manner. The model is implemented using particle filters and learned using Expectation-Maximization. The training data is drawn from a GPS sensor stream that was collected by the authors over a period of three months. We demonstrate that by adding more external knowledge about bus routes and bus stops, accuracy is improved.
---
paper_title: Simultaneous tracking & activity recognition (star) using many anonymous, binary sensors
paper_content:
In this paper we introduce the simultaneous tracking and activity recognition (STAR) problem, which exploits the synergy between location and activity to provide the information necessary for automatic health monitoring. Automatic health monitoring can potentially help the elderly population live safely and independently in their own homes by providing key information to caregivers. Our goal is to perform accurate tracking and activity recognition for multiple people in a home environment. We use a “bottom-up” approach that primarily uses information gathered by many minimally invasive sensors commonly found in home security systems. We describe a Rao-Blackwellised particle filter for room-level tracking, rudimentary activity recognition (i.e., whether or not an occupant is moving), and data association. We evaluate our approach with experiments in a simulated environment and in a real instrumented home.
---
paper_title: Bayesian Activity Recognition in Residence for Elders
paper_content:
The growing population of elders in our society calls for a new approach to caregiving. By inferring what activities the elderly are performing in their houses, it is possible to determine their physical and cognitive capabilities. In this paper we describe probabilistic models for performing activity recognition from sensor patterns. We introduce a new observation model which takes the history of sensor readings into account. Results show that the new observation model improves accuracy, but a description using fewer parameters is likely to give even better results.
---
paper_title: Activity Recognition in the Home Using Simple and Ubiquitous Sensors
paper_content:
In this work, a system for recognizing activities in the home setting using a set of small and simple state-change sensors is introduced. The sensors are designed to be “tape on and forget” devices that can be quickly and ubiquitously installed in home environments. The proposed sensing system presents an alternative to sensors that are sometimes perceived as invasive, such as cameras and microphones. Unlike prior work, the system has been deployed in multiple residential environments with non-researcher occupants. Preliminary results on a small dataset show that it is possible to recognize activities of interest to medical professionals such as toileting, bathing, and grooming with detection accuracies ranging from 25% to 89% depending on the evaluation criteria used.
---
paper_title: An activity monitoring system for elderly care using generative and discriminative models
paper_content:
An activity monitoring system enables many applications that assist in caregiving for the elderly in their homes. In this paper we present a wireless sensor network for unobtrusive observations in the home and show the potential of generative and discriminative models for recognizing activities from such observations. Through a large number of experiments using four real-world datasets, we show the effectiveness of the generative hidden Markov model and the discriminative conditional random field in activity recognition.
---
paper_title: Learning Situation Models in a Smart Home
paper_content:
This paper addresses the problem of learning situation models for providing context-aware services. Context for modeling human behavior in a smart environment is represented by a situation model describing the environment, the users, and their activities. A framework for acquiring and evolving different layers of a situation model in a smart environment is proposed. Different learning methods are presented as part of this framework: role detection per entity, unsupervised extraction of situations from multimodal data, supervised learning of situation representations, and evolution of a predefined situation model with feedback. The situation model serves as a frame and support for the different methods, allowing the approach to remain within an intuitive, declarative framework. The proposed methods have been integrated into a complete system for a smart home environment. The implementation is detailed, and two evaluations are conducted in the smart home environment. The obtained results validate the proposed approach.
---
paper_title: A survey of vision-based methods for action representation, segmentation and recognition
paper_content:
Action recognition has become a very important topic in computer vision, with many fundamental applications in robotics, video surveillance, human-computer interaction, and multimedia retrieval, among others, and a large variety of approaches has been described. The purpose of this survey is to give an overview and categorization of the approaches used. We concentrate on approaches that aim at the classification of full-body motions, such as kicking, punching, and waving, and we categorize them according to how they represent the spatial and temporal structure of actions; how they segment actions from an input stream of visual data; and how they learn a view-invariant representation of actions.
---
paper_title: Fabric-based strain sensors for measuring movement in wearable telemonitoring applications
paper_content:
This paper summarises preliminary work comparing conductive yarns, knitting structures and yarn compositions in order to integrate smart sensor strips into a surrounding garment as a kinematic measurement tool. The conductive areas of the garment were to be used as a strain-sensitive material; ultimately measuring knee joint movement. In total, thirty sample fabrics were developed using conductive yarns; six of which were chosen to be tested for responsiveness during repeated strain. Preliminary tests showed good levels of responsiveness to strain and acceptable levels of recovery.
---
paper_title: Activity recognition and monitoring using multiple sensors on different body positions
paper_content:
The design of an activity recognition and monitoring system based on the eWatch, a multi-sensor platform worn at different body positions, is presented in this paper. The system identifies the user's activity in real time using multiple sensors and records the classification results during a day. We compare multiple time-domain feature sets and sampling rates, and analyze the tradeoff between recognition accuracy and computational complexity. Classification accuracy was evaluated for the different body positions commonly used for wearing electronic devices.
---
paper_title: Learning Setting-Generalized Activity Models for Smart Spaces
paper_content:
Smart home activity recognition systems can learn generalized models for common activities that span multiple environment settings and resident types.
---
paper_title: Transfer learning for activity recognition: a survey
paper_content:
Many intelligent systems that focus on the needs of a human require information about the activities being performed by the human. At the core of this capability is activity recognition, which is a challenging and well-researched problem. Activity recognition algorithms require substantial amounts of labeled training data yet need to perform well under very diverse circumstances. As a result, researchers have been designing methods to identify and utilize subtle connections between activity recognition datasets, or to perform transfer-based activity recognition. In this paper, we survey the literature to highlight recent advances in transfer learning for activity recognition. We characterize existing approaches to transfer-based activity recognition by sensor modality, by differences between source and target environments, by data availability, and by type of information that is transferred. Finally, we present some grand challenges for the community to consider as this field is further developed.
---
paper_title: Health-status monitoring through analysis of behavioral patterns
paper_content:
With the rapid growth of the elderly population, there is a need to support the ability of elders to maintain an independent and healthy lifestyle in their homes rather than through more expensive and isolated care facilities. One approach to accomplish these objectives employs the concepts of ambient intelligence to remotely monitor an elder's activities and condition. The SmartHouse project uses a system of basic sensors to monitor a person's in-home activity; a prototype of the system is being tested within a subject's home. We examined whether the system could be used to detect behavioral patterns and report the results in this paper. Mixture models were used to develop a probabilistic model of behavioral patterns. The results of the mixture-model analysis were then evaluated by using a log of events kept by the occupant.
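A minimal sketch of the mixture-model idea described above is shown below: daily behavior summaries are modeled with a Gaussian mixture, and days with unusually low likelihood are flagged as deviations from routine. The feature vectors are synthetic and the model choice is an assumption, not the SmartHouse implementation.

```python
# Illustrative sketch (not the SmartHouse system): clustering daily behavior
# summaries with a Gaussian mixture model. Feature vectors here are synthetic,
# e.g. [hours of kitchen activity, hours of bedroom activity, number of outings].
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
typical_days = rng.normal(loc=[2.0, 8.0, 1.0], scale=0.3, size=(40, 3))
restless_days = rng.normal(loc=[4.0, 5.0, 0.2], scale=0.3, size=(10, 3))
days = np.vstack([typical_days, restless_days])

gmm = GaussianMixture(n_components=2, random_state=0).fit(days)

# Low likelihood under the learned mixture flags days that deviate from routine.
log_lik = gmm.score_samples(days)
threshold = np.percentile(log_lik, 5)
print("unusual days (indices):", np.where(log_lik < threshold)[0])
```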
---
paper_title: Activity Recognition from Sparsely Labeled Data Using Multi-Instance Learning
paper_content:
Activity recognition has attracted increasing attention in recent years due to its potential to enable a number of compelling context-aware applications. As most approaches rely on supervised learning methods, obtaining substantial amounts of labeled data is often an important bottleneck for these approaches. In this paper, we present and explore a novel method for activity recognition from sparsely labeled data. The method is based on multi-instance learning, which allows the required level of supervision to be significantly reduced. In particular, we propose several novel extensions of multi-instance learning to support different annotation strategies. The validity of the approach is demonstrated on two public datasets for three different labeling scenarios.
---
paper_title: Activity recognition from user-annotated acceleration data
paper_content:
In this work, algorithms are developed and evaluated to detect physical activities from data acquired using five small biaxial accelerometers worn simultaneously on different parts of the body. Acceleration data was collected from 20 subjects without researcher supervision or observation. Subjects were asked to perform a sequence of everyday tasks but not told specifically where or how to do them. Mean, energy, frequency-domain entropy, and correlation of acceleration data was calculated and several classifiers using these features were tested. Decision tree classifiers showed the best performance recognizing everyday activities with an overall accuracy rate of 84%. The results show that although some activities are recognized well with subject-independent training data, others appear to require subject-specific training data. The results suggest that multiple accelerometers aid in recognition because conjunctions in acceleration feature values can effectively discriminate many activities. With just two biaxial accelerometers - thigh and wrist - the recognition performance dropped only slightly. This is the first work to investigate performance of recognition algorithms with multiple, wire-free accelerometers on 20 activities using datasets annotated by the subjects themselves.
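The feature set named above (mean, energy, frequency-domain entropy, and inter-axis correlation) and a decision tree classifier can be sketched as follows; the accelerometer windows are synthetic, so this is an illustration rather than the paper's code.

```python
# Illustrative sketch: extracting window features (mean, energy, frequency-domain
# entropy, axis correlation) and training a decision tree. Windows are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(w):
    """w: array of shape (samples, 2) for one biaxial accelerometer window."""
    feats = []
    for axis in range(w.shape[1]):
        x = w[:, axis]
        feats.append(x.mean())                         # mean
        feats.append(np.sum(x ** 2) / len(x))          # energy
        p = np.abs(np.fft.rfft(x)) ** 2
        p = p / p.sum()
        feats.append(-np.sum(p * np.log(p + 1e-12)))   # frequency-domain entropy
    feats.append(np.corrcoef(w[:, 0], w[:, 1])[0, 1])  # correlation between axes
    return feats

rng = np.random.default_rng(2)
windows = [rng.normal(scale=1 + label, size=(128, 2)) for label in (0, 1) for _ in range(50)]
labels = [0] * 50 + [1] * 50                           # e.g. sitting vs. walking
X = np.array([window_features(w) for w in windows])

clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.2f}")
```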
---
paper_title: Mining sequential patterns
paper_content:
We are given a large database of customer transactions, where each transaction consists of customer-id, transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem, and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, albeit AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction.
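A minimal sketch of the support-counting step at the heart of such algorithms (candidate generation as in AprioriSome/AprioriAll is omitted) is shown below; the transactions and candidate patterns are made up.

```python
# Illustrative sketch: counting the support of candidate sequential patterns over
# per-customer transaction sequences. Each sequence is a time-ordered list of itemsets.

def contains(sequence, pattern):
    """True if `pattern` (a list of itemsets) occurs in order within `sequence`."""
    i = 0
    for transaction in sequence:
        if i < len(pattern) and pattern[i] <= transaction:
            i += 1
    return i == len(pattern)

customers = [
    [{"bread"}, {"milk", "eggs"}, {"beer"}],
    [{"bread", "milk"}, {"beer"}],
    [{"milk"}, {"eggs"}, {"bread"}],
]

candidates = [
    [{"bread"}, {"beer"}],
    [{"milk"}, {"beer"}],
    [{"eggs"}, {"bread"}],
]

min_support = 2
for pattern in candidates:
    support = sum(contains(seq, pattern) for seq in customers)
    status = "frequent" if support >= min_support else "infrequent"
    print(f"{pattern}: support = {support} ({status})")
```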
---
paper_title: Distributed Recognition of Human Actions Using Wearable Motion Sensor Networks
paper_content:
We propose a distributed recognition framework to classify continuous human actions using a low-bandwidth wearable motion sensor network, called distributed sparsity classifier (DSC). The algorithm classifies human actions using a set of training motion sequences as prior examples. It is also capable of rejecting outlying actions that are not in the training categories. The classification is operated in a distributed fashion on individual sensor nodes and a base station computer. We model the distribution of multiple action classes as a mixture subspace model, one subspace for each action class. Given a new test sample, we seek the sparsest linear representation of the sample w.r.t. all training examples. We show that the dominant coefficients in the representation only correspond to the action class of the test sample, and hence its membership is encoded in the sparse representation. Fast linear solvers are provided to compute such a representation via l1-minimization. To validate the accuracy of the framework, a public wearable action recognition database is constructed, called the wearable action recognition database (WARD). The database is comprised of 20 human subjects in 13 action categories. Using up to five motion sensors in the WARD database, DSC achieves state-of-the-art performance. We further show that the recognition precision only decreases gracefully using smaller subsets of active sensors. This validates the robustness of the distributed recognition framework on an unreliable wireless network. It also demonstrates the ability of DSC to conserve sensor energy for communication while preserving accurate global classification. (This work was partially supported by ARO MURI W911NF-06-1-0076, NSF TRUST Center, and startup funding from the University of Texas and Texas Instruments.)
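A minimal sketch of sparse-representation classification in the spirit described above is shown below, approximating the l1-minimization step with a Lasso solver and assigning the class whose training columns best reconstruct the test sample; the data is synthetic and this is not the DSC implementation.

```python
# Illustrative sketch: sparse-representation classification, approximating the
# l1-minimization step with a Lasso solver. Data is synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_features, n_per_class, classes = 20, 15, [0, 1, 2]

# Training matrix: columns are training motion samples, grouped by action class.
centers = rng.normal(size=(len(classes), n_features))
train = np.hstack([
    centers[c][:, None] + 0.1 * rng.normal(size=(n_features, n_per_class))
    for c in classes
])
train_labels = np.repeat(classes, n_per_class)

test = centers[1] + 0.1 * rng.normal(size=n_features)  # a sample from class 1

# Seek a sparse coefficient vector x with test ~= train @ x.
lasso = Lasso(alpha=0.01, max_iter=10000).fit(train, test)
coef = lasso.coef_

# Assign the class whose training columns best reconstruct the test sample.
residuals = {}
for c in classes:
    mask = (train_labels == c)
    residuals[c] = np.linalg.norm(test - train[:, mask] @ coef[mask])
print("residual per class:", {c: round(float(r), 3) for c, r in residuals.items()})
print("predicted class:", min(residuals, key=residuals.get))
```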
---
paper_title: Inferring activities from interactions with objects
paper_content:
A key aspect of pervasive computing is using computers and sensor networks to effectively and unobtrusively infer users' behavior in their environment. This includes inferring which activity users are performing, how they're performing it, and its current stage. Recognizing and recording activities of daily living is a significant problem in elder care. A new paradigm for ADL inferencing leverages radio-frequency-identification technology, data mining, and a probabilistic inference engine to recognize ADLs, based on the objects people use. We propose an approach that addresses these challenges and shows promise in automating some types of ADL monitoring. Our key observation is that the sequence of objects a person uses while performing an ADL robustly characterizes both the ADL's identity and the quality of its execution. So, we have developed Proactive Activity Toolkit (PROACT).
---
paper_title: An activity recognition system for mobile phones
paper_content:
We present a novel system that recognizes and records the motional activities of a person using a mobile phone. Wireless sensors measuring the intensity of motions are attached to body parts of the user. Sensory data is collected by a mobile application that recognizes prelearnt activities in real-time. For efficient motion pattern recognition of gestures and postures, feed-forward backpropagation neural networks are adopted. The design and implementation of the system are presented along with the records of our experiences. Results show high recognition rates for distinguishing among six different motion patterns. The recognized activity can be used as an additional retrieval key in an extensive mobile memory recording and sharing project. Power consumption measurements of the wireless communication and the recognition algorithm are provided to characterize the resource requirements of the system.
---
paper_title: Fine-grained activity recognition by aggregating abstract object usage
paper_content:
In this paper we present results related to achieving finegrained activity recognition for context-aware computing applications. We examine the advantages and challenges of reasoning with globally unique object instances detected by an RFID glove. We present a sequence of increasingly powerful probabilistic graphical models for activity recognition. We show the advantages of adding additional complexity and conclude with a model that can reason tractably about aggregated object instances and gracefully generalizes from object instances to their classes by using abstraction smoothing. We apply these models to data collected from a morning household routine.
---
paper_title: A Long-Term Evaluation of Sensing Modalities for Activity Recognition
paper_content:
We study activity recognition using 104 hours of annotated data collected from a person living in an instrumented home. The home contained over 900 sensor inputs, including wired reed switches, current and water flow inputs, object and person motion detectors, and RFID tags. Our aim was to compare different sensor modalities on data that approached "real world" conditions, where the subject and annotator were unaffiliated with the authors. We found that 10 infra-red motion detectors outperformed the other sensors on many of the activities studied, especially those that were typically performed in the same location. However, several activities, in particular "eating" and "reading" were difficult to detect, and we lacked data to study many fine-grained activities. We characterize a number of issues important for designing activity detection systems that may not have been as evident in prior work when data was collected under more controlled conditions.
---
paper_title: Sensor-Based Human Activity Recognition in a Multi-user Scenario
paper_content:
Existing work on sensor-based activity recognition focuses mainly on single-user activities. However, in real life, activities are often performed by multiple users, involving interactions between them. In this paper, we propose Coupled Hidden Markov Models (CHMMs) to recognize multi-user activities from sensor readings in a smart home environment. We develop a multimodal sensing platform and present a theoretical framework to recognize both single-user and multi-user activities. We conduct our trace collection in a smart home, and evaluate our framework through experimental studies. Our experimental results show that we achieve an average accuracy of 85.46% with CHMMs.
---
paper_title: An unsupervised approach to activity recognition and segmentation based on object-use fingerprints
paper_content:
Human activity recognition is an important task which has many potential applications. In recent years, researchers from pervasive computing are interested in deploying on-body sensors to collect observations and applying machine learning techniques to model and recognize activities. Supervised machine learning techniques typically require an appropriate training process in which training data need to be labeled manually. In this paper, we propose an unsupervised approach based on object-use fingerprints to recognize activities without human labeling. We show how to build our activity models based on object-use fingerprints, which are sets of contrast patterns describing significant differences of object use between any two activity classes. We then propose a fingerprint-based algorithm to recognize activities. We also propose two heuristic algorithms based on object relevance to segment a trace and detect the boundary of any two adjacent activities. We develop a wearable RFID system and conduct a real-world trace collection done by seven volunteers in a smart home over a period of 2 weeks. We conduct comprehensive experimental evaluations and comparison study. The results show that our recognition algorithm achieves a precision of 91.4% and a recall 92.8%, and the segmentation algorithm achieves an accuracy of 93.1% on the dataset we collected.
---
paper_title: Creating Evolving User Behavior Profiles Automatically
paper_content:
Knowledge about computer users is very beneficial for assisting them, predicting their future actions, or detecting masqueraders. In this paper, a new approach for automatically creating and recognizing the behavior profile of a computer user is presented. Here, a computer user's behavior is represented as the sequence of commands she/he types during her/his work. This sequence is transformed into a distribution of relevant subsequences of commands in order to obtain a profile that defines the user's behavior. Also, because a user profile is not necessarily fixed but rather evolves/changes, we propose an evolving method, based on an Evolving Systems approach, to keep the created profiles up to date. In this paper, we combine the evolving classifier with trie-based user profiling to obtain a powerful self-learning online scheme. We also further develop the recursive formula for the potential of a data point to become a cluster center using cosine distance, which is provided in the Appendix. The novel approach proposed in this paper is applicable to any problem of dynamic/evolving user behavior modeling that can be represented as a sequence of actions or events. It has been evaluated on several real data streams.
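A minimal sketch of the first step described above, turning a command stream into a distribution of relevant command subsequences, is shown below using contiguous n-grams as a stand-in for the paper's trie-based representation; the commands are hypothetical and the evolving classifier is omitted.

```python
# Illustrative sketch: a user's command stream as a distribution of short command
# subsequences (n-grams), usable as a behavior profile for comparison or classification.
from collections import Counter

def subsequence_profile(commands, max_len=3):
    """Relative frequencies of contiguous command subsequences up to max_len."""
    counts = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(commands) - n + 1):
            counts[tuple(commands[i:i + n])] += 1
    total = sum(counts.values())
    return {seq: c / total for seq, c in counts.items()}

session = ["ls", "cd", "ls", "vim", "ls", "cd", "ls"]
profile = subsequence_profile(session)
for seq, freq in sorted(profile.items(), key=lambda kv: -kv[1])[:5]:
    print(seq, round(freq, 3))
```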
---
paper_title: Transferring knowledge of activity recognition across sensor networks
paper_content:
A problem in performing activity recognition on a large scale (i.e. in many homes) is that a labelled data set needs to be recorded for each house in which activity recognition is performed. This is because most models for activity recognition require labelled data to learn their parameters. In this paper we introduce a transfer learning method for activity recognition which allows the use of existing labelled data sets from various homes to learn the parameters of a model applied in a new home. We evaluate our method using three large real-world data sets and show our approach achieves good classification performance in a home for which little or no labelled data is available.
---
paper_title: Improving the recognition of interleaved activities
paper_content:
We introduce Interleaved Hidden Markov Models for recognizing multitasked activities. The model captures both inter-activity and intra-activity dynamics. Although the state space is intractably large, we describe an approximation that is both effective and efficient. This method significantly reduces the error rate when compared with previously proposed methods. The algorithm is suitable for mobile platforms where computational resources may be limited.
---
paper_title: Coping with multiple residents in a smart environment
paper_content:
Smart environment research has resulted in many useful tools for modeling, monitoring, and adapting to a single resident. However, many of these tools are not equipped for coping with multiple residents in the same environment simultaneously. In this paper we investigate a first step in coping with multiple residents, that of attributing sensor events to individuals in a multi-resident environment. We discuss approaches that can be used to achieve this goal and we evaluate our implementations in the context of two physical smart environment testbeds. We also explore how learning resident identifiers can aid in performing other analyses on smart environment sensor data such as activity recognition.
---
paper_title: Mining Sensor Streams for Discovering Human Activity Patterns over Time
paper_content:
In recent years, new emerging application domains have introduced new constraints and methods in data mining field. One of such application domains is activity discovery from sensor data. Activity discovery and recognition plays an important role in a wide range of applications from assisted living to security and surveillance. Most of the current approaches for activity discovery assume a static model of the activities and ignore the problem of mining and discovering activities from a data stream over time. Inspired by the unique requirements of activity discovery application domain, in this paper we propose a new stream mining method for finding sequential patterns over time from streaming non-transaction data using multiple time granularities. Our algorithm is able to find sequential patterns, even if the patterns exhibit discontinuities (interruptions) or variations in the sequence order. Our algorithm also addresses the problem of dealing with rare events across space and over time. We validate the results of our algorithms using data collected from two different smart apartments.
---
paper_title: Activity knowledge transfer in smart environments
paper_content:
Current activity recognition approaches usually ignore knowledge learned in previous smart environments when training the recognition algorithms for a new smart environment. In this paper, we propose a method of transferring the knowledge of learned activities in multiple physical spaces, e.g. homes A and B, to a new target space, e.g. home C. Transferring the knowledge of learned activities to a target space results in reducing the data collection and annotation period, achieving an accelerated learning pace and exploiting the insights from previous settings. We validate our algorithms using data collected from several smart apartments.
---
paper_title: The Advanced Health and Disaster Aid Network: A Light-Weight Wireless Medical System for Triage
paper_content:
Advances in semiconductor technology have resulted in the creation of miniature medical embedded systems that can wirelessly monitor the vital signs of patients. These lightweight medical systems can aid providers in large disasters, who become overwhelmed by the large number of patients, limited resources, and insufficient information. In a mass casualty incident, small embedded medical systems facilitate patient care, resource allocation, and real-time communication in the Advanced Health and Disaster Aid Network (AID-N). We present the design of electronic triage tags on lightweight, embedded systems with limited memory and computational power. These electronic triage tags use noninvasive biomedical sensors (pulse oximeter, electrocardiogram, and blood pressure cuff) to continuously monitor the vital signs of a patient and deliver pertinent information to first responders. This electronic triage system facilitates the seamless collection and dissemination of data from the incident site to key members of the distributed emergency response community. The real-time collection of data through a mesh network in a mass casualty drill was shown to approximately triple the number of times patients were triaged, compared with the traditional paper triage system.
---
paper_title: GAIS: A Method for Detecting Interleaved Sequential Patterns from Imperfect Data
paper_content:
This paper introduces a novel method, GAIS, for detecting interleaved sequential patterns from databases. A case where the data is of low quality and contains errors is considered. Pattern detection from erroneous data that contains multiple interleaved patterns is an important problem in the field of sensor network applications. We approach the problem by grouping data rows with the help of a model database and comparing the groups with the models. In the evaluation, GAIS clearly outperforms the greedy algorithm. Using GAIS, the desired sequential patterns can be detected from low-quality data.
---
paper_title: Improving home automation by discovering regularly occurring device usage patterns
paper_content:
The data stream captured by recording inhabitant-device interactions in an environment can be mined to discover significant patterns, which an intelligent agent could use to automate device interactions. However, this knowledge discovery problem is complicated by several challenges, such as excessive noise in the data, data that does not naturally exist as transactions, a need to operate in real time, and a domain where frequency may not be the best discriminator. We propose a novel data mining technique that addresses these challenges and discovers regularly-occurring interactions with a smart home. We also discuss a case study that shows the data mining technique can improve the accuracy of two prediction algorithms, thus demonstrating multiple uses for a home automation system. Finally, we present an analysis of the algorithm and results obtained using inhabitant interactions.
---
paper_title: Health-status monitoring through analysis of behavioral patterns
paper_content:
With the rapid growth of the elderly population, there is a need to support the ability of elders to maintain an independent and healthy lifestyle in their homes rather than through more expensive and isolated care facilities. One approach to accomplish these objectives employs the concepts of ambient intelligence to remotely monitor an elder's activities and condition. The SmartHouse project uses a system of basic sensors to monitor a person's in-home activity; a prototype of the system is being tested within a subject's home. We examined whether the system could be used to detect behavioral patterns and report the results in this paper. Mixture models were used to develop a probabilistic model of behavioral patterns. The results of the mixture-model analysis were then evaluated by using a log of events kept by the occupant.
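A minimal sketch of the mixture-model idea described above, using scikit-learn's GaussianMixture on synthetic daily behaviour vectors; the feature layout, component count and example values are assumptions for illustration only:

# Fit a Gaussian mixture to daily behaviour features
# (e.g. [hours asleep, kitchen events, bathroom events] per day).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
typical = rng.normal([7.5, 20, 8], [0.5, 3, 2], size=(60, 3))    # usual days
restless = rng.normal([5.0, 28, 14], [0.7, 4, 3], size=(15, 3))  # disturbed days
days = np.vstack([typical, restless])

gmm = GaussianMixture(n_components=2, random_state=0).fit(days)
labels = gmm.predict(days)
print("component counts:", np.bincount(labels))
print("log-likelihood of a new day:",
      gmm.score(np.array([[4.5, 30, 16]])))  # a low score may flag an unusual day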
---
paper_title: Keeping the Resident in the Loop: Adapting the Smart Home to the User
paper_content:
Advancements in supporting fields have increased the likelihood that smart-home technologies will become part of our everyday environments. However, many of these technologies are brittle and do not adapt to the user's explicit or implicit wishes. Here, we introduce CASAS, an adaptive smart-home system that utilizes machine learning techniques to discover patterns in a resident's daily activities and to generate automation policies that mimic these patterns. Our approach does not make any assumptions about the activity structure or other underlying model parameters but leaves it completely to our algorithms to discover the smart-home resident's patterns. Another important aspect of CASAS is that it can adapt to changes in the discovered patterns based on the resident's implicit and explicit feedback and can automatically update its model to reflect the changes. In this paper, we provide a description of the CASAS technologies and the results of experiments performed on both synthetic and real-world data.
---
paper_title: Discovering Activities to Recognize and Track in a Smart Environment
paper_content:
The machine learning and pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to monitor the functional health of smart home residents, we need to design technologies that recognize and track activities that people normally perform as part of their daily routines. Although approaches do exist for recognizing activities, the approaches are applied to activities that have been preselected and for which labeled training data are available. In contrast, we introduce an automated approach to activity tracking that identifies frequent activities that naturally occur in an individual's routine. With this capability, we can then track the occurrence of regular activities to monitor functional health and to detect changes in an individual's patterns and lifestyle. In this paper, we describe our activity mining and tracking approach, and validate our algorithms on data collected in physical smart environments.
---
paper_title: Constraint-based sequential pattern mining: the pattern-growth methods
paper_content:
Constraints are essential for many sequential pattern mining applications. However, there is no systematic study on constraint-based sequential pattern mining. In this paper, we investigate this issue and point out that the framework developed for constrained frequent-pattern mining does not fit our mission well. An extended framework is developed based on a sequential pattern growth methodology. Our study shows that constraints can be effectively and efficiently pushed deep into the sequential pattern mining under this new framework. Moreover, this framework can be extended to constraint-based structured pattern mining as well.
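The following toy prefix-growth miner illustrates, under strong simplifications (single-item sequence elements, a hypothetical maximum-length constraint pushed into the growth step), how a constraint can be checked during pattern growth; it is not the paper's framework:

# Tiny prefix-growth miner over sequences of single items, pushing a
# maximum-pattern-length constraint into the growth step (illustrative only).

def prefix_span(sequences, min_support=2, max_len=3, prefix=()):
    results = []
    if len(prefix) == max_len:          # constraint pruning during growth
        return results
    counts = {}                         # items that can extend the prefix
    for seq in sequences:
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, sup in counts.items():
        if sup < min_support:
            continue
        new_prefix = prefix + (item,)
        results.append((new_prefix, sup))
        # project: keep suffixes after the first occurrence of `item`
        projected = [seq[seq.index(item) + 1:] for seq in sequences if item in seq]
        results.extend(prefix_span(projected, min_support, max_len, new_prefix))
    return results

db = [["a", "b", "c"], ["a", "c", "b"], ["a", "b", "d"]]
for pattern, support in prefix_span(db, min_support=2, max_len=2):
    print(pattern, support)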
---
paper_title: An approach to cognitive assessment in smart home
paper_content:
In this paper, we describe an approach to developing an ecologically valid framework for performing automated cognitive assessment. To automate assessment, we use a machine learning approach that builds a model of cognitive health based on observations of activity performance and uses lab-based assessment to provide ground truth for training and testing the learning algorithm. To evaluate our approach, we recruited older adults to perform a set of activities in our smart home test-bed. While participants perform activities, sensors placed in the smart home unobtrusively capture the progress of the activity. During analysis, we extract features that indicate how well participants perform the activities. Our machine-learning algorithm accepts these features as input and outputs the cognitive status of the participants as belonging to one of two groups: Cognitively healthy or Dementia. We conclude that machine-learning algorithms can distinguish between cognitively healthy older adults and older adults with dementia given adequate features that represent how well they have performed the activity.
---
paper_title: Anomaly Detection Using Temporal Data Mining in a Smart Home Environment
paper_content:
Objectives: To many people, home is a sanctuary. With the maturing of smart home technologies, many people with cognitive and physical disabilities can lead independent lives in their own homes for extended periods of time. In this paper, we investigate the design of machine learning algorithms that support this goal. We hypothesize that machine learning algorithms can be designed to automatically learn models of resident behavior in a smart home, and that the results can be used to perform automated health monitoring and to detect anomalies. Methods: Specifically, our algorithms draw upon the temporal nature of sensor data collected in a smart home to build a model of expected activities and to detect unexpected, and possibly health-critical, events in the home. Results: We validate our algorithms using synthetic data and real activity data collected from volunteers in an automated smart environment. Conclusions: The results from our experiments support our hypothesis that a model can be learned from observed smart home data and used to report anomalies, as they occur, in a smart home.
---
paper_title: Modeling the scaling properties of human mobility
paper_content:
While the fat tailed jump size and the waiting time distributions characterizing individual human trajectories strongly suggest the relevance of the continuous time random walk (CTRW) models of human mobility, no one seriously believes that human traces are truly random. Given the importance of human mobility, from epidemic modeling to traffic prediction and urban planning, we need quantitative models that can account for the statistical characteristics of individual human trajectories. Here we use empirical data on human mobility, captured by mobile phone traces, to show that the predictions of the CTRW models are in systematic conflict with the empirical results. We introduce two principles that govern human trajectories, allowing us to build a statistically self-consistent microscopic model for individual human mobility. The model not only accounts for the empirically observed scaling laws but also allows us to analytically predict most of the pertinent scaling exponents.
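A toy, CTRW-flavoured mobility sampler with heavy-tailed (Pareto) jump lengths and waiting times; the exponents and scales are arbitrary placeholders rather than values fitted to real traces:

# Generate a synthetic mobility trace with power-law jump lengths and
# waiting times, and a uniformly random direction for each jump.
import numpy as np

rng = np.random.default_rng(1)
n_jumps = 1000
jump_len = 1 + rng.pareto(1.6, n_jumps)          # heavy-tailed displacement
wait = 1 + rng.pareto(0.8, n_jumps)              # heavy-tailed waiting time
angle = rng.uniform(0, 2 * np.pi, n_jumps)

xy = np.cumsum(np.c_[jump_len * np.cos(angle), jump_len * np.sin(angle)], axis=0)
t = np.cumsum(wait)
print("total time:", t[-1], "net displacement:", np.hypot(*xy[-1]))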
---
paper_title: Sensor-Based Abnormal Human-Activity Detection
paper_content:
With the availability of affordable sensors and sensor networks, sensor-based human activity recognition has attracted much attention in artificial intelligence and ubiquitous computing. In this paper, we present a novel two-phase approach for detecting abnormal activities based on wireless sensors attached to a human body. Detecting abnormal activities is a particular important task in security monitoring and healthcare applications of sensor networks, among many others. Traditional approaches to this problem suffer from a high false positive rate, particularly when the collected sensor data are biased towards normal data while the abnormal events are rare. Therefore, there is a lack of training data for many traditional data mining methods to be applied. To solve this problem, our approach first employs a one-class support vector machine (SVM) that is trained on commonly available normal activities, which filters out the activities that have a very high probability of being normal. We then derive abnormal activity models from a general normal model via a kernel nonlinear regression (KNLR) to reduce false positive rate in an unsupervised manner. We show that our approach provides a good tradeoff between abnormality detection rate and false alarm rate, and allows abnormal activity models to be automatically derived without the need to explicitly label the abnormal training data, which are scarce. We demonstrate the effectiveness of our approach using real data collected from a sensor network that is deployed in a realistic setting.
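A minimal sketch of the first-stage filtering idea, assuming synthetic activity feature vectors: a one-class SVM is trained on normal data only and used to flag outliers (the second-phase abnormal-model derivation is not shown):

# Train on normal-activity features only; flag outliers for further analysis.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 5))          # features of normal activities
test = np.vstack([rng.normal(0.0, 1.0, size=(10, 5)),
                  rng.normal(4.0, 1.0, size=(3, 5))])  # last 3 rows are unusual

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)
pred = clf.predict(test)                               # +1 = looks normal, -1 = abnormal
print("flagged as abnormal:", np.where(pred == -1)[0])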
---
paper_title: Planning as Heuristic Search
paper_content:
In the AIPS98 Planning Contest, the HSP planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and SAT planners. Heuristic search planners like HSP transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik’s Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
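For illustration only, a very small forward best-first planner over STRIPS-like actions, using the crude heuristic "number of unsatisfied goal facts" in place of the additive heuristic discussed above; the domain, action names and heuristic choice are made up:

# Greedy best-first search over STRIPS-like states (a didactic sketch).
import heapq

ACTIONS = {
    "boil_water": ({"have_kettle"}, {"hot_water"}, set()),
    "add_tea":    ({"hot_water", "have_teabag"}, {"tea_ready"}, set()),
    "get_teabag": (set(), {"have_teabag"}, set()),
}  # name -> (preconditions, add effects, delete effects)

def plan(init, goal):
    frontier = [(len(goal - init), 0, frozenset(init), [])]
    seen = set()
    counter = 1
    while frontier:
        h, _, state, steps = heapq.heappop(frontier)
        if goal <= state:
            return steps
        if state in seen:
            continue
        seen.add(state)
        for name, (pre, add, dele) in ACTIONS.items():
            if pre <= state:
                nxt = frozenset((state - dele) | add)
                heapq.heappush(frontier,
                               (len(goal - nxt), counter, nxt, steps + [name]))
                counter += 1
    return None

print(plan({"have_kettle"}, {"tea_ready"}))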
---
paper_title: What Planner for Ambient Intelligence Applications?
paper_content:
The development of ambient intelligence (AmI) applications that effectively adapt to the needs of the users and environments requires, among other things, the presence of planning mechanisms for goal-oriented behavior. Planning is intended as the ability of an AmI system to build a course of actions that, when carried out by the devices in the environment, achieve a given goal. The problem of planning in AmI has not yet been adequately explored in the literature. We propose a planning system for AmI applications, based on the hierarchical task network (HTN) approach and called distributed hierarchical task network (D-HTN), able to find courses of actions to address given goals. The plans produced by D-HTN are flexibly tailored to exploit the capabilities of the devices currently available in the environment in the best way. We discuss both the architecture and the implementation of D-HTN. Moreover, we present some of the experimental results that validated the proposed planner in a realistic application scenario in which an AmI system monitors and answers the needs of a diabetic patient.
---
paper_title: Fast Planning Through Planning Graph Analysis
paper_content:
We introduce a new approach to planning in STRIPS-like domains based on constructing and analyzing a compact structure we call a planning graph. We describe a new planner, Graphplan, that uses this paradigm. Graphplan always returns a shortest possible partial-order plan, or states that no valid plan exists. We provide empirical evidence in favor of this approach, showing that Graphplan outperforms the total-order planner, Prodigy and the partial-order planner, UCPOP, on a variety of interesting natural and artificial planning problems. We also give empirical evidence that the plans produced by Graphplan are quite sensible. Since searches made by this approach are fundamentally different from the searches of other common planning methods, they provide a new perspective on the planning problem.
---
paper_title: O-Plan: a Web-based AI Planning Agent
paper_content:
In these demonstrations we show O-Plan, an AI planning agent working over the WWW. There are a number of demonstrations ranging from a simple “single shot” generation of Unix systems administration scripts through to comprehensive use of AI technologies across the whole planning lifecycle in military and civilian crisis situations. The applications are derived from actual user requirements and domain knowledge. The AI planning technologies demonstrated include: • Domain knowledge elicitation • Rich plan representation and use • Hierarchical Task Network Planning • Detailed constraint management • Goal structure-based plan monitoring • Dynamic issue handling • Plan repair in low and high tempo situations • Interfaces for users with different roles • Management of planning and execution workflow. The featured demonstrations, and others, are available at http://www.aiai.ed.ac.uk/~oplan/isd/
---
paper_title: The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home
paper_content:
This paper discusses the use of computer vision in pervasive healthcare systems, specifically in the design of a sensing agent for an intelligent environment that assists older adults with dementia during an activity of daily living. An overview of the techniques applied in this particular example is provided, along with results from preliminary trials completed using the new sensing agent. A discussion of the results obtained to date is presented, including technical and social issues that remain for the advancement and acceptance of this type of technology within pervasive healthcare.
---
paper_title: Plans and planning in smart homes
paper_content:
In this chapter, we review the use (and uses) of plans and planning in Smart Homes. Plans have several applications within Smart Homes, including: sharing task execution with the home's inhabitants, providing task guidance to inhabitants, and identifying emergencies. These plans are not necessarily generated automatically, nor are they always represented in a human-readable form. The chapter ends with a discussion of the research issues surrounding the integration of plans and planning into Smart Homes.
---
paper_title: An investigation into reactive planning in complex domains
paper_content:
A model of purely reactive planning is proposed based on the concept of reactive action packages. A reactive action package, or RAP, can be thought of as an independent entity pursuing some goal in competition with many others at execution time. The RAP processing algorithm addresses the problems of execution monitoring and replanning in uncertain domains with a single, uniform representation and control structure. Use of the RAP model as a basis for adaptive strategic planning is also discussed.
---
paper_title: Implementing decision support systems: a survey
paper_content:
The successful implementation of decision support systems (DSS) is a function of many factors. This paper provides an overview of the factors that determine the success or failure of DSS. The paper discusses the following topics and concepts: assessing successful implementation; research on implementation; implementation frameworks; the implementation process; and research guides.
---
paper_title: Context-aware knowledge modelling for decision support in e-health
paper_content:
In the context of e-health, professionals and healthcare service providers in various organisational and geographical locations are to work together, using information and communication systems, for the purpose of providing better patient-centred and technology-supported healthcare services at anytime and from anywhere. However, various organisations and geographies have varying contexts of work, which are dependent on their local work culture, available expertise, available technologies, people's perspectives and attitudes and organisational and regional agendas. As a result, there is the need to ensure that a suggestion - information and knowledge - provided by a professional to support decision making in a different, and often distant, organisation and geography takes into cognizance the context of the local work setting in which the suggestion is to be used. To meet this challenge, we propose a framework for context-aware knowledge modelling in e-health, which we refer to as ContextMorph. ContextMorph combines the commonKADS knowledge modelling methodology with the concept of activity landscape and context-aware modelling techniques in order to morph, i.e. enrich and optimise, a knowledge resource to support decision making across various contexts of work. The goal is to integrate explicit information and tacit expert experiences across various work domains into a knowledge resource adequate for supporting the operational context of the work setting in which it is to be used.
---
paper_title: Persuasion in ambient intelligence
paper_content:
Although the field of persuasive technologies has lately attracted a lot of attention, the notion of ambient persuasive technologies was introduced only recently. Ambient persuasive technologies can be integrated into every aspect of life, and as such have greater persuasive power than traditional box-like machines. This article discusses ambient persuasion and proposes a model that structures the knowledge from the social sciences on persuasion, attitude change, and behavior change. Using this model, the challenges that ambient persuasive technologies must meet to fulfill their persuasive promise are identified. From the ambient persuasion model it is clear that ambient persuasive technologies can go beyond traditional persuasive technologies by being context and situation aware, by interpreting individual differences between users, and by being a social actor in their own right.
---
paper_title: Physical, Social, and Experiential Knowledge in Pervasive Computing Environments
paper_content:
Pervasive computing designers and researchers often create services and applications to help people record their experiences. At the same time, cheap, small, and easy-to-deploy recording technologies are quickly emerging throughout public spaces. In many ways, these technologies are pervasive computing realized. Understanding how people deal with audio and video recording is therefore a good way to explore how people might adopt, adapt, and react to pervasive computing technologies in general. A long-term deployment of a system for recording experiences in informal spaces demonstrates that people use physical, social, and experiential knowledge to determine new technologies' relative utility and safety. In this paper, we aim to add significantly to the research surrounding security and privacy concerns by focusing on them rather than just noting them as a side effect of testing an application's utility and usability.
---
paper_title: A Survey on Privacy Preserving Data Mining
paper_content:
Privacy preservation has become an important issue in the development of data mining techniques. Privacy-preserving data mining has become increasingly popular because it allows the sharing of privacy-sensitive data for analysis purposes. However, people have become increasingly unwilling to share their data, frequently resulting in individuals either refusing to share their data or providing incorrect data. In turn, such problems in data collection can affect the success of data mining, which relies on sufficient amounts of accurate data in order to produce meaningful results. In recent years, the wide availability of personal data has made the problem of privacy-preserving data mining an important one. A number of methods have recently been proposed for privacy-preserving data mining of multidimensional data records. This paper reviews several privacy-preserving data mining technologies and then analyzes their merits and shortcomings.
---
paper_title: Minimum spanning tree partitioning algorithm for microaggregation
paper_content:
This paper presents a clustering algorithm for partitioning a minimum spanning tree with a constraint on minimum group size. The problem is motivated by microaggregation, a disclosure limitation technique in which similar records are aggregated into groups containing a minimum of k records. Heuristic clustering methods are needed since the minimum information loss microaggregation problem is NP-hard. Our MST partitioning algorithm for microaggregation is sufficiently efficient to be practical for large data sets and yields results that are comparable to the best available heuristic methods for microaggregation. For data that contain pronounced clustering effects, our method results in significantly lower information loss. Our algorithm is general enough to accommodate different measures of information loss and can be used for other clustering applications that have a constraint on minimum group size.
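A simplified sketch of the MST-partitioning idea, assuming synthetic 2-D records and the numpy and networkx libraries: build a minimum spanning tree and greedily cut heavy edges only when both resulting groups keep at least k records (information-loss criteria are omitted):

# Minimum-spanning-tree partitioning with a minimum group size k (toy version).
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 1, (6, 2)), rng.normal(8, 1, (6, 2))])
k = 3

G = nx.Graph()
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        G.add_edge(i, j, weight=float(np.linalg.norm(points[i] - points[j])))

mst = nx.minimum_spanning_tree(G)
for u, v, data in sorted(mst.edges(data=True),
                         key=lambda e: e[2]["weight"], reverse=True):
    mst.remove_edge(u, v)
    if all(len(c) >= k for c in nx.connected_components(mst)):
        continue                       # cut accepted: all groups still have >= k records
    mst.add_edge(u, v, **data)         # cut rejected: restore the edge

groups = [sorted(c) for c in nx.connected_components(mst)]
print(groups)                          # each group contains at least k records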
---
paper_title: Findings from a participatory evaluation of a smart home application for older adults.
paper_content:
The aim of this paper is to present a participatory evaluation of an actual "smart home" project implemented in an independent retirement facility. Using the participatory evaluation process, residents guided the research team through development and implementation of the initial phase of a smart home project designed to assist residents to remain functionally independent and age in place. We recruited nine residents who provided permission to install the technology in their apartments. We conducted a total of 75 interviews and three observational sessions. Residents expressed overall positive perceptions of the sensor technologies and did not feel that these interfered with their daily activities. The process of adoption and acceptance of the sensors included three phases, familiarization, adjustment and curiosity, and full integration. Residents did not express privacy concerns. They provided detailed feedback and suggestions that were integrated into the redesign of the system. They also reported a sense of control resulting from their active involvement in the evaluation process. Observational sessions confirmed that the sensors were not noticeable and residents did not change their routines. The participatory evaluation approach not only empowers end-users but it also allows for the implementation of smart home systems that address residents' needs.
---
paper_title: MagIC System: a New Textile-Based Wearable Device for Biological Signal Monitoring. Applicability in Daily Life and Clinical Setting
paper_content:
The paper presents a new textile-based wearable system for the unobtrusive recording of cardiorespiratory and motion signals during spontaneous behavior along with the first results concerning the application of this device in daily life and in a clinical environment. The system, called MagIC (Maglietta Interattiva Computerizzata), is composed of a vest, including textile sensors for detecting ECG and respiratory activity, and a portable electronic board for motion detection, signal preprocessing and wireless data transmission to a remote monitoring station. The MagIC system has been tested in freely moving subjects at work, at home, while driving and cycling and in microgravity conditions during a parabolic flight. Applicability of the system in cardiac in-patients is now under evaluation. Preliminary data derived from recordings performed on patients in bed and during physical exercise showed 1) good signal quality over most of the monitoring periods, 2) a correct identification of arrhythmic events, and 3) a correct estimation of the average beat-by-beat heart rate. These positive results support further development of the MagIC system, aimed at tuning this approach for routine use in clinical practice and in daily life.
---
paper_title: The IMMED project: wearable video monitoring of people with age dementia
paper_content:
In this paper, we describe a new application for multimedia indexing, using a system that monitors the instrumental activities of daily living to assess the cognitive decline caused by dementia. The system is composed of a wearable camera device designed to capture audio and video data of the instrumental activities of a patient, which is leveraged with multimedia indexing techniques in order to allow medical specialists to analyze several hour long observation shots efficiently.
---
paper_title: New approach for the early detection of dementia by recording in-house activities.
paper_content:
People with dementia often have low physical activity and some sleep problems. This study focused on daily life activities and sleeping conditions, and examined the use of these parameters for detecting dementia. Five passive infrared (IR) sensors were installed in each of 14 subjects' houses. Each subject lived alone. The subjects' in-house movements were recorded by the passive IR sensors for approximately 3 months (average, 78 days). Based on these records, the following parameters of life activities were assessed: (1) the number of outings, (2) total sleep time, (3) number of sleep interruptions, and (4) sleep rhythm. Subjects with impaired cognition (Mini Mental State Examination [MMSE] < 24) had a significantly lower number of outings (p = 0.001) and a tendency toward a shorter sleep time (p = 0.054) in comparison with control subjects (MMSE ≥ 24). These results suggest that the monitoring of life activities by using passive infrared sensors could be an efficient method for detecting dementia.
---
paper_title: Acoustic fall detection using Gaussian mixture models and GMM supervectors
paper_content:
We present a system that detects human falls in the home environment, distinguishing them from competing noise, by using only the audio signal from a single far-field microphone. The proposed system models each fall or noise segment by means of a Gaussian mixture model (GMM) supervector, whose Euclidean distance measures the pairwise difference between audio segments. A support vector machine built on a kernel between GMM supervectors is employed to classify audio segments into falls and various types of noise. Experiments on a dataset of human falls, collected as part of the Netcarity project, show that the method improves fall classification F-score to 67% from 59% of a baseline GMM classifier. The approach also effectively addresses the more difficult fall detection problem, where audio segment boundaries are unknown. Specifically, we employ it to reclassify confusable segments produced by a dynamic programming scheme based on traditional GMMs. Such post-processing improves a fall detection accuracy metric by 5% relative.
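A much-simplified stand-in for the GMM-supervector representation, assuming random placeholder features: each segment is summarised by the stacked means of a small per-segment Gaussian mixture and classified with an SVM (real systems adapt from a shared background model and use audio features such as MFCCs):

# Per-segment GMM means stacked into a fixed-length vector, then an SVM.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def supervector(frames, n_components=4):
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=0).fit(frames)
    return gmm.means_.ravel()

# 20 "fall" segments and 20 "noise" segments, each a (frames x features) matrix
segments = [rng.normal(1.0, 1.0, (50, 6)) for _ in range(20)] + \
           [rng.normal(-1.0, 1.0, (50, 6)) for _ in range(20)]
labels = [1] * 20 + [0] * 20

X = np.array([supervector(s) for s in segments])
clf = SVC(kernel="linear").fit(X, labels)
print("training accuracy:", clf.score(X, labels))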
---
paper_title: Towards automatic analysis of social interaction patterns in a nursing home environment from video
paper_content:
In this paper, we propose an ontology-based approach for analyzing social interaction patterns in a nursing home from video. Social interaction patterns are broken into individual activities and behavior events using a multi-level context hierarchy ontology framework. To take advantage of an ontology in representing how social interactions evolve, we design and refine the ontology based on knowledge gained from 80 hours of video recorded in the public spaces of a nursing home. The ontology is implemented using a dynamic Bayesian network to statistically model the multi-level concepts defined in the ontology. We have developed a prototype system to illustrate the proposed concept. Experimental results have demonstrated the feasibility of the proposed approach. The objective of this research is to automatically create concise and comprehensive reports of activities and behaviors of patients to support physicians and caregivers in a nursing facility.
---
paper_title: Home health monitoring system in the sleep
paper_content:
This paper describes the development of a health monitoring system that uses a non-invasive, microphone-based pressure sensor. The system can estimate sleep stages from the heartbeats and body motion measured by the sensor. The algorithm for sleep stage estimation was implemented on a single-chip microcomputer. The validity of the proposed system is confirmed by comparison with conventional sleep stage estimation results.
---
paper_title: A wearable health care system based on knitted integrated sensors
paper_content:
A comfortable health monitoring system named WEALTHY is presented. The system is based on a textile wearable interface implemented by integrating sensors, electrodes, and connections in fabric form, advanced signal processing techniques, and modern telecommunication systems. Sensors, electrodes and connections are realized with conductive and piezoresistive yarns. The sensorized knitted fabric is produced in a one step process. The purpose of this paper is to show the feasibility of a system based on fabric sensing elements. The capability of this system to acquire simultaneously several biomedical signals (i.e. electrocardiogram, respiration, activity) has been investigated and compared with a standard monitoring system. Furthermore, the paper presents two different methodologies for the acquisition of the respiratory signal with textile sensors. Results show that the information contained in the signals obtained by the integrated systems is comparable with that obtained by standard sensors. The proposed system is designed to monitor individuals affected by cardiovascular diseases, in particular during the rehabilitation phase. The system can also help professional workers who are subject to considerable physical and psychological stress and/or environmental and professional health risks.
---
paper_title: Ambient intelligence and pervasive systems for the monitoring of citizens at cardiac risk: New solutions from the EPI-MEDICS project
paper_content:
In western countries, heart disease is the main cause of premature death. Most cardiac deaths occur out of hospital. Symptoms are often interpreted incorrectly. Victims do not survive long enough to benefit from in-hospital treatments. To reduce the time before treatment, the only useful diagnostic tool to assess the presence of a cardiac event is the electrocardiogram (ECG). Event and transtelephonic ECG recorders are used to improve decision-making but require setting up new infrastructures. The pervasive solution proposed by the European EPI-MEDICS project is an intelligent Personal ECG Monitor for the early detection of cardiac events. It includes decision-making techniques, generates different alarm levels and forwards alarm messages to the relevant care providers by means of new-generation wireless communication. It is cost saving, involving care providers only if necessary and requiring no specific infrastructure. Healthcare becomes personalized, wearable, ubiquitous.
---
paper_title: Healthy Aims: Developing New Medical Implants and Diagnostic Equipment
paper_content:
Healthy Aims is a 23-million-euro, four-year project, funded under the EU's information society technology sixth framework program to develop intelligent medical implants and diagnostic systems (www.healthyaims.org). The project has 25 partners from 10 countries, including commercial, clinical, and research groups. This consortium represents a combination of disciplines to design and fabricate new medical devices and components as well as to test them in laboratories and subsequent clinical trials. The project focuses on medical implants for nerve stimulation and diagnostic equipment based on strain-gauge technology.
---
paper_title: Sensorized environment for self-communication based on observation of daily human behavior
paper_content:
This paper describes an intelligent environmental system, SELF (Sensorized Environment for Life), that enables a person to maintain his or her health through "self-communication." The system externalizes a "self" by storing personal data such as physiological status, analyzing it, and reporting useful information to assist the person, in maintaining his or her health. To establish the methodology to create such a system, this study analyzes and re-defines the self as an object of information processing. Self-communication is nothing but understanding ourselves objectively. It is not easy to put into practice because it requires a visualized or numerized representation of oneself. As an example of an externalized self, this study focused on the physiological status of a person. The physiological status is obtained as follows: SELF observes the person's behavior with distributed sensors invisibly embedded in the daily environment, extracts physiological parameters from it, analyzes the parameters, and accumulates the results. The accumulated results are used for reporting useful information to maintain the person's health. The authors constructed a model room for SELF consisting of a bed with pressure sensor array, a ceiling lighting dome with a microphone, and a washstand with a display. Fundamental results of self-externalization using SELF are presented.
---
paper_title: Liverpool Telecare Pilot: telecare as an information tool
paper_content:
The role of telecare systems is normally seen as identifying, and drawing attention to, situations of concern in the homes of service users. While this may currently be the primary reason for deploying such systems, the scope of telecare should not be limited to such an alarm generation role. The role of telecare in enhancing community-based care provision may be broadened by using similar, or identical, technology for providing relevant information to the carers of service users. In this paper we present a technical overview and discussion of an information provision approach to telecare which was trialled as one aspect of a pilot service in Liverpool, UK. The service used data collected by the telecare system to produce visual daily behavioural profiles and presented these to carers. The recipients for these profiles included social workers, occupational therapists and relatives of the service users. In this paper we discuss the visual profiles together with the benefits offered by such an information provision approach, including the perspective of an occupational therapist based in Liverpool.
---
paper_title: A daily behavior enabled hidden Markov model for human behavior understanding
paper_content:
This paper presents a Hierarchical Context Hidden Markov Model (HC-HMM) for behavior understanding from video streams in a nursing center. The proposed HC-HMM infers elderly behaviors through three contexts: spatial context, activities, and temporal context. Reflecting the hierarchical architecture, HC-HMM builds three modules corresponding to these three components and reasons over their primary and secondary relationships. The spatial contexts are defined from the spatial structure and therefore serve as the primary inference context. Temporal duration is associated with elderly activities, so activities are placed after the spatial contexts and temporal duration after the activities. Between the spatial context reasoning and the behavior reasoning over activities, a modified duration HMM is applied to extract activities. By this design, behaviors that differ in spatial context are distinguished in the first module, behaviors that differ in activities are determined in the second module, and the third module recognizes behaviors involving different temporal durations. An abnormality signaling process corresponding to different situations is also included for application purposes. The developed approach has been applied to understanding elderly behaviors in a nursing center. Results indicate the promise of the approach, which can accurately interpret 85% of the elderly behaviors. For abnormality detection, the approach was found to have 90% accuracy, with 0% false alarms.
---
paper_title: Portable Preimpact Fall Detector With Inertial Sensors
paper_content:
Falls and the resulting hip fractures in the elderly are a major health and economic problem. The goal of this study was to investigate the feasibility of a portable preimpact fall detector in detecting impending falls before the body impacts on the ground. It was hypothesized that a single sensor with the appropriate kinematics measurements and detection algorithms, located near the body center of gravity, would be able to distinguish an in-progress and unrecoverable fall from nonfalling activities. The apparatus was tested in an array of daily nonfall activities of young (n = 10) and elderly (n = 14) subjects, and simulated fall activities of young subjects. A threshold detection method was used with the magnitude of inertial frame vertical velocity as the main variable to separate the nonfall and fall activities. The algorithm was able to detect all fall events at least 70 ms before the impact. With the threshold adapted to each individual subject, all falls were detected successfully, and no false alarms occurred. This portable preimpact fall detection apparatus will lead to the development of a new generation inflatable hip pad for preventing fall-related hip fractures.
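A toy version of threshold detection on inertial vertical velocity, with an invented signal, sampling rate and threshold; it only illustrates the integrate-and-threshold idea, not the validated detector:

# Integrate vertical acceleration to velocity and raise an alarm when the
# downward speed crosses a (per-subject) threshold.
import numpy as np

fs = 100.0                                   # samples per second
t = np.arange(0, 2.0, 1.0 / fs)
acc_v = np.where(t < 1.0, 0.0, -9.81)        # free fall begins at t = 1 s
vel_v = np.cumsum(acc_v) / fs                # simple numerical integration

THRESHOLD = -1.3                             # m/s, tuned per subject in practice
alarm_idx = np.argmax(vel_v < THRESHOLD)     # first index below threshold
print("alarm at t = %.2f s, velocity = %.2f m/s" % (t[alarm_idx], vel_v[alarm_idx]))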
---
paper_title: Individualization, globalization and health - about sustainable information technologies and the aim of medical informatics
paper_content:
This paper discusses aspects of information technologies for health care, in particular on transinstitutional health information systems (HIS) and on health-enabling technologies, with some consequences for the aim of medical informatics. It is argued that with the extended range of health information systems and the perspective of having adequate transinstitutional HIS architectures, a substantial contribution can be made to better patient-centered care, with possibilities ranging from regional, national to even global care. It is also argued that in applying health-enabling technologies, using ubiquitous, pervasive computing environments and ambient intelligence approaches, we can expect that in addition care will become more specific and tailored for the individual, and that we can achieve better personalized care. In developing health care systems towards transinstitutional HIS and health-enabling technologies, the aim of medical informatics, to contribute to the progress of the sciences and to high-quality, efficient, and affordable health care that does justice to the individual and to society, may be extended to also contributing to self-determined and self-sufficient (autonomous) life. Reference is made and examples are given from the Yearbook of Medical Informatics of the International Medical Informatics Association (IMIA) and from the work of Professor Jochen Moehr.
---
paper_title: An eigenspace-based approach for human fall detection using Integrated Time Motion Image and Neural Network
paper_content:
Falls are a major health hazard for the elderly and a serious obstacle to independent living. Since falling has dramatic physical and psychological consequences, the development of intelligent video surveillance systems is important for providing safe environments. To this end, this paper proposes a novel approach for human fall detection based on the combination of integrated time motion images and an eigenspace technique. An integrated time motion image (ITMI) is a type of spatio-temporal database that includes motion and the time of motion occurrence. Applying the eigenspace technique to ITMIs leads to the extraction of eigen-motions, and finally an MLP neural network is used for precise classification of motions and determination of a fall event. Unlike existing fall detection systems that only deal with limited movement patterns, we considered a wide range of motions consisting of normal daily life activities, abnormal behaviors and unusual events. The reliable recognition rate obtained in the experiments underlines the satisfactory performance of our system.
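An eigenspace-plus-classifier sketch in the spirit of the approach above, assuming random stand-ins for the flattened motion images: PCA extracts the principal components ("eigen-motions") and an MLP classifies the projections:

# PCA projection of flattened 16x16 "motion images", then an MLP classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
falls = rng.normal(0.8, 0.2, (40, 256))       # flattened motion images of falls
daily = rng.normal(0.2, 0.2, (40, 256))       # flattened images of normal motion
X = np.vstack([falls, daily])
y = np.array([1] * 40 + [0] * 40)

pca = PCA(n_components=10).fit(X)
Z = pca.transform(X)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(Z, y)
print("training accuracy:", clf.score(Z, y))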
---
paper_title: A Smart and Passive Floor-Vibration Based Fall Detector for Elderly
paper_content:
Falls are very prevalent among the elderly. They are the second leading cause of unintentional-injury death for people of all ages and the leading cause of death for elders 79 years and older. Studies have shown that the medical outcome of a fall is largely dependent upon the response and rescue time. Hence, a highly accurate automatic fall detector is an important component of the living setting for older adult to expedite and improve the medical care provided to this population. Though there are several kinds of fall detectors currently available, they suffer from various drawbacks. Some of them are intrusive while others require the user to wear and activate the devices, and hence may fail in the event of user non-compliance. This paper describes the working principle and the design of a floor vibration-based fall detector that is completely passive and unobtrusive to the resident. The detector was designed to overcome some of the common drawbacks of the earlier fall detectors. The performance of the detector is evaluated by conducting controlled laboratory tests using anthropomorphic dummies. The results showed 100% fall detection rate with minimum potential for false alarms
---
paper_title: A survey on fall detection: Principles and approaches
paper_content:
Fall detection is a major challenge in the public health care domain, especially for the elderly, and reliable surveillance is a necessity to mitigate the effects of falls. The technology and products related to fall detection have always been in high demand within the security and the health-care industries. An effective fall detection system is required to provide urgent support and to significantly reduce the medical care costs associated with falls. In this paper, we give a comprehensive survey of different systems for fall detection and their underlying algorithms. Fall detection approaches are divided into three main categories: wearable device based, ambience device based and vision based. These approaches are summarised and compared with each other and a conclusion is derived with some discussions on possible future work.
---
paper_title: AMON: a wearable multiparameter medical monitoring and alert system
paper_content:
This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation.
---
paper_title: Behavioral Telemonitoring of the Elderly at Home: Detection of Nycthemeral Rhythms Drifts from Location Data
paper_content:
Supporting ageing in place and staying at home, delaying institutionalization, lightening the caregivers' burden, and improving the elderly's quality of life are among the expectations that TeleHealthCare aims to meet. This paper proposes a telemonitoring method to detect abnormal changes in behavior which may lead to an early entrance into dependency. The method makes it possible to detect and quantify a possible nycthemeral shift in the daily routine. Such a disorder is common among the elderly, but in severe cases it may be a marker of pathological behavior. In particular, in individuals with Alzheimer's disease, it appears to be an indicator of more rapid decline. In all cases, the detection of a disruption in the circadian activity clock calls for a follow-up visit. The method introduced is fast and computationally inexpensive. It measures the dissimilarity between sequences of activity using a variant of the Hamming distance traditionally used in information theory. The results are then interpreted according to the circular Gumbel distribution. This method is illustrated through a longitudinal study of the successive locations of an elderly woman within her own flat. In this preliminary work, the records were captured by passive infrared sensors placed in each room, allowing only the detection of elementary activities of daily living. The method was tested by varying the timebox width of the study (i.e., the duration of the watched activities) and then by distinguishing the day of the week. In both cases, it provides interesting insights into the behavior and the daily routine of the watched person as well as deviations from this routine. Important deviations will trigger alarms to alert the care providers. Diagnosing abnormal behaviors early is crucial for the person's management and treatment effectiveness, and consequently for his/her maintenance at home.
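A minimal sketch of a normalised Hamming dissimilarity between two days of location data discretised into fixed timeboxes; the room labels and timebox width are hypothetical:

# Fraction of timeboxes in which the dominant room differs between two days;
# a rising distance from a baseline day suggests a shift in the daily routine.
def hamming(day_a, day_b):
    assert len(day_a) == len(day_b)
    return sum(a != b for a, b in zip(day_a, day_b)) / len(day_a)

# 12 two-hour timeboxes per day, each holding the dominant room for that interval
baseline = ["bed"] * 4 + ["kitchen", "living", "living", "kitchen",
            "living", "living", "bathroom", "bed"]
shifted  = ["living"] * 2 + ["bed"] * 4 + ["kitchen", "living",
            "living", "kitchen", "living", "bed"]

print("distance to baseline:", hamming(baseline, shifted))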
---
paper_title: Detection of Cognitive Injured Body Region Using Multiple Triaxial Accelerometers for Elderly Falling
paper_content:
This paper aimed to use several triaxial acceleration sensor devices for the joint sensing of injured body parts when an accidental fall occurs. The model transmitted the information fed by the sensors distributed over various body parts to a computer through wireless transmission devices for further analysis and judgment, and employed a cognitive adjustment method to adjust the acceleration range of various body parts in different movements. The model can determine the possible occurrence of a fall accident when the acceleration significantly exceeds the usual acceleration range. In addition, after a fall accident occurs, the impact acceleration and the normal (habitual) acceleration can be compared to determine the level of injury. This study also implemented a sensing system for analysis. The areas of the body that may sustain greater impact force are marked in red in this system, so that more information can be provided to medical personnel for more accurate judgment.
---
paper_title: Mobile Human Airbag System for Fall Protection Using MEMS Sensors and Embedded SVM Classifier
paper_content:
This paper introduces a mobile human airbag system designed for fall protection for the elderly. A Micro Inertial Measurement Unit (μIMU) of 56 mm × 23 mm × 15 mm in size is built. This unit consists of three-dimensional MEMS accelerometers, gyroscopes, a Bluetooth module and a Micro Controller Unit (MCU). It records human motion information, and, through the analysis of falls using a high-speed camera, a lateral fall can be determined by a gyro threshold. A human motion database that includes falls and other normal motions (walking, running, etc.) is set up. Using a support vector machine (SVM) training process, we can classify falls and other normal motions successfully with an SVM filter. Based on the SVM filter, an embedded digital signal processing (DSP) system is developed for real-time fall detection. In addition, a smart mechanical airbag deployment system is finalized. The response time for the mechanical trigger is 0.133 s, which allows enough time for compressed air to be released before a person falls to the ground. The integrated system is tested and the feasibility of the airbag system for real-time fall protection is demonstrated.
---
paper_title: SenseCam: A Retrospective Memory Aid
paper_content:
This paper presents a novel ubiquitous computing device, the SenseCam, a sensor augmented wearable stills camera. SenseCam is designed to capture a digital record of the wearer's day, by recording a series of images and capturing a log of sensor data. We believe that reviewing this information will help the wearer recollect aspects of earlier experiences that have subsequently been forgotten, and thereby form a powerful retrospective memory aid. In this paper we review existing work on memory aids and conclude that there is scope for an improved device. We then report on the design of SenseCam in some detail for the first time. We explain the details of a first in-depth user study of this device, a 12-month clinical trial with a patient suffering from amnesia. The results of this initial evaluation are extremely promising; periodic review of images of events recorded by SenseCam results in significant recall of those events by the patient, which was previously impossible. We end the paper with a discussion of future work, including the application of SenseCam to a wider audience, such as those with neurodegenerative conditions such as Alzheimer's disease.
---
paper_title: Autominder: an intelligent cognitive orthotic system for people with memory impairment
paper_content:
The world’s population is aging at a phenomenal rate. Certain types of cognitive decline, in particular some forms of memory impairment, occur much more frequently in the elderly. This paper describes Autominder, a cognitive orthotic system intended to help older adults adapt to cognitive decline and continue the satisfactory performance of routine activities, thereby potentially enabling them to remain in their own homes longer. Autominder achieves this goal by providing adaptive, personalized reminders of (basic, instrumental, and extended) activities of daily living. Cognitive orthotic systems on the market today mainly provide alarms for prescribed activities at fixed times that are specified in advance. In contrast, Autominder uses a range of AI techniques to model an individual’s daily plans, observe and reason about the execution of those plans, and make decisions about whether and when it is most appropriate to issue reminders. Autominder is currently deployed on a mobile robot, and is being developed as part of the Initiative on Personal Robotic Assistants for the Elderly (the Nursebot project).
---
paper_title: Interaction in pervasive computing settings using Bluetooth-enabled active tags and passive RFID technology together with mobile phones
paper_content:
Passive RFID technology and unobtrusive Bluetooth-enabled active tags are means to augment products and everyday objects with information technology invisible to human users. This paper analyzes general interaction patterns in such pervasive computing settings where information about the user's context is derived by a combination of active and passive tags present in the user's environment. The concept of invisible preselection of interaction partners based on the user's context is introduced. It enables unobtrusive interaction with smart objects in that it combines different forms of association, e.g. implicit and user-initiated association, by transferring interaction stubs to mobile devices based on the user's current situation. Invisible preselection can also be used for remote interaction. By assigning phone numbers to smart objects, we propose making this remote user interaction with everyday items as easy as making a phone call. We evaluate the suitability of the proposed concepts on the basis of three concrete examples: a product monitoring system, a smart medicine cabinet, and a remote interaction application.
---
paper_title: Blind Path Identification System Design Base on RFID
paper_content:
Blind path identification with the help of RFID technology not only enables the blind to gain a sense of orientation while walking, but also provides knowledge of their exact location on the street. The main idea is that when a blind person walks on a path with electronic tags pre-built under the tiles of the blind path, the tags are activated by the radio wave from the RFID reader and send their identity codes, which the reader transmits to the computer; after a query of the database, the street number and the name of the shop at the person's current location are known immediately, and the corresponding voice data are sent via earphones to the blind person to provide precise identification. The system is composed of hardware and software. The hardware includes the electronic tags, a reader, an MCU, wireless earphones and a GSM circuit. The software consists of the database that controls the reading and writing of the tags, the management of the street information, and the sending of the voice data. Experimental results show that the system can largely achieve the proposed objective and provides novel technology support for blind path construction.
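A bare-bones sketch of the lookup step described above, with invented tag identifiers and location records: a tag ID read on the path is resolved to a location description that would then be handed to a text-to-speech layer (here simply printed):

# Resolve an RFID tag ID to a spoken location description (toy database).
TAG_DB = {
    "04A1B2": {"street_no": "12", "landmark": "bakery entrance"},
    "04A1B3": {"street_no": "14", "landmark": "bus stop"},
}

def announce(tag_id):
    record = TAG_DB.get(tag_id)
    if record is None:
        return "Unknown tag - please continue carefully."
    return "You are at number %s, near the %s." % (record["street_no"],
                                                   record["landmark"])

print(announce("04A1B3"))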
---
paper_title: Opportunity Knocks: A System to Provide Cognitive Assistance with Transportation Services
paper_content:
We present an automated transportation routing system, called “Opportunity Knocks,” whose goal is to improve the efficiency, safety and independence of individuals with mild cognitive disabilities. Our system is implemented on a combination of a Bluetooth sensor beacon that broadcasts GPS data, a GPRS-enabled cell-phone, and remote activity inference software. The system uses a novel inference engine that does not require users to explicitly provide information about the start or ending points of their journeys; instead this information is learned from users’ past behavior. Furthermore, we demonstrate how route errors can be detected and how the system helps to correct the errors with real-time transit information. In addition, we present a novel solution to the problem of labeling positions with place names.
---
paper_title: The next generation of mobile medication management solutions
paper_content:
In this paper, we describe the development of an internet-based system and a novel mobile home based device for the management of medication. We extend these concepts through the descriptions of an enhanced service with the use of mobile phone technology and home based digital TV services.
---
paper_title: ALARM-NET: Wireless sensor networks for assisted-living and residential monitoring
paper_content:
We describe ALARM-NET, a wireless sensor network for assisted-living and residential monitoring. It integrates environmental and physiological sensors in a scalable, heterogeneous architecture. A query protocol allows real-time collection and processing of sensor data by user interfaces and back-end analysis programs. One such program determines circadian activity rhythms of residents, feeding activity information back into the sensor network to aid context-aware power management, dynamic privacy policies, and data association. Communication is secured end-to-end to protect sensitive medical and operational information. The ALARM-NET system has been implemented as a network of MICAz sensors, stargate gateways, iPAQ PDAs, and PCs. Customized infrared motion and dust sensors, and integrated temperature, light, pulse, and blood oxygenation sensors are present. Software components include: TinyOS query processor and security modules for motes; AlarmGate, an embedded Java application for managing power, privacy, security, queries, and client connections; Java resident monitoring and sensor data querying applications for PDAs and PCs; and a circadian activity rhythm analysis program. We show the correctness, robustness, and extensibility of the system architecture through a scenario-based evaluation of the integrated ALARM-NET system, as well as performance data for individual software components.
---
paper_title: Orange alerts: Lessons from an outdoor case study
paper_content:
Ambient Assisted Living (AAL) is of particular relevance to those who may suffer from Alzheimer's Disease or dementia, and, of course, their carers. The slow but progressive nature of the disease, together with its neurological nature, ultimately compromises the behavior and function of people who may be essentially healthy from a physical perspective. An illustration of this is the wandering behavior frequently found in people with dementia. In this paper, a novel AAL solution for caregivers, particularly tailored for Alzheimer's patients who are in the early stage of the disease and exhibit unpredictable wandering behavior, is briefly described. Salient aspects of a user evaluation are presented, and some issues relevant to the practical design of AAL systems in dementia cases are identified.
---
paper_title: The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home
paper_content:
This paper discusses the use of computer vision in pervasive healthcare systems, specifically in the design of a sensing agent for an intelligent environment that assists older adults with dementia during an activity of daily living. An overview of the techniques applied in this particular example is provided, along with results from preliminary trials completed using the new sensing agent. A discussion of the results obtained to date is presented, including technical and social issues that remain for the advancement and acceptance of this type of technology within pervasive healthcare.
---
paper_title: Magic Medicine Cabinet: A Situated Portal for Consumer Healthcare
paper_content:
In this paper, we introduce a smart appliance for consumer healthcare called the "Magic Medicine Cabinet." It integrates technologies such as smart labels, face recognition, health monitoring devices, flat panel displays, and the Web to provide situated support for a broad range of health needs, including condition monitoring, medication reminders, interaction with one's own pharmacists and physicians, as well as access to personalized health information.
---
paper_title: The myheart project - Fighting cardiovascular diseases by prevention and early diagnosis
paper_content:
MyHeart is a so-called Integrated Project of the European Union aiming to develop intelligent systems for the prevention and monitoring of cardiovascular diseases. The project develops smart electronic and textile systems and appropriate services that empower users to take control of their own health status. Cardiovascular diseases (CVD) are the leading cause of death in developed countries: roughly 45% of all deaths in the EU are due to cardiovascular diseases, more than 20% of all European citizens suffer from a chronic cardiovascular disease, and Europe spends a hundred billion Euros annually on CVD. With an ageing population, it is a challenge for Europe to provide its citizens with healthcare at affordable costs. The aim of the MyHeart project is to fight CVD by prevention and early diagnosis. A healthy and preventive lifestyle as well as early diagnosis of heart disease could save millions of life years annually, reduce morbidity significantly and, at the same time, improve the quality of life of European citizens. Prevention offers the opportunity to systematically fight the origin of cardiovascular diseases as well as to improve the medical outcome after an event, and is therefore believed to be the key to improving the quality of care for cardiovascular diseases. Classical medical institutions offer only intermittent, episodic treatment, while prevention requires a lifelong, continuous change of habits and therefore a continuous healthcare delivery process. The institutional points of care cannot provide preventive healthcare in a cost-effective manner due to their inherent cost structure; novel methods are needed that provide continuous and ubiquitous access to medical excellence in a cost-effective way. The approach of the MyHeart project is to monitor vital body signs (VBS) with wearable technology, to process the measured data, and to give (therapy) recommendations.
---
paper_title: A tool to promote prolonged engagement in art therapy: design and development from arts therapist requirements
paper_content:
This paper describes the development of a tool that assists arts therapists working with older adults with dementia. Participation in creative activities is becoming accepted as a method for improving quality of life. This paper presents the design of a novel tool to increase the capacity of creative arts therapists to engage cognitively impaired older adults in creative activities. The tool is a creative arts touch-screen interface that presents a user with activities such as painting, drawing, or collage. It was developed with a user-centered design methodology in collaboration with a group of creative arts therapists. The tool is customizable by therapists, allowing them to design and build personalized therapeutic/goal-oriented creative activities for each client. In this paper, we evaluate the acceptability of the tool by arts therapists (our primary user group). We perform this evaluation qualitatively with a set of one-on-one interviews with arts therapists who work specifically with persons with dementia. We show how their responses during interviews support the idea of a customizable assistance tool. We evaluate the tool in simulation by showing a number of examples, and by demonstrating customizable components.
---
paper_title: Towards evolutionary ambient assisted living systems
paper_content:
Ambient assisted living (AAL) is advocated as a set of technological solutions that will enable the elderly population to maintain their independence for a longer time than would otherwise be the case. Though the facts motivating the need for AAL are indisputable, the inherently heterogeneous nature and requirements of the elderly population raise significant difficulties. One particular challenge is that of designing AAL systems that can evolve to meet the requirements of individuals as their needs and circumstances change. This demands the availability of an adaptive, open, scalable software platform that incorporates a select combination of autonomic and intelligent techniques. Given that the first generation of AAL systems will be deployed in the near future, it is incumbent on designers to factor this need for evolution and adaptivity into their designs and implementations. Thus this paper explores AAL from a number of perspectives and considers an agent-based middleware approach to realising an architecture for evolutionary AAL.
---
paper_title: SWAN: System for Wearable Audio Navigation
paper_content:
Wearable computers can certainly support audio-only presentation of information; a visual interface need not be present for effective user interaction. A system for wearable audio navigation (SWAN) is being developed to serve as a navigation and orientation aid for persons temporarily or permanently visually impaired. SWAN is a wearable computer consisting of audio-only output and tactile input via a handheld interface. SWAN aids a user in safe pedestrian navigation and includes the ability for the user to author new GIS data relevant to their needs of wayfinding, obstacle avoidance, and situational awareness support. Emphasis is placed on representing pertinent data with non-speech sounds through a process of sonification. SWAN relies on a geographic information system (GIS) infrastructure for supporting geocoding and spatialization of data. Furthermore, SWAN utilizes novel tracking technology.
---
paper_title: Wireless Health Care Service System for Elderly With Dementia
paper_content:
The purpose of this paper is to integrate the technologies of radio frequency identification, global positioning system, global system for mobile communications, and geographic information system (GIS) to construct a stray prevention system for elderly persons suffering from dementia, without interfering with their activities of daily living. We also aim to improve on the passive, manpower-intensive way of searching for a missing patient with the help of information technology. Our system provides four monitoring schemes, including indoor residence monitoring, outdoor activity area monitoring, emergency rescue, and remote monitoring modes, and we have developed a service platform to implement these monitoring schemes. The platform consists of a web service server, a database server, a message controller server, and a health-GIS (H-GIS) server. Family members or volunteer workers can identify the real-time positions of the missing elderly using mobile phones, PDAs, notebook PCs, and various other mobile devices through the service platform. System performance and reliability are analyzed. Experiments performed in four different time slots, from three locations, through three mobile telecommunication companies show that the overall transaction time is 34 s and the average deviation of the geographical location is about 8 m. A questionnaire survey of 11 users shows that eight users are satisfied with the system stability and 10 users would like to carry the locating device themselves or recommend it to their family members.
---
paper_title: A Blind Navigation System Using RFID for Indoor Environments
paper_content:
A location and tracking system becomes very important to our future world of pervasive computing, where information is all around us. Location is one of the most needed pieces of information for emerging and future applications. Since the public use of GPS satellites was allowed, several state-of-the-art devices have become part of our life, e.g. car navigators and mobile phones with built-in GPS receivers. However, location information for indoor environments is still very limited, as noted in the literature. Several techniques have been proposed to obtain location information inside buildings, such as radio signal triangulation. Using Radio Frequency Identification (RFID) tags is a new way of giving location information to users. Due to its passive communication circuit, an RFID tag can be embedded almost anywhere without an energy source. The tags store location information and give it to any reader that is within a proximity range, which can be up to 7-10 centimeters. In this paper, an RFID-based system for navigation in a building for blind or visually impaired people is presented.
---
paper_title: iMAT: Intelligent medication administration tools
paper_content:
iMAT is a system of automatic medication dispensers and software tools. It is for people who take medications on a long-term basis at home to stay well and independent. The system helps its users to improve rigor in compliance by preventing misunderstanding of medication directions and making medication schedules more tolerant to tardiness and negligence. This paper presents an overview of the assumptions, models, architecture and implementation of the system.
---
paper_title: CAALYX: a new generation of location-based services in healthcare
paper_content:
Recent advances in mobile positioning systems and telecommunications are providing the technology needed for the development of location-aware tele-care applications. This paper introduces CAALYX – Complete Ambient Assisted Living Experiment, an EU-funded project that aims at increasing older people's autonomy and self-confidence by developing a wearable light device capable of measuring specific vital signs of the elderly, detecting falls and location, and communicating automatically in real-time with his/her care provider in case of an emergency, wherever the older person happens to be, at home or outside.
---
paper_title: Wireless Medication Management System: Design and performance evaluation
paper_content:
To keep healthcare costs under control, a high-level of medication adherence, or compliance with medication regimen, must be achieved. In this paper, we show how wireless technologies can be used to improve medication adherence. More specifically, we present the design, operation, and evaluation of a wireless medication system, termed Smart Medication Management System (SMMS). We also present multiple metrics for quality of service and an analytical model for evaluating the medication adherence. The performance results show that very high medication adherence is achievable by SMMS for single and multiple medications even for patients with mild cognitive deficiency. The performance of context-aware reminders as implemented by SMMS is found to be most effective as compared to other interventions. Several powerful "composite" interventions are also proposed and evaluated for medication adherence. The results also show that with increasing hospitalization cost the healthcare savings due to improved medication adherence become even more significant. The proposed work forms the basis to provide personalized interventions to patients for improving medication adherence in multiple surroundings using wireless technologies.
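The paper evaluates adherence with several quality-of-service metrics; as a simple hedged example (the tolerated delay and the dose-log format below are assumptions for illustration, not the SMMS definitions), adherence can be computed as the fraction of scheduled doses actually taken within an allowed window:

from datetime import datetime, timedelta

# Hypothetical dose log: scheduled time -> actual intake time (None if missed).
dose_log = {
    datetime(2024, 1, 1, 8, 0):  datetime(2024, 1, 1, 8, 20),
    datetime(2024, 1, 1, 20, 0): None,
    datetime(2024, 1, 2, 8, 0):  datetime(2024, 1, 2, 9, 45),
}

def adherence_rate(log, max_delay=timedelta(hours=1)):
    """Fraction of scheduled doses taken within max_delay of their scheduled time."""
    on_time = sum(
        1 for scheduled, taken in log.items()
        if taken is not None and abs(taken - scheduled) <= max_delay
    )
    return on_time / len(log)

print(f"Adherence: {adherence_rate(dose_log):.0%}")  # 33% in this toy log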
---
paper_title: SAPHIRE: A Multi-Agent System for Remote Healthcare Monitoring through Computerized Clinical Guidelines
paper_content:
Due to the increasing percentage of the greying population and of patients with chronic diseases, the world is facing serious problems in delivering high-quality healthcare services to citizens at reasonable costs. In this paper, we provide a clinical decision support system for the remote monitoring of patients at their homes and at the hospital, to decrease the load on medical practitioners as well as healthcare costs. Clinical guidelines are exploited as the expert knowledge required to build the clinical decision support system. Examining the reasons for the failure of adoption of clinical guidelines by healthcare institutes, we realized that necessary measures should be taken to establish a semantic interoperability environment in order to be able to communicate with various heterogeneous clinical systems. In this paper these requirements are detailed, and a semantic infrastructure enabling easy deployment and execution of clinical guidelines in heterogeneous healthcare environments is presented. Due to the nature of the problem, which necessitates having many autonomous entities dealing with heterogeneous distributed resources, we have built the system as a multi-agent system. The architecture described in this paper is realized within the scope of the IST-27074 SAPHIRE project.
---
paper_title: Context-Aware wireless sensor networks for assisted-living and residential monitoring
paper_content:
Improving the quality of healthcare and the prospects of "aging in place" using wireless sensor technology requires solving difficult problems in scale, energy management, data access, security, and privacy. We present AlarmNet, a novel system for assisted living and residential monitoring that uses a two-way flow of data and analysis between the front- and back-ends to enable context-aware protocols that are tailored to residents' individual patterns of living. AlarmNet integrates environmental, physiological, and activity sensors in a scalable heterogeneous architecture. The SenQ query protocol provides real-time access to data and lightweight in-network processing. Circadian activity rhythm analysis learns resident activity patterns and feeds them back into the network to aid context-aware power management and dynamic privacy policies.
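A minimal sketch of the kind of circadian activity rhythm analysis mentioned above (assuming motion-sensor events arrive as hour-of-day stamps; the 24-bin profile and deviation threshold are simplifications for illustration, not AlarmNet's actual algorithm) aggregates events into an hourly profile and flags hours that deviate strongly from a learned baseline:

import numpy as np

def hourly_profile(event_hours):
    """Count motion events per hour of day (24-bin activity rhythm)."""
    profile = np.zeros(24)
    for h in event_hours:
        profile[h] += 1
    return profile

# Hypothetical data: baseline averaged over many days, plus today's events.
baseline = hourly_profile([7, 8, 8, 9, 12, 13, 18, 19, 22] * 10) / 10.0
today = hourly_profile([7, 8, 12, 13, 2, 3, 3])  # unusual night-time activity

deviation = today - baseline
for hour in np.where(np.abs(deviation) > 1.5)[0]:
    print(f"Hour {int(hour):02d}: activity deviates from baseline by {deviation[hour]:+.1f} events")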
---
paper_title: TigerPlace: an innovative educational and research environment
paper_content:
---
paper_title: Telemonitoring and telerehabilitation of patients with Parkinson's disease: health technology assessment of a novel wearable step counter.
paper_content:
Step counting is an important index of motion in telemonitoring. One of the most widespread wearable systems designed for this purpose is the pedometer. The accuracy of commercial pedometers has been reported in the literature, and several limits have been found in many commercial systems, both in healthy subjects and in people with disabilities. Furthermore, commercial pedometers lack interoperability. This paper introduces a new wearable step-counting system for telemonitoring applications. The system is based on a wearable device with a force-sensing resistor affixed to the gastrocnemius muscle, which monitors the muscular expansion correlated with gait. Data exchange is assured by the XTR-434H (Aurel, FC, Italy) telemetric system. The proposed gastrocnemius expansion measurement unit (GEMU) was tested on 5 subjects with Parkinson's disease at Level 3 of the Tinetti test of unbalance. Ten repetitions of 500 steps at three different speeds (fast, slow, and normal) were performed. The mean error was <0.5%. Results also showed that GEMU performed better than an accelerometer unit (considered in the literature the best solution for this disability). The study showed that GEMU performed well in subjects with Parkinson's disease, which causes a high degree of unbalance that confounds motion style. The next phase will be the optimization of GEMU for long-term medical applications at the patient's home.
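The paper does not publish its counting algorithm, but a minimal sketch of threshold-based step counting on a muscle-expansion signal (the sampling assumptions, threshold and refractory period below are invented for illustration) looks like this:

def count_steps(signal, threshold=0.6, refractory=20):
    """Count upward threshold crossings, ignoring re-crossings within
    `refractory` samples to avoid double-counting a single step."""
    steps, last_step = 0, -refractory
    for i in range(1, len(signal)):
        crossed_up = signal[i - 1] < threshold <= signal[i]
        if crossed_up and i - last_step >= refractory:
            steps += 1
            last_step = i
    return steps

# Toy force-sensing-resistor trace: two pulses = two steps.
trace = [0.1] * 10 + [0.9] * 10 + [0.1] * 30 + [0.9] * 10 + [0.1] * 10
print(count_steps(trace))  # -> 2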
---
paper_title: A sensor-enhanced health information system to support automatically controlled exercise training of COPD patients
paper_content:
To improve the quality of life of patients suffering from chronic obstructive pulmonary disease (COPD), we developed a concept and prototype of a sensor-enhanced health information system. This system includes a component that monitors the rehabilitation training and automatically controls the target load for the exercise on the basis of the patient's vital data. The system also detects potentially critical health states and communicates alarms to external users. The component interacts with a personal electronic health record (PHR) that provides additional health-related information for the decision-making process, as feedback to the user, and as an opportunity for physicians to optimize the user's exercise plan. The PHR uses current medical informatics standards to store and transmit training data to healthcare professionals and to provide a maximum of interoperability with their information systems. We have integrated these components in a service-oriented platform design that is located in the home environment of the user.
---
paper_title: Sensor networks for medical care
paper_content:
Sensor networks have the potential to greatly impact many aspects of medical care. By outfitting patients with wireless, wearable vital sign sensors, collecting detailed real-time data on physiological status can be greatly simplified. However, there is a significant gap between existing sensor network systems and the needs of medical care. In particular, medical sensor networks must support multicast routing topologies, node mobility, a wide range of data rates and high degrees of reliability, and security. This paper describes our experiences with developing a combined hardware and software platform for medical sensor networks, called CodeBlue. CodeBlue provides protocols for device discovery and publish/subscribe multihop routing, as well as a simple query interface that is tailored for medical monitoring. We have developed several medical sensors based on the popular MicaZ and Telos mote designs, including a pulse oximeter, EKG and motion-activity sensor. We also describe a new, miniaturized sensor mote designed for medical use. We present initial results for the CodeBlue prototype demonstrating the integration of our medical sensors with the publish/subscribe routing substrate. We have experimentally validated the prototype on our 30-node sensor network testbed, demonstrating its scalability and robustness as the number of simultaneous queries, data rates, and transmitting sensors are varied. We also study the effect of node mobility, fairness across multiple simultaneous paths, and patterns of packet loss, confirming the system’s ability to maintain stable routes despite variations in node location and
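As a rough sketch of the publish/subscribe pattern that CodeBlue builds on (Python, single-process and in-memory only; the channel names and callback interface are assumptions for illustration, not the CodeBlue API), a caregiver device subscribes to a vital-sign channel and a sensor mote publishes readings to it:

from collections import defaultdict

class PubSub:
    """Minimal topic-based publish/subscribe broker (single process, in memory)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = PubSub()
bus.subscribe("vitals/spo2", lambda m: print("PDA display:", m))

# A pulse-oximeter mote publishing a reading.
bus.publish("vitals/spo2", {"patient": "bed-3", "spo2": 97, "pulse": 72})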
---
paper_title: ECG telemonitoring during home-based cardiac rehabilitation in heart failure patients
paper_content:
We assessed ECGs recorded during home-based telemonitored cardiac rehabilitation (HTCR) in stable patients with heart-failure. The study included 75 patients with heart failure (NYHA II, III), with a mean age of 56 years. They participated in an eight-week programme of home cardiac rehabilitation which was telemonitored with a device which recorded 16-s fragments of their ECG. These fragments were transmitted via mobile phone to a monitoring centre. The times of the automatic ECG recordings were pre-set and coordinated with the cardiac rehabilitation. Patients were able to make additional recordings when they felt unwell using a tele-event-Holter ECG facility. During the study, 5757 HTCR sessions were recorded and 11,534 transmitted ECG fragments were evaluated. Most ECGs originated from the automatic recordings. Singular supraventricular and ventricular premature beats and ventricular couplets were detected in 16%, 69% and 16% of patients, respectively. Twenty ECGs were recorded when patients felt unwell: non sustained ventricular tachycardia occurred in three patients and paroxysmal atrial fibrillation episode in two patients. Heart failure patients undergoing HTCR did not develop any arrhythmia which required a change of the procedure, confirming it was safe. Cardiac rehabilitation at home was improved by utilizing the tele-event-Holter ECG facility.
---
paper_title: Wearable Assistant for Parkinson’s Disease Patients With the Freezing of Gait Symptom
paper_content:
In this paper, we present a wearable assistant for Parkinson's disease (PD) patients with the freezing of gait (FOG) symptom. This wearable system uses on-body acceleration sensors to measure the patients' movements. It automatically detects FOG by analyzing frequency components inherent in these movements. When FOG is detected, the assistant provides a rhythmic auditory signal that stimulates the patient to resume walking. Ten PD patients tested the system while performing several walking tasks in the laboratory. More than 8 h of data were recorded. Eight patients experienced FOG during the study, and 237 FOG events were identified by professional physiotherapists in a post hoc video analysis. Our wearable assistant was able to provide online assistive feedback for PD patients when they experienced FOG. The system detected FOG events online with a sensitivity of 73.1% and a specificity of 81.6%. The majority of patients indicated that the context-aware automatic cueing was beneficial to them. Finally, we characterize the system performance with respect to the walking style, the sensor placement, and the dominant algorithm parameters.
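The detection relies on frequency analysis of the acceleration signal; a commonly used formulation, sketched here with band limits and a threshold chosen for illustration rather than taken from the paper, is a "freeze index": the ratio of spectral power in a 3-8 Hz freeze band to power in a 0.5-3 Hz locomotion band.

import numpy as np

def freeze_index(acc_window, fs):
    """Ratio of 'freeze band' (3-8 Hz) to 'locomotion band' (0.5-3 Hz) power
    in one window of acceleration samples."""
    spectrum = np.abs(np.fft.rfft(acc_window - np.mean(acc_window))) ** 2
    freqs = np.fft.rfftfreq(len(acc_window), d=1.0 / fs)
    freeze = spectrum[(freqs >= 3) & (freqs < 8)].sum()
    locomotion = spectrum[(freqs >= 0.5) & (freqs < 3)].sum()
    return freeze / (locomotion + 1e-9)

fs = 64  # Hz, assumed sampling rate
t = np.arange(0, 4, 1.0 / fs)
walking = np.sin(2 * np.pi * 1.5 * t)   # ~1.5 Hz gait component
freezing = np.sin(2 * np.pi * 6.0 * t)  # ~6 Hz trembling component
print(freeze_index(walking, fs) > 2.0)   # False: below an assumed FOG threshold
print(freeze_index(freezing, fs) > 2.0)  # True: flagged as a freezing episode

In a deployed assistant, windows exceeding the threshold would trigger the rhythmic auditory cue described in the abstract.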
---
paper_title: Ubiquitous Rehabilitation Center: An Implementation of a Wireless Sensor Network Based Rehabilitation Management System
paper_content:
This paper documents the implementation of a system, the ubiquitous rehabilitation center, which integrates a Zigbee-based wireless network with sensors that monitor patients and rehabilitation machines. These sensors interface with Zigbee motes which in turn interface with a server application that manages all aspects of the rehabilitation center and allows rehabilitation specialists to assign prescriptions to patients. Patients carry out prescriptions while the system monitors and collects all pertinent session data, storing it in a database. The rehabilitation specialist is then able to use trend-based analysis techniques on collected data in order to evaluate a patient's condition. Specialists then assign further prescriptions based on this evaluation. Consequently patients are treated more effectively while potentially spending less time in rehabilitation. This paper demonstrates how the ubiquitous rehabilitation center improves on the traditional rehabilitation center by highlighting the differences in rehabilitation methods using the two systems in an ACL rehabilitation test case.
---
paper_title: BASE - An interactive technology solution to deliver balance and strength exercises to older adults
paper_content:
There is a high prevalence of falls in older adults. It has been recognised that a highly challenging balance and strength retraining program can reduce the incidence of falls significantly. This paper describes the design and initial evaluation of a home-based interactive technology solution to deliver a personalised, physiotherapist prescribed exercise program to older adults. We adopted a user centred design process to ensure such technology is easy to use, acting as a facilitator to completing the exercise program, rather than an inhibitor. Initial usability findings, in addition to participant attitudes towards such a system, are outlined.
---
paper_title: Ambient Intelligence: The Evolution of Technology, Communication and Cognition Towards the Future of Human-Computer Interaction
paper_content:
The metaphor of Ambient Intelligence (AmI) tries to picture a vision of the future where all of us will be surrounded by 'intelligent' electronic environments that claim to be sensitive and responsive to our needs. Ambient Intelligence without invasion of privacy represents a long-term vision for the EU Information Society Technologies research programme. A strong multi-disciplinary and collaborative approach is a key requirement for large-scale technology innovation and the development of effective applications. Up to now, most of the books and papers related to AmI have focused their analysis on the technological potential only. An important feature of this volume is the link between the technology, through the concepts of ubiquitous computing and intelligent interfaces, and the human experience of interacting in the world, through a neuro-psychological vision centred on the concept of 'presence'. Presence, the sense of being there, is the experience of projecting one's mind through media to other places, people and designed environments. The combination of recent discoveries in cognitive neuroscience, which make it possible to acquire a better understanding of the human aspects of presence, and breakthroughs at the level of the enabling technologies makes it increasingly possible to build novel systems based on this understanding. The goal of this volume is to assess the technologies and processes that are behind the AmI vision, in order to help the development of state-of-the-art applications. More specifically, this volume aims at supporting researchers and scientists interested in understanding and exploiting the potential of AmI.
---
paper_title: A wireless body area network of intelligent motion sensors for computer assisted physical rehabilitation
paper_content:
Background: Recent technological advances in integrated circuits, wireless communications, and physiological sensing allow miniature, lightweight, ultra-low power, intelligent monitoring devices. A number of these devices can be integrated into a Wireless Body Area Network (WBAN), a new enabling technology for health monitoring.
---
paper_title: Investigation of gait features for stability and risk identification in elders
paper_content:
Today, eldercare demands a greater degree of versatility in healthcare. Automatic monitoring devices and sensors are under development to help senior citizens achieve greater autonomy, and, as situations arise, alert healthcare providers. In this paper, we study gait patterns based on extracted silhouettes from image sequences. Three features are investigated through two different image capture perspectives: shoulder level, spinal incline, and silhouette centroid. Through the evaluation of fourteen image sequences representing a range of healthy to frail gait styles, features are extracted and compared to validation results using a Vicon motion capture system. The results obtained show promise for future studies that can increase both the accuracy of feature extraction and pragmatism of machine monitoring for at-risk elders.
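As an illustration of the listed features, the sketch below (assuming a binary silhouette mask is available as a NumPy array; the shoulder-level heuristic is an assumption, not the authors' implementation) extracts the silhouette centroid and an approximate shoulder level per frame, which can then be tracked over an image sequence:

import numpy as np

def silhouette_features(mask):
    """mask: 2-D binary array (1 = person pixel). Returns (centroid_row,
    centroid_col, shoulder_row), with shoulder level approximated as the row
    a fixed fraction below the top of the silhouette."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    top, bottom = rows.min(), rows.max()
    shoulder_row = top + 0.18 * (bottom - top)  # assumed body-proportion ratio
    return rows.mean(), cols.mean(), shoulder_row

# Toy 6x6 silhouette
mask = np.zeros((6, 6), dtype=int)
mask[1:6, 2:4] = 1
print(silhouette_features(mask))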
---
paper_title: Brain-Operated Assistive Devices: the ASPICE Project
paper_content:
The ASPICE project aims at the development of a system which allows neuromotor-disabled persons to improve or recover their mobility (directly or by emulation) and their communication with the surrounding environment. The system pivots around a software controller running on a personal computer, which offers the user a proper interface to communicate through input interfaces matched with the individual's residual abilities. The system uses the user's input to control domotic devices - such as remotely controlled lights, TV sets, etc. - and a Sony AIBO robot. At this time, the system is under clinical validation, which will provide assessment through patients' feedback and guidelines for customized system installation.
---
paper_title: The DAT project: a smart home environment for people with disabilities
paper_content:
The DAT project is a research initiative that aims at building up a smart home environment where people with disabilities can improve their ability to cope with daily life activities by means of technologically advanced home automation solutions. The project has a threefold purpose. The smart home will be used as a physical setting where clients with disabilities can follow individual programs aimed at improving their independence in the home environment. The smart house will also be used as a demonstration and educational laboratory where anybody interested can gain knowledge of the latest advancements in the fields of home automation and tele-care. Finally, the smart home will be used as a research laboratory for testing and developing new clinical protocols and innovative solutions in the fields of environmental control and home care. This article describes the architecture of the smart home, the design of the home automation system, and the research programs associated with the DAT project.
---
paper_title: A Pervasive Visual–Haptic Framework for Virtual Delivery Training
paper_content:
Thanks to the advances of virtual reality (VR) technologies and haptic systems, virtual simulators are increasingly becoming a viable alternative to physical simulators in medicine and surgery, though many challenges still remain. In this study, a pervasive visual-haptic framework aimed at the training of obstetricians and midwives in vaginal delivery is described. The haptic feedback is provided by means of two hand-based haptic devices able to reproduce force feedback on fingers and arms, thus enabling a much more realistic manipulation with respect to stylus-based solutions. The interactive simulation is not solely driven by an approximated model of complex forces and physical constraints but, instead, is approached by a formal modeling of the whole labor and of the assistance/intervention procedures performed by means of a timed automata network, applied to a parametrical 3-D model of the anatomy able to mimic a wide range of configurations. This novel methodology is able to represent not only the sequence of the main events associated with either a spontaneous or an operative childbirth process, but also to help in validating the manual intervention, as the actions performed by the user during the simulation are evaluated according to established medical guidelines. A discussion of the first results as well as of the challenges still unaddressed is included.
---
paper_title: Advanced robotic residence for the elderly/the handicapped: realization and user evaluation
paper_content:
A novel advanced robotic residence, Intelligent Sweet Home (ISH), is developed at KAIST, Korea for testing advanced concepts for independent living of the elderly and the physically handicapped. The work focuses on human-friendly technical solutions for motion/mobility assistance, health monitoring, and advanced human-machine interfaces that provide easy control of both assistive devices and home-installed appliances. To improve the inhabitant's comfort, an intelligent bed, intelligent wheelchair and mechatronic transfer robot were developed. And, various interfaces based on hand gestures and voice, and health monitoring system were studied. This paper emphasizes the realization scheme of Intelligent Sweet Home and user evaluation by a physically handicapped person.
---
paper_title: Robotic smart house to assist people with movement disabilities
paper_content:
This paper introduces a new robotic smart house, Intelligent Sweet Home, developed at KAIST in Korea, which is based on several robotic agents and aims at testing advanced concepts for independent living of the elderly and people with disabilities. The work focuses on technical solutions for human-friendly assistance in motion/mobility and advanced human-machine interfaces that provide simple control of all assistive robotic systems and home-installed appliances. The smart house concept includes an intelligent bed, intelligent wheelchair, and robotic hoist for effortless transfer of the user between bed and wheelchair. The design solutions comply with most of the users' requirements and suggestions collected by a special questionnaire survey of people with disabilities. The smart house responds to the user's commands as well as to the recognized intentions of the user. Various interfaces, based on hand gestures, voice, body movement, and posture, have been studied and tested. The paper describes the overall system structure and explains the design and functionality of some main system components.
---
paper_title: perCues: Trails of Persuasion for Ambient Intelligence
paper_content:
The realization of the ambient intelligence (AmI) vision will have a profound impact on our everyday lives and society. AmI applied in contexts like homes or public spaces will not only affect individual users but influence entire groups of users. The question is how we can apply such technologies to persuade groups and individual users. Our approach is to design AmI environments by borrowing a concept which works very well in biological and social systems: Collective Intelligence (CI). The intelligence of a group surpasses the individual intelligences and leads to improved problem solving capabilities of individuals and groups. From nature we borrow examples of cues in the environment to stimulate goal directed collective intelligence (perCues). The application of perCues in AmI environments helps to persuade users to reach a common goal like decreasing environmental pollution. Adopting CI for AmI we blaze a trail for the design of persuasive AmI environments.
---
paper_title: International survey on the Dance Dance Revolution game
paper_content:
Despite the growing popularity of physically interactive game-playing, no user studies have been conducted on dance gaming (one of the most popular forms of playing via full-body movements). An online questionnaire was used to study various factors related to Dance Dance Revolution (DDR) gaming. In total, 556 respondents from 22 countries of ages 12 to 50 filled in a questionnaire which examined the players' gaming background, playing styles and skills, motivational and user experience factors, social issues, and physical effects of dance gaming, and taking part in dance-gaming related activities. The results show that playing DDR has a positive effect on the social life and physical health of players, as it improves endurance, muscle strength and sense of rhythm, and creates a setting where new friends can be found.
---
paper_title: perFrames: Persuasive Picture Frames for Proper Posture
paper_content:
Poor sitting habits and bad sitting posture are often the cause for musculoskeletal disorders like back pain. Also office employees are affected, because they carry out the majority of their work sitting in front of computers. Therefore we aim at sensitizing and motivating office employees regarding preventive healthcare. We have developed a persuasive interface in form of an interactive picture frame which integrates unobtrusively into the working environment --- the perFrame. This frame contains a moving portrait of a person the employee loves or likes. It provides affective feedback in order to persuade employees for better sitting habits while working with a computer. We conducted a preliminary in-situ study, deploying these picture frames on the desktops of eight office employees. The results highlight the employees' acceptance of our application as well as its potential to foster awareness and achieve persuasion regarding healthy behavior in the office.
---
paper_title: The Intelligent e-Therapy System: A New Paradigm for Telepsychology and Cybertherapy.
paper_content:
One of the main drawbacks of the computer-assisted psychology tools developed up to now is related to the real-time customisation and adaptation of the content to each patient depending on his/her activity. In this paper we propose a new approach for mental e-health treatments named Intelligent e-Therapy (eIT), with capabilities for ambient intelligence and ubiquitous computing. From a technical point of view, an eIT system is based on four fundamental axes: ambient intelligence for capturing physiological, psychological and contextual information about the patient; persuasive computing for changing/reinforcing behaviours; ubiquitous computing for using the system at any place and at any time; and support for multiple technological platforms.
---
paper_title: Ambient Intelligence and Persuasive Technology: The Blurring Boundaries Between Human and Technology
paper_content:
The currently developing fields of Ambient Intelligence and Persuasive Technology bring about a convergence of information technology and cognitive science. Smart environments that are able to respond intelligently to what we do and that even aim to influence our behaviour challenge the basic frameworks we commonly use for understanding the relations and role divisions between human beings and technological artifacts. After discussing the promises and threats of these technologies, this article develops alternative conceptions of agency, freedom, and responsibility that make it possible to better understand and assess the social roles of Ambient Intelligence and Persuasive Technology. The central claim of the article is that these new technologies urge us to blur the boundaries between humans and technologies also at the level of our conceptual and moral frameworks.
---
paper_title: Designing motivation using persuasive ambient mirrors
paper_content:
In this article, we describe four case studies of ubiquitous persuasive technologies that support behavior change through personalized feedback reflecting a user's current behavior or attitude. The first case study is Persuasive Art, reflecting the current status of a user's physical exercise in artistic images. The second system, Virtual Aquarium, reflects a user's toothbrushing behavior in a Virtual Aquarium. The third system, Mona Lisa Bookshelf, reflects the situation of a shared bookshelf on a Mona Lisa painting. The last case study is EcoIsland, reflecting cooperative efforts toward reducing CO2 emissions as a set of virtual islands shared by a neighborhood. Drawing from the experience of designing and evaluating these systems, we present guidelines for the design of persuasive ambient mirrors: systems that use visual feedback to effect changes in users' everyday living patterns. In particular, we feature findings in choosing incentive systems, designing emotionally engaging feedback, timing feedback, and persuasive interaction design. Implications for current design efforts as well as for future research directions are discussed.
---
paper_title: Persuasion in ambient intelligence
paper_content:
Although the field of persuasive technologies has lately attracted a lot of attention, only recently the notion of ambient persuasive technologies was introduced. Ambient persuasive technologies can be integrated into every aspect of life, and as such have greater persuasive power than the traditional box like machines. This article discusses ambient persuasion and poses a model that structures the knowledge from social sciences on persuasion, attitude change, and behavior change. Using this model the challenges of ambient persuasive technologies to fulfill its persuasive promises are identified. From the ambient persuasion model it is clear that ambient persuasive technologies can go beyond traditional persuasive technologies by being context and situational aware, by interpreting individual differences between users, and by being a social actor in their own right.
---
paper_title: Motivating People in Smart Environments
paper_content:
In this paper we discuss the possibility of extending PORTIA, a persuasion system currently applied in human-agent dialogs, to support ambient persuasion. We have identified a fitness center as an appropriate smart environment in which ambient persuasion strategies can be applied. According to the ubiquitous computing vision, in the fitness center the user is surrounded by several connected devices that cooperate in the persuasion process, each of them with the most appropriate strategy, mode of persuasion and style of communication according to the context. To this aim, we propose a multi-agent system able to support this distributed and intelligent approach to persuasion, which makes it possible to follow the user during the gradual change from the initial attitude to the sustainment of long-term behaviours.
---
paper_title: Objectively Monitoring Wellbeing through Pervasive Technology
paper_content:
Wellbeing is an underlying theme in many local and national policies and procedures outlined by governments and health care services. In recent years a person’s wellbeing has been largely monitored through the use of subjective rating scales or other retrospective interview methods. This position paper considers how technology can help to monitor wellbeing more objectively and within the individual’s naturalistic environment. For this purpose, we introduce and discuss the design of the Wearable Acoustic Monitor (WAM). The WAM provides support in monitoring aspects of social and emotional wellbeing through the provision of information about a person’s level of social interaction and vocal features of emotionality. We further reflect on the ethical and privacy issues that are crucial for the design of digital devices capturing audio data to explore wellbeing.
---
paper_title: AffectAura: Emotional wellbeing reflection system
paper_content:
Emotional health is of huge importance to our quality of life. However, monitoring emotional wellbeing is challenging. AffectAura is an emotional prosthetic that allows users to reflect on their emotional states over long periods of time. The system continuously predicts the user's valence, arousal and engagement based on information gathered from a multimodal sensor setup. The interface combines these predictions with rich contextual information and allows the user to explore the data. AffectAura has been validated on hundreds of hours of data recorded from multiple people, who found the system allowed them to reason forward and backward in time about their emotional experiences. This project illustrates the first longitudinally evaluated emotional memory system.
---
paper_title: Computer-mediated emotional regulation: Detection of emotional changes using non-parametric cumulative sum
paper_content:
It has been demonstrated that negative emotions have adverse effects on the immune system of a person. This contributes to increased morbidity and mortality in the elderly population and has a direct impact on quality of life. Positive emotions, on the other hand, may not only undo the harmful effects of negative emotions but also protect against certain diseases. Hence the use of technology to facilitate emotional regulation that reduces negative emotions may be a good way to promote self-care and support well-being. In this paper we present the early design stages of an emotion detection system that aims to enable remote support and self-regulation in situations of intense emotional distress. We provide evidence of the suitability of the non-parametric cumulative sum (CUSUM) to identify emotional changes from neutral to non-neutral and vice versa in real time.
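A minimal sketch of the CUSUM idea used here (the reference level, drift, threshold and the toy arousal-like feature are assumptions for illustration, not the paper's parameters) accumulates deviations of a monitored signal from a neutral reference and signals a change when the cumulative sum exceeds a threshold:

def cusum_change_points(samples, reference, drift=0.2, threshold=3.0):
    """One-sided CUSUM: flag indices where the accumulated positive deviation
    of `samples` from `reference` (minus an allowed drift) exceeds `threshold`."""
    s, changes = 0.0, []
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - reference) - drift)
        if s > threshold:
            changes.append(i)
            s = 0.0  # restart detection after an alarm
    return changes

# Toy arousal-like feature: neutral around 1.0, then a sustained rise.
signal = [1.0, 1.1, 0.9, 1.0, 2.2, 2.4, 2.3, 2.5, 2.4]
print(cusum_change_points(signal, reference=1.0))  # change detected at index 6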
---
paper_title: Agent-based ambient intelligence for healthcare
paper_content:
Healthcare professionals, working in hospitals experience a high level of mobility due to their need for accessing patients' clinical records, medical devices distributed throughout the premises, and colleagues with whom they collaborate. In this paper, we present how autonomous agents provide capabilities of intelligence and proactivity to healthcare environments furnished with ubiquitous computing and medical devices, resulting thus, in an ambient intelligence (AmI) system. Autonomous agents enable ubiquitous technology to respond to users' particular conditions and demands. To support the building of this type of intelligent systems, we created the SALSA development framework. SALSA is a middleware designed to support the development of AmI environments based on autonomous agents. We illustrate the facilities provided by SALSA and its flexibility to iteratively implement an AmI system for a healthcare application scenario.
---
paper_title: GerAmi: Improving Healthcare Delivery in Geriatric Residences
paper_content:
Many countries face an ever-growing need to supply constant care and support for their disabled and elderly populations. In this paper, we've developed geriatric ambient intelligence, an intelligent environment that integrates multiagent systems, mobile devices, RFID, and Wi-Fi technologies to facilitate management and control of geriatric residences. At GerAmi's core is the geriatric agent (GerAg), a deliberative agent that incorporates a case-based planning (CBP) mechanism to optimize work schedules and provide up-to-date patient and facility data. We've successfully implemented a system prototype at a care facility for Alzheimer patients.
---
paper_title: Integrating context-aware public displays into a mobile hospital information system
paper_content:
Hospitals are convenient settings for deployment of ubiquitous computing technology. Not only are they technology-rich environments, but their workers experience a high level of mobility resulting in information infrastructures with artifacts distributed throughout the premises. Hospital information systems (HISs) that provide access to electronic patient records are a step in the direction of providing accurate and timely information to hospital staff in support of adequate decision-making. This has motivated the introduction of mobile computing technology in hospitals based on designs which respond to their particular conditions and demands. Among those conditions is the fact that worker mobility does not exclude the need for having shared information artifacts at particular locations. In this paper, we extend a handheld-based mobile HIS with ubiquitous computing technology and describe how public displays are integrated with handheld and the services offered by these devices. Public displays become aware of the presence of physicians and nurses in their vicinity and adapt to provide users with personalized, relevant information. An agent-based architecture allows the integration of proactive components that offer information relevant to the case at hand, either from medical guidelines or previous similar cases.
---
paper_title: Activity Recognition for the Smart Hospital
paper_content:
Although researchers have developed robust approaches for estimating location and user identity, estimating user activities has proven much more challenging. Human activities are so complex and dynamic that it is often unclear what information is even relevant for modeling activities. Robust recognition of user activities requires identifying the relevant information to be sensed and the appropriate sensing technologies. In our effort to develop an approach for automatically estimating hospital-staff activities, we trained a discrete hidden Markov model (HMM) to map contextual information to a user activity. We trained the model and evaluated it using data captured from almost 200 hours of detailed observation and documentation of hospital workers. In this article, we discuss our approach, the results, and how activity recognition could empower our vision of the hospital as a smart environment.
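As a hedged sketch of the HMM idea (a tiny hand-specified model with invented activities, contextual observations and probabilities; the paper's model was trained from observation data, not written by hand), Viterbi decoding maps a sequence of contextual observations to the most likely sequence of staff activities:

import numpy as np

states = ["patient care", "documentation"]
observations = ["at bedside", "at nurses station", "handling chart"]

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],    # P(next state | patient care)
                  [0.4, 0.6]])   # P(next state | documentation)
emit = np.array([[0.7, 0.2, 0.1],   # P(observation | patient care)
                 [0.1, 0.4, 0.5]])  # P(observation | documentation)

def viterbi(obs_idx):
    """Most likely hidden activity sequence for the observed context sequence."""
    n = len(obs_idx)
    delta = np.zeros((n, len(states)))
    back = np.zeros((n, len(states)), dtype=int)
    delta[0] = start * emit[:, obs_idx[0]]
    for t in range(1, n):
        for j in range(len(states)):
            scores = delta[t - 1] * trans[:, j]
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores[back[t, j]] * emit[j, obs_idx[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

seq = [observations.index(o) for o in ["at bedside", "handling chart", "at nurses station"]]
print(viterbi(seq))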
---
paper_title: Ontology-based intelligent fuzzy agent for diabetes application
paper_content:
It is widely pointed out that classical ontologies are not sufficient to deal with the imprecise and vague knowledge of some real-world applications, whereas a fuzzy ontology can effectively handle data and knowledge with uncertainty. In this paper, an ontology-based intelligent fuzzy agent (OIFA), including a fuzzy markup language (FML) generating mechanism, an FML parser, a fuzzy inference mechanism, and a semantic decision-making mechanism, is proposed and applied to semantic decision making in the diabetes domain. In addition, an FML-based definition is used to model the knowledge base and rule base of the fuzzy objects and inference operators. The experimental results show that the proposed method is feasible for diabetes semantic decision making.
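As a loose sketch of the kind of fuzzy reasoning such an agent performs (the membership functions, linguistic terms and the single rule below are invented for illustration and are not taken from the paper's FML knowledge base), a fasting glucose reading can be fuzzified and a simple rule evaluated:

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_glucose(mg_dl):
    # Assumed linguistic terms for fasting plasma glucose (mg/dL).
    return {
        "low":    tri(mg_dl, 40, 60, 80),
        "normal": tri(mg_dl, 70, 95, 120),
        "high":   tri(mg_dl, 110, 160, 240),
    }

reading = 130
mu = fuzzify_glucose(reading)
# Example rule: IF glucose IS high AND glucose IS NOT normal THEN risk IS elevated.
risk_elevated = min(mu["high"], 1.0 - mu["normal"])
print(mu, "-> elevated-risk degree:", round(risk_elevated, 2))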
---
paper_title: New research perspectives on Ambient Intelligence
paper_content:
Ten years of AmI research have led to many new insights and understandings about the way highly interactive environments should be designed to meet the requirement of being truly unobtrusive and supportive from an end-user perspective. Probably the most revealing finding is the fact that, in addition to cognitive intelligence and computing, also elements from social intelligence and design play a dominant role in the realization of the vision. In this paper we discuss these novel insights and their resulting impact on the AmI research landscape. We introduce a number of new AmI research perspectives that are related to social intelligence and in addition we argue that new ways of working are required applying the concept of Experience Research resulting in a true user-centered approach to Ambient Intelligence.
---
paper_title: Interoperable and adaptive fuzzy services for ambient intelligence applications
paper_content:
In the Ambient Intelligence (AmI) vision, people should be able to seamlessly and unobtrusively use and configure the intelligent devices and systems in their ubiquitous computing environments without being cognitively and physically overloaded. In other words, the user should not have to program each device or connect them together to achieve the required functionality. However, although it is possible for a human operator to specify an active space configuration explicitly, the size, sophistication, and dynamic requirements of modern living environments demand that they have autonomous intelligence satisfying the needs of inhabitants without human intervention. This work presents a proposal for AmI fuzzy computing that exploits multi-agent systems and fuzzy theory to realize a lifelong learning strategy able to generate context-aware fuzzy services and actualize them through abstraction techniques in order to maximize the users' comfort and the level of hardware interoperability. Experimental results show that the proposed approach is capable of anticipating users' requirements by automatically generating the most suitable collection of interoperable fuzzy services.
---
paper_title: Effects of Electromagnetic Radiations from Cell Phones: A Critical Study
paper_content:
The increasing use of mobile phones has resulted in greater human exposure to radio frequency electromagnetic fields. Although the electromagnetic fields from mobile phones are weak, the high number of exposed persons, together with some stimulating but inconclusive scientific results, has raised concerns about possible health hazards. Most biological studies have shown that radio-frequency-modulated electromagnetic fields (such as those of the Global System for Mobile Communications, GSM 900 MHz and 1800 MHz) induce a stress response in cells, which can potentially change the blood-brain barrier. The potential health effects of human-made electromagnetic fields (EMFs) have been a topic of scientific interest since the late 1800s, and particularly in the last twenty years. Electromagnetic fields are natural phenomena that have always been present on earth. However, during the twentieth century, environmental exposure to human-made EMFs increased steadily, predominantly due to increased use of electricity and wireless technology. Nearly all people are exposed to a complex mix of different types of weak electric and magnetic fields, both at home and at work. Electrical currents exist naturally in the human body and are an essential part of normal body function. Nerves relay signals by transmitting electric impulses, and most biochemical reactions, from those associated with digestion to those involved in brain activity, proceed by means of rearranging charged particles. This paper presents the possible effects of the electromagnetic radiation emitted by cellular phones on humans and other living organisms. A complete study has therefore been carried out to discuss the serious concerns about using mobile phones. The paper is divided into four parts: part I gives the basic introduction and related studies of cell phones, part II presents the technology behind cellular networks, part III discusses the basic hazards of using mobile phones and the dangers of cellular phones and their prolonged effects, and part IV gives the conclusions and the precautions that should be taken into account while using cell phones in order to avoid serious health-related risks.
---
paper_title: Living in a World of Smart Everyday Objects—Social, Economic, and Ethical Implications
paper_content:
Visions of Pervasive Computing and ambient intelligence involve integrating tiny microelectronic processors and sensors into everyday objects in order to make them “smart.” Smart things can explore their environment, communicate with other smart things, and interact with humans, therefore helping users to cope with their tasks in new, intuitive ways. Although many concepts have already been tested out as prototypes in field trials, the repercussions of such extensive integration of computer technology into our everyday lives are difficult to predict. This article is a first attempt to classify the social, economic, and ethical implications of this development.
---
paper_title: A privacy framework for mobile health and home-care systems
paper_content:
In this paper, we consider the challenge of preserving patient privacy in the context of mobile healthcare and home-care systems, that is, the use of mobile computing and communications technologies in the delivery of healthcare or the provision of at-home medical care and assisted living. This paper makes three primary contributions. First, we compare existing privacy frameworks, identifying key differences and shortcomings. Second, we identify a privacy framework for mobile healthcare and home-care systems. Third, we extract a set of privacy properties intended for use by those who design systems and applications for mobile healthcare and home-care systems, linking them back to the privacy principles. Finally, we list several important research questions that the community should address. We hope that the privacy framework in this paper can help to guide the researchers and developers in this community, and that the privacy properties provide a concrete foundation for privacy-sensitive systems and applications for mobile healthcare and home-care systems.
---
paper_title: Perspectives of ambient intelligence in the home environment
paper_content:
Ambient Intelligence is a vision of the future information society stemming from the convergence of ubiquitous computing, ubiquitous communication and intelligent user-friendly interfaces. It offers an opportunity to realise an old dream, i.e. the smart or intelligent home. Will it fulfil the promises or is it just an illusion--offering apparently easy living while actually increasing the complexity of life? This article touches upon this question by discussing the technologies, applications and social implications of ambient intelligence in the home environment. It explores how Ambient Intelligence may change our way of life. It concludes that there are great opportunities for Ambient Intelligence to support social developments and modern lifestyles. However, in order to gain wide acceptance a delicate balance is needed: the technology should enhance the quality of life but not be seeking domination. It should be reliable and controllable but nevertheless adaptive to human habits and changing contexts.
---
paper_title: Safeguards in a world of ambient intelligence
paper_content:
This book is a warning. It aims to warn policy-makers, industry, academia, civil society organisations, the media and the public about the threats and vulnerabilities facing our privacy, identity, trust, security and inclusion in the rapidly approaching world of ambient intelligence (AmI). In the near future, every manufactured product our clothes, money, appliances, the paint on our walls, the carpets on our floors, our cars, everything will be embedded with intelligence, networks of tiny sensors and actuators, which some have termed smart dust. The AmI world is not far off. We already have surveillance systems, biometrics, personal communicators, machine learning and more. AmI will provide personalised services and know more about us on a scale dwarfing anything hitherto available. In the AmI vision, ubiquitous computing, communications and interfaces converge and adapt to the user. AmI promises greater user-friendliness in an environment capable of recognising and responding to the presence of different individuals in a seamless, unobtrusive and often invisible way. While most stakeholders paint the promise of AmI in sunny colours, there is a dark side to AmI. This book aims to illustrate the threats and vulnerabilities by means of four dark scenarios. The authors set out a structured methodology for analysing the four scenarios, and then identify safeguards to counter the foreseen threats and vulnerabilities. They make recommendations to policy-makers and other stakeholders about what they can do to maximise the benefits from ambient intelligence and minimise the negative consequences.
---
paper_title: Balancing Smartness and Privacy for the Ambient Intelligence
paper_content:
Ambient Intelligence (AmI) will introduce large privacy risks. Stored context histories are vulnerable to unauthorized disclosure, thus unlimited storing of privacy-sensitive context data is not desirable from the privacy viewpoint. However, high quality and quantity of data enable smartness for AmI, while sparser and coarser data benefit privacy. This raises a very important problem for AmI, that is, how to balance the smartness and privacy requirements in an ambient world. In this article, we propose to give data donors control over the life cycle of their context data, so that users themselves can balance their needs and wishes in terms of smartness and privacy.
---
paper_title: Privacy by Design - Principles of Privacy-Aware Ubiquitous Systems
paper_content:
This paper tries to serve as an introductory reading to privacy issues in the field of ubiquitous computing. It develops six principles for guiding system design, based on a set of fair information practices common in most privacy legislation in use today: notice, choice and consent, proximity and locality, anonymity and pseudonymity, security, and access and recourse. A brief look at the history of privacy protection, its legal status, and its expected utility is provided as a background.
---
| Title: A Survey on Ambient Intelligence in Healthcare
Section 1: INTRODUCTION
Description 1: This section introduces the concept of Ambient Intelligence (AmI) and its relevance to healthcare, including key characteristics and technological underpinnings.
Section 2: Healthcare Challenges
Description 2: This section outlines the current challenges faced in the healthcare sector, particularly due to rising costs and aging populations, and discusses how AmI can address these challenges.
Section 3: SUPPORTING INFRASTRUCTURE AND TECHNOLOGY
Description 3: This section details the infrastructure and technologies that support AmI systems in healthcare, including body area networks (BANs), dense/mesh sensor networks, and recent trends in sensor technology.
Section 4: ALGORITHMS AND METHODS
Description 4: This section introduces the computational methodologies essential for developing AmI healthcare applications, such as activity recognition, behavioral pattern discovery, anomaly detection, planning and scheduling, and decision support systems.
Section 5: APPLICATIONS
Description 5: This section discusses various AmI applications in healthcare, including continuous health and behavioral monitoring, emergency detection, assisted living, therapy and rehabilitation, persuasive wellbeing applications, emotional wellbeing, and smart hospitals.
Section 6: Artificial Intelligence
Description 6: This section explores the role of artificial intelligence in healthcare, detailing its use in diagnosis, prognosis, medical training, and how AmI can enhance AI methodologies.
Section 7: Design and Human Factors
Description 7: This section addresses the design considerations and human factors that must be taken into account when developing AmI systems to ensure they improve quality of life without adverse effects.
Section 8: Security and Infrastructure
Description 8: This section discusses the security issues and infrastructural requirements for AmI systems, emphasizing the need for safeguards to protect data privacy and system security.
Section 9: Social and Ethical Issues
Description 9: This section highlights the social and ethical considerations surrounding the use of AmI in healthcare, such as ensuring accessibility, avoiding overreliance, and maintaining patient communication.
Section 10: CONCLUSION
Description 10: This section summarizes the findings of the survey, reflecting on the potential and challenges of AmI in healthcare, and expresses confidence in achieving the vision of AmI through continued interdisciplinary research. |
Big Data: A Survey of New Paradigms, Methodologies, and Tools | 7 |
paper_title: Big Data: the current wave front of the tsunami
paper_content:
In recent years, a veritable tsunami of data has flooded many human activities. Genomics, Astronomy, Particle Physics and Social Sciences are just a few examples of fields which have been intensively invaded by a massive amount of data coming from simulation, experiments or exploration. This huge pile of data requires a new way of dealing with it: a real paradigm shift with respect to the past in terms of theories, technologies and approaches to data management. This work outlines the current wave front of Big Data, starting from a possible characterization of this new paradigm to its most compelling applications and tools, with an exploratory study of Big Data challenges in manufacturing engineering.
---
paper_title: The Unreasonable Effectiveness of Data
paper_content:
At Brown University, there was excitement about having access to the Brown Corpus, containing one million English words. Since then, we have seen several notable corpora that are about 100 times larger, and in 2006, Google released a trillion-word corpus with frequency counts for all sequences up to five words long. In some ways this corpus is a step backwards from the Brown Corpus: it's taken from unfiltered Web pages and thus contains incomplete sentences, spelling errors, grammatical errors, and all sorts of other errors. It's not annotated with carefully hand-corrected part-of-speech tags. But the fact that it's a million times larger than the Brown Corpus outweighs these drawbacks. A trillion-word corpus - along with other Web-derived corpora of millions, billions, or trillions of links, videos, images, tables, and user interactions - captures even very rare aspects of human behavior. So, this corpus could serve as the basis of a complete model for certain tasks - if only we knew how to extract the model from the data.
---
paper_title: Perspectives on Big Data
paper_content:
There is much hype associated with the term ‘Big Data’ (BD), and much opportunity in the data that are associated with that term, along with the tools and techniques in existence and being developed to leverage it for decision making and improving the condition of living beings, firms and society. However, many are not clear on what are, or what is meant by the term, ‘Big Data’. The focus of this research was to explore the meaning of BD and to identify important paths of research on BD. As part of this process, we called upon a diverse set of marketing scholars who possess expertise and special insights. We discovered that different communities have different perspectives. It could be argued that they are all correct as they reflect the preferred perspectives of different communities. We find it helpful to think of BD as a term that represents a period of time or era, a process, and data that are from a variety of sources, of various structures or forms, and in a variety of locations. Important research questions and issues related to BD are discussed.
---
paper_title: Hadoop: The Definitive Guide
paper_content:
Hadoop: The Definitive Guide helps you harness the power of your data. Ideal for processing large datasets, the Apache Hadoop framework is an open source implementation of the MapReduce algorithm on which Google built its empire. This comprehensive resource demonstrates how to use Hadoop to build reliable, scalable, distributed systems: programmers will find details for analyzing large datasets, and administrators will learn how to set up and run Hadoop clusters. Complete with case studies that illustrate how Hadoop solves specific problems, this book helps you: Use the Hadoop Distributed File System (HDFS) for storing large datasets, and run distributed computations over those datasets using MapReduce Become familiar with Hadoop's data and I/O building blocks for compression, data integrity, serialization, and persistence Discover common pitfalls and advanced features for writing real-world MapReduce programs Design, build, and administer a dedicated Hadoop cluster, or run Hadoop in the cloud Use Pig, a high-level query language for large-scale data processing Take advantage of HBase, Hadoop's database for structured and semi-structured data Learn ZooKeeper, a toolkit of coordination primitives for building distributed systems If you have lots of data -- whether it's gigabytes or petabytes -- Hadoop is the perfect solution. Hadoop: The Definitive Guide is the most thorough book available on the subject. "Now you have the opportunity to learn about Hadoop from a master-not only of the technology, but also of common sense and plain talk." -- Doug Cutting, Hadoop Founder, Yahoo!
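To make the MapReduce model that Hadoop implements concrete, the following minimal Python sketch simulates a word-count job in memory. It is only an illustration of the map, shuffle and reduce phases; the input lines and function names are invented for the example, and real Hadoop jobs would use the Java MapReduce or streaming APIs, with the shuffle performed by the framework between distributed tasks.

```python
from collections import defaultdict
from itertools import chain

# Toy input: each element stands in for one line of a (huge) input file.
documents = [
    "big data needs new tools",
    "hadoop implements the mapreduce model",
    "mapreduce splits work into map and reduce phases",
]

def map_phase(line):
    """Map step: emit (word, 1) pairs for every word in the line."""
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle step: group all emitted values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce step: sum the counts emitted for one word."""
    return (key, sum(values))

pairs = chain.from_iterable(map_phase(line) for line in documents)
result = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(result)   # e.g. 'mapreduce' appears twice, most other words once
```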
---
paper_title: Big Data: A Survey
paper_content:
In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as cloud computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine several representative applications of big data, including enterprise management, Internet of Things, online social networks, medical applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.
---
paper_title: Taming the Big Data Tidal Wave: Finding Opportunities in Huge Data Streams with Advanced Analytics
paper_content:
You receive an e-mail. It contains an offer for a complete personal computer system. It seems like the retailer read your mind since you were exploring computers on their web site just a few hours prior. As you drive to the store to buy the computer bundle, you get an offer for a discounted coffee from the coffee shop you are getting ready to drive past. It says that since you're in the area, you can get 10% off if you stop by in the next 20 minutes. As you drink your coffee, you receive an apology from the manufacturer of a product that you complained about yesterday on your Facebook page, as well as on the company's web site. Finally, once you get back home, you receive notice of a special armor upgrade available for purchase in your favorite online video game. It is just what is needed to get past some spots you've been struggling with. Sound crazy? Are these things that can only happen in the distant future? No. All of these scenarios are possible today! Big data. Advanced analytics. Big data analytics. It seems you can't escape such terms today. Everywhere you turn people are discussing, writing about, and promoting big data and advanced analytics. Well, you can now add this book to the discussion. What is real and what is hype? Such attention can lead one to the suspicion that perhaps the analysis of big data is something that is more hype than substance. While there has been a lot of hype over the past few years, the reality is that we are in a transformative era in terms of analytic capabilities and the leveraging of massive amounts of data. If you take the time to cut through the sometimes-over-zealous hype present in the media, you'll find something very real and very powerful underneath it. With big data, the hype is driven by genuine excitement and anticipation of the business and consumer benefits that analyzing it will yield over time. Big data is the next wave of new data sources that will drive the next wave of analytic innovation in business, government, and academia. These innovations have the potential to radically change how organizations view their business. The analysis that big data enables will lead to decisions that are more informed and, in some cases, different from what they are today. It will yield insights that many can only dream about today. As you'll see, there are many consistencies with the requirements to tame big data and what has always been needed to tame new data sources. However, the additional scale of big data necessitates utilizing the newest tools, technologies, methods, and processes. The old way of approaching analysis just won't work. It is time to evolve the world of advanced analytics to the next level. That's what this book is about. Taming the Big Data Tidal Wave isn't just the title of this book, but rather an activity that will determine which businesses win and which lose in the next decade. By preparing and taking the initiative, organizations can ride the big data tidal wave to success rather than being pummeled underneath the crushing surf. What do you need to know and how do you prepare in order to start taming big data and generating exciting new analytics from it? Sit back, get comfortable, and prepare to find out!
---
paper_title: An integrated framework for evaluating big-data storage solutions - IDA case study
paper_content:
The amount of data stored is rapidly increasing due to consumer, business, scientific, and government generated content. In addition to keeping pace with storing generated data, there is a need to comply with laws and Service Level Agreements (SLA) to protect and preserve stored data. The issues of capacity and scale in data storage are of constant concern to ensure the ability to absorb data growth, to manage existing data and to analyze the data. Evaluating data storage for IT infrastructure is a complex task that has multiple variables. These variables include capacity, scalability, financial considerations, workload requirements, security, privacy, availability, reliability, analytics and operational factors. Current frameworks that attempt to address data storage evaluations focus either on a single aspect of these variables or use generic IT frameworks to evaluate data storage as a sub-component. The complexity of data storage requirements merits a holistic framework in the data storage domain. The contribution of this paper is an integrated framework to evaluate and assist in selecting an optimum storage solution for multi-variable requirements. This paper examines Information Dispersal Algorithm (IDA) storage technology using this framework and is the first in a series to examine four different big-data storage technologies using this framework.
---
paper_title: Benchmarking cloud serving systems with YCSB
paper_content:
While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address "cloud OLTP" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the "Yahoo! Cloud Serving Benchmark" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.
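A rough sense of what a YCSB-style benchmark measures can be conveyed with a toy workload generator. In the sketch below, all names and numbers are invented and a Python dict stands in for an actual data store such as Cassandra or HBase; it issues a skewed mix of reads and updates and reports average latencies, loosely imitating a read-heavy cloud-serving workload.

```python
import random
import time
from itertools import accumulate

# Stand-in "data store": a plain dict; a real benchmark would target
# Cassandra, HBase, PNUTS, MySQL, etc. through their client libraries.
store = {f"user{i}": {"field0": "x" * 100} for i in range(10_000)}
keys = list(store.keys())
# Skewed access pattern: lower-ranked keys are requested more often,
# roughly imitating a Zipfian request distribution.
cum_weights = list(accumulate(1.0 / (rank + 1) for rank in range(len(keys))))

def run_workload(read_fraction=0.95, operations=5_000):
    latencies = {"read": [], "update": []}
    for _ in range(operations):
        key = random.choices(keys, cum_weights=cum_weights, k=1)[0]
        start = time.perf_counter()
        if random.random() < read_fraction:
            _ = store[key]                        # read operation
            latencies["read"].append(time.perf_counter() - start)
        else:
            store[key]["field0"] = "y" * 100      # update operation
            latencies["update"].append(time.perf_counter() - start)
    for op, vals in latencies.items():
        if vals:
            print(op, "avg latency (us):", round(1e6 * sum(vals) / len(vals), 2))

run_workload()
```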
---
paper_title: The Hadoop Distributed File System
paper_content:
The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications. In a large cluster, thousands of servers both host directly attached storage and execute user application tasks. By distributing storage and computation across many servers, the resource can grow with demand while remaining economical at every size. We describe the architecture of HDFS and report on experience using HDFS to manage 25 petabytes of enterprise data at Yahoo!.
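The core bookkeeping idea described above, splitting files into large blocks and replicating each block across several datanodes under a namenode's direction, can be sketched in a few lines of Python. This is a conceptual illustration only: the block size, node names and random placement below are invented, and real HDFS uses a rack-aware placement policy rather than uniform random choice.

```python
import random

BLOCK_SIZE = 128 * 1024 * 1024   # 128 MB blocks, a common HDFS default
REPLICATION = 3
datanodes = [f"datanode-{i}" for i in range(8)]

def split_into_blocks(file_size_bytes):
    """Split a file's byte range into fixed-size block descriptors."""
    blocks, offset, block_id = [], 0, 0
    while offset < file_size_bytes:
        length = min(BLOCK_SIZE, file_size_bytes - offset)
        blocks.append({"block_id": block_id, "offset": offset, "length": length})
        offset += length
        block_id += 1
    return blocks

def place_replicas(blocks):
    """Namenode-style bookkeeping: map each block to REPLICATION datanodes."""
    return {b["block_id"]: random.sample(datanodes, REPLICATION) for b in blocks}

blocks = split_into_blocks(file_size_bytes=400 * 1024 * 1024)   # a 400 MB file
for block_id, nodes in place_replicas(blocks).items():
    print(f"block {block_id} -> {nodes}")
```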
---
paper_title: Scalable SQL and NoSQL data stores
paper_content:
In this paper, we examine a number of SQL and so-called "NoSQL" data stores designed to scale simple OLTP-style application loads over many servers. Originally motivated by Web 2.0 applications, these systems are designed to scale to thousands or millions of users doing updates as well as reads, in contrast to traditional DBMSs and data warehouses. We contrast the new systems on their data model, consistency mechanisms, storage mechanisms, durability guarantees, availability, query support, and other dimensions. These systems typically sacrifice some of these dimensions, e.g. database-wide transaction consistency, in order to achieve others, e.g. higher availability and scalability.
---
paper_title: Mining data streams: a review
paper_content:
The recent advances in hardware and software have enabled the capture of different measurements of data in a wide range of fields. These measurements are generated continuously and at very high, fluctuating data rates. Examples include sensor networks, web logs, and computer network traffic. The storage, querying and mining of such data sets are highly computationally challenging tasks. Mining data streams is concerned with extracting knowledge structures represented in models and patterns in non-stopping streams of information. The research in data stream mining has attracted considerable attention due to the importance of its applications and the increasing generation of streaming information. Applications of data stream analysis can vary from critical scientific and astronomical applications to important business and financial ones. Algorithms, systems and frameworks that address streaming challenges have been developed over the past three years. In this review paper, we present the state-of-the-art in this growing, vital field.
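One representative building block from the stream-mining literature is reservoir sampling, which maintains a fixed-size uniform sample of an unbounded stream in a single pass. The Python sketch below is a generic illustration with an invented sensor stream, not an algorithm taken from this particular review.

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown length,
    using O(k) memory (classic Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randint(0, i)     # inclusive on both ends
            if j < k:
                reservoir[j] = item      # replace with decreasing probability
    return reservoir

# Example: sample 5 readings from a simulated sensor stream of 1,000,000 values.
stream = (random.gauss(20.0, 2.0) for _ in range(1_000_000))
print(reservoir_sample(stream, k=5))
```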
---
paper_title: A multimedia ontology model based on linguistic properties and audio-visual features
paper_content:
The exponential growth of informative content calls for intelligent information systems able to use data to create information. To achieve this goal, these systems should have formal models to represent knowledge. In this way complex data can be managed and used to perform new tasks and implement innovative functionalities. This article describes a general and formal ontology model to represent knowledge using multimedia data and linguistic properties to bridge the gap between the target semantic classes and the available low-level multimedia descriptors. This model has been implemented in a system to edit, manage and share ontologies on the Web. The system provides a graphical interface to add multimedia objects by means of user interaction. The multimedia features are automatically extracted using algorithms based on MPEG-7 descriptors.
---
paper_title: A Classification of Ontology Change
paper_content:
The problem of modifying an ontology in response to a certain need for change is a complex and multifaceted one, being addressed by several different, but closely related and often overlapping research disciplines. Unfortunately, the boundaries of each such discipline are not clear, as certain terms are often used with different meanings in the relevant literature. The purpose of this paper is to identify the exact relationships, connections and overlaps between these research areas and determine the boundaries of each field, by performing a broad review of the relevant literature.
---
paper_title: Information Retrieval from the Web: an Interactive Paradigm
paper_content:
Information retrieval is moving beyond the stage where users simply type one or more keywords and retrieve a ranked list of documents. In such a scenario users have to go through the returned documents in order to find what they are actually looking for. More often they would like to get targeted answers to their queries without extraneous information, even if their requirements are not well specified. In this paper we propose an approach for designing a web retrieval system able to find the desired information through several interactions with the users. The proposed approach allows to overcome the problems deriving from ambiguous or too vague queries, using semantic search and topic detection techniques. The results of the very first experiments on a prototype system are also reported.
---
paper_title: A content-based approach for document representation and retrieval
paper_content:
In the last few years, the problem of defining efficient techniques for knowledge representation has become a challenging topic in both the academic and industrial communities. The large amount of available data creates several problems in terms of information overload. In this framework, we assume that new approaches for knowledge definition and representation may be useful, in particular the ones based on the concept of ontology. In this paper we propose a suitable model for knowledge representation purposes using linguistic concepts and properties. We implement our model in a system which, using novel techniques and metrics, analyzes documents from a semantic point of view, using the Web as the context of interest. Experiments are performed on a test set built using a directory service to obtain information about the analyzed documents. The obtained results, compared with those of similar systems, show a clear improvement.
---
paper_title: Partitioning of ontologies driven by a structure-based approach
paper_content:
In this paper, we propose a novel structure-based partitioning algorithm able to break a large ontology into different modules related to specific topics for the domain of interest. In particular, we leverage the topological properties of the ontology graph and exploit several techniques derived from Network Analysis to produce an effective partitioning without considering any information about semantics of ontology relationships. An automated partitioning tool has been developed and several preliminary experiments have been conducted to validate the effectiveness of our approach with respect to other techniques.
---
paper_title: Bigtable: A Distributed Storage System for Structured Data
paper_content:
Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.
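The data model summarized above, a sparse, sorted, multidimensional map indexed by row key, column and timestamp, can be illustrated with a small in-memory Python class. The class, its method names and the example rows are invented for illustration and do not correspond to the actual Bigtable API.

```python
import time
from bisect import insort

class TinyTable:
    """A toy version of Bigtable's data model: a map from
    (row key, column family:qualifier, timestamp) to an uninterpreted value."""

    def __init__(self):
        # (row, column) -> list of (timestamp, value), kept sorted, newest last
        self._cells = {}

    def put(self, row, column, value, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        insort(self._cells.setdefault((row, column), []), (ts, value))

    def get(self, row, column):
        """Return the most recent version of a cell, or None if absent."""
        versions = self._cells.get((row, column))
        return versions[-1][1] if versions else None

table = TinyTable()
table.put("com.example.www", "anchor:cnnsi.com", "CNN", timestamp=1)
table.put("com.example.www", "contents:html", "<html>v1</html>", timestamp=1)
table.put("com.example.www", "contents:html", "<html>v2</html>", timestamp=2)
print(table.get("com.example.www", "contents:html"))   # -> <html>v2</html>
```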
---
| Title: Big Data: A Survey of New Paradigms, Methodologies, and Tools
Section 1: INTRODUCTION
Description 1: This section provides an overview of the increasing availability of data from various sources and introduces the concept of Big Data. It also highlights the challenges and opportunities associated with managing large data sets and the importance of selecting the right tools and methodologies.
Section 2: BIG DATA DIMENSIONS AND TECHNOLOGICAL SOLUTIONS
Description 2: This section discusses the various dimensions of Big Data, such as Volume, Velocity, and Variety, and presents technological solutions to address the challenges associated with these dimensions. It also explores additional dimensions introduced in the literature, such as viscosity, variability, veracity, and volatility.
Section 3: A FRAMEWORK FOR A QUALITATIVE EVALUATION OF BIG DATA SOLUTIONS
Description 3: This section introduces a framework for the qualitative evaluation of Big Data tools and solutions. It identifies various technical challenges in the Big Data life cycle, such as data heterogeneity, scalability, timeliness, privacy, and visualization.
Section 4: Relational DBMS and SQL Language
Description 4: This section describes the characteristics of traditional Relational Database Management Systems (RDBMS) and the SQL language. It highlights the benefits and limitations of using RDBMS for managing large data sets.
Section 5: New SQL
Description 5: This section introduces NewSQL, a class of modern relational database management systems that combine the scalability of NoSQL systems with the ACID properties of traditional RDBMS. It discusses the main features and advantages of NewSQL systems.
Section 6: The Survey of the Analyzed Solutions
Description 6: This section provides a comprehensive survey of the most widely used NoSQL and NewSQL solutions. It evaluates each solution based on criteria such as scalability, consistency, failover support, and available APIs. The section includes descriptions of various Big Data tools like Google BigTable, MongoDB, Neo4j, Apache CouchDB, and others.
Section 7: CONCLUSIONS
Description 7: This section summarizes the major findings of the survey, highlighting the flexibility and multi-platform support of many Big Data tools. It also underscores the significant interest from the developer community in open-source projects and concludes with implications for future research and development in Big Data technologies. |
A Survey of Data Mining Techniques for Social Media Analysis | 22 | ---
paper_title: Hybrid Recommender Systems: Survey and Experiments
paper_content:
Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines knowledge-based recommendation and collaborative filtering to recommend restaurants. Further, we show that semantic ratings obtained from the knowledge-based part of the system enhance the effectiveness of collaborative filtering.
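As a small illustration of the hybrid idea surveyed above, the sketch below combines a knowledge-based score and a collaborative score with a simple weighted scheme, one of the hybridization types discussed in the survey. The items, scores and weight are invented, and EntreeC itself combines its components differently.

```python
# Two base recommenders scoring the same candidate restaurants: one
# knowledge-based (how well an item matches stated constraints) and one
# collaborative (how much similar users liked it). Scores here are made up.
knowledge_scores = {"bistro": 0.9, "diner": 0.4, "sushi": 0.7}
collaborative_scores = {"bistro": 0.6, "diner": 0.8, "sushi": 0.75}

def weighted_hybrid(weight_kb=0.5):
    """Weighted hybrid: a linear combination of the component recommenders."""
    combined = {
        item: weight_kb * knowledge_scores[item]
              + (1 - weight_kb) * collaborative_scores[item]
        for item in knowledge_scores
    }
    return sorted(combined.items(), key=lambda kv: -kv[1])

print(weighted_hybrid(weight_kb=0.6))   # ranked list of recommendations
```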
---
paper_title: Users of the world, unite! The challenges and opportunities of Social Media
paper_content:
The concept of Social Media is top of the agenda for many business executives today. Decision makers, as well as consultants, try to identify ways in which firms can make profitable use of applications such as Wikipedia, YouTube, Facebook, Second Life, and Twitter. Yet despite this interest, there seems to be very limited understanding of what the term ''Social Media'' exactly means; this article intends to provide some clarification. We begin by describing the concept of Social Media, and discuss how it differs from related concepts such as Web 2.0 and User Generated Content. Based on this definition, we then provide a classification of Social Media which groups applications currently subsumed under the generalized term into more specific categories by characteristic: collaborative projects, blogs, content communities, social networking sites, virtual game worlds, and virtual social worlds. Finally, we present 10 pieces of advice for companies which decide to utilize Social Media.
---
paper_title: Exploiting context analysis for combining multiple entity resolution systems
paper_content:
Entity Resolution (ER) is an important real world problem that has attracted significant research interest over the past few years. It deals with determining which object descriptions co-refer in a dataset. Due to its practical significance for data mining and data analysis tasks many different ER approaches has been developed to address the ER challenge. This paper proposes a new ER Ensemble framework. The task of ER Ensemble is to combine the results of multiple base-level ER systems into a single solution with the goal of increasing the quality of ER. The framework proposed in this paper leverages the observation that often no single ER method always performs the best, consistently outperforming other ER techniques in terms of quality. Instead, different ER solutions perform better in different contexts. The framework employs two novel combining approaches, which are based on supervised learning. The two approaches learn a mapping of the clustering decisions of the base-level ER systems, together with the local context, into a combined clustering decision. The paper empirically studies the framework by applying it to different domains. The experiments demonstrate that the proposed framework achieves significantly higher disambiguation quality compared to the current state of the art solutions.
---
paper_title: Influence of Social Media Use on Discussion Network Heterogeneity and Civic Engagement: The Moderating Role of Personality Traits
paper_content:
Using original national survey data, we examine how social media use affects individuals' discussion network heterogeneity and their level of civic engagement. We also investigate the moderating role of personality traits (i.e., extraversion and openness to experiences) in this association. Results support the notion that use of social media contributes to heterogeneity of discussion networks and activities in civic life. More importantly, personality traits such as extraversion and openness to experiences were found to moderate the influence of social media on discussion network heterogeneity and civic participation, indicating that the contributing role of social media in increasing network heterogeneity and civic engagement is greater for introverted and less open individuals.
---
paper_title: TwitterMonitor: trend detection over the twitter stream
paper_content:
We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario.
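The core intuition behind detecting emerging topics, terms that suddenly appear far more often than their historical rate, can be sketched as follows. This is an illustrative burst detector over toy messages, not the algorithm used by TwitterMonitor; the thresholds and example data are invented.

```python
from collections import Counter

def bursty_terms(current_window, history, min_count=3, burst_ratio=5.0):
    """Flag terms whose frequency in the current window is far above their
    historical rate, a simple stand-in for 'emerging topic' detection."""
    current = Counter(w for msg in current_window for w in msg.lower().split())
    baseline = Counter(w for msg in history for w in msg.lower().split())
    total_hist = max(sum(baseline.values()), 1)
    total_cur = max(sum(current.values()), 1)
    trends = []
    for term, count in current.items():
        cur_rate = count / total_cur
        hist_rate = (baseline[term] + 1) / total_hist        # +1 smoothing
        if count >= min_count and cur_rate / hist_rate >= burst_ratio:
            trends.append((term, round(cur_rate / hist_rate, 1)))
    return sorted(trends, key=lambda t: -t[1])

history = ["nice weather today", "watching the game tonight"] * 50
window = ["earthquake downtown", "felt the earthquake", "earthquake news now"]
print(bursty_terms(window, history))   # -> [('earthquake', <large ratio>)]
```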
---
paper_title: Social Media Use in the United States: Implications for Health Communication
paper_content:
Background: Given the rapid changes in the communication landscape brought about by participative Internet use and social media, it is important to develop a better understanding of these technologies and their impact on health communication. The first step in this effort is to identify the characteristics of current social media users. Up-to-date reporting of current social media use will help monitor the growth of social media and inform health promotion/communication efforts aiming to effectively utilize social media. Objective: The purpose of the study is to identify the sociodemographic and health-related factors associated with current adult social media users in the United States. Methods: Data came from the 2007 iteration of the Health Information National Trends Study (HINTS, N = 7674). HINTS is a nationally representative cross-sectional survey on health-related communication trends and practices. Survey respondents who reported having accessed the Internet (N = 5078) were asked whether, over the past year, they had (1) participated in an online support group, (2) written in a blog, (3) visited a social networking site. Bivariate and multivariate logistic regression analyses were conducted to identify predictors of each type of social media use. Results: Approximately 69% of US adults reported having access to the Internet in 2007. Among Internet users, 5% participated in an online support group, 7% reported blogging, and 23% used a social networking site. Multivariate analysis found that younger age was the only significant predictor of blogging and social networking site participation; a statistically significant linear relationship was observed, with younger categories reporting more frequent use. Younger age, poorer subjective health, and a personal cancer experience predicted support group participation. In general, social media are penetrating the US population independent of education, race/ethnicity, or health care access. Conclusions: Recent growth of social media is not uniformly distributed across age groups; therefore, health communication programs utilizing social media must first consider the age of the targeted population to help ensure that messages reach the intended audience. While racial/ethnic and health status–related disparities exist in Internet access, among those with Internet access, these characteristics do not affect social media use. This finding suggests that the new technologies, represented by social media, may be changing the communication pattern throughout the United States. [J Med Internet Res 2009;11(4):e48]
---
paper_title: Opinion mining and sentiment analysis
paper_content:
An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. ::: ::: This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.
---
paper_title: LCI: a social channel analysis platform for live customer intelligence
paper_content:
The rise of Web 2.0 with its increasingly popular social sites like Twitter, Facebook, blogs and review sites has motivated people to express their opinions publicly and more frequently than ever before. This has fueled the emerging field known as sentiment analysis whose goal is to translate the vagaries of human emotion into hard data. LCI is a social channel analysis platform that taps into what is being said to understand the sentiment with the particular ability of doing so in near real-time. LCI integrates novel algorithms for sentiment analysis and a configurable dashboard with different kinds of charts including dynamic ones that change as new data is ingested. LCI has been researched and prototyped at HP Labs in close interaction with the Business Intelligence Solutions (BIS) Division and a few customers. This paper presents an overview of the architecture and some of its key components and algorithms, focusing in particular on how LCI deals with Twitter and illustrating its capabilities with selected use cases.
---
paper_title: A Clustering Approach for Collaborative Filtering Recommendation Using Social Network Analysis
paper_content:
Collaborative Filtering (CF) is a well-known technique in recommender systems. CF exploits relationships between users and recommends items to the active user according to the ratings of his/her neighbors. CF suffers from the data sparsity problem, where users only rate a small set of items. That makes the computation of similarity between users imprecise and consequently reduces the accuracy of CF algorithms. In this article, we propose a clustering approach based on the social information of users to derive the recommendations. We study the application of this approach in two application scenarios: academic venue recommendation based on collaboration information and trust-based recommendation. Using the data from DBLP digital library and Epinion, the evaluation shows that our clustering technique based CF performs better than traditional CF algorithms.
---
paper_title: Use of social network information to enhance collaborative filtering performance
paper_content:
When people make decisions, they usually rely on recommendations from friends and acquaintances. Although collaborative filtering (CF), the most popular recommendation technique, utilizes similar neighbors to generate recommendations, it does not distinguish friends in a neighborhood from strangers who have similar tastes. Because social networking Web sites now make it easy to gather social network information, a study about the use of social network information in making recommendations will probably produce productive results. In this study, we developed a way to increase recommendation effectiveness by incorporating social network information into CF. We collected data about users' preference ratings and their social network relationships from a social networking Web site. Then, we evaluated CF performance with diverse neighbor groups combining groups of friends and nearest neighbors. Our results indicated that more accurate prediction algorithms can be produced by incorporating social network information into CF.
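A minimal sketch of the idea of folding social ties into collaborative filtering is shown below: the neighborhood used for prediction prefers the active user's friends and falls back to the most similar other users. The ratings, friendship list and weighting scheme are invented for illustration and are not the exact procedure evaluated in the paper.

```python
from math import sqrt

# Ratings on a 1-5 scale; only a few items are rated by each user (sparse data).
ratings = {
    "alice": {"item1": 5, "item2": 3, "item4": 4},
    "bob":   {"item1": 4, "item2": 3, "item3": 5},
    "carol": {"item2": 2, "item3": 4, "item4": 5},
    "dave":  {"item1": 5, "item3": 4, "item4": 4},
}
friends = {"alice": {"bob", "carol"}}   # social ties from the networking site

def cosine(u, v):
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    den = sqrt(sum(r * r for r in ratings[u].values())) * \
          sqrt(sum(r * r for r in ratings[v].values()))
    return num / den

def predict(user, item, k=2):
    """Predict a rating from a neighborhood that mixes friends with the most
    similar strangers, weighting each neighbor by cosine similarity."""
    candidates = [v for v in ratings if v != user and item in ratings[v]]
    # Friends first, then the most similar remaining users, up to k neighbors.
    ordered = sorted(candidates,
                     key=lambda v: (v not in friends.get(user, set()),
                                    -cosine(user, v)))
    neighbors = ordered[:k]
    num = sum(cosine(user, v) * ratings[v][item] for v in neighbors)
    den = sum(cosine(user, v) for v in neighbors)
    return num / den if den else None

print(round(predict("alice", "item3"), 2))   # rating estimate for an unseen item
```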
---
paper_title: How is the Semantic Web evolving? A dynamic social network perspective
paper_content:
Finding how the Semantic Web has evolved can help understand the status of Semantic Web community and predict the diffusion of the Semantic Web. One of the promising applications of the Semantic Web is the representation of personal profiles using Friend of a Friend (FOAF). A key characteristic of such social networks is their continual change. However, extant analyses of social networks on the Semantic Web are essentially static in that the information about the change of social networks is neglected. To address the limitations, we analyzed the dynamics of a large-scale real-world social network in this paper. Social network ties were extracted from both within and between FOAF documents. The former was based on knows relations between persons, and the latter was based on revision relations. We found that the social network evolves in a speckled fashion, which is highly distributed. The network went through rapid increase in size at an early stage and became stabilized later. By examining the changes of structural properties over time, we identified the evolution patterns of social networks on the Semantic Web and provided evidence for the growth and sustainability of the Semantic Web community.
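The kind of longitudinal measurement described above, extracting knows ties from FOAF documents and tracking structural properties across snapshots, can be sketched with the networkx library. The snapshots below are invented; in practice the edges would be parsed from the RDF/FOAF files themselves.

```python
import networkx as nx

# Invented snapshots of foaf:knows ties extracted at three points in time.
snapshots = {
    "2004": [("a", "b"), ("b", "c")],
    "2005": [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("e", "f")],
    "2006": [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("e", "f"),
             ("f", "a"), ("e", "b"), ("g", "a")],
}

for year, edges in snapshots.items():
    g = nx.Graph()
    g.add_edges_from(edges)
    print(year,
          "nodes:", g.number_of_nodes(),
          "edges:", g.number_of_edges(),
          "components:", nx.number_connected_components(g),
          "density:", round(nx.density(g), 3))
```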
---
paper_title: Research on Application Model of Semantic Web-based Social Network Analysis
paper_content:
The extensive research on semantic web technology provides a new way of studying social network analysis methods, which has become a focus of the social network analysis area. This paper takes the application of semantic web technology in the social network analysis field as its object. First, the article reviews the achievements on the semantic web and social network analysis made by both domestic and foreign scholars. Second, the paper analyses social semantic networks and proposes an application model for Semantic Web-based social network analysis. Finally, the paper discusses the technical difficulties encountered in developing the model as well as its future research directions.
---
paper_title: Evaluation and Development of Data Mining Tools for Social Network Analysis
paper_content:
This chapter reviews existing data mining tools for scraping data from heterogeneous online social networks. It introduces not only the complexities of scraping data from these sources (which include diverse data forms), but also presents currently available tools including their strengths and weaknesses. The chapter introduces our solution to effectively mining online social networks through the development of VoyeurServer, a tool we designed which builds upon the open-source Web-Harvest framework. We have shared details of how VoyeurServer was developed and how it works so that data mining developers can develop their own customized data mining solutions built upon the Web-Harvest framework. We conclude the chapter with future directions of our data mining project so that developers can incorporate relevant features into their data mining applications.
---
paper_title: Using Very Simple Statistics for Review Search: An Exploration
paper_content:
We report on work in progress on using very simple statistics in an unsupervised fashion to re-rank search engine results when review-oriented queries are issued; the goal is to bring opinionated or subjective results to the top of the results list. We find that our proposed technique performs comparably to methods that rely on sophisticated pre-encoded linguistic knowledge, and that both substantially improve the initial results produced by the Yahoo! search engine.
---
paper_title: Mining the peanut gallery: opinion extraction and semantic classification of product reviews
paper_content:
The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.
---
paper_title: Seeing Stars: Exploiting Class Relationships For Sentiment Categorization With Respect To Rating Scales
paper_content:
We address the rating-inference problem, wherein rather than simply decide whether a review is "thumbs up" or "thumbs down", as in previous sentiment analysis work, one must determine an author's evaluation with respect to a multi-point scale (e.g., one to five "stars"). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, "three stars" is intuitively closer to "four stars" than to "one star".We first evaluate human performance at the task. Then, we apply a meta-algorithm, based on a metric labeling formulation of the problem, that alters a given n-ary classifier's output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem.
---
paper_title: Thumbs Up Or Thumbs Down? Semantic Orientation Applied To Unsupervised Classification Of Reviews
paper_content:
This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g.,"subtle nuances") and a negative semantic orientation when it has bad associations (e.g.,"very cavalier"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word"excellent"minus the mutual information between the given phrase and the word"poor". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.
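The scoring rule described above reduces to a single log-ratio of co-occurrence counts, which the sketch below computes over a tiny invented corpus. Turney estimated the corresponding counts from web search hits with the NEAR operator rather than from a local collection, so the numbers here only illustrate the sign of the score.

```python
import math

# A tiny stand-in corpus; the counts would really come from search-engine hits.
corpus = [
    "the subtle nuances made it an excellent experience overall",
    "an excellent hotel with subtle nuances in the service",
    "a very cavalier attitude and poor service throughout",
    "poor value and a very cavalier response from the staff",
]

def hits(term):
    """Number of documents containing the term (phrase or single word)."""
    return sum(term in doc for doc in corpus)

def hits_near(phrase, anchor):
    """Documents where the phrase co-occurs with the anchor word
    (a crude stand-in for the NEAR operator used in the paper)."""
    return sum(anchor in doc.split() and phrase in doc for doc in corpus)

def semantic_orientation(phrase, eps=0.01):
    """SO(phrase) = PMI(phrase, 'excellent') - PMI(phrase, 'poor'), which
    simplifies to the log-ratio below; eps avoids division by zero."""
    return math.log2(
        ((hits_near(phrase, "excellent") + eps) * (hits("poor") + eps)) /
        ((hits_near(phrase, "poor") + eps) * (hits("excellent") + eps))
    )

for phrase in ["subtle nuances", "very cavalier"]:
    print(phrase, "->", round(semantic_orientation(phrase), 2))  # positive / negative
```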
---
paper_title: A Parameterized Centrality Metric for Network Analysis
paper_content:
A variety of metrics have been proposed to measure the relative importance of nodes in a network. One of these, alpha-centrality [1], measures the number of attenuated paths that exist between nodes. We introduce a normalized version of this metric and use it to study network structure, specifically, to rank nodes and find community structure of the network. Specifically, we extend the modularity-maximization method [2] for community detection to use this metric as the measure of node connectivity. Normalized alpha-centrality is a powerful tool for network analysis, since it contains a tunable parameter that sets the length scale of interactions. Studying how rankings and discovered communities change when this parameter is varied allows us to identify locally and globally important nodes and structures. We apply the proposed method to several benchmark networks and show that it leads to better insight into network structure than alternative methods.
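The attenuated-path centrality referred to above has a standard closed form, c = (I - alpha * A^T)^(-1) e, which counts walks discounted by a factor alpha per step. The numpy sketch below implements that general formulation on an invented four-node graph; the paper's specific normalization may differ from the simple length normalization shown here.

```python
import numpy as np

# Adjacency matrix of a small directed graph (A[i, j] = 1 for an edge i -> j).
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

def alpha_centrality(A, alpha=0.1, e=None):
    """Centrality counting attenuated paths: c = (I - alpha * A^T)^(-1) e.
    alpha sets the length scale of interactions and must stay below
    1 / (largest eigenvalue of A) for the path series to converge."""
    n = A.shape[0]
    e = np.ones(n) if e is None else e          # exogenous importance vector
    return np.linalg.solve(np.eye(n) - alpha * A.T, e)

c = alpha_centrality(A, alpha=0.1)
print(c / np.linalg.norm(c))    # a normalized version, usable for ranking nodes
```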
---
paper_title: Thumbs Up? Sentiment Classification Using Machine Learning Techniques
paper_content:
We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.
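A present-day equivalent of the experimental setup, bag-of-words features fed to a standard learner, takes only a few lines with scikit-learn. The four toy reviews below are invented stand-ins for the movie-review corpus, and Naive Bayes is just one of the three learners compared in the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A toy stand-in for the movie-review data used in the paper.
reviews = [
    "a brilliant, moving film with superb acting",
    "one of the best movies I have seen this year",
    "dull, predictable and far too long",
    "a complete waste of time with terrible dialogue",
]
labels = ["pos", "pos", "neg", "neg"]

# Unigram presence/absence features with a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(binary=True), MultinomialNB())
model.fit(reviews, labels)

print(model.predict(["superb acting but far too long"]))  # predicted label
```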
---
paper_title: Creating Subjective and Objective Sentence Classifiers from Unannotated Texts
paper_content:
This paper presents the results of developing subjectivity classifiers using only unannotated texts for training. The performance rivals that of previous supervised learning approaches. In addition, we advance the state of the art in objective sentence classification by learning extraction patterns associated with objectivity and creating objective classifiers that achieve substantially higher recall than previous work with comparable precision.
---
paper_title: Mining and summarizing customer reviews
paper_content:
Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewriting some of the original sentences from the reviews to capture the main points, as in classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.
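A toy sketch of the feature-based summarization step described above, assuming a small hand-made opinion lexicon and a pre-mined list of product features (the actual system mines features with association mining and determines opinion-word orientation automatically):

POSITIVE = {'great', 'good', 'excellent', 'amazing', 'clear'}
NEGATIVE = {'bad', 'poor', 'blurry', 'weak', 'terrible'}

def summarize_reviews(reviews, features):
    """Group opinion sentences by product feature and polarity."""
    summary = {f: {'positive': [], 'negative': []} for f in features}
    for review in reviews:
        for sentence in review.lower().split('.'):
            words = set(sentence.split())
            polarity = len(words & POSITIVE) - len(words & NEGATIVE)
            for feature in features:
                if feature in words and polarity != 0:
                    key = 'positive' if polarity > 0 else 'negative'
                    summary[feature][key].append(sentence.strip())
    return summary

# summarize_reviews(["The picture is great. Battery life is poor."],
#                   ["picture", "battery"])
# -> {'picture': {'positive': ['the picture is great'], 'negative': []},
#     'battery': {'positive': [], 'negative': ['battery life is poor']}}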
---
paper_title: Determining The Sentiment Of Opinions
paper_content:
Identifying sentiments (the affective parts of opinions) is a challenging problem. We present a system that, given a topic, automatically finds the people who hold opinions about that topic and the sentiment of each opinion. The system contains a module for determining word sentiment and another for combining sentiments within a sentence. We experiment with various models of classifying and combining sentiment at word and sentence levels, with promising results.
---
paper_title: Automatic Extraction of Opinion Propositions and their Holders
paper_content:
We identify a new task in the ongoing analysis of opinions: finding propositional opinions, sentential complements which for many verbs contain the actual opinion, rather than full opinion sentences. We propose an extension of semantic parsing techniques, coupled with additional lexical and syntactic features, that can produce labels for propositional opinions as opposed to other syntactic constituents. We describe the annotation of a small corpus of 5,139 sentences with propositional opinion information, and use this corpus to evaluate our methods. We also present results that indicate that the proposed methods can be extended to the related task of identifying opinion holders and associating them with propositional opinions.
---
paper_title: Birds of a Feather: Homophily in Social Networks
paper_content:
Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize...
---
paper_title: Mining the peanut gallery: opinion extraction and semantic classification of product reviews
paper_content:
The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.
---
paper_title: Mining product reputations on the Web
paper_content:
Knowing the reputations of your own and/or competitors' products is important for marketing and customer relationship management. It is, however, very costly to collect and analyze survey data manually. This paper presents a new framework for mining product reputations on the Internet. It automatically collects people's opinions about target products from Web pages, and it uses text mining techniques to obtain the reputations of those products.On the basis of human-test samples, we generate in advance syntactic and linguistic rules to determine whether any given statement is an opinion or not, as well as whether any such opinion is positive or negative in nature. We first collect statements regarding target products using a general search engine, and then, using the rules, extract opinions from among them and attach three labels to each opinion, labels indicating the positive/negative determination, the product name itself, and a numerical value expressing the degree of system confidence that the statement is, in fact, an opinion. The labeled opinions are then input into an opinion database.The mining of reputations, i.e., the finding of statistically meaningful information included in the database, is then conducted. We specify target categories using label values (such as positive opinions of product A) and perform four types of text mining: extraction of 1) characteristic words, 2) co-occurrence words, 3) typical sentences, for individual target categories, and 4) correspondence analysis among multiple target categories.Actual marketing data is used to demonstrate the validity and effectiveness of the framework, which offers a drastic reduction in the overall cost of reputation analysis over that of conventional survey approaches and supports the discovery of knowledge from the pool of opinions on the web.
---
paper_title: Latent aspect rating analysis on review text data: a rating regression approach
paper_content:
In this paper, we define and study a new opinionated text data analysis problem called Latent Aspect Rating Analysis (LARA), which aims at analyzing opinions expressed about an entity in an online review at the level of topical aspects to discover each individual reviewer's latent opinion on each aspect as well as the relative emphasis on different aspects when forming the overall judgment of the entity. We propose a novel probabilistic rating regression model to solve this new text mining problem in a general way. Empirical experiments on a hotel review data set show that the proposed latent rating regression model can effectively solve the problem of LARA, and that the detailed analysis of opinions at the level of topical aspects enabled by the proposed model can support a wide range of application tasks, such as aspect opinion summarization, entity ranking based on aspect ratings, and analysis of reviewers rating behavior.
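The gist of the latent rating regression can be conveyed by the following simplified decomposition (a sketch, not the paper's full generative model): the observed overall rating of review d is approximated as a weighted combination of latent aspect ratings, which are themselves linear functions of the words used in each aspect segment:

r_d \;\approx\; \sum_{i=1}^{k} \alpha_{d,i}\, s_{d,i},
\qquad
s_{d,i} \;=\; \sum_{w} \beta_{i,w}\, c(w, d_i),
\qquad
\alpha_{d,i} \ge 0,\;\; \sum_{i} \alpha_{d,i} = 1

where α_{d,i} is the reviewer's latent emphasis on aspect i, β_{i,w} is the sentiment weight of word w for aspect i, and c(w, d_i) is the count of w in the aspect-i portion of the review.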
---
paper_title: TwitterMonitor: trend detection over the twitter stream
paper_content:
We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario.
---
paper_title: Mining Comparative Sentences and Relations
paper_content:
This paper studies a text mining problem, comparative sentence mining (CSM). A comparative sentence expresses an ordering relation between two sets of entities with respect to some common features. For example, the comparative sentence "Canon's optics are better than those of Sony and Nikon" expresses the comparative relation: (better, {optics}, {Canon}, {Sony, Nikon}). Given a set of evaluative texts on the Web, e.g., reviews, forum postings, and news articles, the task of comparative sentence mining is (1) to identify comparative sentences from the texts and (2) to extract comparative relations from the identified comparative sentences. This problem has many applications. For example, a product manufacturer wants to know customer opinions of its products in comparison with those of its competitors. In this paper, we propose two novel techniques based on two new types of sequential rules to perform the tasks. Experimental evaluation has been conducted using different types of evaluative texts from the Web. Results show that our techniques are very promising.
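As a minimal illustration of the target output, the comparative relation from the abstract's example sentence can be represented as a simple record; the class-sequential-rule learning that identifies and extracts such relations is not reproduced here:

from dataclasses import dataclass
from typing import List

@dataclass
class ComparativeRelation:
    relation_word: str      # e.g. 'better'
    features: List[str]     # the compared features, e.g. ['optics']
    entity_s1: List[str]    # entities on the favoured side, e.g. ['Canon']
    entity_s2: List[str]    # entities compared against, e.g. ['Sony', 'Nikon']

# "Canon's optics are better than those of Sony and Nikon" ->
example = ComparativeRelation('better', ['optics'], ['Canon'], ['Sony', 'Nikon'])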
---
paper_title: A Link to the Past: Constructing Historical Social Networks
paper_content:
To assist in the research of social networks in history, we develop machine-learning-based tools for the identification and classification of personal relationships. Our case study focuses on the Dutch social movement between 1870 and 1940, and is based on biographical texts describing the lives of notable people in this movement. We treat the identification and the labeling of relations between two persons into positive, neutral, and negative both as a sequence of two tasks and as a single task. We observe that our machine-learning classifiers, support vector machines, produce better generalization performance on the single task. We show how a complete social network can be built from these classifications, and provide a qualitative analysis of the induced network using expert judgements on samples of the network.
---
paper_title: Using Very Simple Statistics for Review Search: An Exploration
paper_content:
We report on work in progress on using very simple statistics in an unsupervised fashion to re-rank search engine results when review-oriented queries are issued; the goal is to bring opinionated or subjective results to the top of the results list. We find that our proposed technique performs comparably to methods that rely on sophisticated pre-encoded linguistic knowledge, and that both substantially improve the initial results produced by the Yahoo! search engine.
---
paper_title: Thumbs Up Or Thumbs Down? Semantic Orientation Applied To Unsupervised Classification Of Reviews
paper_content:
This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., "subtle nuances") and a negative semantic orientation when it has bad associations (e.g., "very cavalier"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word "excellent" minus the mutual information between the given phrase and the word "poor". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.
---
paper_title: Thumbs Up? Sentiment Classification Using Machine Learning Techniques
paper_content:
We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.
---
paper_title: Using sentiment orientation features for mood classification in blogs
paper_content:
In this paper we explore the task of mood classification for blog postings. We propose a novel approach that uses the hierarchy of possible moods to achieve better results than a standard machine learning approach. We also show that using sentiment orientation features improves the performance of classification. We used the Livejournal blog corpus as a dataset to train and evaluate our method.
---
paper_title: Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena
paper_content:
We perform a sentiment analysis of all tweets published on the microblogging platform Twitter in the second half of 2008. We use a psychometric instrument to extract six mood states (tension, depression, anger, vigor, fatigue, confusion) from the aggregated Twitter content and compute a six-dimensional mood vector for each day in the timeline. We compare our results to a record of popular events gathered from media and sources. We find that events in the social, political, cultural and economic sphere do have a significant, immediate and highly specific effect on the various dimensions of public mood. We speculate that large scale analyses of mood can provide a solid platform to model collective emotive trends in terms of their predictive value with regards to existing social as well as economic indicators.
---
paper_title: Pulse: Mining Customer Opinions from Free Text
paper_content:
We present a prototype system, code-named Pulse, for mining topics and sentiment orientation jointly from free text customer feedback. We describe the application of the prototype system to a database of car reviews. Pulse enables the exploration of large quantities of customer free text. The user can examine customer opinion “at a glance” or explore the data at a finer level of detail. We describe a simple but effective technique for clustering sentences, the application of a bootstrapping approach to sentiment classification, and a novel user-interface.
---
paper_title: Product recommendation and rating prediction based on multi-modal social networks
paper_content:
Online Social Rating Networks (SRNs) such as Epinions and Flixter, allow users to form several implicit social networks, through their daily interactions like co-commenting on the same products, or similarly co-rating products. The majority of earlier work in Rating Prediction and Recommendation of products (e.g. Collaborative Filtering) mainly takes into account ratings of users on products. However, in SRNs users can also build their explicit social network by adding each other as friends. In this paper, we propose Social-Union, a method which combines similarity matrices derived from heterogeneous (unipartite and bipartite) explicit or implicit SRNs. Moreover, we propose an effective weighting strategy of SRNs influence based on their structured density. We also generalize our model for combining multiple social networks. We perform an extensive experimental comparison of the proposed method against existing rating prediction and product recommendation algorithms, using synthetic and two real data sets (Epinions and Flixter). Our experimental results show that our Social-Union algorithm is more effective in predicting rating and recommending products in SRNs.
---
paper_title: Strength of social influence in trust networks in product review sites
paper_content:
Some popular product review sites such as Epinions allow users to establish a trust network among themselves, indicating who they trust in providing product reviews and ratings. While trust relations have been found to be useful in generating personalised recommendations, the relationship between trust and product ratings has so far been overlooked. In this paper, we examine large datasets collected from Epinions and Ciao, two popular product review sites. We discover that in general users who trust each other tend to have smaller differences in their ratings as time passes, giving support to the theories of homophily and social influence. However, we also discover that this does not hold true across all trusted users. A trust relation does not guarantee that two users have similar preferences, implying that personalised recommendations based on trust relations do not necessarily produce more accurate predictions. We propose a method to estimate the strengths of trust relations so as to estimate the true influence among the trusted users. Our method extends the popular matrix factorisation technique for collaborative filtering, which allows us to generate more accurate rating predictions at the same time. We also show that the estimated strengths of trust relations correlate with the similarity among the users. Our work contributes to the understanding of the interplay between trust relations and product ratings, and suggests that trust networks may serve as a more general socialising venue than only an indication of similarity in user preferences.
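As a rough sketch of how trust strengths can enter a matrix-factorization recommender, one can add a weighted social-regularization term to the usual squared-error objective; this illustrates the general idea rather than the exact model proposed in the paper:

\min_{U,V}\;\sum_{(u,i)\in\mathcal{R}} \bigl(r_{ui} - \mathbf{u}_u^{\top}\mathbf{v}_i\bigr)^2
\;+\;\lambda\bigl(\lVert U\rVert_F^{2} + \lVert V\rVert_F^{2}\bigr)
\;+\;\beta\sum_{u}\sum_{t\in T(u)} w_{ut}\,\lVert \mathbf{u}_u - \mathbf{u}_t \rVert^{2}

where T(u) is the set of users trusted by u and w_{ut} is the estimated strength of the trust relation, so strongly trusted users pull each other's latent factors closer together.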
---
paper_title: A survey on sentiment detection of reviews
paper_content:
The sentiment detection of texts has witnessed booming interest in recent years, due to the increased availability of online reviews in digital form and the ensuing need to organize them. To date, there are mainly four different problems predominating in this research community, namely, subjectivity classification, word sentiment classification, document sentiment classification and opinion extraction. In fact, there are inherent relations between them. Subjectivity classification can prevent the sentiment classifier from considering irrelevant or even potentially misleading text. Document sentiment classification and opinion extraction have often involved word sentiment classification techniques. This survey discusses related issues and main approaches to these problems.
---
paper_title: TwitterMonitor: trend detection over the twitter stream
paper_content:
We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario.
---
paper_title: Latent aspect rating analysis on review text data: a rating regression approach
paper_content:
In this paper, we define and study a new opinionated text data analysis problem called Latent Aspect Rating Analysis (LARA), which aims at analyzing opinions expressed about an entity in an online review at the level of topical aspects to discover each individual reviewer's latent opinion on each aspect as well as the relative emphasis on different aspects when forming the overall judgment of the entity. We propose a novel probabilistic rating regression model to solve this new text mining problem in a general way. Empirical experiments on a hotel review data set show that the proposed latent rating regression model can effectively solve the problem of LARA, and that the detailed analysis of opinions at the level of topical aspects enabled by the proposed model can support a wide range of application tasks, such as aspect opinion summarization, entity ranking based on aspect ratings, and analysis of reviewers rating behavior.
---
paper_title: A Joint Model of Text and Aspect Ratings for Sentiment Summarization
paper_content:
Online reviews are often accompanied with numerical ratings provided by users for a set of service or product aspects. We propose a statistical model which is able to discover corresponding topics in text and extract textual evidence from reviews supporting each of these aspect ratings ‐ a fundamental problem in aspect-based sentiment summarization (Hu and Liu, 2004a). Our model achieves high accuracy, without any explicitly labeled data except the user provided opinion ratings. The proposed approach is general and can be used for segmentation in other applications where sequential data is accompanied with correlated signals.
---
paper_title: A holistic lexicon-based approach to opinion mining
paper_content:
One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc.) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly.
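A heavily simplified sketch of lexicon-based aggregation for a single product feature, with a crude negation window; the holistic approach additionally resolves context-dependent opinion words, but-clauses and other linguistic patterns that are omitted here:

OPINION_LEXICON = {'great': 1, 'amazing': 1, 'good': 1,
                   'bad': -1, 'poor': -1, 'terrible': -1}
NEGATIONS = {'not', 'no', 'never', "don't", 'hardly'}

def feature_orientation(sentence, feature):
    """Return +1, -1 or 0 for the opinion expressed on `feature` in `sentence`."""
    tokens = sentence.lower().split()
    if feature not in tokens:
        return 0
    feature_pos = tokens.index(feature)
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok in OPINION_LEXICON:
            polarity = OPINION_LEXICON[tok]
            if any(t in NEGATIONS for t in tokens[max(0, i - 3):i]):
                polarity = -polarity            # flip within a small negation window
            # weight by distance to the feature as a crude aggregation function
            score += polarity / (abs(i - feature_pos) or 1)
    return (score > 0) - (score < 0)

# feature_orientation('the battery life is not good', 'battery')  ->  -1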
---
paper_title: Using Very Simple Statistics for Review Search: An Exploration
paper_content:
We report on work in progress on using very simple statistics in an unsupervised fashion to re-rank search engine results when review-oriented queries are issued; the goal is to bring opinionated or subjective results to the top of the results list. We find that our proposed technique performs comparably to methods that rely on sophisticated pre-encoded linguistic knowledge, and that both substantially improve the initial results produced by the Yahoo! search engine.
---
paper_title: Automatic Construction Of Polarity-Tagged Corpus From HTML Documents
paper_content:
This paper proposes a novel method of building a polarity-tagged corpus from HTML documents. The characteristics of this method are that it is fully automatic and that it can be applied to arbitrary HTML documents. The idea behind our method is to utilize certain layout structures and linguistic patterns. By using them, we can automatically extract sentences that express opinions. In our experiment, the method was able to construct a corpus consisting of 126,610 sentences.
---
paper_title: Creating Subjective and Objective Sentence Classifiers from Unannotated Texts
paper_content:
This paper presents the results of developing subjectivity classifiers using only unannotated texts for training. The performance rivals that of previous supervised learning approaches. In addition, we advance the state of the art in objective sentence classification by learning extraction patterns associated with objectivity and creating objective classifiers that achieve substantially higher recall than previous work with comparable precision.
---
paper_title: Clustering product features for opinion mining
paper_content:
In sentiment analysis of product reviews, one important problem is to produce a summary of opinions based on product features/attributes (also called aspects). However, for the same feature, people can express it with many different words or phrases. To produce a useful summary, these words and phrases, which are domain synonyms, need to be grouped under the same feature group. Although several methods have been proposed to extract product features from reviews, limited work has been done on clustering or grouping of synonym features. This paper focuses on this task. Classic methods for solving this problem are based on unsupervised learning using some forms of distributional similarity. However, we found that these methods do not do well. We then model it as a semi-supervised learning problem. Lexical characteristics of the problem are exploited to automatically identify some labeled examples. Empirical evaluation shows that the proposed method outperforms existing state-of-the-art methods by a large margin.
---
paper_title: Learning Subjective Nouns Using Extraction Pattern Bootstrapping
paper_content:
We explore the idea of creating a subjectivity classifier that uses lists of subjective nouns learned by bootstrapping algorithms. The goal of our research is to develop a system that can distinguish subjective sentences from objective sentences. First, we use two bootstrapping algorithms that exploit extraction patterns to learn sets of subjective nouns. Then we train a Naive Bayes classifier using the subjective nouns, discourse features, and subjectivity clues identified in prior research. The bootstrapping algorithms learned over 1000 subjective nouns, and the subjectivity classifier performed well, achieving 77% recall with 81% precision.
---
paper_title: Thumbs Up Or Thumbs Down? Semantic Orientation Applied To Unsupervised Classification Of Reviews
paper_content:
This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., "subtle nuances") and a negative semantic orientation when it has bad associations (e.g., "very cavalier"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word "excellent" minus the mutual information between the given phrase and the word "poor". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.
---
paper_title: Determining the semantic orientation of terms through gloss classification
paper_content:
Sentiment classification is a recent subdiscipline of text classification which is concerned not with the topic a document is about, but with the opinion it expresses. It has a rich set of applications, ranging from tracking users' opinions about products or about political candidates as expressed in online forums, to customer relationship management. Functional to the extraction of opinions from text is the determination of the orientation of "subjective" terms contained in text, i.e. the determination of whether a term that carries opinionated content has a positive or a negative connotation. In this paper we present a new method for determining the orientation of subjective terms. The method is based on the quantitative analysis of the glosses of such terms, i.e. the definitions that these terms are given in on-line dictionaries, and on the use of the resulting term representations for semi-supervised term classification. The method we present outperforms all known methods when tested on the recognized standard benchmarks for this task.
---
paper_title: Seeing Stars: Exploiting Class Relationships For Sentiment Categorization With Respect To Rating Scales
paper_content:
We address the rating-inference problem, wherein rather than simply decide whether a review is "thumbs up" or "thumbs down", as in previous sentiment analysis work, one must determine an author's evaluation with respect to a multi-point scale (e.g., one to five "stars"). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, "three stars" is intuitively closer to "four stars" than to "one star".We first evaluate human performance at the task. Then, we apply a meta-algorithm, based on a metric labeling formulation of the problem, that alters a given n-ary classifier's output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem.
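In rough terms, the metric-labeling step chooses a labeling that trades the base classifier's preferences off against a smoothness penalty over similar reviews; a sketch of such an objective (notation assumed here, not taken verbatim from the paper):

\min_{\ell:\,\mathcal{X}\to\{1,\dots,k\}}\;
\sum_{x\in\mathcal{X}} \pi\bigl(x,\ell(x)\bigr)
\;+\;
\sum_{x\in\mathcal{X}} \sum_{y\in \mathrm{nn}(x)} \mathrm{sim}(x,y)\, d\bigl(\ell(x),\ell(y)\bigr)

where π(x, l) is the cost the base n-ary classifier assigns to giving review x the label l, nn(x) are x's most similar labelled neighbours, sim is the similarity measure, and d is a distance between labels on the rating scale (so "three stars" versus "four stars" is penalized less than "one star" versus "four stars").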
---
paper_title: Document-Word Co-regularization for Semi-supervised Sentiment Analysis
paper_content:
The goal of sentiment prediction is to automatically identify whether a given piece of text expresses positive or negative opinion towards a topic of interest. One can pose sentiment prediction as a standard text categorization problem, but gathering labeled data turns out to be a bottleneck. Fortunately, background knowledge is often available in the form of prior information about the sentiment polarity of words in a lexicon. Moreover, in many applications abundant unlabeled data is also available. In this paper, we propose a novel semi-supervised sentiment prediction algorithm that utilizes lexical prior knowledge in conjunction with unlabeled examples. Our method is based on joint sentiment analysis of documents and words based on a bipartite graph representation of the data. We present an empirical study on a diverse collection of sentiment prediction problems which confirms that our semi-supervised lexical models significantly outperform purely supervised and competing semi-supervised techniques.
---
paper_title: Seeing Stars When There Aren’t Many Stars: Graph-Based Semi-Supervised Learning For Sentiment Categorization
paper_content:
We present a graph-based semi-supervised learning algorithm to address the sentiment analysis task of rating inference. Given a set of documents (e.g., movie reviews) and accompanying ratings (e.g., "4 stars"), the task calls for inferring numerical ratings for unlabeled documents based on the perceived sentiment expressed by their text. In particular, we are interested in the situation where labeled data is scarce. We place this task in the semi-supervised setting and demonstrate that considering unlabeled reviews in the learning process can improve rating-inference performance. We do so by creating a graph on both labeled and unlabeled data to encode certain assumptions for this task. We then solve an optimization problem to obtain a smooth rating function over the whole graph. When only limited labeled data is available, this method achieves significantly better predictive accuracy over other methods that ignore the unlabeled examples during training.
---
paper_title: Semi-Supervised Polarity Lexicon Induction
paper_content:
We present an extensive study on the problem of detecting polarity of words. We consider the polarity of a word to be either positive or negative. For example, words such as good, beautiful, and wonderful are considered as positive words; whereas words such as bad, ugly, and sad are considered negative words. We treat polarity detection as a semi-supervised label propagation problem in a graph. In the graph, each node represents a word whose polarity is to be determined. Each weighted edge encodes a relation that exists between two words. Each node (word) can have two labels: positive or negative. We study this framework in two different resource availability scenarios using WordNet and OpenOffice thesaurus when WordNet is not available. We report our results on three different languages: English, French, and Hindi. Our results indicate that label propagation improves significantly over the baseline and other semi-supervised learning methods like Mincuts and Randomized Mincuts for this task.
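A compact sketch of the label-propagation idea for polarity lexicon induction, assuming the word graph (from WordNet or thesaurus relations) has already been built; seed words are clamped to +1/-1 and the remaining words repeatedly average their neighbours' scores. This is a generic propagation scheme rather than the exact algorithm evaluated in the paper:

def propagate_polarity(graph, seeds, iterations=50):
    """graph: word -> list of (neighbour, weight); seeds: word -> +1 or -1."""
    scores = {w: float(seeds.get(w, 0.0)) for w in graph}
    for _ in range(iterations):
        updated = {}
        for word, neighbours in graph.items():
            if word in seeds:                          # keep labelled seeds clamped
                updated[word] = float(seeds[word])
                continue
            total = sum(weight for _, weight in neighbours)
            if total == 0:
                updated[word] = scores[word]
            else:
                updated[word] = sum(scores.get(n, 0.0) * weight
                                    for n, weight in neighbours) / total
        scores = updated
    return {w: 'positive' if s > 0 else 'negative' if s < 0 else 'unknown'
            for w, s in scores.items()}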
---
paper_title: Thumbs Up Or Thumbs Down? Semantic Orientation Applied To Unsupervised Classification Of Reviews
paper_content:
This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., "subtle nuances") and a negative semantic orientation when it has bad associations (e.g., "very cavalier"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word "excellent" minus the mutual information between the given phrase and the word "poor". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.
---
paper_title: Identifying content for planned events across social media sites
paper_content:
User-contributed Web data contains rich and diverse information about a variety of events in the physical world, such as shows, festivals, conferences and more. This information ranges from known event features (e.g., title, time, location) posted on event aggregation platforms (e.g., Last.fm events, EventBrite, Facebook events) to discussions and reactions related to events shared on different social media sites (e.g., Twitter, YouTube, Flickr). In this paper, we focus on the challenge of automatically identifying user-contributed content for events that are planned and, therefore, known in advance, across different social media sites. We mine event aggregation platforms to extract event features, which are often noisy or missing. We use these features to develop query formulation strategies for retrieving content associated with an event on different social media sites. Further, we explore ways in which event content identified on one social media site can be used to retrieve additional relevant event content on other social media sites. We apply our strategies to a large set of user-contributed events, and analyze their effectiveness in retrieving relevant event content from Twitter, YouTube, and Flickr.
---
paper_title: Rule Type Identification Using TRCM for Trend Analysis in Twitter
paper_content:
This paper considers the use of Association Rule Mining (ARM) and our proposed Transaction based Rule Change Mining (TRCM) to identify the rule types present in tweets' hashtags over a specific consecutive period of time and their linkage to real-life occurrences. Our novel algorithm was termed TRCM-RTI in reference to Rule Type Identification. We created Time Frame Windows (TFWs) to detect evolvement statuses and calculate the lifespan of hashtags in online tweets. We link RTI to real-life events by monitoring and recording rule evolvement patterns in TFWs on the Twitter network.
---
paper_title: TRCM: a methodology for temporal analysis of evolving concepts in Twitter
paper_content:
The Twitter network has been labelled the most commonly used microblogging application around today. With about 500 million estimated registered users as of June, 2012, Twitter has become a credible medium of sentiment/opinion expression. It is also a notable medium for information dissemination; including breaking news on diverse issues since it was launched in 2007. Many organisations, individuals and even government bodies follow activities on the network in order to obtain knowledge on how their audience reacts to tweets that affect them. We can use postings on Twitter (known as tweets) to analyse patterns associated with events by detecting the dynamics of the tweets. A common way of labelling a tweet is by including a number of hashtags that describe its contents. Association Rule Mining can find the likelihood of co-occurrence of hashtags. In this paper, we propose the use of temporal Association Rule Mining to detect rule dynamics, and consequently dynamics of tweets. We coined our methodology Transaction-based Rule Change Mining (TRCM). A number of patterns are identifiable in these rule dynamics including, new rules, emerging rules, unexpected rules and ‘dead’ rules. Also the linkage between the different types of rule dynamics is investigated experimentally in this paper.
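A simplified sketch of the rule-comparison step at the heart of TRCM: association rules mined from hashtags in two consecutive time windows are matched on their left-hand and right-hand sides and labelled accordingly. The similarity measure, the threshold and the mapping of similarity patterns to rule types below are illustrative assumptions, not the paper's exact definitions:

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def compare_rule_windows(rules_t1, rules_t2, threshold=0.5):
    """Each rule is a (lhs_hashtags, rhs_hashtags) pair of tuples."""
    labels = {}
    for rule in rules_t2:
        best_lhs = max((jaccard(rule[0], old[0]) for old in rules_t1), default=0.0)
        best_rhs = max((jaccard(rule[1], old[1]) for old in rules_t1), default=0.0)
        if best_lhs >= threshold and best_rhs >= threshold:
            labels[rule] = 'unchanged'
        elif best_lhs >= threshold:       # similar condition, different consequent
            labels[rule] = 'unexpected'
        elif best_rhs >= threshold:       # different condition, similar consequent
            labels[rule] = 'emerging'
        else:
            labels[rule] = 'new'
    dead = [old for old in rules_t1
            if all(jaccard(old[0], r[0]) < threshold and
                   jaccard(old[1], r[1]) < threshold for r in rules_t2)]
    return labels, dead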
---
paper_title: Event Detection in Twitter
paper_content:
Twitter, as a form of social media, has been fast emerging in recent years. Users are using Twitter to report real-life events. This paper focuses on detecting those events by analyzing the text stream in Twitter. Although event detection has long been a research topic, the characteristics of Twitter make it a non-trivial task. Tweets reporting such events are usually overwhelmed by a high flood of meaningless “babbles”. Moreover, the event detection algorithm needs to be scalable given the sheer volume of tweets. This paper attempts to tackle these challenges with EDCoW (Event Detection with Clustering of Wavelet-based Signals). EDCoW builds signals for individual words by applying wavelet analysis on the frequency-based raw signals of the words. It then filters away the trivial words by looking at their corresponding signal autocorrelations. The remaining words are then clustered to form events with a modularity-based graph partitioning technique. Experimental results show promising results for EDCoW.
---
paper_title: Twitter event detection: combining wavelet analysis and topic inference summarization
paper_content:
Today, streaming text mining plays an important role within real-time social media mining. Given the amount and cadence of the data generated by those platforms, classical text mining techniques are not suitable to deal with such new mining challenges. Event detection is no exception: available algorithms rely on text mining techniques applied to pre-known datasets, processed with no restrictions on computational complexity and required execution time per document analysis. This work presents a lightweight event detection approach using wavelet signal analysis of hashtag occurrences in the Twitter public stream. It also proposes a strategy to describe detected events using a Latent Dirichlet Allocation topic inference model based on Gibbs sampling. Peak detection using the Continuous Wavelet Transform achieved good results in the identification of abrupt increases in the mentions of specific hashtags. The combination of this method with the extraction of topics from tweets with hashtag mentions proved to be a viable option to summarize detected Twitter events in streaming environments.
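A small sketch of the peak-detection step over a hashtag's per-bin mention counts, using SciPy's continuous-wavelet-transform peak finder; the LDA-based topic summarization described above is not shown:

import numpy as np
from scipy.signal import find_peaks_cwt

def detect_bursts(mention_counts, widths=np.arange(1, 10)):
    """Return indices of time bins where an abrupt increase in mentions is detected."""
    signal = np.asarray(mention_counts, dtype=float)
    # find_peaks_cwt convolves the signal with wavelets at several widths and keeps
    # ridge lines that persist across scales, which helps suppress noisy spikes.
    return find_peaks_cwt(signal, widths)

# detect_bursts([2, 3, 2, 40, 35, 4, 3, 2]) -> indices near the burst, e.g. array([3])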
---
paper_title: Sensing Trending Topics in Twitter
paper_content:
Online social and news media generate rich and timely information about real-world events of all kinds. However, the huge amount of data available, along with the breadth of the user base, requires a substantial effort of information filtering to successfully drill down to relevant topics and events. Trending topic detection is therefore a fundamental building block to monitor and summarize information originating from social sources. There are a wide variety of methods and variables and they greatly affect the quality of results. We compare six topic detection methods on three Twitter datasets related to major events, which differ in their time scale and topic churn rate. We observe how the nature of the event considered, the volume of activity over time, the sampling procedure and the pre-processing of the data all greatly affect the quality of detected topics, which also depends on the type of detection method used. We find that standard natural language processing techniques can perform well for social streams on very focused topics, but novel techniques designed to mine the temporal distribution of concepts are needed to handle more heterogeneous streams containing multiple stories evolving in parallel. One of the novel topic detection methods we propose, based on n-gram co-occurrence and topic ranking, consistently achieves the best performance across all these conditions, thus being more reliable than other state-of-the-art techniques.
---
paper_title: Breaking News Detection and Tracking in Twitter
paper_content:
Twitter has been used as one of the communication channels for spreading breaking news. We propose a method to collect, group, rank and track breaking news in Twitter. Since short message lengths make similarity comparison difficult, we boost scores on proper nouns to improve the grouping results. Each group is ranked based on popularity and reliability factors. The current detection method is limited to the factual parts of messages. We developed an application called “Hotstream” based on the proposed method. Users can discover breaking news from the Twitter timeline. Each story is provided with information on the message originator, story development and an activity chart. This provides a convenient way for people to follow breaking news and stay informed with real-time updates.
---
paper_title: Automatic identification and presentation of twitter content for planned events
paper_content:
We demonstrate a system for automatically augmenting information on planned events with Twitter messages, aimed at enhancing the event information-seeking experience with rich and timely user-contributed content. Our system uses a set of automatic query building strategies that, for any given event and its associated context features (e.g., title, description, location), identify and return related Twitter messages. We present two alternative interfaces to our system, namely, a browser plug-in and a customizable Web interface. The browser plug-in uses a pre-configured set of strategies to display related Twitter messages alongside the event information on a user-contributed event Website. The customizable interface provides the flexibility of tuning the query building strategies, and also allows for several alternative ranking modes for the identified event messages.
---
paper_title: TRCM: a methodology for temporal analysis of evolving concepts in Twitter
paper_content:
The Twitter network has been labelled the most commonly used microblogging application around today. With about 500 million estimated registered users as of June, 2012, Twitter has become a credible medium of sentiment/opinion expression. It is also a notable medium for information dissemination; including breaking news on diverse issues since it was launched in 2007. Many organisations, individuals and even government bodies follow activities on the network in order to obtain knowledge on how their audience reacts to tweets that affect them. We can use postings on Twitter (known as tweets) to analyse patterns associated with events by detecting the dynamics of the tweets. A common way of labelling a tweet is by including a number of hashtags that describe its contents. Association Rule Mining can find the likelihood of co-occurrence of hashtags. In this paper, we propose the use of temporal Association Rule Mining to detect rule dynamics, and consequently dynamics of tweets. We coined our methodology Transaction-based Rule Change Mining (TRCM). A number of patterns are identifiable in these rule dynamics including, new rules, emerging rules, unexpected rules and ‘dead’ rules. Also the linkage between the different types of rule dynamics is investigated experimentally in this paper.
---
paper_title: Rule Type Identification Using TRCM for Trend Analysis in Twitter
paper_content:
This paper considers the use of Association Rule Mining (ARM) and our proposed Transaction based Rule Change Mining (TRCM) to identify the rule types present in tweets' hashtags over a specific consecutive period of time and their linkage to real-life occurrences. Our novel algorithm was termed TRCM-RTI in reference to Rule Type Identification. We created Time Frame Windows (TFWs) to detect evolvement statuses and calculate the lifespan of hashtags in online tweets. We link RTI to real-life events by monitoring and recording rule evolvement patterns in TFWs on the Twitter network.
---
paper_title: Rule Type Identification Using TRCM for Trend Analysis in Twitter
paper_content:
This paper considers the use of Association Rule Mining (ARM) and our proposed Transaction based Rule Change Mining (TRCM) to identify the rule types present in tweets' hashtags over a specific consecutive period of time and their linkage to real-life occurrences. Our novel algorithm was termed TRCM-RTI in reference to Rule Type Identification. We created Time Frame Windows (TFWs) to detect evolvement statuses and calculate the lifespan of hashtags in online tweets. We link RTI to real-life events by monitoring and recording rule evolvement patterns in TFWs on the Twitter network.
---
paper_title: TRCM: a methodology for temporal analysis of evolving concepts in Twitter
paper_content:
The Twitter network has been labelled the most commonly used microblogging application around today. With about 500 million estimated registered users as of June, 2012, Twitter has become a credible medium of sentiment/opinion expression. It is also a notable medium for information dissemination; including breaking news on diverse issues since it was launched in 2007. Many organisations, individuals and even government bodies follow activities on the network in order to obtain knowledge on how their audience reacts to tweets that affect them. We can use postings on Twitter (known as tweets) to analyse patterns associated with events by detecting the dynamics of the tweets. A common way of labelling a tweet is by including a number of hashtags that describe its contents. Association Rule Mining can find the likelihood of co-occurrence of hashtags. In this paper, we propose the use of temporal Association Rule Mining to detect rule dynamics, and consequently dynamics of tweets. We coined our methodology Transaction-based Rule Change Mining (TRCM). A number of patterns are identifiable in these rule dynamics including, new rules, emerging rules, unexpected rules and ‘dead’ rules. Also the linkage between the different types of rule dynamics is investigated experimentally in this paper.
---
| Title: A Survey of Data Mining Techniques for Social Media Analysis
Section 1: Introduction
Description 1: Introduce the concept of social network and its relevance to data mining, discussing the nature of social networks and the challenges posed by the volume of data.
Section 2: Social Network Background
Description 2: Discuss the popularity and significance of social networks, highlighting their role in information dissemination and opinion expression.
Section 3: Research Issues on Social Network Analysis
Description 3: Outline the key research issues and challenges in using data mining techniques for social network analysis.
Section 4: Recommender System in Social Network Community
Description 4: Explain how recommender systems utilize data mining techniques based on collaborative filtering and content-based recommendations within social networks.
Section 5: Semantic Web of Social Network
Description 5: Describe the role of the Semantic Web in social network analysis and how it facilitates knowledge sharing and reuse across different applications.
Section 6: Opinion Analysis on Social Network
Description 6: Discuss the methods used to categorize opinions expressed in social networks, including the use of data mining to analyze sentiments on various subjects.
Section 7: Aspect-Based/Feature-Based Opinion Mining
Description 7: Explain how aspect-based or feature-based opinion mining focuses on specific aspects of reviews to determine their polarity.
Section 8: Homophily Clustering in Opinion Formation
Description 8: Outline how clustering techniques are used to model opinion formation on social networks by evaluating the similarity of users' opinions.
Section 9: Opinion Definition and Opinion Summarization
Description 9: Discuss techniques for summarizing opinions expressed in documents by analyzing sentiment polarities and occurrences.
Section 10: Opinion Extraction
Description 10: Detail the methods for extracting the specific parts of documents where opinions are expressed.
Section 11: Sentiment Analysis on Social Network
Description 11: Provide a comprehensive overview of sentiment analysis and its application in identifying the positive and negative opinions in social networks.
Section 12: Sentiment Orientation (SO)
Description 12: Explain the concept of Sentiment Orientation and its use in rating and classifying reviews based on the 5-star scale rating.
Section 13: Product Ratings and Reviews
Description 13: Discuss how social networks influence product ratings and reviews and the importance of analyzing this data for consumer decision-making.
Section 14: Reviews and Ratings (RnR) Architecture
Description 14: Describe the RnR architecture designed for analyzing product and service reviews to provide updated evaluations.
Section 15: Aspect Rating Analysis
Description 15: Explain the use of aspect rating analysis in determining the level of satisfaction represented in comments about specific aspects of products or services.
Section 16: Sentiment Lexicon
Description 16: Detail the creation and utilization of sentiment lexicons, or dictionaries of sentimental words, in data mining for sentiment analysis.
Section 17: Unsupervised Classification of Social Network Data
Description 17: Outline various unsupervised learning methods used to classify and cluster sentiment data from social networks.
Section 18: Semi-supervised Classification
Description 18: Discuss semi-supervised learning approaches that combine both labeled and unlabeled data to improve sentiment classification.
Section 19: Supervised Classification
Description 19: Provide an overview of supervised learning techniques used for analyzing known data patterns in social networks.
Section 20: Topic Detection and Tracking on Social Network
Description 20: Explain the techniques used to detect and track the emergence of new topics and their evolution over time on social networks.
Section 21: TRCM for TDT
Description 21: Describe the Transaction-based Rule Change Mining (TRCM) methodology used for detecting changes in event patterns on Twitter.
Section 22: Conclusion and Future Work
Description 22: Summarize the survey findings and suggest directions for future research in data mining techniques for social media analysis. |
Software Toolkits: Practical Aspects of the Internet of Things—A Survey | 6 | ---
paper_title: Contiki - a lightweight and flexible operating system for tiny networked sensors
paper_content:
Wireless sensor networks are composed of large numbers of tiny networked devices that communicate untethered. For large scale networks, it is important to be able to download code into the network dynamically. We present Contiki, a lightweight operating system with support for dynamic loading and replacement of individual programs and services. Contiki is built around an event-driven kernel but provides optional preemptive multithreading that can be applied to individual processes. We show that dynamic loading and unloading is feasible in a resource constrained environment, while keeping the base system lightweight and compact.
---
paper_title: Portable wireless-networking protocol evaluation
paper_content:
Multi-hop wireless networks, such as sensor-, ad hoc- and mesh-nets, can be differentiated in terms of participating devices and usage scenarios. However, they share strong characteristics and requirements, such as node cooperation to enable multi-hop forwarding and dynamic routing protocols to deliver packets. As a result of these similarities, protocols designed for all these wireless networks revolve around a common core of functionality, for example coping with link and node dynamics. They differ only in additional network-specific functionality, such as tree routing structures in sensornets, and parameterization, for example buffer sizes. This convergence of functionality and design goals, as well as the sheer number of proposed protocols in each network class, motivates the idea of applying protocols to more than just their one original class. However, network-layer protocols are usually developed for and tested in only one class of wireless network due to the lack of a platform that allows testing of protocols across different classes of networks. As a result, we unnecessarily constrain the range of settings and scenarios in which we test network protocols. In this article, we propose a platform for protocol testing and evaluation in multiple, heterogeneous networks and discuss the requirements and challenges of such a solution. As a first step and case study, we present the detailed architecture of TinyWifi, a platform for executing native sensornet protocols on Linux-driven wireless devices as found in wireless mesh and mobile ad-hoc networks (MANETs). TinyWifi builds on a nesC code base that abstracts from TinyOS and enables the execution of nesC-based protocols in Linux. Using this abstraction, we expand the applicability and means of protocol execution from one class of wireless network to another without re-implementation. We demonstrate the generality of TinyWifi by evaluating four well-established protocols on IEEE 802.11 and 802.15.4 based testbeds using a single implementation. Based on the experience of building TinyWifi and the presented evaluation, we deduce the feasibility of a cross-network evaluation platform and sketch the requirements for inclusion of further network classes.
---
paper_title: Practical Arduino: Cool Projects for Open Source Hardware
paper_content:
Create your own Arduino-based designs, gain in-depth knowledge of the architecture of Arduino, and learn the user-friendly Arduino language, all in the context of practical projects that you can build yourself at home. Get hands-on experience using a variety of projects and recipes for everything from home automation to test equipment. Arduino has taken off as an incredibly popular building block among ubicomp (ubiquitous computing) enthusiasts, robotics hobbyists, and DIY home automation developers. Authors Jonathan Oxer and Hugh Blemings provide detailed instructions for building a wide range of both practical and fun Arduino-related projects, covering areas such as hobbies, automotive, communications, home automation, and instrumentation. The book takes Arduino beyond "blink" to a wide variety of projects from simple to challenging; offers hands-on recipes for everything from home automation to interfacing with your car engine management system; and provides explanations of techniques and references to handy resources for ubiquitous computing projects. Supplementary material includes a circuit schematic reference, introductions to a range of electronic engineering principles, and general hints and tips; these combine with the projects themselves to make Practical Arduino: Cool Projects for Open Source Hardware an invaluable reference for Arduino users of all levels. Readers learn a wide variety of techniques that can be applied to their own projects, including communication with serial devices such as RFID readers, temperature sensors, and GPS modules; connecting Arduino to Ethernet and WiFi networks; adding synthesized speech to Arduino; linking Arduino to web services; decoding data streams from commercial wireless devices; and making DIY prototyping shields for only a couple of dollars. The book is aimed at hobbyists and developers interested in physical computing using a low-cost, easy-to-learn platform.
---
paper_title: FamiWare: a family of event-based middleware for ambient intelligence
paper_content:
Most of the middlewares currently available focus on one type of device (e.g., TinyOS sensors) and/or are designed with one requirement in mind (e.g., data management). This is an important limitation since most AmI applications work with several devices (such as sensors, smartphones or PDAs) and use a high diversity of low-level services. Ideally, the middleware should provide a single interface for accessing all those services while being able to work on heterogeneous devices. To address this issue, we propose a family of configurable middleware (FamiWare) with a really flexible architecture, instead of building a single version of a middleware with a rigid structure. In this work, we present the architecture of our middleware, which can be configured, following a Software Product Line approach, in order to be instantiated on a particular device fulfilling specific application requirements. Furthermore, we show that the decisions taken at the architecture and implementation levels are adequate for this kind of constrained device.
---
paper_title: A Security Framework for Smart Ubiquitous Industrial Resources
paper_content:
Conventional approaches to manage and control security seem to have reached their limits in new complex environments. These environments are open, dynamic, heterogeneous, distributed, self-managing, collaborative, international, nomadic, and ubiquitous. We are currently working on a middleware platform focused on industrial needs, UBIWARE. UBIWARE integrates Ubiquitous Computing with Semantic Web, Distributed AI, Security and Privacy, and Enterprise Application Integration. In this paper, we describe our long-term vision, SURPAS, for security and privacy management in complex multi-agent systems like UBIWARE. The security infrastructure has to become pervasive, interoperable and intelligent enough to naturally fit UBIWARE. SURPAS aims at policy-based optimal collecting, composing, configuring and provisioning of security measures. In particular, we analyze the security implications of UBIWARE and present the SURPAS research framework and the SURPAS abstract architecture.
---
paper_title: Self-Organised Middleware Architecture for the Internet-of-Things
paper_content:
Presently, middleware technologies abound for the Internet-of-Things (IoT), directed at hiding the complexity of underlying technologies and easing the use and management of IoT resources. The middleware solutions of today are capable technologies that provide advanced services and are built using superior architectural models; however, they fall short in some important aspects: existing middleware does not properly activate the link between diverse applications with very different monitoring purposes and the many disparate sensing networks that are heterogeneous in nature and geographically dispersed. Moreover, current middleware is unfit to provide a system-wide global arrangement (intelligence, routing, data delivery) that emerges from the behaviors of the constituent nodes rather than from the coordination of single elements, i.e. self-organization. This paper presents the SIMPLE self-organized and intelligent middleware platform. The SIMPLE middleware innovates over the current state of research precisely by exhibiting self-organization properties, a focus on data dissemination using multi-level subscription processing, and a tiered networking approach able to cope with many disparate, widespread and heterogeneous sensing networks (e.g. WSNs). In this way, the SIMPLE middleware is provided as a robust zero-configuration technology, with no dependence on a central system, immune to failures, and able to efficiently deliver the right data at the right time to the applications that need it.
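To make the idea of multi-level subscription processing in a data-dissemination middleware concrete, here is a deliberately tiny, hypothetical sketch (it is not SIMPLE's implementation): subscriptions are hierarchical topic filters, and a published sample is delivered to every subscriber whose filter matches a prefix of the topic path.

```python
# Toy illustration of multi-level subscription matching; all names are invented.
from collections import defaultdict

class TinyBroker:
    def __init__(self):
        self.subs = defaultdict(list)              # filter tuple -> list of callbacks

    def subscribe(self, topic_filter, callback):
        self.subs[tuple(topic_filter.split("/"))].append(callback)

    def publish(self, topic, value):
        path = tuple(topic.split("/"))
        for flt, callbacks in self.subs.items():
            if path[:len(flt)] == flt:             # prefix match = multi-level filter
                for cb in callbacks:
                    cb(topic, value)

broker = TinyBroker()
broker.subscribe("campus/bldg1", lambda t, v: print("building app:", t, v))
broker.subscribe("campus/bldg1/room3/temp", lambda t, v: print("room app:", t, v))
broker.publish("campus/bldg1/room3/temp", 21.5)    # delivered to both subscribers
```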
---
paper_title: UbiRoad: Semantic Middleware for Context-Aware Smart Road Environments
paper_content:
A smart road environment is a traffic environment equipped with all the facilities necessary to enable seamless mobile service provisioning to users. However, advanced sensors and network architectures deployed within the traffic environment are insufficient to make mobile service provisioning autonomous and proactive, thus minimizing drivers' distraction during their presence in the environment. For that, an Intelligent Transportation System, which operates on top of numerous sensor and access networks and governs the process of mobile service provisioning to users in a self-managed and proactive way, must be deployed. Specifically, such a system should provide solutions to the following two interoperability problems: interoperability between the in-car and roadside devices produced and programmed by different vendors and/or providers, and the need for seamless and flexible collaboration (including discovery, coordination, conflict resolution and possibly even negotiation) amongst the smart road devices and services. To tackle these problems, in this paper we propose the UbiRoad middleware, which is intended to utilize semantic languages and technologies for the declarative specification of devices' and services' behavior, to apply software agents as engines executing those specifications, and to establish common ontologies that facilitate and govern seamless interoperation of devices, services and humans. The main contribution of the paper is the requirements and architecture of complex traffic management systems, showing how such systems may benefit from the utilization of semantic and agent technologies.
---
paper_title: RFID Added Value Sensing Capabilities: European Advances in Integrated RFID-WSN Middleware
paper_content:
Radio frequency identification (RFID) is a key technology for Europe. Since the initial emergence of the technology, there has been a noticeable shift away from RFID pilot projects of the early days, towards a broad deployment of RFID in order to increase the efficiency and innovation of processes. Even though Europe is a leading player in the world of RFID technology, several challenges need to be addressed in order for RFID to reach its full potential. The ASPIRE project, funded under the FP7 ICT programme, is one of the coordinated European efforts to further the advancement of this technology in the area of enabling technology development for RFID. In particular, the focus of ASPIRE is on the design, development and adoption of an innovative, programmable, royalty-free, lightweight and privacy-friendly RFID middleware.
---
paper_title: Many task computing for orthologous genes identification in protozoan genomes using Hydra
paper_content:
One of the main advantages of using a scientific workflow management system (SWfMS) is to orchestrate data flows among scientific activities and register provenance of the whole workflow execution. Nevertheless, the execution control of distributed activities in high performance computing environments by SWfMS presents challenges such as steering control and provenance gathering. Such challenges may become a complex task to be accomplished in bioinformatics experiments, particularly in Many Task Computing scenarios. This paper presents a data parallelism solution for a bioinformatics experiment supported by Hydra, a middleware that bridges SWfMS and high performance computing to enable workflow parallelization with provenance gathering. Hydra Many Task Computing parallelization strategies can be registered and reused. Using Hydra, provenance may also be uniformly gathered. We have evaluated Hydra using an Orthologous Gene Identification workflow. Experimental results show that a systematic approach for distributing parallel activities is viable, sparing scientist time and diminishing operational errors, with the additional benefits of distributed provenance support. Copyright © 2011 John Wiley & Sons, Ltd. ::: ::: (The speed-up is based on comparisons with executions using one core in the cluster.)
---
paper_title: XGSN: An Open-source Semantic Sensing Middleware for the Web of Things
paper_content:
We present XGSN, an open-source system that relies on semantic representations of sensor metadata and observations, to guide the process of annotating and publishing sensor data on the Web. XGSN is able to handle the data acquisition process of a wide number of devices and protocols, and is designed as a highly extensible platform, leveraging on the existing capabilities of the Global Sensor Networks (GSN) middleware. Going beyond traditional sensor management systems, XGSN is capable of enriching virtual sensor descriptions with semantically annotated content using standard vocabularies. In the proposed approach, sensor data and observations are annotated using an ontology network based on the SSN ontology, providing a standardized queryable representation that makes it easier to share, discover, integrate and interpret the data. XGSN manages the annotation process for the incoming sensor observations, producing RDF streams that are sent to the cloud-enabled Linked Sensor Middleware, which can internally store the data or perform continuous query processing. The distributed nature of XGSN allows deploying different remote instances that can interchange observation data, so that virtual sensors can be aggregated and consume data from other remote virtual sensors. In this paper we show how this approach has been implemented in XGSN, and incorporated to the wider OpenIoT platform, providing a highly flexible and scalable system for managing the life-cycle of sensor data, from acquisition to publishing, in the context of the semantic Web of Things.
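As a rough illustration of the kind of semantic annotation XGSN automates, the fragment below builds one annotated observation with rdflib using SOSA terms from the W3C SSN ontology family. The example.org URIs are placeholders, and XGSN's own ontology network and output format may differ.

```python
# Illustrative only: one sensor observation annotated with SOSA/SSN-style terms.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/sensors/")      # placeholder namespace

g = Graph()
g.bind("sosa", SOSA)

obs = URIRef(EX["obs/42"])
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["tempSensor1"]))
g.add((obs, SOSA.observedProperty, EX["airTemperature"]))
g.add((obs, SOSA.hasSimpleResult, Literal(21.7, datatype=XSD.double)))
g.add((obs, SOSA.resultTime,
       Literal("2015-06-01T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))                # RDF stream element, Turtle view
```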
---
paper_title: A Survey of Middleware for Internet of Things
paper_content:
This paper provides a survey of middleware systems for the Internet of Things (IoT). The IoT is considered a part of the future internet and ubiquitous computing, and it creates a truly ubiquitous or smart environment. Middleware for the IoT acts as a bond joining the heterogeneous domains of applications communicating over heterogeneous interfaces. A comprehensive review of the existing middleware systems for the IoT is provided here to achieve a better understanding of the current gaps and future directions in this field. Fundamental functional blocks are proposed for this middleware system, and based on these a feature-wise classification is performed on the existing IoT middleware. Open issues are analyzed, and our vision of the research scope in this area is presented.
---
paper_title: SIXTH: A Middleware for Supporting Ubiquitous Sensing in Personal Health Monitoring
paper_content:
For an arbitrary event, a lack of the prevailing context compromises understanding. In health monitoring services, this may have serious repercussions. Yet many biomedical devices tend to exhibit a lack of openness and interoperability that reduces their potential as active nodes in broader healthcare information systems. One approach to addressing this deficiency rests in the realization of a middleware solution that is heterogeneous in a multiplicity of dimensions, whilst supporting dynamic reprogramming as the needs of patients change. This paper demonstrates how such functionality may be interwoven into a middleware solution, both from a design and implementation perspective.
---
paper_title: The Internet of Things: A survey
paper_content:
This paper addresses the Internet of Things. The main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that major issues still have to be faced by the research community. The most relevant among them are addressed in detail.
---
paper_title: Enabling a Mobile, Dynamic and Heterogeneous Discovery Service in a Sensor Web by Using AndroSIXTH
paper_content:
Achieving the vision of Ambient Intelligence, a world where devices adapt to and anticipate our needs without intervention, requires a device to connect to multiple sensors. One solution is to create a sensor web between sensors. This proves to be challenging due to the range of devices and the different application requirements, and is compounded by the fact that devices and their corresponding sensors can be mobile. Therefore a sensor web also requires the ability for heterogeneous sensors to be discovered dynamically. This paper seeks to address the challenge of discovery by demonstrating how this can be achieved using a lightweight discovery service developed for this paper. AndroSIXTH aims to improve the SIXTH network middleware with discovery services and extend its abilities to mobile networks. To illustrate the functionality of the AndroSIXTH discovery service and its importance to the creation of Ambient Intelligence applications, a case study is examined that demonstrates how, through a seamless discovery service, an Augmented Reality environment can be created and used for the maintenance and deployment of sensors in an Ambient Intelligence environment.
---
paper_title: Data parallelism in bioinformatics workflows using Hydra
paper_content:
Large scale bioinformatics experiments are usually composed of a set of data flows generated by a chain of activities (programs or services) that may be modeled as scientific workflows. Current Scientific Workflow Management Systems (SWfMS) are used to orchestrate these workflows to control and monitor the whole execution. It is very common in bioinformatics experiments to process very large datasets. Thus, data parallelism is a common approach used to increase performance and reduce overall execution time. However, most current SWfMS still lack support for parallel execution in high performance computing (HPC) environments. Additionally, keeping track of provenance data in distributed environments is still an open, yet important, problem. Recently, the Hydra middleware was proposed to bridge the gap between the SWfMS and the HPC environment by providing a transparent way for scientists to parallelize workflow executions while capturing distributed provenance. This paper analyzes data parallelism scenarios in the bioinformatics domain and presents an extension to the Hydra middleware through a specific cartridge that promotes data parallelism in bioinformatics workflows. Experimental results using workflows with BLAST show performance gains with the additional benefit of distributed provenance support.
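The following fragment sketches the data-parallel pattern described above in plain Python: the input set is split into chunks, the same activity runs on each chunk in parallel, and a simple provenance log records the results. It is only an illustration of the Many Task Computing style; it does not use Hydra's actual interfaces, and blast_chunk stands in for invoking a tool such as BLAST.

```python
# Illustrative data-parallel fan-out with a simple provenance log.
from concurrent.futures import ProcessPoolExecutor
import time

def blast_chunk(chunk_id, sequences):
    """Placeholder for running one activity (e.g. BLAST) on a data chunk."""
    time.sleep(0.1)                                        # simulate work
    return {"chunk": chunk_id, "n_sequences": len(sequences)}

def split(items, n_chunks):
    return [items[i::n_chunks] for i in range(n_chunks)]

if __name__ == "__main__":
    sequences = [f"seq{i}" for i in range(1000)]           # toy input set
    chunks = split(sequences, n_chunks=8)
    provenance = []                                        # which chunk produced what
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(blast_chunk, i, c) for i, c in enumerate(chunks)]
        for f in futures:
            provenance.append(f.result())                  # record each completed task
    print(provenance[:3])
```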
---
paper_title: Middleware for Wireless Sensor Networks: A Survey
paper_content:
Given the fast-growing technological progress in microelectronics and wireless communication devices, it is foreseeable that in the near future Wireless Sensor Networks (WSN) will make possible a wide range of applications. However, real-world integration and application development on such networks, composed of tiny, low-power and resource-limited devices, are not easy. Therefore, middleware services are a novel approach offering many possibilities and drastically enhancing application development on WSN. This survey shows the current state of research in this domain. It discusses middleware challenges in such networks and presents some representative middleware specifically designed for WSN. The selection of the studied methods tries to cover as many views of objectives and approaches as possible. We focus on discovering similarities and differences by making classifications, comparisons and appropriateness studies. At the end we argue that most of the proposed work is at an early stage and that there is still a long way to go before a middleware that fully meets the wide variety of WSN requirements is achieved.
---
paper_title: Coalitions and Incentives for Content Distribution over a Secure Peer-to-Peer Middleware
paper_content:
Nowadays, Peer-to-Peer is responsible for more than 60% of Internet traffic. These protocols have proved to save bandwidth and computing resources in content distribution systems. However, problems related to user behaviour, such as free riding, still persist, and users must be motivated to share content. In previous work, we designed and simulated a coalition and incentive theoretical mechanism for content distribution that aims to fight against problems in user behaviour. In this paper, we present a real implementation of it. Since developing a peer-to-peer application from scratch is a laborious and error-prone task, we use SMEPP, a middleware that aims to ease the development of secure distributed applications, to implement it.
---
paper_title: MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications
paper_content:
Mobile smartphones along with embedded sensors have become an efficient enabler for various mobile applications including opportunistic sensing. The hi-tech advances in smartphones are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data hence reducing the efforts in developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability towards real world applications. The results of evaluation and lessons learned are presented and discussed in this paper.
---
paper_title: Integrating WSN with web services for patient's record management using RFID
paper_content:
Web services support interoperability for collecting, storing, manipulating and retrieving data from heterogeneous environments. A Wireless Sensor Network is composed of resource-constrained devices, which are low cost, low power and small in size, and is used in various applications such as industrial control and monitoring, environmental sensing, health care, etc. The main intent is to design a middleware that hides the complexity of accessing the sensor network environment and to develop an application for sensor web enablement. This concept holds great importance because integrating wireless sensor networks into IP-based systems is still a challenging issue. It is very important to be able to retrieve a patient's details during an emergency. We therefore create a web service that manages patients' personal data with the help of Radio Frequency Identification (RFID) tags and is dedicated to collecting, storing, manipulating, and making available clinical information. Context-aware services are needed to search information more accurately and to produce more accurate output.
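The snippet below is a minimal, hypothetical sketch of the core idea: a REST endpoint that returns a patient's record keyed by the RFID tag ID read at the point of care. The route, field names and in-memory store are invented for illustration; a real deployment would add authentication, encryption and a proper clinical database.

```python
# Hypothetical patient-record lookup keyed by an RFID tag ID.
from flask import Flask, jsonify, abort

app = Flask(__name__)

PATIENTS = {                      # stand-in for the clinical data store
    "E200341201": {"name": "J. Doe", "blood_group": "O+", "allergies": ["penicillin"]},
}

@app.route("/patients/<tag_id>", methods=["GET"])
def get_patient(tag_id):
    record = PATIENTS.get(tag_id)
    if record is None:
        abort(404)                # unknown tag
    return jsonify(record)

if __name__ == "__main__":
    app.run(port=8080)
```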
---
paper_title: Correlation analysis of MQTT loss and delay according to QoS level
paper_content:
MQTT is an open protocol developed and released by IBM. To ensure the reliability of message transmission, MQTT supports three levels of QoS. In this paper, we analyze the MQTT message transmission process, which involves real wired/wireless publish clients, a broker server and subscribe clients. By transmitting messages at the three QoS levels with payloads of various sizes, we capture packets to analyze end-to-end delays and message loss.
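A small experiment in this spirit can be reproduced with the paho-mqtt client, publishing the same timestamped payload at QoS 0, 1 and 2 and logging the observed delay on arrival. The broker host and topic are assumptions, and the 1.x-style client construction is noted in the comments; this is a sketch of the measurement setup, not the paper's actual tooling.

```python
# Publish at QoS 0/1/2 and timestamp arrivals (paho-mqtt 1.x-style API;
# paho-mqtt 2.x additionally takes a CallbackAPIVersion argument to Client()).
import time
import paho.mqtt.client as mqtt

BROKER, TOPIC = "test.mosquitto.org", "demo/qos-test"   # assumptions

def on_message(client, userdata, msg):
    sent = float(msg.payload.decode())
    print(f"qos={msg.qos}  end-to-end delay ~ {time.time() - sent:.4f}s")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC, qos=2)        # request the highest level on the receive side
client.loop_start()

for qos in (0, 1, 2):
    client.publish(TOPIC, payload=str(time.time()), qos=qos)
    time.sleep(1)

client.loop_stop()
client.disconnect()
```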
---
paper_title: SOCRADES: A framework for developing intelligent systems in manufacturing
paper_content:
Nowadays, two main factors are driving the increased number of intelligent systems in manufacturing. The first is market-driven: demand is more and more variable and mass customization is the new way to compete in manufacturing, requiring feasible reconfigurable manufacturing systems. The second is technology-pushed: the increasing availability of high-performance, low-power electronic components and emerging wireless technologies is boosting the creation of intelligent systems (with their own intelligent perception, reasoning, etc.). However, research is still needed in the intelligent systems field. Given the complexity both of the topic and of its consequences for the manufacturing domain, a framework for addressing the overall field should be defined. We propose the framework adopted in SOCRADES, a European research project exploiting the service-oriented architecture paradigm both at the device and at the application level. Finally, the impacts of the SOCRADES architecture on manufacturing performance are described and motivated.
---
paper_title: Connecting mobile things to global sensor network middleware using system-generated wrappers
paper_content:
The Internet of Things (IoT) will create a cyber-physical world where all the things around us are connected to the Internet and sense and produce "big data" that has to be stored, processed and communicated with minimum human intervention. With the ever-increasing emergence of new sensors, interfaces and mobile devices, the grand challenge is to keep up with this race by developing software drivers and wrappers for IoT things. In this paper, we examine approaches that automate the process of developing middleware drivers/wrappers for IoT things. We propose the ASCM4GSN architecture to address this challenge efficiently and effectively. We demonstrate the proposed approach using the Global Sensor Network (GSN) middleware, which exemplifies a cluster of data streaming engines. The ASCM4GSN architecture significantly speeds up the wrapper development and sensor configuration process, as demonstrated for Android mobile phone based sensors as well as for Sun SPOT sensors.
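GSN wrappers are in fact Java classes, and ASCM4GSN generates them from sensor descriptions; purely to illustrate the underlying idea, the hypothetical Python fragment below derives a uniform, typed data stream from a declarative device description, so that adding a new sensor only requires a description rather than a hand-written driver.

```python
# Conceptual sketch only; all names here are hypothetical, not GSN's API.
from dataclasses import dataclass
from typing import Callable, Dict, Iterator

@dataclass
class SensorDescription:          # the "configuration" a wrapper generator would consume
    name: str
    fields: Dict[str, type]       # e.g. {"x": float, "y": float, "z": float}
    read: Callable[[], dict]      # device-specific acquisition function

def make_wrapper(desc: SensorDescription) -> Callable[[int], Iterator[dict]]:
    """Return a generic wrapper emitting typed records for any described device."""
    def stream(n_samples: int) -> Iterator[dict]:
        for _ in range(n_samples):
            raw = desc.read()
            yield {k: desc.fields[k](raw[k]) for k in desc.fields}  # coerce field types
    return stream

# Example: wrapping a fake phone accelerometer without writing a new driver.
phone = SensorDescription("android-accel", {"x": float, "y": float, "z": float},
                          read=lambda: {"x": "0.1", "y": "9.8", "z": "0.0"})
for record in make_wrapper(phone)(3):
    print(record)
```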
---
paper_title: ManySense: An Extensible and Accessible Middleware for Consumer-Oriented Heterogeneous Body Sensor Networks
paper_content:
Consumer-oriented wearable sensors such as smart watches are becoming popular, but each manufacturer uses its own data access mechanism. At the same time, the need for inferred context data is increasing in context-aware applications. A system is needed to provide a unified access to heterogeneous wearable devices for context-aware application developers. We propose ManySense—an Android-based middleware for heterogeneous consumer-oriented BSNs. Extensibility is achieved through adapter interfaces which allow sensors and context inferencing algorithms to be coupled with the middleware. Accessibility of the middleware allows third party applications to access raw sensor data and inferred context data uniformly. This paper provides two main contributions which are divided into several outcomes: (1) design and implementation of the ManySense BSN middleware that allows low-effort addition of new sensors and context inferencing algorithms through adapter interfaces, provides unified access to optionally filtered sensor data and inferred context data for third party applications, mediates control queries to sensor adapters and context inferencing adapters, and facilitates adapter development through an SDK and (2) evaluation of ManySense by comparing its performance with manual sensor data acquisition, analysis of ManySense’s extensibility through adapter interfaces, and analysis of ManySense’s accessibility from third party applications.
---
paper_title: Linked Data -- The story so far
paper_content:
The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions— the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
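The Linked Data principles summarised above can be illustrated with a few lines of rdflib: name a thing with an HTTP URI, attach useful RDF to it, and link it into another dataset. The example.org namespace is a placeholder; the DBpedia URI is a real resource used only as an illustrative outgoing link.

```python
# Minimal Linked Data illustration with rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/id/")           # placeholder namespace
g = Graph()

berlin = URIRef(EX["berlin"])                      # 1) an HTTP URI as a name
g.add((berlin, RDF.type, URIRef(EX["City"])))      # 2) useful RDF about it
g.add((berlin, RDFS.label, Literal("Berlin", lang="en")))
g.add((berlin, OWL.sameAs,                         # 3) a link into another dataset
       URIRef("http://dbpedia.org/resource/Berlin")))

print(g.serialize(format="turtle"))                # what a lookup of the URI would return
```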
---
paper_title: D2R Server – Publishing Relational Databases on the Semantic Web
paper_content:
D2R Server is a tool for publishing the content of relational databases on the Semantic Web. Database content is mapped to RDF by a declarative mapping which specifies how resources are identified and how property values are generated from database content. Based on this mapping, D2R Server allows Web agents to retrieve RDF and XHTML representations of resources and to query non-RDF databases using the SPARQL query language over the SPARQL protocol. The generated representations are richly interlinked on RDF and XHTML level in order to enable browsers and crawlers to navigate database content.
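A client-side sketch of querying such an endpoint with SPARQLWrapper is shown below. The endpoint URL (D2R Server conventionally listens on port 2020, but this is an assumption here), the FOAF vocabulary and the class queried all depend on the D2RQ mapping of the underlying database.

```python
# Query a (hypothetical) D2R Server SPARQL endpoint over the SPARQL protocol.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:2020/sparql")   # assumed endpoint URL
endpoint.setQuery("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?person ?name WHERE {
        ?person a foaf:Person ;
                foaf:name ?name .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["person"]["value"], "-", row["name"]["value"])
```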
---
paper_title: Scheduling with Quality of Service Requirements in Real-Time Energy Harvesting Sensors
paper_content:
This paper is concerned with the problem of periodic task scheduling in sensor nodes powered by energy harvesters. We address this issue by proposing two energy-aware scheduling algorithms, respectively called Green-RTO and Green-BWP. They aim to guarantee an acceptable Quality of Service (QoS) measured in terms of deadline success ratio.
---
paper_title: Energy Harvesting for Autonomous Wireless Sensor Networks
paper_content:
Wireless sensor nodes (WSNs) are employed today in many different application areas, ranging from health and lifestyle to automotive, smart building, predictive maintenance (e.g., of machines and infrastructure), and active RFID tags. Currently these devices have limited lifetimes, however, since they require significant operating power. The typical power requirements of some current portable devices, including a body sensor network, are shown in Figure 1.
---
paper_title: An improved magnetoelectric vibration energy harvester for wireless sensors
paper_content:
An energy harvester is presented to scavenge ambient vibration energy using Terfenol-D/PMN-PT/Terfenol-D laminate magnetoelectric transducers. The harvester uses eight magnets to form a magnetic circuit that produces a concentrated flux gradient, which enables the harvester to generate high power. The nonlinear vibration characteristics and the electrical-output performances of the harvester at resonance are analyzed. A prototype has been fabricated and tested. The experimental results are in agreement with the analytical results. The prototype can produce sufficient power to supply low-consumption wireless sensors used in the Internet of Things.
---
paper_title: Dynamic share energy provisioning service for one-hop multiple RFID tags identification system
paper_content:
Internet of Things applications using RFID sensors are challenging due to the limited capacity of batteries. Thus, energy-efficient control has become a more critical design issue as RFID sensors integrate complex tag identification processing techniques. Previous work on power-efficient control in multiple RFID tag identification systems often designs tag anti-collision protocols for the identification process, and seldom considers that tags are able to detect energy within radio range of each other. This paper is dedicated to developing a share energy provisioning (SEP) strategy for energy-limited multiple RFID tag identification systems. First, SEP can dynamically adapt to variable energy resources thanks to the cognitive radio technique. Second, SEP combines energy control with tag groups during the wait time T, by classifying tags into different groups according to their distances. Third, it introduces an optimization-theoretic energy analysis for the multiple RFID tag identification system, so as to minimize the time and energy it takes to send tag data to the reader. Finally, it shares the energy resource across different energy harvests in energy-limited RFID systems. Experimental results demonstrate the energy efficiency of the proposed approach.
---
paper_title: Quality-Driven Energy-Neutralized Power and Relay Selection for Smart Grid Wireless Multimedia Sensor Based IoTs
paper_content:
With the popularity of photovoltaic based green energy in smart grid systems, accurate information gathering becomes a critical issue in predicting microgrid power input. In this paper, we propose a new quality-optimized multimedia information gathering scheme, in the energy harvesting wireless sensor networks based Internet of things system, to provide the best-effort sky camera information accuracy for further predicting available photovoltaic power. In the proposed approach, the power control and relay node selection strategies are jointly optimized to achieve maximum sky camera image quality subject to the harvestable energy neutrality constraint. Simulation results show that the proposed scheme can improve multimedia data transmission quality by exploring adaptive transmission power and relay selection strategy.
---
paper_title: Movers and Shakers: Kinetic Energy Harvesting for the Internet of Things
paper_content:
Numerous energy harvesting wireless devices that will serve as building blocks for the Internet of Things (IoT) are currently under development. However, there is still only limited understanding of the properties of various energy sources and their impact on energy harvesting adaptive algorithms. Hence, we focus on characterizing the kinetic (motion) energy that can be harvested by a wireless node with an IoT form factor and on developing energy allocation algorithms for such nodes. In this paper, we describe methods for estimating harvested energy from acceleration traces. To characterize the energy availability associated with specific human activities (e.g., relaxing, walking, cycling), we analyze a motion dataset with over 40 participants. Based on acceleration measurements that we collected for over 200 hours, we study energy generation processes associated with day-long human routines. We also briefly summarize our experiments with moving objects. We develop energy allocation algorithms that take into account practical IoT node design considerations, and evaluate the algorithms using the collected measurements. Our observations provide insights into the design of motion energy harvesters, IoT nodes, and energy harvesting adaptive algorithms.
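For intuition only, the fragment below estimates harvestable energy from an acceleration trace with a textbook spring-mass-damper harvester model, taking the energy dissipated in an "electrical" damper as the harvested output. This is not the estimation method of the paper; the trace, proof mass, tuning and damping values are all invented.

```python
# Rough energy estimate from an acceleration trace (illustrative parameters).
import numpy as np

fs = 100.0                                   # sample rate of the trace [Hz] -- assumption
t = np.arange(0, 60, 1 / fs)
accel = 0.5 * np.sin(2 * np.pi * 2.0 * t)    # stand-in for a measured trace [m/s^2]

m, f0, zeta_m, zeta_e = 0.005, 2.0, 0.01, 0.01   # 5 g proof mass tuned to 2 Hz (invented)
w0 = 2 * np.pi * f0
c_m, c_e, k = 2 * m * w0 * zeta_m, 2 * m * w0 * zeta_e, m * w0 ** 2

x = v = energy = 0.0
dt = 1 / fs
for a in accel:
    # semi-implicit Euler: update velocity first, then position with the new velocity
    v += (-(c_m + c_e) * v - k * x - m * a) / m * dt
    x += v * dt
    energy += c_e * v * v * dt               # power in the electrical damper = c_e * v^2

print(f"estimated harvested energy over {t[-1]:.0f} s: {energy * 1e3:.2f} mJ "
      f"({energy / t[-1] * 1e6:.0f} uW average)")
```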
---
paper_title: Energy Harvesting Sensor Nodes: Survey and Implications
paper_content:
Sensor networks with battery-powered nodes can seldom simultaneously meet the design goals of lifetime, cost, sensing reliability and sensing and transmission coverage. Energy-harvesting, converting ambient energy to electrical energy, has emerged as an alternative to power sensor nodes. By exploiting recharge opportunities and tuning performance parameters based on current and expected energy levels, energy harvesting sensor nodes have the potential to address the conflicting design goals of lifetime and performance. This paper surveys various aspects of energy harvesting sensor systems: architecture, energy sources and storage technologies, and examples of harvesting-based nodes and applications. The study also discusses the implications of recharge opportunities on sensor node operation and design of sensor network solutions.
---
| Title: Software Toolkits: Practical Aspects of the Internet of Things—A Survey
Section 1: Introduction
Description 1: Introduce the concept of the Internet of Things (IoT), its history, and the motivation of the paper to provide practical guidance on utilizing software toolkits for IoT.
Section 2: Node Operating Systems
Description 2: Summarize various node operating systems for IoT including TinyOS, Contiki, Nut/OS, Mote Runner, Raspberry Pi, Arduino, and Android, along with their characteristics and applications.
Section 3: IoT Middleware
Description 3: Discuss different types of middleware, their importance, and how middleware addresses issues such as interoperability, scalability, and data management in IoT. Include examples such as Hydra, ASPIRE, UBISOAP, SOCRADES, and GSN.
Section 4: Data Storage via RDF
Description 4: Explain the use of the RDF data model for storing and managing semantic data in IoT, and describe relevant systems like the D2RQ Platform for accessing relational databases as virtual RDF graphs.
Section 5: IoT Energy Harvesting
Description 5: Explore various energy harvesting methods for the IoT, focusing on techniques for converting environmental energy to electrical energy and the challenges associated with energy efficiency and sustainability.
Section 6: Conclusions
Description 6: Summarize the importance of integrating multiple techniques to realize the IoT, reiterate the practical guidance provided by the survey, and emphasize the role of both industry and academia in advancing IoT technologies. |
Technological Aspects of E-Learning Readiness in Higher Education: A Review of the Literature | 3 | ---
paper_title: The role of readiness factors in E-learning outcomes: An empirical study
paper_content:
Although many researchers have studied different factors which affect E-Learning outcomes, there is little research on assessment of the intervening role of readiness factors in E-Learning outcomes. This study proposes a conceptual model to determine the role of readiness factors in the relationship between E-Learning factors and E-Learning outcomes. Readiness factors are divided into three main groups: technical, organizational and social. A questionnaire was completed by 96 respondents. The sample consists of teachers at Tehran high schools who are utilizing technology-based education. Hierarchical regression analysis is performed, and its results strongly support the appropriateness of the proposed model and show that the readiness factors variable plays a moderating role in the relationship between E-Learning factors and outcomes. Also, the latent moderated structuring (LMS) technique and the MPLUS3 software are used to determine each variable's ranking. Results show that organizational readiness factors have the most important effect on E-Learning outcomes. Also, teachers' motivation and training is the most important factor in E-Learning. Findings of this research will be helpful for both academics and practitioners of E-Learning systems.
---
paper_title: Critical success factors for e-learning in developing countries: A comparative analysis between ICT experts and faculty
paper_content:
This study identifies the critical success factors that influence the acceptance of e-learning systems in developing countries. E-learning is a popular mode of delivering educational materials in higher education by universities throughout the world. This study identifies multiple factors that influence the success of e-learning systems from the literature and compares the relative importance among two stakeholder groups in developing countries, ICT experts and faculty. This study collected 76 usable responses using the Delphi method and Analytic Hierarchy Process (AHP) approach. The results reveal 6 dimensions and 20 critical success factors for e-learning systems in developing countries. Findings illustrate the importance of curriculum design for learning performance. Technology awareness, motivation, and changing learners' behavior are prerequisites for successful e-learning implementations. Several recommendations are provided to aid the implementation of e-learning systems for developing countries which have relevance for researchers and practitioners. Limitations as well as possible research directions are also discussed.
---
paper_title: Higher education and national development : universities and societies in transition
paper_content:
Contributors Acknowledgements Introduction David Bridges and Terence McLaughlin PART ONE UNIVERSITIES, SOCIETIES AND TRANSITIONS: SETTING THE SCENE 1 Comparing and Transferring: Visions, Politics, and Universities Robert Cowen 2 Conceptions of the University and the Demands of Contemporary Societies Richard Smith PART TWO UNIVERSITIES AND TRANSITIONS IN CONCEPTIONS OF SOCIETY 3 The Development of Higher Education for the Knowledge Society and the Knowledge Economy Palmira Juceviciene and Rimantas Vaitkus 4 The Role of the University in the Development of the Learning Society Palmira Juceviciene 5 The Concept of the 'Intelligent Country' Robertas Jucevicius PART THREE UNIVERSITIES AND ECONOMIC DEVELOPMENT 6 Concepts of Development: the Role of Education Flavio Comim 7 The Role of the University in Regional Economic Development David Bridges 8 Regional Universities in the Baltic Sea Region: Higher Education and Regional Development Kazimierz Musial PART FOUR UNIVERSITIES AND THE DEMANDS OF THE ECONOMY 9 The Role of Higher Education in National Innovation Systems in Central and Eastern Europe Slavo Radoševic and Monika Kriaucioniene 10 Bridging Knowledge and Economy: Technology Transfer and Higher Education Arunas Lukoševicius 11 The Changing Requirements for Business Management and Business Education in the 'Countries in Transition' -- Combining Cultural and Institutional Perspectives Giedrius Jucevicius 12 Competence Development for the Knowledge-Driven Economy Daiva Lepaite 13 Concepts of a Service University Arild Tjeldvoll and Aukse Blaženaite PART FIVE UNIVERSITIES AND SOCIAL, CIVIC AND ETHICAL DEMANDS 14 Higher Education as an Agent of Social Innovation Brigita Janiunaite and Dalija Gudaityte 15 The Role of the University in Community Development: Responding to the Challenges of Globalization Irena Leliugiene and Viktorija Baršauskiene 16 Higher Education and its Contribution to Public Health: Tackling Health Inequalities Through Health Policy Development in Lithuania Vilius Grabauskas 17 Spirituality and Citizenship in Higher Education Hanan A. Alexander 18 Higher Education, Scientific Research and Social Change Sir Brian Heap PART SIX UNIVERSITIES, SOCIETIES AND TRANSITIONS IN PERSPECTIVE 19 The Audit and 'Embrace' of Quality in a Higher Education System Under Change Barbara Zamorski 20 Universities and Societies: Traditions, Transitions and Tensions Terence McLaughlin Index
---
paper_title: E-Learning in Malaysia: Success Factors in Implementing E-Learning Program
paper_content:
The main objective of this study was to identify success factors in implementing an e-learning program. The existing literature has identified several such factors, including program content, web page accessibility, learners' participation and involvement, web site security and support, institution commitment, interactive learning environment, instructor competency, and presentation and design. All these factors were tested together with other related criteria which are important for e-learning program implementation. The data were collected using quantitative methods, specifically self-administered questionnaires. All the criteria were tested to see whether they were important for e-learning program implementation.
---
paper_title: E-learning readiness assessment model: a case study of higher institutions of learning in Uganda
paper_content:
As a developing country, Uganda faces challenges in her efforts to achieve her goal of "education for all". E-learning has been suggested as an alternative approach that can overcome the challenges involved in reaching underserved students. It is therefore important for an institution to know whether it is ready for e-learning. This study examined the readiness for e-learning of Ugandan institutions of higher learning and proposed ways to encourage the use and development of e-learning systems aimed at uplifting the education standards of the country. Analysis of data collected from eight Ugandan universities revealed that awareness, culture, technology, pedagogy and content need to be considered in e-learning readiness assessment. These results, together with a review of existing models for e-learning readiness assessment, led to the development of a modified, layered model for e-learning readiness assessment, in which each layer corresponds to an attribute used for assessing institutional e-learning readiness.
---
paper_title: Presumptions and actions affecting an e-learning adoption by the educational system - Implementation using virtual private networks
paper_content:
In this paper we present a model of e-learning suitable for teacher training sessions. The main purpose of our work is to define the components of the educational system which influence the successful adoption of e-learning in the field of education. We also present the e-learning readiness factors mentioned in the available literature and classify them into the three major categories that constitute the components of every organization, and consequently of education. Finally, we present an implementation model of e-learning based on virtual private networks, which lends added value to the realization of e-learning.
---
paper_title: Measuring teachers' readiness for e-learning in higher education institutions associated with the subject of electricity in Turkey
paper_content:
Implementing e-learning in higher education institutions (HEIs) is influenced by various barriers and drivers. The majority of barriers are related to the challenging issue concerning the integration of e-learning into universities. Hence, it is deemed relevant to understand whether different stakeholders in HEIs tend to embrace or ostracize e-learning for their work. This study investigates the extent to which the HEIs associated with the science of electricity in Turkey are ready for e-learning. It also examines two factors that presumably affect the perceptions of academic staff on e-learning: first, the degree to which teachers believe that e-learning would be free of effort and would enhance their teaching; second, whether teachers need training on e-learning before embarking on it. To address these issues, a web-based survey was distributed to 417 programs in 360 HEIs in Turkey. More than 1206 active academic staff were invited to participate in the survey with 289 answering all the questions and 53 some of them. Descriptive and inferential statistics were computed. Overall, the findings indicate that the academic staff in the HEI associated with the subject of electricity in Turkey generally show positive experiences, confidences and attitudes towards e-learning. In spite of the fact that their readiness seems to be sufficient, their attitudes towards e-learning must be strengthened in order to facilitate effective adoption of e-learning.
---
paper_title: An eclectic model for assessing e-learning readiness in the Iranian universities
paper_content:
Information technologies have caused the accumulation and interaction of knowledge to be increasingly reshaped, with significant ramifications affecting the processes of acquisition, communication and dissemination of knowledge in almost all societies. In the meantime, assessing the capabilities of the educational system for the successful introduction and implementation of e-learning programs - namely, e-learning readiness - is of paramount importance if the goals of national higher education are to be achieved. To serve this purpose, this survey attempts to propose a proper framework for strengthening existing capabilities and identifying possible deficits. As such, the first part of the paper elaborates on an appropriate model developed for assessing the e-learning readiness of Iranian higher education institutions based on comparative studies as well as national experts' views. It is noteworthy that the proposed model has been objectively tailored in accordance with particular features and local characteristics of the country, and has been applied and tested against a real situation in one of the most prestigious national universities for complementary studies. Thus, it is assumed to be flexibly adaptable and advisable for practical application in assessing e-learning readiness in universities country-wide.
---
paper_title: Resistance to Change: The Rest of the Story
paper_content:
Prevailing views of resistance to change tell a one-sided story that favors change agents by proposing that resistance is an irrational and dysfunctional reaction located “over there” in change recipients. We tell the rest of the story by proposing that change agents contribute to the occurrence of resistance through their own actions and inactions and that resistance can be a resource for change. We conclude by proposing how resistance might be restructured.
---
paper_title: Measuring Readiness for e-Learning: Reflections from an Emerging Country
paper_content:
In order to benefit from e-learning, companies should conduct considerable up-front analysis to assess their readiness. There are a number of instruments on the market that can be used for assessing readiness for e-learning. However, almost all of these instruments are developed to be used in countries that have a mature field of human resources development. So, these instruments consist of terms, phrases, and applications that are meaningless for many companies, especially in emerging countries where the human resources development field has only recently begun to improve. This article includes the description of a survey instrument that has been developed to assess the e-learning readiness of companies in these kinds of countries, and the results of a study that examines the organizational readiness of companies for e-learning in Turkey. The study reveals that the companies surveyed are overall ready for e-learning, but they need to improve themselves, particularly in the area of human resources, in order to be able to successfully implement e-learning. Although this instrument has been developed according to the cultural characteristics of Turkish companies, it can easily be adapted for use by companies in other emerging countries.
---
paper_title: E-LEARNING READINESS ASSESSMENT MODEL IN KENYAS’ HIGHER EDUCATION INSTITUTIONS: A CASE STUDY OF UNIVERSITY OF NAIROBI
paper_content:
In order to benefit from eLearning, institutions should conduct considerable up-front analysis to assess their eLearning readiness. Studies show that numerous models have been developed; however, they are used in developed countries whose eReadiness is high and are hence not applicable in developing countries. This paper presents a model that has been developed to assess the eLearning readiness of lecturers from institutions of higher learning in Kenya. It investigates the eLearning readiness of lecturers from the University of Nairobi; the objective was to carry out a diagnostic eLearning readiness assessment of lecturers and determine the factors that influence eLearning readiness. Questionnaires were administered to the lecturers. The results obtained indicate that an overwhelming majority are ready. In addition, the study results show that age, gender, and level of education have no significant relationship with eLearning readiness. The study results indicate that technological readiness is the most important factor, followed by culture readiness. Most of the lecturers felt that more training on content development needs to be conducted. In conclusion, the lecturers are ready for eLearning, but the ICT infrastructure is not adequate to support the use of eLearning.
---
paper_title: The Comparison of Different E-Readiness Assessment Tools
paper_content:
Digital integration, with its information technology (IT) infrastructure and its applications of e-government, e-commerce, e-learning, and other e-applications, is becoming increasingly important as a vital tool for development, both nationally and internationally. During the last decade, many leaders in government, business, and social organizations around the globe have considered how best to harness the power of IT for development. E-readiness assessments are meant to guide development efforts by providing suitable tools for comparison and for gauging progress. Several e-readiness initiatives have been launched to help countries in this area, and numerous e-readiness assessment tools have been created and used by different groups, each looking at various aspects of IT, society, and the economy. This paper is concerned with the comparison of e-readiness assessment tools. In the first section, the e-readiness definition, background and importance will be reviewed. In the next section the focus will be on some of the existing e-readiness tools, such as the MI, the EIU, the UNCTAD, the TAI, the GDI, the NRI and the KAM. At the end of this section, a comparison among the mentioned tools will be presented. Finally, a suitable tool for comparing e-readiness in developing countries will be proposed.
---
paper_title: Evaluation of Infrastructure for E-Learning System in AOU-Bahrain Branch
paper_content:
E-learning web sites have become a mission-critical component of the Arab Open University (AOU) as more and more distance learning has come to rely on them. This paper presents a guideline to measure and evaluate the performance of an e-learning system that supports real-time communication in a campus environment. The overall e-learning system performance is measured by network performance, application server performance, database server performance and website tools performance, based on the number of accesses by students and faculty. This research work creates a series of tables as an index of infrastructure health. Apart from that, two online questionnaires, oriented to staff and students, have been conducted to measure and evaluate the overall e-learning system performance; based on their outcomes, the overall performance of the e-learning system has been evaluated.
---
paper_title: ERA - E-Learning Readiness Analysis: A eHealth Case Study of E-Learning Readiness
paper_content:
Electronic learning is seen as a good solution for organisations that deal with fast-changing knowledge and for reducing the cost of training. E-learning is a good opportunity for companies but needs to be well prepared, because it often entails high investment costs. That is why it is important for a company to know whether it is e-ready. E-readiness is already well covered in the literature and several models have been suggested. We used these models to develop an e-learning readiness measurement instrument and questionnaire, and we used this instrument to check whether Flemish hospitals were ready for e-learning.
---
| Title: Technological Aspects of E-Learning Readiness in Higher Education: A Review of the Literature
Section 1: E-Learning Readiness Models and Frameworks: A Review
Description 1: This section presents a review of the various models on e-learning readiness in the literature.
Section 2: Discussion
Description 2: This section discusses the findings from the reviewed e-learning readiness models and frameworks.
Section 3: Conclusion
Description 3: This section presents the conclusion and the future directions for the research. |
Security Requirements for the Rest of Us: A Survey | 11 | ---
paper_title: Engineering Security Requirements
paper_content:
Most requirements engineers are poorly trained to elicit, analyze, and specify security requirements, often confusing them with the architectural security mechanisms that are traditionally used to fulfill them. They thus end up specifying architecture and design constraints rather than true security requirements. This article defines the different types of security requirements and provides associated examples and guidelines with the intent of enabling requirements engineers to adequately specify security requirements without unnecessarily constraining the security and architecture teams from using the most appropriate security mechanisms for the job.
---
paper_title: When Security meets Software Engineering: A Case of Modelling Secure Information Systems. Information Systems
paper_content:
Although security is a crucial issue for information systems, traditionally, it is considered after the definition of the system. This approach often leads to problems, which most of the time translate into security vulnerabilities. From the viewpoint of the traditional security paradigm, it should be possible to eliminate such problems through better integration of security and software engineering. This paper firstly argues for the need to develop a methodology that considers security as an integral part of the whole system development process, and secondly it contributes to the current state of the art by proposing an approach that considers security concerns as an integral part of the entire system development process and by relating this approach to existing work. The different stages of the approach are described with the aid of a real-life case study: a health and social care information system.
---
paper_title: Security Requirements Engineering: A Framework for Representation and Analysis
paper_content:
This paper presents a framework for security requirements elicitation and analysis. The framework is based on constructing a context for the system, representing security requirements as constraints, and developing satisfaction arguments for the security requirements. The system context is described using a problem-oriented notation, then is validated against the security requirements through construction of a satisfaction argument. The satisfaction argument consists of two parts: a formal argument that the system can meet its security requirements and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems. We evaluate the framework by applying it to a security requirements analysis within an air traffic control technology evaluation project.
---
paper_title: The affordable application of formal methods to software engineering
paper_content:
The purpose of this research paper is to examine (1) why formal methods are required for software systems today; (2) the Praxis High Integrity Systems' Correctness-by-Construction methodology; and (3) an affordable application of a formal methods methodology to software engineering. The research for this paper included literature reviews of documents found across the Internet and in publications as well as reviews of conference proceedings including the 2004 High Confidence Software and Systems Conference and the 2004 Special Interest Group on Ada Conference. This research found that (1) our reliance on software systems for national, business and personal critical processes outweighs the trust we have in our systems; (2) there is a growing demand for the ability to trust our software systems; (3) methodologies such as Praxis' Correctness-by-Construction are readily available and can provide this needed level of trust; (4) tools such as Praxis' SparkAda when appropriately applied can be an affordable approach to applying formal methods to a software system development process; (5) software users have a responsibility to demand correctness; and finally, (6) software engineers have the responsibility to provide this correctness. Further research is necessary to determine what other methodologies and tools are available to provide affordable approaches to applying formal methods to software engineering. In conclusion, formal methods provide an unprecedented ability to build trust in the correctness of a system or component. Through the development of methodologies such as Praxis' Correctness by Construction and tools such as SparkAda, it is becoming ever more cost advantageous to implement formal methods within the software engineering lifecycle. As the criticality of our IT systems continues to steadily increase, so must our trust that these systems will perform as expected. Software system clients, such as government, businesses and all other IT users, must demand that their IT systems be delivered with a proven level of correctness or trust commensurate to the criticality of the function they perform.
---
paper_title: Engineering Security Requirements
paper_content:
Most requirements engineers are poorly trained to elicit, analyze, and specify security requirements, often confusing them with the architectural security mechanisms that are traditionally used to fulfill them. They thus end up specifying architecture and design constraints rather than true security requirements. This article defines the different types of security requirements and provides associated examples and guidelines with the intent of enabling requirements engineers to adequately specify security requirements without unnecessarily constraining the security and architecture teams from using the most appropriate security mechanisms for the job.
---
paper_title: Security Requirements Engineering: A Framework for Representation and Analysis
paper_content:
This paper presents a framework for security requirements elicitation and analysis. The framework is based on constructing a context for the system, representing security requirements as constraints, and developing satisfaction arguments for the security requirements. The system context is described using a problem-oriented notation, then is validated against the security requirements through construction of a satisfaction argument. The satisfaction argument consists of two parts: a formal argument that the system can meet its security requirements and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems. We evaluate the framework by applying it to a security requirements analysis within an air traffic control technology evaluation project.
---
paper_title: Extending XP practices to support security requirements engineering
paper_content:
This paper proposes a way of extending eXtreme Programming (XP) practices, in particular the original planning game and the coding guidelines, to aid the developers and the customer to engineer security requirements while maintaining the iterative and rapid feedback-driven nature of XP. More specifically, these steps result in two new security-specific flavours of XP User stories: Abuser stories (threat scenarios) and Security-related User stories (security functionalities). The introduced extensions also aid in formulating security-specific coding and design standards to be used in the project, as well as in understanding the need for supporting specific Security-related User stories by the system. The proposed extensions have been tested in a student project.
---
paper_title: Software Security: Building Security In
paper_content:
Summary form only given. Software security has come a long way in the last few years, but we've really only just begun. I will present a detailed approach to getting past theory and putting software security into practice. The three pillars of software security are applied risk management, software security best practices (which I call touchpoints), and knowledge. By describing a manageably small set of touchpoints based around the software artifacts that you already produce, I avoid religious warfare over process and get on with the business of software security. That means you can adopt the touchpoints without radically changing the way you work. The touchpoints I will describe include: code review using static analysis tools; architectural risk analysis; penetration testing; security testing; abuse case development; and security requirements. Like the yin and the yang, software security requires a careful balance: attack and defense, exploiting and designing, breaking and building, all bound into a coherent package. Create your own Security Development Lifecycle by enhancing your existing software development lifecycle with the touchpoints.
---
paper_title: Demystifying the threat modeling process
paper_content:
In today's hostile online environment, software must be designed to withstand malicious attacks of all kinds. Unfortunately, even security-conscious products can fall prey when designers fail to understand the threats their software faces or the ways in which adversaries might try to attack it. To better understand a product's threat environment and defend against potential attacks, Microsoft uses threat modeling, which should be treated like any other part of the design and specification process. In fact, singling it out as a special activity performed outside the normal design process actually detracts from its importance to the overall development life cycle. We must consider security needs throughout the design process, just as we do with performance, usability, localizability, serviceability, or any other facet.
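As a concrete illustration of the per-element enumeration that threat modeling encourages, the following Python sketch pairs each element of a toy data-flow model with the STRIDE categories commonly considered relevant to its type. It is illustrative only: the elements are invented, and the category-to-element mapping follows a common convention rather than anything prescribed by this paper.

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Categories conventionally considered per element type (an assumption of
# this sketch, not a rule from the paper).
RELEVANT = {
    "process": "STRIDE",
    "data_flow": "TID",
    "data_store": "TRID",
    "external_entity": "SR",
}

# Hypothetical elements of a small web application's data-flow diagram.
elements = [
    ("browser", "external_entity"),
    ("web app", "process"),
    ("login request", "data_flow"),
    ("user database", "data_store"),
]

def enumerate_threats(elements):
    # Yield (element, threat category) pairs to be analysed one by one.
    for name, kind in elements:
        for letter in RELEVANT[kind]:
            yield name, STRIDE[letter]

for name, threat in enumerate_threats(elements):
    print(name, "->", threat)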
---
paper_title: Elaborating security requirements by construction of intentional anti-models
paper_content:
Caring for security at requirements engineering time is a message that has finally received some attention recently. However, it is not yet very clear how to achieve this systematically through the various stages of the requirements engineering process. The paper presents a constructive approach to the modeling, specification and analysis of application-specific security requirements. The method is based on a goal-oriented framework for generating and resolving obstacles to goal satisfaction. The extended framework addresses malicious obstacles (called anti-goals) set up by attackers to threaten security goals. Threat trees are built systematically through anti-goal refinement until leaf nodes are derived that are either software vulnerabilities observable by the attacker or anti-requirements implementable by this attacker. New security requirements are then obtained as countermeasures by application of threat resolution operators to the specification of the anti-requirements and vulnerabilities revealed by the analysis. The paper also introduces formal epistemic specification constructs and patterns that may be used to support a formal derivation and analysis process. The method is illustrated on a Web-based banking system for which subtle attacks have been reported recently.
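The threat trees built by anti-goal refinement are, at their core, AND/OR trees over attacker capabilities and vulnerabilities. The Python sketch below shows such a tree and its evaluation; it illustrates only the data structure, not the paper's formal refinement calculus, and the banking scenario is invented.

# AND/OR threat-tree evaluation (illustrative sketch).
class Node:
    def __init__(self, label, gate=None, children=None, feasible=False):
        self.label = label
        self.gate = gate               # "AND", "OR", or None for a leaf
        self.children = children or []
        self.feasible = feasible       # only meaningful for leaves

    def evaluate(self):
        if not self.children:
            return self.feasible
        results = [child.evaluate() for child in self.children]
        return all(results) if self.gate == "AND" else any(results)

# Hypothetical anti-goal for a web-banking example.
steal_credentials = Node(
    "Attacker obtains customer credentials", "OR",
    [
        Node("Phish the customer", "AND", [
            Node("Send spoofed e-mail", feasible=True),
            Node("Customer follows link", feasible=True),
        ]),
        Node("Break server-side password store", "AND", [
            Node("Gain shell on server", feasible=False),
            Node("Crack password hashes", feasible=True),
        ]),
    ],
)

print(steal_credentials.evaluate())    # True: the phishing branch succeeds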
---
paper_title: The Trustworthy Computing Security Development Lifecycle
paper_content:
This paper discusses the trustworthy computing security development lifecycle (or simply the SDL), a process that Microsoft has adopted for the development of software that needs to withstand malicious attack. The process encompasses the addition of a series of security-focused activities and deliverables to each of the phases of Microsoft's software development process. These activities and deliverables include the development of threat models during software design, the use of static analysis code-scanning tools during implementation, and the conduct of code reviews and security testing during a focused "security push". Before software subject to the SDL can be released, it must undergo a final security review by a team independent from its development group. When compared to software that has not been subject to the SDL, software that has undergone the SDL has experienced a significantly reduced rate of external discovery of security vulnerabilities. This paper describes the SDL and discusses experience with its implementation across a range of Microsoft software.
---
paper_title: Information modeling for automated risk analysis
paper_content:
Systematic security risk analysis requires an information model which integrates the system design, the security environment (the attackers, security goals etc) and proposed security requirements. Such a model must be scalable to accommodate large systems, and support the efficient discovery of threat paths and the production of risk-based metrics; the modeling approach must balance complexity, scalability and expressiveness. This paper describes such a model; novel features include combining formal information modeling with informal requirements traceability to support the specification of security requirements on incompletely specified services, and the typing of information flow to quantify path exploitability and model communications security.
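Stripped of the typing and requirements traceability the paper adds, discovering threat paths reduces to path search over a graph of the system. A minimal Python sketch over an invented topology (not the paper's information model):

from collections import deque

# Invented system topology: node -> reachable nodes.
edges = {
    "internet": ["web_server"],
    "web_server": ["app_server"],
    "app_server": ["customer_db", "audit_log"],
    "admin_laptop": ["app_server"],
}

def threat_paths(graph, entry_point, asset):
    # Enumerate all simple paths from an attacker entry point to an asset.
    paths, queue = [], deque([[entry_point]])
    while queue:
        path = queue.popleft()
        if path[-1] == asset:
            paths.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:        # keep paths simple (no cycles)
                queue.append(path + [nxt])
    return paths

for path in threat_paths(edges, "internet", "customer_db"):
    print(" -> ".join(path))
# internet -> web_server -> app_server -> customer_db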
---
paper_title: Using abuse case models for security requirements analysis
paper_content:
The relationships between the work products of a security engineering process can be hard to understand, even for persons with a strong technical background but little knowledge of security engineering. Market forces are driving software practitioners who are not security specialists to develop software that requires security features. When these practitioners develop software solutions without appropriate security-specific processes and models, they sometimes fail to produce effective solutions. We have adapted a proven object oriented modeling technique, use cases, to capture and analyze security requirements in a simple way. We call the adaptation an abuse case model. Its relationship to other security engineering work products is relatively simple, from a user perspective.
---
paper_title: Extending XP practices to support security requirements engineering
paper_content:
This paper proposes a way of extending eXtreme Programming (XP) practices, in particular the original planning game and the coding guidelines, to aid the developers and the customer to engineer security requirements while maintaining the iterative and rapid feedback-driven nature of XP. More specifically, these steps result in two new security-specific flavours of XP User stories: Abuser stories (threat scenarios) and Security-related User stories (security functionalities). The introduced extensions also aid in formulating security-specific coding and design standards to be used in the project, as well as in understanding the need for supporting specific Security-related User stories by the system. The proposed extensions have been tested in a student project.
---
paper_title: An extended misuse case notation: Including vulnerabilities and the insider threat
paper_content:
Access control is a key feature of healthcare information systems. Access control is about enforcing rules to ensure that only authorized users get access to resources in a system. In healthcare systems this means protecting patient privacy. However, the top priority is always to provide the best possible care for a patient. This depends on the clinicians having access to the information they need to make the best, most informed, care decisions. Care processes are often unpredictable and hard to map to strict access control rules. As a result, in emergency or otherwise unexpected situations, clinicians need to be able to bypass access control. In a crisis, availability of information takes precedence over privacy concerns. This duality of concerns is what makes access control in healthcare systems so challenging and interesting as a research subject. To create access control models for healthcare we need to understand how healthcare works. Before creating a model we need to understand the requirements the model should fulfill. Though many access control models have been proposed and argued to be suitable for healthcare, little work has been published on access control requirements for healthcare. This PhD project has focused on bridging the gap between formalized models and real world requirements for access control in healthcare by targeting the following research goals: RG1) To collect knowledge that forms a foundation for access control requirements in healthcare systems. RG2) To create improved access control models for healthcare systems based on real requirements. This PhD project has consisted of a number of smaller, distinct, but related projects to reach the research goals. The main contributions can be summarized as: C1) Requirements for access control in healthcare: studies performed on audit data, in workshops, by observation and interviews have helped discover requirements. Results from this work include methods for access control requirements elicitation in addition to the actual requirements discovered. C2) Process-based access control: the main conclusion from the requirements work is that access control should be tailored to care processes. Care processes are highly dynamic and often unpredictable, and access control needs to adapt to this. This thesis suggests how existing sources of process information, both explicit and implicit, may be used for this purpose. C3) Personally controlled health records (PCHR): this thesis explores the consequences of making the patient the administrator of access control and proposes a model based on these initial requirements. From a performed usability study it is clear that the main challenge is how to keep the patient informed about the consequences of sharing.
---
paper_title: Dealing with Security Requirements During the Development of Information Systems
paper_content:
A growing concern for information systems (ISs) is their quality, such as security, accuracy, user-friendliness and performance. Although the quality of an IS is determined largely by the development process, relatively little attention has been paid to the methodology for achieving high quality. A recent proposal [32] takes a process-oriented approach to representing non-functional, or quality, requirements (NFRs) as potentially conflicting or harmonious goals and using them during the development of software systems. By treating security requirements as a class of NFRs, this paper applies this process-oriented approach to designing secure ISs. This involves identification and representation of various types of security requirements (as goals), generic design knowledge and goal interactions. This treatment allows reusing generic design knowledge, detecting goal interactions, capturing and reasoning about design rationale, and assessing the degree of goal achievement. Security requirements serve as a class of criteria for selecting among design decisions, and justify the overall design. This paper also describes a prototype design tool, and illustrates it using a credit card system example.
---
paper_title: Eliciting security requirements with misuse cases
paper_content:
Use case diagrams (I. Jacobson et al., 1992) have proven quite helpful in requirements engineering, both for eliciting requirements and getting a better overview of requirements already stated. However, not all kinds of requirements are equally well supported by use case diagrams. They are good for functional requirements, but poorer at, e.g., security requirements, which often concentrate on what should not happen in the system. With the advent of e- and m-commerce applications, security requirements are growing in importance, also for quite simple applications where a short lead time is important. Thus, it would be interesting to look into the possibility of applying use cases in this arena. The paper suggests how this can be done, extending the diagrams with misuse cases. This new construct makes it possible to represent actions that the system should prevent, together with those actions which it should support.
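A minimal in-memory representation of such a model is sketched below in Python; the "threatens" and "mitigates" relation names follow the misuse-case literature, while the concrete cases are invented. Misuse cases that no use case mitigates are natural candidates for new security requirements.

use_cases = {"place order", "pay by card", "enforce account lockout"}
misuse_cases = {"steal card number", "guess password"}

threatens = {                       # misuse case -> use cases it threatens
    "steal card number": {"pay by card"},
    "guess password": {"place order", "pay by card"},
}
mitigates = {                       # use case -> misuse cases it mitigates
    "enforce account lockout": {"guess password"},
}

def unmitigated(misuse_cases, mitigates):
    covered = set().union(*mitigates.values()) if mitigates else set()
    return misuse_cases - covered

print(unmitigated(misuse_cases, mitigates))
# {'steal card number'} -> a candidate for a new security requirement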
---
paper_title: Security Use Cases
paper_content:
Although use cases are a popular modeling approach for engineering functional requirements, they are often misused when it comes to engineering security requirements because requirements engineers unnecessarily specify security architectural mechanisms instead of security requirements. After discussing the relationships between misuse cases, security use cases, and security mechanisms, this column provides examples and guidelines for properly specifying essential (i.e., requirements-level) security use cases.
---
paper_title: Dealing with non-functional requirements: three experimental studies of a process-oriented approach
paper_content:
Quality characteristics are vital for the success of software systems. To remedy the problems inherent in ad hoc development, a framework has been developed to deal with non-functional requirements (quality requirements or NFRs). Taking the premise that the quality of a product depends on the quality of the process that leads from high-level NFRs to the product, the framework's objectives are to represent NFR-specific requirements, consider design tradeoffs, relate design decisions to NFRs, justify the decisions, and assist defect detection. The purpose of this paper is to give an initial evaluation of the extent to which the framework's objectives are met. Three small portions of information systems were studied by the authors using the framework. The framework and empirical studies are evaluated herein, both from the viewpoint of domain experts who have reviewed the framework and studies, and ourselves as framework developers and users. The systems studied have a variety of characteristics, reflecting a variety of real application domains, and the studies deal with three important classes of NFRs for systems, namely, accuracy, security, and performance. The studies provide preliminary support for the usefulness of certain aspects of the framework, while raising some open issues.
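A toy version of the framework's goal-graph evaluation is sketched below in Python. It is a simplification: the NFR framework's label catalogue and propagation rules are considerably richer, and the decisions and softgoals here are invented. In this sketch a softgoal counts as satisficed only if it receives at least one positive contribution and no negative one.

contributions = [
    # (design decision,        softgoal,                 sign)
    ("encrypt card numbers",   "security[card data]",    "+"),
    ("cache card numbers",     "performance[checkout]",  "+"),
    ("cache card numbers",     "security[card data]",    "-"),
]

def evaluate(contributions):
    status = {}
    for _, goal, sign in contributions:
        plus, minus = status.get(goal, (False, False))
        status[goal] = (plus or sign == "+", minus or sign == "-")
    return {goal: ("satisficed" if plus and not minus else "at risk")
            for goal, (plus, minus) in status.items()}

for goal, verdict in evaluate(contributions).items():
    print(goal, "->", verdict)
# security[card data] -> at risk   (the two decisions conflict)
# performance[checkout] -> satisficed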
---
paper_title: Towards agile security in web applications
paper_content:
In this paper, we present an approach that we have used to address security when running projects according to agile principles. Misuse stories have been added to user stories to capture malicious use of the application. Furthermore, misuse stories have been implemented as automated tests (unit tests, acceptance tests) in order to perform security regression testing. Penetration testing, system hardening and securing deployment have been started in early iterations of the project.
---
paper_title: Security Requirements Engineering: A Framework for Representation and Analysis
paper_content:
This paper presents a framework for security requirements elicitation and analysis. The framework is based on constructing a context for the system, representing security requirements as constraints, and developing satisfaction arguments for the security requirements. The system context is described using a problem-oriented notation, then is validated against the security requirements through construction of a satisfaction argument. The satisfaction argument consists of two parts: a formal argument that the system can meet its security requirements and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems. We evaluate the framework by applying it to a security requirements analysis within an air traffic control technology evaluation project.
---
paper_title: Security Requirements Engineering: A Framework for Representation and Analysis
paper_content:
This paper presents a framework for security requirements elicitation and analysis. The framework is based on constructing a context for the system, representing security requirements as constraints, and developing satisfaction arguments for the security requirements. The system context is described using a problem-oriented notation, then is validated against the security requirements through construction of a satisfaction argument. The satisfaction argument consists of two parts: a formal argument that the system can meet its security requirements and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems. We evaluate the framework by applying it to a security requirements analysis within an air traffic control technology evaluation project.
---
paper_title: Demystifying the threat modeling process
paper_content:
In today's hostile online environment, software must be designed to withstand malicious attacks of all kinds. Unfortunately, even security-conscious products can fall prey when designers fail to understand the threats their software faces or the ways in which adversaries might try to attack it. To better understand a product's threat environment and defend against potential attacks, Microsoft uses threat modeling, which should be treated like any other part of the design and specification process. In fact, singling it out as a special activity performed outside the normal design process actually detracts from its importance to the overall development life cycle. We must consider security needs throughout the design process, just as we do with performance, usability, localizability, serviceability, or any other facet.
---
paper_title: Engineering Security Requirements
paper_content:
Most requirements engineers are poorly trained to elicit, analyze, and specify security requirements, often confusing them with the architectural security mechanisms that are traditionally used to fulfill them. They thus end up specifying architecture and design constraints rather than true security requirements. This article defines the different types of security requirements and provides associated examples and guidelines with the intent of enabling requirements engineers to adequately specify security requirements without unnecessarily constraining the security and architecture teams from using the most appropriate security mechanisms for the job.
---
paper_title: Security Requirements Engineering: A Framework for Representation and Analysis
paper_content:
This paper presents a framework for security requirements elicitation and analysis. The framework is based on constructing a context for the system, representing security requirements as constraints, and developing satisfaction arguments for the security requirements. The system context is described using a problem-oriented notation, then is validated against the security requirements through construction of a satisfaction argument. The satisfaction argument consists of two parts: a formal argument that the system can meet its security requirements and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems. We evaluate the framework by applying it to a security requirements analysis within an air traffic control technology evaluation project.
---
| Title: Security Requirements for the Rest of Us: A Survey
Section 1: Introduction
Description 1: Introduce the importance of security requirements in software development and provide a brief overview of the survey's purpose.
Section 2: Security Requirements in the Literature
Description 2: Summarize the findings and claims from existing literature regarding security requirements and their elicitation.
Section 3: How to Proceed
Description 3: Discuss methods and steps proposed by various authors and approaches for eliciting security requirements, focusing on practical techniques for developers.
Section 4: Security Requirements Artifacts
Description 4: Describe the different artifacts produced during the process of eliciting security requirements, such as misuse cases, attack trees, and softgoal interdependency graphs.
Section 5: Comparisons
Description 5: Compare different approaches to security requirements elicitation and discuss the reasons for the differences in these approaches.
Section 6: Proposed Approach
Description 6: Outline a new, lightweight method for eliciting security requirements that is suitable for average software developers.
Section 7: Security Objectives
Description 7: Discuss the identification of high-level security objectives and the steps developers need to take to understand and identify these objectives.
Section 8: Asset Identification
Description 8: Explain how to identify and prioritize assets that need protection in the system.
Section 9: Threat Analysis
Description 9: Recommend methods for analyzing threats to the identified assets, using categories like STRIDE and techniques like attack trees.
Section 10: Documentation of Security Requirements
Description 10: Provide guidance on how to document security requirements effectively, ensuring visibility and traceability.
Section 11: Does Lightweight Equal Worthless?
Description 11: Address the potential limitations of a lightweight approach and advocate for its practicality in improving security in software development projects.
Section 12: About the Authors
Description 12: Provide brief biographies of the authors, highlighting their research interests and contact information. |
Survey on Revocation in Ciphertext-Policy Attribute-Based Encryption | 10 | ---
paper_title: Attribute-based encryption for fine-grained access control of encrypted data
paper_content:
As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys, which subsumes Hierarchical Identity-Based Encryption (HIBE).
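The combinatorial step at the heart of any ABE scheme, stripped of all cryptography, is checking whether a set of attributes satisfies a monotone access structure. The Python sketch below represents the structure as a tree of threshold gates (AND is n-of-n, OR is 1-of-n); the policy and attribute sets are invented for the example.

def satisfies(node, attributes):
    if isinstance(node, str):                  # leaf: a single attribute
        return node in attributes
    threshold, children = node                 # gate: (k, [subtrees])
    return sum(satisfies(child, attributes) for child in children) >= threshold

# Policy: ("audit division" AND "manager") OR "chief security officer"
policy = (1, [
    (2, ["audit division", "manager"]),
    "chief security officer",
])

print(satisfies(policy, {"audit division", "manager"}))   # True
print(satisfies(policy, {"audit division", "trainee"}))   # False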
---
paper_title: Ciphertext-policy attribute-based encryption: an expressive, efficient, and provably secure realization
paper_content:
We present a new methodology for realizing Ciphertext-Policy Attribute Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption time, and decryption time scale linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. We present three constructions within our framework. Our first system is proven selectively secure under an assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption, which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions.
---
paper_title: Addressing cloud computing security issues
paper_content:
The recent emergence of cloud computing has drastically altered everyone's perception of infrastructure architectures, software delivery and development models. Projected as an evolutionary step following the transition from mainframe computers to client/server deployment models, cloud computing encompasses elements from grid computing, utility computing and autonomic computing into an innovative deployment architecture. This rapid transition towards the clouds has fuelled concerns on a critical issue for the success of information systems, communication and information security. From a security perspective, a number of uncharted risks and challenges have been introduced by this relocation to the clouds, eroding much of the effectiveness of traditional protection mechanisms. As a result the aim of this paper is twofold: firstly, to evaluate cloud security by identifying unique security requirements and, secondly, to attempt to present a viable solution that eliminates these potential threats. This paper proposes introducing a Trusted Third Party, tasked with assuring specific security characteristics within a cloud environment. The proposed solution calls upon cryptography, specifically Public Key Infrastructure operating in concert with SSO and LDAP, to ensure the authentication, integrity and confidentiality of involved data and communications. The solution presents a horizontal level of service, available to all implicated entities, that realizes a security mesh within which essential trust is maintained.
---
paper_title: Parallel search over encrypted data under attribute based encryption on the Cloud Computing
paper_content:
Data confidentiality in Cloud Computing is a very challenging task. Encryption is one of the most secure methods for ensuring it, and searchable encryption techniques are used to search over encrypted data without the need for decryption. However, despite this security measure, some leaks may appear when searching over data. In this article, we propose to improve the confidentiality of outsourced data. We are particularly interested in reinforcing access control on the search result when the search is performed over encrypted data. The property behind this aspect of security is known as the ACAS (Access Control Aware Search) principle. We present a hybridization of Searchable Encryption and Attribute Based Encryption techniques in order to satisfy the ACAS property. The proposed model supports personalized and secure multi-user access to outsourced data and presents high search performance. It deals with multi-keyword searches and is designed to speed up the search time by taking advantage of High Performance Computing, which is widely used in Cloud Computing. Two Attribute Based Encryption techniques are considered on the Cloud side, and the conducted experiments show the efficiency of the proposed method.
---
paper_title: Ciphertext-Policy Attribute-Based Encryption
paper_content:
In several distributed systems a user should only be able to access data if the user possesses a certain set of credentials or attributes. Currently, the only method for enforcing such policies is to employ a trusted server to store the data and mediate access control. However, if any server storing the data is compromised, then the confidentiality of the data will be compromised. In this paper we present a system for realizing complex access control on encrypted data that we call ciphertext-policy attribute-based encryption. By using our techniques encrypted data can be kept confidential even if the storage server is untrusted; moreover, our methods are secure against collusion attacks. Previous attribute-based encryption systems used attributes to describe the encrypted data and built policies into users' keys, while in our system attributes are used to describe a user's credentials, and a party encrypting data determines a policy for who can decrypt. Thus, our methods are conceptually closer to traditional access control methods such as role-based access control (RBAC). In addition, we provide an implementation of our system and give performance measurements.
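CP-ABE constructions bind a fresh secret to each ciphertext and split it across the access policy with a (linear) secret-sharing scheme. The Python sketch below shows only that sharing and reconstruction arithmetic, as plain Shamir threshold sharing over a prime field; a real scheme additionally hides the shares in group exponents and randomizes each user's key, which is what prevents colluding users from pooling their shares.

import random

P = 2**127 - 1            # a Mersenne prime, used as the field modulus

def share(secret, k, n):
    # Split `secret` into n shares, any k of which reconstruct it.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

s = random.randrange(P)
shares = share(s, k=2, n=3)
assert reconstruct(shares[:2]) == s    # any 2 of the 3 shares suffice
assert reconstruct(shares[1:]) == s
print("threshold reconstruction works")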
---
paper_title: Expressive CP-ABE with partially hidden access structures
paper_content:
At Eurocrypt 2005, Sahai and Waters [7] introduced the concept of attribute-based encryption (ABE). ABE enables public key based one-to-many encryption and is envisioned as a promising cryptographic primitive for realizing scalable and fine-grained access control systems. There are two kinds of ABE schemes [1], key-policy ABE (KP-ABE) and ciphertext-policy ABE (CP-ABE) schemes. In this paper, our concern is with the latter.
---
paper_title: Elliptic curves over finite fields and the computation of square roots mod $p$
paper_content:
In this paper we present a deterministic algorithm to compute the number of F_q-points of an elliptic curve that is defined over a finite field F_q and which is given by a Weierstrass equation. The algorithm takes O(log^9 q) elementary operations. As an application we give an algorithm to compute square roots mod p. For fixed x in Z, it takes O(log^9 p) elementary operations to compute the square root of x mod p. 1. Introduction. In this paper we present an algorithm to compute the number of F_q-points of an elliptic curve defined over a finite field F_q, which is given by a Weierstrass equation. We restrict ourselves to the case where the characteristic of F_q is not 2 or 3. The algorithm is deterministic, does not depend on any unproved hypotheses and takes O(log^9 q) elementary operations. If one applies fast multiplication techniques, the algorithm will take O((|x|^(1/2) log p)^(6+e)) elementary operations for any e > 0. Let E be an elliptic curve defined over the prime field F_p and let an affine model of it be given by a Weierstrass equation Y^2 = X^3 + AX + B (A, B in F_p). An explicit formula for the number of F_p-points on E is given by #E(F_p) = p + 1 + sum over x in F_p of the Legendre symbol ((x^3 + Ax + B)/p).
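For orientation, the Python sketch below shows the naive baselines the paper improves on: counting points with the Legendre-symbol formula quoted above (which takes time exponential in log p) and the easy square-root case p ≡ 3 (mod 4). It is not Schoof's algorithm, and the small curve parameters are chosen arbitrarily.

def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) is 1 for residues, p-1 for non-residues.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def count_points(A, B, p):
    # #E(F_p) = p + 1 + sum_x legendre(x^3 + Ax + B, p), including infinity.
    return p + 1 + sum(legendre(x * x * x + A * x + B, p) for x in range(p))

def sqrt_mod(x, p):
    # Square root of x mod p for p % 4 == 3; None if x is a non-residue.
    assert p % 4 == 3
    r = pow(x, (p + 1) // 4, p)
    return r if r * r % p == x % p else None

p, A, B = 103, 2, 3
print(count_points(A, B, p))    # number of F_p-points on y^2 = x^3 + 2x + 3
print(sqrt_mod(2, p))           # a square root of 2 modulo 103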
---
paper_title: Attribute-Based Encryption Optimized for Cloud Computing
paper_content:
In this work, we aim to make attribute-based encryption (ABE) more suitable for access control to data stored in the cloud. For this purpose, we concentrate on giving the encryptor full control over the access rights, providing feasible key management even in the case of multiple independent authorities, and enabling viable user revocation, which is essential in practice. Our main result is an extension of the decentralized CP-ABE scheme of Lewko and Waters [6] with identity-based user revocation. Our revocation system is made feasible by removing the computational burden of a revocation event from the cloud service provider, at the expense of some permanent, yet acceptable overhead of the encryption and decryption algorithms run by the users. Thus, the computation overhead is distributed over a potentially large number of users, instead of putting it on a single party (e.g., a proxy server), which would easily lead to a performance bottleneck. The formal security proof of our scheme is given in the generic bilinear group and random oracle models.
---
paper_title: Attribute-based encryption for fine-grained access control of encrypted data
paper_content:
As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys, which subsumes Hierarchical Identity-Based Encryption (HIBE).
---
paper_title: Verification of multi-owner shared data with collusion resistant user revocation in cloud
paper_content:
These days users can easily store and share their data with each other using cloud technology. However, many users are not assured of the integrity of their data because of security threats in the cloud. Many mechanisms have been proposed and are being used to verify the integrity or correctness of single-owner shared data; they suggest attaching signatures to the data. The proposed system will provide public auditing of multi-owner shared data. When a user is revoked from the group, there must be some method to re-sign the blocks that were signed by the revoked user. The proposed system will also provide efficient user revocation with collusion resistance, i.e., even if the cloud colludes with any revoked users, it cannot learn the contents of the data stored on the cloud.
---
paper_title: Cryptographic Enforcement of Information Flow Policies without Public Information via Tree Partitions
paper_content:
We may enforce an information flow policy by encrypting a protected resource and ensuring that only users authorized by the policy are able to decrypt the resource. In most schemes in the literature that use symmetric cryptographic primitives, each user is assigned a single secret and derives decryption keys using this secret and publicly available information. Recent work has challenged this approach by developing schemes, based on a chain partition of the information flow policy, that do not require public information for key derivation, the trade-off being that a user may need to be assigned more than one secret. In general, many different chain partitions exist for the same policy and, until now, it was not known how to compute an appropriate one. In this paper, we introduce the notion of a tree partition, of which chain partitions are a special case. We show how a tree partition may be used to define a cryptographic enforcement scheme and prove that such schemes can be instantiated in such a way as to preserve the strongest security properties known for cryptographic enforcement schemes. We establish a number of results linking the amount of secret material that needs to be distributed to users with a weighted acyclic graph derived from the tree partition. These results enable us to develop efficient algorithms for deriving tree and chain partitions that minimize the amount of secret material that needs to be distributed.
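The basic mechanism that lets such schemes avoid public derivation information is iterated hashing down a chain: a user who holds the secret of one node can recompute the secret of every node below it, and nothing above it. The Python sketch below illustrates only this idea, with invented labels; it is not the paper's tree-partition construction.

import hashlib

def derive(parent_secret: bytes, child_label: str) -> bytes:
    return hashlib.sha256(parent_secret + child_label.encode()).digest()

# A chain of security labels, highest first.
chain = ["top-secret", "secret", "confidential", "public"]

root = b"secret issued by the administrator to top-secret users"
keys = {chain[0]: root}
for upper, lower in zip(chain, chain[1:]):
    keys[lower] = derive(keys[upper], lower)

# A "confidential" user stores one secret and recomputes the "public" key:
assert derive(keys["confidential"], "public") == keys["public"]
print("derivation without public information works")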
---
paper_title: Ciphertext-Policy Attribute-Based Encryption
paper_content:
In several distributed systems a user should only be able to access data if the user possesses a certain set of credentials or attributes. Currently, the only method for enforcing such policies is to employ a trusted server to store the data and mediate access control. However, if any server storing the data is compromised, then the confidentiality of the data will be compromised. In this paper we present a system for realizing complex access control on encrypted data that we call ciphertext-policy attribute-based encryption. By using our techniques encrypted data can be kept confidential even if the storage server is untrusted; moreover, our methods are secure against collusion attacks. Previous attribute-based encryption systems used attributes to describe the encrypted data and built policies into users' keys, while in our system attributes are used to describe a user's credentials, and a party encrypting data determines a policy for who can decrypt. Thus, our methods are conceptually closer to traditional access control methods such as role-based access control (RBAC). In addition, we provide an implementation of our system and give performance measurements.
---
paper_title: Ciphertext-policy attribute-based encryption: an expressive, efficient, and provably secure realization
paper_content:
We present a new methodology for realizing Ciphertext-Policy Attribute Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption time, and decryption time scale linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. We present three constructions within our framework. Our first system is proven selectively secure under an assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption, which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions.
---
paper_title: Attribute-Based Encryption Optimized for Cloud Computing
paper_content:
In this work, we aim to make attribute-based encryption (ABE) more suitable for access control to data stored in the cloud. For this purpose, we concentrate on giving the encryptor full control over the access rights, providing feasible key management even in the case of multiple independent authorities, and enabling viable user revocation, which is essential in practice. Our main result is an extension of the decentralized CP-ABE scheme of Lewko and Waters [6] with identity-based user revocation. Our revocation system is made feasible by removing the computational burden of a revocation event from the cloud service provider, at the expense of some permanent, yet acceptable overhead of the encryption and decryption algorithms run by the users. Thus, the computation overhead is distributed over a potentially large number of users, instead of putting it on a single party (e.g., a proxy server), which would easily lead to a performance bottleneck. The formal security proof of our scheme is given in the generic bilinear group and random oracle models.
---
paper_title: A Survey on Attribute-based Encryption Schemes of Access Control in Cloud Environments
paper_content:
In Attribute-based Encryption (ABE) schemes, attributes play a very important role. Attributes have been exploited to generate a public key for encrypting data and have been used as an access policy to control users' access. The access policy can be categorized as either key-policy or ciphertext-policy. The key-policy is the access structure on the user's private key, and the ciphertext-policy is the access structure on the ciphertext. The access structure itself can be categorized as either monotonic or non-monotonic. Using ABE schemes has two advantages: (1) it reduces the communication overhead of the Internet, and (2) it provides fine-grained access control. In this paper, we survey a basic attribute-based encryption scheme, two different access-policy attribute-based encryption schemes, and two different access structures, which are analyzed for cloud environments. Finally, we compare these schemes against several criteria for cloud environments.
---
paper_title: Ciphertext-policy attribute-based encryption: an expressive, efficient, and provably secure realization
paper_content:
We present a new methodology for realizing Ciphertext-Policy Attribute Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption time, and decryption time scale linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. We present three constructions within our framework. Our first system is proven selectively secure under an assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption, which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions.
---
paper_title: Hierarchical attribute-based encryption for fine-grained access control in cloud storage services
paper_content:
Cloud computing, as an emerging computing paradigm, enables users to remotely store their data in a cloud so as to enjoy scalable services on demand. Small and medium-sized enterprises with limited budgets, especially, can achieve cost savings and productivity enhancements by using cloud-based services to manage projects, collaborate, and the like. However, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data may raise potential security and privacy issues. To keep sensitive user data confidential against untrusted CSPs, a natural way is to apply cryptographic approaches, by disclosing decryption keys only to authorized users. However, when enterprise users outsource confidential data for sharing on cloud servers, the adopted encryption system should not only support fine-grained access control, but also provide high performance, full delegation, and scalability, so as to best serve the needs of accessing data anytime and anywhere, delegating within enterprises, and achieving a dynamic set of users. In this paper, we propose a scheme to help enterprises efficiently share confidential data on cloud servers. We achieve this goal by first combining the hierarchical identity-based encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system, then making a performance-expressivity tradeoff, and finally applying proxy re-encryption and lazy re-encryption to our scheme.
---
paper_title: Revocable Data Access Control for Multi-Authority Cloud Storage Using Cipher Text-Policy Attribute Based Encryption
paper_content:
In several distributed systems a user should only be able to access data if the user possesses a certain set of credentials or attributes. Currently, the only method for enforcing such policies is to employ a trusted server to store the data and mediate access control; however, if any server storing the data is compromised, the confidentiality of the data will be compromised. In this paper we present a system for realizing complex access control on encrypted data that we call ciphertext-policy attribute-based encryption. By using our techniques, encrypted data can be kept secure against collusion attacks. Previous attribute-based encryption systems used attributes to describe the encrypted data and built policies into users' keys, while in our system attributes are used to describe a user's credentials, and a party encrypting data determines a policy for who can decrypt. Thus, our methods are conceptually closer to traditional access control methods such as role-based access control (RBAC). In addition, we provide an implementation of our system and give performance measurements. In cloud computing, data security is achieved by a data access control scheme. Ciphertext-policy attribute-based encryption (CP-ABE) is considered one of the most suitable schemes for data access control in cloud storage. This scheme provides data owners more direct control over access policies. However, applying CP-ABE schemes to data access control in cloud storage systems is difficult because of the attribute revocation problem. This paper therefore surveys an efficient and revocable data access control scheme for multi-authority cloud storage systems, where multiple authorities cooperate and each authority is able to issue attributes independently. Specifically, this paper surveys a revocable multi-authority CP-ABE scheme. The attribute revocation method can efficiently achieve both forward security and backward security. The survey shows that the revocable multi-authority CP-ABE scheme is secure in the random oracle model and is more efficient than previous multi-authority CP-ABE schemes. Keywords: access control; multi-authority; CP-ABE; attribute revocation; cloud storage.
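The bookkeeping behind version-based attribute revocation, which is what yields forward and backward security, can be sketched as follows. The Python fragment uses plain dictionaries and invented names purely for illustration; real revocable multi-authority CP-ABE schemes distribute key-update and ciphertext-update material cryptographically rather than trusting such a table.

# Version-based attribute revocation, bookkeeping only (illustrative).
attr_version = {"cardiologist": 1}
user_keys = {"alice": {"cardiologist": 1}, "bob": {"cardiologist": 1}}
ciphertexts = {"record-17": {"attr": "cardiologist", "version": 1}}

def revoke(user, attr):
    attr_version[attr] += 1
    user_keys[user].pop(attr, None)            # the revoked user keeps no key
    for keys in user_keys.values():            # remaining holders get a key update
        if attr in keys:
            keys[attr] = attr_version[attr]
    for ct in ciphertexts.values():            # affected ciphertexts are updated
        if ct["attr"] == attr:
            ct["version"] = attr_version[attr]

def can_decrypt(user, ct_name):
    ct = ciphertexts[ct_name]
    return user_keys[user].get(ct["attr"]) == ct["version"]

revoke("bob", "cardiologist")
print(can_decrypt("alice", "record-17"))   # True: her key was updated
print(can_decrypt("bob", "record-17"))     # False: past and future access cut off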
---
paper_title: EFFICIENT AND SECURE ATTRIBUTE REVOCATION OF DATA IN MULTI-AUTHORITY CLOUD STORAGE
paper_content:
One method which is effective and also ensures security in cloud storage is data access control. Nowadays this method faces many challenges from data outsourcing as well as untrusted cloud servers. In this case data owners should get direct control over access policies, which is provided by ciphertext-policy attribute-based encryption (CP-ABE). Due to attribute revocation it is not easy to apply existing CP-ABE schemes to data in cloud storage. This design gives an efficient, expressive, and revocable data access control scheme for multi-authority cloud storage systems, in which multiple authorities coexist and each is able to issue attributes independently. Specifically, the implemented system includes a revocable multi-authority CP-ABE scheme, which provides the underlying techniques that help to design data access control schemes. The efficient revocation method achieves forward security as well as backward security. The screenshots show the results of the data access control scheme, which is secure and efficient.
---
paper_title: Ciphertext-policy attribute-based encryption: an expressive, efficient, and provably secure realization
paper_content:
We present a new methodology for realizing Ciphertext-Policy Attribute Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption time, and decryption time scale linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. We present three constructions within our framework. Our first system is proven selectively secure under an assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption, which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions.
---
paper_title: Ciphertext-Policy Attribute-Based Encryption
paper_content:
In several distributed systems a user should only be able to access data if a user posses a certain set of credentials or attributes. Currently, the only method for enforcing such policies is to employ a trusted server to store the data and mediate access control. However, if any server storing the data is compromised, then the confidentiality of the data will be compromised. In this paper we present a system for realizing complex access control on encrypted data that we call ciphertext-policy attribute-based encryption. By using our techniques encrypted data can be kept confidential even if the storage server is untrusted; moreover, our methods are secure against collusion attacks. Previous attribute-based encryption systems used attributes to describe the encrypted data and built policies into user's keys; while in our system attributes are used to describe a user's credentials, and a party encrypting data determines a policy for who can decrypt. Thus, our methods are conceptually closer to traditional access control methods such as role-based access control (RBAC). In addition, we provide an implementation of our system and give performance measurements.
---
paper_title: Fully secure multi-authority ciphertext-policy attribute-based encryption without random oracles
paper_content:
Recently Lewko and Waters proposed the first fully secure multi-authority ciphertext-policy attribute-based encryption (CP-ABE) system in the random oracle model, and leave the construction of a fully secure multi-authority CP-ABE in the standard model as an open problem. Also, there is no CP-ABE system which can completely prevent individual authorities from decrypting ciphertexts. In this paper, we propose a new multi-authority CP-ABE system which addresses these two problems positively. In this new system, there are multiple Central Authorities (CAs) and Attribute Authorities (AAs), the CAs issue identity-related keys to users and are not involved in any attribute related operations, AAs issue attribute-related keys to users and each AA manages a different domain of attributes. The AAs operate independently from each other and do not need to know the existence of other AAs. Messages can be encrypted under any monotone access structure over the entire attribute universe. The system is adaptively secure in the standard model with adaptive authority corruption, and can support large attribute universe.
---
paper_title: An Efficient and Secure Solution for Attribute Revocation Problem Utilizing CP-ABE Scheme in Mobile Cloud Computing
paper_content:
The advent of business apps allows users to form dynamic groups so that they can store data on cloud servers and share the data within their user groups through their mobile devices. A major concern is that mobile users need their group data to be secure and inaccessible to users outside the group. To solve this issue, attribute-based encryption (ABE) techniques are employed, as they are widely recognized as a valid and robust mechanism for providing fine-grained access control over the data to legitimate users. At the same time, since complex computations are involved in key issuing and data encryption by attribute authorities (AAs) and in decryption by legitimate users, there are some efficiency issues. Rekeying plays a major role in dynamic systems where nodes join and leave. Since revocation of user rights requires the system to secure data from departed users, rekeying has to be done on the entire data set belonging to the attribute users in that group. However, the cost of rekeying is another concern for system efficiency, which should not be offset by a compromise on data security. There are many earlier research works on data security for web applications using ABE, but there are limited studies on CP-ABE in mobile computing with a multi-authority data storage system. A system is implemented which allows user groups to register, certificate authorities (CAs) to approve registrations of users and AAs and assign public keys, AAs to manage attributes and revoke user access with rekeying, and a centralized server to provide data persistence. Experimental results show the effectiveness of the proposed solution and the efficiency of the rekeying mechanism when revoking user access rights in the system architecture. Keywords: attribute-based encryption, CP-ABE, mobile data security.
---
paper_title: Attribute-Based Encryption Optimized for Cloud Computing
paper_content:
In this work, we aim to make attribute-based encryption (ABE) more suitable for access control to data stored in the cloud. For this purpose, we concentrate on giving to the encryptor full control over the access rights, providing feasible key management even in case of multiple independent authorities, and enabling viable user revocation, which is essential in practice. Our main result is an extension of the decentralized CP-ABE scheme of Lewko and Waters [6] with identity-based user revocation. Our revocation system is made feasible by removing the computational burden of a revocation event from the cloud service provider, at the expense of some permanent, yet acceptable overhead of the encryption and decryption algorithms run by the users. Thus, the computation overhead is distributed over a potentially large number of users, instead of putting it on a single party (e.g., a proxy server), which would easily lead to a performance bottleneck. The formal security proof of our scheme is given in the generic bilinear group and random oracle models.
---
paper_title: New Ciphertext-Policy Attribute-Based Encryption with Efficient Revocation
paper_content:
Attribute-based encryption (ABE) is getting popular for its fine-grained access control in cloud computing. However, dynamic user or attribute revocation is a challenge in original ABE schemes. To address this issue, a new ciphertext-policy ABE scheme with efficient revocation is proposed. In the new scheme, the master key is randomly divided into two parts used to generate the secret key and the delegation key, which are sent to the user and the cloud service provider, respectively. In our proposed scheme, the authority removes a user's attribute without affecting the access privileges of other users who hold this attribute. Finally, the scheme is proven secure against selective-structure chosen-plaintext attacks under the decisional q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption. Compared with some existing schemes, our scheme has lower storage overhead and communication cost.
---
paper_title: Multi-authority fine-grained access control with accountability and its application in cloud
paper_content:
Attribute-based encryption (ABE) is one of the critical primitives for the application of fine-grained access control. To reduce the trust assumption on the attribute authority and meanwhile enhance the privacy of users and the security of the encryption scheme, the notion of multi-authority ABE with an anonymous key issuing protocol has been proposed. An ABE scheme allows data to be encrypted for a set of users satisfying some specified attribute policy, and any leakage of a decryption key cannot be associated to a user. As a result, a misbehaving user could abuse the property of access anonymity by sharing its key with other unauthorized users. On the other hand, previous work mainly focuses on key-policy ABE, which cannot support ciphertext-policy access control. In this paper, we propose a privacy-aware multi-authority ciphertext-policy ABE scheme with accountability, which hides the attribute information in the ciphertext and allows tracing the identity of the dishonest user who shares the decryption key. The efficiency analysis demonstrates that the new scheme is efficient, and the computational overhead in the tracing algorithm is only proportional to the length of the identity. Finally, we also show how to apply it in cloud computing to achieve an accountable fine-grained access control system.
---
paper_title: Multi-Authority Attribute Based Encryption
paper_content:
In an identity based encryption scheme, each user is identified by a unique identity string. An attribute based encryption scheme (ABE), in contrast, is a scheme in which each user is identified by a set of attributes, and some function of those attributes is used to determine decryption ability for each ciphertext. Sahai and Waters introduced a single authority attribute encryption scheme and left open the question of whether a scheme could be constructed in which multiple authorities were allowed to distribute attributes [SW05]. We answer this question in the affirmative. Our scheme allows any polynomial number of independent authorities to monitor attributes and distribute secret keys. An encryptor can choose, for each authority, a number dk and a set of attributes; he can then encrypt a message such that a user can only decrypt if he has at least dk of the given attributes from each authority k. Our scheme can tolerate an arbitrary number of corrupt authorities. We also show how to apply our techniques to achieve a multi-authority version of the large universe fine grained access control ABE presented by Gopal et al. [GPSW06].
---
paper_title: Multi-authority attribute-based encryption access control scheme with policy hidden for cloud storage
paper_content:
For realizing flexible, scalable and fuzzy fine-grained access control, the ciphertext-policy attribute-based encryption (CP-ABE) scheme has been widely used in cloud storage systems. However, the access structure of a CP-ABE scheme is outsourced to the cloud storage server, resulting in the disclosure of access policy privacy. In addition, multiple authorities coexist in the cloud storage system and each authority is able to issue attributes independently. However, existing CP-ABE schemes cannot be directly applied to data access control for multi-authority cloud storage systems, due to the inefficiency of user revocation. In this paper, to cope with these challenges, we propose a decentralized multi-authority CP-ABE access control scheme, which is more practical in supporting user revocation. In addition, this scheme can protect data privacy and access policy privacy by keeping the policy hidden in the cloud storage system; the access policy is realized by employing a linear secret sharing scheme. Finally, the security and performance analyses demonstrate that our scheme has high security in terms of access policy privacy and efficiency in terms of the computational cost of user revocation.
---
paper_title: DAC-MACS: Effective data access control for multi-authority cloud storage systems
paper_content:
Data access control is an effective way to ensure data security in the cloud. However, due to data outsourcing and untrusted cloud servers, data access control becomes a challenging issue in cloud storage systems. Existing access control schemes are no longer applicable to cloud storage systems, because they either produce multiple encrypted copies of the same data or require a fully trusted cloud server. Ciphertext-Policy Attribute-based Encryption (CP-ABE) is a promising technique for access control of encrypted data. It requires a trusted authority to manage all the attributes and distribute keys in the system. In cloud storage systems, multiple authorities co-exist and each authority is able to issue attributes independently. However, existing CP-ABE schemes cannot be directly applied to data access control for multi-authority cloud storage systems, due to the inefficiency of decryption and revocation. In this paper, we propose DAC-MACS (Data Access Control for Multi-Authority Cloud Storage), an effective and secure data access control scheme with efficient decryption and revocation. Specifically, we construct a new multi-authority CP-ABE scheme with efficient decryption and also design an efficient attribute revocation method that can achieve both forward security and backward security. The analysis and the simulation results show that our DAC-MACS is highly efficient and provably secure under the security model.
---
paper_title: Efficient decentralized attribute-based access control for cloud storage with user revocation
paper_content:
Cloud storage access control is very important for the security of outsourced data, where Attribute-based Encryption (ABE) is regarded as one of the most promising technologies. Current research mainly focuses on decentralized ABE, a variant of the multi-authority ABE scheme, because conventional ABE schemes depend on a single authority to issue secret keys for all users, which is very impractical in a large-scale cloud. A decentralized ABE scheme should not rely on a central authority and can eliminate the need for collaborative computation. However, constructing such an efficient and practical decentralized ABE scheme remains a challenging research problem. In this study, we design a new decentralized ciphertext-policy attribute-based encryption access control scheme for cloud storage systems. Firstly, our scheme does not require any central authority or global coordination among multiple authorities. Then, it supports any LSSS access structure and thus can encrypt data in terms of any boolean formula. In addition, we also utilize the proxy re-encryption technique to overcome the user revocation problem in decentralized ABE schemes, thus making our scheme more practical. Our security and performance analysis demonstrates the presented scheme's security strength and its efficiency in terms of flexibility and computation.
---
paper_title: DACC: Distributed Access Control in Clouds
paper_content:
We propose a new model for data storage and access in clouds. Our scheme avoids storing multiple encrypted copies of same data. In our framework for secure data storage, cloud stores encrypted data (without being able to decrypt them). The main novelty of our model is addition of key distribution centers (KDCs). We propose DACC (Distributed Access Control in Clouds) algorithm, where one or more KDCs distribute keys to data owners and users. KDC may provide access to particular fields in all records. Thus, a single key replaces separate keys from owners. Owners and users are assigned certain set of attributes. Owner encrypts the data with the attributes it has and stores them in the cloud. The users with matching set of attributes can retrieve the data from the cloud. We apply attribute-based encryption based on bilinear pairings on elliptic curves. The scheme is collusion secure, two users cannot together decode any data that none of them has individual right to access. DACC also supports revocation of users, without redistributing keys to all the users of cloud services. We show that our approach results in lower communication, computation and storage overheads, compared to existing models and schemes.
---
paper_title: Owner Specified Excessive Access Control for Attribute Based Encryption
paper_content:
Attribute-based encryption (ABE) has emerged as a promising solution for access control to diverse sets of users in cloud computing systems. A policy can specify whether (or not) any specific user should be given access to data, but it does not give the data owner the privilege to specify which fraction, or which specific chunk, of that data may be accessed or decrypted. In this paper, we address this issue and propose a scheme that gives the data owner excessive access control, so that he can specify the specific chunk out of the total data to be accessed by a user depending on his attributes. In our scheme, a data owner can encrypt data over attributes specified in a policy, but even if a user's attributes satisfy the policy, he can decrypt the data (partially or fully) fractionally based on the attributes specified by the owner. The owner can also prioritize a user's access based on his designation, or hierarchical role in a specific organization. We also address the issue of attribute repetition, which reduces the owner's encryption cost and the ciphertext size. Furthermore, we achieve this with a single ciphertext over the policy for the entire data, and prove our scheme secure in the generic group and random oracle model. Theoretical comparisons of computations with existing constructions, and the performance of the scheme evaluated in the Charm simulator, are reasonable enough for it to be adopted in practice.
---
paper_title: Attribute-based fine-grained access control with efficient revocation in cloud storage systems
paper_content:
A cloud storage service allows data owners to outsource their data to the cloud and, through the cloud, provide data access to users. Because the cloud server and the data owner are not in the same trust domain, the semi-trusted cloud server cannot be relied upon to enforce the access policy. To address this challenge, traditional methods usually require the data owner to encrypt the data and deliver decryption keys to authorized users. These methods, however, normally involve complicated key management and high overhead for the data owner. In this paper, we design an access control framework for cloud storage systems that achieves fine-grained access control based on an adapted Ciphertext-Policy Attribute-based Encryption (CP-ABE) approach. In the proposed scheme, an efficient attribute revocation method is proposed to cope with the dynamic changes of users' access privileges in large-scale systems. The analysis shows that the proposed access control scheme is provably secure in the random oracle model and efficient enough to be applied in practice.
---
paper_title: RAAC: Robust and Auditable Access Control With Multiple Attribute Authorities for Public Cloud Storage
paper_content:
Data access control is a challenging issue in public cloud storage systems. Ciphertext-policy attribute-based encryption (CP-ABE) has been adopted as a promising technique to provide flexible, fine-grained, and secure data access control for cloud storage with honest-but-curious cloud servers. However, in the existing CP-ABE schemes, the single attribute authority must execute the time-consuming user legitimacy verification and secret key distribution, and hence, it results in a single-point performance bottleneck when a CP-ABE scheme is adopted in a large-scale cloud storage system. Users may be stuck in the waiting queue for a long period to obtain their secret keys, thereby resulting in low efficiency of the system. Although multi-authority access control schemes have been proposed, these schemes still cannot overcome the drawbacks of single-point bottleneck and low efficiency, due to the fact that each of the authorities still independently manages a disjoint attribute set. In this paper, we propose a novel heterogeneous framework to remove the problem of single-point performance bottleneck and provide a more efficient access control scheme with an auditing mechanism. Our framework employs multiple attribute authorities to share the load of user legitimacy verification. Meanwhile, in our scheme, a central authority is introduced to generate secret keys for legitimacy verified users. Unlike other multi-authority access control schemes, each of the authorities in our scheme manages the whole attribute set individually. To enhance security, we also propose an auditing mechanism to detect which attribute authority has incorrectly or maliciously performed the legitimacy verification procedure. Analysis shows that our system not only guarantees the security requirements but also makes great performance improvement on key generation.
---
paper_title: HASBE: A Hierarchical Attribute-Based Solution for Flexible and Scalable Access Control in Cloud Computing
paper_content:
Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns on outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.
---
paper_title: A Robust and Verifiable Threshold Multi-Authority Access Control System in Public Cloud Storage
paper_content:
Attribute-based encryption is regarded as a promising cryptographic tool to guarantee data owners' direct control over their data in public cloud storage. Earlier ABE schemes involve only one authority to maintain the whole attribute set, which can bring a single-point bottleneck on both security and performance. Subsequently, certain multi-authority schemes were proposed, in which multiple authorities separately maintain disjoint attribute subsets; however, the single-point bottleneck problem remains unsolved. In this survey paper, from another perspective, we consider a threshold multi-authority CP-ABE access control scheme for public cloud storage, named TMACS, in which multiple authorities jointly manage a uniform attribute set. In TMACS, taking advantage of (t, n) threshold secret sharing, the master key can be shared among multiple authorities, and a legal user can generate his/her secret key by interacting with any t authorities. Security and performance analysis results show that TMACS is not only verifiably secure when fewer than t authorities are compromised, but also robust when no fewer than t authorities are alive in the system. Also, by efficiently combining the traditional multi-authority scheme with TMACS, we construct a hybrid one, which satisfies the scenario of attributes coming from different authorities as well as achieving security and system-level robustness.
---
paper_title: Expressive, Efficient, and Revocable Data Access Control for Multi-Authority Cloud Storage
paper_content:
Data access control is an effective way to ensure data security in the cloud. Due to data outsourcing and untrusted cloud servers, data access control becomes a challenging issue in cloud storage systems. Ciphertext-Policy Attribute-based Encryption (CP-ABE) is regarded as one of the most suitable technologies for data access control in cloud storage, because it gives data owners more direct control over access policies. However, it is difficult to directly apply existing CP-ABE schemes to data access control for cloud storage systems because of the attribute revocation problem. In this paper, we design an expressive, efficient and revocable data access control scheme for multi-authority cloud storage systems, where multiple authorities co-exist and each authority is able to issue attributes independently. Specifically, we propose a revocable multi-authority CP-ABE scheme, and apply it as the underlying technique to design the data access control scheme. Our attribute revocation method can efficiently achieve both forward security and backward security. The analysis and simulation results show that our proposed data access control scheme is secure in the random oracle model and is more efficient than previous works.
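A toy bookkeeping sketch of the attribute-revocation idea summarized above, under stated assumptions: symmetric keys derived from a per-attribute version stand in for the CP-ABE ciphertext components, and the ToyAuthority class, kdf helper and user names are invented for illustration rather than taken from the cited scheme. Bumping the version on revocation, re-keying the corresponding ciphertext component, and handing the update material only to remaining holders is the mechanism behind the forward/backward security the abstract refers to.

import os, hashlib

def kdf(master, attribute, version):
    """Derive the per-attribute, per-version key that wraps the ciphertext component."""
    return hashlib.sha256(master + attribute.encode() + version.to_bytes(4, "big")).digest()

class ToyAuthority:
    """Simulates one attribute authority that can revoke an attribute from a user."""
    def __init__(self, attributes):
        self.master = os.urandom(32)
        self.version = {a: 1 for a in attributes}      # current version of each attribute
        self.holders = {a: set() for a in attributes}  # users currently holding each attribute

    def grant(self, user, attribute):
        self.holders[attribute].add(user)
        return kdf(self.master, attribute, self.version[attribute])

    def revoke(self, user, attribute):
        """Bump the attribute version; return update keys only for the remaining holders.
        The revoked user never sees the new key, so re-keyed ciphertext components are
        unreadable to it, while remaining holders keep access after applying the update."""
        self.holders[attribute].discard(user)
        self.version[attribute] += 1
        new_key = kdf(self.master, attribute, self.version[attribute])
        return {u: new_key for u in self.holders[attribute]}

aa = ToyAuthority(["doctor"])
alice_key = aa.grant("alice", "doctor")
bob_key = aa.grant("bob", "doctor")
updates = aa.revoke("bob", "doctor")          # only alice receives the update key
assert "alice" in updates and "bob" not in updates
assert updates["alice"] != alice_key          # the ciphertext component must be re-keyed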
---
paper_title: PIRATTE: Proxy-based Immediate Revocation of ATTribute-based Encryption
paper_content:
Access control to data in traditional enterprises is typically enforced through reference monitors. However, as more and more enterprise data is outsourced, trusting third party storage servers is getting challenging. As a result, cryptography, specifically Attribute-based encryption (ABE), is getting popular for its expressiveness. The challenge of ABE is revocation. To address this challenge, we propose PIRATTE, an architecture that supports fine-grained access control policies and dynamic group membership. PIRATTE is built using attribute-based encryption; a key and novel feature of our architecture, however, is that it is possible to remove access from a user without issuing new keys to other users or re-encrypting existing ciphertexts. We achieve this by introducing a proxy that participates in the decryption process and enforces revocation constraints. The proxy is minimally trusted and cannot decrypt ciphertexts or provide access to previously revoked users. We describe the PIRATTE construction and provide a security analysis along with performance evaluation. We also describe an architecture for online social networks that can use PIRATTE, and a prototype application of PIRATTE on Facebook.
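A minimal sketch of the proxy-assisted decryption pattern, assuming a toy ElGamal group in place of the attribute-based machinery; the parameters p, q, g and the function names are illustrative stand-ins, not PIRATTE's actual construction. The decryption exponent is split additively between user and proxy, so the proxy can contribute to decryption without ever learning the plaintext, and revocation is immediate because the proxy simply withholds its share. Requires Python 3.8+ for pow(x, -1, p).

import secrets

# Toy prime-order subgroup: p = 2q + 1 with q prime, g a generator of the order-q subgroup.
p, q, g = 2039, 1019, 4

# Full decryption key d is split additively: d = d_user + d_proxy (mod q).
d = secrets.randbelow(q - 1) + 1
d_proxy = secrets.randbelow(q)
d_user = (d - d_proxy) % q
pk = pow(g, d, p)

# ElGamal encryption of a group element m (e.g., a wrapped session key).
m = pow(g, 123, p)
r = secrets.randbelow(q - 1) + 1
c1, c2 = pow(g, r, p), (m * pow(pk, r, p)) % p

def proxy_partial_decrypt(c1, revoked=False):
    """The proxy contributes c1^d_proxy; it cannot recover m from this alone.
    Revocation is immediate: the proxy just stops serving the revoked user."""
    if revoked:
        raise PermissionError("user revoked: proxy withholds its share")
    return pow(c1, d_proxy, p)

def user_finish(c1, c2, proxy_part):
    # Combine shares: c1^d_user * c1^d_proxy = c1^d, then divide it out of c2.
    full = (pow(c1, d_user, p) * proxy_part) % p
    return (c2 * pow(full, -1, p)) % p   # pow(x, -1, p) is the modular inverse

assert user_finish(c1, c2, proxy_partial_decrypt(c1)) == m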
---
paper_title: Dynamic User Revocation and Key Refreshing for Attribute-Based Encryption in Cloud Storage
paper_content:
Cloud storage provides the potential for on-demand massive data storage, but its highly dynamic and heterogeneous environment presents significant data protection challenges. Ciphertext-policy attribute-based encryption (CP-ABE) enables fine-grained access control. However, important issues such as efficient user revocation and key refreshing are not straightforward, which constrains the adoption of CP-ABE in cloud storage systems. In this paper we propose a dynamic user revocation and key refreshing model for CP-ABE schemes. A key feature of our model is its generic possibility in general CP-ABE schemes to refresh the system keys or remove the access from a user without issuing new keys to other users or re-encrypting existing ciphertexts. Our model is efficient and suitable for application in cloud storage environments. As an example, we use BSW's CP-ABE scheme to show the adaptation of our model to a CP-ABE scheme.
---
paper_title: Efficient revocation in ciphertext-policy attribute-based encryption based cryptographic cloud storage
paper_content:
It is secure for customers to store and share their sensitive data in the cryptographic cloud storage. However, the revocation operation is a sure performance killer in the cryptographic access control system. To optimize the revocation procedure, we present a new efficient revocation scheme which is efficient, secure, and unassisted. In this scheme, the original data are first divided into a number of slices, and then published to the cloud storage. When a revocation occurs, the data owner needs only to retrieve one slice, and re-encrypt and re-publish it. Thus, the revocation process is accelerated by affecting only one slice instead of the whole data. We have applied the efficient revocation scheme to the ciphertext-policy attribute-based encryption (CP-ABE) based cryptographic cloud storage. The security analysis shows that our scheme is computationally secure. The theoretically evaluated and experimentally measured performance results show that the efficient revocation scheme can reduce the data owner’s workload if the revocation occurs frequently.
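The slicing idea can be sketched as below, a conceptual simulation under stated assumptions: a SHA-256 counter-mode keystream stands in for the real cipher, the per-slice keys would in practice be protected under CP-ABE, and the all-or-nothing style dependency between slices used by such schemes is omitted. The point illustrated is that a revocation event re-keys and re-publishes only one slice instead of the whole object.

import os, hashlib

def keystream_xor(key, data):
    """Toy SHA-256 counter-mode keystream; stands in for a real symmetric cipher."""
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ k for b, k in zip(chunk, pad))
    return bytes(out)

def publish(data, n_slices=4):
    """Split the object into slices and encrypt each under its own key."""
    size = -(-len(data) // n_slices)  # ceiling division
    slices = [data[i:i + size] for i in range(0, len(data), size)]
    keys = [os.urandom(32) for _ in slices]
    return [keystream_xor(k, s) for k, s in zip(keys, slices)], keys

def revoke(ciphertexts, keys, slice_index=0):
    """On revocation, re-encrypt and re-publish only one slice under a fresh key."""
    plain = keystream_xor(keys[slice_index], ciphertexts[slice_index])
    keys[slice_index] = os.urandom(32)
    ciphertexts[slice_index] = keystream_xor(keys[slice_index], plain)
    return ciphertexts, keys

data = os.urandom(100)
cts, keys = publish(data)
old_key0 = keys[0]
cts, keys = revoke(cts, keys)
# A revoked user who kept old_key0 can no longer recover slice 0 (with overwhelming probability).
assert keystream_xor(old_key0, cts[0]) != data[:len(cts[0])]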
---
paper_title: TMACS: A Robust and Verifiable Threshold Multi-Authority Access Control System in Public Cloud Storage
paper_content:
Attribute-based Encryption (ABE) is regarded as a promising cryptographic conducting tool to guarantee data owners' direct control over their data in public cloud storage. The earlier ABE schemes involve only one authority to maintain the whole attribute set, which can bring a single-point bottleneck on both security and performance. Subsequently, some multi-authority schemes are proposed, in which multiple authorities separately maintain disjoint attribute subsets. However, the single-point bottleneck problem remains unsolved. In this paper, from another perspective, we conduct a threshold multi-authority CP-ABE access control scheme for public cloud storage, named TMACS, in which multiple authorities jointly manage a uniform attribute set. In TMACS, taking advantage of (t, n) threshold secret sharing, the master key can be shared among multiple authorities, and a legal user can generate his/her secret key by interacting with any t authorities. Security and performance analysis results show that TMACS is not only verifiably secure when less than t authorities are compromised, but also robust when no less than t authorities are alive in the system. Furthermore, by efficiently combining the traditional multi-authority scheme with TMACS, we construct a hybrid one, which satisfies the scenario of attributes coming from different authorities as well as achieving security and system-level robustness.
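The (t, n) threshold sharing at the heart of TMACS can be illustrated with plain Shamir secret sharing over a prime field; this is a simplification, since in the actual scheme the master key is never reconstructed in one place and the user assembles a secret key by interacting with any t authorities, but the reconstruction logic is the same. The field prime and parameter choices below are toy values.

import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy field

def share(secret, t, n):
    """Shamir (t, n) sharing: secret = f(0) for a random degree-(t-1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

master_key = secrets.randbelow(PRIME)
authority_shares = share(master_key, t=3, n=5)           # 5 authorities, any 3 suffice
assert reconstruct(authority_shares[:3]) == master_key   # 3 cooperating authorities succeed
assert reconstruct(authority_shares[1:4]) == master_key  # any 3 work, so the system tolerates 2 offline AAs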
---
paper_title: A robust and verifiable threshold multi-authority access control system in public cloud storage
paper_content:
Attribute-based encryption is the cryptographic tool that assures data owners' enduring control over their data in public cloud storage. Earlier ABE schemes include only one authority to maintain the entire attribute (key) set, which can create a single-point bottleneck on both security and performance. Therefore, some multi-authority schemes were proposed, in which multiple authorities independently maintain disjoint attribute subsets; nevertheless, the single-point bottleneck issue remains unsolved. In this paper, from another point of view, we present a threshold multi-authority CP-ABE access control scheme for public cloud storage, named TMACS, in which multiple authorities jointly manage a uniform attribute set. In TMACS [9], exploiting (t, n) threshold secret sharing, the master key can be shared among numerous authorities, and a legitimate user can produce his/her secret (private) key by cooperating with any t authorities. Security and performance investigation results demonstrate that the system is not only verifiably secure when fewer than t authorities are compromised, but also robust when no fewer than t authorities are alive in the system. Besides, by efficiently combining the customary multi-authority scheme with the system, we build a hybrid one, which satisfies the scenario of attributes originating from various authorities and achieves security and system-level robustness.
---
paper_title: Identity-based proxy re-encryption
paper_content:
In a proxy re-encryption scheme a semi-trusted proxy converts a ciphertext for Alice into a ciphertext for Bob without seeing the underlying plaintext. A number of solutions have been proposed in the public-key setting. In this paper, we address the problem of Identity-Based proxy re-encryption, where ciphertexts are transformed from one identity to another. Our schemes are compatible with current IBE deployments and do not require any extra work from the IBE trusted-party key generator. In addition, they are non-interactive and one of them permits multiple re-encryptions. Their security is based on a standard assumption (DBDH) in the random oracle model.
---
paper_title: Research on Ciphertext-Policy Attribute-Based Encryption with Attribute Level User Revocation in Cloud Storage
paper_content:
Attribute-based encryption (ABE) schemes are more and more widely used in cloud storage, since they can achieve fine-grained access control. However, supporting dynamic user and attribute revocation in the original schemes is an important challenge. In order to solve this problem, this paper proposes a ciphertext-policy ABE (CP-ABE) scheme which can achieve attribute-level user revocation. In this scheme, if some attribute is revoked, then the ciphertext corresponding to this attribute will be updated so that only the individuals whose attributes meet the access control policy and have not been revoked will be able to carry out the key update and decrypt the ciphertext successfully. This scheme is proved selective-structure secure based on the q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption in the standard model. Finally, the performance analysis and experimental verification have been carried out in this paper, and the experimental results show that, compared with the existing revocation schemes, although our scheme increases the computational load of the cloud storage service provider (CSP) in order to achieve the attribute revocation, it does not need the participation of the attribute authority (AA), which reduces the computational load of the AA. Moreover, the user does not need any additional parameters to achieve the attribute revocation except for the private key, thus saving storage space greatly.
---
paper_title: A Survey of Attribute-based Access Control with User Revocation in Cloud Data Storage
paper_content:
Cloud storage service is a cloud service in which the cloud service provider offers storage space to customers. Because cloud storage has many advantages, including convenience, high computation power and capacity, it attracts users to outsource data to the cloud. However, outsourcing data directly to the cloud storage service is unsafe when the data is sensitive to the user. Therefore, ciphertext-policy attribute-based encryption is a promising cryptographic solution in the cloud environment, which allows the data owner to define the access policy for access control. Unfortunately, an outsourced architecture applied with attribute-based encryption introduces many challenges, one of which is revocation. This issue is a threat to the data owner's data security. In this paper, we survey related studies on cloud data storage with revocation and define their requirements. Then we explain and analyze four representative approaches. Finally, we provide some topics for future research.
---
paper_title: EASiER: encryption-based access control in social networks with efficient revocation
paper_content:
A promising approach to mitigate the privacy risks in Online Social Networks (OSNs) is to shift access control enforcement from the OSN provider to the user by means of encryption. However, this creates the challenge of key management to support complex policies involved in OSNs and dynamic groups. To address this, we propose EASiER, an architecture that supports fine-grained access control policies and dynamic group membership by using attribute-based encryption. A key and novel feature of our architecture, however, is that it is possible to remove access from a user without issuing new keys to other users or re-encrypting existing ciphertexts. We achieve this by creating a proxy that participates in the decryption process and enforces revocation constraints. The proxy is minimally trusted and cannot decrypt ciphertexts or provide access to previously revoked users. We describe EASiER architecture and construction, provide performance evaluation, and prototype application of our approach on Facebook.
---
paper_title: Concrete Attribute-Based Encryption Scheme with Verifiable Outsourced Decryption
paper_content:
As more sensitive data is shared and stored by third-party sites on the internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data is that it can be selectively shared only at a coarse-grained level. Attribute-based encryption is a public-key-based one-to-many encryption that allows users to encrypt and decrypt data based on user attributes. A promising application of ABE is flexible access control of encrypted data stored in the cloud, using access policies and ascribed attributes associated with private keys and ciphertexts. One of the main efficiency drawbacks of the existing ABE schemes is that decryption involves expensive pairing operations and the number of such operations grows with the complexity of the access policy. Finally, we show an implementation of the scheme and results of performance measurements, which indicate a significant reduction in the computing resources imposed on users.
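The outsourced-decryption pattern described above can be sketched with a blinded ElGamal key standing in for the ABE transformation key; the group parameters and the hash-based verification tag are toy stand-ins, not the cited construction. The cloud does the expensive exponentiation with the transformation key, the user finishes with one cheap operation, and the tag check detects an incorrect transformation. Requires Python 3.8+ for pow(x, -1, n).

import secrets, hashlib

p, q, g = 2039, 1019, 4                     # toy prime-order subgroup (p = 2q + 1)

sk = secrets.randbelow(q - 1) + 1           # user's decryption key
pk = pow(g, sk, p)

# Encryptor: ElGamal-encrypt a group element m and attach a verification tag H(m).
m = pow(g, 321, p)
r = secrets.randbelow(q - 1) + 1
c1, c2 = pow(g, r, p), (m * pow(pk, r, p)) % p
tag = hashlib.sha256(str(m).encode()).digest()

# User: blind the key with z and hand the cloud only the transformation key tk = sk / z.
z = secrets.randbelow(q - 1) + 1
tk = (sk * pow(z, -1, q)) % q

# Cloud: one expensive exponentiation with tk; it learns nothing about m.
partial = pow(c1, tk, p)                    # = g^(r * sk / z)

# User: finish decryption with the blinding factor z, then check the verification tag.
recovered = (c2 * pow(pow(partial, z, p), -1, p)) % p
assert recovered == m
assert hashlib.sha256(str(recovered).encode()).digest() == tag  # verify the cloud's work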
---
paper_title: Provably Secure Threshold-Based ABE Scheme Without Bilinear Map
paper_content:
In several distributed environments, users can decrypt a secret message using a certain number of valid attributes or credentials. Attribute-based encryption (ABE) is the most promising technique to achieve such fine-grained access control. In recent years, many ABE schemes have been proposed, but most of them are constructed based on the concepts of pairing and secret sharing schemes. This paper presents a pairing-free threshold-based ABE scheme (PT-ABE) over a multiplicative group. The proposed work is secure under the standard decisional Diffie–Hellman (DDH) assumption, and is both error-tolerant and collusion-free. The scheme does not rely on the random oracle model to prove its security. We compare the PT-ABE scheme with other relevant ABE schemes and find that our scheme is much more efficient and flexible than others. Besides, we propose a protocol based on the PT-ABE scheme and show that PT-ABE is perfectly suitable in a cloud environment to provide cloud security. To the best of our knowledge, the proposed scheme can be implemented in real-life distributed scenarios, as it is well secured, flexible and performs better than existing ones.
---
paper_title: Pairing-based CP-ABE with constant-size ciphertexts and secret keys for cloud environment
paper_content:
Ciphertext-policy attribute-based encryption (CP-ABE) scheme can be deployed in a mobile cloud environment to ensure that data outsourced to the cloud will be protected from unauthorized access. Since mobile devices are generally resource-constrained, CP-ABE schemes designed for a mobile cloud deployment should have constant sizes for secret keys and ciphertexts. However, most existing CP-ABE schemes do not provide both constant size ciphertexts and secret keys. Thus, in this paper, we propose a new pairing-based CP-ABE scheme, which offers both constant size ciphertexts and secret keys (CSCTSK) with an expressive AND gate access structure. We then show that the proposed CP-ABE-CSCTSK scheme is secure against chosen-ciphertext adversary in the selective security model, and present a comparative summary to demonstrate the utility of the scheme. Since mobile devices are generally resource-constrained and cloud services are Internet-based and pay-by-use, a key feature in ciphertext-policy attribute-based encryption (CP-ABE) should be constant sizes for secret keys and ciphertexts. In this paper, we propose a new pairing-based CP-ABE scheme, which offers both constant size ciphertexts and secret keys (CSCTSK) with an expressive AND gate access structure. We then show that the proposed CP-ABE-CSCTSK scheme is secure against chosen-ciphertext adversary in the selective security model, and demonstrate its utility.
---
paper_title: An efficient and expressive ciphertext-policy attribute-based encryption scheme with partially hidden access structures, revisited
paper_content:
Ciphertext-policy attribute-based encryption (CP-ABE) has been regarded as one of the promising solutions to protect data security and privacy in cloud storage services. In a CP-ABE scheme, an access structure is included in the ciphertext, which, however, may leak sensitive information about the underlying plaintext and the privileged recipients in that anyone who sees the ciphertext is able to learn the attributes of the privileged recipients from the associated access structure. In order to address this issue, CP-ABE with partially hidden access structures was introduced where each attribute is divided into an attribute name and an attribute value and the attribute values of the attributes in an access structure are not given in the ciphertext. Though a number of CP-ABE schemes with partially hidden access structures have been proposed, most of them only enable restricted access structures, whereas several other schemes supporting expressive access structures are computationally inefficient due to the fact that they are built in the composite-order groups. To our knowledge, there has been little attention paid to the design of expressive CP-ABE schemes with partially hidden access structures in the prime-order groups. In this paper, we revisit this problem, and present an expressive CP-ABE scheme supporting partially hidden access structures in the prime-order groups with improved efficiency.
---
paper_title: Converting pairing-based cryptosystems from composite-order groups to prime-order groups
paper_content:
We develop an abstract framework that encompasses the key properties of bilinear groups of composite order that are required to construct secure pairing-based cryptosystems, and we show how to use prime-order elliptic curve groups to construct bilinear groups with the same properties. In particular, we define a generalized version of the subgroup decision problem and give explicit constructions of bilinear groups in which the generalized subgroup decision assumption follows from the decision Diffie-Hellman assumption, the decision linear assumption, and/or related assumptions in prime-order groups. ::: ::: We apply our framework and our prime-order group constructions to create more efficient versions of cryptosystems that originally required composite-order groups. Specifically, we consider the Boneh-Goh-Nissim encryption scheme, the Boneh-Sahai-Waters traitor tracing system, and the Katz-Sahai-Waters attribute-based encryption scheme. We give a security theorem for the prime-order group instantiation of each system, using assumptions of comparable complexity to those used in the composite-order setting. Our conversion of the last two systems to prime-order groups answers a problem posed by Groth and Sahai.
---
paper_title: High efficient key-insulated attribute based encryption scheme without bilinear pairing operations
paper_content:
Attribute based encryption (ABE) has been widely applied for secure data protection in various data sharing systems. However, the efficiency of existing ABE schemes is not high enough, since running the encrypt and decrypt algorithms needs frequent bilinear pairing operations, which may occupy too many computing resources on terminal devices. What's more, since different users may share the same attributes in the system, a single user's private key exposure will threaten the security and confidentiality of the whole system. Therefore, to further decrease the computation cost in attribute based cryptosystems as well as provide secure protection when key exposure happens, in this paper, we firstly propose a highly efficient key-insulated ABE algorithm without pairings. The key-insulated mechanism guarantees both forward security and backward security when key exposure or user revocation happens. Besides, during the running of the algorithms in our scheme, users and the attribute authority need not run any bilinear pairing operations, which increases the efficiency to a large extent. The high efficiency and security analysis indicate that our scheme is more appropriate for secure protection in data sharing systems.
---
paper_title: An efficient access control scheme with outsourcing capability and attribute update for fog computing
paper_content:
Fog computing as an extension of cloud computing provides computation, storage and application services to end users. Ciphertext-policy attribute-based encryption (CP-ABE) is a well-known cryptographic technology for guaranteeing data confidentiality and fine-grained data access control. It enables data owners to define flexible access policies for data sharing. However, in CP-ABE systems, the problems of the time required for encryption, decryption and attribute update are long-standing and unsolved in the literature. In this paper, we propose the first access control (CP-ABE) scheme supporting outsourcing capability and attribute update for fog computing. Specifically, the heavy computation operations of encryption and decryption are outsourced to fog nodes; thus the computation for data owners to encrypt and for users to decrypt is independent of the number of attributes in the access structure and secret keys, respectively. The cost brought by attribute update is kept low in the sense that we only update the ciphertext associated with the corresponding updated attribute. The security analysis shows that the proposed scheme is secure under the decisional bilinear Diffie–Hellman assumption. The proposed scheme is efficient, and the time of encryption for data owners and decryption for users is small and constant. The computational ability of fog nodes is fully utilized during the access control, so only a tiny computing cost is left to end users with resource-constrained devices.
---
paper_title: Security and Privacy in Smart Health: Efficient Policy-Hiding Attribute-Based Access Control
paper_content:
With the rapid development of the Internet of Things and cloud computing technologies, smart health (s-health) is expected to significantly improve the quality of health care. However, data security and user privacy concerns in s-health have not been adequately addressed. As a well-received solution to realize fine-grained access control, ciphertext-policy attribute-based encryption (CP-ABE) has the potential to ensure data security in s-health. Nevertheless, direct adoption of the traditional CP-ABE in s-health suffers two flaws. For one thing, access policies are in cleartext form and reveal sensitive health-related information in the encrypted s-health records (SHRs). For another, it usually supports small attribute universe, which places an undesirable limitation on practical deployments of CP-ABE because the size of its public parameters grows linearly with the size of the universe. To address these problems, we introduce PASH, a privacy-aware s-health access control system, in which the key ingredient is a large universe CP-ABE with access policies partially hidden. In PASH, attribute values of access policies are hidden in encrypted SHRs and only attribute names are revealed. In fact, attribute values carry much more sensitive information than generic attribute names. Particularly, PASH realizes an efficient SHR decryption test which needs a small number of bilinear pairings. The attribute universe can be exponentially large and the size of public parameters is small and constant. Our security analysis indicates that PASH is fully secure in the standard model. Performance comparisons and experimental results show that PASH is more efficient and expressive than previous schemes.
---
paper_title: Secure and fine-grained access control on e-healthcare records in mobile cloud computing
paper_content:
In the era of the Internet of Things, wearable devices can be used to monitor residents' health and upload collected health data to cloud servers for sharing, which facilitates the development of e-healthcare record (EHR) systems. However, before finding wide applications, EHR systems have to tackle privacy and efficiency challenges. For one thing, the confidentiality of EHRs is one of the most important issues of concern to patients. For another, wearable devices in mobile cloud computing are often resource-constrained to some extent. In this paper, we propose a fine-grained EHR access control scheme which is proven secure in the standard model under the decisional parallel bilinear Diffie–Hellman exponent assumption. In the proposed scheme, an EHR owner can generate offline ciphertexts before knowing EHR data and access policies, which performs a majority of the computation tasks. Furthermore, the online phase can rapidly assemble the final ciphertexts when EHR data and access policies become known. Our EHR access control scheme allows access policies encoded in linear secret sharing schemes. Extensive performance comparisons and simulation results indicate that the proposed solution is very suitable for mobile cloud computing.
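The offline/online split in this abstract follows the same pattern as offline/online ElGamal, sketched below with toy parameters; the real scheme applies the analogous split to the ABE ciphertext components, so this is a conceptual illustration rather than the cited construction. All exponentiations are done before the EHR data is known, and the online step is a single modular multiplication. Requires Python 3.8+ for pow(x, -1, p).

import secrets

p, q, g = 2039, 1019, 4                       # toy prime-order subgroup (p = 2q + 1)
sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def offline_phase():
    """Run while the device is idle, before the message is known: all exponentiations."""
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), pow(pk, r, p)        # precomputed (g^r, pk^r)

def online_phase(precomputed, m):
    """Run when the data arrives: a single modular multiplication."""
    c1, mask = precomputed
    return c1, (m * mask) % p

pool = [offline_phase() for _ in range(8)]    # precompute a pool of ciphertext 'blanks'

m = pow(g, 77, p)                             # message as a group element (e.g., a data key)
c1, c2 = online_phase(pool.pop(), m)
assert (c2 * pow(pow(c1, sk, p), -1, p)) % p == m   # receiver decrypts as usual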
---
paper_title: Large universe attribute based access control with efficient decryption in cloud storage system
paper_content:
Propose a large universe attribute based access control scheme with efficient decryption. Solve the problem that ciphertext size and decryption time scale with the complexity of the access structure. Our scheme reduces ciphertext size and decryption time without the cloud computing server knowing the underlying plaintext. Our scheme verifies the correctness of the transformation done by the cloud computing server. Use PowerTutor to test the power consumed by OutKeyGen, Decrypt, and OutDecrypt on Android. The Ciphertext Policy Attribute Based Encryption scheme is a promising technique for access control in cloud storage, since it allows the data owner to define an access policy over the outsourced data. However, the existing attribute based access control mechanism in cloud storage is based on a small universe construction, where the attribute set is defined at setup, and the size of the public parameters scales with the number of attributes. Since a large number of new attributes need to be added to the system over time, small universe attribute based access control is no longer suitable for cloud storage, whereas large universe attribute based encryption, where any string can be employed as an attribute and attributes are not required to be enumerated at system setup, meets this requirement. Unfortunately, one of the main efficiency drawbacks of existing large universe attribute based encryption is that ciphertext size and decryption time scale with the complexity of the access structure. In this work, we propose a large universe attribute based access control scheme with efficient decryption. The user provides the cloud computing server with a transformation key with which the cloud computing server transforms the ciphertext, associated with an access structure satisfied by the attributes of the private key, into a simple and short ciphertext; this significantly reduces the time for the user to decrypt the ciphertext without the cloud computing server knowing the underlying plaintext, and the user can check whether the transformation done by the cloud computing server is correct to verify transformation correctness. Security analysis and performance evaluation show our scheme is secure and efficient.
---
paper_title: Efficient and robust attribute-based encryption supporting access policy hiding in Internet of Things
paper_content:
The Internet of Things (IoT) remarkably increases the ubiquity of the internet by integrating smart object-based infrastructures. How to achieve efficient fine-grained data access control while preserving data privacy is a challenging task in the IoT scenario. Ciphertext-policy attribute-based encryption (CP-ABE) can provide fine-grained data access control by allowing only the specific users whose attributes match the access policy to decrypt ciphertexts. However, existing CP-ABE schemes leak users' attribute values to the attribute authority (AA) in the phase of key generation, which poses a significant threat to users' privacy. To address this issue, we propose a new CP-ABE scheme which can successfully protect the users' attribute values against the AA based on the 1-out-of-n oblivious transfer technique. In addition, we use an Attribute Bloom Filter to protect the attribute types of the access policy in the ciphertext. Finally, security and efficiency evaluations show that the proposed scheme can achieve the desired security goals while keeping comparable computation overhead. We propose an efficient and robust attribute-based encryption scheme supporting hidden access policies based on 1-out-of-n oblivious transfer. Our scheme does not require any change to the outsourced data when new data is uploaded. We show how to extend the proposed scheme to support multi-user scenarios.
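The Attribute Bloom Filter mentioned above can be pictured with a generic Bloom filter over attribute names; this is not the exact construction in the paper, and the class name, filter sizes and sample attributes are invented for illustration. The names used by the hidden policy are inserted into a filter published with the ciphertext, so a decryptor can test which of its own attributes are relevant without the policy being listed in clear; false positives are possible by design, false negatives are not.

import hashlib

class AttributeBloomFilter:
    """Lets a decryptor test which attribute names a hidden policy uses, without listing them."""
    def __init__(self, m_bits=256, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, attribute):
        # k independent hash positions derived from SHA-256 with an index prefix.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{attribute}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, attribute):
        for pos in self._positions(attribute):
            self.bits |= 1 << pos

    def might_contain(self, attribute):
        return all((self.bits >> pos) & 1 for pos in self._positions(attribute))

# Encryptor: insert the attribute names used by the hidden policy.
abf = AttributeBloomFilter()
for attr in ["cardiologist", "hospital_A"]:
    abf.add(attr)

# Decryptor: test its own attributes against the filter before attempting decryption.
print(abf.might_contain("cardiologist"))     # True
print(abf.might_contain("insurance_agent"))  # almost certainly False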
---
paper_title: Secure attribute-based data sharing for resource-limited users in cloud computing
paper_content:
Data sharing has become an exceptionally attractive service supplied by cloud computing platforms because of its convenience and economy. As a potential technique for realizing fine-grained data sharing, attribute-based encryption (ABE) has drawn wide attention. However, most of the existing ABE solutions suffer from the disadvantages of high computation overhead and weak data security, which has severely impeded resource-constrained mobile devices from customizing the service. The problem of simultaneously achieving fine-grainedness, high efficiency on the data owner's side, and standard data confidentiality of cloud data sharing actually still remains unresolved. This paper addresses this challenging issue by proposing a new attribute-based data sharing scheme suitable for resource-limited mobile users in cloud computing. The proposed scheme eliminates a majority of the computation task by adding system public parameters besides moving partial encryption computation offline. In addition, a public ciphertext test phase is performed before the decryption phase, which eliminates most of the computation overhead due to illegitimate ciphertexts. For the sake of data security, a Chameleon hash function is used to generate an immediate ciphertext, which will be blinded by the offline ciphertexts to obtain the final online ciphertexts. The proposed scheme is proven secure against adaptively chosen-ciphertext attacks, which is widely recognized as a standard security notion. Extensive performance analysis indicates that the proposed scheme is secure and efficient.
---
paper_title: Fine-grained access control system based on fully outsourced attribute-based encryption
paper_content:
First fully outsourced attribute-based encryption scheme. Lightweight operations for the private key generator and users. Imperceptible communication cost for the private key generator and users. Rigorous theoretical and detailed experimental analyses of our proposal. Suitable for cloud applications on mobile devices. Attribute-based encryption (ABE) has the potential to be applied in cloud computing applications to provide fine-grained access control over encrypted data. However, the computation cost of ABE is considerably expensive, because the pairing and exponentiation operations grow with the complexity of the access formula. In this work, we propose a fully outsourced ciphertext-policy ABE scheme that for the first time achieves outsourced key generation, encryption and decryption simultaneously. In our scheme, heavy computations are outsourced to public cloud service providers, leaving no complex operations for the private key generator (PKG) and only one modular exponentiation for the sender or the receiver, and the communication cost of the PKG and users is optimized. Moreover, we give the security proof and implement our scheme in Charm, and the experimental results indicate that our scheme is efficient and practical.
---
paper_title: Improving Privacy and Security in Decentralizing Multi-Authority Attribute-Based Encryption in Cloud Computing
paper_content:
Decentralizing multi-authority attribute-based encryption (ABE) has been adopted for solving problems arising from sharing confidential corporate data in cloud computing. For decentralizing multi-authority ABE systems that do not rely on a central authority, collusion resistance can be achieved using a global identifier. Therefore, identity needs to be managed globally, which results in the crucial problems of privacy and security. A scheme is developed that does not use a central authority to manage users and keys, and only simple trust relations need to be formed by sharing the public key between each attribute authority (AA). User identities are made unique by combining a user's identity with the identity of the AA where the user is located. When a key request needs to be made to an authority outside the domain, the request is performed by the authority in the current domain rather than by the user, so user identities remain private to the AA outside the domain, which enhances privacy and security. In addition, the key issuing protocol between AAs is simple as a result of the trust relationship between AAs. Moreover, extensibility for authorities is also supported by the scheme presented in this paper. The scheme is based on composite order bilinear groups. A proof of security is presented that uses the dual system encryption methodology.
---
paper_title: Improving Privacy and Security in Decentralized Ciphertext-Policy Attribute-Based Encryption
paper_content:
In previous privacy-preserving multiauthority attribute-based encryption (PPMA-ABE) schemes, a user can acquire secret keys from multiple authorities with them knowing his/her attributes and, furthermore, a central authority is required. Notably, a user's identity information can be extracted from some of his/her sensitive attributes. Hence, existing PPMA-ABE schemes cannot fully protect users' privacy, as multiple authorities can collaborate to identify a user by collecting and analyzing his attributes. Moreover, ciphertext-policy ABE (CP-ABE) is a more efficient public-key encryption, where the encryptor can select flexible access structures to encrypt messages. Therefore, a challenging and important task is to construct a PPMA-ABE scheme in which there is no need for a central authority and, furthermore, both the identifiers and the attributes can be protected from being known by the authorities. In this paper, a privacy-preserving decentralized CP-ABE (PPDCP-ABE) is proposed to reduce the trust on the central authority and protect users' privacy. In our PPDCP-ABE scheme, each authority can work independently without any collaboration to initialize the system and issue secret keys to users. Furthermore, a user can obtain secret keys from multiple authorities without them knowing anything about his global identifier and attributes.
---
| Title: Survey on Revocation in Ciphertext-Policy Attribute-Based Encryption
Section 1: Introduction
Description 1: Introduce the topic, its importance in cloud storage and data security, and highlight the focus on CP-ABE and the revocation issue.
Section 2: Elliptic Curve Cryptography
Description 2: Explain elliptic curve cryptography, its principles, and its relevance to CP-ABE.
Section 3: Attribute-Based Encryption
Description 3: Provide an overview of attribute-based encryption, differentiating between KP-ABE and CP-ABE, and discuss key requirements for ABE systems.
Section 4: Ciphertext-Policy Attribute-Based Encryption (CP-ABE)
Description 4: Describe the CP-ABE mechanism, its entities, and operational processes, and discuss its merits and issues.
Section 5: The Revocation Problem
Description 5: Detail the importance of revocation in access control, challenges faced, and types of revocation in CP-ABE schemes.
Section 6: The Types of the CP-ABE Scheme
Description 6: Classify CP-ABE schemes into single authority and multiauthority, describing the advantages and limitations of each.
Section 7: The Single Authority Scheme
Description 7: Discuss the characteristics, benefits, and challenges of single authority CP-ABE schemes with examples.
Section 8: The Multiauthority Attribute Based Access Control System
Description 8: Elaborate on multiauthority systems, their advantages over single authority schemes, and the associated challenges.
Section 9: Research Challenges and Future Directions
Description 9: Identify current research challenges in CP-ABE revocation and potential future directions for enhancing security and efficiency.
Section 10: Conclusions
Description 10: Summarize the findings, emphasize the need for efficient revocation techniques, and propose future research areas. |
Small-signal stability analysis for the multi-terminal VSC MVDC distribution network; a review | 8 | ---
paper_title: VSC-Based MVDC Railway Electrification System
paper_content:
This paper proposes a new railway electrification system in which the voltage-source converter (VSC) becomes the basic building block. This will allow existing railways, comprising several ac and dc subsystems, to be transformed into simpler medium-voltage dc (MVDC) multiterminal power systems feeding mobile loads. Moreover, the VSC-based unified scheme will substantially facilitate the connectivity among otherwise heterogeneous railway systems, while the integration of distributed generation and storage is achieved in a straightforward fashion. In addition to the general MVDC architecture, details are provided about the dc catenary layout, dual-voltage locomotive configurations, and dc-dc links between urban and long-distance railways. The need for a supervisory control system, and its role in coordinating local VSC controllers, so that the resulting power flows are optimized while the catenary voltage is kept within limits, are discussed. The proposed railway electrification paradigm is compared with the standard 25-kV, ac electrification system by means of a real case study.
---
paper_title: Constant power loads in More Electric Vehicles - an overview
paper_content:
Power electronic converters are playing an ever increasingly important role in today's electric vehicles. These regulated power electronic converters act as Constant Power Loads (CPLs) to the converters driving them. A CPL has a negative input impedance which results in a destabilising effect on closed-loop converters driving them. The use of power electronics in More Electric Vehicles (MEVs) is discussed, along with the CPLs they present and how they are modeled. Negative impedance instability is then discussed and the stability of various DC-DC converter topologies when loaded by CPLs is analyzed in both voltage-mode and current-mode control operating in Continuous Conduction Mode (CCM) and Discontinuous Conduction Mode (DCM). Finally, the currently proposed control strategies that can stabilize a power electronic converter are discussed.
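The destabilising behaviour described above can be made concrete with a one-line calculation: for an ideal CPL drawing constant power P at bus voltage v, the incremental input resistance is dv/di = -v^2/P. The snippet below is only an illustrative sketch with assumed numbers (600 V, 50 kW), not data from the paper.

```python
# Illustrative sketch (assumed values, not from the paper): incremental input
# resistance of an ideal constant power load. With p = v * i held constant,
# i = P / v, hence di/dv = -P / v**2 and dv/di = -v**2 / P (negative).
V_BUS = 600.0    # assumed dc bus voltage [V]
P_CPL = 50e3     # assumed load power [W]

i_op  = P_CPL / V_BUS        # operating-point current [A]
r_inc = -V_BUS**2 / P_CPL    # incremental (small-signal) resistance [ohm]

print(f"operating point: {V_BUS:.0f} V, {i_op:.1f} A")
print(f"incremental resistance: {r_inc:.2f} ohm (negative -> destabilising)")
```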
---
paper_title: POWER ELECTRONICS FOR DISTRIBUTED ENERGY SYSTEMS AND TRANSMISSION AND DISTRIBUTION APPLICATIONS
paper_content:
Power electronics can provide utilities the ability to more effectively deliver power to their customers while providing increased reliability to the bulk power system. In general, power electronics is the process of using semiconductor switching devices to control and convert electrical power flow from one form to another to meet a specific need. These conversion techniques have revolutionized modern life by streamlining manufacturing processes, increasing product efficiencies, and increasing the quality of life by enhancing many modern conveniences such as computers, and they can help to improve the delivery of reliable power from utilities. This report summarizes the technical challenges associated with utilizing power electronics devices across the entire spectrum from applications to manufacturing and materials development, and it provides recommendations for research and development (R&D) needs for power electronics systems in which the U.S. Department of Energy (DOE) could make a substantial impact toward improving the reliability of the bulk power system.
---
paper_title: A case for medium voltage DC for distribution circuit applications
paper_content:
In this survey paper, applications of medium voltage direct current (MVDC) to distribution substations are discussed and compared to conventional ac for power conversion from transmission voltage levels to distribution voltage levels. Technical changes in the substation due to the dc equipment, efficiency of the substation, and protective measures are major aspects to consider in the MVDC concept development.
---
paper_title: Impact and opportunities of medium-voltage DC grids in urban railway systems
paper_content:
Due to the increasing number of power-electronic components connected to the grid, medium-voltage dc (MVDC) grids are a promising alternative to established ac distribution grids. Within this work, the advantages and additional flexibility of substations for (urban) railways connected to future MVDC distribution grids are investigated. Possible topologies are presented and the efficiencies are compared.
---
paper_title: VSC HVDC transmission corridor: An option for PV power injection and AC network stability support
paper_content:
The pressure of increasing power demand and supply inequality is forcing utilities to interconnect AC systems to meet demand. High Voltage Direct Current (HVDC) schemes are becoming a more attractive solution, as they have been used extensively in interconnected power systems worldwide. This paper investigates the role of a voltage source converter (VSC) based HVDC transmission corridor for PV power injection and for AC network stability support. An overview of the Namibian Caprivi-link interconnector as a case study, the potential of very large scale PV in Namibia, and the prospects of PV power injection on the DC link are presented. The system is modelled and simulated in Matlab/Simulink. Critical contingencies, such as sudden island conditions and a three-phase-to-ground fault, are simulated with and without PV penetration. Results show the stability support provided to the AC side networks by PV power injection on the DC link.
---
paper_title: Design and simulation of a DC electric vehicle charging station connected to a MVDC infrastructure
paper_content:
Medium Voltage DC (MVDC) infrastructure serves as a platform for interconnecting renewable electric power generation, including wind and solar. Abundant loads such as industrial facilities, data centers, and electric vehicle charging stations (EVCS) can also be powered using MVDC technology. MVDC networks are expected to improve efficiency by serving as an additional layer between the transmission and distribution level voltages for which generation sources and loads could directly connect with smaller rated power conversion equipment. This paper investigates an EVCS powered by a MVDC bus. A bidirectional DC-DC converter with appropriate controls serves as the interface between the EVCS and MVDC bus. Two scenarios are investigated for testing and comparing EVCS operation: 1) EVCS power supplied by the interconnected MVDC model and 2) EVCS power supplied by an equivalent voltage source. Comparisons between both are discussed. The CCM/DCM buck mode operation of the bidirectional DC-DC converter is explored as well as the system isolation benefits that come with its use.
---
paper_title: Medium Voltage DC Network Modeling and Analysis with Preliminary Studies for Optimized Converter Configuration Through PSCAD Simulation Environment
paper_content:
With the advancement of high capacity power electronics technologies, most notably in high voltage direct current (HVDC) applications, the concept of developing and implementing future transmission networks through a DC backbone presents a realistic and advantageous option over traditional AC approaches. Currently, most consumer electrical equipment requires DC power to function, thus requiring an AC/DC conversion. New forms of distributed generation, such as solar photovoltaic power, produce a direct DC output. Establishing an accessible and direct supply of DC power to serve such resources and loads creates the potential to mitigate losses experienced in the AC/DC conversion process, reduce overall electrical system infrastructure, and lessen the amount of power generated from power plants, as well as other advantages. For the reasons listed, medium voltage DC (MVDC) networks represent a promising initial platform for interconnecting relatively low voltage generation resources such as photovoltaic panels, serving loads, and supplying other equipment on a common DC bus bar. Future industrial parks, ship power systems, hybrid plug-in vehicles, and energy storage systems are all avenues for future implementation of the concept. This thesis introduces an initial design and simulation model of the MVDC network concept containing renewable generation, power electronic converters, and induction machine loads. Each of the equipment models is developed in PSCAD and validated analytically. The models of the represented system equipment and components are individually presented and accompanied by their simulated results to demonstrate the validity of the overall model. Finally, the equipment models are assembled into a meshed system to perform traditional preliminary studies on the overall power system, including wind speed adjustments, load energizing, and fault-clearing analysis, in order to evaluate aspects of various operational phenomena such as potential overvoltages, system stability issues, and other unexpected occurrences.
---
paper_title: The potential of distributed generation to provide ancillary services
paper_content:
The growing concerns regarding electric power quality and availability have led to the installation of more and more distributed generation. In parallel and in the context of an accelerating trend towards deregulation of the electric industry, the unbundling of services, many grouped under ancillary services, should create a market for some of these services. This paper discusses the potential of distributed generation (DG) to provide some of these services. In particular, DG can serve locally as the equivalent of a spinning reserve and voltage support of the AC bus. The main types of distributed generation with emphasis on the power electronic interface and the configurations appropriate to provide ancillary services are reviewed. The flexibility and features provided by the power electronic interface are illustrated. In addition to control of the real power, other functions can be incorporated into the design of the interface to provide services, such as reactive power, and resources associated with power quality. These include voltage sag compensation and harmonic filtering. The implications on the design of the power converter interface are discussed.
---
paper_title: Wind Farm Grid Integration Using VSC Based HVDC Transmission - An Overview
paper_content:
The paper gives an overview of HVAC and HVDC connection of wind farm to the grid, with an emphasis on voltage source converter (VSC)-based HVDC for large wind farms requiring long distance cable connection. Flexible control capabilities of a VSC-based HVDC system enables smooth integration of wind farm into the power grid network while meeting the grid code requirements (GCR). Operation of a wind farm with VSC-based HVDC connection is described.
---
paper_title: DC distribution for industrial systems: opportunities and challenges
paper_content:
This paper investigates the opportunities and challenges associated with adopting a dc distribution scheme for industrial power systems. A prototype dc distribution system has been simulated to investigate the issues. One of the issues focused on is the interaction between power converters that are used to convert ac to dc and dc to ac. Another challenging issue investigated is the system grounding. These issues become challenging mainly due to the neutral voltage shift associated with the power converters. The paper shows that converter interactions can be minimized with proper filtering and control on the converters. The paper also proposes a grounding scheme and shows that this scheme provides an effective solution by keeping the neutral voltages low under normal conditions and by limiting the fault currents during fault conditions. With these features, dc distribution provides very reliable and high-quality power.
---
paper_title: Voltage transient propagation in AC and DC datacenter distribution architectures
paper_content:
Global industries, governments, organizations, and institutions rely on the operation of datacenters in order to successfully meet their day-to-day objectives. The increased reliance on datacenters combined with the growth of the industry sector has led to more careful considerations for datacenter design. Stakeholders in the datacenter industry have begun to consider alternative electrical distribution systems that provide greater reliability and operating efficiency, as well as lower capital costs and ease-of-installation. Among the many alternatives proposed, facility-level DC distribution has emerged as the option providing the highest operating efficiency.
---
paper_title: MVDC marine electrical distribution: Are we ready?
paper_content:
Marine high-power on-board electrical systems predominantly utilize three-phase medium voltage alternating current (MVAC) distribution. Depending on the size and purpose of the ship, on-board electrical systems supply loads in excess of 60 MW. Increasingly, medium voltage direct current (MVDC) distribution systems are being considered as an alternative. An increase in the fuel efficiency of the prime movers, the removal of the bulky low-frequency transformer and the easier integration of different storage technologies are especially attractive. This paper discusses the opportunities but also the technical difficulties associated with the transition from an MVAC to an MVDC system, using an existing LNG tanker MVAC on-board distribution system as an example.
---
paper_title: Medium-voltage DC distribution grids in urban areas
paper_content:
The potential of medium-voltage dc (MVDC) grids in various applications has been under investigation by different researchers. In several works, the advantages of implementing MVDC technology into collector grids of wind and photovoltaic (PV) parks are pointed out. The main driver is that the grid-side inverter of the distributed generators as well as lossy grid filters and bulky 50 Hz transformer become obsolete. However, MVDC grids are a promising alternative to established ac grids not only for power generation. Within this work, the advantages regarding system efficiency and investment costs of using MVDC for the distribution of electrical energy is presented. Conventional low-voltage ac-grids and possible medium-voltage ac grids are also analyzed to compare the different implementation approaches.
---
paper_title: Constant power loads and negative impedance instability in automotive systems: definition, modeling, stability, and control of power electronic converters and motor drives
paper_content:
Power electronic converters and electric motor drives are being put into use at an increasingly rapid rate in advanced automobiles. However, the new advanced automotive electrical systems employ multivoltage level hybrid ac and dc as well as electromechanical systems that have unique characteristics, dynamics, and stability problems that are not well understood due to the nonlinearity and time dependency of converters and because of their constant power characteristics. The purpose of this paper is to present an assessment of the negative impedance instability concept of the constant power loads (CPLs) in automotive power systems. The main focus of this paper is to analyze and propose design criteria of controllers for automotive converters/systems operating with CPLs. The proposed method is to devise a new comprehensive approach to the applications of power electronic converters and motor drives in advanced automotive systems. Sliding-mode and feedback linearization techniques along with large-signal phase plane analysis are presented as methods to analyze, control, and stabilize automotive converters/systems with CPLs
---
paper_title: Optimum design of medium-voltage DC collector grids depending on the offshore-wind-park power
paper_content:
DC collector grids within offshore wind park offer advantages regarding efficiency and investment costs. Dual-active bridge (DAB) dc-dc converter systems can be applied for stepping-up the dc output-voltage of wind turbines (WT). The increase of the voltage level reduces the effort for cabling and increases the system efficiency. Within this paper, the advantage of DAB converter systems used in offshore medium-voltage dc (MVDC) grids is presented. Also, the optimum design of wind park clusters regarding the wake effect and grid losses is investigated to maximize the energy yield.
---
paper_title: Issues of Connecting Wind Farms into Power Systems
paper_content:
Wind power industry is developing rapidly, more and more wind farms are being connected into power systems. Integration of large scale wind farms into power systems presents some challenges that must be addressed, such as system operation and control, system stability, and power quality. This paper describes modern wind power systems, presents requirements of wind turbine connection and discusses the possible control methods for wind turbines to meet the specifications
---
paper_title: Impact of DFIG Based Offshore Wind Farms Connected Through VSC-HVDC Link on Power System Stability
paper_content:
With the increased levels of offshore wind power penetration into power systems, the impact of offshore wind power on stability of power systems require more investigation. In this paper, the effects of a large scale doubly fed induction generator (DFIG) based offshore wind farm (OWF) on power system stability are examined. The OWF is connected to the main onshore grid through a voltage source converter (VSC) based high voltage direct current (HVDC) link. A large scale DFIG based OWF is connected to the New England 10-machine 39-bus test system through a VSC-HVDC. One of the synchronous generators in the test system is replaced by an OWF with an equivalent generated power. As the voltage source converter can control the active and reactive power independently, the use of the onshore side converter to control its terminal voltage is investigated. The behaviour of the test system is evaluated under both small and large grid disturbances in both cases with and without the offshore wind farm.
---
paper_title: Design aspects of a medium-voltage direct current (MVDC) grid for a university campus
paper_content:
Today's power systems use alternating current (ac) for transmission and distribution of electrical energy, although the first grids were based on direct current (dc). Due to the absence of appropriate equipment to change voltage levels dc technology did not become widely accepted and was finally ruled out by the more efficient ac infrastructure. However, as a result of considerable technical progress, high-voltage direct current (HVDC) transmission has found its way back into power systems. At lower voltage and power levels medium-voltage dc (MVDC) distribution has been proposed for offshore wind farms and industrial applications. This paper describes the design of an MVDC grid for the interconnection of high-power test benches at a university campus. Voltage control within the dc grid as well as the behavior in different fault scenarios is analyzed using numerical simulations. To assess the environmental impact of the grid the magnetic flux density emitted by the dc cable lines is calculated.
---
paper_title: Ship to Grid: Medium-Voltage DC Concepts in Theory and Practice
paper_content:
Corporate research centers, universities, power equipment vendors, end users, and other market participants around the world are beginning to explore and consider the use of dc in future transmission and distribution system applications. Recent developments and trends in electric power consumption indicate an increasing use of dc-based power and constant power loads. In addition, growth in renewable energy resources requires dc interfaces for optimal integration. A strong case is being made for intermeshed ac and dc networks, with new concepts emerging at the medium-voltage (MV) level for MV dc infrastructure developments.
---
paper_title: Power System Stability and Control
paper_content:
Part I: Characteristics of Modern Power Systems. Introduction to the Power System Stability Problem. Part II: Synchronous Machine Theory and Modelling. Synchronous Machine Parameters. Synchronous Machine Representation in Stability Studies. AC Transmission. Power System Loads. Excitation in Stability Studies. Prime Mover and Energy Supply Systems. High-Voltage Direct-Current Transmission. Control of Active Power and Reactive Power. Part III: Small Signal Stability. Transient Stability. Voltage Stability. Subsynchronous Oscillations. Mid-Term and Long-Term Stability. Methods of Improving System Stability.
---
paper_title: Power electronic technologies for flexible DC distribution grids
paper_content:
Market liberalization has significantly changed the energy supply system in Europe, i.e. from a top-down centralized power generation system towards a more decentralized system. In addition, partially due to incentive programs, vast amounts of renewable power generator systems (mostly wind and PV) have been installed. More flexible grid structures are needed to cope with this new landscape of distributed generation. This paper explores the role of state-of-the-art power electronics to realize the required infrastructure. The potential efficiency gains and cost savings that can be realized by using DC-to-DC converters in electronic substations are presented in detail.
---
paper_title: Modeling, analysis, and validation of a preliminary design for a 20 kV medium voltage DC substation
paper_content:
With the advancement of high capacity power electronics technologies, most notably in HVDC applications, the concept of developing and implementing future transmission subsystems through a DC backbone presents a realistic and advantageous option over traditional AC approaches. Currently, electrical equipment or devices requiring DC power to function, whether loads or resources, necessitate AC/DC conversion technologies. Having an accessible and direct supply of DC power to serve such loads and resources creates the potential to mitigate losses experienced in the AC/DC conversion process, reduce overall electrical system infrastructure, and lessen the amount of power generated from power plants, as well as other advantages. This paper introduces an initial design and simulation model of a medium voltage DC (MVDC) substation concept containing renewable generation, power electronic converters, and induction machine loads. Each of the components is developed and modeled in PSCAD and validated analytically. The models of the represented system equipment and components are individually presented and accompanied with their simulated results to demonstrate the validity of the overall model. Future work will build upon the model to develop additional loads and resources, control strategies for optimized integration, and more detailed component models.
---
paper_title: MVDC - The New Technology for Distribution Networks
paper_content:
MVDC is starting to be considered as an option for enhancing transfer capacity and providing increased power quality in distribution networks. The term "soft open-point" is starting to be used for a device that can provide controlled power transfer between two 11 kV or 33 kV distribution groups without affecting short-circuit levels, voltage differences, loop flows or limitations due to phase-angle differences. The 4-quadrant converters can provide reactive power support and voltage control at each end of the link, and multi-terminal operation is also feasible. There are future technology opportunities, including enhancement of existing corridors through the conversion of existing AC lines to DC. This paper provides a technology overview as well as information on recently deployed projects, ranging from the linking of oil and gas platforms through to an urban infeed. It summarises the benefits of MVDC and the applications where it may provide a competitive or preferential alternative solution to conventional technology.
---
paper_title: A review of design criteria for low voltage DC distribution stability
paper_content:
The performance advantages of Low Voltage Direct Current electrical distribution are becoming clearer; however, the commercial opportunities, design processes and standardisation are currently lacking. This paper presents an overview of the current development status of LVDC distribution and reviews the modelling and stability criteria available to designers of DC distribution systems for land, aerospace and marine power system applications.
---
paper_title: Understanding of tuning techniques of converter controllers for VSC-HVDC
paper_content:
A mathematical model of a voltage source converter is presented in the synchronous reference frame for investigating VSC-HVDC for transferring wind power through a long distance. This model is used to analyze voltage and current control loops for the VSC and study their dynamics. Vector control is used for decoupled control of active and reactive power and the transfer functions are derived for the control loops. In investigating the operating conditions for HVDC systems, the tuning of controllers is one of the critical stages of the design of control loops. Three tuning techniques are discussed in the paper and analytical expressions are derived for calculating the parameters of the current and voltage controllers. The tuning criteria are discussed and simulations are used to test the performance of such tuning techniques.
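One tuning rule frequently used for the inner current loop of a VSC is modulus optimum, where the PI zero cancels the dominant pole of the RL filter and the gain is set by the small converter/sampling delay. The sketch below illustrates that rule with assumed filter and delay values; it is a generic example and not necessarily one of the parameterisations analysed in the paper.

```python
# Minimal sketch (assumed values): modulus-optimum tuning of the PI current
# controller for an RL filter, plant G(s) = 1 / (R + s L), with an equivalent
# small time constant T_a lumping the PWM and sampling delays.
R_f = 0.05       # assumed filter resistance [ohm]
L_f = 2e-3       # assumed filter inductance [H]
T_a = 250e-6     # assumed sum of small time constants [s]

tau = L_f / R_f          # plant time constant, cancelled by the PI zero
Kp  = L_f / (2 * T_a)    # proportional gain
Ki  = R_f / (2 * T_a)    # integral gain (= Kp / tau)

print(f"tau = {tau*1e3:.1f} ms, Kp = {Kp:.2f}, Ki = {Ki:.1f}")
# The outer dc-voltage loop, whose plant contains an integrator, is commonly
# tuned with the symmetrical-optimum rule instead.
```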
---
paper_title: Admittance space stability analysis of power electronic systems
paper_content:
Power electronics based power distribution systems (PEDSs) are becoming increasingly common, particularly in marine and aerospace applications. Stability analysis of this class of systems is crucial due to the potential for negative impedance instability. Existing techniques of stability analysis introduce artificial conservativeness, are sensitive to component grouping, and at the same time do not explicitly address uncertainties and variations in operating point. A new stability criterion, which reduces artificial conservativeness and is also insensitive to component grouping is described. In addition, a means of readily establishing design specifications from an arbitrary stability criterion which specifically includes a provision to incorporate uncertainty, parameter variation, and nonlinearities is set forth. The method is presented in the context of a hardware test system and is experimentally validated.
---
paper_title: A review of design criteria for low voltage DC distribution stability
paper_content:
The performance advantages of Low Voltage Direct Current electrical distribution are becoming clearer; however, the commercial opportunities, design processes and standardisation are currently lacking. This paper presents an overview of the current development status of LVDC distribution and reviews the modelling and stability criteria available to designers of DC distribution systems for land, aerospace and marine power system applications.
---
paper_title: Null double injection and the extra element theorem
paper_content:
The extra element theorem (EET) states that any transfer function of a linear system can be expressed in terms of its value when a given 'extra' element is absent, and a correction factor involving the extra element and two driving-point impedances seen by the element. In the present work, the EET is derived and applied to several examples in a manner that has been developed and refined in the classroom over a number of years. The concept of null double injection is introduced first, because it is the key to making easy the calculation of the two driving-point impedances needed for the EET correction factor. The EET for series and parallel elements is then considered, and attention is also given to the EET as an analysis tool, to the symmetry of the two forms of the EET, and to return ratios and sensitivity.
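For reference, the result is usually written in the following standard form (this is the commonly cited statement of the EET, not a quotation from the article): for an extra element of impedance Z(s) attached at a designated port,

```latex
% Standard form of the extra element theorem (EET):
%   H_inf : transfer function with the element removed (port open-circuited)
%   Z_D   : ordinary driving-point impedance seen by the element
%   Z_N   : null double injection driving-point impedance (output nulled)
H(s) \;=\; H_{\infty}(s)\,
\frac{1 + Z_{N}(s)/Z(s)}{1 + Z_{D}(s)/Z(s)}
```

The dual (second) form is referenced instead to the element short-circuited, which is the symmetry between the two forms that the abstract mentions.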
---
paper_title: Comprehensive Review of Stability Criteria for DC Power Distribution Systems
paper_content:
Power-electronics-based dc power distribution systems, consisting of several interconnected feedback-controlled switching converters, suffer from potential degradation of stability and dynamic performance caused by negative incremental impedances due to the presence of constant power loads. For this reason, the stability analysis of these systems is a significant design consideration. This paper reviews all the major stability criteria for dc distribution systems that have been developed so far: the Middlebrook Criterion, the Gain Margin and Phase Margin Criterion, the Opposing Argument Criterion, the Energy Source Analysis Consortium (ESAC) Criterion, and the Three-Step Impedance Criterion. In particular, the paper discusses, for each criterion, the artificial conservativeness characteristics in the design of dc distribution systems, and the formulation of design specifications that ensure system stability. Moreover, the Passivity-Based Stability Criterion is discussed, which has been recently proposed as an alternative stability criterion. While all prior stability criteria are based on forbidden regions for the polar plot of the so-called minor loop gain, which is an impedance ratio, the proposed criterion is based on imposing passivity of the overall bus impedance. A meaningful simulation example is presented to illustrate the main characteristics of the reviewed stability criteria.
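All of the forbidden-region criteria surveyed here operate on the same quantity, the minor loop gain T_m(jw) = Z_source(jw)/Z_load(jw). The fragment below is a toy sketch, with an assumed LC input filter feeding an ideal CPL, of the simplest such check (a Middlebrook-style gain-margin condition on |T_m|); it is not one of the paper's examples.

```python
# Toy sketch (assumed parameters): minor loop gain T_m = Z_source / Z_load for an
# LC input filter feeding an ideal CPL, checked against a simple gain-margin
# forbidden region |T_m| < 1/GM (Middlebrook-style).
import numpy as np

R, L, C = 0.1, 1e-3, 500e-6        # assumed source-filter parameters
V, P    = 750.0, 100e3             # assumed bus voltage and CPL power

w  = 2 * np.pi * np.logspace(0, 4, 2000)          # 1 Hz .. 10 kHz
Zs = (R + 1j*w*L) / (1 + 1j*w*C*(R + 1j*w*L))     # filter output impedance
Zl = -V**2 / P                                    # CPL incremental impedance
Tm = Zs / Zl                                      # minor loop gain

GM_dB = 6.0
ok = np.max(np.abs(Tm)) < 10 ** (-GM_dB / 20)
print(f"max |T_m| = {np.max(np.abs(Tm)):.2f} -> "
      f"{'satisfies' if ok else 'violates'} the {GM_dB:.0f} dB gain-margin condition")
```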
---
paper_title: Three-dimensional stability analysis of DC power electronics based systems
paper_content:
Power electronics based power distribution systems are becoming increasingly common, particularly in marine and aerospace applications. Stability analysis of this class of systems is crucial due to the potential for negative impedance instability. Existing techniques of stability analysis introduce artificial conservativeness, are sensitive to component grouping, and at the same time do not explicitly address uncertainties and variations in operating point. Recently, a new stability criterion which reduces artificial conservativeness and is also insensitive to component grouping has been set forth along with a means of readily establishing design specifications from an arbitrary stability criterion which specifically includes a provision to incorporate uncertainty, parameter variation, and nonlinearities. Therein, the method is used to develop a load admittance constraint based on a generalized source impedance. In this paper, that work is further explained and the converse problem, that of generating a constraint on the source impedance from the load admittance, is also illustrated.
---
paper_title: Comprehensive review of stability criteria for DC distribution systems
paper_content:
Power-electronics-based DC power distribution systems, consisting of several interconnected feedback-controlled switching converters, suffer from potential degradation of stability induced by negative incremental impedances due to the presence of constant power loads. For this reason, the stability analysis of these systems is a significant design consideration. This paper reviews all the major stability criteria for DC distribution systems that have been developed so far: the Middlebrook Criterion, the Gain Margin and Phase Margin Criterion, the Opposing Argument Criterion, and the Energy Source Analysis Consortium Criterion. In particular, the paper discusses, for each criterion, the artificial conservativeness characteristics in the design of DC distribution systems, and the formulation of design specifications so that the system would be stable. Finally, the Passivity-Based Stability Criterion is discussed, which has been recently proposed to reduce design conservativeness and improve design-oriented characteristics. While all prior stability criteria are based on forbidden regions for the polar plot of the so-called minor loop gain which is an impedance ratio, the proposed criterion is based on imposing passivity of the overall bus impedance. However, stability is clearly an insufficient requirement for DC power distribution systems; performance is also a requirement. A simulation example is presented to illustrate that the Passivity-Based Stability Criterion guarantees both stability and performance.
---
paper_title: Design and simulation of a DC electric vehicle charging station connected to a MVDC infrastructure
paper_content:
Medium Voltage DC (MVDC) infrastructure serves as a platform for interconnecting renewable electric power generation, including wind and solar. Abundant loads such as industrial facilities, data centers, and electric vehicle charging stations (EVCS) can also be powered using MVDC technology. MVDC networks are expected to improve efficiency by serving as an additional layer between the transmission and distribution level voltages for which generation sources and loads could directly connect with smaller rated power conversion equipment. This paper investigates an EVCS powered by a MVDC bus. A bidirectional DC-DC converter with appropriate controls serves as the interface between the EVCS and MVDC bus. Two scenarios are investigated for testing and comparing EVCS operation: 1) EVCS power supplied by the interconnected MVDC model and 2) EVCS power supplied by an equivalent voltage source. Comparisons between both are discussed. The CCM/DCM buck mode operation of the bidirectional DC-DC converter is explored as well as the system isolation benefits that come with its use.
---
paper_title: Apparent impedance analysis: A new method for power system stability analysis
paper_content:
In this paper a new method for power system stability analysis is introduced. The method is based on injection of a small voltage or current in an arbitrary point of the system. The apparent impedance is defined as the ratio between the voltage and current in the injection point. It is shown that the apparent impedance can be used to estimate the eigenvalues of the system that are observable from the injection point. The eigenvalues are obtained by applying the Vector Fitting algorithm to the measured set of apparent impedances. The proposed method holds some advantages over the well established impedance-based analysis method: It is no longer needed to estimate the source and load impedance equivalents separately, and it is not necessary to make any assumption regarding where the source and load are located. This reduces the required measurements and data processing. Furthermore, the stability analysis is global in the sense that the resulting stability margin does not depend on the injection point location. Finally, the method is well suited for real-time implementation due to low computational requirements. The method is outlined for DC-systems, while further work will extend the theory to cover single-phase and three-phase AC systems.
---
paper_title: DC distribution for industrial systems: opportunities and challenges
paper_content:
This paper investigates the opportunities and challenges associated with adopting a dc distribution scheme for industrial power systems. A prototype dc distribution system has been simulated to investigate the issues. One of the issues focused on is the interaction between power converters that are used to convert ac to dc and dc to ac. Another challenging issue investigated is the system grounding. These issues become challenging mainly due to the neutral voltage shift associated with the power converters. The paper shows that converter interactions can be minimized with proper filtering and control on the converters. The paper also proposes a grounding scheme and shows that this scheme provides an effective solution by keeping the neutral voltages low under normal conditions and by limiting the fault currents during fault conditions. With these features, dc distribution provides very reliable and high-quality power.
---
paper_title: Small-Signal Stability Assessment of Power Electronics Based Power Systems: A Discussion of Impedance- and Eigenvalue-Based Methods
paper_content:
This paper investigates the small-signal stability of power electronics-based power systems in frequency domain. A comparison between the impedance-based and the eigenvalue-based stability analysis methods is presented. A relation between the characteristics equation of the eigenvalues and poles and zeros of the minor-loop gain from the impedance-based analysis have been derived analytically. It is shown that both stability analysis methods can effectively determine the stability of the system. In the case of the impedance-based method, a low phase-margin in the Nyquist plot of the minor-loop gain indicates that the system can exhibit harmonic oscillations. A weakness of the impedance method is the limited observability of certain states given its dependence on the definition of local source-load subsystems, which makes it necessary to investigate the stability at different subsystems. To address this limitation, the paper discusses critical locations where the application of the method can reveal the impact of a passive component or a controller gain on the stability. On the other hand, the eigenvalue-based method, being global, can determine the stability of the entire system; however, it cannot unambiguously predict sustained harmonic oscillations in voltage source converter (VSC) based high voltage dc (HVdc) systems caused by pulse-width modulation (PWM) switching. To generalize the observations, the two methods have been applied to dc-dc converters. To illustrate the difference and the relation between the two-methods, the two stability analysis methods are then applied to a two-terminal VSC-based HVdc system as an example of power electronics-based power systems, and the theoretical analysis has been further validated by simulation and experiments.
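The analytical relation discussed here (eigenvalues of the state matrix versus poles and zeros of the minor loop gain) can be reproduced on a minimal example: for a source filter feeding a linearised CPL, the state-matrix eigenvalues coincide with the zeros of Z_source + Z_load, i.e. of 1 + T_m. The code below is only a toy two-state illustration with assumed parameters, not the VSC-HVDC system studied in the paper.

```python
# Toy illustration (assumed parameters): eigenvalues of the state matrix of an
# RLC source filter loaded by a linearised CPL compared with the zeros of
# Z_source(s) + Z_load(s), i.e. the zeros of 1 + T_m(s).
import numpy as np

R, L, C = 0.1, 1e-3, 500e-6
V, P    = 750.0, 100e3
r = -V**2 / P                      # linearised CPL resistance (negative)

# State-space model, states x = [i_L, v_C]
A = np.array([[-R / L, -1.0 / L],
              [1.0 / C, -1.0 / (r * C)]])
eig_ss = np.sort_complex(np.linalg.eigvals(A))

# Numerator of Zs(s) + Zl(s) = r*C*L s^2 + (L + r*C*R) s + (R + r)
eig_imp = np.sort_complex(np.roots([r * C * L, L + r * C * R, R + r]))

print("state-space eigenvalues:", eig_ss)
print("zeros of Zs + Zl       :", eig_imp)   # identical to the eigenvalues
```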
---
paper_title: Impedance-Based Local Stability Criterion for DC Distributed Power Systems
paper_content:
This paper addresses the stability issue of dc distributed power systems (DPS). Impedance-based methods are effective for stability assessment of voltage-source systems and current-source systems. However, these methods may not be suitable for applications involving variation of practical parameters, loading conditions, system's structures, and operating modes. Thus, for systems that do not resemble simple voltage-source systems or current-source systems, stability assessment is much less readily performed. This paper proposes an impedance-based criterion for stability assessment of dc DPS. We first classify any converter in a dc DPS as either a bus voltage controlled converter (BVCC) or a bus current controlled converter (BCCC). As a result, a dc DPS can be represented in a general form regardless of its structure and operating mode. Then, the minor loop gain of the standard dc DPS is derived precisely using a two-port small signal model. Application of the Nyquist criterion on the derived minor loop gain gives the stability requirement for the dc DPS. This proposed criterion is applicable to dc DPSs, regardless of the control method and the connection configuration. Finally, a 480 W photovoltaic (PV) system with battery energy storage and a 200 W dc DPS, in which the source converter employs a droop control, are fabricated to validate the effectiveness of the proposed criterion.
---
paper_title: Power electronic technologies for flexible DC distribution grids
paper_content:
Market liberalization has significantly changed the energy supply system in Europe, i.e. from a top-down centralized power generation system towards a more decentralized system. In addition, partially due to incentive programs, vast amounts of renewable power generator systems (mostly wind and PV) have been installed. More flexible grid structures are needed to cope with this new landscape of distributed generation. This paper explores the role of state-of-the-art power electronics to realize the required infrastructure. The potential efficiency gains and cost savings that can be realized by using DC-to-DC converters in electronic substations are presented in detail.
---
paper_title: Apparent Impedance Analysis: A Small-Signal Method for Stability Analysis of Power Electronic-Based Systems
paper_content:
In this paper, a new method for power system stability analysis is introduced. The method is based on injection of a small voltage or current in an arbitrary point of a power system. The apparent impedance is then defined as the ratio between the voltage and current at the injection point. It is shown that the apparent impedance can be used to estimate the eigenvalues of the system that are observable from the injection point. The eigenvalues are obtained by applying system identification techniques to the measured set of apparent impedances. The method is similar to the well-established impedance-based stability analysis based on source and load impedance models. However, while the source/load impedance ratio is viewed as the minor-loop gain, the apparent impedance can be viewed as a closed-loop transfer function. It can also be expressed as the parallel connection of the source and load impedance. It is shown, in this paper, how the system eigenvalues can be extracted based on a set of apparent impedance values. The apparent impedance holds, therefore, complementary information compared with the existing impedance-based stability analysis. The method can also be used as a tool to validate analytically derived state-space models. In this paper, the method is presented as a simulation tool, while further work will extend it to include experimental setups. Two case studies are presented to illustrate the method: 1) a dc case with a buck converter feeding a constant power load and 2) a three-phase grid-connected voltage source converter with a current controller and a phase lock loop. The estimated (apparent) eigenvalues of the studied systems are equal to those obtained from the analytic state-space model.
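In terms of the source/load impedance models discussed above, the apparent impedance at the injection point is just their parallel combination, Z_app = Z_s Z_l / (Z_s + Z_l), so its poles are the zeros of Z_s + Z_l and therefore the eigenvalues observable from that point. The snippet below illustrates this on the same assumed LC-filter/CPL toy system used earlier in this section; the identification step of the paper (fitting a rational model, e.g. by vector fitting, to measured Z_app samples) is only indicated in a comment, not reproduced.

```python
# Sketch (assumed toy system): the apparent impedance is the parallel combination
# of source and load impedances; its poles are the zeros of Zs + Zl, i.e. the
# system eigenvalues observable from the injection point.
import numpy as np

R, L, C = 0.1, 1e-3, 500e-6
V, P    = 750.0, 100e3
r = -V**2 / P

w = 2 * np.pi * np.logspace(0, 4, 1000)
s = 1j * w
Zs   = (R + s * L) / (1 + s * C * (R + s * L))
Zapp = Zs * r / (Zs + r)                 # apparent impedance at the dc bus

poles = np.roots([r * C * L, L + r * C * R, R + r])   # poles of Zapp
print("poles of the apparent impedance:", poles)
# In a measurement-based study these poles would instead be estimated by fitting
# a rational transfer function (e.g. vector fitting) to the sampled Zapp(jw).
```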
---
paper_title: Stability analysis of an all-electric ship MVDC Power Distribution System using a novel Passivity-Based Stability Criterion
paper_content:
The present paper describes a recently proposed Passivity-Based Stability Criterion (PBSC) and demonstrates its usefulness and applicability to the stability analysis of the MVDC Power Distribution System for the All-Electric Ship. The criterion is based on imposing passivity of the overall DC bus impedance. If passivity of the bus impedance is ensured, stability is guaranteed as well. The PBSC, in contrast with existing stability criteria, such as the Middlebrook criterion and its extensions, which are based on the minor loop gain concept, i.e. an impedance ratio at a given interface, offers several advantages: reduction of artificial design conservativeness, insensitivity to component grouping, applicability to multi-converter systems and to systems in which the power flow direction changes, for example as a result of system reconfiguration. Moreover, the criterion can be used in conjunction with an active method to improve system stability, called the Positive Feed-Forward (PFF) control, for the design of virtual damping networks. By designing the virtual impedance so that the bus impedance passivity condition is met, the approach results in greatly improved stability and damping of transients on the DC bus voltage. Simulation validation is performed using a switching-model DC power distribution system.
---
paper_title: A review of design criteria for low voltage DC distribution stability
paper_content:
The performance advantages of Low Voltage Direct Current electrical distribution are becoming clearer; however, the commercial opportunities, design processes and standardisation are currently lacking. This paper presents an overview of the current development status of LVDC distribution and reviews the modelling and stability criteria available to designers of DC distribution systems for land, aerospace and marine power system applications.
---
paper_title: Comprehensive Review of Stability Criteria for DC Power Distribution Systems
paper_content:
Power-electronics-based dc power distribution systems, consisting of several interconnected feedback-controlled switching converters, suffer from potential degradation of stability and dynamic performance caused by negative incremental impedances due to the presence of constant power loads. For this reason, the stability analysis of these systems is a significant design consideration. This paper reviews all the major stability criteria for dc distribution systems that have been developed so far: the Middlebrook Criterion, the Gain Margin and Phase Margin Criterion, the Opposing Argument Criterion, the Energy Source Analysis Consortium (ESAC) Criterion, and the Three-Step Impedance Criterion. In particular, the paper discusses, for each criterion, the artificial conservativeness characteristics in the design of dc distribution systems, and the formulation of design specifications that ensure system stability. Moreover, the Passivity-Based Stability Criterion is discussed, which has been recently proposed as an alternative stability criterion. While all prior stability criteria are based on forbidden regions for the polar plot of the so-called minor loop gain, which is an impedance ratio, the proposed criterion is based on imposing passivity of the overall bus impedance. A meaningful simulation example is presented to illustrate the main characteristics of the reviewed stability criteria.
---
paper_title: Comprehensive review of stability criteria for DC distribution systems
paper_content:
Power-electronics-based DC power distribution systems, consisting of several interconnected feedback-controlled switching converters, suffer from potential degradation of stability induced by negative incremental impedances due to the presence of constant power loads. For this reason, the stability analysis of these systems is a significant design consideration. This paper reviews all the major stability criteria for DC distribution systems that have been developed so far: the Middlebrook Criterion, the Gain Margin and Phase Margin Criterion, the Opposing Argument Criterion, and the Energy Source Analysis Consortium Criterion. In particular, the paper discusses, for each criterion, the artificial conservativeness characteristics in the design of DC distribution systems, and the formulation of design specifications so that the system would be stable. Finally, the Passivity-Based Stability Criterion is discussed, which has been recently proposed to reduce design conservativeness and improve design-oriented characteristics. While all prior stability criteria are based on forbidden regions for the polar plot of the so-called minor loop gain which is an impedance ratio, the proposed criterion is based on imposing passivity of the overall bus impedance. However, stability is clearly an insufficient requirement for DC power distribution systems; performance is also a requirement. A simulation example is presented to illustrate that the Passivity-Based Stability Criterion guarantees both stability and performance.
---
paper_title: Small-Signal Stability Assessment of Power Electronics Based Power Systems: A Discussion of Impedance- and Eigenvalue-Based Methods
paper_content:
This paper investigates the small-signal stability of power electronics-based power systems in frequency domain. A comparison between the impedance-based and the eigenvalue-based stability analysis methods is presented. A relation between the characteristics equation of the eigenvalues and poles and zeros of the minor-loop gain from the impedance-based analysis have been derived analytically. It is shown that both stability analysis methods can effectively determine the stability of the system. In the case of the impedance-based method, a low phase-margin in the Nyquist plot of the minor-loop gain indicates that the system can exhibit harmonic oscillations. A weakness of the impedance method is the limited observability of certain states given its dependence on the definition of local source-load subsystems, which makes it necessary to investigate the stability at different subsystems. To address this limitation, the paper discusses critical locations where the application of the method can reveal the impact of a passive component or a controller gain on the stability. On the other hand, the eigenvalue-based method, being global, can determine the stability of the entire system; however, it cannot unambiguously predict sustained harmonic oscillations in voltage source converter (VSC) based high voltage dc (HVdc) systems caused by pulse-width modulation (PWM) switching. To generalize the observations, the two methods have been applied to dc-dc converters. To illustrate the difference and the relation between the two-methods, the two stability analysis methods are then applied to a two-terminal VSC-based HVdc system as an example of power electronics-based power systems, and the theoretical analysis has been further validated by simulation and experiments.
---
paper_title: Impedance based stability analysis of VSC-based HVDC system
paper_content:
With increasing activity in the development of Voltage Source Converter (VSC) based High Voltage dc (HVDC) transmission systems, it is necessary to pre-assess their impact on system stability before connecting to the main ac grid. Existing approaches to study such instability are mainly based on converter control models and do not include the effects of grid impedance. This paper presents the stability analysis of a VSC-based HVDC system which includes the controller dynamics and also the main ac grid impedance. It is shown that the system will remain stable as long as the ratio of the dc system impedance and the HVDC converter dc input impedance satisfies the Nyquist stability criterion. An analytical linearized model of a point-to-point HVDC connection is developed to demonstrate the application of this method, and the result is compared with a nonlinear model developed in Matlab/Simulink in association with the SimPowerSystems toolbox.
---
paper_title: Passivity-Based Stability Assessment of Grid-Connected VSCs—An Overview
paper_content:
The interconnection stability of a grid-connected voltage-source converter (VSC) can be assessed by the passivity properties of the VSC input admittance. If critical grid resonances fall within regions where the input admittance acts passively, i.e., has nonnegative real part, then their destabilization is generally prevented. This paper presents an overview of passivity-based stability assessment, including techniques for space-vector modeling of VSCs whereby expressions for the input admittance can be derived. Design recommendations for minimizing the negative-real-part region are given as well.
---
paper_title: A novel Passivity-Based Stability Criterion (PBSC) for switching converter DC distribution systems
paper_content:
A novel Passivity-Based Stability Criterion (PBSC) is proposed for the stability analysis and design of DC power distribution systems. The proposed criterion is based on imposing passivity of the overall bus impedance. If passivity of the overall DC bus impedance is ensured, stability is guaranteed as well. The PBSC reduces the artificial design conservativeness and the sensitivity to component grouping typical of existing stability criteria, such as the Middlebrook criterion and its extensions. Moreover, the criterion is easily applicable to multi-converter systems and to systems in which the power flow direction changes, for example as a result of system reconfiguration. In addition, the criterion can be used for the design of active damping networks for DC power distribution systems. The approach results in greatly improved stability and damping of transients on the DC bus voltage. Experimental validation is performed using a hardware test-bed that emulates a DC power distribution system.
---
paper_title: Small signal stability analysis of a shipboard MVDC power system
paper_content:
Recent developments in high-power-rated Voltage Source Converters (VSCs) have resulted in their successful application in Multi-Terminal HVDC (MTDC) transmission systems and also show potential for Medium Voltage DC (MVDC) distribution systems. A multi-zonal MVDC architecture has been identified by the US Navy as one of the possible architectures for shipboard power distribution. Selection of a particular architecture for the shipboard power system will require extensive studies with respect to various protection, control and stability issues. This paper presents the findings of small-signal stability studies carried out on a zonal MVAC as well as a MVDC architecture for the shipboard power distribution system.
---
paper_title: Eigenvalue Sensitivity Analysis for Dynamic Power System
paper_content:
Eigenvalue sensitivity analysis has been an effective tool for power system controller design. However, research on eigenvalue sensitivity with respect to system operating parameters is still limited. This paper presents new results of an eigenvalue sensitivity analysis with respect to operating parameters, preceded by a comprehensive review of eigenvalue sensitivity analysis and its applications during the last few decades. The method is based on explicit expressions for the derivatives of the augmented system matrix with respect to system operating parameters. The IEEE 5-machine 14-bus system is used to demonstrate the effectiveness of the method. The eigenvalue sensitivity analysis provides useful information for power system planning and control.
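The core computation behind such an analysis is the first-order eigenvalue sensitivity dlambda_i/dp = psi_i (dA/dp) phi_i, where phi_i and psi_i are the right and left eigenvectors. The sketch below evaluates it for a small ad-hoc parameter-dependent state matrix (not the IEEE 5-machine 14-bus model), using a finite difference for dA/dp.

    import numpy as np

    # Eigenvalue sensitivity dlambda_i/dp = psi_i @ (dA/dp) @ phi_i, illustrated
    # on a small made-up parameter-dependent state matrix A(p).
    def A_of(p):
        return np.array([[-1.0,   p ],
                         [-2.0, -4.0]])

    p0, dp = 0.5, 1e-6
    lam, Phi = np.linalg.eig(A_of(p0))    # columns of Phi: right eigenvectors
    Psi = np.linalg.inv(Phi)              # rows of Psi: left eigenvectors (Psi @ Phi = I)
    dA_dp = (A_of(p0 + dp) - A_of(p0 - dp)) / (2 * dp)

    for i in range(len(lam)):
        sens = Psi[i, :] @ dA_dp @ Phi[:, i]
        print(f"lambda_{i} = {lam[i]:.3f}, dlambda/dp = {sens:.3f}")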
---
paper_title: Small-Signal Stability Analysis of Offshore AC Network Having Multiple VSC-HVDC Systems
paper_content:
This paper presents a methodology to perform small-signal analysis of an offshore ac network, which is formed by interconnecting several offshore wind power plants. The offshore ac network is connected with different onshore ac grids using point-to-point voltage-source-converter high-voltage direct current (VSC-HVDC) transmission links. In such a network, each offshore VSC-HVDC converter operates in grid-forming mode. In this paper, the offshore VSC grid-forming control is enhanced by using a frequency and voltage droop scheme in order to establish a coordinated grid control among the offshore converters. A small-signal model of the offshore ac network is developed that includes the high-voltage alternating current cable models, the converters' current- and voltage-control models, the frequency droop scheme, and the voltage droop scheme. Based on this model, an eigenvalue analysis is performed in order to study the influence of the frequency and voltage droop gains on overall offshore ac network stability. Finally, the theoretical analysis is validated by performing nonlinear dynamic simulations.
---
paper_title: Small-Signal Stability Analysis of Multi-Terminal VSC-Based DC Transmission Systems
paper_content:
A model suitable for small-signal stability analysis and control design of multi-terminal dc networks is presented. A generic test network that combines conventional synchronous and offshore wind generation connected to shore via a dc network is used to illustrate the design of enhanced voltage source converter (VSC) controllers. The impact of VSC control parameters on network stability is discussed and the overall network dynamic performance assessed in the event of small and large perturbations. Time-domain simulations conducted in Matlab/Simulink are used to validate the operational limits of the VSC controllers obtained from the small-signal stability analysis.
---
paper_title: A case for medium voltage DC for distribution circuit applications
paper_content:
In this survey paper, applications of medium voltage direct current (MVDC) to distribution substations are discussed and compared to conventional ac for power conversion from transmission voltage levels to distribution voltage levels. Technical changes in the substation due to the dc equipment, efficiency of the substation, and protective measures are major aspects to consider in the MVDC concept development.
---
paper_title: Small-signal stability study of the Cigré DC grid test system with analysis of participation factors and parameter sensitivity of oscillatory modes
paper_content:
This paper presents a detailed case study of small-signal stability analysis for the Cigre DC-grid test system. The presented investigations are intended to identify critical modes of interaction between different parts of the electrical system and the controllers in multi-terminal HVDC (MTDC) schemes. Oscillatory and critical modes of the system are investigated by modal analysis, including participation factor analysis and studies of parameter sensitivity. The potentially unstable modes are investigated by calculating the participation factors of the system, which identify the influencing components. The sensitivity of the eigenvalues to the control parameters is also presented in order to reach an improved control design for MTDC with sufficient damping and enhanced dynamic response. Time-domain simulation results are further presented in order to verify the transient performance of the control system.
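The participation factor analysis mentioned above reduces to an element-wise product of right and left eigenvector entries, p_ki = phi_ki * psi_ik. A minimal sketch on an arbitrary 3-state matrix (not the Cigre test system) is shown below.

    import numpy as np

    # Participation factors p_ki = phi_ki * psi_ik: how strongly state k
    # participates in mode i (toy 3-state matrix for illustration only).
    A = np.array([[-0.5,  10.0,  0.0],
                  [-10.0, -0.5,  5.0],
                  [  0.0,  -5.0, -2.0]])
    lam, Phi = np.linalg.eig(A)
    Psi = np.linalg.inv(Phi)
    P = Phi * Psi.T                      # P[k, i] = Phi[k, i] * Psi[i, k]
    for i, l in enumerate(lam):
        print(f"mode {l:.2f}: |participations| = {np.round(np.abs(P[:, i]), 3)}")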
---
paper_title: Wind Farm Grid Integration Using VSC Based HVDC Transmission - An Overview
paper_content:
The paper gives an overview of HVAC and HVDC connection of wind farm to the grid, with an emphasis on voltage source converter (VSC)-based HVDC for large wind farms requiring long distance cable connection. Flexible control capabilities of a VSC-based HVDC system enables smooth integration of wind farm into the power grid network while meeting the grid code requirements (GCR). Operation of a wind farm with VSC-based HVDC connection is described.
---
paper_title: Small-Signal Stability Assessment of Power Electronics Based Power Systems: A Discussion of Impedance- and Eigenvalue-Based Methods
paper_content:
This paper investigates the small-signal stability of power electronics-based power systems in the frequency domain. A comparison between the impedance-based and the eigenvalue-based stability analysis methods is presented. A relation between the characteristic equation of the eigenvalues and the poles and zeros of the minor-loop gain from the impedance-based analysis has been derived analytically. It is shown that both stability analysis methods can effectively determine the stability of the system. In the case of the impedance-based method, a low phase margin in the Nyquist plot of the minor-loop gain indicates that the system can exhibit harmonic oscillations. A weakness of the impedance method is the limited observability of certain states given its dependence on the definition of local source-load subsystems, which makes it necessary to investigate the stability at different subsystems. To address this limitation, the paper discusses critical locations where the application of the method can reveal the impact of a passive component or a controller gain on the stability. On the other hand, the eigenvalue-based method, being global, can determine the stability of the entire system; however, it cannot unambiguously predict sustained harmonic oscillations in voltage source converter (VSC) based high voltage dc (HVdc) systems caused by pulse-width modulation (PWM) switching. To generalize the observations, the two methods have been applied to dc-dc converters. To illustrate the difference and the relation between the two methods, both stability analysis methods are then applied to a two-terminal VSC-based HVdc system as an example of power electronics-based power systems, and the theoretical analysis has been further validated by simulation and experiments.
---
paper_title: A review on the small signal stability of microgrid
paper_content:
Small-signal stability is a key issue in the operation and control of a microgrid. For the small-signal stability of an inverter-based microgrid, existing analysis methods mainly include eigenvalue analysis based on state-space models, impedance analysis based on impedance models, and other nonlinear analysis methods. This paper reviews these analytical methods as well as their respective advantages and disadvantages. Finally, regarding the improvement of small-signal stability, this paper summarizes the state of research on the optimization of controller parameters, the improvement of droop control, and the optimized design of hierarchical control strategies.
---
paper_title: Power System Stability and Control
paper_content:
Part I: Characteristics of Modern Power Systems. Introduction to the Power System Stability Problem. Part II: Synchronous Machine Theory and Modelling. Synchronous Machine Parameters. Synchronous Machine Representation in Stability Studies. AC Transmission. Power System Loads. Excitation in Stability Studies. Prime Mover and Energy Supply Systems. High-Voltage Direct-Current Transmission. Control of Active Power and Reactive Power. Part III: Small Signal Stability. Transient Stability. Voltage Stability. Subsynchronous Oscillations. Mid-Term and Long-Term Stability. Methods of Improving System Stability.
---
paper_title: Stability Analysis of Aircraft Power Systems Based on a Unified Large Signal Model
paper_content:
Complex power electronic conversion devices, most of which have high transmission performance, are important power conversion units in modern aircraft power systems. However, due to their dynamic and nonlinear characteristics, these devices have an increasingly prominent effect on the stability of the aircraft power system. To analyze the stability of aircraft power systems in a simple, accurate and comprehensive way, this paper develops a unified large-signal model of aircraft power systems. In this paper, the Lyapunov linearization method and the mixed potential theory are first employed to analyze small-signal and large-signal stability, respectively, and then a unified stability criterion is proposed to assess both small-signal and large-signal stability problems. Simulation results show that the unified large-signal model of aircraft power systems presented in this paper can be used to analyze the stability of aircraft power systems in an accurate and comprehensive way. Furthermore, with its simplicity, universality and structural uniformity, the unified large-signal model lays a good foundation for the optimal design of aircraft power systems.
---
paper_title: VSC HVDC transmission corridor: An option for PV power injection and AC network stability support
paper_content:
The pressure of increasing power demand and supply inequality is forcing utilities to interconnect AC systems to meet demand. High Voltage Direct Current (HVDC) schemes are becoming a more attractive solution as they have been used extensively in interconnected power systems worldwide. This paper investigates the role of a voltage source converter (VSC) based HVDC transmission corridor for PV power injection and for AC network stability support. An overview of the Namibian Caprivi-link interconnector as a case study, the potential of very-large-scale PV in Namibia, and the prospects of PV power injection on the DC link are presented. The system is modelled and simulated in MATLAB/Simulink. Critical contingencies such as sudden island conditions and a three-phase-to-ground fault are simulated with and without PV penetration. Results show the stability support provided to the AC-side networks by PV power injection on the DC link.
---
paper_title: Stability Criterion for Cascaded System With Constant Power Load
paper_content:
With the development of renewable energy, the dc distribution power system (DPS) is becoming more and more attractive. The stability of the whole system is still a major concern even though every single converter is well designed for stand-alone operation with sufficient stability. Since the cascaded connection of power converters is one of the most dominant connection forms in the dc DPS, the stability analysis of the cascaded system is very important to ensure stability of the whole system. Based on the Lyapunov linearization method and Brayton–Moser's mixed potential theory, the stability of the equilibrium point and an estimate of the region of attraction are investigated for cascaded systems in the dc DPS. Based on the analysis, stability prediction criteria for the cascaded system under small-signal and large-signal disturbances are obtained. The two criteria are simple and straightforward, and they can be unified into a general stability criterion to predict the system's stability under both small-signal and large-signal transient disturbances. The relationship between system parameters and stability is presented and discussed in the paper. Therefore, instead of trial and error, the proposed criterion can predict and guarantee stable operation of the cascaded system during the design process, and it is also helpful for selecting matched power converters in system-level design. The simulation and experimental results verify the effectiveness of the proposed criterion.
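For the small-signal part of the problem, a constant power load (CPL) behind an LC input filter is the canonical example. The sketch below linearizes such a stage around its operating point and checks the eigenvalues; the numbers are illustrative assumptions, and the paper's large-signal (mixed potential) criterion is not reproduced here.

    import numpy as np

    # Small-signal check of a dc LC input filter feeding an ideal constant power
    # load (assumed values; illustration only).
    E, R, L, C, P = 48.0, 0.2, 1e-3, 470e-6, 200.0

    # Operating point: E = R*I0 + V0 with I0 = P/V0  ->  V0^2 - E*V0 + R*P = 0
    V0 = (E + np.sqrt(E**2 - 4 * R * P)) / 2
    I0 = P / V0

    # Linearization around (I0, V0); the CPL contributes a negative incremental
    # resistance -V0^2/P, which appears as the positive term P/(C*V0^2) below.
    A = np.array([[-R / L,        -1.0 / L],
                  [ 1.0 / C,  P / (C * V0**2)]])
    eigs = np.linalg.eigvals(A)
    print("V0 = %.2f V, eigenvalues:" % V0, eigs)
    print("small-signal stable:", bool(np.all(eigs.real < 0)))
    # Equivalent analytic conditions: P < R*C*V0^2/L and P < V0^2/R.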
---
paper_title: Issues of Connecting Wind Farms into Power Systems
paper_content:
Wind power industry is developing rapidly, more and more wind farms are being connected into power systems. Integration of large scale wind farms into power systems presents some challenges that must be addressed, such as system operation and control, system stability, and power quality. This paper describes modern wind power systems, presents requirements of wind turbine connection and discusses the possible control methods for wind turbines to meet the specifications
---
paper_title: Power System Stability and Control
paper_content:
Part I: Characteristics of Modern Power Systems. Introduction to the Power System Stability Problem. Part II: Synchronous Machine Theory and Modelling. Synchronous Machine Parameters. Synchronous Machine Representation in Stability Studies. AC Transmission. Power System Loads. Excitation in Stability Studies. Prime Mover and Energy Supply Systems. High-Voltage Direct-Current Transmission. Control of Active Power and Reactive Power. Part III: Small Signal Stability. Transient Stability. Voltage Stability. Subsynchronous Oscillations. Mid-Term and Long-Term Stability. Methods of Improving System Stability.
---
paper_title: Modeling, analysis, and validation of a preliminary design for a 20 kV medium voltage DC substation
paper_content:
With the advancement of high capacity power electronics technologies, most notably in HVDC applications, the concept of developing and implementing future transmission subsystems through a DC backbone presents a realistic and advantageous option over traditional AC approaches. Currently, electrical equipment or devices requiring DC power to function, whether loads or resources, necessitate AC/DC conversion technologies. Having an accessible and direct supply of DC power to serve such loads and resources creates the potential to mitigate losses experienced in the AC/DC conversion process, reduce overall electrical system infrastructure, and lessen the amount of power generated from power plants, as well as other advantages. This paper introduces an initial design and simulation model of a medium voltage DC (MVDC) substation concept containing renewable generation, power electronic converters, and induction machine loads. Each of the components is developed and modeled in PSCAD and validated analytically. The models of the represented system equipment and components are individually presented and accompanied with their simulated results to demonstrate the validity of the overall model. Future work will build upon the model to develop additional loads and resources, control strategies for optimized integration, and more detailed component models.
---
paper_title: MATCONT: A MATLAB package for numerical bifurcation analysis of ODEs
paper_content:
MATCONT is a graphical MATLAB software package for the interactive numerical study of dynamical systems. It allows one to compute curves of equilibria, limit points, Hopf points, limit cycles, period doubling bifurcation points of limit cycles, and fold bifurcation points of limit cycles. All curves are computed by the same function that implements a prediction-correction continuation algorithm based on the Moore-Penrose matrix pseudo-inverse. The continuation of bifurcation points of equilibria and limit cycles is based on bordering methods and minimally extended systems. Hence no additional unknowns such as singular vectors and eigenvectors are used and no artificial sparsity in the systems is created. The sparsity of the discretized systems for the computation of limit cycles and their bifurcation points is exploited by using the standard Matlab sparse matrix methods. The MATLAB environment makes the standard MATLAB Ordinary Differential Equations (ODE) Suite interactively available and provides computational and visualization tools; it also eliminates the compilation stage and so makes installation straightforward. Compared to other packages such as AUTO and CONTENT, adding a new type of curves is easy in the MATLAB environment. We illustrate this by a detailed description of the limit point curve type.
---
paper_title: Bifurcation theory and its application to nonlinear dynamical phenomena in an electrical power system
paper_content:
A tutorial introduction to bifurcation theory is given, and the applicability of this theory to the study of nonlinear dynamical phenomena in a power system network is explored. The predicted behavior is verified through time simulation. Systematic application of the theory revealed the existence of stable and unstable periodic solutions as well as voltage collapse. A particular response depends on the value of the parameter under consideration. It is shown that voltage collapse is a subset of the overall bifurcation phenomena that a system may experience under the influence of system parameters. A low-dimensional center manifold reduction is applied to capture the relevant dynamics involved in the voltage collapse process. The need for the consideration of nonlinearity, especially when the system is highly stressed, is emphasized.
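A minimal numerical stand-in for this kind of analysis is a parameter sweep that follows the eigenvalues of a linearization and reports where a complex pair crosses the imaginary axis (a Hopf-type bifurcation). The 2-state model below, whose effective damping shrinks with a loading parameter p, is purely illustrative.

    import numpy as np

    # Sweep a loading parameter p and locate the Hopf-type crossing where the
    # dominant eigenvalue pair of the linearization reaches the imaginary axis.
    omega0, d0 = 2 * np.pi * 1.0, 0.8      # assumed natural frequency and damping
    def A_of(p):
        return np.array([[0.0, 1.0],
                         [-omega0**2, -(d0 - p)]])

    prev = None
    for p in np.linspace(0.0, 1.5, 301):
        max_real = np.linalg.eigvals(A_of(p)).real.max()
        if prev is not None and prev < 0 <= max_real:
            print(f"Hopf-type crossing near p = {p:.3f}")   # expected near p = d0
            break
        prev = max_real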
---
paper_title: Controlling chaos and bifurcations of SMIB power system experiencing SSR phenomenon using SSSC
paper_content:
This paper presents the effect of Static Synchronous Series Compensation (SSSC) on the bifurcations of heavily loaded Single Machine Infinite Bus (SMIB) power system experiencing Subsynchronous Resonance (SSR). In SSR phenomenon, the series compensation increases the power transfer capability of the transmission line. However, Hopf bifurcation is depicted at certain compensation levels. The system then routes to chaos via torus breakdown scenario in case of conventional compensation (variable series capacitor) scheme. In this study, the effect of replacing the conventional compensation with SSSC is highlighted. Varying the SSSC controller reference voltage changes the compensation degree. The results show that the operating point of the system never loses stability at any realistic compensation degree in case of SSSC which means that all bifurcations of the system have been eliminated. Time domain simulations coincide with the results of the bifurcation analysis. The robustness of the SSSC compensation scheme and its controller is verified by subjecting a single-phase to ground fault at the end of the transmission line. The results are compared with the case of conventional compensation. Additionally, the effect of the SSSC controller gain on the location of the Hopf bifurcation is addressed.
---
paper_title: Dynamic Stability Analysis of Synchronverter-Dominated Microgrid Based on Bifurcation Theory
paper_content:
Inverter-interfaced distributed energy resources are used in microgrids in much the same way that conventional synchronous generators (SGs) are used in the traditional power grid. They are expected to decrease the rotational inertia and spinning reserve capacity that are specific to SGs, while synchronverters can compensate for this loss. Thus, an increasing number of interfaced inverters may act as synchronverters in a microgrid. The dynamic stability of the synchronverter-dominated microgrid must therefore be carefully evaluated to ensure the reliable operation of this type of system. This paper presents a nonlinear model of a synchronverter-dominated microgrid. The eigenvalues of the system state matrix, along with the participation factors, are analyzed to determine the predominant parameters affecting the stability. Bifurcation theory is then used to predict and describe the unstable phenomena as the system parameters fluctuate; the effect of these parameters on the system stability is then examined. A simulation in MATLAB/Simulink and experiments were performed, and the results validated the presented analysis.
---
paper_title: Discrete-Time Tool for Stability Analysis of DC Power Electronics-Based Cascaded Systems
paper_content:
DC distribution power systems are vulnerable to instability because of the destabilizing effect of converter-controlled constant power loads (CPLs) and input filters. Standard stability analysis tools based on averaging linearization techniques can be used only when the switching frequency of the converter is significantly higher than the cutoff frequency of the filter. However, dc distribution systems with a reduced size filter, and consequently a high cutoff frequency, are common in transportation applications. Conventional methods fail to detect instabilities in the system because they do not take into account the switching effect. To overcome this drawback, this paper proposes a discrete-time method to analyze the stability of dc distribution systems. This model is applied here to a dc power system with a CPL. The switching effects and the nonlinearities of the system model are taken into account with a simple discretization approach. The proposed method is able to predict the dynamic properties of the system, such as slow-scale and fast-scale instabilities. An active stabilizer is also included in the system model in order to extend the stability margin of the system. Finally, these observations are validated experimentally on a laboratory test bench.
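A strongly simplified flavor of the discrete-time viewpoint is to map a continuous-time linearization over one switching period and test the spectral radius of the resulting discrete map; the full switched model developed in the paper is more detailed, and the matrix and switching frequency below are assumed values reused from an LC-filter-plus-CPL linearization.

    import numpy as np
    from scipy.linalg import expm

    # Sampled-data stability check: propagate the linearization A over one
    # switching period Ts and test the spectral radius of the discrete map.
    A = np.array([[-200.0, -1000.0],
                  [2127.7,   191.4]])      # assumed LC filter + CPL linearization
    Ts = 1.0 / 20e3                        # assumed 20 kHz switching period
    Ad = expm(A * Ts)
    rho = np.max(np.abs(np.linalg.eigvals(Ad)))
    print("spectral radius =", rho, "stable:", rho < 1.0)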
---
paper_title: Discrete-Time Modeling, Stability Analysis, and Active Stabilization of DC Distribution Systems With Multiple Constant Power Loads
paper_content:
This paper presents the stability analysis of dc distributed power systems with multiple converter-controlled loads. The load converters are tightly controlled, behaving as constant power loads with low-damped LC filters. The dynamic behavior of the system in the high-frequency range is often not studied with the classical tools based on conventional averaging techniques. However, dc power systems with reduced-size filters, and consequently high resonant frequencies, are widely used in transportation applications. In this paper, the stability analysis of the system is established based on a discrete-time model, taking into account the switching frequency and the intrinsic nonlinearities of the system model. The impacts of the filter parameters and the interactions among the constant power loads are investigated with the proposed discrete-time method. Moreover, an active stabilizer is developed and included in the dynamic model of the system in order to extend the stability margin. The theoretical observations are then validated experimentally on a laboratory hardware prototype.
---
paper_title: An application of chaos and bifurcation in nonlinear dynamical power systems
paper_content:
In this article, nonlinear dynamical phenomena leading to bifurcation and chaos in power systems are explored using a sample power system. After an introduction to nonlinear dynamical power systems, Section II gives basic background on nonlinear dynamics and chaos theory. Section III deals with bifurcation theory. In Section IV a dynamical power system model is examined. In Section V the theories are applied to a sample power system example and various bifurcation and chaotic phenomena are examined. The predicted behavior is verified by simulations using MATLAB tools.
---
paper_title: A review on the small signal stability of microgrid
paper_content:
Small-signal stability is a key issue in the operation and control of a microgrid. For the small-signal stability of an inverter-based microgrid, existing analysis methods mainly include eigenvalue analysis based on state-space models, impedance analysis based on impedance models, and other nonlinear analysis methods. This paper reviews these analytical methods as well as their respective advantages and disadvantages. Finally, regarding the improvement of small-signal stability, this paper summarizes the state of research on the optimization of controller parameters, the improvement of droop control, and the optimized design of hierarchical control strategies.
---
paper_title: Design aspects of a medium-voltage direct current (MVDC) grid for a university campus
paper_content:
Today's power systems use alternating current (ac) for transmission and distribution of electrical energy, although the first grids were based on direct current (dc). Due to the absence of appropriate equipment to change voltage levels, dc technology did not become widely accepted and was finally ruled out by the more efficient ac infrastructure. However, as a result of considerable technical progress, high-voltage direct current (HVDC) transmission has found its way back into power systems. At lower voltage and power levels, medium-voltage dc (MVDC) distribution has been proposed for offshore wind farms and industrial applications. This paper describes the design of an MVDC grid for the interconnection of high-power test benches at a university campus. Voltage control within the dc grid as well as the behavior in different fault scenarios is analyzed using numerical simulations. To assess the environmental impact of the grid, the magnetic flux density emitted by the dc cable lines is calculated.
---
| Title: Small-signal stability analysis for the multi-terminal VSC MVDC distribution network; a review
Section 1: Introduction
Description 1: Provide an overview of the motivation for the study, the significance of MVDC technology, and its benefits and challenges.
Section 2: Classical control-based stability analysis
Description 2: Discuss various classical control techniques such as Routh-Hurwitz criterion, root-locus plot, and frequency-response methods used for small-signal stability analysis of the MVDC system.
Section 3: Impedance-based stability analysis
Description 3: Explain the impedance-based method where the system is analyzed using source and load impedances, including Middlebrook criterion and its extensions.
Section 4: Passivity-based stability criteria
Description 4: Describe the passivity-based method that imposes a passivity condition on the overall bus impedance to ensure system stability.
Section 5: State-space modelling and eigenvalue analysis
Description 5: Examine the state-space modelling and eigenvalue analysis method that provides a global stability assessment by analyzing the system's eigenvalues.
Section 6: Lyapunov linearisation method
Description 6: Review the Lyapunov linearisation method, evaluating system stability using the roots of its state-equation and applying Lyapunov's criteria.
Section 7: Bifurcation theory
Description 7: Detail the use of bifurcation theory to study the stability of non-linear dynamic systems, focusing on changes in system behavior due to parameter variations.
Section 8: Conclusion
Description 8: Summarize the key findings of the review, compare the strengths and weaknesses of the stability methods, and suggest suitable approaches for MVDC distribution networks. |
Survey on AI-Based Multimodal Methods for Emotion Detection | 8 | ---
paper_title: Inference of attitudes from nonverbal communication in two channels.
paper_content:
Hydroxy terminated polybutadiene is reacted with naphtyl-potassium and 1-bromo-2,4-pentadiene in successive steps to form a bis(1,3-pentadienyl ether) derivative. Then the bismaleimide of dimer diamine is added to the polybutadiene derivative whereby a room temperature cure to an elastomer is achieved.
---
paper_title: AutoEmotive: bringing empathy to the driving experience to manage stress
paper_content:
With recent developments in sensing technologies, it's becoming feasible to comfortably measure several aspects of emotions during challenging daily life situations. This work describes how the stress of drivers can be measured through different types of interactions, and how the information can enable several interactions in the car with the goal of helping to manage stress. These new interactions could help not only to bring empathy to the driving experience but also to improve driver safety and increase social awareness.
---
paper_title: Toward Machine Emotional Intelligence: Analysis of Affective Physiological State
paper_content:
The ability to recognize emotion is one of the hallmarks of emotional intelligence, an aspect of human intelligence that has been argued to be even more important than mathematical and verbal intelligences. This paper proposes that machine intelligence needs to include emotional intelligence and demonstrates results toward this goal: developing a machine's ability to recognize the human affective state given four physiological signals. We describe difficult issues unique to obtaining reliable affective data and collect a large set of data from a subject trying to elicit and experience each of eight emotional states, daily, over multiple weeks. This paper presents and compares multiple algorithms for feature-based recognition of emotional state from this data. We analyze four physiological signals that exhibit problematic day-to-day variations: The features of different emotions on the same day tend to cluster more tightly than do the features of the same emotion on different days. To handle the daily variations, we propose new features and algorithms and compare their performance. We find that the technique of seeding a Fisher Projection with the results of sequential floating forward search improves the performance of the Fisher Projection and provides the highest recognition rates reported to date for classification of affect from physiology: 81 percent recognition accuracy on eight classes of emotion, including neutral.
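A loose present-day analogue of the SFFS-seeded Fisher projection can be sketched with scikit-learn: greedy forward feature selection (not "floating", so only an approximation of SFFS) feeding a Fisher/LDA classifier. The data, feature dimensions and labels below are synthetic placeholders, not the physiological recordings used in the paper.

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))          # 40 physiological features (synthetic)
    y = rng.integers(0, 8, size=200)        # 8 emotion classes (synthetic labels)

    clf = Pipeline([
        ("select", SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                                              n_features_to_select=10, cv=3)),
        ("fisher", LinearDiscriminantAnalysis()),   # Fisher/LDA projection + classifier
    ])
    print("CV accuracy:", cross_val_score(clf, X, y, cv=3).mean())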
---
paper_title: YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context
paper_content:
This work focuses on automatically analyzing a speaker's sentiment in online videos containing movie reviews. In addition to textual information, this approach considers adding audio features as typically used in speech-based emotion recognition as well as video features encoding valuable valence information conveyed by the speaker. Experimental results indicate that training on written movie reviews is a promising alternative to exclusively using (spoken) in-domain data for building a system that analyzes spoken movie review videos, and that language-independent audio-visual analysis can compete with linguistic analysis.
---
paper_title: Multimodal Sentiment Intensity Analysis in Videos: Facial Gestures and Verbal Messages
paper_content:
People share their opinions, stories, and reviews through online video sharing websites every day. The automatic analysis of these online opinion videos is bringing new or understudied research challenges to the field of computational linguistics and multimodal analysis. Among these challenges is the fundamental question of exploiting the dynamics between visual gestures and verbal messages to be able to better model sentiment. This article addresses this question in four ways: introducing the first multimodal dataset with opinion-level sentiment intensity annotations; studying the prototypical interaction patterns between facial gestures and spoken words when inferring sentiment intensity; proposing a new computational representation, called multimodal dictionary, based on a language-gesture study; and evaluating the authors' proposed approach in a speaker-independent paradigm for sentiment intensity prediction. The authors' study identifies four interaction types between facial gestures and verbal content: neutral, emphasizer, positive, and negative interactions. Experiments show statistically significant improvement when using multimodal dictionary representation over the conventional early fusion representation (that is, feature concatenation).
---
paper_title: Tensor Fusion Network for Multimodal Sentiment Analysis
paper_content:
Multimodal sentiment analysis is an increasingly popular research area, which extends the conventional language-based definition of sentiment analysis to a multimodal setup where other relevant modalities accompany language. In this paper, we pose the problem of multimodal sentiment analysis as modeling intra-modality and inter-modality dynamics. We introduce a novel model, termed Tensor Fusion Network, which learns both such dynamics end-to-end. The proposed approach is tailored for the volatile nature of spoken language in online videos as well as accompanying gestures and voice. In the experiments, our model outperforms state-of-the-art approaches for both multimodal and unimodal sentiment analysis.
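A minimal numpy sketch of the outer-product fusion idea is given below: each unimodal embedding is extended with a constant 1 before a three-way outer product, so the fused tensor retains unimodal, bimodal and trimodal interaction terms. The embedding sizes are made-up placeholders, and the sentiment prediction head is omitted.

    import numpy as np

    z_language = np.random.randn(128)     # placeholder unimodal embeddings
    z_acoustic = np.random.randn(32)
    z_visual   = np.random.randn(32)

    def with_one(z):
        # Appending 1 keeps lower-order (unimodal/bimodal) terms in the product.
        return np.concatenate([z, [1.0]])

    fused = np.einsum("i,j,k->ijk",
                      with_one(z_language), with_one(z_acoustic), with_one(z_visual))
    print(fused.shape)    # (129, 33, 33); flattened, this would feed the sentiment head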
---
paper_title: Fusing audio, visual and textual clues for sentiment analysis from multimodal content
paper_content:
A huge number of videos are posted every day on social media platforms such as Facebook and YouTube. This makes the Internet an unlimited source of information. In the coming decades, coping with such information and mining useful knowledge from it will be an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis, which consists of harvesting sentiments from Web videos using a model that draws on audio, visual and textual modalities as sources of information. We used both feature- and decision-level fusion methods to merge affective information extracted from multiple modalities. A thorough comparison with existing works in this area is carried out throughout the paper, which demonstrates the novelty of our approach. Preliminary comparative experiments with the YouTube dataset show that the proposed multimodal system achieves an accuracy of nearly 80%, outperforming all state-of-the-art systems by more than 20%.
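The two fusion strategies compared above can be sketched as follows, with synthetic per-modality features standing in for the real audio, visual and textual descriptors: feature-level fusion concatenates the features before a single classifier, while decision-level fusion averages the per-modality classifier probabilities.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    Xa, Xv, Xt = (rng.normal(size=(300, 20)) for _ in range(3))  # audio/visual/text (synthetic)
    y = rng.integers(0, 2, size=300)                             # sentiment labels (synthetic)

    # Feature-level (early) fusion: concatenate modality features, one classifier.
    early = LogisticRegression(max_iter=1000).fit(np.hstack([Xa, Xv, Xt]), y)
    early_pred = early.predict(np.hstack([Xa, Xv, Xt]))

    # Decision-level (late) fusion: one classifier per modality, average probabilities.
    clfs = [LogisticRegression(max_iter=1000).fit(X, y) for X in (Xa, Xv, Xt)]
    proba = np.mean([c.predict_proba(X) for c, X in zip(clfs, (Xa, Xv, Xt))], axis=0)
    late_pred = proba.argmax(axis=1)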
---
paper_title: Utterance-Level Multimodal Sentiment Analysis
paper_content:
During real-life interactions, people are naturally gesturing and modulating their voice to emphasize specific points or to express their emotions. With the recent growth of social websites such as YouTube, Facebook, and Amazon, video reviews are emerging as a new source of multimodal and natural opinions that has been left almost untapped by automatic opinion analysis techniques. This paper presents a method for multimodal sentiment classification, which can identify the sentiment expressed in utterance-level visual datastreams. Using a new multimodal dataset consisting of sentiment annotated utterances extracted from video reviews, we show that multimodal sentiment analysis can be effectively performed, and that the joint use of visual, acoustic, and linguistic modalities can lead to error rate reductions of up to 10.5% as compared to the best performing individual modality.
---
paper_title: RRSS-Rating Reviews Support System purpose built for movies recommendation
paper_content:
This paper describes part of a recommendation system designed for the recognition of film reviews (RRSS). Such a system allows the automatic collection, evaluation and rating of reviews and opinions of movies. First the system searches and retrieves texts supposed to be movie reviews from the Internet. Subsequently the system carries out an evaluation and rating of the movie reviews. Finally, the system automatically associates a numerical assessment with each review. The goal of the system is to give the score of each review, associated with the user who wrote it. All of this data forms the input to the cognitive engine. Data from our base allow making the correspondences required by cognitive algorithms to improve advanced recommendation functionalities for e-business and e-purchase websites. In this paper we describe the different methods for automatically identifying opinions using natural language knowledge and classification techniques.
---
paper_title: Tool of the intelligence economic: Recognition function of reviews critics
paper_content:
This paper describes the part of a recommender system designed for the recognition of movie critiques. Such a system allows the automatic collection, evaluation and rating of critiques and opinions of movies. First the system searches and retrieves texts supposed to be movie reviews from the Internet. Subsequently the system carries out an evaluation and rating of the movie critiques. Finally the system automatically associates a numerical mark with each critique. The goal of the system is to give the score of critiques associated with the users who wrote them. All of these data are the input to the cognitive engine. Data from our base allow making the correspondences which are required by cognitive algorithms to improve advanced recommendation functionalities for e-business and e-purchase websites. Our system uses three different methods for classifying opinions from review critiques. In this paper we describe the part of the system which is based on automatically identifying opinions using natural language processing knowledge.
---
paper_title: Opinion mining and sentiment analysis
paper_content:
An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. ::: ::: This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.
---
paper_title: EXPLORE THE EFFECTS OF EMOTICONS ON TWITTER SENTIMENT ANALYSIS
paper_content:
In recent years, Twitter Sentiment Analysis (TSA) has become a hot research topic. The target of this task is to analyse the sentiment polarity of tweets. There are many machine learning methods specifically developed to solve TSA problems, such as fully supervised methods, distantly supervised methods, and combinations of the two. Considering the specialty of tweets, which are limited to 140 characters, emoticons have important effects on TSA. In this paper, we compare three emoticon pre-processing methods: emoticon deletion (emoDel), emoticon 2-valued translation (emo2label) and emoticon explanation (emo2explanation). Then, we propose a method based on an emoticon-weight lexicon, and conduct experiments based on a Naive Bayes classifier, to validate the crucial role emoticons play in guiding the emotion tendency of a tweet. Experiments on real data sets demonstrate that emoticons are vital to TSA.
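A minimal sketch of the Naive Bayes setup, with a token pattern that keeps a few common emoticons instead of discarding them, is shown below; the tweets, labels and the emoticon regex are placeholders rather than the paper's data or its emoticon-weight lexicon.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Placeholder tweets/labels; the token pattern keeps words and simple emoticons.
    tweets = ["great match today :)", "so tired of this :(", "love it :D", "worst day :("]
    labels = ["pos", "neg", "pos", "neg"]

    vec = CountVectorizer(token_pattern=r"(?u)\b\w+\b|[:;=][\-~]?[)(dDpP]")
    clf = make_pipeline(vec, MultinomialNB())
    clf.fit(tweets, labels)
    print(clf.predict(["what a goal :)"]))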
---
paper_title: Sentiment analysis on tweets for social events
paper_content:
Sentiment analysis or opinion mining is an important type of text analysis that aims to support decision making by extracting and analyzing opinion-oriented text, identifying positive and negative opinions, and measuring how positively or negatively an entity (i.e., people, organization, event, location, product, topic, etc.) is regarded. As more and more users express their political and religious views on Twitter, tweets become valuable sources of people's opinions. Tweet data can be efficiently used to infer people's opinions for marketing or social studies. This paper proposes a Tweets Sentiment Analysis Model (TSAM) that can spot societal interest and people's general opinions in regard to a social event. In this paper, the Australian federal election 2010 was taken as an example for sentiment analysis experiments. We are primarily interested in the sentiment towards specific political candidates, i.e., the two prime minister candidates - Julia Gillard and Tony Abbott. Our experimental results demonstrate the effectiveness of the system.
---
paper_title: Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network
paper_content:
Twitter messages are increasingly used to determine consumer sentiment towards a brand. The existing literature on Twitter sentiment analysis uses various feature sets and methods, many of which are adapted from more traditional text classification problems. In this research, we introduce an approach to supervised feature reduction using n-grams and statistical analysis to develop a Twitter-specific lexicon for sentiment analysis. We augment this reduced Twitter-specific lexicon with brand-specific terms for brand-related tweets. We show that the reduced lexicon set, while significantly smaller (only 187 features), reduces modeling complexity, maintains a high degree of coverage over our Twitter corpus, and yields improved sentiment classification accuracy. To demonstrate the effectiveness of the devised Twitter-specific lexicon compared to a traditional sentiment lexicon, we develop comparable sentiment classification models using SVM. We show that the Twitter-specific lexicon is significantly more effective in terms of classification recall and accuracy metrics. We then develop sentiment classification models using the Twitter-specific lexicon and the DAN2 machine learning approach, which has demonstrated success in other text classification problems. We show that DAN2 produces more accurate sentiment classification results than SVM while using the same Twitter-specific lexicon.
---
paper_title: Emoticon Smoothed Language Models for Twitter Sentiment Analysis
paper_content:
Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.
---
paper_title: Predicting Depression From Language-Based Emotion Dynamics: Longitudinal Analysis of Facebook and Twitter Status Updates
paper_content:
Background: Frequent expression of negative emotion words on social media has been linked to depression. However, metrics have relied on average values, not dynamic measures of emotional volatility. Objective: The aim of this study was to report on the associations between depression severity and the variability (time-unstructured) and instability (time-structured) in emotion word expression on Facebook and Twitter across status updates. Methods: Status updates and depression severity ratings of 29 Facebook users and 49 Twitter users were collected through the app MoodPrism. The average proportion of positive and negative emotion words used, within-person variability, and instability were computed. Results: Negative emotion word instability was a significant predictor of greater depression severity on Facebook (rs(29)=.44, P=.02, 95% CI 0.09-0.69), even after controlling for the average proportion of negative emotion words used (partial rs(26)=.51, P=.006) and within-person variability (partial rs(26)=.49, P=.009). A different pattern emerged on Twitter where greater negative emotion word variability indicated lower depression severity (rs(49)=−.34, P=.01, 95% CI −0.58 to 0.09). Differences between Facebook and Twitter users in their emotion word patterns and psychological characteristics were also explored. Conclusions: The findings suggest that negative emotion word instability may be a simple yet sensitive measure of time-structured variability, useful when screening for depression through social media, though its usefulness may depend on the social media platform. [J Med Internet Res 2018;20(5):e168]
---
paper_title: Once More with Feeling: Supportive Responses to Social Sharing on Facebook
paper_content:
Life is more than cat pictures. There are tough days, heartbreak, and hugs. Under what contexts do people share these feelings online, and how do their friends respond? Using millions of de-identified Facebook status updates with poster-annotated feelings (e.g., “feeling thankful” or “feeling worried”), we examine the magnitude and circumstances in which people share positive or negative feelings and characterize the nature of the responses they receive. We find that people share greater proportions of both positive and negative emotions when their friend networks are smaller and denser. Consistent with social sharing theory, hearing about a friend's troubles on Facebook causes friends to reply with more emotional and supportive comments. Friends' comments are also more numerous and longer. Posts with positive feelings, on the other hand, receive more likes, and their comments have more positive language. Feelings that relate to the poster's self worth, such as “feeling defeated,” “feeling unloved,” or “feeling accomplished” amplify these effects.
---
paper_title: Twitter Analysis: Studying US Weekly Trends in Work Stress and Emotion
paper_content:
We propose the use of Twitter analysis as an alternative source of data to document weekly trends in emotion and stress, and attempt to use the method to estimate the work recovery effect of weekends. On the basis of 2,102,176,189 Tweets, we apply Pennebaker's linguistic inquiry word count (LIWC) approach to measure daily Tweet content across 18 months, aggregated to the US national level of analysis. We derived a word count dictionary to assess work stress and applied p-technique factor analysis to the daily word count data from 19 substantively different content areas covered by the LIWC dictionaries. Dynamic factor analysis revealed two latent factors in day-level variation of Tweet content. These two factors are: (a) a negative emotion/stress/somatic factor, and (b) a positive emotion/food/friends/home/family/leisure factor, onto which elements of work, money, achievement, and health issues have strong negative loadings. The weekly trend analysis revealed a clear “Friday dip” for work stress and negative emotion expressed on Twitter. In contrast, positive emotion Tweets showed a “mid-week dip” for Tuesday-Wednesday-Thursday and “weekend peak” for Friday through Sunday, whereas work/money/achievement/health problem Tweets showed a small “weekend dip” on Fridays through Sundays. Results partially support the Effort-Recovery theory. Implications and limitations of the method are discussed.
---
paper_title: When perceptions defy reality: The relationships between depression and actual and perceived Facebook social support
paper_content:
Background Although the relationship between depression and “offline” social support is well established, numerous questions surround the relationship between “online” social support and depression. We explored this issue by examining the social support dynamics that characterize the way individuals with varying levels of depression (Study 1) and SCID-diagnosed clinically depressed and non-depressed individuals (Study 2) interact with Facebook, the world's largest online social network. Method Using a novel methodology, we examined how disclosing positive or negative information on Facebook influences the amount of social support depressed individuals (a) actually receive (based on actual social support transactions recorded on Facebook walls) and (b) think they receive (based on subjective assessments) from their Facebook network. Results Contrary to prior research indicating that depression correlates with less actual social support from “offline” networks, across both studies depression was positively correlated with social support from Facebook networks when participants disclosed negative information ( p =.02 in Study 1 and p =.06 in Study 2). Yet, depression was negatively correlated with how much social support participants thought they received from their Facebook networks ( p =.005 in Study 1 and p =.001 in Study 2). Limitations The sample size was relatively small in Study 2, reflecting difficulties of recruiting individuals with Major Depressive Disorder. Conclusions These results demonstrate that an asymmetry characterizes the relationship between depression and different types of Facebook social support and further identify perceptions of Facebook social support as a potential intervention target.
---
paper_title: Short text classification in twitter to improve information filtering
paper_content:
In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as "Bag-Of-Words" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.
---
paper_title: The dynamics of health behavior sentiments on a large online social network
paper_content:
Modifiable health behaviors, a leading cause of illness and death in many countries, are often driven by individual beliefs and sentiments about health and disease. Individual behaviors affecting health outcomes are increasingly modulated by social networks, for example through the associations of like-minded individuals - homophily - or through peer influence effects. Using a statistical approach to measure the individual temporal effects of a large number of variables pertaining to social network statistics, we investigate the spread of a health sentiment towards a new vaccine on Twitter, a large online social network. We find that the effects of neighborhood size and exposure intensity are qualitatively very different depending on the type of sentiment. Generally, we find that larger numbers of opinionated neighbors inhibit the expression of sentiments. We also find that exposure to negative sentiment is contagious - by which we merely mean predictive of future negative sentiment expression - while exposure to positive sentiments is generally not. In fact, exposure to positive sentiments can even predict increased negative sentiment expression. Our results suggest that the effects of peer influence and social contagion on the dynamics of behavioral spread on social networks are strongly content-dependent.
---
paper_title: Fisher Kernels on Phase-Based Features for Speech Emotion Recognition
paper_content:
The involvement of affect information in a spoken dialogue system can increase user-friendliness and provide a more natural interaction experience. This can be achieved through speech emotion recognition, where the features are usually dominated by spectral amplitude information while the phase spectrum is ignored. In this chapter, we propose to use phase-based features to build such an emotion recognition system. To exploit these features, we employ Fisher kernels. The corresponding technique encodes the phase-based features by their deviation from a generative Gaussian mixture model. The resulting representation is used to train a classification model with a linear kernel classifier. Experimental results on the GeWEC database, including 'normal' and whispered phonation, demonstrate the effectiveness of our method.
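A rough sketch of the Fisher-kernel idea is given below: frame-level features are encoded by the normalized gradient of a GMM log-likelihood, here only with respect to the component means, following the common Fisher-vector approximation rather than the chapter's exact recipe. The random arrays stand in for phase-based feature frames.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector_means(frames, gmm):
        # Gradient of the average log-likelihood w.r.t. the GMM means,
        # normalized per component (diagonal-covariance approximation).
        gamma = gmm.predict_proba(frames)                 # (T, K) posteriors
        T, K = gamma.shape
        fv = []
        for k in range(K):
            diff = (frames - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
            fv.append(gamma[:, k] @ diff / (T * np.sqrt(gmm.weights_[k])))
        return np.concatenate(fv)                         # length K * D

    background = np.random.randn(5000, 20)                # placeholder feature frames
    gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(background)
    utterance = np.random.randn(300, 20)
    phi = fisher_vector_means(utterance, gmm)             # would feed a linear SVM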
---
paper_title: Comparative analysis of emotion estimation methods based on physiological measurements for real-time applications
paper_content:
In order to improve intelligent Human-Computer Interaction it is important to create a personalized adaptive emotion estimator that is able to learn over time the emotional response idiosyncrasies of an individual person and thus enhance estimation accuracy. This paper, with the aim of identifying preferable methods for such a concept, presents an experiment-based comparative study of seven feature reduction and seven machine learning methods commonly used for emotion estimation based on physiological signals. The analysis was performed on data obtained in an emotion elicitation experiment involving 14 participants. Specific discrete emotions were targeted with stimuli from the International Affective Picture System database. The experiment was necessary to achieve uniformity in the various aspects of emotion elicitation, data processing, feature calculation, self-reporting procedures and estimation evaluation, in order to avoid the inconsistency problems that arise when results from studies that use different emotion-related databases are mutually compared. The results of the performed experiment indicate that the combination of a multilayer perceptron (MLP) with sequential floating forward selection (SFFS) exhibited the highest accuracy in discrete emotion classification based on physiological features calculated from ECG, respiration, skin conductance and skin temperature. Using the leave-one-session-out cross-validation method, 60.3% accuracy in classification of 5 discrete emotions (sadness, disgust, fear, happiness and neutral) was obtained. In order to identify which methods may be the most suitable for real-time estimator adaptation, execution and learning times of the emotion estimators were also comparatively analyzed. Based on this analysis, the preferred feature reduction method for real-time estimator adaptation was minimum redundancy - maximum relevance (mRMR), which was the fastest approach in terms of combined execution and learning time, as well as the second best in accuracy, after SFFS. In combination with mRMR, the highest accuracies were achieved by k-nearest neighbor (kNN) and MLP with negligible difference (50.33% versus 50.54%); however, mRMR+kNN is the preferable option for real-time estimator adaptation due to the considerably lower combined execution and learning time of kNN versus MLP.
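The preferred real-time combination reported above can be approximated as follows, with mutual-information ranking standing in for mRMR (which is not available in scikit-learn) and leave-one-session-out validation expressed through session groups; all data below are synthetic placeholders.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

    rng = np.random.default_rng(0)
    X = rng.normal(size=(140, 60))              # physiological features (synthetic)
    y = rng.integers(0, 5, size=140)            # 5 emotion classes (synthetic)
    sessions = np.repeat(np.arange(14), 10)     # 14 recording sessions

    model = make_pipeline(StandardScaler(),
                          SelectKBest(mutual_info_classif, k=15),
                          KNeighborsClassifier(n_neighbors=5))
    scores = cross_val_score(model, X, y, cv=LeaveOneGroupOut(), groups=sessions)
    print("leave-one-session-out accuracy: %.3f" % scores.mean())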
---
paper_title: Analysis of emotion recognition using facial expressions, speech and multimodal information
paper_content:
The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although several approaches have been proposed to recognize human emotions based on facial expressions or speech, relatively limited work has been done to fuse these two, and other, modalities to improve the accuracy and robustness of the emotion recognition system. This paper analyzes the strengths and the limitations of systems based only on facial expressions or acoustic information. It also discusses two approaches used to fuse these two modalities: decision level and feature level integration. Using a database recorded from an actress, four emotions were classified: sadness, anger, happiness, and neutral state. By the use of markers on her face, detailed facial motions were captured with motion capture, in conjunction with simultaneous speech recordings. The results reveal that the system based on facial expression gave better performance than the system based on just acoustic information for the emotions considered. Results also show the complementarily of the two modalities and that when these two modalities are fused, the performance and the robustness of the emotion recognition system improve measurably.
---
paper_title: Emotion Recognition based on Phoneme Classes
paper_content:
Recognizing human emotions/attitudes from speech cues has gained increased attention recently. Most previous work has focused primarily on suprasegmental prosodic features calculated at the utterance level for this purpose. Notably, not much attention is paid to details at the segmental phoneme level in the modeling. Based on the hypothesis that different emotions have varying effects on the properties of the different speech sounds, this paper investigates the usefulness of phoneme-level modeling for the classification of emotional states from speech. Hidden Markov models (HMM) based on short-term spectral features are used for this purpose, using data obtained from a recording of an actress expressing 4 different emotional states: anger, happiness, neutral, and sadness. We designed and compared two sets of HMM classifiers: a generic set of "emotional speech" HMMs (one for each emotion) and a set of broad phonetic-class based HMMs for each emotion type considered. Five broad phonetic classes were used to explore the effect of emotional coloring on different phoneme classes, and it was found that (spectral properties of) vowel sounds were the best indicator of emotions in terms of classification performance. The experiments also showed that the best performance can be obtained by using phoneme-class classifiers over a generic "emotional" HMM classifier and classifiers based on global prosodic features. To see the complementary effect of the prosodic and spectral features, the two classifiers were combined at the decision level. The improvement was 0.55% absolute compared with the result from the phoneme-class based HMM classifier.
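A sketch of the generic per-emotion HMM classifier described above, assuming the hmmlearn package and using random placeholder sequences in place of real MFCC frames (the broad-phonetic-class variant would additionally require a phone alignment):

```python
import numpy as np
from hmmlearn import hmm            # pip install hmmlearn

rng = np.random.default_rng(1)
EMOTIONS = ["anger", "happiness", "neutral", "sadness"]

def fake_mfcc_utterances(shift, n=20, t=80, d=13):
    """Placeholder MFCC sequences for one emotion class."""
    return [rng.normal(loc=shift, size=(t, d)) for _ in range(n)]

train = {e: fake_mfcc_utterances(i * 0.3) for i, e in enumerate(EMOTIONS)}

# One Gaussian HMM per emotion, trained on that emotion's frame sequences.
models = {}
for emo, utts in train.items():
    X, lengths = np.vstack(utts), [len(u) for u in utts]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=20, random_state=0)
    m.fit(X, lengths)
    models[emo] = m

def classify(utt):
    """Pick the emotion whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda e: models[e].score(utt))

test = fake_mfcc_utterances(3 * 0.3, n=5)      # generated like the "sadness" class
print([classify(u) for u in test])
```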
---
paper_title: Class-level spectral features for emotion recognition
paper_content:
The most common approaches to automatic emotion recognition rely on utterance-level prosodic features. Recent studies have shown that utterance-level statistics of segmental spectral features also contain rich information about expressivity and emotion. In our work we introduce a more fine-grained yet robust set of spectral features: statistics of Mel-Frequency Cepstral Coefficients computed over three phoneme type classes of interest - stressed vowels, unstressed vowels and consonants in the utterance. We investigate performance of our features in the task of speaker-independent emotion recognition using two publicly available datasets. Our experimental results clearly indicate that indeed both the richer set of spectral features and the differentiation between phoneme type classes are beneficial for the task. Classification accuracies are consistently higher for our features compared to prosodic or utterance-level spectral features. Combination of our phoneme class features with prosodic features leads to even further improvement. Given the large number of class-level spectral features, we expected feature selection will improve results even further, but none of several selection methods led to clear gains. Further analyses reveal that spectral features computed from consonant regions of the utterance contain more information about emotion than either stressed or unstressed vowel features. We also explore how emotion recognition accuracy depends on utterance length. We show that, while there is no significant dependence for utterance-level prosodic features, accuracy of emotion recognition using class-level spectral features increases with the utterance length.
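A sketch of the class-level spectral feature extraction, assuming a frame-level phoneme-class alignment is available (here a made-up array) and using librosa for the MFCCs; the dimensions and pooling statistics are illustrative:

```python
import numpy as np
import librosa                      # pip install librosa

def class_level_features(y, sr, frame_classes):
    """Mean/std of MFCCs pooled separately over stressed-vowel (0),
    unstressed-vowel (1) and consonant (2) frames; `frame_classes` holds one
    label per MFCC frame, e.g. derived from a forced alignment."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # (13, n_frames)
    feats = []
    for c in (0, 1, 2):
        sel = mfcc[:, frame_classes == c]
        if sel.size == 0:                                     # class absent
            feats.append(np.zeros(26))
        else:
            feats.append(np.concatenate([sel.mean(axis=1), sel.std(axis=1)]))
    return np.concatenate(feats)                              # 3 x 26 = 78 dims

# Toy 1-second "utterance" (noise) with a random stand-in alignment.
sr = 16000
y = np.random.default_rng(2).normal(size=sr).astype(np.float32)
n_frames = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).shape[1]
alignment = np.random.default_rng(3).integers(0, 3, size=n_frames)
print(class_level_features(y, sr, alignment).shape)           # (78,)
```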
---
paper_title: Recognizing emotion in speech
paper_content:
The paper explores several statistical pattern recognition techniques to classify utterances according to their emotional content. The authors have recorded a corpus containing emotional speech with over 1000 utterances from different speakers. They present a new method of extracting prosodic features from speech, based on a smoothing spline approximation of the pitch contour. To make maximal use of the limited amount of training data available, they introduce a novel pattern recognition technique: majority voting of subspace specialists. Using this technique, they obtain classification performance that is close to human performance on the task.
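A small sketch of the spline-based prosodic feature idea on a synthetic pitch contour; the smoothing factor and the derived statistics are illustrative, not those of the paper:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)

# Synthetic F0 contour (Hz) sampled every 10 ms, with measurement noise.
t = np.arange(0.0, 1.5, 0.01)
f0 = 180 + 40 * np.sin(2 * np.pi * 0.8 * t) + rng.normal(scale=8, size=t.size)

# Smoothing-spline approximation of the pitch contour and its slope.
spline = UnivariateSpline(t, f0, k=3, s=len(t) * 8.0 ** 2)
smooth, slope = spline(t), spline.derivative()(t)

# Utterance-level prosodic features taken from the smoothed contour.
features = {
    "f0_mean": smooth.mean(),
    "f0_range": smooth.max() - smooth.min(),
    "f0_slope_mean": slope.mean(),
    "f0_slope_abs_mean": np.abs(slope).mean(),
}
print({k: round(float(v), 2) for k, v in features.items()})
```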
---
paper_title: Techniques and applications of emotion recognition in speech
paper_content:
Affective computing opens a new area of research in computer science with the aim of improving the way humans and machines interact. Recognition of human emotions by machines is becoming a significant focus in recent research in different disciplines related to information sciences and Human-Computer Interaction (HCI). In particular, emotion recognition in human speech is important, as speech is the primary communication tool of humans. This paper gives a brief overview of the current state of the research in this area with the aim of underlining the different techniques that are being used for detecting emotional states in vocal expressions. Furthermore, approaches for extracting speech features from speech datasets and machine learning methods, with special emphasis on classifiers, are analysed. In addition to the mentioned techniques, this paper also gives an outline of the areas where emotion recognition could be utilised, such as healthcare, psychology, cognitive sciences and marketing.
---
paper_title: Feature Selection for Speech Emotion Recognition in Spanish and Basque: On the Use of Machine Learning to Improve Human-Computer Interaction
paper_content:
The study of emotions in human–computer interaction is a growing research area. This paper presents an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish using different methods for feature selection. The RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek the most relevant feature subset. The three-phase approach was selected to check the validity of the proposed approach. The achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between the two is provided. Based on the achieved results, a set of the most relevant non-speaker-dependent features is proposed for both languages and new perspectives are suggested.
---
paper_title: Detection and Analysis of Emotion From Speech Signals
paper_content:
Recognizing emotion from speech has become one of the active research themes in speech processing and in applications based on human-computer interaction. This paper conducts an experimental study on recognizing emotions from human speech. The emotions considered for the experiments include neutral, anger, joy and sadness. The distinguishability of emotional features in speech was studied first, followed by emotion classification performed on a custom dataset. The classification was performed with different classifiers. One of the main feature attributes considered in the prepared dataset was the peak-to-peak distance obtained from the graphical representation of the speech signals. After performing the classification tests on a dataset formed from 30 different subjects, it was found that for better accuracy, one should consider the data collected from one person rather than the data from a group of people.
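A toy sketch of the peak-to-peak feature idea using SciPy's peak finder on synthetic waveforms; the feature set and classifier are illustrative stand-ins for the paper's setup:

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)

def p2p_features(signal):
    """Summary statistics of inter-peak (peak-to-peak) distances in a waveform."""
    peaks, _ = find_peaks(signal, distance=20)
    gaps = np.diff(peaks)
    if gaps.size == 0:
        return np.zeros(3)
    return np.array([gaps.mean(), gaps.std(), gaps.max() - gaps.min()])

def toy_signal(rate):
    """Placeholder 'speech': a noisy oscillation whose rate differs by class."""
    t = np.linspace(0, 1, 8000)
    return np.sin(2 * np.pi * rate * t) + 0.3 * rng.normal(size=t.size)

# Two toy classes (say 'neutral' vs 'anger') separated by oscillation rate.
X = np.array([p2p_features(toy_signal(rate)) for rate in [50] * 20 + [80] * 20])
y = np.array([0] * 20 + [1] * 20)

clf = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])
print("accuracy on held-out half:", clf.score(X[1::2], y[1::2]))
```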
---
paper_title: The development of the Athens Emotional States Inventory (AESI): collection, validation and automatic processing of emotionally loaded sentences
paper_content:
Objectives. The development of ecologically valid procedures for collecting reliable and unbiased emotional data towards computer interfaces with social and affective intelligence targeting patients with mental disorders. Methods. Following its development, presented herewith, the Athens Emotional States Inventory (AESI) proposes the design, recording and validation of an audiovisual database for five emotional states: anger, fear, joy, sadness and neutral. The items of the AESI consist of sentences each having content indicative of the corresponding emotion. Emotional content was assessed through a survey of 40 young participants with a questionnaire following the Latin square design. The emotional sentences that were correctly identified by 85% of the participants were recorded in a soundproof room with microphones and cameras. A preliminary validation of the AESI is performed through automatic emotion recognition experiments from speech. Results. The resulting database contains 696 recorded utterances ...
---
paper_title: Comparison of the Efficiency of Time and Frequency Descriptors Based on Different Classification Conceptions
paper_content:
Extraction and detailed analysis of sound files using the MPEG-7 standard descriptors has been extensively explored. However, automatic description of the specific field of sounds of nature still needs intensive research. This publication presents a comparison of the effectiveness of time and frequency descriptors applied to the recognition of bird species by their voices. The results presented here are a continuation of the studies on this subject. Three different classification approaches (the WEKA system as a classical tool, a linguistically modelled fuzzy system and an artificial neural network) were used for testing the descriptors' effectiveness. The analysed sounds come from 10 different species of birds: Corn Crake, Hawk, Blackbird, Cuckoo, Lesser Whitethroat, Chiffchaff, Eurasian Pygmy Owl, Meadow Pipit, House Sparrow and Firecrest. For the analysis of the physical features of a song, MPEG-7 standard audio descriptors were used.
---
paper_title: Fuzzy System for the Classification of Sounds of Birds Based on the Audio Descriptors
paper_content:
This paper presents an application of fuzzy systems to the classification of sounds coded by selected MPEG-7 descriptors. The model of the fuzzy classification system is based on the audio descriptors for a few chosen species of birds: Great Spotted Woodpecker, Greylag, Goldfinch, Chaffinch. The paper proposes two fuzzy models that differ in the description of the input linguistic variables. The results show that both approaches are effective; however, the second one is more flexible with respect to future extension of the model with additional descriptors or species of birds.
---
paper_title: Facial expression and Emotion
paper_content:
Cross-cultural research on facial expression and the developments of methods to measure facial expression are briefly summarized. What has been learned about emotion from this work on the face is then elucidated. Four questions about facial expression and emotion are discussed. What information does an expression typically convey? Can there be emotion without facial expression? Can there be a facial expression of emotion without emotion? How do individuals differ in their facial expressions of emotion?
---
paper_title: The Human Face as a Dynamic Tool for Social Communication
paper_content:
As a highly social species, humans frequently exchange social information to support almost all facets of life. One of the richest and most powerful tools in social communication is the face, from which observers can quickly and easily make a number of inferences — about identity, gender, sex, age, race, ethnicity, sexual orientation, physical health, attractiveness, emotional state, personality traits, pain or physical pleasure, deception, and even social status. With the advent of the digital economy, increasing globalization and cultural integration, understanding precisely which face information supports social communication and which produces misunderstanding is central to the evolving needs of modern society (for example, in the design of socially interactive digital avatars and companion robots). Doing so is challenging, however, because the face can be thought of as comprising a high-dimensional, dynamic information space, and this impacts cognitive science and neuroimaging, and their broader applications in the digital economy. New opportunities to address this challenge are arising from the development of new methods and technologies, coupled with the emergence of a modern scientific culture that embraces cross-disciplinary approaches. Here, we briefly review one such approach that combines state-of-the-art computer graphics, psychophysics and vision science, cultural psychology and social cognition, and highlight the main knowledge advances it has generated. In the light of current developments, we provide a vision of the future directions in the field of human facial communication within and across cultures.
---
paper_title: Perception of emotional expressions in different representations using facial feature points
paper_content:
Facial expression recognition is an enabling technology for affective computing. Many existing facial expression analysis systems rely on automatically tracked facial feature points. Although psychologists have studied emotion perception from manually specified or marker-based point-light displays, no formal study exists on the amount of emotional information conveyed through automatically tracked feature points. We assess the utility of automatically extracted feature points in conveying emotions for posed and naturalistic data and present results from an experiment that compared human raters' judgements of emotional expressions between actual video clips and three automatically generated representations of them. The implications for optimal face representation and creation of realistic animations are discussed.
---
paper_title: Monitoring chronic disease at home using connected devices
paper_content:
The purpose of this research is to study the impact of the use of connected devices on cardiovascular diseases. In this paper, we explain our experimental methodology as well as the first outcomes. Three connected objects are being used for this experiment: a heart rate monitor belt, a tensiometer and a smartwatch. This communication intends to explain the methodology and the procedure that are being used to conduct this project. Our main objective is to monitor participants during their daily routine, and to record and collect data continuously. This health care monitoring is a case study of system-of-systems engineering, within which we manage interactions between three aspects: humans, the environment and sensors. The idea, then, is to study the different relations and correlations existing between variables coming from those aspects. An important part of this work takes into consideration the participant's emotional state (stress, happiness, sadness, among others) and analyses it using the appropriate artificial intelligence tools. Being able to detect the participant's emotion, categorize it and analyse its impact on cardiovascular disease is the main goal of this work.
---
paper_title: Multimodal emotion recognition by combining physiological signals and facial expressions: A preliminary study
paper_content:
Lately, multimodal approaches for automatic emotion recognition have gained significant scientific interest. In this paper, emotion recognition by combining physiological signals and facial expressions was studied. Heart rate variability parameters, respiration frequency, and facial expressions were used to classify a person's emotions while watching pictures with emotional content. Three classes were used for both valence and arousal. The preliminary results show that, over the proposed channels, detecting arousal seems to be easier than detecting valence. While a classification performance of 54.5% was attained for arousal, only 38.0% of the samples were classified correctly in terms of valence. In the future, additional modalities as well as feature selection will be utilized to improve the results.
---
paper_title: Emotion recognition based on EEG features in movie clips with channel selection
paper_content:
Emotion plays an important role in human interaction. People can express their emotions in terms of words, voice intonation, facial expression, and body language. However, brain–computer interface (BCI) systems have not reached the desired level to interpret emotions. Automatic emotion recognition based on BCI systems has been a topic of great research in the last few decades. Electroencephalogram (EEG) signals are one of the most crucial resources for these systems. The main advantage of using EEG signals is that they reflect real emotion and can easily be processed by computer systems. In this study, EEG signals related to positive and negative emotions have been classified with preprocessing of channel selection. The Self-Assessment Manikin was used to determine emotional states. We have employed the discrete wavelet transform and machine learning techniques such as a multilayer perceptron neural network (MLPNN) and the k-nearest neighbor (kNN) algorithm to classify EEG signals. The classifier algorithms were initially used for channel selection. EEG channels for each participant were evaluated separately, and the five EEG channels that offered the best classification performance were determined. Thus, final feature vectors were obtained by combining the features of EEG segments belonging to these channels. The final feature vectors with related positive and negative emotions were classified separately using the MLPNN and kNN algorithms. The classification performances obtained with both algorithms were computed and compared. The average overall accuracies were obtained as 77.14 and 72.92% by using MLPNN and kNN, respectively.
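A compact sketch of the channel-selection pipeline on simulated EEG: wavelet-band energies are computed per channel, channels are ranked by their individual classification accuracy, and the best channels' features are concatenated (all parameters and data are illustrative):

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
n_trials, n_channels, n_samples = 120, 32, 512

# Simulated EEG segments for two classes; only a few channels are informative.
X_raw = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)
for ch in (3, 7, 12, 20, 25):
    X_raw[y == 1, ch, :] += 0.4

def dwt_energy(sig, wavelet="db4", level=4):
    """Energy of each wavelet sub-band of one channel's segment."""
    return np.array([np.sum(c ** 2) for c in pywt.wavedec(sig, wavelet, level=level)])

feats = np.array([[dwt_energy(X_raw[i, ch]) for ch in range(n_channels)]
                  for i in range(n_trials)])          # (trials, channels, bands)

# Score each channel on its own and keep the five best-performing ones.
clf = KNeighborsClassifier(n_neighbors=5)
scores = [cross_val_score(clf, feats[:, ch, :], y, cv=3).mean()
          for ch in range(n_channels)]
best = np.argsort(scores)[-5:]
print("selected channels:", sorted(best.tolist()))

# Final feature vectors: concatenated band energies of the selected channels.
X_final = feats[:, best, :].reshape(n_trials, -1)
print("final CV accuracy:", cross_val_score(clf, X_final, y, cv=5).mean())
```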
---
paper_title: Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals
paper_content:
The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are: Electroencephalogram (EEG) (32 channels) and peripheral (8 channels: Galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We have discussed the theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and proposed a three-dimensional continuous representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments are performed to classify different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers respectively. The best accuracy is for 'Depressing' with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may contain supplementary information. In comparison with the results given by others, the high accuracy of 85% with 13 emotions and 32 subjects obtained by our proposed method clearly proves the potential of our multimodal fusion approach.
---
paper_title: Emotion recognition from physiological signals
paper_content:
Emotion recognition is one of the great challenges in human–human and human–computer interaction. Accurate emotion recognition would allow computers to recognize human emotions and react accordingly. In this paper, an approach for emotion recognition based on physiological signals is proposed. Six basic emotions: joy, sadness, fear, disgust, neutrality and amusement are analysed using physiological signals. These emotions are induced through the presentation of International Affective Picture System (IAPS) pictures to the subjects. The physiological signals of interest in this analysis are: electromyogram signal (EMG), respiratory volume (RV), skin temperature (SKT), skin conductance (SKC), blood volume pulse (BVP) and heart rate (HR). These are selected to extract characteristic parameters, which will be used for classifying the emotions. The SVM (support vector machine) technique is used for classifying these parameters. The experimental results show that the proposed methodology provides in g...
---
paper_title: Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals
paper_content:
A new dynamic fusion method for designing an emotion recognition system is proposed. A weight is assigned to each classifier based on its performance. The performance of the classifiers during the training and testing phases is considered. Static weights in varying contexts such as emotions do not produce acceptable results. Dynamic weighting strategy improves the performance of the system considerably. In this study, we proposed a new adaptive method for fusing multiple emotional modalities to improve the performance of the emotion recognition system. Three-channel forehead biosignals along with peripheral physiological measurements (blood volume pressure, skin conductance, and interbeat intervals) were utilized as emotional modalities. Six basic emotions, i.e., anger, sadness, fear, disgust, happiness, and surprise were elicited by displaying preselected video clips for each of the 25 participants in the experiment; the physiological signals were collected simultaneously. In our multimodal emotion recognition system, recorded signals with the formation of several classification units identified the emotions independently. Then the results were fused using the adaptive weighted linear model to produce the final result. Each classification unit is assigned a weight that is determined dynamically by considering the performance of the units during the testing phase and the training phase results. This dynamic weighting scheme enables the emotion recognition system to adapt itself to each new user. The results showed that the suggested method outperformed conventional fusion of the features and classification units using the majority voting method. In addition, a considerable improvement, compared to the systems that used the static weighting schemes for fusing classification units, was also shown. Using support vector machine (SVM) and k-nearest neighbors (KNN) classifiers, the overall classification accuracies of 84.7% and 80% were obtained in identifying the emotions, respectively.
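A simplified sketch of performance-weighted decision fusion in the spirit of the scheme above: each modality-specific unit is weighted by its validation accuracy and the weighted class probabilities are summed (the data, the weighting rule and the modalities are stand-ins, and the paper's test-phase adaptation is omitted):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Three stand-in "modalities" (e.g. forehead biosignals, skin conductance, IBI),
# each seen only by its own classification unit.
X, y = make_classification(n_samples=600, n_features=15, n_informative=9,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
modalities = [X[:, 0:5], X[:, 5:10], X[:, 10:15]]

units, weights = [], []
for Xm in modalities:
    Xtr, Xval, ytr, yval = train_test_split(Xm, y, test_size=0.3, random_state=0)
    clf = SVC(probability=True, random_state=0).fit(Xtr, ytr)
    units.append(clf)
    weights.append(clf.score(Xval, yval))        # unit weight = validation accuracy
weights = np.array(weights) / np.sum(weights)

def fuse(sample_per_modality):
    """Weighted linear fusion of the units' class-probability outputs."""
    probs = [w * clf.predict_proba(x.reshape(1, -1))[0]
             for w, clf, x in zip(weights, units, sample_per_modality)]
    return int(np.argmax(np.sum(probs, axis=0)))

i = 0                                            # fuse the three views of sample 0
print("fused prediction:", fuse([m[i] for m in modalities]), "| true label:", y[i])
```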
---
paper_title: A Hybrid Model for Automatic Emotion Recognition in Suicide Notes
paper_content:
We describe the Open University team's submission to the 2011 i2b2/VA/Cincinnati Medical Natural Language Processing Challenge, Track 2 Shared Task for sentiment analysis in suicide notes. This Shared Task focused on the development of automatic systems that identify, at the sentence level, affective text of 15 specific emotions from suicide notes. We propose a hybrid model that incorporates a number of natural language processing techniques, including lexicon-based keyword spotting, CRF-based emotion cue identification, and machine learning-based emotion classification. The results generated by different techniques are integrated using different vote-based merging strategies. The automated system performed well against the manually-annotated gold standard, and achieved encouraging results with a micro-averaged F-measure score of 61.39% in textual emotion recognition, which was ranked 1st place out of 24 participant teams in this challenge. The results demonstrate that effective emotion recognition by an automated system is possible when a large annotated corpus is available.
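A much-reduced sketch of the hybrid idea (the CRF-based cue identification stage is omitted): a lexicon keyword spotter and a bag-of-words classifier each label a sentence, and a simple merge rule produces the final emotion; the lexicon and sentences are invented, not the i2b2 data:

```python
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented lexicon and training sentences.
LEXICON = {"hopelessness": ["hopeless", "pointless", "no way out"],
           "love": ["love", "dear", "darling"],
           "guilt": ["sorry", "forgive", "my fault"]}

train_sents = ["i am so sorry for everything", "i love you all dearly",
               "everything feels pointless now", "please forgive me",
               "my darling i love you", "there is no way out"]
train_labels = ["guilt", "love", "hopelessness", "guilt", "love", "hopelessness"]

ml_clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
ml_clf.fit(train_sents, train_labels)

def keyword_vote(sentence):
    hits = Counter(label for label, keys in LEXICON.items()
                   for k in keys if k in sentence.lower())
    return hits.most_common(1)[0][0] if hits else None

def hybrid_label(sentence):
    """Merge rule: agreement wins; otherwise trust the spotter when it fires,
    else fall back to the statistical classifier."""
    kw, ml = keyword_vote(sentence), ml_clf.predict([sentence])[0]
    return ml if kw is None or kw == ml else kw

for s in ["i am sorry it is all my fault", "it all feels so hopeless"]:
    print(s, "->", hybrid_label(s))
```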
---
paper_title: OpenEAR — Introducing the munich open-source emotion and affect recognition toolkit
paper_content:
Various open-source toolkits exist for speech recognition and speech processing. These toolkits have brought a great benefit to the research community, i.e. speeding up research. Yet, no such freely available toolkit exists for automatic affect recognition from speech. We herein introduce a novel open-source affect and emotion recognition engine, which integrates all necessary components in one highly efficient software package. The components include audio recording and audio file reading, state-of-the-art paralinguistic feature extraction and pluggable classification modules. In this paper we introduce the engine and extensive baseline results. Pre-trained models for four affect recognition tasks are included in the openEAR distribution. The engine is tailored for multi-threaded, incremental on-line processing of live input in real time; however, it can also be used for batch processing of databases.
---
paper_title: The EU-Emotion Stimulus Set: A validation study.
paper_content:
The EU-Emotion Stimulus Set is a newly developed collection of dynamic multimodal emotion and mental state representations. A total of 20 emotions and mental states are represented through facial expressions, vocal expressions, body gestures and contextual social scenes. This emotion set is portrayed by a multi-ethnic group of child and adult actors. Here we present the validation results, as well as participant ratings of the emotional valence, arousal and intensity of the visual stimuli from this emotion stimulus set. The EU-Emotion Stimulus Set is available for use by the scientific community and the validation data are provided as a supplement available for download.
---
paper_title: Affective and Behavioural Computing: Lessons Learnt from the First Computational Paralinguistics Challenge
paper_content:
In this article, we review the INTERSPEECH 2013 Computational Paralinguistics ChallengE (ComParE), the first of its kind, in light of the recent developments in affective and behavioural computing. The impact of the first ComParE instalment is manifold: first, it featured various new recognition tasks including social signals such as laughter and fillers, conflict in dyadic group discussions, and atypical communication due to pervasive developmental disorders, as well as enacted emotion; second, it marked the onset of the ComParE, subsuming all tasks investigated hitherto within the realm of computational paralinguistics; finally, besides providing a unified test-bed under well-defined and strictly comparable conditions, we present the definite feature vector used for computation of the baselines, thus laying the foundation for a successful series of follow-up Challenges. Starting with a review of the preceding INTERSPEECH Challenges, we present the four Sub-Challenges of ComParE 2013. In particular, we provide details of the Challenge databases and a meta-analysis by conducting experiments of logistic regression on single features and evaluating the performances achieved by the participants.
---
| Title: Survey on AI-Based Multimodal Methods for Emotion Detection
Section 1: Introduction
Description 1: This section introduces the field of affective computing, its significance, the need for systems sensitive to human emotions, and the motivation for using multimodal approaches for emotion detection.
Section 2: Multimodal Emotion Analysis
Description 2: This section provides an overview of traditional and advanced techniques used in multimodal emotion analysis, the challenges faced, and notable datasets and methods.
Section 3: Emotion in Text
Description 3: This section discusses the process of identifying emotions in text through opinion mining and sentiment analysis, detailing methodologies and challenges associated with text-based emotion detection.
Section 4: Emotion Detection in the Sound
Description 4: This section explains emotion recognition based on audio signals, including prosodic features and speech analysis, and addresses the technical challenges and methodologies involved.
Section 5: Emotion in Image and Video
Description 5: This section focuses on detecting emotions through nonverbal cues such as facial expressions and gestures captured in images and videos, describing various tools and techniques used for analysis.
Section 6: Existing Tools for Automatic Facial Emotion Recognition
Description 6: This section lists and describes some of the notable tools and software packages available for automatic facial emotion recognition.
Section 7: Emotion Detected by the Physiological and Motor Signals
Description 7: This section explores emotion detection through physiological signals and motor data, detailing the relevant signals, methodologies, and accuracy of various detection techniques.
Section 8: Conclusion
Description 8: This section summarizes the survey, underlining the potential and existing limitations of AI-based multimodal emotion detection methods, and suggests directions for future research. |
Survey of Uncertainty Handling in Cloud Service Discovery and Composition | 6 | ---
paper_title: The Uncertainty of Valuation
paper_content:
Valuation is often said to be "an art not a science", but this relates to the techniques employed to calculate value, not to the underlying concept itself. Valuation is the process of estimating price in the market place. Yet such an estimation will be affected by uncertainties. These input uncertainties will translate into uncertainty in the output figure, the valuation. The degree of the uncertainties will vary according to the level of market activity; the more active a market, the more credence will be given to the input information. In the UK at the moment the Royal Institution of Chartered Surveyors (RICS) is considering ways in which the uncertainty of the valuation can be conveyed to the user of the valuation, but as yet no definitive view has been taken apart from a single Guidance Note (GN5). One of the major problems is that valuation models (in the UK) are based on comparable information and rely on single inputs. They are not probability-based, yet uncertainty is probability driven. This paper discusses the issues underlying uncertainty in valuations and suggests a probability-based model (using Crystal Ball) to address the shortcomings of the current model.
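A minimal Monte Carlo sketch of the kind of probability-based valuation model the paper argues for; the input distributions and the income-capitalisation formula are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Illustrative input uncertainty: market rent and all-risks yield of a property.
rent = rng.triangular(left=90_000, mode=100_000, right=115_000, size=n)  # per year
cap_rate = rng.normal(loc=0.065, scale=0.005, size=n)                    # yield

value = rent / cap_rate            # simple income-capitalisation valuation

lo, med, hi = np.percentile(value, [5, 50, 95])
print(f"median value:    {med:,.0f}")
print(f"90% interval:    {lo:,.0f} - {hi:,.0f}")
print(f"P(value < 1.4m): {np.mean(value < 1_400_000):.1%}")
```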
---
paper_title: Quantitative Risk Analysis for Mobile Cloud Computing: A Preliminary Approach and a Health Application Case Study
paper_content:
Mobile cloud computing is presented as the next logical step in the adoption of cloud based systems. However, there are a number of issues inherent in it that may limit its uptake. These issues are best understood as "risks" which span the whole structure and the life cycle of mobile clouds and could be as varied as security, operations, performance, and end-user behaviours. This paper is a first step in developing a quantitative risk model suitable for a dynamic mobile cloud environment. We also use this model to analyse a mobile cloud based health application and report our findings, which have implications for cloud computing as a whole.
---
paper_title: Cloud Computing: The New Frontier of Internet Computing
paper_content:
Cloud computing is a new field in Internet computing that provides novel perspectives in internetworking technologies and raises issues in the architecture, design, and implementation of existing networks and data centers. The relevant research has just recently gained momentum, and the space of potential ideas and solutions is still far from being widely explored.
---
paper_title: Dynamic audit services for integrity verification of outsourced storages in clouds
paper_content:
In this paper, we propose a dynamic audit service for verifying the integrity of untrusted and outsourced storage. Our audit service is constructed based on the techniques of fragment structure, random sampling and index-hash tables, supporting provable updates to outsourced data and timely anomaly detection. In addition, we propose a probabilistic query and periodic verification for improving the performance of audit services. Our experimental results not only validate the effectiveness of our approaches, but also show that our audit system verifies integrity with lower computation overhead, requiring less extra storage for audit metadata.
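A worked sketch of why random sampling gives strong integrity assurance in such audit schemes: the probability that a sample of c blocks hits at least one of t corrupted blocks out of n follows a hypergeometric argument (the block counts below are illustrative, not from the paper):

```python
from math import comb

def detection_probability(n_blocks, n_corrupted, n_sampled):
    """P(sample hits at least one corrupted block) = 1 - C(n-t, c) / C(n, c)."""
    if n_sampled > n_blocks - n_corrupted:
        return 1.0
    return 1 - comb(n_blocks - n_corrupted, n_sampled) / comb(n_blocks, n_sampled)

n = 10_000                          # outsourced file split into 10,000 blocks
t = 100                             # 1% of the blocks silently corrupted
for c in (100, 300, 460):
    p = detection_probability(n, t, c)
    print(f"sample {c:4d} blocks -> detection probability {p:.3f}")
```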
---
paper_title: An Effective Architecture for Automated Appliance Management System Applying Ontology-Based Cloud Discovery
paper_content:
Cloud computing is a computing paradigm which allows on-demand access to computing elements and storage over the Internet. Virtual appliances, i.e., pre-configured, ready-to-run applications, are emerging as a breakthrough technology to solve the complexities of service deployment on Cloud infrastructure. However, an automated approach to deploying the required appliances on the most suitable Cloud infrastructure has been neglected by previous works and is the focus of this work. In this paper, we propose an effective architecture using ontology-based discovery to provide QoS-aware deployment of appliances on Cloud service providers. In addition, we test our approach on a case study and the results show the efficiency and effectiveness of the proposed work.
---
paper_title: Environmental Modeling and Health Risk Analysis (Acts/Risk)
paper_content:
1. Introduction 2. Principles of Environmental Modeling 3. Conservation Principles, and Environmental Transformation and Transport 4. Air Pathway Analysis 5. Groundwater Pathway Analysis 6. Surface Water Pathway Analysis 7. Uncertainty and Variability Analysis 8. Health Risk Analysis 9. Application: Pesticide Transport in Shallow Groundwater and Environmental Risk Assessment APPENDIX A. Definitions of Acronyms and Abbreviations APPENDIX B. Environmental Modeling and Exposure Analysis Terms APPENDIX C. Definitions and Operations of the ACTS/RISK Software APPENDIX D. MCL levels of contaminants APPENDIX E. Conversion tables and Properties of Water
---
paper_title: A Generalization of Bayesian Inference
paper_content:
Procedures of statistical inference are described which generalize Bayesian inference in specific ways. Probability is used in such a way that in general only bounds may be placed on the probabilities of given events, and probability systems of this kind are suggested both for sample information and for prior information. These systems are then combined using a specified rule. Illustrations are given for inferences about trinomial probabilities, and for inferences about a monotone sequence of binomial pi. Finally, some comments are made on the general class of models which produce upper and lower probabilities, and on the specific models which underlie the suggested inference procedures.
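A small sketch of the belief-function machinery commonly associated with this line of work: Dempster's rule combines two mass functions over a frame of discernment, and belief/plausibility then play the role of lower/upper probabilities (the frame and the numbers are invented for illustration):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions given as {frozenset: mass}."""
    raw, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

def belief(m, h):        # lower probability of hypothesis h
    return sum(v for s, v in m.items() if s <= h)

def plausibility(m, h):  # upper probability of hypothesis h
    return sum(v for s, v in m.items() if s & h)

# Frame of discernment: a service is reliable (R) or not (N).
R, N, RN = frozenset("R"), frozenset("N"), frozenset("RN")
m_sample = {R: 0.6, RN: 0.4}            # evidence from sample information
m_prior = {R: 0.5, N: 0.2, RN: 0.3}     # evidence from prior information

m = combine(m_sample, m_prior)
print("Bel(R) =", round(belief(m, R), 3), " Pl(R) =", round(plausibility(m, R), 3))
```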
---
paper_title: Possibility theory and statistical reasoning
paper_content:
Numerical possibility distributions can encode special convex families of probability measures. The connection between possibility theory and probability theory is potentially fruitful in the scope of statistical reasoning when uncertainty due to variability of observations should be distinguished from uncertainty due to incomplete information. This paper proposes an overview of numerical possibility theory. Its aim is to show that some notions in statistics are naturally interpreted in the language of this theory. First, probabilistic inequalities (like Chebychev's) offer a natural setting for devising possibility distributions from poor probabilistic information. Moreover, likelihood functions obey the laws of possibility theory when no prior probability is available. Possibility distributions also generalize the notion of confidence or prediction intervals, shedding some light on the role of the mode of asymmetric probability densities in the derivation of maximally informative interval substitutes of probabilistic information. Finally, the simulation of fuzzy sets comes down to selecting a probabilistic representation of a possibility distribution, which coincides with the Shapley value of the corresponding consonant capacity. This selection process is in agreement with Laplace's indifference principle and is closely connected with the mean interval of a fuzzy interval. It sheds light on the "defuzzification" process in fuzzy set theory and provides a natural definition of a subjective possibility distribution that sticks to the Bayesian framework of exchangeable bets. Potential applications to risk assessment are pointed out.
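A sketch of the standard probability-to-possibility transformation this line of work relies on: outcomes are ranked by decreasing probability and each receives the corresponding tail sum as its possibility degree, giving the tightest possibility distribution that dominates the probability measure:

```python
import numpy as np

def prob_to_poss(p):
    """Optimal probability-to-possibility transformation: with p sorted in
    decreasing order, pi_i is the tail sum from i on; tied probabilities
    receive the same possibility degree."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(-p)
    sorted_p = p[order]
    pi_sorted = np.cumsum(sorted_p[::-1])[::-1].copy()   # tail sums
    for i in range(1, len(p)):                           # handle ties
        if sorted_p[i] == sorted_p[i - 1]:
            pi_sorted[i] = pi_sorted[i - 1]
    pi = np.empty_like(p)
    pi[order] = pi_sorted
    return pi

p = [0.5, 0.3, 0.1, 0.1]
print("p  =", p)
print("pi =", prob_to_poss(p))    # [1.0, 0.5, 0.2, 0.2]; pi >= p and max(pi) = 1
```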
---
paper_title: When upper probabilities are possibility measures
paper_content:
Abstract A characteristic property is given for a pair of upper and lower probabilities (induced by lower probability bounds on a finite set of events) to coincide with possibility and necessity measures. Approximations of upper probabilities by possibility measures are discussed. The problem of combining possibility distributions viewed as upper probabilities is investigated, and the basic fuzzy set intersections are justified in this framework.
---
paper_title: Evolution in Relation to Risk and Trust Management
paper_content:
In this paper, it is argued that methodology within risk and trust management in general, and risk and trust assessment in particular, is not well equipped to address trust issues in evolution.
---
paper_title: A QoS-aware service discovery method for elastic cloud computing in an unstructured peer-to-peer network
paper_content:
Traditionally, service discovery is often promoted by the centralized approach, which typically suffers from a single point of failure, poor reliability and poor scalability, to name a few issues. In view of this challenge, a QoS-aware service discovery method is investigated for elastic cloud computing in an unstructured peer-to-peer network in this paper. Concretely speaking, the method is deployed in two phases, that is, a service registering phase and a service discovery phase. More specifically, a peer node engaged in the unstructured peer-to-peer network first registers its functional and nonfunctional information with its neighbors in a flooding way. With the registered information, QoS-aware service discovery is promoted in a probabilistic flooding way according to the network traffic. At last, extensive simulations are conducted to evaluate the feasibility of our method.
---
paper_title: A Trustworhty Model for Reliable Cloud Service Discovery
paper_content:
Cloud computing is a new model for delivering new applications and services. Its adoption is gaining ground because most of the services provided by the cloud are low cost and readily available for use. Despite many promises by cloud service providers, users remain much concerned about the general risk associated with the adoption of the cloud. The availability of many cloud service providers on the one hand promotes competition in the cloud market and gives end users more freedom to choose the best cloud provider; however, it has become a tedious and time-consuming task for potential cloud users to evaluate and compare the available cloud offerings in the market. Hence, discovering a reliable service is a daunting task. This research proposes a trustworthy model for reliable cloud service discovery.
---
paper_title: A QoS-Satisfied Prediction Model for Cloud-Service Composition Based on a Hidden Markov Model
paper_content:
Various significant issues in cloud computing, such as service provision, service matching, and service assessment, have attracted researchers’ attention recently. Quality of service (QoS) plays an increasingly important role in the provision of cloud-based services, by aiming for the seamless and dynamic integration of cloud-service components. In this paper, we focus on QoS-satisfied predictions about the composition of cloud-service components and present a QoS-satisfied prediction model based on a hidden Markov model. In providing a cloud-based service for a user, if the user’s QoS cannot be satisfied by a single cloud-service component, component composition should be considered, where its QoS-satisfied capability needs to be proactively predicted to be able to guarantee the user’s QoS. We discuss the proposed model in detail and prove some aspects of the model. Simulation results show that our model can achieve high prediction accuracies.
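A minimal discrete-HMM sketch of the prediction idea: given an assumed transition/emission model of a composed service's hidden QoS state and a sequence of observed outcomes, the forward algorithm yields the probability that the next invocation will be QoS-satisfied (all matrices are illustrative, not taken from the paper):

```python
import numpy as np

# Hidden QoS states of the composed service: 0 = healthy, 1 = degraded.
A = np.array([[0.9, 0.1],        # state transition probabilities
              [0.3, 0.7]])
B = np.array([[0.95, 0.05],      # P(observation | state); obs 0 = QoS met, 1 = violated
              [0.40, 0.60]])
pi0 = np.array([0.8, 0.2])       # initial state distribution

def filtered_state(observations):
    """Forward algorithm: posterior over the hidden state after the observations."""
    alpha = pi0 * B[:, observations[0]]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha / alpha.sum()

obs = [0, 0, 1, 1, 0, 1]                       # recent invocation outcomes
state = filtered_state(obs)
p_next_ok = float(state @ A @ B[:, 0])         # predict the next invocation
print("P(degraded now)       =", round(float(state[1]), 3))
print("P(next QoS satisfied) =", round(p_next_ok, 3))
```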
---
paper_title: Cloud Architecture for Dynamic Service Composition
paper_content:
Service composition provides value-adding services through composing basic Web services, which may be provided by various organizations. Cloud computing presents an efficient managerial, on-demand, and scalable way to integrate computational resources: hardware, platform, and software. However, existing Cloud architecture lacks the middleware layer needed to enable dynamic service composition. To enable and accelerate on-demand service composition, the authors explore the paradigm of dynamic service composition in the Cloud for Pervasive Service Computing environments and propose a Cloud-based Middleware for Dynamic Service Composition (CM4SC). In this approach, the authors introduce the CM4SC 'Composition as a Service' middleware layer into conventional Cloud architecture to allow automatic composition planning, service discovery and service composition. The authors implement the CM4SC middleware prototype utilizing the Windows Azure Cloud platform. The prototype demonstrates the feasibility of CM4SC for accelerating dynamic service composition and shows that the CM4SC middleware-accelerated Cloud architecture offers a novel way of realizing dynamic service composition.
---
paper_title: A multi-dimensional trust-aware cloud service selection mechanism based on evidential reasoning approach
paper_content:
In the last few years, cloud computing as a new computing paradigm has gone through significant development, but it is also facing many problems. One of them is the cloud service selection problem. As increasingly many cloud services are offered through the Internet and some of them may be unreliable or even malicious, how to select trustworthy cloud services for cloud users is a big challenge. In this paper, we propose a multi-dimensional trust-aware cloud service selection mechanism based on the evidential reasoning (ER) approach that integrates both perception-based trust values and reputation-based trust values, which are derived from direct and indirect trust evidence respectively, to identify trustworthy services. Here, multi-dimensional trust evidence, which reflects the trustworthiness of cloud services from different aspects, is elicited in the form of historical users' feedback ratings. Then, the ER approach is applied to aggregate the multi-dimensional trust ratings to obtain the real-time trust value and select the most trustworthy cloud service of a certain type for the active users. Finally, the fresh feedback from the active users will update the trust evidence for other service users in the future.
---
paper_title: Dynamic service composition enabled by introspective agent coordination
paper_content:
Service composition has received much interest from many research communities. The major research efforts published to date propose the use of service orchestration to model this problem. However, the designed orchestration approaches are static since they follow a predefined plan specifying the services to be composed and their data flow, and most of them are centralized around a composition engine. Further, task decomposition is performed prior to service composition, whereas it should be performed according to the available competencies. In order to overcome these limitations, we propose a dynamic approach to service composition. The studied approach relies on the decentralized and autonomous collaboration of a set of services whose aim is to achieve a specific goal. In our work, this goal is to satisfy requirements for software services that are freely expressed by human users (i.e. not predefined through a composition plan). We propose to enable the service collaborations through a multi-agent coordination protocol. In our model, agents offer services and are endowed with introspective capabilities, that is, they can access and reason on their state and actions at runtime. Thereby, the agents are capable of decomposing a monolithic task according to their service skills, and dynamically coordinating with their acquaintances to cover the whole task achievement. This paper presents the adaptive agent-based approach we propose for dynamic service composition, describes its architecture, specifies the underlying coordination protocol, called omposer, verifies the protocol's main properties, and validates it by unfolding an implemented scenario.
---
paper_title: What's inside the Cloud? An architectural map of the Cloud landscape
paper_content:
We propose an integrated Cloud computing stack architecture to serve as a reference point for future mash-ups and comparative studies. We also show how the existing Cloud landscape maps into this architecture and identify an infrastructure gap that we plan to address in future work.
---
paper_title: Compatibility-Aware Cloud Service Composition under Fuzzy Preferences of Users
paper_content:
When a single Cloud service (i.e., a software image and a virtual machine), on its own, cannot satisfy all the user requirements, a composition of Cloud services is required. Cloud service composition, which includes several tasks such as discovery, compatibility checking, selection, and deployment, is a complex process, and users find it difficult to select the best one among the hundreds, if not thousands, of possible compositions available. Service composition in the Cloud raises new challenges caused by the diversity of users with different expertise requiring their applications to be deployed across different geographical locations with distinct legal constraints. The main difficulty lies in selecting a combination of virtual appliances (software images) and infrastructure services that are compatible and satisfy a user with vague preferences. Therefore, we present a framework and algorithms which simplify Cloud service composition for unskilled users. We develop an ontology-based approach to analyze Cloud service compatibility by applying reasoning on the expert knowledge. In addition, to minimize the effort of users in expressing their preferences, we apply a combination of evolutionary algorithms and fuzzy logic for composition optimization. This lets users express their needs in linguistic terms, which brings great comfort to them compared to systems that force users to assign exact weights to all preferences.
---
paper_title: Reasoning With Partially Ordered Information in a Possibilistic Logic Framework
paper_content:
In many applications, the reliability relation associated with available information is only partially defined, while most existing uncertainty frameworks deal with totally ordered pieces of knowledge. Partial pre-orders offer more flexibility than total pre-orders to represent incomplete knowledge. Possibilistic logic, which is an extension of classical logic, deals with totally ordered information. It offers a natural qualitative framework for handling uncertain information. Priorities are encoded by means of weighted formulas, where weights are lower bounds of necessity measures. This paper proposes an extension of possibilistic logic for dealing with partially ordered pieces of knowledge. We show that there are two different ways to define a possibilistic logic machinery, which both extend the standard one.
---
| Title: Survey of Uncertainty Handling in Cloud Service Discovery and Composition
Section 1: CLOUD COMPUTING SERVICE
Description 1: This section discusses the different types of cloud services, including IaaS, SaaS, and PaaS, detailing their functionalities and the level of user control and flexibility they offer.
Section 2: UNCERTAINTY THEORY AND CLOUD RISK MODELING
Description 2: This section introduces uncertainty theory and cloud risk modeling, explaining the different theories related to uncertainty (probability, belief function, and possibility theories) and their application to cloud risk management.
Section 3: CLOUD SERVICE DISCOVERY
Description 3: This section explores methods and challenges in cloud service discovery, focusing on how trust and uncertainty affect the process and presenting various approaches and models used to enhance service discovery.
Section 4: CLOUD SERVICE COMPOSITION
Description 4: This section examines service composition in the cloud, differentiating between static and dynamic compositions, and discusses the impact of uncertainty on the reliability and quality of composite services.
Section 5: DISCUSSION
Description 5: This section provides an analysis of the current challenges and limitations in cloud service discovery and composition, highlighting the importance of managing uncertainty and comparing different approaches.
Section 6: CONCLUSIONS
Description 6: This section summarizes the findings of the survey, emphasizing the significance of uncertainty handling in cloud service discovery and composition, and suggesting directions for future research. |
Technical Privacy Metrics: a Systematic Survey | 8 | ---
paper_title: Genomic Privacy Metrics: A Systematic Comparison
paper_content:
The human genome uniquely identifies, and contains highly sensitive information about, individuals. This creates a high potential for misuse of genomic data (e.g., genetic discrimination). This paper investigates how genomic privacy can be measured in scenarios where an adversary aims to infer a person's genome by constructing probability distributions on the values of genetic variations. Specifically, we investigate 22 privacy metrics using adversaries of different strengths, and uncover problems with several metrics that have previously been used for genomic privacy. We then give suggestions on metric selection, and illustrate the process with a case study on Alzheimer's disease.
---
paper_title: Quantifying Location Privacy
paper_content:
It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs, it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.
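As an illustration of the expected-estimation-error metric advocated above, the minimal Python sketch below evaluates the adversary's expected distance error for a made-up posterior over grid cells; the candidate locations, probabilities, and distance function are placeholders, not the paper's experimental setup.

    # Location privacy as the adversary's expected estimation error:
    #   priv = sum over candidate locations r of Pr[r | observation] * d(r, r_true)
    def expected_estimation_error(posterior, true_loc, dist):
        return sum(p * dist(loc, true_loc) for loc, p in posterior.items())

    manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])

    # Made-up posterior over grid cells after observing an obfuscated report.
    posterior = {(0, 0): 0.5, (0, 1): 0.3, (1, 1): 0.2}
    print(expected_estimation_error(posterior, (0, 0), manhattan))  # 0.7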
---
paper_title: On the Fundamentals of Anonymity Metrics
paper_content:
In recent years, a handful of anonymity metrics have been proposed that are either based on (i) the number of participants in the given scenario, (ii) the probability distribution in an anonymous network regarding which participant is the sender/receiver, or (iii) a combination thereof. In this paper, we discuss elementary properties of metrics in general and anonymity metrics in particular, and then evaluate the behavior of a set of state-of-the-art anonymity metrics when applied in a number of scenarios. On the basis of this evaluation and basic measurement theory, we also define criteria for anonymity metrics and show that none of the studied metrics fulfill all criteria. Lastly, based on previous work on entropy-based anonymity metrics, as well as on theories on the effective support size of the entropy function and on Huffman codes, we propose an alternative metric — the scaled anonymity set size — that fulfills these criteria.
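The proposed scaled anonymity set size is essentially the effective support size of the sender distribution, i.e., 2 raised to its Shannon entropy. A minimal Python sketch, using an illustrative skewed distribution, contrasts it with the naive anonymity set size:

    import math

    def shannon_entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def scaled_anonymity_set_size(probs):
        # Effective number of equally likely senders: 2 ** H(P)
        return 2 ** shannon_entropy(probs)

    probs = [0.7, 0.1, 0.1, 0.1]   # illustrative, skewed sender distribution
    print(len(probs))                                   # naive set size: 4
    print(round(scaled_anonymity_set_size(probs), 2))   # effective size: ~2.56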
---
paper_title: Engineering Privacy in Public: Confounding Face Recognition
paper_content:
The objective of DARPA’s Human ID at a Distance (HID) program “is to develop automated biometric identification technologies to detect, recognize and identify humans at great distances.” While nominally intended for security applications, if deployed widely, such technologies could become an enormous privacy threat, making practical the automatic surveillance of individuals on a grand scale. Face recognition, as the HID technology most rapidly approaching maturity, deserves immediate research attention in order to understand its strengths and limitations, with an objective of reliably foiling it when it is used inappropriately. This paper is a status report for a research program designed to achieve this objective within a larger goal of similarly defeating all HID technologies.
---
paper_title: The dining cryptographers problem: Unconditional sender and recipient untraceability
paper_content:
Keeping confidential who sends which messages, in a world where any physical transmission can be traced to its origin, seems impossible. The solution presented here is unconditionally or cryptographically secure, depending on whether it is based on one-time-use keys or on public keys, respectively. It can be adapted to address efficiently a wide variety of practical considerations.
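As a toy illustration of the protocol's core idea, the Python sketch below runs one round of a three-party DC-net with one-bit messages and pairwise shared coins; it omits collision handling and all other practical considerations discussed in the paper.

    import secrets

    def dc_net_round(n_parties, sender, message_bit):
        """One round of a toy DC-net: each adjacent pair of parties shares a secret
        coin, every party announces the XOR of its two coins (the sender also XORs
        in its message bit), and the XOR of all announcements yields the message
        without revealing who sent it."""
        coins = [secrets.randbits(1) for _ in range(n_parties)]  # coin i: parties i and i+1
        announcements = []
        for i in range(n_parties):
            bit = coins[i] ^ coins[(i - 1) % n_parties]
            if i == sender:
                bit ^= message_bit
            announcements.append(bit)
        broadcast = 0
        for b in announcements:
            broadcast ^= b
        return announcements, broadcast

    announcements, result = dc_net_round(n_parties=3, sender=1, message_bit=1)
    print(announcements, "->", result)   # result equals the message bit (1)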
---
paper_title: A survey of state-of-the-art in anonymity metrics
paper_content:
Anonymization enables organizations to protect their data and systems from a diverse set of attacks and preserve privacy; however, in the area of anonymized network data, few, if any, are able to precisely quantify how anonymized their information is for any particular dataset. Indeed, recent research indicates that many anonymization techniques leak some information. An ability to confidently measure this information leakage and any changes in anonymity levels plays a crucial role in facilitating the free flow of cross-organizational network data sharing and promoting wider adoption of anonymization techniques. Fortunately, multiple methods of analyzing anonymity exist. Typical approaches use simple quantifications and probabilistic models; however, to the best of our knowledge, only one network data anonymization metric has been proposed. More importantly, no one-stop-shop paper exists that comprehensively surveys this area for other candidate measures; therefore, this paper explores the state of the art of anonymity metrics. The objective is to provide a macro-level view of the systematic analysis of anonymity preservation, degradation, or elimination for data anonymization as well as network communications anonymization.
---
paper_title: A survey of state-of-the-art in anonymity metrics
paper_content:
Anonymization enables organizations to protect their data and systems from a diverse set of attacks and preserve privacy; however, in the area of anonymized network data, few, if any, are able to precisely quantify how anonymized their information is for any particular dataset. Indeed, recent research indicates that many anonymization techniques leak some information. An ability to confidently measure this information leakage and any changes in anonymity levels plays a crucial role in facilitating the free flow of cross-organizational network data sharing and promoting wider adoption of anonymization techniques. Fortunately, multiple methods of analyzing anonymity exist. Typical approaches use simple quantifications and probabilistic models; however, to the best of our knowledge, only one network data anonymization metric has been proposed. More importantly, no one-stop-shop paper exists that comprehensively surveys this area for other candidate measures; therefore, this paper explores the state of the art of anonymity metrics. The objective is to provide a macro-level view of the systematic analysis of anonymity preservation, degradation, or elimination for data anonymization as well as network communications anonymization.
---
paper_title: Privacy-preserving data publishing: A survey of recent developments
paper_content:
The collection of digital information by governments, corporations, and individuals has created tremendous opportunities for knowledge- and information-based decision making. Driven by mutual benefits, or by regulations that require certain data to be published, there is a demand for the exchange and publication of data among various parties. Data in its original form, however, typically contains sensitive information about individuals, and publishing such data will violate individual privacy. The current practice in data publishing relies mainly on policies and guidelines as to what types of data can be published and on agreements on the use of published data. This approach alone may lead to excessive data distortion or insufficient protection. Privacy-preserving data publishing (PPDP) provides methods and tools for publishing useful information while preserving data privacy. Recently, PPDP has received considerable attention in research communities, and many approaches have been proposed for different data publishing scenarios. In this survey, we will systematically summarize and evaluate different approaches to PPDP, study the challenges in practical data publishing, clarify the differences and requirements that distinguish PPDP from other related problems, and propose future research directions.
---
paper_title: A unified framework for location privacy
paper_content:
We introduce a novel framework that provides a logical structure for classifying and organizing fundamental components and concepts of location privacy. Our framework models mobile networks and applications, threats, location-privacy preserving mechanisms, and metrics. We demonstrate the relevance of our framework by showing how the existing proposals in the field of location privacy are embodied appropriately in the framework. Our framework provides "the big picture" of research on location privacy and hence aims at paving the way for future research. It helps researchers to better understand this field of research, identify open problems, appropriately design new schemes, and position their work with respect to other efforts. The terminology proposed in this framework also facilitates establishing an inter-disciplinary research community on location privacy.
---
paper_title: Privacy assessment in vehicular networks using simulation
paper_content:
Vehicular networks are envisioned to play an important role in the building of intelligent transportation systems. However, the dangers of the wireless transmission of potentially exploitable information such as detailed locations are often overlooked or only inadequately addressed in field operational tests or standardization efforts. One of the main reasons for this is that the concept of privacy is difficult to quantify. While vehicular network algorithms are usually evaluated by means of simulation, it is a non-trivial task to assess the performance of a privacy protection mechanism. In this paper we discuss the principles, challenges, and necessary steps in terms of privacy assessment in vehicular networks. We identify useful and practical metrics that allow the comparison and evaluation of privacy protection algorithms. We present a systematic literature review that sheds light on the current state of the art and give recommendations for future research directions in the field.
---
paper_title: Towards Privacy Protection in Smart Grid
paper_content:
The smart grid is an electronically controlled electrical grid that connects power generation, transmission, distribution, and consumers using information communication technologies. One of the key characteristics of the smart grid is its support for bi-directional information flow between the consumer of electricity and the utility provider. This two-way interaction allows electricity to be generated in real-time based on consumers' demands and power requests. As a result, consumer privacy becomes an important concern when collecting energy usage data with the deployment and adoption of smart grid technologies. To protect such sensitive information it is imperative that privacy protection mechanisms be used to protect the privacy of smart grid users. We present an analysis of recently proposed smart grid privacy solutions and identify their strengths and weaknesses in terms of their implementation complexity, efficiency, robustness, and simplicity.
---
paper_title: Stalking online: on user privacy in social networks
paper_content:
With the extreme popularity of Web and online social networks, a large amount of personal information has been made available over the Internet. On the other hand, advances in information retrieval, data mining and knowledge discovery technologies have enabled users to efficiently satisfy their information needs over the Internet or from large-scale data sets. However, such technologies also help the adversaries such as web stalkers to discover private information about their victims from mass data. In this paper, we study privacy-sensitive information that are accessible from the Web, and how these information could be utilized to discover personal identities. In the proposed scenario, an adversary is assumed to possess a small piece of "seed" information about a targeted user, and conduct extensive and intelligent search to identify the target over both the Web and an information repository collected from the Web. In particular, two types of attackers are modeled, namely tireless attackers and resourceful attackers. We then analyze detailed attacking mechanisms that could be performed by these attackers, and quantify the threats of both types of attacks to general Web users. With extensive experiments and sophisticated analysis, we show that a large portion of users with online presence are highly identifiable, even when only a small piece of (possibly inaccurate) seed information is known to the attackers.
---
paper_title: Genomic Privacy Metrics: A Systematic Comparison
paper_content:
The human genome uniquely identifies, and contains highly sensitive information about, individuals. This creates a high potential for misuse of genomic data (e.g., genetic discrimination). This paper investigates how genomic privacy can be measured in scenarios where an adversary aims to infer a person's genome by constructing probability distributions on the values of genetic variations. Specifically, we investigate 22 privacy metrics using adversaries of different strengths, and uncover problems with several metrics that have previously been used for genomic privacy. We then give suggestions on metric selection, and illustrate the process with a case study on Alzheimer's disease.
---
paper_title: Towards measuring anonymity
paper_content:
This paper introduces an information theoretic model that allows to quantify the degree of anonymity provided by schemes for anonymous connections. It considers attackers that obtain probabilistic information about users. The degree is based on the probabilities an attacker, after observing the system, assigns to the different users of the system as being the originators of a message. As a proof of concept, the model is applied to some existing systems. The model is shown to be very useful for evaluating the level of privacy a system provides under various attack scenarios, for measuring the amount of information an attacker gets with a particular attack and for comparing different systems amongst each other.
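The degree of anonymity described above normalizes the entropy of the attacker's posterior over possible senders by the maximum entropy log2(N). A minimal Python sketch with illustrative probabilities:

    import math

    def degree_of_anonymity(posterior):
        """d = H(posterior) / log2(N), in [0, 1]."""
        h = -sum(p * math.log2(p) for p in posterior if p > 0)
        h_max = math.log2(len(posterior))
        return h / h_max if h_max > 0 else 0.0

    # Illustrative posteriors over 4 possible senders after observing the system.
    print(degree_of_anonymity([0.25, 0.25, 0.25, 0.25]))        # 1.0 (maximal)
    print(round(degree_of_anonymity([0.7, 0.1, 0.1, 0.1]), 2))  # ~0.68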
---
paper_title: Unraveling an old cloak: k-anonymity for location privacy
paper_content:
There is a rich collection of literature that aims at protecting the privacy of users querying location-based services. One of the most popular location privacy techniques consists in cloaking users' locations such that k users appear as potential senders of a query, thus achieving k-anonymity. This paper analyzes the effectiveness of k-anonymity approaches for protecting location privacy in the presence of various types of adversaries. The unraveling of the scheme unfolds the inconsistency between its components, mainly the cloaking mechanism and the k-anonymity metric. We show that constructing cloaking regions based on the users' locations does not reliably relate to location privacy, and argue that this technique may even be detrimental to users' location privacy. The uncovered flaws imply that existing k-anonymity scheme is a tattered cloak for protecting location privacy.
---
paper_title: L-diversity: Privacy beyond k-anonymity
paper_content:
Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called k-anonymity has gained popularity. In a k-anonymized dataset, each record is indistinguishable from at least k − 1 other records with respect to certain identifying attributes. In this article, we show using two simple attacks that a k-anonymized dataset has some subtle but severe privacy problems. First, an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. This is a known problem. Second, attackers often have background knowledge, and we show that k-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks, and we propose a novel and powerful privacy criterion called ℓ-diversity that can defend against such attacks. In addition to building a formal foundation for ℓ-diversity, we show in an experimental evaluation that ℓ-diversity is practical and can be implemented efficiently.
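As a small illustration of the two notions, the Python sketch below computes k (smallest equivalence class) and distinct ℓ (smallest number of distinct sensitive values per class) for a toy released table; the attribute names and records are invented for illustration only.

    from collections import defaultdict

    def k_and_l(records, quasi_ids, sensitive):
        """Return (k, l): smallest equivalence-class size and smallest number of
        distinct sensitive values in any class of the released table."""
        groups = defaultdict(list)
        for r in records:
            groups[tuple(r[q] for q in quasi_ids)].append(r[sensitive])
        k = min(len(v) for v in groups.values())
        l = min(len(set(v)) for v in groups.values())
        return k, l

    # Toy released table with invented attribute names and values.
    table = [
        {"age": "30-39", "zip": "476**", "diag": "flu"},
        {"age": "30-39", "zip": "476**", "diag": "cancer"},
        {"age": "30-39", "zip": "476**", "diag": "flu"},
        {"age": "40-49", "zip": "479**", "diag": "cancer"},
        {"age": "40-49", "zip": "479**", "diag": "cancer"},
    ]
    print(k_and_l(table, ["age", "zip"], "diag"))  # (2, 1): 2-anonymous, not 2-diverse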
---
paper_title: On k-Anonymity and the Curse of Dimensionality
paper_content:
In recent years, the wide availability of personal data has made the problem of privacy preserving data mining an important one. A number of methods have recently been proposed for privacy preserving data mining of multidimensional data records. One of the methods for privacy preserving data mining is that of anonymization, in which a record is released only if it is indistinguishable from k other entities in the data. We note that methods such as k-anonymity are highly dependent upon spatial locality in order to effectively implement the technique in a statistically robust way. In high dimensional space the data becomes sparse, and the concept of spatial locality is no longer easy to define from an application point of view. In this paper, we view the k-anonymization problem from the perspective of inference attacks over all possible combinations of attributes. We show that when the data contains a large number of attributes which may be considered quasi-identifiers, it becomes difficult to anonymize the data without an unacceptably high amount of information loss. This is because an exponential number of combinations of dimensions can be used to make precise inference attacks, even when individual attributes are partially specified within a range. We provide an analysis of the effect of dimensionality on k-anonymity methods. We conclude that when a data set contains a large number of attributes which are open to inference attacks, we are faced with a choice of either completely suppressing most of the data or losing the desired level of anonymity. Thus, this paper shows that the curse of high dimensionality also applies to the problem of privacy preserving data mining.
---
paper_title: Personalized privacy preservation
paper_content:
We study generalization for preserving privacy in publication of sensitive data. The existing methods focus on a universal approach that exerts the same amount of preservation for all persons, without catering for their concrete needs. The consequence is that we may be offering insufficient protection to a subset of people, while applying excessive privacy control to another subset. Motivated by this, we present a new generalization framework based on the concept of personalized anonymity. Our technique performs the minimum generalization for satisfying everybody's requirements, and thus, retains the largest amount of information from the microdata. We carry out a careful theoretical study that leads to valuable insight into the behavior of alternative solutions. In particular, our analysis mathematically reveals the circumstances where the previous work fails to protect privacy, and establishes the superiority of the proposed solutions. The theoretical findings are verified with extensive experiments.
---
paper_title: The dining cryptographers problem: Unconditional sender and recipient untraceability
paper_content:
Keeping confidential who sends which messages, in a world where any physical transmission can be traced to its origin, seems impossible. The solution presented here is unconditionally or cryptographically secure, depending on whether it is based on one-time-use keys or on public keys, respectively. It can be adapted to address efficiently a wide variety of practical considerations.
---
paper_title: On the Anonymity of Home/Work Location Pairs
paper_content:
Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual's home and workplace can both be deduced from a location trace, then the median size of the individual's anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census tract, and county, respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual's home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.
---
paper_title: Towards measuring anonymity
paper_content:
This paper introduces an information theoretic model that allows to quantify the degree of anonymity provided by schemes for anonymous connections. It considers attackers that obtain probabilistic information about users. The degree is based on the probabilities an attacker, after observing the system, assigns to the different users of the system as being the originators of a message. As a proof of concept, the model is applied to some existing systems. The model is shown to be very useful for evaluating the level of privacy a system provides under various attack scenarios, for measuring the amount of information an attacker gets with a particular attack and for comparing different systems amongst each other.
---
paper_title: Stop-and-Go-MIXes Providing Probabilistic Anonymity in an Open System
paper_content:
Currently known basic anonymity techniques depend on identity verification. If verification of user identities is not possible due to the related management overhead or a general lack of information (e.g. on the Internet), an adversary can participate several times in a communication relationship and observe the honest users. In this paper we focus on the problem of providing anonymity without identity verification. The notion of probabilistic anonymity is introduced. Probabilistic anonymity is based on a publicly known security parameter, which determines the security of the protocol. For probabilistic anonymity the insecurity, expressed as the probability of having only one honest participant, approaches 0 at an exponential rate as the security parameter is changed linearly. Based on our security model we propose a new MIX variant called “Stop-and-Go-MIX” (SG-MIX) which provides anonymity without identity verification, and prove that it is probabilistically secure.
---
paper_title: A Formal Model of Obfuscation and Negotiation for Location Privacy
paper_content:
Obfuscation concerns the practice of deliberately degrading the quality of information in some way, so as to protect the privacy of the individual to whom that information refers. In this paper, we argue that obfuscation is an important technique for protecting an individual's location privacy within a pervasive computing environment. The paper sets out a formal framework within which obfuscated location-based services are defined. This framework provides a computationally efficient mechanism for balancing an individual's need for high-quality information services against that individual's need for location privacy. Negotiation is used to ensure that a location-based service provider receives only the information it needs to know in order to provide a service of satisfactory quality. The results of this work have implications for numerous applications of mobile and location-aware systems, as they provide a new theoretical foundation for addressing the privacy concerns that are acknowledged to be retarding the widespread acceptance and use of location-based services.
---
paper_title: Protecting and evaluating genomic privacy in medical tests and personalized medicine
paper_content:
In this paper, we propose privacy-enhancing technologies for medical tests and personalized medicine methods that use patients' genomic data. Focusing on genetic disease-susceptibility tests, we develop a new architecture (between the patient and the medical unit) and propose a "privacy-preserving disease susceptibility test" (PDS) by using homomorphic encryption and proxy re-encryption. Assuming the whole genome sequencing to be done by a certified institution, we propose to store patients' genomic data encrypted by their public keys at a "storage and processing unit" (SPU). Our proposed solution lets the medical unit retrieve the encrypted genomic data from the SPU and process it for medical tests and personalized medicine methods, while preserving the privacy of patients' genomic data. We also quantify the genomic privacy of a patient (from the medical unit's point of view) and show how a patient's genomic privacy decreases with the genetic tests he undergoes due to (i) the nature of the genetic test, and (ii) the characteristics of the genomic data. Furthermore, we show how basic policies and obfuscation methods help to keep the genomic privacy of a patient at a high level. We also implement and show, via a complexity analysis, the practicality of PDS.
---
paper_title: Quantifying and measuring anonymity
paper_content:
The design of anonymous communication systems is a relatively new field, but the desire to quantify the security these systems offer has been an important topic of research since its beginning. In recent years, anonymous communication systems have evolved from obscure tools used by specialists to mass-market software used by millions of people. In many cases the users of these tools are depending on the anonymity offered to protect their liberty, or more. As such, it is of critical importance that not only can we quantify the anonymity these tools offer, but that the metrics used represent realistic expectations, can be communicated clearly, and the implementations actually offer the anonymity they promise. This paper will discuss how metrics, and the techniques used to measure them, have been developed for anonymous communication tools including low-latency networks and high-latency email systems.
---
paper_title: Quantifying Location Privacy
paper_content:
It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs, it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.
---
paper_title: Measuring long-term location privacy in vehicular communication systems
paper_content:
Vehicular communication systems are an emerging form of communication that enables new ways of cooperation among vehicles, traffic operators, and service providers. However, many vehicular applications rely on continuous and detailed location information of the vehicles, which has the potential to infringe the users' location privacy. A multitude of privacy-protection mechanisms have been proposed in recent years. However, few efforts have been made to develop privacy metrics that can provide a quantitative way to assess the privacy risk, evaluate the effectiveness of a given privacy-enhanced design, and explore the full possibilities of protection methods. In this paper, we present a location privacy metric for measuring location privacy in vehicular communication systems. As computers do not forget and most drivers of motor vehicles follow certain daily driving patterns, if a user's location information is gathered and stored over a period of time, e.g., weeks or months, such cumulative information might be exploited by an adversary performing a location privacy attack to gain useful information on the user's whereabouts. Thus to precisely reflect the underlying privacy values, in our approach we take into account the accumulated information. Specifically, we develop methods and algorithms to process, propagate, and reflect the accumulated information in the privacy measurements. The feasibility and correctness of our approaches are evaluated by various case studies and extensive simulations. Our results show that accumulated information, if available to an adversary, can have a significant impact on location privacy of the users of vehicular communication systems. The methods and algorithms developed in this paper provide detailed insights into location privacy and thus contribute to the development of future-proof, privacy-preserving vehicular communication systems.
---
paper_title: Metrics for Security and Performance in Low-Latency Anonymity Systems
paper_content:
In this paper we explore the tradeoffs between security and performance in anonymity networks such as Tor. Using probability of path compromise as a measure of security, we explore the behaviour of various path selection algorithms with a Tor path simulator. We demonstrate that assumptions about the relative expense of IP addresses and cheapness of bandwidth break down if attackers are allowed to purchase access to botnets, giving plentiful IP addresses, but each with relatively poor symmetric bandwidth. We further propose that the expected latency of data sent through a network is a useful performance metric, show how it may be calculated, and demonstrate the counter-intuitive result that Tor's current path selection scheme, designed for performance, both performs well and is good for anonymity in the presence of a botnet-based adversary.
---
paper_title: Structuring anonymity metrics
paper_content:
This paper structures different anonymity metrics. We show that there is no single all-purpose metric for anonymity. We present different models for anonymity metrics on the network layer and on the application layer, and propose a way to merge these models into a combined model aiming at providing metrics usable within a user-centric identity management system. Thereby we distinguish the user's and the service provider's point of view using the notions of local and global anonymity. As a generalization of Shannon entropy, min-entropy, and max-entropy, we use Rényi entropy as a framework to create anonymity metrics appropriate for different situations.
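Rényi entropy of order α reduces to max-entropy, Shannon entropy, and min-entropy as α tends to 0, 1, and infinity, which is what makes it a convenient unifying framework here. A minimal Python sketch with an illustrative distribution:

    import math

    def renyi_entropy(probs, alpha):
        """Renyi entropy of order alpha: alpha -> 0 gives max-entropy (log2 of the
        support size), alpha -> 1 gives Shannon entropy, alpha -> infinity gives
        min-entropy -log2(max p)."""
        probs = [p for p in probs if p > 0]
        if alpha == 1:   # Shannon entropy as the limiting case
            return -sum(p * math.log2(p) for p in probs)
        return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

    p = [0.7, 0.1, 0.1, 0.1]   # illustrative sender distribution
    print(round(renyi_entropy(p, 0), 2))    # 2.0  (max-entropy)
    print(round(renyi_entropy(p, 1), 2))    # 1.36 (Shannon entropy)
    print(round(renyi_entropy(p, 50), 2))   # ~0.52, approaching -log2(0.7) = 0.51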
---
paper_title: Attacking Unlinkability: The Importance of Context
paper_content:
A system that protects the unlinkability of certain data items (e.g., identifiers of communication partners, messages, pseudonyms, transactions, votes) does not leak information that would enable an adversary to link these items. The adversary could, however, take advantage of hints from the context in which the system operates. In this paper, we introduce a new metric that enables one to quantify the (un)linkability of the data items and, based on this, we consider the effect of some simple contextual hints.
---
paper_title: Does additional information always reduce anonymity?
paper_content:
We discuss information-theoretic anonymity metrics that use entropy over the distribution of all possible recipients to quantify anonymity. We identify a common misconception: the entropy of the distribution describing the potential receivers does not always decrease given more information. We show the relation of these a-posteriori distributions with the Shannon conditional entropy, which is an average over all possible observations.
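The distinction made above can be reproduced with a tiny numerical example: for one particular observation the a-posteriori entropy exceeds the prior entropy, while the Shannon conditional entropy, as an average over all observations, does not. The numbers in the Python sketch below are made up for illustration.

    import math

    def H(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Made-up numbers: a skewed prior over two potential receivers. Observation o1
    # leaves the attacker *less* certain than before (entropy rises from 0.72 to
    # 1.0), yet the conditional entropy, being an average over observations,
    # never exceeds the prior entropy.
    prior = [0.8, 0.2]
    observations = {"o1": (0.3, [0.5, 0.5]),         # (Pr[o], posterior given o)
                    "o2": (0.7, [0.9286, 0.0714])}

    print(round(H(prior), 2))                        # 0.72 (prior entropy)
    for o, (_, post) in observations.items():
        print(o, round(H(post), 2))                  # o1: 1.0, o2: 0.37
    cond = sum(p * H(post) for p, post in observations.values())
    print(round(cond, 2))                            # 0.56 (conditional entropy)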
---
paper_title: Privacy–Security Trade-Offs in Biometric Security Systems—Part I: Single Use Case
paper_content:
This is the second part of a two-part paper on the information theoretic study of biometric security systems. In this paper, the performance of reusable biometric security systems, in which the same biometric information is reused in multiple locations, is analyzed. The scenario in which the subsystems are jointly designed is first considered. An outer bound on the achievable trade-off between the privacy leakage of the biometric measurements and rates of keys generated at the subsystems is derived. A scheme that achieves the derived outer bound is then presented. Next, an incremental design approach is studied, in which the biometric measurements are reused while keeping the existing system intact. An achievable privacy-security trade-off region for this design approach is derived. It is shown that under certain conditions, the incremental design approach can achieve the performance of the joint design approach. Finally, examples are given to illustrate the results derived.
---
paper_title: Towards measuring anonymity
paper_content:
This paper introduces an information theoretic model that allows to quantify the degree of anonymity provided by schemes for anonymous connections. It considers attackers that obtain probabilistic information about users. The degree is based on the probabilities an attacker, after observing the system, assigns to the different users of the system as being the originators of a message. As a proof of concept, the model is applied to some existing systems. The model is shown to be very useful for evaluating the level of privacy a system provides under various attack scenarios, for measuring the amount of information an attacker gets with a particular attack and for comparing different systems amongst each other.
---
paper_title: On the Fundamentals of Anonymity Metrics
paper_content:
In recent years, a handful of anonymity metrics have been proposed that are either based on (i) the number of participants in the given scenario, (ii) the probability distribution in an anonymous network regarding which participant is the sender/receiver, or (iii) a combination thereof. In this paper, we discuss elementary properties of metrics in general and anonymity metrics in particular, and then evaluate the behavior of a set of state-of-the-art anonymity metrics when applied in a number of scenarios. On the basis of this evaluation and basic measurement theory, we also define criteria for anonymity metrics and show that none of the studied metrics fulfill all criteria. Lastly, based on previous work on entropy-based anonymity metrics, as well as on theories on the effective support size of the entropy function and on Huffman codes, we propose an alternative metric — the scaled anonymity set size — that fulfills these criteria.
---
paper_title: Structuring anonymity metrics
paper_content:
This paper structures different anonymity metrics. We show that there is no single all-purpose metric for anonymity. We present different models for anonymity metrics on the network layer and on the application layer, and propose a way to merge these models into a combined model aiming at providing metrics usable within a user-centric identity management system. Thereby we distinguish the user's and the service provider's point of view using the notions of local and global anonymity. As a generalization of Shannon entropy, min-entropy, and max-entropy, we use Rényi entropy as a framework to create anonymity metrics appropriate for different situations.
---
paper_title: Privacy-preserving distributed clustering using generative models
paper_content:
We present a framework for clustering distributed data in unsupervised and semisupervised scenarios, taking into account privacy requirements and communication costs. Rather than sharing parts of the original or perturbed data, we instead transmit the parameters of suitable generative models built at each local data site to a central location. We mathematically show that the best representative of all the data is a certain "mean" model, and empirically show that this model can be approximated quite well by generating artificial samples from the underlying distributions using Markov Chain Monte Carlo techniques, and then fitting a combined global model with a chosen parametric form to these samples. We also propose a new measure that quantifies privacy based on information theoretic concepts, and show that decreasing privacy leads to a higher quality of the combined model and vice versa. We provide empirical results on different data types to highlight the generality of our framework. The results show that high quality distributed clustering can be achieved with little privacy loss and low communication cost.
---
paper_title: Mix-Zones for Location Privacy in Vehicular Networks
paper_content:
Vehicular Networks (VNs) seek to provide, among other applications, safer driving conditions. To do so, vehicles need to periodically broadcast safety messages providing precise position information ...
---
paper_title: Personal use of the genomic data: Privacy vs. storage cost
paper_content:
In this paper, we propose privacy-enhancing technologies for personal use of the genomic data and analyze the tradeoff between genomic privacy and storage cost of the genomes. First, we highlight the potential privacy threats on the genomic data. Then, focusing specifically on a disease-susceptibility test, we develop a new architecture (between the patient and the medical unit) and propose a privacy-preserving algorithm by utilizing homomorphic encryption. Assuming the whole genome sequencing is done by a certified institution, we propose to store patients' genomic data encrypted by their public keys at a Storage and Processing Unit (SPU). The proposed algorithm lets the SPU process the encrypted genomic data for medical tests while preserving the privacy of patients' genomic data. We extensively analyze the relationship between the storage cost (of the genomic data), the level of genomic privacy (of the patient), and the characteristics of the genomic data. Furthermore, we show via a complexity analysis the practicality of the proposed scheme.
---
paper_title: On non-cooperative location privacy: a game-theoretic analysis
paper_content:
In mobile networks, authentication is a required primitive for the majority of security protocols. However, an adversary can track the location of mobile nodes by monitoring pseudonyms used for authentication. A frequently proposed solution to protect location privacy suggests that mobile nodes collectively change their pseudonyms in regions called mix zones. Because this approach is costly, self-interested mobile nodes might decide not to cooperate and could thus jeopardize the achievable location privacy. In this paper, we analyze the non-cooperative behavior of mobile nodes by using a game-theoretic model, where each player aims at maximizing its location privacy at a minimum cost. We first analyze the Nash equilibria in n-player complete information games. Because mobile nodes in a privacy-sensitive system do not know their opponents' payoffs, we then consider incomplete information games. We establish that symmetric Bayesian-Nash equilibria exist with simple threshold strategies in n-player games and derive the equilibrium strategies. By means of numerical results, we show that mobile nodes become selfish when the cost of changing pseudonyms is small, whereas they cooperate more when the cost of changing pseudonyms increases. Finally, we design a protocol - the PseudoGame protocol - based on the results of our analysis.
---
paper_title: Feeling-based location privacy protection for location-based services
paper_content:
Anonymous location information may be correlated with restricted spaces such as home and office for subject re-identification. This makes it a great challenge to provide location privacy protection for users of location-based services. Existing work adopts traditional K-anonymity model and ensures that each location disclosed in service requests is a spatial region that has been visited by at least K users. This strategy requires a user to specify an appropriate value of K in order to achieve a desired level of privacy protection. This is problematic because privacy is about feeling, and it is awkward for one to scale her feeling using a number. In this paper, we propose a feeling-based privacy model. The model allows a user to express her privacy requirement by specifying a public region, which the user would feel comfortable if the region is reported as her location. The popularity of the public region, measured using entropy based on its visitors' footprints inside it, is then used as the user's desired level of privacy protection. With this model in place, we present a novel technique that allows a user's location information to be reported as accurate as possible while providing her sufficient location privacy protection. The new technique supports trajectory cloaking and can be used in application scenarios where a user needs to make frequent location updates along a trajectory that cannot be predicted. In addition to evaluating the effectiveness of the proposed technique under various conditions through simulation, we have also implemented an experimental system for location privacy-aware uses of location-based services.
---
paper_title: Wherefore art thou r3579x?: anonymized social networks, hidden patterns, and structural steganography
paper_content:
In a social network, nodes correspond to people or other social entities, and edges correspond to social links between them. In an effort to preserve privacy, the practice of anonymization replaces names with meaningless unique identifiers. We describe a family of attacks such that even from a single anonymized copy of a social network, it is possible for an adversary to learn whether edges exist or not between specific targeted pairs of nodes.
---
paper_title: Measuring anonymity with relative entropy
paper_content:
Anonymity is the property of maintaining secret the identity of users performing a certain action. Anonymity protocols often use random mechanisms which can be described probabilistically. In this paper, we propose a probabilistic process calculus to describe protocols for ensuring anonymity, and we use the notion of relative entropy from information theory to measure the degree of anonymity these protocols can guarantee. Furthermore, we prove that the operators in the probabilistic process calculus are non-expansive, with respect to this measuring method. We illustrate our approach by using the example of the Dining Cryptographers Problem.
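A minimal Python sketch of the relative-entropy idea, measuring how far an illustrative attacker posterior is from the uniform distribution over possible senders (the probabilities are placeholders, and this is not the paper's process-calculus machinery):

    import math

    def kl_divergence(p, q):
        """D(P || Q) = sum_i p_i * log2(p_i / q_i), in bits."""
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    n = 4
    uniform = [1.0 / n] * n
    posterior = [0.7, 0.1, 0.1, 0.1]   # illustrative belief about who sent the message

    # 0 bits = indistinguishable from uniform (best case); larger = less anonymity.
    print(round(kl_divergence(posterior, uniform), 2))   # ~0.64
    print(kl_divergence(uniform, uniform))               # 0.0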
---
paper_title: Addressing the concerns of the Lacks family: quantification of kin genomic privacy
paper_content:
The rapid progress in human-genome sequencing is leading to a high availability of genomic data. This data is notoriously very sensitive and stable in time. It is also highly correlated among relatives. A growing number of genomes are becoming accessible online (e.g., because of leakage, or after their posting on genome-sharing websites). What are then the implications for kin genomic privacy? We formalize the problem and detail an efficient reconstruction attack based on graphical models and belief propagation. With this approach, an attacker can infer the genomes of the relatives of an individual whose genome is observed, relying notably on Mendel's Laws and statistical relationships between the nucleotides (on the DNA sequence). Then, to quantify the level of genomic privacy as a result of the proposed inference attack, we discuss possible definitions of genomic privacy metrics. Genomic data reveals Mendelian diseases and the likelihood of developing degenerative diseases such as Alzheimer's. We also introduce the quantification of health privacy, specifically the measure of how well the predisposition to a disease is concealed from an attacker. We evaluate our approach on actual genomic data from a pedigree and show the threat extent by combining data gathered from a genome-sharing website and from an online social network.
---
paper_title: Using Binning to Maintain Confidentiality of Medical Data
paper_content:
Biomedical informatics in general and pharmacogenomics in particular require a research platform that simultaneously enables discovery while protecting research subjects' privacy and information confidentiality. The development of inexpensive DNA sequencing and analysis technologies promises unprecedented database access to very specific information about individuals. To allow analysis of this data without compromising the research subjects' privacy, we must develop methods for removing identifying information from medical and genomic data. In this paper, we build upon the idea that binned database records are more difficult to trace back to individuals. We represent symbolic and numeric data hierarchically, and bin them by generalizing the records. We measure the information loss due to binning using an information theoretic measure called mutual information. The results show that we can bin the data to different levels of precision and use the bin size to control the tradeoff between privacy and data resolution.
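Mutual information between the raw attribute and its binned version, as used above to quantify information loss, can be estimated directly from counts. The Python sketch below uses toy ages and two bin widths; the values are invented for illustration.

    import math
    from collections import Counter

    def mutual_information(xs, ys):
        """I(X;Y) in bits, estimated from paired samples."""
        n = len(xs)
        pxy = Counter(zip(xs, ys))
        px, py = Counter(xs), Counter(ys)
        return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    ages = [23, 27, 31, 35, 44, 46, 52, 58]   # toy raw attribute values
    coarse = [a // 20 * 20 for a in ages]     # 20-year bins
    fine = [a // 10 * 10 for a in ages]       # 10-year bins

    # Coarser bins retain fewer bits about the raw value, i.e., more privacy.
    print(mutual_information(ages, coarse))   # 1.0 bit
    print(mutual_information(ages, fine))     # 2.0 bits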
---
paper_title: Anonymity protocols as noisy channels
paper_content:
We propose a framework in which anonymity protocols are interpreted as particular kinds of channels, and the degree of anonymity provided by the protocol as the converse of the channel's capacity. We also investigate how the adversary can test the system to try to infer the user's identity, and we study how his probability of success depends on the characteristics of the channel. We then illustrate how various notions of anonymity can be expressed in this framework, and show the relation with some definitions of probabilistic anonymity in literature.
---
paper_title: Formalized Information-Theoretic Proofs of Privacy using the HOL4 Theorem-Prover
paper_content:
Below we present an information-theoretic method for proving the amount of information leaked by programs formalized using the HOL4 theorem-prover. The advantages of this approach are that the analysis is quantitative, and therefore capable of expressing partial leakage, and that proofs are performed using the HOL4 theorem-prover, and are therefore guaranteed to be logically and mathematically consistent with the formalization. The applicability of this methodology to proving privacy properties of Privacy Enhancing Technologies is demonstrated by proving the anonymity of the Dining Cryptographers protocol. To the best of the author's knowledge, this is the first machine-verified proof of privacy of the Dining Cryptographers protocol for an unbounded number of participants and a quantitative metric for privacy.
---
paper_title: Revisiting a combinatorial approach toward measuring anonymity
paper_content:
Recently, Edman et al. proposed the system's anonymity level [10], a combinatorial approach to measure the amount of additional information needed to reveal the communication pattern in a mix-based anonymous communication system as a whole. The metric is based on the number of possible bijective mappings between the inputs and the outputs of the mix. In this work we show that Edman et al.'s approach fails to capture the anonymity loss caused by subjects sending or receiving more than one message. We generalize the system's anonymity level in scenarios where user relations can be modeled as yes/no relations to cases where subjects send and receive an arbitrary number of messages. Further, we describe an algorithm to compute the redefined metric.
---
paper_title: How Much is too Much? Leveraging Ads Audience Estimation to Evaluate Public Profile Uniqueness
paper_content:
This paper addresses the important goal of quantifying the threat of linking external records to public Online Social Networks (OSN) user profiles, by providing a method to estimate the uniqueness of such profiles and by studying the amount of information carried by public profile attributes. Our first contribution is to leverage the Ads audience estimation platform of a major OSN to compute the information surprisal (IS) based uniqueness of public profiles, independently from the used profiles dataset. Then, we measure the quantity of information carried by the revealed attributes and evaluate the impact of the public release of selected combinations of these attributes on the potential to identify user profiles. Our measurement results, based on an unbiased sample of more than 400 thousand Facebook public profiles, show that, when disclosed in such profiles, current city has the highest individual attribute potential for unique identification and the combination of gender, current city and age can identify close to 55% of users to within a group of 20 and uniquely identify around 18% of users. We envisage the use of our methodology to assist both OSNs in designing better anonymization strategies when releasing user records and users to evaluate the potential for external parties to uniquely identify their public profiles and hence make it easier to link them with other data sources.
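Information surprisal for an attribute combination is simply the negative log of the fraction of the audience matching it. The Python sketch below uses invented audience sizes, not real Ads-platform estimates:

    import math

    def surprisal_bits(matching_audience, total_audience):
        """IS = -log2(Pr[a profile matches the attribute combination])."""
        return -math.log2(matching_audience / total_audience)

    total = 200_000_000   # hypothetical total audience, not a real platform figure
    print(round(surprisal_bits(90_000_000, total), 2))  # e.g. gender only: ~1.15 bits
    print(round(surprisal_bits(40_000, total), 2))      # gender+city+age: ~12.29 bits

    # Roughly, k bits of surprisal narrow a user down to total / 2**k look-alikes.
    print(int(total / 2 ** 12.29))                      # ~40,000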
---
paper_title: Privacy Against Statistical Inference
paper_content:
We propose a general statistical inference framework to capture the privacy threat incurred by a user that releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy.
---
paper_title: De-anonymizing Social Networks
paper_content:
Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.
---
paper_title: A Framework for Computing the Privacy Scores of Users in Online Social Networks
paper_content:
A large body of work has been devoted to address corporate-scale privacy concerns related to social networks. Most of this work focuses on how to share social networks owned by organizations without revealing the identities or the sensitive relationships of the users involved. Not much attention has been given to the privacy risk of users posed by their daily information-sharing activities. In this article, we approach the privacy issues raised in online social networks from the individual users’ viewpoint: we propose a framework to compute the privacy score of a user. This score indicates the user’s potential risk caused by his or her participation in the network. Our definition of privacy score satisfies the following intuitive properties: the more sensitive information a user discloses, the higher his or her privacy risk. Also, the more visible the disclosed information becomes in the network, the higher the privacy risk. We develop mathematical models to estimate both sensitivity and visibility of the information. We apply our methods to synthetic and real-world data and demonstrate their efficacy and practical utility.
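A toy rendering of the privacy-score idea, summing sensitivity times visibility over disclosed profile items; the item names, sensitivity weights, and visibility values below are made-up placeholders, whereas the paper estimates both quantities with mathematical models.

    def privacy_score(items):
        """score = sum over disclosed items of sensitivity * visibility."""
        return sum(sens * vis for _, sens, vis in items)

    # (item, sensitivity in [0,1], visibility in [0,1]) -- all values are made-up
    # placeholders; the paper estimates both quantities rather than fixing them.
    profile = [
        ("hometown",       0.3, 0.9),   # mildly sensitive, shared publicly
        ("phone number",   0.9, 0.2),   # highly sensitive, limited visibility
        ("political view", 0.7, 0.6),
    ]
    print(round(privacy_score(profile), 2))   # 0.87 -- higher means more at risk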
---
paper_title: Protecting consumer privacy from electric load monitoring
paper_content:
The smart grid introduces concerns for the loss of consumer privacy; recently deployed smart meters retain and distribute highly accurate profiles of home energy use. These profiles can be mined by Non Intrusive Load Monitors (NILMs) to expose much of the human activity within the served site. This paper introduces a new class of algorithms and systems, called Non Intrusive Load Leveling (NILL) to combat potential invasions of privacy. NILL uses an in-residence battery to mask variance in load on the grid, thus eliminating exposure of the appliance-driven information used to compromise consumer privacy. We use real residential energy use profiles to drive four simulated deployments of NILL. The simulations show that NILL exposes only 1.1 to 5.9 useful energy events per day hidden amongst hundreds or thousands of similar battery-suppressed events. Thus, the energy profiles exhibited by NILL are largely useless for current NILM algorithms. Surprisingly, such privacy gains can be achieved using battery systems whose storage capacity is far lower than the residence's aggregate load average. We conclude by discussing how the costs of NILL can be offset by energy savings under tiered energy schedules.
---
paper_title: Cooperative state estimation for preserving privacy of user behaviors in smart grid
paper_content:
Smart grid promises a reliable and secure electricity infrastructure to meet the future demand growth. However, the increase in data types and data volume from the advanced smart grid introduces new privacy issues, which have to be resolved for customers. This paper presents a cooperative state estimation technique that protects the privacy of users' daily activities. By exploiting the kernel of an electric grid configuration matrix, we develop an error-free state estimation technique that can hide the behavioral information of users effectively. The proposed scheme can obfuscate the privacy-prone data without compromising the performance of state estimation. We evaluate our obfuscation scheme using data from 1349 meters in 5 IEEE Electric Test Bus Systems. Our simulation results demonstrate a high level of illegibility and resilience of our scheme with an affordable communication overhead.
---
paper_title: Handicapping attacker's confidence: an alternative to k-anonymization
paper_content:
We present an approach for limiting the confidence of inferring sensitive properties to protect against the threats caused by data mining abilities. The problem has dual goals: preserve the information for a wanted data analysis request and limit the usefulness of unwanted sensitive inferences that may be derived from the release of data. Sensitive inferences are specified by a set of "privacy templates". Each template specifies the sensitive property to be protected, the attributes identifying a group of individuals, and a maximum threshold for the confidence of inferring the sensitive property given the identifying attributes. We show that suppressing the domain values monotonically decreases the maximum confidence of such sensitive inferences. Hence, we propose a data transformation that minimally suppresses the domain values in the data to satisfy the set of privacy templates. The transformed data is free of sensitive inferences even in the presence of data mining algorithms. The prior k-anonymization focuses on personal identities. This work focuses on the association between personal identities and sensitive properties.
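A privacy template of the form "identifying attributes imply a sensitive value with confidence at most h" can be checked directly on a table: group records by the identifying attributes and verify the fraction carrying the sensitive value. The sketch below assumes a list-of-dicts table with hypothetical attribute names; the paper's actual contribution, the minimal suppression algorithm that enforces such templates, is not reproduced here.

```python
from collections import defaultdict

def template_violations(rows, qi_attrs, sensitive_attr, sensitive_value, h):
    """Return QI-groups where the confidence of inferring the sensitive value exceeds h."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[a] for a in qi_attrs)].append(row[sensitive_attr])
    bad = {}
    for key, values in groups.items():
        conf = values.count(sensitive_value) / len(values)
        if conf > h:
            bad[key] = conf
    return bad

if __name__ == "__main__":
    table = [
        {"job": "nurse", "zip": "53710", "condition": "HIV"},
        {"job": "nurse", "zip": "53710", "condition": "flu"},
        {"job": "clerk", "zip": "53711", "condition": "HIV"},
    ]
    # Template: (job, zip) -> HIV with maximum confidence 0.6
    print(template_violations(table, ["job", "zip"], "condition", "HIV", 0.6))  # {('clerk', '53711'): 1.0}
```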
---
paper_title: Privacy-preserving data publishing: A survey of recent developments
paper_content:
The collection of digital information by governments, corporations, and individuals has created tremendous opportunities for knowledge- and information-based decision making. Driven by mutual benefits, or by regulations that require certain data to be published, there is a demand for the exchange and publication of data among various parties. Data in its original form, however, typically contains sensitive information about individuals, and publishing such data will violate individual privacy. The current practice in data publishing relies mainly on policies and guidelines as to what types of data can be published and on agreements on the use of published data. This approach alone may lead to excessive data distortion or insufficient protection. Privacy-preserving data publishing (PPDP) provides methods and tools for publishing useful information while preserving data privacy. Recently, PPDP has received considerable attention in research communities, and many approaches have been proposed for different data publishing scenarios. In this survey, we will systematically summarize and evaluate different approaches to PPDP, study the challenges in practical data publishing, clarify the differences and requirements that distinguish PPDP from other related problems, and propose future research directions.
---
paper_title: Toward Privacy in Public Databases
paper_content:
We initiate a theoretical study of the census problem. Informally, in a census individual respondents give private information to a trusted party (the census bureau), who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation, in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned “safely” should not be learned at all. An important contribution of this work is a definition of privacy (and privacy compromise) for statistical databases, together with a method for describing and comparing the privacy offered by specific sanitization techniques. We obtain several privacy results using two different sanitization techniques, and then show how to combine them via cross training. We also obtain two utility results involving clustering.
---
paper_title: Information disclosure under realistic assumptions: privacy versus optimality
paper_content:
The problem of information disclosure has attracted much interest from the research community in recent years. When disclosing information, the challenge is to provide as much information as possible (optimality) while guaranteeing a desired safety property for privacy (such as l-diversity). A typical disclosure algorithm uses a sequence of disclosure schemas to output generalizations in the nonincreasing order of data utility; the algorithm releases the first generalization that satisfies the safety property. In this paper, we assert that the desired safety property cannot always be guaranteed if an adversary has the knowledge of the underlying disclosure algorithm. We propose a model for the additional information disclosed by an algorithm based on the definition of deterministic disclosure function (DDF), and provide definitions of p-safe and p-optimal DDFs. We give an analysis for the complexity to compute a p-optimal DDF. We show that deciding whether a DDF is p-optimal is an NP-hard problem, and only under specific conditions, we can solve the problem in polynomial time with respect to the size of the set of all possible database instances and the length of the disclosure generalization sequence. We then consider the problem of microdata disclosure and the safety condition of l-diversity. We relax the notion of p-optimality to weak p-optimality, and develop a weak p-optimal algorithm which is polynomial in the size of the original table and the length of the generalization sequence.
---
paper_title: Preservation of proximity privacy in publishing numerical sensitive data
paper_content:
We identify proximity breach as a privacy threat specific to numerical sensitive attributes in anonymized data publication. Such breach occurs when an adversary concludes with high confidence that the sensitive value of a victim individual must fall in a short interval --- even though the adversary may have low confidence about the victim's actual value. None of the existing anonymization principles (e.g., k-anonymity, l-diversity, etc.) can effectively prevent proximity breach. We remedy the problem by introducing a novel principle called (e, m)-anonymity. Intuitively, the principle demands that, given a QI-group G, for every sensitive value x in G, at most 1/m of the tuples in G can have sensitive values "similar" to x, where the similarity is controlled by e. We provide a careful analytical study of the theoretical characteristics of (e, m)-anonymity, and the corresponding generalization algorithm. Our findings are verified by experiments with real data.
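The (e, m) requirement itself is easy to test once the QI-groups are formed: for each sensitive value x in a group, at most a 1/m fraction of the group's tuples may carry a value within e of x. Below is a minimal check over made-up salary groups; the exact counting convention for x itself follows the paper and may differ in detail.

```python
def satisfies_em_anonymity(groups, e, m):
    """groups: list of lists of numeric sensitive values, one list per QI-group."""
    for values in groups:
        n = len(values)
        for x in values:
            similar = sum(1 for y in values if abs(y - x) <= e)
            if similar > n / m:          # more than a 1/m fraction is "close" to x
                return False
    return True

if __name__ == "__main__":
    qi_groups = [[30_000, 31_000, 60_000, 90_000], [20_000, 55_000, 80_000, 110_000]]
    print(satisfies_em_anonymity(qi_groups, e=2_000, m=3))  # False: two close salaries in group 1
    print(satisfies_em_anonymity(qi_groups, e=2_000, m=1))  # True: the requirement is trivial for m=1
```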
---
paper_title: t-Closeness: Privacy Beyond k-Anonymity and l-Diversity
paper_content:
The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain "identifying" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. We propose a novel privacy notion called t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We choose to use the earth mover distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments.
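For a numerical sensitive attribute with equally spaced ordered values, the earth mover's distance used above reduces to the normalized sum of absolute cumulative differences between the class distribution and the overall distribution. The sketch below follows that ordered-distance formulation with hypothetical frequencies.

```python
def emd_ordered(p, q):
    """Earth mover's distance for distributions over m equally spaced ordered values."""
    assert len(p) == len(q) and len(p) > 1
    cum, total = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi                  # running surplus of mass that must be "moved"
        total += abs(cum)
    return total / (len(p) - 1)         # normalise so the maximum distance is 1

def satisfies_t_closeness(class_dists, overall, t):
    return all(emd_ordered(p, overall) <= t for p in class_dists)

if __name__ == "__main__":
    overall = [0.25, 0.25, 0.25, 0.25]          # overall distribution of an ordered attribute
    classes = [[0.5, 0.5, 0.0, 0.0], [0.25, 0.25, 0.25, 0.25]]
    print(emd_ordered(classes[0], overall))     # ~0.33
    print(satisfies_t_closeness(classes, overall, t=0.3))   # False: the first class is too skewed
```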
---
paper_title: L-diversity: Privacy beyond k-anonymity
paper_content:
Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called k-anonymity has gained popularity. In a k-anonymized dataset, each record is indistinguishable from at least k − 1 other records with respect to certain identifying attributes. In this article, we show using two simple attacks that a k-anonymized dataset has some subtle but severe privacy problems. First, an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. This is a known problem. Second, attackers often have background knowledge, and we show that k-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks, and we propose a novel and powerful privacy criterion called l-diversity that can defend against such attacks. In addition to building a formal foundation for l-diversity, we show in an experimental evaluation that l-diversity is practical and can be implemented efficiently.
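The simplest reading of the requirement, distinct l-diversity, asks that every equivalence class contain at least l different sensitive values; the paper's entropy and recursive (c, l) variants strengthen this. Below is a minimal distinct-l check over a toy table with assumed attribute names.

```python
from collections import defaultdict

def is_distinct_l_diverse(rows, qi_attrs, sensitive_attr, l):
    """True if every equivalence class contains at least l distinct sensitive values."""
    groups = defaultdict(set)
    for row in rows:
        groups[tuple(row[a] for a in qi_attrs)].add(row[sensitive_attr])
    return all(len(values) >= l for values in groups.values())

if __name__ == "__main__":
    table = [
        {"age": "3*", "zip": "130**", "disease": "cancer"},
        {"age": "3*", "zip": "130**", "disease": "flu"},
        {"age": "4*", "zip": "148**", "disease": "cancer"},
        {"age": "4*", "zip": "148**", "disease": "cancer"},
    ]
    print(is_distinct_l_diverse(table, ["age", "zip"], "disease", l=2))  # False: second class has one value
```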
---
paper_title: Aggregate Query Answering on Anonymized Tables
paper_content:
Privacy is a serious concern when microdata need to be released for ad hoc analyses. The privacy goals of existing privacy protection approaches (e.g., k-anonymity and l-diversity) are suitable only for categorical sensitive attributes. Since applying them directly to numerical sensitive attributes (e.g., salary) may result in undesirable information leakage, we propose privacy goals to better capture the need of privacy protection for numerical sensitive attributes. Complementing the desire for privacy is the need to support ad hoc aggregate analyses over microdata. Existing generalization-based anonymization approaches cannot answer aggregate queries with reasonable accuracy. We present a general framework of permutation-based anonymization to support accurate answering of aggregate queries and show that, for the same grouping, permutation-based techniques can always answer aggregate queries more accurately than generalization-based approaches. We further propose several criteria to optimize permutations for accurate answering of aggregate queries, and develop efficient algorithms for each criterion.
---
paper_title: Privacy-preserving data publishing: A survey of recent developments
paper_content:
The collection of digital information by governments, corporations, and individuals has created tremendous opportunities for knowledge- and information-based decision making. Driven by mutual benefits, or by regulations that require certain data to be published, there is a demand for the exchange and publication of data among various parties. Data in its original form, however, typically contains sensitive information about individuals, and publishing such data will violate individual privacy. The current practice in data publishing relies mainly on policies and guidelines as to what types of data can be published and on agreements on the use of published data. This approach alone may lead to excessive data distortion or insufficient protection. Privacy-preserving data publishing (PPDP) provides methods and tools for publishing useful information while preserving data privacy. Recently, PPDP has received considerable attention in research communities, and many approaches have been proposed for different data publishing scenarios. In this survey, we will systematically summarize and evaluate different approaches to PPDP, study the challenges in practical data publishing, clarify the differences and requirements that distinguish PPDP from other related problems, and propose future research directions.
---
paper_title: Preservation of proximity privacy in publishing numerical sensitive data
paper_content:
We identify proximity breach as a privacy threat specific to numerical sensitive attributes in anonymized data publication. Such breach occurs when an adversary concludes with high confidence that the sensitive value of a victim individual must fall in a short interval --- even though the adversary may have low confidence about the victim's actual value. None of the existing anonymization principles (e.g., k-anonymity, l-diversity, etc.) can effectively prevent proximity breach. We remedy the problem by introducing a novel principle called (e, m)-anonymity. Intuitively, the principle demands that, given a QI-group G, for every sensitive value x in G, at most 1/m of the tuples in G can have sensitive values "similar" to x, where the similarity is controlled by e. We provide a careful analytical study of the theoretical characteristics of (e, m)-anonymity, and the corresponding generalization algorithm. Our findings are verified by experiments with real data.
---
paper_title: Multirelational k-Anonymity
paper_content:
k-anonymity protects privacy by ensuring that data cannot be linked to a single individual. In a k-anonymous data set, any identifying information occurs in at least k tuples. Much research has been done to modify a single-table data set to satisfy anonymity constraints. This paper extends the definitions of k-anonymity to multiple relations and shows that previously proposed methodologies either fail to protect privacy or overly reduce the utility of the data in a multiple relation setting. We also propose two new clustering algorithms to achieve multirelational anonymity. Experiments show the effectiveness of the approach in terms of utility and efficiency.
---
paper_title: Protecting privacy against location-based personal identification
paper_content:
This paper presents a preliminary investigation on the privacy issues involved in the use of location-based services. It is argued that even if the user identity is not explicitly released to the service provider, the geo-localized history of user-requests can act as a quasi-identifier and may be used to access sensitive information about specific individuals. The paper formally defines a framework to evaluate the risk in revealing a user identity via location information and presents preliminary ideas about algorithms to prevent this to happen.
---
paper_title: Anonymizing sequential releases
paper_content:
An organization makes a new release as new information becomes available, releases a tailored view for each data request, and releases sensitive information and identifying information separately. The availability of related releases sharpens the identification of individuals by a global quasi-identifier consisting of attributes from related releases. Since it is not an option to anonymize previously released data, the current release must be anonymized to ensure that a global quasi-identifier is not effective for identification. In this paper, we study the sequential anonymization problem under this assumption. A key question is how to anonymize the current release so that it cannot be linked to previous releases yet remains useful for its own release purpose. We introduce the lossy join, a negative property in relational database design, as a way to hide the join relationship among releases, and propose a scalable and practical solution.
---
paper_title: Privacy for Smart Meters: Towards Undetectable Appliance Load Signatures
paper_content:
Smart grid privacy encompasses the privacy of information extracted by analysing smart metering data. In this paper, we suggest that home electrical power routing can be used to moderate the home's load signature in order to hide appliance usage information. In particular, 1) we introduce a power management model using a rechargeable battery, 2) we propose a power mixing algorithm, and 3) we evaluate its protection level by proposing three different privacy metrics: an information theoretic (relative entropy), a clustering classification, and a correlation/regression one; these are tested on different metering datasets. This paper sets the ground for further research on the subject of optimising home energy management with regards to hiding load signatures.
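One of the three metrics mentioned above, relative entropy, can be computed directly between the histogram of the original load signature and that of the battery-mixed one; the larger the divergence, the more the mixing has changed the observable signature. A small sketch with hypothetical normalized load-level histograms follows.

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) between two discrete load-level distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

if __name__ == "__main__":
    original = [0.60, 0.25, 0.10, 0.05]   # histogram of metered load levels, no battery (assumed)
    mixed    = [0.30, 0.30, 0.25, 0.15]   # histogram after power mixing (assumed)
    print(kl_divergence(original, mixed)) # larger divergence = the load signature changed more
```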
---
paper_title: A New RFID Privacy Model
paper_content:
This paper critically examines some recently proposed RFID privacy models. It shows that some models suffer from weaknesses such as insufficient generality and unrealistic assumptions regarding the adversary's ability to corrupt tags. We propose a new RFID privacy model that is based on the notion of indistinguishability and that does not suffer from the identified drawbacks. We demonstrate the easy applicability of our model by applying it to multiple existing RFID protocols.
---
paper_title: Defining Strong Privacy for RFID
paper_content:
In this work, we consider privacy in radio frequency identification (RFID) systems. Our contribution is twofold: (1) we propose a simple, formal definition of strong privacy useful for basic analysis of RFID systems, as well as a different (weaker) definition applicable to multi-verifier systems; and (2) we apply our definition to reveal vulnerabilities in proposed privacy-enhancing RFID protocols. This paper is a highly abbreviated version of Juels et al. (2005).
---
paper_title: Privacy integrated queries: an extensible platform for privacy-preserving data analysis
paper_content:
Privacy Integrated Queries (PINQ) is an extensible data analysis platform designed to provide unconditional privacy guarantees for the records of the underlying data sets. PINQ provides analysts with access to records through an SQL-like declarative language (LINQ) amidst otherwise arbitrary C# code. At the same time, the design of PINQ's analysis language and its careful implementation provide formal guarantees of differential privacy for any and all uses of the platform. PINQ's guarantees require no trust placed in the expertise or diligence of the analysts, broadening the scope for design and deployment of privacy-preserving data analyses, especially by privacy nonexperts.
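PINQ itself is a C#/LINQ platform, but the mechanism behind a noisy count aggregate can be sketched independently: a count query has sensitivity 1, so adding Laplace noise of scale 1/epsilon yields an epsilon-differentially private answer. The Python below is a generic Laplace-mechanism sketch, not PINQ's actual API, and it omits the privacy-budget tracking that PINQ performs.

```python
import numpy as np

def noisy_count(records, predicate, epsilon):
    """Differentially private count: true count plus Laplace(1/epsilon) noise (sensitivity 1)."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

if __name__ == "__main__":
    ages = [23, 35, 41, 58, 62, 29, 47]
    # Repeating the query consumes additional privacy budget; a real platform must account for this.
    print(noisy_count(ages, lambda a: a >= 40, epsilon=0.5))   # noisy answer near the true count of 4
```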
---
paper_title: On the complexity of differentially private data release: efficient algorithms and hardness results
paper_content:
We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a "sanitization" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a "synthetic data set" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released the original data and the curator play no further role. For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes.
---
paper_title: No free lunch in data privacy
paper_content:
Differential privacy is a powerful tool for providing privacy-preserving noisy query answers over statistical databases. It guarantees that the distribution of noisy query answers changes very little with the addition or deletion of any tuple. It is frequently accompanied by popularized claims that it provides privacy without any assumptions about the data and that it protects against attackers who know all but one record. In this paper we critically analyze the privacy protections offered by differential privacy. First, we use a no-free-lunch theorem, which defines non-privacy as a game, to argue that it is not possible to provide privacy and utility without making assumptions about how the data are generated. Then we explain where assumptions are needed. We argue that privacy of an individual is preserved when it is possible to limit the inference of an attacker about the participation of the individual in the data generating process. This is different from limiting the inference about the presence of a tuple (for example, Bob's participation in a social network may cause edges to form between pairs of his friends, so that it affects more than just the tuple labeled as "Bob"). The definition of evidence of participation, in turn, depends on how the data are generated -- this is how assumptions enter the picture. We explain these ideas using examples from social network research as well as tabular data for which deterministic statistics have been previously released. In both cases the notion of participation varies, the use of differential privacy can lead to privacy breaches, and differential privacy does not always adequately limit inference about participation.
---
paper_title: Differential Privacy
paper_content:
In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius' goal along the lines of semantic security cannot be achieved. Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one's privacy incurred by participating in a database. The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy.
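For reference, the guarantee discussed above is usually written as the following inequality, where M is the randomized mechanism, D and D' are databases differing in a single record, and S ranges over sets of possible outputs:

```latex
\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S]
\quad \text{for all } S \subseteq \mathrm{Range}(\mathcal{M})
\text{ and all } D, D' \text{ differing in a single record.}
\]
```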
---
paper_title: Differential Privacy: An Economic Method for Choosing Epsilon
paper_content:
Differential privacy is becoming a gold standard notion of privacy: it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial trade-off between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of these parameters in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something about the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε on a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.
---
paper_title: Distributional differential privacy for large-scale smart metering
paper_content:
In smart power grids it is possible to match supply and demand by applying control mechanisms that are based on fine-grained load prediction. A crucial component of every control mechanism is monitoring, that is, executing queries over the network of smart meters. However, smart meters can learn so much about our lives that if we are to use such methods, it becomes imperative to protect privacy. Recent proposals recommend restricting the provider to differentially private queries, however the practicality of such approaches has not been settled. Here, we tackle an important problem with such approaches: even if queries at different points in time over statistically independent data are implemented in a differentially private way, the parameters of the distribution of the query might still reveal sensitive personal information. Protecting these parameters is hard if we allow for continuous monitoring, a natural requirement in the smart grid. We propose novel differentially private mechanisms that solve this problem for sum queries. We evaluate our methods and assumptions using a theoretical analysis as well as publicly available measurement data and show that the extra noise needed to protect distribution parameters is small.
---
paper_title: Geo-indistinguishability: differential privacy for location-based systems
paper_content:
The growing popularity of location-based systems, allowing unknown/untrusted servers to easily collect huge amounts of information regarding users' location, has recently started raising serious privacy concerns. In this paper we introduce geo-indistinguishability, a formal notion of privacy for location-based systems that protects the user's exact location, while allowing approximate information -- typically needed to obtain a certain desired service -- to be released. This privacy definition formalizes the intuitive notion of protecting the user's location within a radius r with a level of privacy that depends on r, and corresponds to a generalized version of the well-known concept of differential privacy. Furthermore, we present a mechanism for achieving geo-indistinguishability by adding controlled random noise to the user's location. We describe how to use our mechanism to enhance LBS applications with geo-indistinguishability guarantees without compromising the quality of the application results. Finally, we compare state-of-the-art mechanisms from the literature with ours. It turns out that, among all mechanisms independent of the prior, our mechanism offers the best privacy guarantees.
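The noise-adding mechanism can be sketched with the planar Laplace distribution: pick a uniformly random direction and a radius whose density is proportional to r * exp(-epsilon * r), which is exactly a Gamma(2, 1/epsilon) draw. The code below works in a local flat (x, y) coordinate system in metres and ignores the discretisation and truncation details treated in the paper.

```python
import numpy as np

def planar_laplace_noise(epsilon):
    """Sample 2D noise whose density is proportional to exp(-epsilon * distance from origin)."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)           # uniform direction
    r = np.random.gamma(shape=2.0, scale=1.0 / epsilon)   # radius ~ Gamma(2, 1/epsilon)
    return r * np.cos(theta), r * np.sin(theta)

def obfuscate(x_m, y_m, epsilon):
    """Report a perturbed location in a local metric coordinate system (metres)."""
    dx, dy = planar_laplace_noise(epsilon)
    return x_m + dx, y_m + dy

if __name__ == "__main__":
    # epsilon = ln(4) / 200 roughly means locations 200 m apart remain hard to distinguish.
    print(obfuscate(0.0, 0.0, epsilon=np.log(4) / 200.0))
```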
---
paper_title: Computational Differential Privacy
paper_content:
The definition of differential privacy has recently emerged as a leading standard of privacy guarantees for algorithms on statistical databases. We offer several relaxations of the definition which require privacy guarantees to hold only against efficient--i.e., computationally-bounded--adversaries. We establish various relationships among these notions, and in doing so, we observe their close connection with the theory of pseudodense sets by Reingold et al. [1]. We extend the dense model theorem of Reingold et al. to demonstrate equivalence between two definitions (indistinguishability- and simulatability-based) of computational differential privacy. Our computational analogues of differential privacy seem to allow for more accurate constructions than the standard information-theoretic analogues. In particular, in the context of private approximation of the distance between two vectors, we present a differentially-private protocol for computing the approximation, and contrast it with a substantially more accurate protocol that is only computationally differentially private.
---
paper_title: Privacy Against Statistical Inference
paper_content:
We propose a general statistical inference framework to capture the privacy threat incurred by a user that releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy.
---
paper_title: Information Hiding, Anonymity and Privacy: A Modular Approach
paper_content:
We propose a new specification framework for information hiding properties such as anonymity and privacy. The framework is based on the concept of a function view, which is a concise representation of the attacker's partial knowledge about a function. We describe system behavior as a set of functions, and formalize different information hiding properties in terms of views of these functions. We present an extensive case study, in which we use the function view framework to systematically classify and rigorously define a rich domain of identity-related properties, and to demonstrate that privacy and anonymity are independent. The key feature of our approach is its modularity. It yields precise, formal specifications of information hiding properties for any protocol formalism and any choice of the attacker model as long as the latter induces an observational equivalence relation on protocol instances. In particular, specifications based on function views are suitable for any cryptographic process calculus that defines some form of indistinguishability between processes. Our definitions of information hiding properties take into account any feature of the security model, including probabilities, random number generation, timing, etc., to the extent that it is accounted for by the formalism in which the system is specified.
---
paper_title: Verifying privacy-type properties of electronic voting protocols
paper_content:
Electronic voting promises the possibility of a convenient, efficient and secure facility for recording and tallying votes in an election. Recently highlighted inadequacies of implemented systems have demonstrated the importance of formally verifying the underlying voting protocols. We study three privacy-type properties of electronic voting protocols: in increasing order of strength, they are vote-privacy, receipt-freeness and coercion-resistance. We use the applied pi calculus, a formalism well adapted to modelling such protocols, which has the advantages of being based on well-understood concepts. The privacy-type properties are expressed using observational equivalence and we show in accordance with intuition that coercion-resistance implies receipt-freeness, which implies vote-privacy. We illustrate our definitions on three electronic voting protocols from the literature. Ideally, these three properties should hold even if the election officials are corrupt. However, protocols that were designed to satisfy receipt-freeness or coercion-resistance may not do so in the presence of corrupt officials. Our model and definitions allow us to specify and easily change which authorities are supposed to be trustworthy.
---
paper_title: Anonymous webs of trust
paper_content:
Webs of trust constitute a decentralized infrastructure for establishing the authenticity of the binding between public keys and users and, more generally, trust relationships among users. This paper introduces the concept of anonymous webs of trust - an extension of webs of trust where users can authenticate messages and determine each other's trust level without compromising their anonymity. Our framework comprises a novel cryptographic protocol based on zero-knowledge proofs, a symbolic abstraction and formal verification of our protocol, and a prototypical implementation based on the OpenPGP standard. The framework is capable of dealing with various core and optional features of common webs of trust, such as key attributes, key expiration dates, existence of multiple certificate chains, and trust measures between different users.
---
paper_title: Probabilistic analysis of anonymity
paper_content:
We present a formal analysis technique for probabilistic security properties of peer-to-peer communication systems based on random message routing among members. The behavior of group members and the adversary is modeled as a discrete-time Markov chain, and security properties are expressed as PCTL formulas. To illustrate feasibility of the approach, we model the Crowds system for anonymous Web browsing, and use a probabilistic model checker, PRISM, to perform automated analysis of the system and verify anonymity guarantees it provides. The main result of the Crowds analysis is a demonstration of how certain forms of anonymity degrade with the increase in group size and the number of random routing paths.
---
paper_title: Robust De-anonymization of Large Sparse Datasets
paper_content:
We present a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.
---
paper_title: Defending Anonymous Communications Against Passive Logging Attacks
paper_content:
We study the threat that passive logging attacks pose to anonymous communications. Previous work analyzed these attacks under limiting assumptions. We first describe a possible defense that comes from breaking the assumption of uniformly random path selection. Our analysis shows that the defense improves anonymity in the static model, where nodes stay in the system, but fails in a dynamic model, in which nodes leave and join. Additionally, we use the dynamic model to show that the intersection attack creates a vulnerability in certain peer-to-peer systems for anonymous communications. We present simulation results that show that attack times are significantly lower in practice than the upper bounds given by previous work. To determine whether users' Web traffic has communication patterns required by the attacks, we collected and analyzed the Web requests of users. We found that, for our study, frequent and repeated communication to the same Web site is common.
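The intersection attack mentioned above is simple to state: if the adversary records, for each observed communication round, the set of users who were online and could have initiated it, then intersecting those candidate sets across rounds shrinks the initiator's anonymity set. A toy version over invented observation logs follows.

```python
def intersection_attack(candidate_sets):
    """Intersect per-round candidate sender sets; the survivors are the likely initiators."""
    suspects = set(candidate_sets[0])
    for candidates in candidate_sets[1:]:
        suspects &= set(candidates)
        if len(suspects) <= 1:
            break
    return suspects

if __name__ == "__main__":
    # Each set: users observed online when a message to the target web site was sent (hypothetical).
    rounds = [{"alice", "bob", "carol", "dave"},
              {"alice", "carol", "erin"},
              {"alice", "dave", "frank"}]
    print(intersection_attack(rounds))   # {'alice'}
```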
---
paper_title: Metrics for Security and Performance in Low-Latency Anonymity Systems
paper_content:
In this paper we explore the tradeoffs between security and performance in anonymity networks such as Tor. Using probability of path compromise as a measure of security, we explore the behaviour of various path selection algorithms with a Tor path simulator. We demonstrate that assumptions about the relative expense of IP addresses and cheapness of bandwidth break down if attackers are allowed to purchase access to botnets, giving plentiful IP addresses, but each with relatively poor symmetric bandwidth. We further propose that the expected latency of data sent through a network is a useful performance metric, show how it may be calculated, and demonstrate the counter-intuitive result that Tor's current path selection scheme, designed for performance, both performs well and is good for anonymity in the presence of a botnet-based adversary.
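One concrete form of the path-compromise metric: if relays are selected roughly in proportion to bandwidth and the adversary runs some of them, a circuit is compromised when both the entry and exit positions are adversarial, giving approximately the square of the adversary's bandwidth fraction. The relay list below is fabricated, and the model ignores guard selection, position weighting, and family/subnet constraints.

```python
def compromise_probability(relays, malicious):
    """relays: dict name -> bandwidth; malicious: set of relay names run by the adversary."""
    total = sum(relays.values())
    bad_fraction = sum(bw for name, bw in relays.items() if name in malicious) / total
    # Entry and exit chosen (approximately) independently, weighted by bandwidth.
    return bad_fraction * bad_fraction

if __name__ == "__main__":
    relays = {"r1": 100, "r2": 80, "r3": 50, "evil1": 40, "evil2": 30}
    print(compromise_probability(relays, {"evil1", "evil2"}))   # ~0.054
```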
---
paper_title: Measuring query privacy in location-based services
paper_content:
The popularity of location-based services leads to serious concerns on user privacy. A common mechanism to protect users' location and query privacy is spatial generalisation. As more user information becomes available with the fast growth of Internet applications, e.g., social networks, attackers have the ability to construct users' personal profiles. This gives rise to new challenges and reconsideration of the existing privacy metrics, such as k-anonymity. In this paper, we propose new metrics to measure users' query privacy taking into account user profiles. Furthermore, we design spatial generalisation algorithms to compute regions satisfying users' privacy requirements expressed in these metrics. By experimental results, our metrics and algorithms are shown to be effective and efficient for practical usage.
---
paper_title: ARM: Anonymous Routing Protocol for Mobile Ad hoc Networks
paper_content:
Due to the nature of radio transmissions, communications in wireless networks are easy to capture and analyze. Moreover, privacy-enhancing techniques (PETs) proposed for wired networks such as the Internet often cannot be applied to mobile ad hoc networks (MANETs). In this paper we present a novel anonymous on-demand routing scheme for MANETs. We identify a number of problems with previously proposed schemes and propose an efficient solution that provides anonymity in a stronger adversary model.
---
paper_title: The Boundary Between Privacy and Utility in Data Publishing
paper_content:
We consider the privacy problem in data publishing: given a database instance containing sensitive information, "anonymize" it to obtain a view such that, on the one hand, attackers cannot learn any sensitive information from the view and, on the other hand, legitimate users can use it to compute useful statistics. These are conflicting goals. In this paper we prove an almost crisp separation of the case when a useful anonymization algorithm is possible from when it is not, based on the attacker's prior knowledge. Our definition of privacy is derived from existing literature and relates the attacker's prior belief for a given tuple t with the posterior belief for the same tuple. Our definition of utility is based on the error bound on the estimates of counting queries. The main result has two parts. First we show that if the prior beliefs for some tuples are large then there exists no useful anonymization algorithm. Second, we show that when the prior is bounded for all tuples then there exists an anonymization algorithm that is both private and useful. The anonymization algorithm that forms our positive result is novel, and improves the privacy/utility tradeoff of previously known algorithms with privacy/utility guarantees such as FRAPP.
---
paper_title: Hiding the presence of individuals from shared databases
paper_content:
Advances in information technology, and its use in research, are increasing both the need for anonymized data and the risks of poor anonymization. We present a metric, δ-presence, that clearly links the quality of anonymization to the risk posed by inadequate anonymization. We show that existing anonymization techniques are inappropriate for situations where δ-presence is a good metric (specifically, where knowing an individual is in the database poses a privacy risk), and present algorithms for effectively anonymizing to meet δ-presence. The algorithms are evaluated in the context of a real-world scenario, demonstrating practical applicability of the approach.
---
paper_title: Privacy-preserving data publishing: A survey of recent developments
paper_content:
The collection of digital information by governments, corporations, and individuals has created tremendous opportunities for knowledge- and information-based decision making. Driven by mutual benefits, or by regulations that require certain data to be published, there is a demand for the exchange and publication of data among various parties. Data in its original form, however, typically contains sensitive information about individuals, and publishing such data will violate individual privacy. The current practice in data publishing relies mainly on policies and guidelines as to what types of data can be published and on agreements on the use of published data. This approach alone may lead to excessive data distortion or insufficient protection. Privacy-preserving data publishing (PPDP) provides methods and tools for publishing useful information while preserving data privacy. Recently, PPDP has received considerable attention in research communities, and many approaches have been proposed for different data publishing scenarios. In this survey, we will systematically summarize and evaluate different approaches to PPDP, study the challenges in practical data publishing, clarify the differences and requirements that distinguish PPDP from other related problems, and propose future research directions.
---
paper_title: Quantifying Location Privacy
paper_content:
It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs, it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.
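The paper's central metric, the adversary's expected estimation error, has a direct computational form: weight each candidate location by the adversary's posterior probability and sum the distances to the true location. Below is a sketch over an assumed discrete grid; the posterior numbers are invented.

```python
import math

def expected_error(posterior, true_location):
    """posterior: dict mapping (x, y) grid cell -> adversary's probability for that cell."""
    return sum(p * math.dist(cell, true_location) for cell, p in posterior.items())

if __name__ == "__main__":
    # Adversary's posterior after observing an obfuscated location report (made-up numbers).
    posterior = {(0, 0): 0.5, (0, 1): 0.3, (5, 5): 0.2}
    print(expected_error(posterior, true_location=(0, 0)))   # higher value = more location privacy
```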
---
paper_title: Do Dummies Pay Off? Limits of Dummy Traffic Protection in Anonymous Communications
paper_content:
Anonymous communication systems ensure that correspondence between senders and receivers cannot be inferred with certainty. However, when patterns are persistent, observations from anonymous communication systems enable the reconstruction of user behavioral profiles. Protection against profiling can be enhanced by adding dummy messages, generated by users or by the anonymity provider, to the communication. In this paper we study the limits of the protection provided by this countermeasure. We propose an analysis methodology based on solving a least squares problem that permits to characterize the adversary’s profiling error with respect to the user behavior, the anonymity provider behavior, and the dummy strategy. Focusing on the particular case of a timed pool mix we show how, given a privacy target, the performance analysis can be used to design optimal dummy strategies to protect this objective.
---
paper_title: PoolView: stream privacy for grassroots participatory sensing
paper_content:
This paper develops mathematical foundations and architectural components for providing privacy guarantees on stream data in grassroots participatory sensing applications, where groups of participants use privately-owned sensors to collectively measure aggregate phenomena of mutual interest. Grassroots applications refer to those initiated by members of the community themselves as opposed to by some governing or official entities. The potential lack of a hierarchical trust structure in such applications makes it harder to enforce privacy. To address this problem, we develop a privacy-preserving architecture, called PoolView, that relies on data perturbation on the client-side to ensure individuals' privacy and uses community-wide reconstruction techniques to compute the aggregate information of interest. PoolView allows arbitrary parties to start new services, called pools, to compute new types of aggregate information for their clients. Both the client-side and server-side components of PoolView are implemented and available for download, including the data perturbation and reconstruction components. Two simple sensing services are developed for illustration; one computes traffic statistics from subscriber GPS data and the other computes weight statistics for a particular diet. Evaluation, using actual data traces collected by the authors, demonstrates the privacy-preserving aggregation functionality in PoolView.
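The client-side perturbation and community-scale reconstruction split can be illustrated with additive noise: each client adds independent zero-mean noise before sharing, and the aggregator recovers the community average, whose noise cancels as the pool grows, without ever seeing an individual's true value. This is a simplified additive-noise sketch, not PoolView's actual perturbation or reconstruction model.

```python
import random

def perturb(value, noise_std):
    """Client side: share only a noised version of the private measurement."""
    return value + random.gauss(0.0, noise_std)

def reconstruct_mean(shared_values):
    """Pool side: zero-mean noise averages out, so the community mean is recoverable."""
    return sum(shared_values) / len(shared_values)

if __name__ == "__main__":
    random.seed(1)
    true_weights = [random.gauss(75, 10) for _ in range(2000)]    # private data (kg), synthetic
    shared = [perturb(w, noise_std=20) for w in true_weights]     # what actually leaves each client
    print(round(sum(true_weights) / len(true_weights), 2), round(reconstruct_mean(shared), 2))
```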
---
paper_title: Addressing the concerns of the lacks family: quantification of kin genomic privacy
paper_content:
The rapid progress in human-genome sequencing is leading to a high availability of genomic data. This data is notoriously very sensitive and stable in time. It is also highly correlated among relatives. A growing number of genomes are becoming accessible online (e.g., because of leakage, or after their posting on genome-sharing websites). What are then the implications for kin genomic privacy? We formalize the problem and detail an efficient reconstruction attack based on graphical models and belief propagation. With this approach, an attacker can infer the genomes of the relatives of an individual whose genome is observed, relying notably on Mendel's Laws and statistical relationships between the nucleotides (on the DNA sequence). Then, to quantify the level of genomic privacy as a result of the proposed inference attack, we discuss possible definitions of genomic privacy metrics. Genomic data reveals Mendelian diseases and the likelihood of developing degenerative diseases such as Alzheimer's. We also introduce the quantification of health privacy, specifically the measure of how well the predisposition to a disease is concealed from an attacker. We evaluate our approach on actual genomic data from a pedigree and show the threat extent by combining data gathered from a genome-sharing website and from an online social network.
---
paper_title: De-anonymizing Social Networks
paper_content:
Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.
---
paper_title: Measuring anonymity: the disclosure attack
paper_content:
Anonymity services hide user identity at the network or address level but are vulnerable to attacks involving repeated observations of the user. Quantifying the number of observations required for an attack is a useful measure of anonymity.
---
paper_title: Users get routed: traffic correlation on tor by realistic adversaries
paper_content:
We present the first analysis of the popular Tor anonymity network that indicates the security of typical users against reasonably realistic adversaries in the Tor network or in the underlying Internet. Our results show that Tor users are far more susceptible to compromise than indicated by prior work. Specific contributions of the paper include (1) a model of various typical kinds of users, (2) an adversary model that includes Tor network relays, autonomous systems (ASes), Internet exchange points (IXPs), and groups of IXPs drawn from empirical study, (3) metrics that indicate how secure users are over a period of time, (4) the most accurate topological model to date of ASes and IXPs as they relate to Tor usage and network configuration, (5) a novel realistic Tor path simulator (TorPS), and (6) analyses of security making use of all the above. To show that our approach is useful to explore alternatives and not just Tor as currently deployed, we also analyze a published alternative path selection algorithm, Congestion-Aware Tor. We create an empirical model of Tor congestion, identify novel attack vectors, and show that it too is more vulnerable than previously indicated.
---
paper_title: Tor: The Second-Generation Onion Router
paper_content:
We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design by adding perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than 30 nodes. We close with a list of open problems in anonymous communication.
---
paper_title: Caravan: Providing location privacy for vanet
paper_content:
In vehicular ad hoc networks (VANET), it is possible to locate and track a vehicle based on its transmissions during communication with other vehicles or the road-side infrastructure. This type of tracking leads to threats on the location privacy of the vehicle's user. In this paper, we study the problem of providing location privacy in VANET by allowing vehicles to prevent tracking of their broadcast communications. We first identify the unique characteristics of VANET that must be considered when designing suitable location privacy solutions. Based on these observations, we propose a location privacy scheme called CARAVAN, and evaluate the privacy enhancement achieved under some existing standard constraints of VANET applications, and in the presence of a global adversary.
---
paper_title: Preserving privacy in gps traces via uncertainty-aware path cloaking
paper_content:
Motivated by a probe-vehicle based automotive traffic monitoring system, this paper considers the problem of guaranteed anonymity in a dataset of location traces while maintaining high data accuracy. We find through analysis of a set of GPS traces from 233 vehicles that known privacy algorithms cannot meet accuracy requirements or fail to provide privacy guarantees for drivers in low-density areas. To overcome these challenges, we develop a novel time-to-confusion criterion to characterize privacy in a location dataset and propose an uncertainty-aware path cloaking algorithm that hides location samples in a dataset to provide a time-to-confusion guarantee for all vehicles. We show that this approach effectively guarantees worst case tracking bounds, while achieving significant data accuracy improvements.
---
paper_title: Quantifying Location Privacy
paper_content:
It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs, it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.
---
paper_title: Privacy-preserving data mining
paper_content:
A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question. Since the primary task in data mining is the development of models about aggregated data, can we develop accurate models without access to precise information in individual data records? We consider the concrete case of building a decision-tree classifier from training data in which the values of individual records have been perturbed. The resulting data records look very different from the original records and the distribution of data values is also very different from the original distribution. While it is not possible to accurately estimate original values in individual data records, we propose a novel reconstruction procedure to accurately estimate the distribution of original data values. By using these reconstructed distributions, we are able to build classifiers whose accuracy is comparable to the accuracy of classifiers built with the original data.
---
paper_title: When do data mining results violate privacy?
paper_content:
Privacy-preserving data mining has concentrated on obtaining valid results when the input data is private. An extreme example is Secure Multiparty Computation-based methods, where only the results are revealed. However, this still leaves a potential privacy breach: Do the results themselves violate privacy? This paper explores this issue, developing a framework under which this question can be addressed. Metrics are proposed, along with analysis that those metrics are consistent in the face of apparent problems.
---
paper_title: Preserving User Location Privacy in Mobile Data Management Infrastructures
paper_content:
Location-based services, such as finding the nearest gas station, require users to supply their location information. However, a user's location can be tracked without her consent or knowledge. Lowering the spatial and temporal resolution of location data sent to the server has been proposed as a solution. Although this technique is effective in protecting privacy, it may be overkill and the quality of desired services can be severely affected. In this paper, we suggest a framework where uncertainty can be controlled to provide high quality and privacy-preserving services, and investigate how such a framework can be realized in the GPS and cellular network systems. Based on this framework, we suggest a data model to augment uncertainty to location data, and propose imprecise queries that hide the location of the query issuer and yields probabilistic results. We investigate the evaluation and quality aspects for a range query. We also provide novel methods to protect our solutions against trajectory-tracing. Experiments are conducted to examine the effectiveness of our approaches.
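A small sketch of how a probabilistic answer to an imprecise range query could be computed: if the user's location is only known up to an uncertainty region, the query returns the probability that the user falls inside the query range. The uniform rectangular uncertainty model and the function name are assumptions for illustration; the paper's data model may differ.

```python
def overlap_probability(region, query):
    """Probability that a location uniformly distributed over `region`
    (x1, y1, x2, y2) falls inside the rectangular `query` range."""
    rx1, ry1, rx2, ry2 = region
    qx1, qy1, qx2, qy2 = query
    ox = max(0.0, min(rx2, qx2) - max(rx1, qx1))   # overlap width
    oy = max(0.0, min(ry2, qy2) - max(ry1, qy1))   # overlap height
    region_area = (rx2 - rx1) * (ry2 - ry1)
    return (ox * oy) / region_area if region_area > 0 else 0.0

# A user reports a 2 km x 2 km uncertainty box instead of an exact position;
# the service answers a range query with a probability rather than yes/no.
print(overlap_probability(region=(0, 0, 2, 2), query=(1, 1, 5, 5)))  # 0.25
```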
---
paper_title: Location Privacy Protection Through Obfuscation-Based Techniques
paper_content:
The widespread adoption of mobile communication devices combined with technical improvements of location technologies are fostering the development of a new wave of applications that manage physical positions of individuals to offer location-based services for business, social or informational purposes. As an effect of such innovative services, however, privacy concerns are increasing, calling for more sophisticated solutions for providing users with different and manageable levels of privacy. In this work, we propose a way to express users privacy preferences on location information in a straightforward and intuitive way. Then, based on such location privacy preferences, we discuss a new solution, based on obfuscation techniques, which permits us to achieve, and quantitatively estimate through a metric, different degrees of location privacy.
---
paper_title: Preserving User Location Privacy in Mobile Data Management Infrastructures
paper_content:
Location-based services, such as finding the nearest gas station, require users to supply their location information. However, a user's location can be tracked without her consent or knowledge. Lowering the spatial and temporal resolution of location data sent to the server has been proposed as a solution. Although this technique is effective in protecting privacy, it may be overkill and the quality of desired services can be severely affected. In this paper, we suggest a framework where uncertainty can be controlled to provide high quality and privacy-preserving services, and investigate how such a framework can be realized in the GPS and cellular network systems. Based on this framework, we suggest a data model to augment uncertainty to location data, and propose imprecise queries that hide the location of the query issuer and yields probabilistic results. We investigate the evaluation and quality aspects for a range query. We also provide novel methods to protect our solutions against trajectory-tracing. Experiments are conducted to examine the effectiveness of our approaches.
---
paper_title: Quantifying Location Privacy
paper_content:
It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs, it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.
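The "correctness" metric argued for above is the adversary's expected estimation error. A minimal sketch, assuming a small discrete set of candidate locations and Euclidean distance (function and variable names are illustrative):

```python
import numpy as np

def expected_estimation_error(posterior, locations, true_location):
    """Location privacy as the adversary's expected error: the mean distance
    between the true location and locations drawn from the adversary's
    posterior distribution (higher error = more privacy)."""
    dists = np.linalg.norm(locations - true_location, axis=1)
    return float(np.dot(posterior, dists))

# Four candidate regions on a toy map and the adversary's posterior over them.
locations = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
posterior = np.array([0.1, 0.2, 0.3, 0.4])     # adversary's belief after the attack
true_location = np.array([0.0, 0.0])           # where the user actually is
print(expected_estimation_error(posterior, locations, true_location))
```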
---
paper_title: D.M.: Unfriendly: multi-party privacy risks in social networks
paper_content:
As the popularity of social networks expands, the information users expose to the public has potentially dangerous implications for individual privacy. While social networks allow users to restrict access to their personal data, there is currently no mechanism to enforce privacy concerns over content uploaded by other users. As group photos and stories are shared by friends and family, personal privacy goes beyond the discretion of what a user uploads about himself and becomes an issue of what every network participant reveals. In this paper, we examine how the lack of joint privacy controls over content can inadvertently reveal sensitive information about a user including preferences, relationships, conversations, and photos. Specifically, we analyze Facebook to identify scenarios where conflicting privacy settings between friends will reveal information that at least one user intended remain private. By aggregating the information exposed in this manner, we demonstrate how a user's private attributes can be inferred from simply being listed as a friend or mentioned in a story. To mitigate this threat, we show how Facebook's privacy model can be adapted to enforce multi-party privacy. We present a proof of concept application built into Facebook that automatically ensures mutually acceptable privacy restrictions are enforced on group content.
---
paper_title: The scrambler attack: A robust physical layer attack on location privacy in vehicular networks
paper_content:
Vehicular networks provide the basis for a wide range of both safety and non-safety applications. One of the key challenges for wide acceptance is to which degree the drivers' privacy can be protected. The main technical privacy protection mechanism is the use of changing identifiers (from MAC to application layer), so called pseudonyms. The effectiveness of this approach, however, is clearly reduced if specific characteristics of the physical layer (e.g., in the transmitted signal) reveal the link between two messages with different pseudonyms. In this paper, we present such a fingerprinting technique: the scrambler attack. In contrast to other physical layer fingerprinting methods, it does not rely on potentially fragile features of the channel or the hardware, but exploits the transmitted scrambler state that each receiver has to derive in order to decode a packet, making this attack extremely robust. We show how the scrambler attack bypasses the privacy protection mechanism of state-of-the-art approaches and quantify the degradation of drivers' location privacy with an extensive simulation study. Based on our results, we identify additional technological requirements in order to enable privacy protection mechanisms on a large scale.
---
paper_title: Quantifying the Effect of Co-Location Information on Location Privacy
paper_content:
Mobile users increasingly report their co-locations with other users, in addition to revealing their locations to online services. For instance, they tag the names of the friends they are with, in the messages and in the pictures they post on social networking websites. Combined with (possibly obfuscated) location information, such co-locations can be used to improve the inference of the users’ locations, thus further threatening their location privacy: as co-location information is taken into account, not only a user’s reported locations and mobility patterns can be used to localize her, but also those of her friends (and the friends of their friends and so on). In this paper, we study this problem by quantifying the effect of co-location information on location privacy, with respect to an adversary such as a social network operator that has access to such information. We formalize the problem and derive an optimal inference algorithm that incorporates such co-location information, yet at the cost of high complexity. We propose two polynomial-time approximate inference algorithms and we extensively evaluate their performance on a real dataset. Our experimental results show that, even in the case where the adversary considers co-locations with only a single friend of the targeted user, the location privacy of the user is decreased by up to 75% in a typical setting. Even in the case where a user does not disclose any location information, her privacy can decrease by up to 16% due to the information reported by other users.
---
paper_title: Addressing the concerns of the lacks family: quantification of kin genomic privacy
paper_content:
The rapid progress in human-genome sequencing is leading to a high availability of genomic data. This data is notoriously very sensitive and stable in time. It is also highly correlated among relatives. A growing number of genomes are becoming accessible online (e.g., because of leakage, or after their posting on genome-sharing websites). What are then the implications for kin genomic privacy? We formalize the problem and detail an efficient reconstruction attack based on graphical models and belief propagation. With this approach, an attacker can infer the genomes of the relatives of an individual whose genome is observed, relying notably on Mendel's Laws and statistical relationships between the nucleotides (on the DNA sequence). Then, to quantify the level of genomic privacy as a result of the proposed inference attack, we discuss possible definitions of genomic privacy metrics. Genomic data reveals Mendelian diseases and the likelihood of developing degenerative diseases such as Alzheimer's. We also introduce the quantification of health privacy, specifically the measure of how well the predisposition to a disease is concealed from an attacker. We evaluate our approach on actual genomic data from a pedigree and show the threat extent by combining data gathered from a genome-sharing website and from an online social network.
---
paper_title: Sedic: privacy-aware data intensive computing on hybrid clouds
paper_content:
The emergence of cost-effective cloud services offers organizations great opportunity to reduce their cost and increase productivity. This development, however, is hampered by privacy concerns: a significant amount of organizational computing workload at least partially involves sensitive data and therefore cannot be directly outsourced to the public cloud. The scale of these computing tasks also renders existing secure outsourcing techniques less applicable. A natural solution is to split a task, keeping the computation on the private data within an organization's private cloud while moving the rest to the public commercial cloud. However, this hybrid cloud computing is not supported by today's data-intensive computing frameworks, MapReduce in particular, which forces the users to manually split their computing tasks. In this paper, we present a suite of new techniques that make such privacy-aware data-intensive computing possible. Our system, called Sedic, leverages the special features of MapReduce to automatically partition a computing job according to the security levels of the data it works on, and arrange the computation across a hybrid cloud. Specifically, we modified MapReduce's distributed file system to strategically replicate data, moving sanitized data blocks to the public cloud. Over this data placement, map tasks are carefully scheduled to outsource as much workload to the public cloud as possible, given sensitive data always stay on the private cloud. To minimize inter-cloud communication, our approach also automatically analyzes and transforms the reduction structure of a submitted job to aggregate the map outcomes within the public cloud before sending the result back to the private cloud for the final reduction. This also allows the users to interact with our system in the same way they work with MapReduce, and directly run their legacy code in our framework. We implemented Sedic on Hadoop and evaluated it using both real and synthesized computing jobs on a large-scale cloud test-bed. The study shows that our techniques effectively protect sensitive user data, offload a large amount of computation to the public cloud and also fully preserve the scalability of MapReduce.
---
paper_title: Private Information: To Reveal or not to Reveal
paper_content:
This article studies the notion of quantitative policies for trust management and gives protocols for realizing them in a disclosure-minimizing fashion. Specifically, Bob values each credential with a certain number of points, and requires a minimum total threshold of points before granting Alice access to a resource. In turn, Alice values each of her credentials with a privacy score that indicates her degree of reluctance to reveal that credential. Bob's valuation of credentials and his threshold are private. Alice's privacy-valuation of her credentials is also private. Alice wants to find a subset of her credentials that achieves Bob's required threshold for access, yet is of as small a value to her as possible. We give protocols for computing such a subset of Alice's credentials without revealing any of the two parties' above-mentioned private information. Furthermore, we develop a fingerprint method that allows Alice to independently and easily recover the optimal knapsack solution, once the computed optimal value is given, but also enables verification of the integrity of the optimal value. The fingerprint method is useful beyond the specific authorization problem studied, and can be applied to any integer knapsack dynamic programming in a private setting.
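The underlying (non-private) optimization is a knapsack-style dynamic program: pick a subset of credentials whose points reach Bob's threshold while minimizing Alice's total privacy score. The sketch below shows only that optimization under hypothetical valuations; the paper's contribution, computing this without revealing either party's private valuations, is not reproduced here.

```python
def min_privacy_subset(points, privacy, threshold):
    """Dynamic program: pick credentials whose points sum to >= threshold
    while minimizing the total privacy score. Returns (score, indices)."""
    INF = float("inf")
    # best[p] = (minimal privacy score to reach p points (capped), chosen credentials)
    best = [(INF, [])] * (threshold + 1)
    best[0] = (0.0, [])
    for i, (pt, pr) in enumerate(zip(points, privacy)):
        new_best = list(best)
        for p in range(threshold + 1):
            score, chosen = best[p]
            if score == INF:
                continue
            q = min(threshold, p + pt)          # cap points at the threshold
            if score + pr < new_best[q][0]:
                new_best[q] = (score + pr, chosen + [i])
        best = new_best
    return best[threshold]

# Hypothetical example: Bob requires 10 points; Alice's credentials carry
# (points, privacy score) valuations.
points  = [4, 5, 6, 3]
privacy = [2.0, 3.0, 7.0, 1.0]
print(min_privacy_subset(points, privacy, threshold=10))
```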
---
paper_title: On the Economics of Anonymity
paper_content:
Decentralized anonymity infrastructures are still not in wide use today. While there are technical barriers to a secure robust design, our lack of understanding of the incentives to participate in such systems remains a major roadblock. Here we explore some reasons why anonymity systems are particularly hard to deploy, enumerate the incentives to participate either as senders or also as nodes, and build a general model to describe the effects of these incentives. We then describe and justify some simplifying assumptions to make the model manageable, and compare optimal strategies for participants based on a variety of scenarios.
---
paper_title: Guide to measuring privacy concern: Review of survey and observational instruments
paper_content:
The debate about online privacy gives testimony of Web users' concerns. Privacy concerns make consumers adopt data protection features, guide their appreciation for existing features, and can steer their consumption choices amongst competing businesses. However, approaches to measure privacy concern are fragmented and often ad-hoc, at the detriment of reliable results. The need for measurement instruments for privacy concern is twofold. First, attitudes and opinions about data protection cannot be established and compared without reliable mechanisms. Second, behavioural studies, notably in technology acceptance and the behavioural economics of privacy require measures for concern as a moderating factor. In its first part, this paper provides a comprehensive review of existing survey instruments for measuring privacy concerns. The second part focuses on revealed preferences that can be used for opportunistically measuring privacy concerns in the wild or for scale validation. Recommendations for scale selection and reuse are provided.
---
paper_title: Personalized privacy preservation
paper_content:
We study generalization for preserving privacy in publication of sensitive data. The existing methods focus on a universal approach that exerts the same amount of preservation for all persons, without catering for their concrete needs. The consequence is that we may be offering insufficient protection to a subset of people, while applying excessive privacy control to another subset. Motivated by this, we present a new generalization framework based on the concept of personalized anonymity. Our technique performs the minimum generalization for satisfying everybody's requirements, and thus, retains the largest amount of information from the microdata. We carry out a careful theoretical study that leads to valuable insight into the behavior of alternative solutions. In particular, our analysis mathematically reveals the circumstances where the previous work fails to protect privacy, and establishes the superiority of the proposed solutions. The theoretical findings are verified with extensive experiments.
---
paper_title: Privacy in Social Networks: How Risky is Your Social Graph?
paper_content:
Several efforts have been made for more privacy aware Online Social Networks (OSNs) to protect personal data against various privacy threats. However, despite the relevance of these proposals, we believe there is still the lack of a conceptual model on top of which privacy tools have to be designed. Central to this model should be the concept of risk. Therefore, in this paper, we propose a risk measure for OSNs. The aim is to associate a risk level with social network users in order to provide other users with a measure of how much it might be risky, in terms of disclosure of private information, to have interactions with them. We compute risk levels based on similarity and benefit measures, by also taking into account the user risk attitudes. In particular, we adopt an active learning approach for risk estimation, where user risk attitude is learned from few required user interactions. The risk estimation process discussed in this paper has been developed into a Facebook application and tested on real data. The experiments show the effectiveness of our proposal.
---
paper_title: War of the benchmark means: time for a truce
paper_content:
For decades, computer benchmarkers have fought a War of Means. Although many have raised concerns with the geometric mean (GM), it continues to be used by SPEC and others. This war is an unnecessary misunderstanding due to inadequately articulated implicit assumptions, plus confusion among populations, their parameters, sampling methods, and sample statistics. In fact, all the Means have their uses, sometimes in combination. Metrics may be algebraically correct, but statistically irrelevant or misleading if applied to population distributions for which they are inappropriate. Normal (Gaussian) distributions are so useful that they are often assumed without question, but many important distributions are not normal. They require different analyses, most commonly by finding a mathematical transformation that yields a normal distribution, computing the metrics, and then back-transforming to the original scale. Consider the distribution of relative performance ratios of programs on two computers. The normal distribution is a good fit only when variance and skew are small, but otherwise generates logical impossibilities and misleading statistical measures. A much better choice is the lognormal (or log-normal) distribution, not just on theoretical grounds, but through the (necessary) validation with real data. Normal and lognormal distributions are similar for low variance and skew, but the lognormal handles skewed distributions reasonably, unlike the normal. Lognormal distributions occur frequently elsewhere, are well-understood, and have standard methods of analysis. Everyone agrees that "Performance is not a single number," ... and then argues about which number is better. It is more important to understand populations, appropriate methods, and proper ways to convey uncertainty. When population parameters are estimated via samples, statistically correct methods must be used to produce the appropriate means, measures of dispersion, skew, confidence levels, and perhaps goodness-of-fit estimators. If the wrong Mean is chosen, it is difficult to achieve much. The GM predicts the mean relative performance of programs, not of workloads. The usual GM formula is rather unintuitive, and is often claimed to have no physical meaning. However, it is the back-transformed average of a lognormal distribution, as can be seen by the mathematical identity below. Its use is not only statistically appropriate in some cases, but enables straightforward computation of other useful statistics. "If a man will begin in certainties, he shall end in doubts, but if he will be content to begin with doubts, he shall end with certainties." — Francis Bacon, in Savage.
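The identity alluded to is presumably the standard one: the geometric mean equals the exponentiated arithmetic mean of the logarithms, i.e. the back-transformed average on the log scale. A tiny sketch with arbitrary performance ratios:

```python
import math

ratios = [1.2, 0.8, 2.5, 1.1, 0.6]   # hypothetical speedups of programs on machine A vs. B

# Usual definition: n-th root of the product of the values.
gm_product = math.prod(ratios) ** (1 / len(ratios))

# Back-transformed mean of the logs (the lognormal view of the same quantity).
gm_logs = math.exp(sum(math.log(r) for r in ratios) / len(ratios))

print(gm_product, gm_logs)   # identical up to floating-point rounding
```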
---
paper_title: Genomic Privacy Metrics: A Systematic Comparison
paper_content:
The human genome uniquely identifies, and contains highly sensitive information about, individuals. This creates a high potential for misuse of genomic data (e.g., genetic discrimination). This paper investigates how genomic privacy can be measured in scenarios where an adversary aims to infer a person's genome by constructing probability distributions on the values of genetic variations. Specifically, we investigate 22 privacy metrics using adversaries of different strengths, and uncover problems with several metrics that have previously been used for genomic privacy. We then give suggestions on metric selection, and illustrate the process with a case study on Alzheimer's disease.
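As an illustration of what such metrics operate on, here is a sketch of two common metric families computed from an adversary's probability distribution over a single genetic variant (genotype coded 0/1/2): uncertainty (entropy) and incorrectness (expected estimation error against the true genotype). These are only two of the many metrics the paper examines, and the numbers are invented.

```python
import numpy as np

def entropy(p):
    """Adversary's uncertainty (in bits) about one genetic variant."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

def expected_error(p, true_value, values=(0, 1, 2)):
    """Adversary's incorrectness: expected absolute distance between the
    guessed genotype and the true genotype under the adversary's belief."""
    return float(sum(pi * abs(v - true_value) for pi, v in zip(p, values)))

belief = [0.7, 0.2, 0.1]   # adversary's distribution over genotypes {0, 1, 2}
print(entropy(belief), expected_error(belief, true_value=0))
```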
---
| Title: Technical Privacy Metrics: a Systematic Survey
Section 1: Introduction
Description 1: Provide an overview of privacy as a fundamental human right, historical definitions, challenges in defining privacy, and the need for privacy enhancing technologies (PETs). Introduce the contributions of the paper.
Section 2: Conditions for Quality of Metrics
Description 2: Discuss the conditions and criteria that high-quality privacy metrics should fulfill, mentioning both mathematical and domain-specific requirements.
Section 3: Privacy Domains
Description 3: Describe various domains where privacy enhancing technologies can be applied. Provide context and examples for the remainder of the paper by discussing specific domains such as communication systems, databases, location-based services, smart metering, social networks, and genome privacy.
Section 4: Characteristics of Privacy Metrics
Description 4: Identify and explain common characteristics that classify privacy metrics, including adversary models and data sources.
Section 5: Privacy Metrics
Description 5: Describe over eighty privacy metrics from the literature, grouped by the outputs they measure. Discuss their advantages, disadvantages, and application scenarios.
Section 6: How to Select Privacy Metrics
Description 6: Present a series of questions to guide the selection of privacy metrics for a given scenario. Discuss how to consider various aspects such as suitable output measures, adversary models, data sources, availability of input data, target audience, related work, quality of metrics, and existing metric implementations.
Section 7: Future Research Directions
Description 7: Outline areas that merit further research, including interdependent privacy, metrics that incorporate user attitudes and behaviors, ways to aggregate and combine metrics, and evaluating the quality of privacy metrics.
Section 8: Conclusion
Description 8: Summarize the comprehensive review of privacy metrics presented in the paper, the categorization of privacy metrics, and the method for choosing privacy metrics. Highlight the importance of selecting multiple metrics to cover various aspects of privacy. |
A review of feature extraction and classification algorithms for image RSVP-based BCI | 7 | ---
paper_title: A framework for rapid visual image search using single-trial brain evoked responses
paper_content:
We report the design and performance of a brain computer interface for single-trial detection of viewed images based on human dynamic brain response signatures in 32-channel electroencephalography (EEG) acquired during a rapid serial visual presentation. The system explores the feasibility of speeding up image analysis by tapping into split-second perceptual judgments of humans. We present an incremental learning system with less memory storage and computational cost for single-trial event-related potential (ERP) detection, which is trained using cross-session data. We demonstrate the efficacy of the method on the task of target image detection. We apply linear and nonlinear support vector machines (SVMs) and a linear logistic classifier (LLC) for single-trial ERP detection using data collected from image analysts and naive subjects. For our data the detection performance of the nonlinear SVM is better than the linear SVM and the LLC. We also show that our ERP-based target detection system is five-fold faster than the traditional image viewing paradigm.
---
paper_title: Brain Activity-Based Image Classification From Rapid Serial Visual Presentation
paper_content:
We report the design and performance of a brain-computer interface (BCI) system for real-time single-trial binary classification of viewed images based on participant-specific dynamic brain response signatures in high-density (128-channel) electroencephalographic (EEG) data acquired during a rapid serial visual presentation (RSVP) task. Image clips were selected from a broad area image and presented in rapid succession (12/s) in 4.1-s bursts. Participants indicated by subsequent button press whether or not each burst of images included a target airplane feature. Image clip creation and search path selection were designed to maximize user comfort and maintain user awareness of spatial context. Independent component analysis (ICA) was used to extract a set of independent source time-courses and their minimally-redundant low-dimensional informative features in the time and time-frequency amplitude domains from 128-channel EEG data recorded during clip burst presentations in a training session. The naive Bayes fusion of two Fisher discriminant classifiers, computed from the 100 most discriminative time and time-frequency features, respectively, was used to estimate the likelihood that each clip contained a target feature. This estimator was applied online in a subsequent test session. Across eight training/test session pairs from seven participants, median area under the receiver operator characteristic curve, by tenfold cross validation, was 0.97 for within-session and 0.87 for between-session estimates, and was nearly as high (0.83) for targets presented in bursts that participants mistakenly reported to include no target features.
---
paper_title: Updating P300: An integrative theory of P3a and P3b
paper_content:
The empirical and theoretical development of the P300 event-related brain potential (ERP) is reviewed by considering factors that contribute to its amplitude, latency, and general characteristics. The neuropsychological origins of the P3a and P3b subcomponents are detailed, and how target/standard discrimination difficulty modulates scalp topography is discussed. The neural loci of P3a and P3b generation are outlined, and a cognitive model is proffered: P3a originates from stimulus-driven frontal attention mechanisms during task processing, whereas P3b originates from temporal-parietal activity associated with attention and appears related to subsequent memory processing. Neurotransmitter actions associating P3a to frontal/dopaminergic and P3b to parietal/norepinephrine pathways are highlighted. Neuroinhibition is suggested as an overarching theoretical mechanism for P300, which is elicited when stimulus detection engages memory operations.
---
paper_title: A review of classification algorithms for EEG-based brain-computer interfaces
paper_content:
In this paper we review classification algorithms used to design brain–computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.
---
paper_title: A Tutorial on Principal Component Analysis
paper_content:
Principal component analysis (PCA) is a mainstay of modern data analysis - a black box that is widely used but (sometimes) poorly understood. The goal of this paper is to dispel the magic behind this black box. This manuscript focuses on building a solid intuition for how and why principal component analysis works. This manuscript crystallizes this knowledge by deriving from simple intuitions, the mathematics behind PCA. This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how and the why of applying this technique.
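A minimal sketch of PCA as described above (center the data, take the SVD, project onto the leading components), e.g. as a dimensionality-reduction step for EEG trial features. The data here are random placeholders.

```python
import numpy as np

def pca(X, n_components):
    """Project rows of X (samples x features) onto the top principal components."""
    Xc = X - X.mean(axis=0)                             # 1. center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)   # 2. SVD of the centered data
    components = Vt[:n_components]                      # rows = principal directions
    explained_var = (S ** 2) / (len(X) - 1)             # eigenvalues of the covariance matrix
    return Xc @ components.T, components, explained_var[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                          # e.g., 200 trials x 10 features
scores, components, var = pca(X, n_components=3)
print(scores.shape, var)
```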
---
paper_title: High-throughput image search via single-trial event detection in a rapid serial visual presentation task
paper_content:
We describe a method, using linear discrimination, for detecting single-trial EEG signatures of object recognition events in a rapid serial visual presentation (RSVP) task. We record EEG using a high spatial density array (87 electrodes) during the rapid presentation (50-200 msec per image) of natural images. Subjects were instructed to release a button when they recognized a target image (an image with a person/people). Trials consisted of 100 images each, with a 50% chance of a single target being in a trial. Subject EEG was analyzed on a single-trial basis with an optimal spatial linear discriminator learned at multiple time windows after the presentation of an image. Linear discrimination enables the estimation of a forward model and thus allows for an approximate localization of the discriminating activity. Results show multiple loci for discriminating activity (e.g. motor and visual). Using these detected EEG signatures, we show that in many cases we can detect targets more accurately than the overt response (button release) and that such signatures can be used to prioritize images for high-throughput search.
---
paper_title: Cortically coupled computer vision for rapid image search
paper_content:
We describe a real-time electroencephalography (EEG)-based brain-computer interface system for triaging imagery presented using rapid serial visual presentation. A target image in a sequence of nontarget distractor images elicits in the EEG a stereotypical spatiotemporal response, which can be detected. A pattern classifier uses this response to reprioritize the image sequence, placing detected targets in the front of an image stack. We use single-trial analysis based on linear discrimination to recover spatial components that reflect differences in EEG activity evoked by target versus nontarget images. We find an optimal set of spatial weights for 59 EEG sensors within a sliding 50-ms time window. Using this simple classifier allows us to process EEG in real time. The detection accuracy across five subjects is on average 92%, i.e., in a sequence of 2500 images, resorting images based on detector output results in 92% of target images being moved from a random position in the sequence to one of the first 250 images (first 10% of the sequence). The approach leverages the highly robust and invariant object recognition capabilities of the human visual system, using single-trial EEG analysis to efficiently detect neural signatures correlated with the recognition event.
---
paper_title: An investigation of triggering approaches for the rapid serial visual presentation paradigm in brain computer interfacing
paper_content:
The rapid serial visual presentation (RSVP) paradigm is a method that can be used to extend the P300 based brain computer interface (BCI) approach to enable high throughput target image recognition applications. The method requires high temporal resolution and hence, generating reliable and accurate stimulus triggers is critical for high performance execution. The traditional RSVP paradigm is normally deployed on two computers where software triggers generated at runtime by the image presentation software on a presentation computer are acquired along with the raw electroencephalography (EEG) signals by a dedicated data acquisition system connected to a second computer. It is often assumed that the stimulus presentation timing as acquired via events arising in the stimulus presentation code is an accurate reflection of the physical stimulus presentation. This is not necessarily the case due to various and variable latencies that may arise in the overall system. This paper describes a study to investigate in a representative RSVP implementation whether or not software-derived stimulus timing can be considered an accurate reflection of the physical stimuli timing. To investigate this, we designed a simple circuit consisting of a light diode resistor comparator circuit (LDRCC) for recording the physical presentation of stimuli and which in turn generates what we refer to as hardware triggered events. These hardware-triggered events constitute a measure of ground truth and are captured along with the corresponding stimulus presentation command timing events for comparison. Our experimental results show that using software-derived timing only may introduce uncertainty as to the true presentation times of the stimuli and this uncertainty itself is highly variable at least in the representative implementation described here. For BCI protocols such as those utilizing RSVP, the uncertainty introduced will cause impairment of performance and we recommend the use of additional circuitry to capture the physical presentation of stimuli and that these hardware-derived triggers should instead constitute the event markers to be used for subsequent analysis of the EEG.
---
paper_title: A framework for rapid visual image search using single-trial brain evoked responses
paper_content:
We report the design and performance of a brain computer interface for single-trial detection of viewed images based on human dynamic brain response signatures in 32-channel electroencephalography (EEG) acquired during a rapid serial visual presentation. The system explores the feasibility of speeding up image analysis by tapping into split-second perceptual judgments of humans. We present an incremental learning system with less memory storage and computational cost for single-trial event-related potential (ERP) detection, which is trained using cross-session data. We demonstrate the efficacy of the method on the task of target image detection. We apply linear and nonlinear support vector machines (SVMs) and a linear logistic classifier (LLC) for single-trial ERP detection using data collected from image analysts and naive subjects. For our data the detection performance of the nonlinear SVM is better than the linear SVM and the LLC. We also show that our ERP-based target detection system is five-fold faster than the traditional image viewing paradigm.
---
paper_title: Common Spatio-Temporal Pattern for Single-Trial Detection of Event-Related Potential in Rapid Serial Visual Presentation Triage
paper_content:
Searching for target images in large volume imagery is a challenging problem and the rapid serial visual presentation (RSVP) triage is potentially a promising solution to the problem. RSVP triage is essentially a cortically-coupled computer vision technique that relies on single-trial detection of event-related potentials (ERP). In RSVP triage, images are shown to a subject in a rapid serial sequence. When a target image is seen by the subject, unique ERP characterized by P300 are elicited. Thus, in RSVP triage, accurate detection of such distinct ERP allows for fast searching of target images in large volume imagery. The accuracy of the distinct ERP detection in RSVP triage depends on the feature extraction method, for which the common spatial pattern analysis (CSP) was used with limited success. This paper presents a novel feature extraction method, termed common spatio-temporal pattern (CSTP), which is critical for robust single-trial detection of ERP. Unlike the conventional CSP, whereby only spatial patterns of ERP are considered, the present proposed method exploits spatial and temporal patterns of ERP separately, providing complementary spatial and temporal features for highly accurate single-trial ERP detection. Numerical study using data collected from 20 subjects in RSVP triage experiments demonstrates that the proposed method offers significant performance improvement over the conventional CSP method (corrected p-value < 0.05, Pearson r = 0.64) and other competing methods in the literature. This paper further shows that the main idea of CSTP can be easily applied to other methods.
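For orientation, here is a sketch of the conventional CSP step that CSTP builds on and is compared against: spatial filters are obtained from a generalized eigendecomposition of the two classes' average spatial covariance matrices, and log-variance of the filtered signals gives the features. The temporal-pattern extension proposed in the paper is not reproduced; data shapes and names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs_a, epochs_b, n_filters=3):
    """Common spatial patterns for two classes of EEG epochs
    (arrays of shape [trials, channels, samples])."""
    def mean_cov(epochs):
        covs = [np.cov(e) for e in epochs]     # channels x channels, one per trial
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(epochs_a), mean_cov(epochs_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)                   # extreme eigenvalues discriminate best
    picks = np.r_[order[:n_filters], order[-n_filters:]]
    return vecs[:, picks].T                    # rows = spatial filters

rng = np.random.default_rng(0)
target = rng.normal(size=(40, 16, 128))        # 40 target epochs, 16 channels, 128 samples
nontarget = rng.normal(size=(40, 16, 128))
W = csp_filters(target, nontarget)
features = np.log(np.var(W @ target[0], axis=1))   # classic log-variance CSP features
print(W.shape, features.shape)
```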
---
paper_title: Brain Activity-Based Image Classification From Rapid Serial Visual Presentation
paper_content:
We report the design and performance of a brain-computer interface (BCI) system for real-time single-trial binary classification of viewed images based on participant-specific dynamic brain response signatures in high-density (128-channel) electroencephalographic (EEG) data acquired during a rapid serial visual presentation (RSVP) task. Image clips were selected from a broad area image and presented in rapid succession (12/s) in 4.1-s bursts. Participants indicated by subsequent button press whether or not each burst of images included a target airplane feature. Image clip creation and search path selection were designed to maximize user comfort and maintain user awareness of spatial context. Independent component analysis (ICA) was used to extract a set of independent source time-courses and their minimally-redundant low-dimensional informative features in the time and time-frequency amplitude domains from 128-channel EEG data recorded during clip burst presentations in a training session. The naive Bayes fusion of two Fisher discriminant classifiers, computed from the 100 most discriminative time and time-frequency features, respectively, was used to estimate the likelihood that each clip contained a target feature. This estimator was applied online in a subsequent test session. Across eight training/test session pairs from seven participants, median area under the receiver operator characteristic curve, by tenfold cross validation, was 0.97 for within-session and 0.87 for between-session estimates, and was nearly as high (0.83) for targets presented in bursts that participants mistakenly reported to include no target features.
---
paper_title: Updating P300: An integrative theory of P3a and P3b
paper_content:
The empirical and theoretical development of the P300 event-related brain potential (ERP) is reviewed by considering factors that contribute to its amplitude, latency, and general characteristics. The neuropsychological origins of the P3a and P3b subcomponents are detailed, and how target/standard discrimination difficulty modulates scalp topography is discussed. The neural loci of P3a and P3b generation are outlined, and a cognitive model is proffered: P3a originates from stimulus-driven frontal attention mechanisms during task processing, whereas P3b originates from temporal-parietal activity associated with attention and appears related to subsequent memory processing. Neurotransmitter actions associating P3a to frontal/dopaminergic and P3b to parietal/norepinephrine pathways are highlighted. Neuroinhibition is suggested as an overarching theoretical mechanism for P300, which is elicited when stimulus detection engages memory operations.
---
paper_title: The steady-state visual evoked potential in vision research: A review
paper_content:
Periodic visual stimulation and analysis of the resulting steady-state visual evoked potentials were first introduced over 80 years ago as a means to study visual sensation and perception. From the first single-channel recording of responses to modulated light to the present use of sophisticated digital displays composed of complex visual stimuli and high-density recording arrays, steady-state methods have been applied in a broad range of scientific and applied settings. The purpose of this article is to describe the fundamental stimulation paradigms for steady-state visual evoked potentials and to illustrate these principles through research findings across a range of applications in vision science.
---
paper_title: Towards a Cure for BCI Illiteracy
paper_content:
Brain–Computer Interfaces (BCIs) allow a user to control a computer application by brain activity as acquired, e.g., by EEG. One of the biggest challenges in BCI research is to understand and solve the problem of “BCI Illiteracy”, which is that BCI control does not work for a non-negligible portion of users (estimated 15 to 30%). Here, we investigate the illiteracy problem in BCI systems which are based on the modulation of sensorimotor rhythms. In this paper, a sophisticated adaptation scheme is presented which guides the user from an initial subject-independent classifier that operates on simple features to a subject-optimized state-of-the-art classifier within one session while the user interacts the whole time with the same feedback application. While initial runs use supervised adaptation methods for robust co-adaptive learning of user and machine, final runs use unsupervised adaptation and therefore provide an unbiased measure of BCI performance. Using this approach, which does not involve any offline calibration measurement, good performance was obtained by good BCI participants (also one novice) after 3–6 min of adaptation. More importantly, the use of machine learning techniques allowed users who were unable to achieve successful feedback before to gain significant control over the BCI system. In particular, one participant had no peak of the sensory motor idle rhythm in the beginning of the experiment, but could develop such peak during the course of the session (and use voluntary modulation of its amplitude to control the feedback application).
---
paper_title: Overview of NTCIR-13 NAILS task
paper_content:
In this paper we review the NTCIR-13 NAILS (Neurally Augmented Image Labelling Strategies) pilot task at NTCIR-13. We describe a first-of-its-kind RSVP (Rapid Serial Visual Presentation) - EEG (Electroencephalography) dataset released as part of the NTCIR-13 participation conference and the results of the participating organisations who benchmarked machine-learning strategies against each other using the provided unlabelled test data.
---
paper_title: The Foundations of Cost-Sensitive Learning
paper_content:
This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-cost-sensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier.
1. Making decisions based on a cost matrix. Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectation is computed using the conditional probability of each class given the example. Mathematically, let the entry C(i,j) in a cost matrix be the cost of predicting class i when the true class is j. If i = j then the prediction is correct, while if i ≠ j the prediction is incorrect. The optimal prediction for an example x is the class i that minimizes L(x,i) = Σ_j P(j|x) C(i,j). (1) Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For each i, L(x,i) is a sum over the alternative possibilities for the true class of x. In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P(j|x) of each class j being the true class of x. For an example x, making the prediction i means acting as if i is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to act as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is most likely legitimate.
1.1 Cost matrix properties. A cost matrix always has the following structure when there are only two classes: predicting negative costs C(0,0) = c00 for an actual negative and C(0,1) = c01 for an actual positive; predicting positive costs C(1,0) = c10 for an actual negative and C(1,1) = c11 for an actual positive. Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, while columns correspond to actual classes, i.e. row/column = i/j = predicted/actual. In our notation, the cost of a false positive is c10 while the cost of a false negative is c01. Conceptually, the cost of labeling an example incorrectly should always be greater than the cost of labeling it correctly. Mathematically, it should always be the case that c10 > c00 and c01 > c11. We call these conditions the “reasonableness” conditions. Suppose that the first reasonableness condition is violated, so c00 ≥ c10 but still c01 > c11. In this case the optimal policy is to label all examples positive. Similarly, if c10 > c00 but c11 ≥ c01, it is optimal to label all examples negative. We say that row m dominates row n in a cost matrix if for all j, C(m,j) ≥ C(n,j). In this case the cost of predicting n is no greater than the cost of predicting m, regardless of what the true class is. So it is optimal never to predict m. As a special case, the optimal prediction is always n if row n is dominated by all other rows in a cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positive constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optimal are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline away from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonableness conditions can be transformed into a simpler matrix that always leads to the same decisions.
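A small sketch of the decision rule in equation (1): given class-probability estimates and a cost matrix, predict the class with the lowest expected cost, after checking the reasonableness conditions. The cost values and probabilities below are illustrative only.

```python
import numpy as np

# Cost matrix C[i, j]: cost of predicting class i when the true class is j
# (rows: predict negative/positive, columns: actual negative/positive).
C = np.array([[0.0, 10.0],    # c00, c01  (a false negative costs 10)
              [1.0,  0.0]])   # c10, c11  (a false positive costs 1)

# Reasonableness conditions: c10 > c00 and c01 > c11.
assert C[1, 0] > C[0, 0] and C[0, 1] > C[1, 1]

def cost_sensitive_predict(prob_pos):
    """Pick the class i minimizing L(x, i) = sum_j P(j|x) * C[i, j]."""
    p = np.array([1.0 - prob_pos, prob_pos])   # P(negative|x), P(positive|x)
    expected_cost = C @ p                      # expected cost of each prediction
    return int(np.argmin(expected_cost))

# With these costs it is optimal to predict "positive" even when the
# positive class is not the most probable one.
print(cost_sensitive_predict(0.2))   # -> 1 (predict positive)
print(cost_sensitive_predict(0.05))  # -> 0 (predict negative)
```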
---
paper_title: Combining EEG and eye tracking: identification, characterization, and correction of eye movement artifacts in electroencephalographic data
paper_content:
Eye movements introduce large artifacts to electroencephalographic recordings (EEG) and thus render data analysis difficult or even impossible. Trials contaminated by eye movement and blink artifacts have to be discarded, hence in standard EEG-paradigms subjects are required to fixate on the screen. To overcome this restriction, several correction methods including regression and blind source separation have been proposed. Yet, there is no automated standard procedure established. By simultaneously recording eye movements and 64-channel-EEG during a guided eye movement paradigm, we investigate and review the properties of eye movement artifacts, including corneo-retinal dipole changes, saccadic spike potentials and eyelid artifacts, and study their interrelations during different types of eye- and eyelid movements. In concordance with earlier studies our results confirm that these artifacts arise from different independent sources and that depending on electrode site, gaze direction, and choice of reference these sources contribute differently to the measured signal. We assess the respective implications for artifact correction methods and therefore compare the performance of two prominent approaches, namely linear regression and independent component analysis (ICA). We show and discuss that due to the independence of eye artifact sources, regression-based correction methods inevitably over- or under-correct individual artifact components, while ICA is in principle suited to address such mixtures of different types of artifacts. Finally, we propose an algorithm, which uses eye tracker information to objectively identify eye-artifact related ICA-components (ICs) in an automated manner. In the data presented here, the algorithm performed very similar to human experts when those were given both, the topographies of the ICs and their respective activations in a large amount of trials. Moreover it performed more reliable and almost twice as effective than human experts when those had to base their decision on IC topographies only. Furthermore, a receiver operating characteristic (ROC) analysis demonstrated an optimal balance of false positive and false negative at an area under curve (AUC) of more than 0.99. Removing the automatically detected ICs from the data resulted in removal or substantial suppression of ocular artifacts including microsaccadic spike potentials, while the relevant neural signal remained unaffected. In conclusion the present work aims at a better understanding of individual eye movement artifacts, their interrelations and the respective implications for eye artifact correction. Additionally, the proposed ICA-procedure provides a tool for optimized detection and correction of eye movement-related artifact components.
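A hedged sketch of the automated idea described above: decompose the EEG into independent components, flag components whose activations correlate strongly with the simultaneously recorded gaze signals, and reconstruct the EEG without them. scikit-learn's FastICA and the correlation threshold are stand-ins for the authors' pipeline, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_eye_ics(eeg, gaze, corr_threshold=0.6, n_components=20):
    """eeg: (samples, channels); gaze: (samples, 2) horizontal/vertical gaze.
    Returns cleaned EEG with eye-related independent components removed."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg)                     # (samples, components)
    bad = []
    for k in range(sources.shape[1]):
        # maximal absolute correlation of this IC with either gaze channel
        r = max(abs(np.corrcoef(sources[:, k], gaze[:, d])[0, 1]) for d in range(gaze.shape[1]))
        if r > corr_threshold:
            bad.append(k)
    sources[:, bad] = 0.0                                # zero out eye-related ICs
    return sources @ ica.mixing_.T + ica.mean_, bad      # back-project to channel space

rng = np.random.default_rng(0)
gaze = np.cumsum(rng.normal(size=(5000, 2)), axis=0)     # synthetic gaze traces
eeg = rng.normal(size=(5000, 32)) + gaze[:, :1] * 0.5    # EEG contaminated by horizontal gaze
cleaned, bad_ics = remove_eye_ics(eeg, gaze)
print(cleaned.shape, bad_ics)
```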
---
paper_title: P300 brain computer interface: current challenges and emerging trends
paper_content:
A brain-computer interface (BCI) enables communication without movement based on brain signals measured with electroencephalography (EEG). BCIs usually rely on one of three types of signals: the P300 and other components of the event-related potential (ERP), steady state visual evoked potential (SSVEP), or event related desynchronization (ERD). Although P300 BCIs were introduced over twenty years ago, the past few years have seen a strong increase in P300 BCI research. This closed-loop BCI approach relies on the P300 and other components of the ERP, based on an oddball paradigm presented to the subject. In this paper, we overview the current status of P300 BCI technology, and then discuss new directions: paradigms for eliciting P300s; signal processing methods; applications; and hybrid BCIs. We conclude that P300 BCIs are quite promising, as several emerging directions have not yet been fully explored and could lead to improvements in bit rate, reliability, usability, and flexibility.
---
paper_title: A framework for rapid visual image search using single-trial brain evoked responses
paper_content:
We report the design and performance of a brain computer interface for single-trial detection of viewed images based on human dynamic brain response signatures in 32-channel electroencephalography (EEG) acquired during a rapid serial visual presentation. The system explores the feasibility of speeding up image analysis by tapping into split-second perceptual judgments of humans. We present an incremental learning system with less memory storage and computational cost for single-trial event-related potential (ERP) detection, which is trained using cross-session data. We demonstrate the efficacy of the method on the task of target image detection. We apply linear and nonlinear support vector machines (SVMs) and a linear logistic classifier (LLC) for single-trial ERP detection using data collected from image analysts and naive subjects. For our data the detection performance of the nonlinear SVM is better than the linear SVM and the LLC. We also show that our ERP-based target detection system is five-fold faster than the traditional image viewing paradigm.
---
paper_title: Brain Activity-Based Image Classification From Rapid Serial Visual Presentation
paper_content:
We report the design and performance of a brain-computer interface (BCI) system for real-time single-trial binary classification of viewed images based on participant-specific dynamic brain response signatures in high-density (128-channel) electroencephalographic (EEG) data acquired during a rapid serial visual presentation (RSVP) task. Image clips were selected from a broad area image and presented in rapid succession (12/s) in 4.1-s bursts. Participants indicated by subsequent button press whether or not each burst of images included a target airplane feature. Image clip creation and search path selection were designed to maximize user comfort and maintain user awareness of spatial context. Independent component analysis (ICA) was used to extract a set of independent source time-courses and their minimally-redundant low-dimensional informative features in the time and time-frequency amplitude domains from 128-channel EEG data recorded during clip burst presentations in a training session. The naive Bayes fusion of two Fisher discriminant classifiers, computed from the 100 most discriminative time and time-frequency features, respectively, was used to estimate the likelihood that each clip contained a target feature. This estimator was applied online in a subsequent test session. Across eight training/test session pairs from seven participants, median area under the receiver operator characteristic curve, by tenfold cross validation, was 0.97 for within-session and 0.87 for between-session estimates, and was nearly as high (0.83) for targets presented in bursts that participants mistakenly reported to include no target features.
---
paper_title: The Balanced Accuracy and Its Posterior Distribution
paper_content:
Evaluating the performance of a classification algorithm critically requires a measure of the degree to which unseen examples have been identified with their correct class labels. In practice, generalizability is frequently estimated by averaging the accuracies obtained on individual cross-validation folds. This procedure, however, is problematic in two ways. First, it does not allow for the derivation of meaningful confidence intervals. Second, it leads to an optimistic estimate when a biased classifier is tested on an imbalanced dataset. We show that both problems can be overcome by replacing the conventional point estimate of accuracy by an estimate of the posterior distribution of the balanced accuracy.
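A minimal sketch of the point estimate discussed above: balanced accuracy as the mean of per-class recalls, which avoids the optimism of plain accuracy on imbalanced data such as RSVP streams where targets are rare. The posterior-distribution treatment proposed in the paper is not reproduced here.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; 0.5 is chance level for two classes."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Imbalanced example: 90 non-targets, 10 targets; a classifier that always
# says "non-target" gets 90% plain accuracy but only 50% balanced accuracy.
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros_like(y_true)
print((y_pred == y_true).mean(), balanced_accuracy(y_true, y_pred))
```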
---
paper_title: A Tutorial on Principal Component Analysis
paper_content:
Principal component analysis (PCA) is a mainstay of modern data analysis - a black box that is widely used but (sometimes) poorly understood. The goal of this paper is to dispel the magic behind this black box. This manuscript focuses on building a solid intuition for how and why principal component analysis works. This manuscript crystallizes this knowledge by deriving from simple intuitions, the mathematics behind PCA. This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how and the why of applying this technique.
---
paper_title: Cortically coupled computer vision for rapid image search
paper_content:
We describe a real-time electroencephalography (EEG)-based brain-computer interface system for triaging imagery presented using rapid serial visual presentation. A target image in a sequence of nontarget distractor images elicits in the EEG a stereotypical spatiotemporal response, which can be detected. A pattern classifier uses this response to reprioritize the image sequence, placing detected targets in the front of an image stack. We use single-trial analysis based on linear discrimination to recover spatial components that reflect differences in EEG activity evoked by target versus nontarget images. We find an optimal set of spatial weights for 59 EEG sensors within a sliding 50-ms time window. Using this simple classifier allows us to process EEG in real time. The detection accuracy across five subjects is on average 92%, i.e., in a sequence of 2500 images, resorting images based on detector output results in 92% of target images being moved from a random position in the sequence to one of the first 250 images (first 10% of the sequence). The approach leverages the highly robust and invariant object recognition capabilities of the human visual system, using single-trial EEG analysis to efficiently detect neural signatures correlated with the recognition event.
---
paper_title: Common Spatial Pattern Method for Channel Selection in Motor Imagery Based Brain-Computer Interface
paper_content:
A brain-computer interface (BCI) based on motor imagery (MI) translates the subject's motor intention into a control signal through classifying the electroencephalogram (EEG) patterns of different imagination tasks, e.g. hand and foot movements. Characteristic EEG spatial patterns make MI tasks substantially discriminable. Multi-channel EEGs are usually necessary for spatial pattern identification and therefore MI-based BCI is still in the stage of laboratory demonstration, to some extent, due to the need for constantly troublesome recording preparation. This paper presents a method for channel reduction in MI-based BCI. The common spatial pattern (CSP) method was employed to analyze spatial patterns of imagined hand and foot movements. Significant channels were selected by searching the maxima of spatial pattern vectors in scalp mappings. A classification algorithm was developed by means of combining linear discriminant analysis towards event-related desynchronization (ERD) and readiness potential (RP). The classification accuracies with four optimal channels were 93.45% and 91.88% for two subjects.
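The channel-selection idea can be sketched as follows, assuming the CSP spatial patterns (one column per filter, ordered by eigenvalue) have already been computed, e.g. with a CSP routine such as the one sketched a little further below; the simple scoring rule here is an illustrative stand-in for the scalp-map search described in the abstract.

```python
import numpy as np

def select_channels(patterns, n_channels=4):
    """Pick channels carrying the largest absolute weights in the two most
    discriminative CSP spatial patterns (first and last columns).

    patterns : (n_channels_total, n_filters) spatial patterns from a CSP
               decomposition, columns ordered by eigenvalue.
    """
    discriminative = np.abs(patterns[:, [0, -1]])   # two extreme patterns
    score = discriminative.max(axis=1)              # per-channel relevance
    return np.argsort(score)[::-1][:n_channels]     # best channels first
```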
---
paper_title: Comparison of beamformers for EEG source signal reconstruction
paper_content:
Recently, several new beamformers have been introduced for reconstruction and localization of neural sources from EEG and MEG. Although studies have compared the accuracy of beamformers for localization of strong sources in the brain, a comparison of new and conventional beamformers for time-course reconstruction of a desired source has not been previously undertaken. In this study, 8 beamformers were examined with respect to several parameters, including variations in depth, orientation, magnitude, and frequency of the simulated source to determine their (i) effectiveness at time-course reconstruction of the sources, and (ii) stability of their performances with respect to the input changes. The spatial and directional pass-bands of the beamformers were estimated via simulated and real EEG sources to determine spatial resolution. White-noise spatial maps of the beamformers were calculated to show which beamformers have a location bias. Simulated EEG data were produced by projection via forward head modelling of simulated sources onto scalp electrodes, then superimposed on real background EEG. Real EEG was recorded from a patient with essential tremor and deep brain implanted electrodes. Gain – the ratio of SNR of the reconstructed time-course to the input SNR – was the primary measure of performance of the beamformers. Overall, minimum-variance beamformers had higher Gains and superior spatial resolution to those of the minimum-norm beamformers, although their performance was more sensitive to changes in magnitude, depth, and frequency of the simulated source. White-noise spatial maps showed that several, but not all, beamformers have an undesirable location bias.
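As an illustration of the minimum-variance family discussed above, a unit-gain LCMV beamformer for a single source with fixed orientation can be written as below; the diagonal regularisation and variable names are our own choices, not taken from the paper.

```python
import numpy as np

def lcmv_weights(leadfield, cov, reg=1e-5):
    """Unit-gain LCMV beamformer weights for one source (fixed orientation).

    leadfield : (n_channels,) forward-model column for the source of interest
    cov       : (n_channels, n_channels) sensor covariance of the data
    """
    n = cov.shape[0]
    cov_inv = np.linalg.inv(cov + reg * np.trace(cov) / n * np.eye(n))
    w = cov_inv @ leadfield / (leadfield @ cov_inv @ leadfield)
    return w   # estimated source time-course: w @ data (channels x samples)
```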
---
paper_title: Optimizing Spatial filters for Robust EEG Single-Trial Analysis
paper_content:
Due to volume conduction, multichannel electroencephalogram (EEG) recordings give a rather blurred image of brain activity. Therefore spatial filters are extremely useful in single-trial analysis in order to improve the signal-to-noise ratio. There are powerful methods from machine learning and signal processing that permit the optimization of spatio-temporal filters for each subject in a data-dependent fashion beyond the fixed filters based on the sensor geometry, e.g., Laplacians. Here we elucidate the theoretical background of the common spatial pattern (CSP) algorithm, a popular method in brain-computer interface (BCI) research. Apart from reviewing several variants of the basic algorithm, we reveal tricks of the trade for achieving a powerful CSP performance, briefly elaborate on theoretical aspects of CSP, and demonstrate the application of CSP-type preprocessing in our studies of the Berlin BCI (BBCI) project.
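A minimal sketch of the basic CSP computation described above, using the generalised eigenvalue formulation; the helper names and the simple eigenvalue-based filter selection are ours.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs_a, epochs_b, n_filters=6):
    """Common spatial pattern filters from two sets of epochs
    (each epoch: n_channels x n_samples)."""
    cov_a = np.mean([np.cov(e) for e in epochs_a], axis=0)
    cov_b = np.mean([np.cov(e) for e in epochs_b], axis=0)
    # Generalised eigenvalue problem: cov_a w = lambda (cov_a + cov_b) w
    evals, evecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(evals)
    pick = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
    return evecs[:, pick].T          # (n_filters, n_channels) spatial filters

def csp_log_variance(filters, epochs):
    """The usual CSP feature: normalised log-variance of the filtered signals."""
    feats = []
    for e in epochs:
        var = (filters @ e).var(axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)
```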
---
paper_title: Common Spatio-Temporal Pattern for Single-Trial Detection of Event-Related Potential in Rapid Serial Visual Presentation Triage
paper_content:
Searching for target images in large volume imagery is a challenging problem and the rapid serial visual presentation (RSVP) triage is potentially a promising solution to the problem. RSVP triage is essentially a cortically-coupled computer vision technique that relies on single-trial detection of event-related potentials (ERP). In RSVP triage, images are shown to a subject in a rapid serial sequence. When a target image is seen by the subject, unique ERP characterized by P300 are elicited. Thus, in RSVP triage, accurate detection of such distinct ERP allows for fast searching of target images in large volume imagery. The accuracy of the distinct ERP detection in RSVP triage depends on the feature extraction method, for which the common spatial pattern analysis (CSP) was used with limited success. This paper presents a novel feature extraction method, termed common spatio-temporal pattern (CSTP), which is critical for robust single-trial detection of ERP. Unlike the conventional CSP, whereby only spatial patterns of ERP are considered, the present proposed method exploits spatial and temporal patterns of ERP separately, providing complementary spatial and temporal features for highly accurate single-trial ERP detection. Numerical study using data collected from 20 subjects in RSVP triage experiments demonstrates that the proposed method offers significant performance improvement over the conventional CSP method (corrected p-value < 0.05, Pearson r = 0.64) and other competing methods in the literature. This paper further shows that the main idea of CSTP can be easily applied to other methods.
---
paper_title: xDAWN Algorithm to Enhance Evoked Potentials: Application to Brain–Computer Interface
paper_content:
A brain-computer interface (BCI) is a communication system that allows to control a computer or any other device thanks to the brain activity. The BCI described in this paper is based on the P300 speller BCI paradigm introduced by Farwell and Donchin. An unsupervised algorithm is proposed to enhance P300 evoked potentials by estimating spatial filters; the raw EEG signals are then projected into the estimated signal subspace. Data recorded on three subjects were used to evaluate the proposed method. The results, which are presented using a Bayesian linear discriminant analysis classifier, show that the proposed method is efficient and accurate.
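The following is a simplified, hedged sketch of an xDAWN-style filter estimation: it maximises the ratio between the covariance of the average target response and the covariance of the raw epochs via a generalised eigenvalue problem, rather than reproducing the exact least-squares estimation of the original algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def xdawn_like_filters(epochs_target, epochs_all, n_filters=4, reg=1e-6):
    """Simplified xDAWN-style spatial filters (illustrative, not the exact algorithm).

    epochs_target : (n_target_trials, n_channels, n_samples) target epochs
    epochs_all    : (n_trials, n_channels, n_samples) all raw epochs
    """
    evoked = epochs_target.mean(axis=0)                        # average target ERP
    signal_cov = np.cov(evoked)
    noise_cov = np.mean([np.cov(e) for e in epochs_all], axis=0)
    n = noise_cov.shape[0]
    noise_cov = noise_cov + reg * np.trace(noise_cov) / n * np.eye(n)
    evals, evecs = eigh(signal_cov, noise_cov)                  # generalised eigenproblem
    return evecs[:, np.argsort(evals)[::-1][:n_filters]].T      # (n_filters, n_channels)
```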
---
paper_title: The LDA beamformer: Optimal estimation of ERP source time series using linear discriminant analysis.
paper_content:
We introduce a novel beamforming approach for estimating event-related potential (ERP) source time series based on regularized linear discriminant analysis (LDA). The optimization problems in LDA and linearly-constrained minimum-variance (LCMV) beamformers are formally equivalent. The approaches differ in that, in LCMV beamformers, the spatial patterns are derived from a source model, whereas in an LDA beamformer the spatial patterns are derived directly from the data (i.e., the ERP peak). Using a formal proof and MEG simulations, we show that the LDA beamformer is robust to correlated sources and offers a higher signal-to-noise ratio than the LCMV beamformer and PCA. As an application, we use EEG data from an oddball experiment to show how the LDA beamformer can be harnessed to detect single-trial ERP latencies and estimate connectivity between ERP sources. Concluding, the LDA beamformer optimally reconstructs ERP sources by maximizing the ERP signal-to-noise ratio. Hence, it is a highly suited tool for analyzing ERP source time series, particularly in EEG/MEG studies wherein a source model is not available.
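The core computation reduces to w = C^{-1} p / (p^T C^{-1} p), i.e. a spatial filter with unit gain on the data-derived ERP pattern p; a minimal sketch (with an added ridge regularisation of our own) follows.

```python
import numpy as np

def lda_beamformer(pattern, cov, reg=1e-3):
    """LDA beamformer: spatial filter with unit gain on a data-derived ERP pattern.

    pattern : (n_channels,) spatial pattern of the ERP component (e.g. peak topography)
    cov     : (n_channels, n_channels) covariance of the epoched EEG
    """
    n = cov.shape[0]
    cov_reg = cov + reg * np.trace(cov) / n * np.eye(n)
    w = np.linalg.solve(cov_reg, pattern)
    return w / (pattern @ w)        # enforce w^T pattern = 1
```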
---
paper_title: Single-Trial Classification of Event-Related Potentials in Rapid Serial Visual Presentation Tasks Using Supervised Spatial Filtering
paper_content:
Accurate detection of single-trial event-related potentials (ERPs) in the electroencephalogram (EEG) is a difficult problem that requires efficient signal processing and machine learning techniques. Supervised spatial filtering methods that enhance the discriminative information in EEG data are commonly used to improve single-trial ERP detection. We propose a convolutional neural network (CNN) with a layer dedicated to spatial filtering for the detection of ERPs and with training based on the maximization of the area under the receiver operating characteristic curve (AUC). The CNN is compared with three common classifiers: 1) Bayesian linear discriminant analysis; 2) multilayer perceptron (MLP); and 3) support vector machines. Prior to classification, the data were spatially filtered with xDAWN (for the maximization of the signal-to-signal-plus-noise ratio), common spatial pattern, or not spatially filtered. The 12 analytical techniques were tested on EEG data recorded in three rapid serial visual presentation experiments that required the observer to discriminate rare target stimuli from frequent nontarget stimuli. Classification performance discriminating targets from nontargets depended on both the spatial filtering method and the classifier. In addition, the nonlinear classifier MLP outperformed the linear methods. Finally, training based AUC maximization provided better performance than training based on the minimization of the mean square error. The results support the conclusion that the choice of the systems architecture is critical and both spatial filtering and classification must be considered together.
---
paper_title: Optimum principal components for spatial filtering of EEG to detect imaginary movement by coherence
paper_content:
Several techniques have been used to improve the signal-to-noise ratio to increase the detection rate of event-related potentials (ERPs). This work investigates the application of spatial filtering based on principal component analysis (PCA) to detect the ERP due to left-hand index finger movement imagination. The EEG signals were recorded from central derivations (C4, C2, Cz, C1 and C3), positioned according to the 10–10 International System. The optimal spatial filter was found by using the first principal component, and the ERP detection was obtained by the magnitude-squared coherence technique. The best detection rate using the original signal (without filtering) was obtained at the C2 derivation, with 54.73% for a significance level of 5%. For the same significance level, the detection rate of the filtered signal was drastically improved to 96.84%. Results suggest that spatial filtering using PCA might be a very useful tool in assisting ERP detection for movement imagination in brain-machine interface applications.
---
paper_title: Dimensionality Reduction and Channel Selection of Motor Imagery Electroencephalographic Data
paper_content:
The performance of spatial filters based on independent components analysis (ICA) was evaluated by employing principal component analysis (PCA) preprocessing for dimensional reduction. The PCA preprocessing was not found to be a suitable method that could retain motor imagery information in a smaller set of components. In contrast, 6 ICA components selected on the basis of visual inspection performed comparably (61.9%) to the full range of 22 components (63.9%). An automated selection of ICA components based on a variance criterion was also carried out. Only 8 components chosen this way performed better (63.1%) than visually selected components. A similar analysis on the reduced set of electrodes over mid-central and centroparietal regions of the brain revealed that common spatial patterns (CSPs) and Infomax were able to detect motor imagery activity with a satisfactory accuracy.
---
paper_title: Spatiotemporal Representations of Rapid Visual Target Detection: A Single-Trial EEG Classification Algorithm
paper_content:
Brain computer interface applications, developed for both healthy and clinical populations, critically depend on decoding brain activity in single trials. The goal of the present study was to detect distinctive spatiotemporal brain patterns within a set of event-related responses. We introduce a novel classification algorithm, the spatially weighted FLD-PCA (SWFP), which is based on a two-step linear classification of event-related responses, using a Fisher linear discriminant (FLD) classifier and principal component analysis (PCA) for dimensionality reduction. As a benchmark algorithm, we consider the hierarchical discriminant component analysis (HDCA), introduced by Parra et al. (2007). We also consider a modified version of the HDCA, namely the hierarchical discriminant principal component analysis algorithm (HDPCA). We compare single-trial classification accuracies of all the three algorithms, each applied to detect target images within a rapid serial visual presentation (RSVP, 10 Hz) of images from five different object categories, based on single-trial brain responses. We find a systematic superiority of our classification algorithm in the tested paradigm. Additionally, HDPCA significantly increases classification accuracies compared to the HDCA. Finally, we show that presenting several repetitions of the same image exemplars improves accuracy, and thus may be important in cases where high accuracy is crucial.
---
paper_title: Removing Electroencephalographic Artifacts by Blind Source Separation
paper_content:
Eye movements, eye blinks, cardiac signals, muscle noise, and line noise present serious problems for electroencephalographic (EEG) interpretation and analysis when rejecting contaminated EEG segments results in an unacceptable data loss. Many methods have been proposed to remove artifacts from EEG recordings, especially those arising from eye movements and blinks. Often regression in the time or frequency domain is performed on parallel EEG and electrooculographic (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels. Because EEG and ocular activity mix bidirectionally, regressing out eye artifacts inevitably involves subtracting relevant EEG signals from each record as well. Regression methods become even more problematic when a good regressing channel is not available for each artifact source, as in the case of muscle artifacts. Use of principal component analysis (PCA) has been proposed to remove eye artifacts from multichannel EEG. However, PCA cannot completely separate eye artifacts from brain signals, especially when they have comparable amplitudes. Here, we propose a new and generally applicable method for removing a wide variety of artifacts from EEG records based on blind source separation by independent component analysis (ICA). Our results on EEG data collected from normal and autistic subjects show that ICA can effectively detect, separate, and remove contamination from a wide variety of artifactual sources in EEG records with results comparing favorably with those obtained using regression and PCA methods. ICA can also be used to analyze blink-related brain activity.
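A hedged sketch of the general idea, using scikit-learn's FastICA and a simple correlation-with-EOG criterion to flag artifactual components; the threshold and component count are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_eog_components(eeg, eog, threshold=0.7, n_components=20):
    """Illustrative ICA-based artifact removal.

    eeg : (n_channels, n_samples) EEG data (n_channels >= n_components)
    eog : (n_samples,) simultaneously recorded EOG reference channel
    Components whose time-courses correlate strongly with the EOG are zeroed
    out before back-projection to the sensor space.
    """
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg.T)                   # (n_samples, n_components)
    corr = np.array([abs(np.corrcoef(s, eog)[0, 1]) for s in sources.T])
    sources[:, corr > threshold] = 0.0                   # drop artifactual components
    return ica.inverse_transform(sources).T              # cleaned EEG
```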
---
paper_title: Characterization and Robust Classification of EEG Signal from Image RSVP Events with Independent Time-Frequency Features
paper_content:
This paper considers the problem of automatic characterization and detection of target images in a rapid serial visual presentation (RSVP) task based on EEG data. A novel method that aims to identify single-trial event-related potentials (ERPs) in time-frequency is proposed, and a robust classifier with feature clustering is developed to better utilize the correlated ERP features. The method is applied to EEG recordings of an RSVP experiment with multiple sessions and subjects. The results show that the target image events are mainly characterized by 3 distinct patterns in the time-frequency domain, i.e., a theta band (4.3 Hz) power boosting 300–700 ms after the target image onset, an alpha band (12 Hz) power boosting 500–1000 ms after the stimulus onset, and a delta band (2 Hz) power boosting after 500 ms. The most discriminant time-frequency features are power boosting and are relatively consistent among multiple sessions and subjects. Since the original discriminant time-frequency features are highly correlated, we constructed the uncorrelated features using hierarchical clustering for better classification of target and non-target images. With feature clustering, performance (area under ROC) improved from 0.85 to 0.89 on within-session tests, and from 0.76 to 0.84 on cross-subject tests. The constructed uncorrelated features were more robust than the original discriminant features and corresponded to a number of local regions on the time-frequency plane.
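A loose sketch of the pipeline idea, spectrogram-based log-power features followed by hierarchical clustering of correlated features, is given below; it uses a plain short-time spectrogram rather than the paper's exact time-frequency decomposition, and in practice a preselection of discriminative features keeps the correlation matrix tractable.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.cluster.hierarchy import linkage, fcluster

def tf_power_features(epochs, fs=256):
    """Log time-frequency power features per epoch (epoch: n_channels x n_samples)."""
    feats = []
    for e in epochs:
        f, t, S = spectrogram(e, fs=fs, nperseg=64, noverlap=48)
        feats.append(np.log(S + 1e-12).reshape(-1))   # flatten channels x freq x time
    return np.array(feats)                            # (n_trials, n_features)

def cluster_features(X, n_clusters=50):
    """Group correlated features and average within clusters (illustrative)."""
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)
    # condensed upper-triangle distance vector for hierarchical linkage
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method='average')
    labels = fcluster(Z, t=n_clusters, criterion='maxclust')
    return np.column_stack([X[:, labels == k].mean(axis=1)
                            for k in np.unique(labels)])
```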
---
paper_title: Analyzing Neural Time Series Data: Theory and Practice
paper_content:
This book offers a comprehensive guide to the theory and practice of analyzing electrical brain signals. It explains the conceptual, mathematical, and implementational (via Matlab programming) aspects of time-, time-frequency- and synchronization-based analyses of magnetoencephalography (MEG), electroencephalography (EEG), and local field potential (LFP) recordings from humans and nonhuman animals. It is the only book on the topic that covers both the theoretical background and the implementation in language that can be understood by readers without extensive formal training in mathematics, including cognitive scientists, neuroscientists, and psychologists. Readers who go through the book chapter by chapter and implement the examples in Matlab will develop an understanding of why and how analyses are performed, how to interpret results, what the methodological issues are, and how to perform single-subject-level and group-level analyses. Researchers who are familiar with using automated programs to perform advanced analyses will learn what happens when they click the "analyze now" button. The book provides sample data and downloadable Matlab code. Each of the 38 chapters covers one analysis topic, and these topics progress from simple to advanced. Most chapters conclude with exercises that further develop the material covered in the chapter. Many of the methods presented (including convolution, the Fourier transform, and Euler's formula) are fundamental and form the groundwork for other advanced data analysis methods. Readers who master the methods in the book will be well prepared to learn other approaches.
---
paper_title: High-throughput image search via single-trial event detection in a rapid serial visual presentation task
paper_content:
We describe a method, using linear discrimination, for detecting single-trial EEG signatures of object recognition events in a rapid serial visual presentation (RSVP) task. We record EEG using a high spatial density array (87 electrodes) during the rapid presentation (50-200 msec per image) of natural images. Subjects were instructed to release a button when they recognized a target image (an image with a person/people). Trials consisted of 100 images each, with a 50% chance of a single target being in a trial. Subject EEG was analyzed on a single-trial basis with an optimal spatial linear discriminator learned at multiple time windows after the presentation of an image. Linear discrimination enables the estimation of a forward model and thus allows for an approximate localization of the discriminating activity. Results show multiple loci for discriminating activity (e.g. motor and visual). Using these detected EEG signatures, we show that in many cases we can detect targets more accurately than the overt response (button release) and that such signatures can be used to prioritize images for high-throughput search.
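A minimal sketch of the sliding-window linear-discrimination idea, here with scikit-learn's shrinkage LDA standing in for the original discriminator; the window length and within-window averaging are illustrative choices.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_discriminators(epochs, labels, fs=250, win_ms=50):
    """Train one spatial linear discriminator per post-stimulus time window.

    epochs : (n_trials, n_channels, n_samples), labels : (n_trials,) 0/1
    Returns a list of (window_start_sample, fitted LDA) pairs.
    """
    win = int(round(win_ms * fs / 1000.0))
    models = []
    for start in range(0, epochs.shape[2] - win + 1, win):
        # average within the window -> one spatial vector per trial
        X = epochs[:, :, start:start + win].mean(axis=2)
        clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
        models.append((start, clf.fit(X, labels)))
    return models
```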
---
paper_title: A framework for rapid visual image search using single-trial brain evoked responses
paper_content:
We report the design and performance of a brain computer interface for single-trial detection of viewed images based on human dynamic brain response signatures in 32-channel electroencephalography (EEG) acquired during a rapid serial visual presentation. The system explores the feasibility of speeding up image analysis by tapping into split-second perceptual judgments of humans. We present an incremental learning system with less memory storage and computational cost for single-trial event-related potential (ERP) detection, which is trained using cross-session data. We demonstrate the efficacy of the method on the task of target image detection. We apply linear and nonlinear support vector machines (SVMs) and a linear logistic classifier (LLC) for single-trial ERP detection using data collected from image analysts and naive subjects. For our data the detection performance of the nonlinear SVM is better than the linear SVM and the LLC. We also show that our ERP-based target detection system is five-fold faster than the traditional image viewing paradigm.
---
paper_title: Generative Adversarial Networks
paper_content:
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
---
paper_title: Regularized discriminant analysis
paper_content:
Linear and quadratic discriminant analysis are considered in the small-sample, high-dimensional setting. Alternatives to the usual maximum likelihood (plug-in) estimates for the covariance matrices are proposed. These alternatives are characterized by two parameters, the values of which are customized to individual situations by jointly minimizing a sample-based estimate of future misclassification risk. Computationally fast implementations are presented, and the efficacy of the approach is examined through simulation studies and application to data. These studies indicate that in many circumstances dramatic gains in classification accuracy can be achieved.
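A reduced sketch of the shrinkage idea for the two-class case is shown below; the fixed shrinkage parameter is ours, whereas the paper selects its two regularisation parameters by minimising an estimate of misclassification risk.

```python
import numpy as np

def shrinkage_lda(X, y, lam=0.1):
    """Two-class LDA with ridge-style shrinkage of the pooled covariance.

    X : (n_trials, n_features), y : (n_trials,) with values 0/1
    Returns weights w and bias b; decision rule: sign(X @ w + b).
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))
    # shrink towards a scaled identity to keep S invertible for few trials
    S = (1 - lam) * S + lam * np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    w = np.linalg.solve(S, m1 - m0)
    b = -0.5 * w @ (m0 + m1)
    return w, b
```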
---
paper_title: An efficient P300-based brain-computer interface for disabled subjects
paper_content:
A brain-computer interface (BCI) is a communication system that translates brain-activity into commands for a computer or other devices. In other words, a BCI allows users to act on their environment by using only brain-activity, without using peripheral nerves and muscles. In this paper, we present a BCI that achieves high classification accuracy and high bitrates for both disabled and able-bodied subjects. The system is based on the P300 evoked potential and is tested with five severely disabled and four able-bodied subjects. For four of the disabled subjects classification accuracies of 100% are obtained. The bitrates obtained for the disabled subjects range between 10 and 25 bits/min. The effect of different electrode configurations and machine learning algorithms on classification accuracy is tested. Further factors that are possibly important for obtaining good classification accuracy in P300-based BCI systems for disabled subjects are discussed.
---
paper_title: Boosting linear logistic regression for single trial ERP detection in rapid serial visual presentation tasks.
paper_content:
In this paper, we apply the AdaBoost algorithm to the linear logistic regression model to detect electroencephalography (EEG) signatures, called evoked response potentials, of visual recognition events in a single trial. In the experiments, a large number of images were displayed at a very high presentation rate, named rapid serial visual presentation. The EEG was recorded using 32 electrodes during the rapid image presentation. Subjects were instructed to click the mouse when they recognized a target image. The results demonstrated that the boosting method improves the detection performance compared with the base classifier by approximately 3% as measured by area under the ROC curve.
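A hedged sketch of boosting a linear logistic base classifier with scikit-learn; the hyperparameters are illustrative, and recent scikit-learn versions use the `estimator` argument while older ones use `base_estimator`.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression

# X: (n_trials, n_features) single-trial ERP feature vectors, y: 0/1 labels.
base = LogisticRegression(max_iter=1000)
model = AdaBoostClassifier(estimator=base,       # base_estimator= in older scikit-learn
                           n_estimators=50,
                           learning_rate=0.5)
# model.fit(X_train, y_train)
# scores = model.decision_function(X_test)       # continuous scores for ROC analysis
```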
---
paper_title: Optimising the Number of Channels in EEG-Augmented Image Search
paper_content:
Recent proof-of-concept research has appeared showing the applicability of Brain Computer Interface (BCI) technology in combination with the human visual system, to classify images. The basic premise here is that images that arouse a participant's attention generate a detectable response in their brainwaves, measurable using an electroencephalograph (EEG). When a participant is given a target class of images to search for, each image belonging to that target class presented within a stream of images should elicit a distinctly detectable neural response. Previous work in this domain has primarily focused on validating the technique on proof of concept image sets that demonstrate desired properties and on examining the capabilities of the technique at various image presentation speeds. In this paper we expand on this by examining the capability of the technique when using a reduced number of channels in the EEG, and its impact on the detection accuracy.
---
paper_title: Exploring EEG for Object Detection and Retrieval
paper_content:
This paper explores the potential for using Brain Computer Interfaces (BCI) as a relevance feedback mechanism in content-based image retrieval. Several experiments are performed using a rapid serial visual presentation (RSVP) of images at different rates (5Hz and 10Hz) on 8 users with different degrees of familiarization with BCI and the dataset. We compare the feedback from the BCI and mouse-based interfaces in a subset of TRECVid images, finding that, when users have limited time to annotate the images, both interfaces are comparable in performance. Comparing our best users in a retrieval task, we found that EEG-based relevance feedback can outperform mouse-based feedback.
---
paper_title: An overview of gradient descent optimization algorithms
paper_content:
Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent.
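Two of the variants surveyed, SGD with momentum and Adam, reduce to short update rules; the sketch below writes them as single-step functions with illustrative hyperparameters.

```python
import numpy as np

def sgd_momentum(grad, w, state, lr=0.01, beta=0.9):
    """One SGD-with-momentum step; grad is the gradient evaluated at w."""
    state['v'] = beta * state.get('v', 0.0) + grad
    return w - lr * state['v'], state

def adam(grad, w, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step, following the standard bias-corrected update rule."""
    t = state.get('t', 0) + 1
    m = b1 * state.get('m', 0.0) + (1 - b1) * grad
    v = b2 * state.get('v', 0.0) + (1 - b2) * grad ** 2
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    state.update(t=t, m=m, v=v)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), state
```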
---
paper_title: A review of classification algorithms for EEG-based brain-computer interfaces
paper_content:
In this paper we review classification algorithms used to design brain–computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.
---
paper_title: Neural Networks For Pattern Recognition
paper_content:
Topics covered include: highlights of adaptive resonance theory; classifying spatial patterns; classifying temporal patterns; multilayer networks and the use of attention; representing synonyms; and specific architectures that use presynaptic inhibition. Appendices cover: feedforward circuits for normalization and noise suppression; network equations used in the simulations of chapter 3; and network equations used in the simulations of chapter 4.
---
paper_title: Statistical Pattern Recognition: A Review
paper_content:
The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.
---
paper_title: Multilayer perceptrons for the classification of brain computer interface data
paper_content:
Fast and simple classification methods for brain computer interfacing (BCI) signals are indispensable for the design of successful BCI applications. This paper presents a computationally simple algorithm to classify BCI data into left and right finger movements of the subjects. A two-class output multilayer perceptron (MLP) performs the classification. Our approach is attractive for providing an optimal combination of 1) computational efficiency, 2) classification accuracy (training: 100% and testing: 64%) and 3) minimal feature extraction (two channels out of a 28-channel EEG trial). The channels selected to be extracted (C3 and C4) not only greatly reduce dimensionality, but also refer to the central parts of the brain that decide left-right cognition, greatly enhancing the classification task. The results obtained are promising, and hold much potential for further investigation.
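An illustrative two-class MLP pipeline on features extracted from channels C3 and C4 (e.g. band power per channel); the hidden-layer size and other settings are our own choices, not those of the paper.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_trials, n_features) features from channels C3 and C4, y: 0/1 labels.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(20,), activation='tanh',
                  max_iter=2000, random_state=0),
)
# clf.fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```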
---
paper_title: The Segmentation of the Left Ventricle of the Heart From Ultrasound Data Using Deep Learning Architectures and Derivative-Based Search Methods
paper_content:
We present a new supervised learning model designed for the automatic segmentation of the left ventricle (LV) of the heart in ultrasound images. We address the following problems inherent to supervised learning models: 1) the need of a large set of training images; 2) robustness to imaging conditions not present in the training data; and 3) complex search process. The innovations of our approach reside in a formulation that decouples the rigid and nonrigid detections, deep learning methods that model the appearance of the LV, and efficient derivative-based search algorithms. The functionality of our approach is evaluated using a data set of diseased cases containing 400 annotated images (from 12 sequences) and another data set of normal cases comprising 80 annotated images (from two sequences), where both sets present long axis views of the LV. Using several error measures to compute the degree of similarity between the manual and automatic segmentations, we show that our method not only has high sensitivity and specificity but also presents variations with respect to a gold standard (computed from the manual annotations of two experts) within interuser variability on a subset of the diseased cases. We also compare the segmentations produced by our approach and by two state-of-the-art LV segmentation models on the data set of normal cases, and the results show that our approach produces segmentations that are comparable to these two approaches using only 20 training images and increasing the training set to 400 images causes our approach to be generally more accurate. Finally, we show that efficient search methods reduce up to tenfold the complexity of the method while still producing competitive segmentations. In the future, we plan to include a dynamical model to improve the performance of the algorithm, to use semisupervised learning methods to reduce even more the dependence on rich and large training sets, and to design a shape model less dependent on the training set.
---
paper_title: Recurrent convolutional neural network for object recognition
paper_content:
In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition.
---
paper_title: Convolutional Neural Network Architectures for Matching Natural Language Sentences
paper_content:
Semantic matching is of central importance to many natural language tasks. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model and its superiority to competitor models.
---
paper_title: EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces
paper_content:
Brain computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional Neural Networks (CNNs), which have been used in computer vision and speech recognition, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible. In this work we introduce EEGNet, a compact convolutional network for EEG-based BCIs. We introduce the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI. We compare EEGNet to current state-of-the-art approaches across four BCI paradigms: P300 visual-evoked potentials, error-related negativity responses (ERN), movement-related cortical potentials (MRCP), and sensory motor rhythms (SMR). We show that EEGNet generalizes across paradigms better than the reference algorithms when only limited training data is available. We demonstrate three different approaches to visualize the contents of a trained EEGNet model to enable interpretation of the learned features. Our results suggest that EEGNet is robust enough to learn a wide variety of interpretable features over a range of BCI tasks, suggesting that the observed performances were not due to artifact or noise sources in the data.
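A compact EEGNet-style model can be sketched in Keras as follows; the layer sizes, pooling factors, and dropout rate are illustrative and do not reproduce the exact published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def compact_eeg_cnn(n_channels=64, n_samples=128, n_classes=2):
    """EEGNet-style compact CNN (illustrative sizes): temporal convolution,
    depthwise spatial convolution across channels, then a separable convolution."""
    inp = layers.Input(shape=(n_channels, n_samples, 1))
    x = layers.Conv2D(8, (1, 32), padding='same', use_bias=False)(inp)   # temporal filters
    x = layers.BatchNormalization()(x)
    x = layers.DepthwiseConv2D((n_channels, 1), depth_multiplier=2,
                               use_bias=False)(x)                        # spatial filters
    x = layers.BatchNormalization()(x)
    x = layers.Activation('elu')(x)
    x = layers.AveragePooling2D((1, 4))(x)
    x = layers.Dropout(0.5)(x)
    x = layers.SeparableConv2D(16, (1, 16), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('elu')(x)
    x = layers.AveragePooling2D((1, 8))(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Flatten()(x)
    out = layers.Dense(n_classes, activation='softmax')(x)
    model = models.Model(inp, out)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```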
---
paper_title: Face Recognition : A Convolutional Neural Network Approach
paper_content:
We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
---
paper_title: Single-trial EEG RSVP classification using convolutional neural networks
paper_content:
Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
---
paper_title: Classification of EEG during imagined mental tasks by forecasting with Elman Recurrent Neural Networks
paper_content:
The ability to classify EEG recorded while a subject performs varying imagined mental tasks may lay the foundation for building usable Brain-Computer Interfaces as well as improve the performance of EEG analysis software used in clinical settings. Although a number of research groups have produced EEG classifiers, these methods have not yet reached a level of performance that is acceptable for use in many practical applications. We assert that current approaches are limited by their ability to capture the temporal and spatial patterns contained within EEG. In order to address these problems, we propose a new generative technique for EEG classification that uses Elman Recurrent Neural Networks. EEG recorded while a subject performs one of several imagined mental tasks is first modeled by training a network to forecast the signal a single step ahead in time. We show that these models are able to forecast EEG well with an RMSE as low as 0.110. A separate model is then trained over EEG belonging to each class. Classification of previously unseen data is performed by applying each model and assigning the class label associated with the network that produced the lowest forecasting error. This approach is tested on EEG collected from two able-bodied subjects and one subject with a high-level spinal cord injury. Classification rates as high as 93.3% are achieved for a two-task problem with decisions made every second yielding a bitrate of 38.7 bits per minute.
---
paper_title: Analysis of EEG signals by implementing eigenvector methods/recurrent neural networks
paper_content:
The implementation of recurrent neural network (RNN) employing eigenvector methods is presented for classification of electroencephalogram (EEG) signals. In practical applications of pattern recognition, there are often diverse features extracted from raw data which needs recognizing. Because of the importance of making the right decision, the present work is carried out for searching better classification procedures for the EEG signals. Decision making was performed in two stages: feature extraction by eigenvector methods and classification using the classifiers trained on the extracted features. The aim of the study is classification of the EEG signals by the combination of eigenvector methods and the RNN. The present research demonstrated that the power levels of the power spectral density (PSD) estimates obtained by the eigenvector methods are the features which well represent the EEG signals and the RNN trained on these features achieved high classification accuracies.
---
paper_title: Recurrent neural network based language model
paper_content:
A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition
---
paper_title: A Deep Learning method for classification of images RSVP events with EEG data
paper_content:
In this paper, we investigated Deep Learning (DL) for characterizing and detecting target images in an image rapid serial visual presentation (RSVP) task based on EEG data. We exploited a DL technique with input feature clusters to handle high-dimensional features related to time-frequency events. The method was applied to EEG recordings of an RSVP experiment with multiple sessions and subjects. For classification of target and non-target images, a deep belief net (DBN) classifier was based on the uncorrelated features, which were constructed from the original correlated features using a clustering method. The performance of the proposed DBN was tested for different combinations of hidden units and hidden layers on multiple subjects. The results of the DBN were compared with cluster Linear Discriminant Analysis (cLDA) and a support vector machine (SVM), and the DBN demonstrated better performance in all tested cases. There was an improvement of 10-25% in certain cases. We also demonstrated how the DBN is used to characterize brain activities.
---
paper_title: A Tutorial on Principal Component Analysis
paper_content:
Principal component analysis (PCA) is a mainstay of modern data analysis - a black box that is widely used but (sometimes) poorly understood. The goal of this paper is to dispel the magic behind this black box. This manuscript focuses on building a solid intuition for how and why principal component analysis works. This manuscript crystallizes this knowledge by deriving from simple intuitions, the mathematics behind PCA. This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how and the why of applying this technique.
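Since the tutorial derives PCA from the covariance matrix and its eigendecomposition, a compact worked example may help. The data here is randomly generated and the implementation is a plain textbook version, not code from the tutorial.
```python
# Minimal PCA: center the data, form the covariance matrix, and project onto the
# eigenvectors with the largest eigenvalues.
import numpy as np

def pca(X, n_components=2):
    """X: (n_samples, n_features). Returns scores, components and sorted variances."""
    Xc = X - X.mean(axis=0)                       # 1. remove the mean
    cov = np.cov(Xc, rowvar=False)                # 2. covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # 3. eigendecomposition (symmetric)
    order = np.argsort(eigvals)[::-1]             # 4. sort by decreasing variance
    components = eigvecs[:, order[:n_components]]
    scores = Xc @ components                      # 5. project onto principal axes
    return scores, components, eigvals[order]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] = 2.0 * X[:, 0] + 0.1 * X[:, 1]           # introduce a correlated direction
scores, components, variances = pca(X, n_components=2)
print("explained variance of top components:", np.round(variances[:2], 2))
```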
---
| Title: A Review of Feature Extraction and Classification Algorithms for Image RSVP-Based BCI
Section 1: Introduction
Description 1: This section provides an overview of RSVP technology in BCI, its importance, and the necessity of a review dedicated to feature extraction and classification algorithms for RSVP-based BCI research.
Section 2: RSVP Experiment for EEG Data Acquisition
Description 2: This section describes the typical setup and procedure for data acquisition in RSVP-EEG experiments, including the hardware and software used.
Section 3: Brief Introduction to RSVP-EEG Pattern
Description 3: This section introduces the commonly observed EEG patterns in RSVP-BCI, with a focus on the P300 ERP and its significance.
Section 4: RSVP-EEG Data Preprocessing and Properties
Description 4: This section discusses the preprocessing steps required for RSVP-EEG data, including filtering, epoching, and addressing issues like low SNR and overlapping epochs.
Section 5: Performance Evaluation Metrics
Description 5: This section outlines various metrics to evaluate the performance of machine-learning models used in RSVP-based BCI, emphasizing ROC-AUC and balanced accuracy (BA).
Section 6: Feature Extraction Methods Used in RSVP-Based BCI Research
Description 6: This section reviews various feature extraction methods, including supervised and unsupervised spatial filtering techniques, and time-frequency representation specific to RSVP-EEG.
Section 7: Survey of Classifiers Used in RSVP-Based BCI Research
Description 7: This section surveys different classifiers, categorizing them into linear classifiers and neural networks, discussing their applicability and performance in RSVP-based BCI.
Section 8: Conclusion
Description 8: This section summarizes the key findings on feature extraction and classification methods in RSVP-based BCI, highlighting the advantages of different approaches and suggesting future research directions. |
A Survey of Green Networking Research | 12 | ---
paper_title: To Slow or Not to Slow: The Economics of The Greenhouse Effect
paper_content:
Over the last decade, scientists have studied extensively the greenhouse effect, which holds that the accumulation of carbon dioxide (CO2) and other greenhouse gases (GHGs) is expected to produce global warming and other significant climatic changes over the next century. Along with the scientific research have come growing alarm and calls for drastic curbs on the emissions of greenhouse gases, as for example the reports of the Intergovernmental Panel on Climate Change (IPCC [1990]) and the Second World Climate Conference (October 1990). To date, these calls to arms for forceful measures to slow greenhouse warming have been made without any serious attempt to weigh the costs and benefits of climatic change or alternative control strategies. The present study presents a simple approach for analyzing policies to slow climate change. We begin by summarizing the elements of an economic analysis of different approaches to controlling greenhouse warming. We then sketch a mathematical model of economic growth that links the economy, emissions, and climate changes and summarize the empirical evidence on the costs of reducing emissions and concentrations of greenhouse gases and on the damages from greenhouse warming, relying primarily on data for the United States. The different sections are then integrated to provide estimates of the efficient reduction of greenhouse gases, after which the final section summarizes the major results.
---
paper_title: Cutting the electric bill for internet-scale systems
paper_content:
Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices, for twenty nine locations in the US, and network traffic data collected on Akamai's CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs, by being cognizant of locational computation cost differences.
---
paper_title: Energy Efficiency in Telecom Optical Networks
paper_content:
Since the energy crisis and environmental protection have been gaining increasing concern in recent years, new research topics to devise technological solutions for energy conservation are being investigated in many scientific disciplines. Specifically, due to the rapid growth of energy consumption in ICT (Information and Communication Technologies), a lot of attention is being devoted to "green" ICT solutions. In this paper, we provide a comprehensive survey of the most relevant research activities for minimizing energy consumption in telecom networks, with a specific emphasis on those employing optical technologies. We investigate the energy-minimization opportunities enabled by optical technologies and classify the existing approaches over different network domains, namely core, metro, and access networks. A section is also devoted to describing energy-efficient solutions for some of today's important applications using optical network technology, e.g., grid computing and data centers. We provide an overview of the ongoing standardization efforts in this area. This work presents a comprehensive and timely survey on a growing field of research, as it covers most aspects of energy consumption in optical telecom networks. We aim at providing a comprehensive reference for the growing base of researchers who will work on energy efficiency of telecom networks in the upcoming years.
---
paper_title: The next frontier for communications networks: power management
paper_content:
Storage, memory, processor, and communications bandwidth are all relatively plentiful and inexpensive. However, a growing expense in the operation of computer networks is electricity usage. Estimates place devices connected to the Internet as consuming about 2%, and growing, of the total electricity produced in the USA; much of this power consumption is unnecessary. Power management is needed to reduce this large and growing energy consumption of the Internet. We see power management as the 'next frontier' in research in computer networks. In this paper, we propose methods for reducing energy consumption of networked desktop computers. Using traffic characterization of university dormitory computers, we show that there is significant idle time that can be exploited for power management. However, current Ethernet adapters in desktop computers lack the capabilities needed to allow existing system power management features to be enabled. We address this problem with a proxying Ethernet adapter that handles routine network tasks for a desktop computer when it is in a low-power sleep mode. This proxying adapter can allow existing power management features in desktop computers to remain enabled and have the computer be 'on the network' at all times. The energy that we expect can be saved is in the range of 0.8-2.7 billion US dollars/year.
---
paper_title: Enabling an Energy-Efficient Future Internet Through Selectively Connected End Systems
paper_content:
We offer an initial exploration of the architectural constructs required to support selective connectivity, whereby a host can choose the degree to which it maintains a network presence, rather than today's binary notion of "connected" or "disconnected". The driver for our thinking is to allow hosts to go to sleep.
---
paper_title: Cutting the electric bill for internet-scale systems
paper_content:
Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices, for twenty nine locations in the US, and network traffic data collected on Akamai's CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs, by being cognizant of locational computation cost differences.
---
paper_title: The Case for Energy-Proportional Computing
paper_content:
Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems.
---
paper_title: The next frontier for communications networks: power management
paper_content:
Storage, memory, processor, and communications bandwidth are all relatively plentiful and inexpensive. However, a growing expense in the operation of computer networks is electricity usage. Estimates place devices connected to the Internet as consuming about 2%, and growing, of the total electricity produced in the USA; much of this power consumption is unnecessary. Power management is needed to reduce this large and growing energy consumption of the Internet. We see power management as the 'next frontier' in research in computer networks. In this paper, we propose methods for reducing energy consumption of networked desktop computers. Using traffic characterization of university dormitory computers, we show that there is significant idle time that can be exploited for power management. However, current Ethernet adapters in desktop computers lack the capabilities needed to allow existing system power management features to be enabled. We address this problem with a proxying Ethernet adapter that handles routine network tasks for a desktop computer when it is in a low-power sleep mode. This proxying adapter can allow existing power management features in desktop computers to remain enabled and have the computer be 'on the network' at all times. The energy that we expect can be saved is in the range of 0.8-2.7 billion US dollars/year.
---
paper_title: Power Awareness in Network Design and Routing
paper_content:
Exponential bandwidth scaling has been a fundamental driver of the growth and popularity of the Internet. However, increases in bandwidth have been accompanied by increases in power consumption, and despite sustained system design efforts to address power demand, significant technological challenges remain that threaten to slow future bandwidth growth. In this paper we describe the power and associated heat management challenges in today's routers. We advocate a broad approach to addressing this problem that includes making power-awareness a primary objective in the design and configuration of networks, and in the design and implementation of network protocols. We support our arguments by providing a case study of power demands of two standard router platforms that enables us to create a generic model for router power consumption. We apply this model in a set of target network configurations and use mixed integer optimization techniques to investigate power consumption, performance and robustness in static network design and in dynamic routing. Our results indicate the potential for significant power savings in operational networks by including power-awareness.
---
paper_title: Energy Consumption of Residential and Professional Switches
paper_content:
Precise evaluation of network appliance energy consumption is necessary to accurately model or simulate the power consumption of distributed systems. In this paper we evaluate the influence of traffic on the electrical power consumption of four switches found in home and professional environments. First we describe our measurement and data analysis approach, and how our results can be used to estimate power consumption when the average traffic bandwidth is known. Then we present the measurement results of two residential switches and two professional switches. For each type we present regression models and parameters describing their quality. Similar to other works, we find that for one of the switches the power consumption actually drops for high traffic loads, while for the others the situation is reversed. Our measurements justify that, in most energy-consumption evaluations, network appliance energy cost can be approximated as constant. This work gives information on how this cost can vary.
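A sketch of the kind of regression model such a measurement study fits: switch power draw as an affine function of offered traffic. The sample points below are invented, and the paper's actual regression models may take a different form; the point is that a near-zero slope is what justifies treating switch power as roughly constant.
```python
# Fit power = intercept + slope * load on invented measurement points.
import numpy as np

bandwidth_mbps = np.array([0, 100, 250, 500, 750, 1000])    # average offered load
power_watts    = np.array([3.1, 3.2, 3.3, 3.5, 3.6, 3.8])   # measured draw (made up)

slope, intercept = np.polyfit(bandwidth_mbps, power_watts, deg=1)
residuals = power_watts - (slope * bandwidth_mbps + intercept)
r2 = 1 - np.sum(residuals**2) / np.sum((power_watts - power_watts.mean())**2)

print(f"P(load) ~ {intercept:.2f} W + {slope*1000:.2f} mW per Mb/s  (R^2 = {r2:.3f})")
# A small slope relative to the intercept is what lets many studies approximate
# switch power as constant regardless of traffic.
```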
---
paper_title: A Power Benchmarking Framework for Network Devices
paper_content:
Energy efficiency is becoming increasingly important in the operation of networking infrastructure, especially in enterprise and data center networks. Researchers have proposed several strategies for energy management of networking devices. However, we need a comprehensive characterization of power consumption by a variety of switches and routers to accurately quantify the savings from the various power savings schemes. In this paper, we first describe the hurdles in network power instrumentation and present a power measurement study of a variety of networking gear such as hubs, edge switches, core switches, routers and wireless access points in both stand-alone mode and a production data center. We build and describe a benchmarking suite that will allow users to measure and compare the power consumed for a large set of common configurations at any switch or router of their choice. We also propose a network energy proportionality index, which is an easily measurable metric, to compare power consumption behaviors of multiple devices.
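One way to compute an energy-proportionality figure of merit in the spirit of the index this paper proposes: 100 would mean a device whose power scales fully with load, 0 a device whose power is load-independent. The exact definition used in the paper may differ, and the device numbers below are illustrative.
```python
# Energy-proportionality index sketch: how much of peak power disappears at idle.
def energy_proportionality_index(idle_power_w, peak_power_w):
    return 100.0 * (peak_power_w - idle_power_w) / peak_power_w

devices = {                         # illustrative numbers, not measured values
    "edge switch": (150.0, 165.0),  # (idle W, peak W)
    "core router": (750.0, 780.0),
    "wireless AP": (6.0, 9.0),
}
for name, (idle, peak) in devices.items():
    print(f"{name:12s} EPI = {energy_proportionality_index(idle, peak):5.1f}")
```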
---
paper_title: A feasibility study for power management in LAN switches
paper_content:
We examine the feasibility of introducing power management schemes in network devices in the LAN. Specifically, we investigate the possibility of putting various components on LAN switches to sleep during periods of low traffic activity. Traffic collected in our LAN indicates that there are significant periods of inactivity on specific switch interfaces. Using an abstract sleep model devised for LAN switches, we examine the potential energy savings possible for different times of day and different interfaces (e.g., interfaces connecting to hosts to switches, or interfaces connecting switches, or interfaces connecting switches and routers). Algorithms developed for sleeping, based on periodic protocol behavior as well as traffic estimation are shown to be capable of conserving significant amounts of energy. Our results show that sleeping is indeed feasible in the LAN and in some cases, with very little impact on other protocols. However, we note that in order to maximize energy savings while minimizing sleep-related losses, we need hardware that supports sleeping.
---
paper_title: Energy-efficient algorithms
paper_content:
Algorithmic solutions can help reduce energy consumption in computing environs.
---
paper_title: Using Low-Power Modes for Energy Conservation in Ethernet LANs
paper_content:
Most Ethernet interfaces available for deployment in switches and hosts today can operate in a variety of different low power modes. However, currently these modes have very limited usage models. They do not take advantage of periods of inactivity, when the links remain idle or under-utilized. In this study, we propose methods that allow for detection of such periods to obtain energy savings with little impact on loss or delay. We evaluate our methods on a wide range of real-time traffic traces collected at a high-speed backbone switch within our campus LAN. Our results show that Ethernet interfaces at both ends can be put in extremely low power modes anywhere from 40%-98% of the time observed. In addition, we found that approximately 37% of interfaces studied (on the same switch) can be put in low power modes simultaneously which opens the potential for further energy savings in the switching fabric within the switch.
---
paper_title: Greening the Switch
paper_content:
Active research is being conducted in reducing the power consumption of all the components of the Internet. To that end, we propose schemes for power reduction in network switches: Time Window Prediction, Power Save Mode and Lightweight Alternative. These schemes are adaptive to changing traffic patterns and automatically tune their parameters to guarantee a bounded and specified increase in latency. We propose a novel architecture for buffering ingress packets using shadow ports. We test our schemes on packet traces obtained from an enterprise network, and evaluate them using realistic power models for the switches. Our simple power reduction schemes produce power savings of up to 32% with minimal increase in latency or packet loss. With appropriate hardware support in the form of Wake-on-Packet, shadow ports and fast transitioning of the ports between their high and low power states, these savings reach 90% of the optimal algorithm's savings.
---
paper_title: Greening of the internet
paper_content:
In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment.
---
paper_title: NGL02-2: Ethernet Adaptive Link Rate (ALR): Analysis of a Buffer Threshold Policy
paper_content:
Rapidly increasing energy use by computing and communications equipment is a significant problem that needs to be addressed. Ethernet network interface controllers (NICs) consume hundreds of millions of US$ in electricity per year. Most Ethernet links are underutilized and link power consumption can be reduced by operating at lower data rates. An output buffer threshold policy to change link data rate in response to utilization is investigated. Analytical and simulation models are developed to evaluate the performance of Adaptive Link Rate (ALR) with respect to mean packet delay and time spent in low data rate with Poisson traffic and 100 Mb/s network traces as inputs. A Markov model of a state-dependent service rate queue with rate transitions only at service completion is developed. For the traffic traces, it is found that a link can operate at 10 Mb/s for over 99% of the time yielding energy savings with no user-perceivable increase in packet delay.
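A toy discrete-time simulation of the dual-threshold output-buffer policy studied here: the link serves at the low rate until the queue exceeds a high threshold, switches to the high rate, and falls back once the queue drains below a low threshold. Rates, thresholds, slot length and the Poisson workload are all illustrative choices, not the paper's parameters.
```python
# Dual-threshold Adaptive Link Rate (ALR) policy on a single output buffer.
import numpy as np

LOW_RATE, HIGH_RATE = 10e6, 100e6          # link rates in bits per second
Q_HIGH, Q_LOW = 32, 4                      # queue thresholds in packets
PKT_BITS, SLOT = 1500 * 8, 1e-3            # packet size and 1 ms simulation slot

def simulate(arrival_rate_pps, slots=100_000, seed=1):
    rng = np.random.default_rng(seed)
    backlog_bits, rate, slots_low = 0.0, LOW_RATE, 0
    for _ in range(slots):
        backlog_bits += rng.poisson(arrival_rate_pps * SLOT) * PKT_BITS  # Poisson arrivals
        backlog_bits = max(0.0, backlog_bits - rate * SLOT)              # serve at current rate
        queue_pkts = backlog_bits / PKT_BITS
        if rate == LOW_RATE and queue_pkts > Q_HIGH:
            rate = HIGH_RATE                              # queue building up: speed up
        elif rate == HIGH_RATE and queue_pkts < Q_LOW:
            rate = LOW_RATE                               # queue drained: save energy
        slots_low += (rate == LOW_RATE)
    return slots_low / slots

for load_pps in (200, 800, 4000):
    print(f"{load_pps:5d} pps -> link in low rate {simulate(load_pps):.0%} of the time")
```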
---
paper_title: Managing energy consumption costs in desktop PCs and LAN switches with proxying, split TCP connections, and scaling of link speed
paper_content:
The IT equipment comprising the Internet in the USA uses about $6 billion of electricity every year. Much of this electricity use is wasted on idle, but fully powered-up, desktop PCs and network links. We show how to recover a large portion of the wasted electricity with improved power management methods that are focused on network issues.
---
paper_title: Reducing the Energy Consumption of Ethernet with Adaptive Link Rate (ALR)
paper_content:
The rapidly increasing energy consumption by computing and communications equipment is a significant economic and environmental problem that needs to be addressed. Ethernet network interface controllers (NICs) in the US alone consume hundreds of millions of US dollars in electricity per year. Most Ethernet links are underutilized and link energy consumption can be reduced by operating at a lower data rate. In this paper, we investigate adaptive link rate (ALR) as a means of reducing the energy consumption of a typical Ethernet link by adaptively varying the link data rate in response to utilization. Policies to determine when to change the link data rate are studied. Simple policies that use output buffer queue length thresholds and fine-grain utilization monitoring are shown to be effective. A Markov model of a state-dependent service rate queue with rate transitions only at service completion is used to evaluate the performance of ALR with respect to the mean packet delay, the time spent in an energy-saving low link data rate, and the oscillation of link data rates. Simulation experiments using actual and synthetic traffic traces show that an Ethernet link with ALR can operate at a lower data rate for over 80 percent of the time, yielding significant energy savings with only a very small increase in packet delay.
---
paper_title: On the complexity of time table and multi-commodity flow problems
paper_content:
A very primitive version of Gotlieb's timetable problem is shown to be NP-complete, and therefore all the common timetable problems are NP-complete. A polynomial time algorithm, in case all teachers are binary, is shown. The theorem that a meeting function always exists if all teachers and classes have no time constraints is proved. The multi-commodity integral flow problem is shown to be NP-complete even if the number of commodities is two. This is true both in the directed and undirected cases. Finally, the two commodity real flow problem in undirected graphs is shown to be solvable in polynomial time. The time bound is O(|v|2|E|).
---
paper_title: PowerNap: eliminating server idle power
paper_content:
Data center power consumption is growing to unprecedented levels: the EPA estimates U.S. data centers will consume 100 billion kilowatt hours annually by 2011. Much of this energy is wasted in idle systems: in typical deployments, server utilization is below 30%, but idle servers still consume 60% of their peak power draw. Typical idle periods, though frequent, last seconds or less, confounding simple energy-conservation approaches. In this paper, we propose PowerNap, an energy-conservation approach where the entire system transitions rapidly between a high-performance active state and a near-zero-power idle state in response to instantaneous load. Rather than requiring fine-grained power-performance states and complex load-proportional operation from each system component, PowerNap instead calls for minimizing idle power and transition time, which are simpler optimization goals. Based on the PowerNap concept, we develop requirements and outline mechanisms to eliminate idle power waste in enterprise blade servers. Because PowerNap operates in low-efficiency regions of current blade center power supplies, we introduce the Redundant Array for Inexpensive Load Sharing (RAILS), a power provisioning approach that provides high conversion efficiency across the entire range of PowerNap's power demands. Using utilization traces collected from enterprise-scale commercial deployments, we demonstrate that, together, PowerNap and RAILS reduce average server power consumption by 74%.
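A back-of-the-envelope illustration (not the paper's model) of why PowerNap hinges on minimizing idle power and transition time: if every burst of work costs a full-power transition, slow transitions erase most of the benefit of a low-power nap state. All numbers are assumed for illustration.
```python
# Toy two-state model: the server is active or napping, and each wake-up adds a
# transition period charged at full power.
def average_power(p_active_w, p_nap_w, utilization, bursts_per_s, transition_s):
    transition_overhead = bursts_per_s * transition_s        # time fraction spent transitioning
    active = min(1.0, utilization + transition_overhead)     # busy work plus transitions
    return active * p_active_w + (1.0 - active) * p_nap_w

for transition_ms in (100.0, 10.0, 1.0, 0.1):
    p = average_power(p_active_w=450, p_nap_w=10, utilization=0.2,
                      bursts_per_s=5, transition_s=transition_ms / 1000)
    print(f"transition {transition_ms:6.1f} ms -> average power {p:6.1f} W")
```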
---
paper_title: Reducing Network Energy Consumption via Sleeping and Rate-Adaptation
paper_content:
We present the design and evaluation of two forms of power management schemes that reduce the energy consumption of networks. The first is based on putting network components to sleep during idle times, reducing energy consumed in the absence of packets. The second is based on adapting the rate of network operation to the offered workload, reducing the energy consumed when actively processing packets. ::: ::: For real-world traffic workloads and topologies and using power constants drawn from existing network equipment, we show that even simple schemes for sleeping or rate-adaptation can offer substantial savings. For instance, our practical algorithms stand to halve energy consumption for lightly utilized networks (10-20%). We show that these savings approach the maximum achievable by any algorithms using the same power management primitives. Moreover this energy can be saved without noticeably increasing loss and with a small and controlled increase in latency (<10ms). Finally, we show that both sleeping and rate adaptation are valuable depending (primarily) on the power profile of network equipment and the utilization of the network itself.
---
paper_title: Power-Aware Speed Scaling in Processor Sharing Systems
paper_content:
Energy use of computer communication systems has quickly become a vital design consideration. One effective method for reducing energy consumption is dynamic speed scaling, which adapts the processing speed to the current load. This paper studies how to optimally scale speed to balance mean response time and mean energy consumption under processor sharing scheduling. Both bounds and asymptotics for the optimal speed scaling scheme are provided. These results show that a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, the results also highlight that dynamic speed scaling provides at least one key benefit - significantly improved robustness to bursty traffic and mis-estimation of workload parameters.
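A numerical sketch of the near-optimal scheme the paper highlights: halt when idle and run at one static speed while busy. Assuming an M/M/1-PS queue with unit-mean jobs, speed s, power s**alpha while busy, and a weight beta trading energy against delay, the static speed can be found by one-dimensional optimization; all parameter values here are illustrative.
```python
# Pick a static busy-period speed that minimizes mean response time plus weighted
# energy per job for an M/M/1-PS queue (unit-mean jobs, arrival rate lam).
from scipy.optimize import minimize_scalar

def cost(speed, lam=0.7, alpha=3.0, beta=1.0):
    if speed <= lam:
        return float("inf")                      # unstable queue
    mean_response = 1.0 / (speed - lam)          # M/M/1-PS mean response time
    energy_per_job = speed ** (alpha - 1.0)      # power s^alpha times service time 1/s
    return mean_response + beta * energy_per_job

res = minimize_scalar(cost, bounds=(0.71, 10.0), method="bounded")
print(f"near-optimal static speed ~ {res.x:.2f}, cost ~ {res.fun:.2f}")
```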
---
paper_title: NGL02-2: Ethernet Adaptive Link Rate (ALR): Analysis of a Buffer Threshold Policy
paper_content:
Rapidly increasing energy use by computing and communications equipment is a significant problem that needs to be addressed. Ethernet network interface controllers (NICs) consume hundreds of millions of US$ in electricity per year. Most Ethernet links are underutilized and link power consumption can be reduced by operating at lower data rates. An output buffer threshold policy to change link data rate in response to utilization is investigated. Analytical and simulation models are developed to evaluate the performance of Adaptive Link Rate (ALR) with respect to mean packet delay and time spent in low data rate with Poisson traffic and 100 Mb/s network traces as inputs. A Markov model of a state-dependent service rate queue with rate transitions only at service completion is developed. For the traffic traces, it is found that a link can operate at 10 Mb/s for over 99% of the time yielding energy savings with no user-perceivable increase in packet delay.
---
paper_title: Managing energy consumption costs in desktop PCs and LAN switches with proxying, split TCP connections, and scaling of link speed
paper_content:
The IT equipment comprising the Internet in the USA uses about $6 billion of electricity every year. Much of this electricity use is wasted on idle, but fully powered-up, desktop PCs and network links. We show how to recover a large portion of the wasted electricity with improved power management methods that are focused on network issues.
---
paper_title: Using Low-Power Modes for Energy Conservation in Ethernet LANs
paper_content:
Most Ethernet interfaces available for deployment in switches and hosts today can operate in a variety of different low power modes. However, currently these modes have very limited usage models. They do not take advantage of periods of inactivity, when the links remain idle or under-utilized. In this study, we propose methods that allow for detection of such periods to obtain energy savings with little impact on loss or delay. We evaluate our methods on a wide range of real-time traffic traces collected at a high-speed backbone switch within our campus LAN. Our results show that Ethernet interfaces at both ends can be put in extremely low power modes anywhere from 40%-98% of the time observed. In addition, we found that approximately 37% of interfaces studied (on the same switch) can be put in low power modes simultaneously which opens the potential for further energy savings in the switching fabric within the switch.
---
paper_title: Greening the Switch
paper_content:
Active research is being conducted in reducing the power consumption of all the components of the Internet. To that end, we propose schemes for power reduction in network switches: Time Window Prediction, Power Save Mode and Lightweight Alternative. These schemes are adaptive to changing traffic patterns and automatically tune their parameters to guarantee a bounded and specified increase in latency. We propose a novel architecture for buffering ingress packets using shadow ports. We test our schemes on packet traces obtained from an enterprise network, and evaluate them using realistic power models for the switches. Our simple power reduction schemes produce power savings of up to 32% with minimal increase in latency or packet loss. With appropriate hardware support in the form of Wake-on-Packet, shadow ports and fast transitioning of the ports between their high and low power states, these savings reach 90% of the optimal algorithm's savings.
---
paper_title: Skilled in the Art of Being Idle: Reducing Energy Waste in Networked Systems
paper_content:
Networked end-systems such as desktops and set-top boxes are often left powered-on, but idle, leading to wasted energy consumption. An alternative would be for these idle systems to enter low-power sleep modes. Unfortunately, today, a sleeping system sees degraded functionality: first, a sleeping device loses its network "presence" which is problematic to users and applications that expect to maintain access to a remote machine and, second, sleeping can prevent running tasks scheduled during times of low utilization (e.g., network backups). Various solutions to these problems have been proposed over the years including wake-on-lan (WoL) mechanisms that wake hosts when specific packets arrive, and the use of a proxy that handles idle-time traffic on behalf of a sleeping host. As of yet, however, an in-depth evaluation of the potential for energy savings, and the effectiveness of proposed solutions has not been carried out. To remedy this, in this paper, we collect data directly from 250 enterprise users on their end-host machines capturing network traffic patterns and user presence indicators. With this data, we answer several questions: what is the potential value of proxying or using magic packets? which protocols and applications require proxying? how comprehensive does proxying need to be for energy benefits to be compelling? and so on. ::: ::: We find that, although there is indeed much potential for energy savings, trivial approaches are not effective. We also find that achieving substantial savings requires a careful consideration of the tradeoffs between the proxy complexity and the idle-time functionality available to users, and that these tradeoffs vary with user environment. Based on our findings, we propose and evaluate a proxy architecture that exposes a minimal set of APIs to support different forms of idle-time behavior.
---
paper_title: Reducing the Energy Consumption of Ethernet with Adaptive Link Rate (ALR)
paper_content:
The rapidly increasing energy consumption by computing and communications equipment is a significant economic and environmental problem that needs to be addressed. Ethernet network interface controllers (NICs) in the US alone consume hundreds of millions of US dollars in electricity per year. Most Ethernet links are underutilized and link energy consumption can be reduced by operating at a lower data rate. In this paper, we investigate adaptive link rate (ALR) as a means of reducing the energy consumption of a typical Ethernet link by adaptively varying the link data rate in response to utilization. Policies to determine when to change the link data rate are studied. Simple policies that use output buffer queue length thresholds and fine-grain utilization monitoring are shown to be effective. A Markov model of a state-dependent service rate queue with rate transitions only at service completion is used to evaluate the performance of ALR with respect to the mean packet delay, the time spent in an energy-saving low link data rate, and the oscillation of link data rates. Simulation experiments using actual and synthetic traffic traces show that an Ethernet link with ALR can operate at a lower data rate for over 80 percent of the time, yielding significant energy savings with only a very small increase in packet delay.
---
paper_title: Greening of the internet
paper_content:
In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment.
---
paper_title: Managing energy consumption costs in desktop PCs and LAN switches with proxying, split TCP connections, and scaling of link speed
paper_content:
The IT equipment comprising the Internet in the USA uses about $6 billion of electricity every year. Much of this electricity use is wasted on idle, but fully powered-up, desktop PCs and network links. We show how to recover a large portion of the wasted electricity with improved power management methods that are focused on network issues.
---
paper_title: Skilled in the Art of Being Idle: Reducing Energy Waste in Networked Systems
paper_content:
Networked end-systems such as desktops and set-top boxes are often left powered-on, but idle, leading to wasted energy consumption. An alternative would be for these idle systems to enter low-power sleep modes. Unfortunately, today, a sleeping system sees degraded functionality: first, a sleeping device loses its network "presence" which is problematic to users and applications that expect to maintain access to a remote machine and, second, sleeping can prevent running tasks scheduled during times of low utilization (e.g., network backups). Various solutions to these problems have been proposed over the years including wake-on-lan (WoL) mechanisms that wake hosts when specific packets arrive, and the use of a proxy that handles idle-time traffic on behalf of a sleeping host. As of yet, however, an in-depth evaluation of the potential for energy savings, and the effectiveness of proposed solutions has not been carried out. To remedy this, in this paper, we collect data directly from 250 enterprise users on their end-host machines capturing network traffic patterns and user presence indicators. With this data, we answer several questions: what is the potential value of proxying or using magic packets? which protocols and applications require proxying? how comprehensive does proxying need to be for energy benefits to be compelling? and so on. ::: ::: We find that, although there is indeed much potential for energy savings, trivial approaches are not effective. We also find that achieving substantial savings requires a careful consideration of the tradeoffs between the proxy complexity and the idle-time functionality available to users, and that these tradeoffs vary with user environment. Based on our findings, we propose and evaluate a proxy architecture that exposes a minimal set of APIs to support different forms of idle-time behavior.
---
paper_title: Somniloquy: Augmenting Network Interfaces to Reduce PC Energy Usage
paper_content:
Reducing the energy consumption of PCs is becoming increasingly important with rising energy costs and environmental concerns. Sleep states such as S3 (suspend to RAM) save energy, but are often not appropriate because ongoing networking tasks, such as accepting remote desktop logins or performing background file transfers, must be supported. In this paper we present Somniloquy, an architecture that augments network interfaces to allow PCs in S3 to be responsive to network traffic. We show that many applications, such as remote desktop and VoIP, can be supported without application-specific code in the augmented network interface by using application-level wakeup triggers. A further class of applications, such as instant messaging and peer-to-peer file sharing, can be supported with modest processing and memory resources in the network interface. Experiments using our prototype Somniloquy implementation, a USB-based network interface, demonstrate energy savings of 60% to 80% in most commonly occurring scenarios. This translates to significant cost savings for PC users.
---
paper_title: Managing energy consumption costs in desktop PCs and LAN switches with proxying, split TCP connections, and scaling of link speed
paper_content:
The IT equipment comprising the Internet in the USA uses about $6 billion of electricity every year. Much of this electricity use is wasted on idle, but fully powered-up, desktop PCs and network links. We show how to recover a large portion of the wasted electricity with improved power management methods that are focused on network issues.
---
paper_title: Power-Proxying on the NIC: A Case Study with the Gnutella File-Sharing Protocol
paper_content:
Edge devices such as desktop and laptop computers constitute a majority of the devices connected to the Internet today. Peer-to-Peer (P2P) file-sharing applications generally require edge devices to maintain network presence whenever possible to enhance the robustness of the file-sharing network, which in turn can lead to considerable wastage of energy. We show that energy can be saved by permitting edge devices to enter into standby state and still maintain network connectivity by proxying protocols in the Network Interface Card (NIC).
---
paper_title: A Prototype Power Management Proxy for Gnutella Peer-to-Peer File Sharing
paper_content:
In order to be part of a peer-to-peer (P2P) file sharing network a host must be fully powered-on all of the time. In addition to providing a user interface, a P2P host handles query messages and serves requested files. In this paper, we describe the development of a prototype Gnutella-like P2P power management proxy sub-system that handles query messages. This can allow desktop PCs acting as P2P hosts to enter a low-power sleep state for most of the time and be woken-up by the proxy only when needed to serve files. TCP connections with neighbors are maintained by the host when it is awake and by the proxy when the host is sleeping. Experiments show that a low-cost Freescale ColdFire processor can effectively proxy for a P2P host. This suggests that a controller for a Gnutella P2P proxy could be co-located on an Ethernet NIC at low cost. This could lead to significant energy savings by allowing P2P hosts to power manage into a low-power sleep state when not in active use.
---
paper_title: Managing energy consumption costs in desktop PCs and LAN switches with proxying, split TCP connections, and scaling of link speed
paper_content:
The IT equipment comprising the Internet in the USA uses about $6 billion of electricity every year. Much of this electricity use is wasted on idle, but fully powered-up, desktop PCs and network links. We show how to recover a large portion of the wasted electricity with improved power management methods that are focused on network issues.
---
paper_title: Skilled in the Art of Being Idle: Reducing Energy Waste in Networked Systems
paper_content:
Networked end-systems such as desktops and set-top boxes are often left powered-on, but idle, leading to wasted energy consumption. An alternative would be for these idle systems to enter low-power sleep modes. Unfortunately, today, a sleeping system sees degraded functionality: first, a sleeping device loses its network "presence" which is problematic to users and applications that expect to maintain access to a remote machine and, second, sleeping can prevent running tasks scheduled during times of low utilization (e.g., network backups). Various solutions to these problems have been proposed over the years including wake-on-lan (WoL) mechanisms that wake hosts when specific packets arrive, and the use of a proxy that handles idle-time traffic on behalf of a sleeping host. As of yet, however, an in-depth evaluation of the potential for energy savings, and the effectiveness of proposed solutions has not been carried out. To remedy this, in this paper, we collect data directly from 250 enterprise users on their end-host machines capturing network traffic patterns and user presence indicators. With this data, we answer several questions: what is the potential value of proxying or using magic packets? which protocols and applications require proxying? how comprehensive does proxying need to be for energy benefits to be compelling? and so on. ::: ::: We find that, although there is indeed much potential for energy savings, trivial approaches are not effective. We also find that achieving substantial savings requires a careful consideration of the tradeoffs between the proxy complexity and the idle-time functionality available to users, and that these tradeoffs vary with user environment. Based on our findings, we propose and evaluate a proxy architecture that exposes a minimal set of APIs to support different forms of idle-time behavior.
---
paper_title: Power-Proxying on the NIC: A Case Study with the Gnutella File-Sharing Protocol
paper_content:
Edge devices such as desktop and laptop computers constitute a majority of the devices connected to the Internet today. Peer-to-Peer (P2P) file-sharing applications generally require edge devices to maintain network presence whenever possible to enhance the robustness of the file-sharing network, which in turn can lead to considerable wastage of energy. We show that energy can be saved by permitting edge devices to enter into standby state and still maintain network connectivity by proxying protocols in the Network Interface Card (NIC).
---
paper_title: Power Awareness in Network Design and Routing
paper_content:
Exponential bandwidth scaling has been a fundamental driver of the growth and popularity of the Internet. However, increases in bandwidth have been accompanied by increases in power consumption, and despite sustained system design efforts to address power demand, significant technological challenges remain that threaten to slow future bandwidth growth. In this paper we describe the power and associated heat management challenges in today's routers. We advocate a broad approach to addressing this problem that includes making power-awareness a primary objective in the design and configuration of networks, and in the design and implementation of network protocols. We support our arguments by providing a case study of power demands of two standard router platforms that enables us to create a generic model for router power consumption. We apply this model in a set of target network configurations and use mixed integer optimization techniques to investigate power consumption, performance and robustness in static network design and in dynamic routing. Our results indicate the potential for significant power savings in operational networks by including power-awareness.
---
paper_title: Time for a "Greener" Internet
paper_content:
It is anticipated that the Internet traffic will continue to grow exponentially for the foreseeable future, which will require ever-growing energy (electricity). Since a lot of the Internet traffic growth comes from predictable services (such as video) there is a huge potential for decreasing future Internet energy requirements by synchronizing the operation of routers and scheduling traffic in advance, thus reducing complexity (e.g., header processing, buffer size, switching fabric speedup and memory access bandwidth speedup). Today, scheduling and synchronizing large-scale data transfer operations can be easily achieved by utilizing a choice of tens of global time sources, freely available on earth and in space. In a way, this manuscript shows how to "trade" global time for electricity utilized by the global Internet.
---
paper_title: Optical Burst Switching (OBS): A New Paradigm for an Optical Internet
paper_content:
To support bursty traffic on the Internet (and especially WWW) efficiently, optical burst switching (OBS) is proposed as a way to streamline both protocols and hardware in building the future generation Optical Internet. By leveraging the attractive properties of optical communications and at the same time, taking into account its limitations, OBS combines the best of optical circuit-switching and packet/cell switching. In this paper, the general concept of OBS protocols and in particular, those based on Just-Enough-Time (JET), is described, along with the applicability of OBS protocols to IP over WDM. Specific issues such as the use of fiber delay-lines (FDLs) for accommodating processing delay and/or resolving conflicts are also discussed. In addition, the performance of JET-based OBS protocols which use an offset time along with delayed reservation to achieve efficient utilization of both bandwidth and FDLs as well as to support priority-based routing is evaluated.
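A small worked example of the JET offset idea: the control packet leads the data burst by the sum of per-hop control-processing delays (plus any extra offset used for priority), so the remaining offset at each successive hop is still large enough to cover that hop's processing. The per-hop delay values are assumptions for illustration.
```python
# Just-Enough-Time (JET) offset: base offset = sum of per-hop processing delays.
def remaining_offsets(per_hop_processing_s, extra_priority_offset_s=0.0):
    """Offset still available when the control packet reaches each successive hop."""
    total = sum(per_hop_processing_s) + extra_priority_offset_s
    remaining, consumed = [], 0.0
    for delta in per_hop_processing_s:
        remaining.append(total - consumed)   # offset left before the burst arrives here
        consumed += delta
    return total, remaining

path_processing = [25e-6, 25e-6, 40e-6, 25e-6]      # control processing at 4 hops (s)
total, remaining = remaining_offsets(path_processing)
print(f"initial offset: {total*1e6:.0f} us")
for hop, r in enumerate(remaining, 1):
    print(f"  hop {hop}: burst trails the control packet by {r*1e6:.0f} us")
```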
---
paper_title: Reducing Network Energy Consumption via Sleeping and Rate-Adaptation
paper_content:
We present the design and evaluation of two forms of power management schemes that reduce the energy consumption of networks. The first is based on putting network components to sleep during idle times, reducing energy consumed in the absence of packets. The second is based on adapting the rate of network operation to the offered workload, reducing the energy consumed when actively processing packets. ::: ::: For real-world traffic workloads and topologies and using power constants drawn from existing network equipment, we show that even simple schemes for sleeping or rate-adaptation can offer substantial savings. For instance, our practical algorithms stand to halve energy consumption for lightly utilized networks (10-20%). We show that these savings approach the maximum achievable by any algorithms using the same power management primitives. Moreover this energy can be saved without noticeably increasing loss and with a small and controlled increase in latency (<10ms). Finally, we show that both sleeping and rate adaptation are valuable depending (primarily) on the power profile of network equipment and the utilization of the network itself.
---
paper_title: On reliability, performance and Internet power consumption
paper_content:
With the increasing concern for global warming, the impact of Internet power consumption is gaining interest. In this paper, we explore, for the first time, the relationship between network robustness, performance and Internet power consumption. We first discuss such a relationship based on data collected from Internet sources. Next, we propose a modeling framework to size that relationship. It is shown that when designing networks based on power consumption, careful attention should be paid to the trade-off between energy consumption and network performance since doing otherwise would lead to unreliable networks.
---
paper_title: The GREEN-NET framework: Energy efficiency in large scale distributed systems
paper_content:
The question of energy savings has long been a matter of concern in mobile distributed systems and battery-constrained systems. However, for large-scale non-mobile distributed systems, which nowadays reach impressive sizes, the energy dimension (electrical consumption) is only starting to be taken into account. In this paper, we present the GREEN-NET framework, which is based on three main components: an ON/OFF model based on an Energy Aware Resource Infrastructure (EARI), an adapted Resource Management System (OAR) for energy efficiency, and a trust delegation component to assume the network presence of sleeping nodes.
---
paper_title: Reducing Power Consumption in Backbone Networks
paper_content:
According to several studies, the power consumption of the Internet accounts for up to 10% of the worldwide energy consumption, and several initiatives are being put into place to reduce the power consumption of the ICT sector in general. To this goal, we propose a novel approach to switch off network nodes and links while still guaranteeing full connectivity and maximum link utilization. After showing that the problem falls in the class of capacitated multi-commodity flow problems, and therefore it is NP-complete, we propose some heuristic algorithms to solve it. Simulation results in a realistic scenario show that it is possible to reduce the number of links and nodes currently used by up to 30% and 50% respectively during off-peak hours, while offering the same service quality.
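A toy version of the switch-off idea, using networkx: greedily try to power down the least-loaded links, keeping a link off only if the network stays connected. The real heuristics in this line of work also re-route the traffic matrix and enforce a maximum link utilization; this sketch checks connectivity only, and the topology and loads are invented.
```python
# Greedy link switch-off sketch: remove lightly loaded links while preserving connectivity.
import networkx as nx

G = nx.Graph()
links = [("A", "B", 10), ("B", "C", 40), ("C", "D", 15), ("D", "A", 35),
         ("A", "C", 5), ("B", "D", 8)]            # (node, node, carried load in Gb/s)
for u, v, load in links:
    G.add_edge(u, v, load=load)

switched_off = []
for u, v, data in sorted(G.edges(data=True), key=lambda e: e[2]["load"]):
    G.remove_edge(u, v)
    if nx.is_connected(G):
        switched_off.append((u, v))               # safe to keep this link powered off
    else:
        G.add_edge(u, v, **data)                  # restore: removing it disconnects the network

print("links powered off:", switched_off)
print("links still on   :", list(G.edges()))
```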
---
paper_title: Greening backbone networks: reducing energy consumption by shutting off cables in bundled links
paper_content:
In backbone networks, the line cards that drive the links between neighboring routers consume a large amount of energy. Since these networks are typically overprovisioned, selectively shutting down links during periods of low demand seems like a good way to reduce energy consumption. However, removing entire links from the topology often reduces capacity and connectivity too much, and leads to transient disruptions in the routing protocol. In this paper, we exploit the fact that many links in core networks are actually 'bundles' of multiple physical cables and line cards that can be shut down independently. Since identifying the optimal set of cables to shut down is an NP-complete problem, we propose several heuristics based on linear optimization techniques. We evaluate our heuristics on topology and traffic data from the Abilene backbone as well as on two synthetic topologies. The energy savings are significant, our simplest heuristic reduces energy consumption by 79% on Abilene under realistic traffic loads and bundled links consisting of five cables. Our optimization techniques run efficiently using standard optimization tools, such as the AMPL/CPLEX solver, making them a practical approach for network operators to reduce the energy consumption of their backbones.
---
paper_title: Energy-aware routing: A reality check
paper_content:
In this work, we analyze the design of green routing algorithms and evaluate the achievable energy savings that such mechanisms could allow in several realistic network scenarios. We formulate the problem as a minimum energy routing optimization, which we numerically solve considering a core-network scenario, which can be seen as a worst-case for energy saving performance (as nodes cannot be switched off). To gather full-relief results, we analyze the energy savings in various conditions (i.e., network topology and traffic matrix) and under different technology assumptions (i.e., the energy profile of the network devices). These results give us insight into the potential benefits of different “green” technologies and their interactions. In particular, we show that depending on the topology and traffic matrices, the optimal energy savings can be modest, partly limiting the interest for green routing approaches for some scenarios. At the same time, we also show that the common belief that there is a trade off between green network optimization and performance does not necessarily hold: in the considered environment, green routing has no effect on the main network performances such as maximum link utilization.
---
paper_title: Greening of the internet
paper_content:
In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment.
---
paper_title: A Simulation Study of a New Green BitTorrent
paper_content:
The use of P2P technologies, such as BitTorrent, to distribute legal content to consumers is actively being explored as a means of reducing both file download times and the energy consumption of data centers. This approach pushes the energy use out of the data centers and into the homes of content consumers (who are also then content distributors). The current BitTorrent protocol requires that clients must be fully powered- on to be participating members in a swarm. In this paper, we show that simple changes to the BitTorrent protocol, including long-lived knowledge of sleeping peers and a new wake-up semantic, can enable clients to sleep when not actively downloading or uploading yet still be responsive swarm members. Using ns-2 we simulate a green BitTorrent swarm. We show that significant energy savings are achievable with only a small performance penalty in increased file download time.
---
paper_title: Fine-grained energy profiling for power-aware application design
paper_content:
Significant opportunities for power optimization exist at application design stage and are not yet fully exploited by system and application designers. We describe the challenges developers face in optimizing software for energy efficiency by exploiting application-level knowledge. To address these challenges, we propose the development of automated tools that profile the energy usage of various resource components used by an application and guide the design choices accordingly. We use a preliminary version of a tool we have developed to demonstrate how automated energy profiling helps a developer choose between alternative designs in the energy-performance trade-off space.
---
paper_title: Green: A System for Supporting Energy-Conscious Programming using Principled Approximation
paper_content:
Energy-efficient computing is important in several systems ranging from embedded devices to large scale data centers. Several application domains offer the opportunity to tradeoff quality of service/solution (QoS) for improvements in performance and reduction in energy consumption. Programmers sometimes take advantage of such opportunities, albeit in an ad-hoc manner and often without providing any QoS guarantees. We propose a system called Green that provides a simple and flexible framework that allows programmers to take advantage of such approximation opportunities in a systematic manner while providing statistical QoS guarantees. Green enables programmers to approximate expensive functions and loops and operates in two phases. In the calibration phase, it builds a model of the QoS loss produced by the approximation. This model is used in the operational phase to make approximation decisions based on the QoS constraints specified by the programmer. The operational phase also includes an adaptation function that occasionally monitors the runtime behavior and changes the approximation decisions and QoS model to provide strong QoS guarantees. To evaluate the effectiveness of Green, we implemented our system and language extensions using the Phoenix compiler framework. Our experiments using benchmarks from domains such as graphics, machine learning, and signal processing, and a real-world web search application, indicate that Green can produce significant improvements in performance and energy consumption with small and statistically guaranteed QoS degradation.
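The calibration/operational split described above can be illustrated with a toy loop-truncation example: calibration measures the QoS loss of several approximation levels on sample inputs and picks the cheapest level that meets a specified bound. Everything below (function names, the QoS metric, the workload) is invented for illustration; the actual Green system operates at the compiler level and provides statistical guarantees.

```python
# Toy illustration of the calibrate-then-approximate pattern; not the Green system itself.
import math

def expensive(x, terms=200):
    """'Exact' computation: truncated Taylor series for exp(x)."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

def qos_loss(approx, exact):
    return abs(approx - exact) / abs(exact)

def calibrate(samples, candidate_terms, max_loss=1e-6):
    """Pick the smallest truncation level whose worst observed loss meets the QoS bound."""
    for terms in sorted(candidate_terms):
        worst = max(qos_loss(expensive(x, terms), expensive(x)) for x in samples)
        if worst <= max_loss:
            return terms
    return max(candidate_terms)        # fall back to the most precise setting

if __name__ == "__main__":
    level = calibrate(samples=[0.1, 1.0, 2.5], candidate_terms=[5, 10, 20, 40])
    print("calibrated truncation level:", level)
    # Operational phase: run with the cheap setting, re-calibrating occasionally.
    print("approx exp(2.5) =", expensive(2.5, level), "vs math.exp:", math.exp(2.5))
```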
---
paper_title: A "Green TCP/IP" to reduce electricity consumed by computers
paper_content:
This new era in engineering calls for a greater understanding of the environmental impacts of modern technology. In the year 2000 it is estimated that personal computers in the USA alone will consume 21.9 TWh of electricity. Most of this electricity will be wasted due to the PCs remaining powered-on, but idle, most of the time. The PCs are often left powered-on so that network connectivity can be maintained even when the PC is not actively used. This paper describes how PCs can be powered-off and network connectivity still be maintained. The design and evaluation of a new connection sleep option in a "Green TCP/IP" is described. The long-term impacts of a Green TCP/IP can be very significant in terms of measured electricity savings.
---
paper_title: Computational energy cost of TCP
paper_content:
We present results from a detailed energy measurement study of TCP. We focus on the node-level cost of the TCP protocol and obtain a breakdown of the energy cost of different TCP functions. We analyze the energy consumption of TCP on two platforms (laptop and iPAQ) and three operating systems (FreeBSD 4.2, 5 and Linux 2.4.7). Our results show that 60-70% of the energy cost (for transmission or reception) is accounted for by the kernel NIC (network interface card) copy operation. Of the remainder, ~15% is accounted for in the copy operation from user space to kernel space with the remaining 15% being accounted for by TCP processing costs. We then further analyze the 15% TCP processing cost and show that the cost of computing checksums accounts for 20-30% of TCP processing cost. Finally, we determine the processing costs of two primary TCP functions - timeouts and triple duplicate ACKs. Pulling all these costs together, we present techniques whereby energy savings of between 20%-30% in the computational cost of TCP can be achieved.
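The cost breakdown quoted above lends itself to a quick back-of-the-envelope estimate of what checksum offload plus a zero-copy user/kernel path could save. The split comes from the abstract, while the mid-range values and the way they are combined below are illustrative arithmetic, not figures from the paper.

```python
# Back-of-the-envelope combination of the per-packet energy breakdown quoted above.
kernel_nic_copy  = 0.65   # 60-70% of the energy cost
user_kernel_copy = 0.15
tcp_processing   = 0.15
checksum_share   = 0.25   # 20-30% of the TCP-processing slice

# Example optimizations: checksum offload to the NIC plus a zero-copy user/kernel path.
saved = tcp_processing * checksum_share + user_kernel_copy
print(f"estimated saving: {saved:.0%} of TCP's computational energy")
# ~19%, near the lower end of the 20-30% range reported in the abstract.
```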
---
paper_title: On Delivering Embarrassingly Distributed Cloud Services
paper_content:
Very large data centers are very expensive (servers, power/cooling, networking, physical plant.) Newer, geo-diverse, distributed or containerized designs offer a more economical alternative. We argue that a significant portion of cloud services are embarrassingly distributed – meaning there are high performance realizations that do not require massive internal communication among large server pools. We argue further that these embarrassingly distributed applications are a good match for realization in small distributed data center designs. We consider email delivery as an illustrative example. Geo-diversity in the design not only improves costs, scale and reliability, but also realizes advantages stemming from edge processing; in applications such as spam filtering, unwanted traffic can be blocked near the source to reduce transport costs.
---
paper_title: ECHOS: edge capacity hosting overlays of nano data centers
paper_content:
In this paper we propose a radical solution to data hosting and delivery for the Internet of the future. The current data delivery architecture is "network centric", with content stored in data centers connected directly to Internet backbones. This approach has multiple drawbacks among which complexity of deploying data centers, power consumption, and lack of scalability are the most critical. We propose a totally innovative and orthogonal approach to traditional data centers, through what we call "nano" data centers, which are deployed in boxes at the edge of the network (e.g., in home gateways, set-top-boxes, etc.) and accessed using a new peer-to-peer communication infrastructure. Unlike traditional peer-to-peer clients, however, our nano data centers operate under a common management authority, e.g., the ISP who installs and maintains the set-top-boxes, and can thus cooperate more effectively and achieve a higher aggregate performance. Nano data centers are, therefore, better suited for providing guaranteed quality to new emerging applications such as online gaming, interactive IPTV and VoD, and user generated content
---
paper_title: Energy-Efficient Dynamic Instruction Scheduling Logic through Instruction Grouping
paper_content:
Dynamic instruction scheduling logic is quite complex and dissipates significant energy in microprocessors that support superscalar and out-of-order execution. We propose a novel microarchitectural technique to reduce the complexity and energy consumption of the dynamic instruction scheduling logic. The proposed method groups several instructions as a single issue unit and reduces the required number of ports and the size of the structure for dispatch, wakeup, select, and issue. The present paper describes the microarchitecture mechanisms and shows evaluation results for energy savings and performance. These results reveal that the proposed technique can greatly reduce energy with almost no performance degradation, compared to the conventional dynamic instruction scheduling logic.
---
paper_title: Energy Aware Network Operations
paper_content:
Networking devices today consume a non-trivial amount of energy and it has been shown that this energy consumption is largely independent of the load through the devices. With a strong need to curtail the rising operational costs of IT infrastructure, there is a tremendous opportunity for introducing energy awareness in the design and operation of enterprise and data center networks. We focus on these networks as they are under the control of a single administrative domain in which network-wide control can be consistently applied. In this paper, we describe and analyze three approaches to saving energy in single administrative domain networks, without significantly impacting the networks' ability to provide the expected levels of performance and availability. We also explore the trade-offs between conserving energy and meeting performance and availability requirements. We conduct an extensive case study of our algorithms by simulating a real Web 2.0 workload in a real data center network topology using power characterizations that we obtain from real network hardware. Our results indicate that for our workload and data center scenario, 16% power savings (with no performance penalty and small decrease in availability) can be obtained merely by appropriately adjusting the active network elements (links). Significant additional savings (up to 75%) can be obtained by incorporating network traffic management and server workload consolidation.
---
paper_title: Every joule is precious: the case for revisiting operating system design for energy efficiency
paper_content:
By some estimates, there will be close to one billion wireless devices capable of Internet connectivity within five years, surpassing the installed base of traditional wired compute devices. These devices will take the form of cellular phones, personal digital assistants (PDA's), embedded processors, and "Internet appliances". This proliferation of networked computing devices will enable a number of compelling applications, centering around ubiquitous access to global information services, just in time delivery of personalized content, and tight synchronization among compute devices/appliances in our everyday environment. However, one of the principal challenges of realizing this vision in the post-PC environment is the need to reduce the energy consumed in using these next-generation mobile and wireless devices, thereby extending the lifetime of the batteries that power them. While the processing power, memory, and network bandwidth of post-PC devices are increasing exponentially, their battery capacity is improving at a more modest pace. Thus, to ensure the utility of post-PC applications, it is important to develop low-level mechanisms and higher-level policies to maximize energy efficiency. In this paper, we propose the systematic re-examination of all aspects of operating system design and implementation from the point of view of energy efficiency rather than the more traditional OS metric of maximizing performance. In [7], we made the case for energy as a first-class OS-managed resource. We emphasized the benefits of higher-level control over energy usage policy and the application/OS interactions required to achieve them. This paper explores the implications that this major shift in focus can have upon the services, policies, mechanisms, and internal structure of the OS itself based on our initial experiences with rethinking system design for energy efficiency. Our ultimate goal is to design an operating system where major components cooperate to explicitly optimize for energy efficiency. A number of research efforts have recently investigated aspects of energy-efficient operating systems (a good overview is available at [16, 20]) and we intend to leverage existing "best practice" in our own work where such results exist. However, we are not aware of any systems that systematically revisit system structure with energy in mind. Further, our examination of operating system functionality reveals a number of opportunities that have received little attention in the literature. To illustrate this point, Table 1 presents major operating system functionality, along with possible techniques for improving power consumption characteristics. Several of the techniques are well studied, such as disk spindown policies or adaptively trading content fidelity for power [8]. For example, to reduce power consumption for MPEG playback, the system could adapt to a smaller frame rate and window size, consuming less bandwidth and computation. One of the primary objectives of operating systems is allocating resources among competing tasks, typically for fairness and performance. Adding energy efficiency to the equation raises a number of interesting issues. For example, competing processes/users may be scheduled to receive a fair share of battery resources rather than CPU resources (e.g., an application that makes heavy use of disk I/O may be given lower priority relative to a compute-bound application when energy resources are low). Similarly, for tasks such as ad hoc routing, local battery resources are often consumed on behalf of remote processes. Fair allocation dictates that one battery is not drained in preference to others. Finally, for the communication subsystem, a number of efforts already investigate adaptively setting the polling rate for wireless networks (trading latency for energy). Our efforts to date have focused on the last four areas highlighted in Table 1. For memory allocation, our work explores how to exploit the ability of memory chips to transition among multiple power states. We also investigate metrics for picking energy-efficient routes in ad hoc networks, energy-efficient placement of distributed computation, and flexible RPC/name binding that accounts for power consumption. These last two points of resource allocation and remote communication highlight an interesting property for energy-aware OS design in the post-PC environment. Many tasks are distributed across multiple machines, potentially running on machines with widely varying CPU, memory, and power source characteristics. Thus, energy-aware OS design must closely cooperate with and track the characteristics of remote computers to balance the often conflicting goals of optimizing for energy and speed. The rest of this paper illustrates our approach with selected examples extracted from our recent efforts toward building an integrated hardware/software infrastructure that incorporates cooperative power management to support mobile and wireless applications. The instances we present in subsequent sections cover the resource management policies and mechanisms necessary to exploit low power modes of various (existing or proposed) hardware components, as well as power-aware communications and the essential role of the wide-area environment. We begin our discussion with the resources of a single machine and then extend it to the distributed context.
---
paper_title: Scheduling for reduced CPU energy
paper_content:
The energy usage of computer systems is becoming more important, especially for battery operated systems. Displays, disks, and cpus, in that order, use the most energy. Reducing the energy used by displays and disks has been studied elsewhere; this paper considers a new method for reducing the energy used by the cpu. We introduce a new metric for cpu energy performance, millions-of-instructions-per-joule (MIPJ). We examine a class of methods to reduce MIPJ that are characterized by dynamic control of system clock speed by the operating system scheduler. Reducing clock speed alone does not reduce MIPJ, since to do the same work the system must run longer. However, a number of methods are available for reducing energy with reduced clock-speed, such as reducing the voltage [Chandrakasan et al 1992][Horowitz 1993] or using reversible [Younis and Knight 1993] or adiabatic logic [Athas et al 1994]. What are the right scheduling algorithms for taking advantage of reduced clock-speed, especially in the presence of applications demanding ever more instructions-per-second? We consider several methods for varying the clock speed dynamically under control of the operating system, and examine the performance of these methods against workstation traces. The primary result is that by adjusting the clock speed at a fine grain, substantial CPU energy can be saved with a limited impact on performance.
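The MIPJ metric above can be made concrete with the usual CMOS rule that switched energy per instruction scales with C*V^2: halving the clock together with the supply voltage leaves the instruction count unchanged but cuts the energy per instruction by roughly four, at the price of a longer run time. The sketch below works through that arithmetic with invented constants; it illustrates the metric, not a model taken from the paper.

```python
# Illustrative MIPJ comparison for a fixed-size task at two clock/voltage operating
# points. Constants are invented; only the E ~ C*V^2 relation is standard CMOS practice.
def mipj(instructions_m, c_eff_f, vdd):
    """Millions of instructions per joule when each instruction switches c_eff_f farads at vdd volts."""
    energy_j = c_eff_f * vdd ** 2 * instructions_m * 1e6
    return instructions_m / energy_j

TASK_MI = 100.0      # job size, in millions of instructions
C_EFF   = 1e-9       # switched capacitance per instruction (invented)

full = mipj(TASK_MI, C_EFF, vdd=3.3)    # e.g. 100 MHz at 3.3 V
half = mipj(TASK_MI, C_EFF, vdd=1.65)   # 50 MHz at half the supply voltage

print(f"MIPJ at 3.3 V : {full:6.1f}")
print(f"MIPJ at 1.65 V: {half:6.1f}   (4x better, at the cost of ~2x longer run time)")
```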
---
paper_title: Theoretical and practical limits of dynamic voltage scaling
paper_content:
Dynamic voltage scaling (DVS) is a popular approach for energy reduction of integrated circuits. Current processors that use DVS typically have an operating voltage range from full to half of the maximum Vdd. However, it is possible to construct designs that operate over a much larger voltage range: from full Vdd to subthreshold voltages. This possibility raises the question of whether a larger voltage range improves the energy efficiency of DVS. First, from a theoretical point of view, we show that for subthreshold supply voltages leakage energy becomes dominant, making "just in time completion" energy inefficient. We derive an analytical model for the minimum energy optimal voltage and study its trends with technology scaling. Second, we use the proposed model to study the workload activity of an actual processor and analyze the energy efficiency as a function of the lower limit of voltage scaling. Based on this study, we show that extending the voltage range below 1/2 Vdd will improve the energy efficiency for most processor designs, while extending this range to subthreshold operation is beneficial only for very specific applications. Finally, we show that operation deep in the subthreshold voltage range is never energy-efficient.
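The argument that leakage makes "just in time completion" inefficient at very low voltages can be visualized with a toy energy model: dynamic energy falls with Vdd^2 while the run time, and hence the integrated leakage, blows up as Vdd approaches the threshold voltage. The functional forms below are rough textbook-style approximations and every constant is invented; the sweep simply shows that a minimum-energy voltage exists above the near-threshold region, as the paper argues.

```python
# Toy sweep of total energy vs. supply voltage for a fixed workload.
def energy(vdd, cycles=1e9, c_eff=1e-10, i_leak=5e-3, v_t=0.3):
    """Dynamic switching energy plus leakage integrated over the (voltage-dependent) run time."""
    dynamic = c_eff * vdd ** 2 * cycles
    # alpha-power-law style delay model (alpha = 2): frequency collapses near Vt
    freq_hz = (vdd - v_t) ** 2 / vdd * 1e9
    runtime_s = cycles / freq_hz
    leakage = i_leak * vdd * runtime_s       # static power times run time
    return dynamic + leakage

if __name__ == "__main__":
    points = [(v / 100, energy(v / 100)) for v in range(35, 125, 5)]
    for vdd, e in points:
        print(f"Vdd = {vdd:4.2f} V   E = {e:7.4f} J")
    v_opt = min(points, key=lambda p: p[1])[0]
    print(f"energy-minimal supply voltage ~ {v_opt:.2f} V (well above the 0.3 V threshold)")
```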
---
paper_title: A study of thread migration in temperature-constrained multicores
paper_content:
Temperature has become an important constraint in high-performance processors, especially multicores. Thread migration will be essential to exploit the full potential of future thermally constrained multicores. We propose and study a thread migration method that maximizes performance under a temperature constraint, while minimizing the number of migrations and ensuring fairness between threads. We show that thread migration brings important performance gains and that it is most effective during the first tens of seconds following a decrease of the number of running threads.
---
paper_title: Power provisioning for a warehouse-sized computer
paper_content:
Large-scale Internet services require a computing infrastructure that can be appropriately described as a warehouse-sized computing system. The cost of building datacenter facilities capable of delivering a given power capacity to such a computer can rival the recurring energy consumption costs themselves. Therefore, there are strong economic incentives to operate facilities as close as possible to maximum capacity, so that the non-recurring facility costs can be best amortized. That is difficult to achieve in practice because of uncertainties in equipment power ratings and because power consumption tends to vary significantly with the actual computing activity. Effective power provisioning strategies are needed to determine how much computing equipment can be safely and efficiently hosted within a given power budget. In this paper we present the aggregate power usage characteristics of large collections of servers (up to 15 thousand) for different classes of applications over a period of approximately six months. Those observations allow us to evaluate opportunities for maximizing the use of the deployed power capacity of datacenters, and assess the risks of over-subscribing it. We find that even in well-tuned applications there is a noticeable gap (7-16%) between achieved and theoretical aggregate peak power usage at the cluster level (thousands of servers). The gap grows to almost 40% in whole datacenters. This headroom can be used to deploy additional compute equipment within the same power budget with minimal risk of exceeding it. We use our modeling framework to estimate the potential of power management schemes to reduce peak power and energy usage. We find that the opportunities for power and energy savings are significant, but greater at the cluster-level (thousands of servers) than at the rack-level (tens). Finally we argue that systems need to be power efficient across the activity range, and not only at peak performance levels.
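One practical reading of the headroom argument above is to size deployments with a measured, utilization-based power model instead of nameplate ratings. The sketch below uses the common linear idle-to-peak model with invented idle, peak and budget figures; only the qualitative "observed peak is well below theoretical peak" reasoning mirrors the abstract.

```python
# Illustrative provisioning arithmetic with a linear utilization-based power model.
# All wattage and budget figures are invented.
def server_power(util, p_idle=100.0, p_peak=250.0):
    """Simple linear model: power grows from idle to peak with utilization in [0, 1]."""
    return p_idle + (p_peak - p_idle) * util

budget_w        = 1_000_000            # facility budget: 1 MW of critical power
nameplate_w     = 300.0                # per-server label rating
observed_peak_w = server_power(0.85)   # measured worst-case utilization, e.g. 85%

naive    = int(budget_w // nameplate_w)
measured = int(budget_w // observed_peak_w)
print(f"servers by nameplate rating : {naive}")
print(f"servers by observed peak    : {measured}  (+{measured / naive - 1:.0%} headroom)")
```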
---
paper_title: Managing energy and server resources in hosting centers
paper_content:
Internet hosting centers serve multiple service sites from a common hardware base. This paper presents the design and implementation of an architecture for resource management in a hosting center operating system, with an emphasis on energy as a driving resource management issue for large server clusters. The goals are to provision server resources for co-hosted services in a way that automatically adapts to offered load, improve the energy efficiency of server clusters by dynamically resizing the active server set, and respond to power supply disruptions or thermal events by degrading service in accordance with negotiated Service Level Agreements (SLAs).Our system is based on an economic approach to managing shared server resources, in which services "bid" for resources as a function of delivered performance. The system continuously monitors load and plans resource allotments by estimating the value of their effects on service performance. A greedy resource allocation algorithm adjusts resource prices to balance supply and demand, allocating resources to their most efficient use. A reconfigurable server switching infrastructure directs request traffic to the servers assigned to each service. Experimental results from a prototype confirm that the system adapts to offered load and resource availability, and can reduce server energy usage by 29% or more for a typical Web workload.
---
paper_title: NOX: towards an operating system for networks
paper_content:
As anyone who has operated a large network can attest, enterprise networks are difficult to manage. That they have remained so despite significant commercial and academic efforts suggests the need for a different network management paradigm. Here we turn to operating systems as an instructive example in taming management complexity. In the early days of computing, programs were written in machine languages that had no common abstractions for the underlying physical resources. This made programs hard to write, port, reason about, and debug. Modern operating systems facilitate program development by providing controlled access to high-level abstractions for resources (e.g., memory, storage, communication) and information (e.g., files, directories). These abstractions enable programs to carry out complicated tasks safely and efficiently on a wide variety of computing hardware. In contrast, networks are managed through low-level configuration of individual components. Moreover, these configurations often depend on the underlying network; for example, blocking a user’s access with an ACL entry requires knowing the user’s current IP address. More complicated tasks require more extensive network knowledge; forcing guest users’ port 80 traffic to traverse an HTTP proxy requires knowing the current network topology and the location of each guest. In this way, an enterprise network resembles a computer without an operating system, with network-dependent component configuration playing the role of hardware-dependent machine-language programming. What we clearly need is an “operating system” for networks, one that provides a uniform and centralized programmatic interface to the entire network. Analogous to the read and write access to various resources provided by computer operating systems, a network operating system provides the ability to observe and control a network. A network operating system does not manage the network itself; it merely provides a programmatic interface. Applications implemented on top of the network operating system perform the actual management tasks. The programmatic interface should be general enough to support a broad spectrum of network management applications. Such a network operating system represents two major conceptual departures from the status quo. First, the network operating system presents programs with a centralized programming model; programs are written as if the entire network were present on a single machine (i.e., one would use Dijkstra to compute shortest paths, not Bellman-Ford). This requires (as in [3, 8, 14] and elsewhere) centralizing network state. Second, programs are written in terms of high-level abstractions (e.g., user and host names), not low-level configuration parameters (e.g., IP and MAC addresses). This allows management directives to be enforced independent of the underlying network topology, but it requires that the network operating system carefully maintain the bindings (i.e., mappings) between these abstractions and the low-level configurations. Thus, a network operating system allows management applications to be written as centralized programs over highlevel names as opposed to the distributed algorithms over low-level addresses we are forced to use today. While clearly a desirable goal, achieving this transformation from distributed algorithms to centralized programming presents significant technical challenges, and the question we pose here is: Can one build a network operating system at significant scale?
---
paper_title: A survey of energy efficient network protocols for wireless networks
paper_content:
Wireless networking has witnessed an explosion of interest from consumers in recent years for its applications in mobile and personal communications. As wireless networks become an integral component of the modern communication infrastructure, energy efficiency will be an important design consideration due to the limited battery life of mobile terminals. Power conservation techniques are commonly used in the hardware design of such systems. Since the network interface is a significant consumer of power, considerable research has been devoted to low-power design of the entire network protocol stack of wireless networks in an effort to enhance energy efficiency. This paper presents a comprehensive summary of recent work addressing energy efficient and low-power design within all layers of the wireless network protocol stack.
---
paper_title: Reducing Power Consumption in Backbone Networks
paper_content:
According to several studies, the power consumption of the Internet accounts for up to 10% of the worldwide energy consumption, and several initiatives are being put into place to reduce the power consumption of the ICT sector in general. To this goal, we propose a novel approach to switch off network nodes and links while still guaranteeing full connectivity and maximum link utilization. After showing that the problem falls in the class of capacitated multi-commodity flow problems, and therefore it is NP-complete, we propose some heuristic algorithms to solve it. Simulation results in a realistic scenario show that it is possible to reduce the number of links and nodes currently used by up to 30% and 50% respectively during off-peak hours, while offering the same service quality.
---
paper_title: The Case for Energy-Proportional Computing
paper_content:
Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems.
---
paper_title: Scheduling for reduced CPU energy
paper_content:
The energy usage of computer systems is becoming more important, especially for battery operated systems. Displays, disks, and cpus, in that order, use the most energy. Reducing the energy used by displays and disks has been studied elsewhere; this paper considers a new method for reducing the energy used by the cpu. We introduce a new metric for cpu energy performance, millions-of-instructions-per-joule (MIPJ). We examine a class of methods to reduce MIPJ that are characterized by dynamic control of system clock speed by the operating system scheduler. Reducing clock speed alone does not reduce MIPJ, since to do the same work the system must run longer. However, a number of methods are available for reducing energy with reduced clock-speed, such as reducing the voltage [Chandrakasan et al 1992][Horowitz 1993] or using reversible [Younis and Knight 1993] or adiabatic logic [Athas et al 1994]. What are the right scheduling algorithms for taking advantage of reduced clock-speed, especially in the presence of applications demanding ever more instructions-per-second? We consider several methods for varying the clock speed dynamically under control of the operating system, and examine the performance of these methods against workstation traces. The primary result is that by adjusting the clock speed at a fine grain, substantial CPU energy can be saved with a limited impact on performance.
---
paper_title: Managing energy consumption costs in desktop PCs and LAN switches with proxying, split TCP connections, and scaling of link speed
paper_content:
The IT equipment comprising the Internet in the USA uses about $6 billion of electricity every year. Much of this electricity use is wasted on idle, but fully powered-up, desktop PCs and network links. We show how to recover a large portion of the wasted electricity with improved power management methods that are focused on network issues.
---
paper_title: Skilled in the Art of Being Idle: Reducing Energy Waste in Networked Systems
paper_content:
Networked end-systems such as desktops and set-top boxes are often left powered-on, but idle, leading to wasted energy consumption. An alternative would be for these idle systems to enter low-power sleep modes. Unfortunately, today, a sleeping system sees degraded functionality: first, a sleeping device loses its network "presence" which is problematic to users and applications that expect to maintain access to a remote machine and, second, sleeping can prevent running tasks scheduled during times of low utilization (e.g., network backups). Various solutions to these problems have been proposed over the years including wake-on-lan (WoL) mechanisms that wake hosts when specific packets arrive, and the use of a proxy that handles idle-time traffic on behalf of a sleeping host. As of yet, however, an in-depth evaluation of the potential for energy savings, and the effectiveness of proposed solutions has not been carried out. To remedy this, in this paper, we collect data directly from 250 enterprise users on their end-host machines capturing network traffic patterns and user presence indicators. With this data, we answer several questions: what is the potential value of proxying or using magic packets? which protocols and applications require proxying? how comprehensive does proxying need to be for energy benefits to be compelling? and so on. We find that, although there is indeed much potential for energy savings, trivial approaches are not effective. We also find that achieving substantial savings requires a careful consideration of the tradeoffs between the proxy complexity and the idle-time functionality available to users, and that these tradeoffs vary with user environment. Based on our findings, we propose and evaluate a proxy architecture that exposes a minimal set of APIs to support different forms of idle-time behavior.
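The proxy architecture mentioned above has to decide, per packet, whether to respond on behalf of the sleeping host, ignore the traffic, or wake the host. The toy rule table below is an invented illustration of that three-way split; the paper's proxy exposes richer, application-aware APIs.

```python
# Toy illustration of the respond/ignore/wake split an idle-time proxy makes.
# The rule table and packet labels are invented for the example.
RULES = {
    "arp_request":   "respond",   # keep network presence without waking the host
    "icmp_echo":     "respond",
    "ssh_syn":       "wake",      # the user actually wants the machine
    "netbios_bcast": "ignore",    # chatty broadcast traffic, safe to drop
}

def proxy_decision(packet_kind):
    return RULES.get(packet_kind, "wake")    # unknown traffic: wake to be safe

if __name__ == "__main__":
    trace = ["arp_request", "netbios_bcast", "icmp_echo", "ssh_syn", "arp_request"]
    decisions = [proxy_decision(p) for p in trace]
    woken = decisions.count("wake")
    print(list(zip(trace, decisions)))
    print(f"host wake-ups avoided: {len(trace) - woken} of {len(trace)} packets")
```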
---
paper_title: Greening of the internet
paper_content:
In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment.
---
paper_title: Theoretical and practical limits of dynamic voltage scaling
paper_content:
Dynamic voltage scaling (DVS) is a popular approach for energy reduction of integrated circuits. Current processors that use DVS typically have an operating voltage range from full to half of the maximum Vdd. However, it is possible to construct designs that operate over a much larger voltage range: from full Vdd to subthreshold voltages. This possibility raises the question of whether a larger voltage range improves the energy efficiency of DVS. First, from a theoretical point of view, we show that for subthreshold supply voltages leakage energy becomes dominant, making "just in time completion" energy inefficient. We derive an analytical model for the minimum energy optimal voltage and study its trends with technology scaling. Second, we use the proposed model to study the workload activity of an actual processor and analyze the energy efficiency as a function of the lower limit of voltage scaling. Based on this study, we show that extending the voltage range below 1/2 Vdd will improve the energy efficiency for most processor designs, while extending this range to subthreshold operation is beneficial only for very specific applications. Finally, we show that operation deep in the subthreshold voltage range is never energy-efficient.
---
paper_title: Run-time Energy Consumption Estimation Based on Workload in Server Systems
paper_content:
This paper proposes to develop a system-wide energy consumption model for servers by making use of hardware performance counters and experimental measurements. We develop a real-time energy prediction model that relates server energy consumption to its overall thermal envelope. While previous studies have attempted system-wide modeling of server power consumption through subsystem models, our approach is different in that it uses a small set of tightly correlated parameters to create a model relating system energy input to subsystem energy consumption. We develop a linear regression model that turns processor power, bus activity, and system ambient temperature into real-time predictions of the power consumption of long jobs, which in turn allows their thermal impact to be controlled. Using the HyperTransport bus model as a case study and through electrical measurements on example server subsystems, we develop a statistical model for estimating run-time power consumption. Our model is accurate within an error of four percent (4%), as verified using a set of common processor benchmarks.
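A minimal sketch of the kind of linear power model described above, fit by ordinary least squares on synthetic data, is shown below. The predictors follow the abstract (processor power, bus activity, ambient temperature), but the data, coefficients and error figure are invented; the paper's model is calibrated against electrical measurements.

```python
# Sketch of fitting a linear run-time power model with ordinary least squares.
# The training data are synthetic; only the choice of predictors follows the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 200
cpu_power    = rng.uniform(20, 95, n)     # W
bus_activity = rng.uniform(0.0, 1.0, n)   # normalized transaction rate
ambient_c    = rng.uniform(18, 30, n)     # degrees Celsius

# "Ground truth" system power plus measurement noise (purely synthetic).
system_power = 60 + 1.3 * cpu_power + 40 * bus_activity + 0.8 * ambient_c \
               + rng.normal(0, 2.0, n)

X = np.column_stack([np.ones(n), cpu_power, bus_activity, ambient_c])
coef, *_ = np.linalg.lstsq(X, system_power, rcond=None)
pred = X @ coef
err = np.mean(np.abs(pred - system_power) / system_power)
print("fitted coefficients:", np.round(coef, 2))
print(f"mean relative error: {err:.1%}")
```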
---
paper_title: A Comparison of High-Level Full-System Power Models
paper_content:
Dynamic power management in enterprise environments requires an understanding of the relationship between resource utilization and system-level power consumption. Power models based on resource utilization have been proposed in the context of enabling specific energy-efficiency optimizations on specific machines, but the accuracy and portability of different approaches to modeling have not been systematically compared. In this work, we use a common infrastructure to fit a family of high-level full-system power models, and we compare these models over a wide variation of workloads and machines, from a laptop to a server. This analysis shows that a model based on OS utilization metrics and CPU performance counters is generally most accurate across the machines and workloads tested. It is particularly useful for machines whose dynamic power consumption is not dominated by the CPU, as well as machines with aggressively power-managed CPUs, two classes of systems that are increasingly prevalent.
---
paper_title: Fine-grained energy profiling for power-aware application design
paper_content:
Significant opportunities for power optimization exist at application design stage and are not yet fully exploited by system and application designers. We describe the challenges developers face in optimizing software for energy efficiency by exploiting application-level knowledge. To address these challenges, we propose the development of automated tools that profile the energy usage of various resource components used by an application and guide the design choices accordingly. We use a preliminary version of a tool we have developed to demonstrate how automated energy profiling helps a developer choose between alternative designs in the energy-performance trade-off space.
---
paper_title: An Analysis of Hard Drive Energy Consumption
paper_content:
The increasing storage capacity and necessary redundancy of data centers and other large-scale IT facilities has drawn attention to the issue of reducing the power consumption of hard drives. This work comprehensively investigates the power consumption of hard drives to determine typical runtime power profiles. We have instrumented drives at a fine-grained level and present our findings, which show that (i) the energy consumed by the electronics of a drive is just as important as the mechanical energy consumption; (ii) the energy required to access data is affected by physical location on a drive; and (iii) the size of data transfers has a measurable effect on power consumption.
---
paper_title: Power Awareness in Network Design and Routing
paper_content:
Exponential bandwidth scaling has been a fundamental driver of the growth and popularity of the Internet. However, increases in bandwidth have been accompanied by increases in power consumption, and despite sustained system design efforts to address power demand, significant technological challenges remain that threaten to slow future bandwidth growth. In this paper we describe the power and associated heat management challenges in today's routers. We advocate a broad approach to addressing this problem that includes making power-awareness a primary objective in the design and configuration of networks, and in the design and implementation of network protocols. We support our arguments by providing a case study of power demands of two standard router platforms that enables us to create a generic model for router power consumption. We apply this model in a set of target network configurations and use mixed integer optimization techniques to investigate power consumption, performance and robustness in static network design and in dynamic routing. Our results indicate the potential for significant power savings in operational networks by including power-awareness.
---
paper_title: Energy Consumption of Residential and Professional Switches
paper_content:
Precise evaluation of network appliance energy consumption is necessary to accurately model or simulate the power consumption of distributed systems. In this paper we evaluate the influence of traffic on the electrical power consumption of four switches found in home and professional environments. First we describe our measurement and data analysis approach, and how our results can be used for estimating the power consumption when the average traffic bandwidth is known. Then we present the measurement results of two residential switches and two professional switches. For each type we present regression models and parameters describing their quality. Similar to other works, we find that for one of the switches the power consumption actually drops for high traffic loads, while for the others the situation is reversed. Our measurements show that, for most energy consumption evaluations, network appliance energy cost can be approximated as constant. This work also gives an indication of how much this cost can vary.
---
paper_title: Energy Consumption of the Internet
paper_content:
As concerns about global energy consumption increase, the power consumption of the Internet is a matter of increasing importance. We present a network-based model that estimates Internet power consumption including the core, metro, and access networks.
---
paper_title: A feasibility study for power management in LAN switches
paper_content:
We examine the feasibility of introducing power management schemes in network devices in the LAN. Specifically, we investigate the possibility of putting various components on LAN switches to sleep during periods of low traffic activity. Traffic collected in our LAN indicates that there are significant periods of inactivity on specific switch interfaces. Using an abstract sleep model devised for LAN switches, we examine the potential energy savings possible for different times of day and different interfaces (e.g., interfaces connecting to hosts to switches, or interfaces connecting switches, or interfaces connecting switches and routers). Algorithms developed for sleeping, based on periodic protocol behavior as well as traffic estimation are shown to be capable of conserving significant amounts of energy. Our results show that sleeping is indeed feasible in the LAN and in some cases, with very little impact on other protocols. However, we note that in order to maximize energy savings while minimizing sleep-related losses, we need hardware that supports sleeping.
---
paper_title: The Case for Energy-Proportional Computing
paper_content:
Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems.
---
paper_title: NGL02-2: Ethernet Adaptive Link Rate (ALR): Analysis of a Buffer Threshold Policy
paper_content:
Rapidly increasing energy use by computing and communications equipment is a significant problem that needs to be addressed. Ethernet network interface controllers (NICs) consume hundreds of millions of US$ in electricity per year. Most Ethernet links are underutilized and link power consumption can be reduced by operating at lower data rates. An output buffer threshold policy to change link data rate in response to utilization is investigated. Analytical and simulation models are developed to evaluate the performance of Adaptive Link Rate (ALR) with respect to mean packet delay and time spent in low data rate with Poisson traffic and 100 Mb/s network traces as inputs. A Markov model of a state-dependent service rate queue with rate transitions only at service completion is developed. For the traffic traces, it is found that a link can operate at 10 Mb/s for over 99% of the time yielding energy savings with no user-perceivable increase in packet delay.
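The buffer threshold policy analysed above can be sketched as a dual-threshold (hysteresis) rule on the output queue: drop to the low rate when the queue is small, return to the high rate when it exceeds an upper threshold. The rates, threshold values and synthetic traffic below are illustrative assumptions; the paper studies the policy analytically and against 100 Mb/s traces.

```python
# Minimal sketch of an ALR-style dual-threshold link rate policy. All numbers are
# arbitrary illustrations, not the parameters analysed in the paper.
LOW_RATE, HIGH_RATE = 10e6, 100e6     # bits per second
Q_LOW, Q_HIGH = 4_000, 32_000         # queue thresholds in bytes

class AlrLink:
    def __init__(self):
        self.rate = LOW_RATE
        self.queue_bytes = 0

    def on_tick(self, arrived_bytes, tick_s=0.001):
        self.queue_bytes += arrived_bytes
        # dual-threshold rate decision (hysteresis avoids rapid oscillation)
        if self.queue_bytes > Q_HIGH and self.rate == LOW_RATE:
            self.rate = HIGH_RATE
        elif self.queue_bytes < Q_LOW and self.rate == HIGH_RATE:
            self.rate = LOW_RATE
        drained = int(self.rate / 8 * tick_s)
        self.queue_bytes = max(0, self.queue_bytes - drained)
        return self.rate

if __name__ == "__main__":
    link, low_ticks = AlrLink(), 0
    traffic = [500] * 800 + [10_000] * 100 + [500] * 800   # bytes per 1 ms tick
    for b in traffic:
        low_ticks += link.on_tick(b) == LOW_RATE
    print(f"time spent at 10 Mb/s: {low_ticks / len(traffic):.0%}")
```

With this synthetic trace the link sits at the low rate for the vast majority of ticks, qualitatively echoing the paper's finding that real links can stay at 10 Mb/s most of the time.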
---
paper_title: Managing energy consumption costs in desktop PCs and LAN switches with proxying, split TCP connections, and scaling of link speed
paper_content:
The IT equipment comprising the Internet in the USA uses about $6 billion of electricity every year. Much of this electricity use is wasted on idle, but fully powered-up, desktop PCs and network links. We show how to recover a large portion of the wasted electricity with improved power management methods that are focused on network issues.
---
paper_title: Using Low-Power Modes for Energy Conservation in Ethernet LANs
paper_content:
Most Ethernet interfaces available for deployment in switches and hosts today can operate in a variety of different low power modes. However, currently these modes have very limited usage models. They do not take advantage of periods of inactivity, when the links remain idle or under-utilized. In this study, we propose methods that allow for detection of such periods to obtain energy savings with little impact on loss or delay. We evaluate our methods on a wide range of real-time traffic traces collected at a high-speed backbone switch within our campus LAN. Our results show that Ethernet interfaces at both ends can be put in extremely low power modes anywhere from 40%-98% of the time observed. In addition, we found that approximately 37% of interfaces studied (on the same switch) can be put in low power modes simultaneously which opens the potential for further energy savings in the switching fabric within the switch.
---
paper_title: A Power Benchmarking Framework for Network Devices
paper_content:
Energy efficiency is becoming increasingly important in the operation of networking infrastructure, especially in enterprise and data center networks. Researchers have proposed several strategies for energy management of networking devices. However, we need a comprehensive characterization of power consumption by a variety of switches and routers to accurately quantify the savings from the various power savings schemes. In this paper, we first describe the hurdles in network power instrumentation and present a power measurement study of a variety of networking gear such as hubs, edge switches, core switches, routers and wireless access points in both stand-alone mode and a production data center. We build and describe a benchmarking suite that will allow users to measure and compare the power consumed for a large set of common configurations at any switch or router of their choice. We also propose a network energy proportionality index, which is an easily measurable metric, to compare power consumption behaviors of multiple devices.
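The proposed network energy proportionality index is described as easily measurable from a device's power consumption behaviour. One simple way to express such an index from idle and full-load power is sketched below; the exact definition used in the paper may differ, so treat this as an approximation, and the example wattages are invented.

```python
# Approximate energy-proportionality index from two measurements: idle power and
# power at full load. A perfectly proportional device (zero idle power) scores 100,
# a constant-power device scores 0. The paper's exact definition may differ.
def energy_proportionality_index(p_idle_w, p_peak_w):
    return 100.0 * (p_peak_w - p_idle_w) / p_peak_w

# Invented example numbers for an edge switch and a server.
print("switch EPI:", round(energy_proportionality_index(90.0, 102.0), 1))   # ~11.8
print("server EPI:", round(energy_proportionality_index(160.0, 300.0), 1))  # ~46.7
```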
---
paper_title: Reducing Network Energy Consumption via Sleeping and Rate-Adaptation
paper_content:
We present the design and evaluation of two forms of power management schemes that reduce the energy consumption of networks. The first is based on putting network components to sleep during idle times, reducing energy consumed in the absence of packets. The second is based on adapting the rate of network operation to the offered workload, reducing the energy consumed when actively processing packets. For real-world traffic workloads and topologies and using power constants drawn from existing network equipment, we show that even simple schemes for sleeping or rate-adaptation can offer substantial savings. For instance, our practical algorithms stand to halve energy consumption for lightly utilized networks (10-20%). We show that these savings approach the maximum achievable by any algorithms using the same power management primitives. Moreover this energy can be saved without noticeably increasing loss and with a small and controlled increase in latency (<10ms). Finally, we show that both sleeping and rate adaptation are valuable depending (primarily) on the power profile of network equipment and the utilization of the network itself.
---
paper_title: Greening the Switch
paper_content:
Active research is being conducted in reducing power consumption of all the components of the Internet. To that end, we propose schemes for power reduction in network switches -- Time Window Prediction, Power Save Mode and Lightweight Alternative. These schemes are adaptive to changing traffic patterns and automatically tune their parameters to guarantee a bounded and specified increase in latency. We propose a novel architecture for buffering ingress packets using shadow ports. We test our schemes on packet traces obtained from an enterprise network, and evaluate them using realistic power models for the switches. Our simple power reduction schemes produce power savings of up to 32% with minimal increase in latency or packet loss. With appropriate hardware support in the form of Wake-on-Packet, shadow ports and fast transitioning of the ports between their high and low power states, these savings reach 90% of the optimal algorithm's savings.
---
paper_title: Power-Proxying on the NIC: A Case Study with the Gnutella File-Sharing Protocol
paper_content:
Edge devices such as desktop and laptop computers constitute a majority of the devices connected to the Internet today. Peer-to-Peer (P2P) file-sharing applications generally require edge devices to maintain network presence whenever possible to enhance the robustness of the file-sharing network, which in turn can lead to considerable wastage of energy. We show that energy can be saved by permitting edge devices to enter into standby state and still maintain network connectivity by proxying protocols in the Network Interface Card (NIC).
---
paper_title: Greening of the internet
paper_content:
In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment.
---
paper_title: JouleSort: a balanced energy-efficiency benchmark
paper_content:
The energy efficiency of computer systems is an important concern in a variety of contexts. In data centers, reducing energy use improves operating cost, scalability, reliability, and other factors. For mobile devices, energy consumption directly affects functionality and usability. We propose and motivate JouleSort, an external sort benchmark, for evaluating the energy efficiency of a wide range of computer systems from clusters to handhelds. We list the criteria, challenges, and pitfalls from our experience in creating a fair energy-efficiency benchmark. Using a commercial sort, we demonstrate a JouleSort system that is over 3.5x as energy-efficient as last year's estimated winner. This system is quite different from those currently used in data centers. It consists of a commodity mobile CPU and 13 laptop drives connected by server-style I/O interfaces.
---
paper_title: Energy-Aware Backbone Networks: A Case Study
paper_content:
Power consumption of ICT is becoming an increasingly pressing problem, which is of interest to the research community, to ISPs, and to the general public. In this paper we consider a real IP backbone network and a real traffic profile. We evaluate the energy cost of running it, and, speculating on the possibility of selectively turning off spare devices whose capacity is not required to transport off-peak traffic, we show that it is possible to easily achieve more than 23% energy savings per year, i.e., to save about 3 GWh/year considering today's power footprint of real network devices.
---
paper_title: On reliability, performance and Internet power consumption
paper_content:
With the increasing concern for global warming, the impact of Internet power consumption is gaining interest. In this paper, we explore, for the first time, the relationship between network robustness, performance and Internet power consumption. We first discuss such a relationship based on data collected from Internet sources. Next, we propose a modeling framework to size that relationship. It is shown that when designing networks based on power consumption, careful attention should be paid to the trade-off between energy consumption and network performance since doing otherwise would lead to unreliable networks.
---
paper_title: Energy-aware routing: A reality check
paper_content:
In this work, we analyze the design of green routing algorithms and evaluate the achievable energy savings that such mechanisms could allow in several realistic network scenarios. We formulate the problem as a minimum energy routing optimization, which we numerically solve considering a core-network scenario, which can be seen as a worst-case for energy saving performance (as nodes cannot be switched off). To gather full-relief results, we analyze the energy savings in various conditions (i.e., network topology and traffic matrix) and under different technology assumptions (i.e., the energy profile of the network devices). These results give us insight into the potential benefits of different “green” technologies and their interactions. In particular, we show that depending on the topology and traffic matrices, the optimal energy savings can be modest, partly limiting the interest for green routing approaches for some scenarios. At the same time, we also show that the common belief that there is a trade off between green network optimization and performance does not necessarily hold: in the considered environment, green routing has no effect on the main network performances such as maximum link utilization.
---
| Title: A Survey of Green Networking Research
Section 1: Why Save Energy
Description 1: Discuss the environmental, economic, and political motivations for reducing energy consumption in networking.
Section 2: Where to Save Energy
Description 2: Identify key areas within networks, such as core and access networks, where significant energy savings can be realized.
Section 3: Definition of Green Networking
Description 3: Define what green networking entails, including using renewable energy and designing energy-efficient components.
Section 4: Green Strategies
Description 4: Overview different paradigms and strategies for reducing energy consumption in networking, such as resource consolidation and proportional computing.
Section 5: Taxonomy of Green Networking Research
Description 5: Introduce a taxonomy to categorize green networking solutions based on criteria such as timescale, architectural layer, and scope.
Section 6: Adaptive Link Rate
Description 6: Explore strategies for adapting link rates to reduce energy consumption, including sleeping mode and rate switching.
Section 7: Interface Proxying
Description 7: Discuss methods for offloading network traffic processing to low-power proxies to extend idle periods of energy-hungry devices.
Section 8: Energy-Aware Infrastructure
Description 8: Examine solutions for designing and managing energy-aware network infrastructures, including clean-slate and incremental approaches.
Section 9: Energy-Aware Applications
Description 9: Detail how software and applications can be redesigned to be more energy-efficient, including protocol modifications and user-level applications.
Section 10: At the Network Edge
Description 10: Briefly overview research trends in related areas such as computer and data-center architectures and wireless networking.
Section 11: Measurement and Models
Description 11: Discuss the importance of power modeling and measurement for evaluating green networking solutions and propose common benchmarks and metrics.
Section 12: Conclusion
Description 12: Summarize the survey findings, highlight the maturity of different research areas, and outline future research directions. |
A Survey on Medium Access Control (MAC) for Clustering Wireless Sensor Network | 6 | ---
paper_title: Meta-survey on medium access control surveys in wireless sensor networks
paper_content:
The medium access control layer has been a hotbed of research in the field of wireless sensor networks, since a huge amount of research in wireless sensor networks has focused on saving energy and, lately, on reducing latency. For the most part, medium access control layer solutions in wireless sensor networks are entrusted to save energy, to reduce latency, and at times to ensure reliability, perhaps through cross-layer solutions. In this article, we review the surveys on the medium access control layer and point out to readers some of their strengths and their relevance to medium access control layer protocols. Furthermore, we classify the surveys subject-wise to show trends and usability. To cross-compare, we devise a unified lexicon for the purpose and expose the coverage given to wireless sensor networks' medium access control protocols by different studies. We present the popularity of medium access control solutions using a known but hitherto unused metric: average citations/year, computed from over 200 medium access control solutions. ...
---
paper_title: Routing Techniques in Wireless Sensor Networks: A Survey
paper_content:
Wireless sensor networks consist of small nodes with sensing, computation, and wireless communications capabilities. Many routing, power management, and data dissemination protocols have been specifically designed for WSNs, where energy awareness is an essential design issue. Routing protocols in WSNs might differ depending on the application and network architecture. In this article we present a survey of state-of-the-art routing techniques in WSNs. We first outline the design challenges for routing protocols in WSNs, followed by a comprehensive survey of routing techniques. Overall, the routing techniques are classified into three categories based on the underlying network structure: flat, hierarchical, and location-based routing. Furthermore, these protocols can be classified into multipath-based, query-based, negotiation-based, QoS-based, and coherent-based depending on the protocol operation. We study the design trade-offs between energy and communication overhead savings in every routing paradigm. We also highlight the advantages and performance issues of each routing technique. The article concludes with possible future research areas.
---
paper_title: QTSAC: An Energy-Efficient MAC Protocol for Delay Minimization in Wireless Sensor Networks
paper_content:
Millions of sensors are deployed to monitor the smart grid. They consume huge amounts of energy in the communication infrastructure. Therefore, the establishment of an energy-efficient medium access control (MAC) protocol for sensor nodes is challenging and urgently needed. A Quorum-based MAC protocol independently and adaptively schedules nodes' wake-up times and decreases idle listening and collisions, thereby increasing the network throughput and extending the network lifetime. A novel Quorum time slot adaptive condensing (QTSAC)-based MAC protocol is proposed for achieving delay minimization and energy efficiency in wireless sensor networks (WSNs). Compared to previous protocols, the QTSAC-based MAC protocol has two main novelties: 1) it selects more Quorum time slots (QTSs) than previous protocols in the area that is far from the sink, according to the energy consumption in WSNs, to decrease the network latency; and 2) it allocates QTSs only when data are transmitted, to further decrease the network latency. Theoretical analyses and experimental results indicate that the QTSAC protocol can greatly improve network performance compared with existing Quorum-based MAC protocols. For intermediate-scale wireless sensor networks, the method that is proposed in this paper can enhance the energy efficiency by 24.64%–82.75%, prolong the network lifetime by 58%–27.31%, and lower the network latency by 3.59%–29.23%.
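Quorum-based wake-up scheduling, which QTSAC builds on, relies on the property that any two nodes' awake-slot sets intersect in every cycle. The classic grid quorum below illustrates that property only; it is background for the abstract, not QTSAC's adaptive slot-condensing rule, and the cycle length is an arbitrary example.

```python
# Background sketch of the grid-quorum idea: in a cycle of k*k slots, each node stays
# awake during one full row and one full column of a k x k grid, so any two nodes
# share at least two awake slots per cycle. This is not QTSAC's slot-selection rule.
def grid_quorum(row, col, k):
    """Awake slots for a node that picked (row, col) in a k-by-k slot grid."""
    row_slots = {row * k + c for c in range(k)}
    col_slots = {r * k + col for r in range(k)}
    return row_slots | col_slots          # 2k - 1 awake slots out of k*k

if __name__ == "__main__":
    k = 5                                  # 25-slot cycle, 9 awake slots per node
    a = grid_quorum(1, 3, k)
    b = grid_quorum(4, 0, k)
    print("node A awake slots:", sorted(a))
    print("node B awake slots:", sorted(b))
    print("guaranteed rendezvous slots:", sorted(a & b))
    print(f"duty cycle per node: {len(a)}/{k * k}")
```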
---
paper_title: WiSeN: A new sensor node for smart applications with wireless sensor networks
paper_content:
Although the field application of Wireless Sensor Networks (WSNs) is widely varied, their use in practice is nominal because existing sensor nodes do not meet necessary requirements, namely sensor integration with nodes and the sending of sensed information to remote stations. In fact, sensor nodes face a number of challenges, such as scripts that cannot be fully changed, network routing algorithms not being written, and not receiving data from most sensors. Therefore, a new sensor node (called WiSeN) has been developed, which aims to be usable in all WSN applications. WiSeN has been developed so that unlimited numbers and multiple types of sensors can be connected, and it was designed to create a new and user-friendly alternative to sensors currently being used.
---
paper_title: Design guidelines for wireless sensor networks: communication, clustering and aggregation
paper_content:
When sensor nodes are organized in clusters, they could use either single hop or multi-hop mode of communication to send their data to their respective cluster heads. We present a systematic cost-based analysis of both the modes, and provide results that could serve as guidelines to decide which mode should be used for given settings. We determine closed form expressions for the required number of cluster heads and the required battery energy of nodes for both the modes. We also propose a hybrid communication mode which is a combination of single hop and multi-hop modes, and which is more cost-effective than either of the two modes. Our problem formulation also allows for the application to be taken into account in the overall design problem through a data aggregation model.
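As a rough, hedged sketch of the kind of cost comparison discussed above (not the paper's closed-form analysis), the snippet below contrasts single-hop and multi-hop transmission energy under the commonly used first-order radio model; the parameter values are typical illustrative numbers, not taken from this work.

    # Illustrative single-hop vs multi-hop energy comparison using the commonly
    # cited first-order radio model; the parameter values are typical textbook
    # numbers, not the ones used in this paper's cost analysis.

    E_ELEC = 50e-9      # J/bit, electronics energy per bit (assumed)
    EPS_FS = 10e-12     # J/bit/m^2, free-space amplifier coefficient (assumed)

    def e_tx(bits, d):
        return E_ELEC * bits + EPS_FS * bits * d ** 2

    def e_rx(bits):
        return E_ELEC * bits

    def single_hop(bits, d):
        return e_tx(bits, d)

    def multi_hop(bits, d, hops):
        # 'hops' transmissions over distance d/hops plus (hops - 1) relay receptions
        return hops * e_tx(bits, d / hops) + (hops - 1) * e_rx(bits)

    bits, d = 2000, 400.0
    print(single_hop(bits, d), multi_hop(bits, d, hops=4))
    # Multi-hop cuts the d^2 amplifier term but pays extra electronics energy per
    # relay, which is exactly the trade-off the cost-based analysis formalises.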
---
paper_title: MAC Protocols With Wake-Up Radio for Wireless Sensor Networks: A Review
paper_content:
The use of a low-power wake-up radio in wireless sensor networks is considered in this paper, where relevant medium access control solutions are studied. A variety of asynchronous wake-up MAC protocols have been proposed in the literature, which take advantage of integrating a second radio to the main one for waking it up. However, a complete and a comprehensive survey particularly on these protocols is missing in the literature. This paper aims at filling this gap, proposing a relevant taxonomy, and providing deep analysis and discussions. From both perspectives of energy efficiency and latency reduction, as well as their operation principles, state-of-the-art wake-up MAC protocols are grouped into three main categories: 1) duty cycled wake-up MAC protocols; 2) non-cycled wake-up protocols; and 3) path reservation wake-up protocols. The first category includes two subcategories: 1) static wake-up protocols versus 2) traffic adaptive wake-up protocols. Non-cycled wake-up MAC protocols are again divided into two classes: 1) always-on wake-up protocol and 2) radio-triggered wake-up protocols. The latter is in turn split into two subclasses: 1) passive wake-up MAC protocols versus 2) ultra low power active wake-up MAC protocols. Two schemes could be identified for the last category, 1) broadcast based wake-up versus 2) addressing based wake-up. All these classes are discussed and analyzed in this paper, and canonical protocols are investigated following the proposed taxonomy.
---
paper_title: MAC Protocols Used by Wireless Sensor Networks and a General Method of Performance Evaluation
paper_content:
Many researchers employ IEEE802.15.4 as communication technology for wireless sensor networks (WSNs). However, medium access control (MAC) layer requirements for communications in wireless sensor networks (WSNs) vary because the network is usually optimized for specific applications. Thus, one particular standard will hardly be suitable for every possible application. Two general categories of MAC techniques exist: contention based and schedule based. This paper explains these two main approaches and includes examples of each one. The paper concludes with a unique performance analysis and comparison of benefits and limitations of each protocol with respect to WSNs.
---
paper_title: Energy Saving Mechanisms for MAC Protocols in Wireless Sensor Networks
paper_content:
Energy efficiency is a primary requirement in a wireless sensor network (WSN). This is a major design parameter in medium access control (MAC) protocols for WSN due to limited resources in sensor nodes that include low battery power. Hence a proposed MAC protocol must be energy efficient by reducing the potential energy wastes. Developing such a MAC protocol has been a hot research area in WSN. To avoid wasting the limited energy, various energy saving mechanisms are proposed for MAC protocols. These mechanisms have a common design objective—to save energy to maximize the network lifetime. This paper presents a survey on various energy saving mechanisms that are proposed for MAC protocols in WSN. We present a detailed discussion of these mechanisms and discuss their strengths and weaknesses. We also discuss MAC protocols that use these energy saving mechanisms.
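To make the idle-listening argument concrete, here is a back-of-the-envelope duty-cycling estimate; the radio currents are illustrative low-power-transceiver figures and none of the numbers come from the surveyed protocols.

    # Back-of-the-envelope estimate of how duty cycling trades idle listening
    # for sleep; the radio currents are illustrative low-power-transceiver
    # figures, not values taken from the surveyed protocols.

    I_RX, I_SLEEP, V = 19.7e-3, 1e-6, 3.0   # A, A, V (assumed)

    def avg_power(duty_cycle):
        # fraction of time the radio is awake (listening) vs asleep
        return V * (duty_cycle * I_RX + (1 - duty_cycle) * I_SLEEP)

    for dc in (1.0, 0.1, 0.01):
        print(f"duty cycle {dc:>5.0%}: {avg_power(dc) * 1e3:.3f} mW")
    # Going from always-on to a 1% duty cycle cuts average power by roughly two
    # orders of magnitude, which is why idle listening is the first target of
    # most MAC energy-saving mechanisms.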
---
paper_title: A comparative study on popular MAC protocols for mixed Wireless Sensor Networks: From implementation viewpoint
paper_content:
Sensors cooperate and coordinate with each other to disseminate sensed data in the network. In establishing coordination among sensors such that they can access the shared wireless medium, the Medium Access Control (MAC) protocol plays an important role. In this article, we present an analytical study of some popular MAC protocols for Wireless Sensor Networks (WSNs). Based on the design techniques, MAC protocols for WSNs are classified into two main categories: single-layer and cross-layer. MAC protocols such as S-MAC, T-MAC, B-MAC and X-MAC are selected to study the design approaches of the single-layer genre. BoX-MAC-1 and BoX-MAC-2 are selected to analyze cross-layer design approaches. This survey paper aims at reporting an implementation viewpoint of different design approaches of MAC protocols in WSN. We have considered mixed WSNs that exhibit node movement (e.g., static, mobile) and changes in communication medium (e.g., air, water). Representative protocols are implemented in the Castalia simulator and evaluated on the basis of important performance metrics such as energy consumption, network lifetime, throughput and end-to-end delay. The merits and demerits of different protocols are also compared.
---
paper_title: A Survey on Real-Time MAC Protocols in Wireless Sensor Networks
paper_content:
As wireless sensor networks become pervasive, new requirements have continuously emerged. However, most research efforts in wireless sensor networks are focused on the energy problem, since the nodes are usually battery-powered. Among these requirements, real-time communication is one of the big research challenges in wireless sensor networks because most query messages carry time information. To meet this requirement, several real-time medium access control protocols have recently been proposed for wireless sensor networks in the literature, because the waiting time to share the medium on each node is one of the main sources of end-to-end delay. In this paper, we first introduce the specific requirements of wireless sensor real-time MAC protocols. Then, a collection of recent wireless sensor real-time MAC protocols are surveyed, classified, and described, emphasizing their advantages and disadvantages whenever possible. Finally, we present a discussion of the challenges of current wireless sensor real-time MAC protocols in the literature, and draw conclusions at the end.
---
paper_title: MAC Essentials for Wireless Sensor Networks
paper_content:
The wireless medium being inherently broadcast in nature and hence prone to interferences requires highly optimized medium access control (MAC) protocols. This holds particularly true for wireless sensor networks (WSNs) consisting of a large amount of miniaturized battery-powered wireless networked sensors required to operate for years with no human intervention. There has hence been a growing interest on understanding and optimizing WSN MAC protocols in recent years, where the limited and constrained resources have driven research towards primarily reducing energy consumption of MAC functionalities. In this paper, we provide a comprehensive state-of-the-art study in which we thoroughly expose the prime focus of WSN MAC protocols, design guidelines that inspired these protocols, as well as drawbacks and shortcomings of the existing solutions and how existing and emerging technology will influence future solutions. In contrast to previous surveys that focused on classifying MAC protocols according to the technique being used, we provide a thematic taxonomy in which protocols are classified according to the problems dealt with. We also show that a key element in selecting a suitable solution for a particular situation is mainly driven by the statistical properties of the generated traffic.
---
paper_title: Meta-survey on medium access control surveys in wireless sensor networks
paper_content:
The medium access control layer has been a hotbed of research in the field of wireless sensor networks, since a huge amount of research in wireless sensor networks has focused on saving energy and, lately, on reducing latency. For the most part, medium access control layer solutions in wireless sensor networks are entrusted to save energy, to reduce latency, and at times to ensure reliability, perhaps through cross-layer solutions. In this article, we review the surveys at the medium access control layer and point out to readers some of their strengths and their relevance to medium access control layer protocols. Furthermore, we classify the surveys subject-wise to show trends and usability. To cross-compare, we devise a unified lexicon for the purpose and expose the coverage given to the wireless sensor networks’ medium access control protocols by different studies. We present medium access control solutions’ popularity using a known but hitherto unused metric: average citations/year from over 200 medium access control solutions. ...
---
paper_title: The Evolution of MAC Protocols in Wireless Sensor Networks: A Survey
paper_content:
Wireless Sensor Networks (WSNs) have become a leading solution in many important applications such as intrusion detection, target tracking, industrial automation, smart building and so on. Typically, a WSN consists of a large number of small, low-cost sensor nodes that are distributed in the target area for collecting data of interest. For a WSN to provide high throughput in an energy-efficient way, designing an efficient Medium Access Control (MAC) protocol is of paramount importance because the MAC layer coordinates nodes' access to the shared wireless medium. To show the evolution of WSN MAC protocols, this article surveys the latest progresses in WSN MAC protocol designs over the period 2002-2011. In the early development stages, designers were mostly concerned with energy efficiency because sensor nodes are usually limited in power supply. Recently, new protocols are being developed to provide multi-task support and efficient delivery of bursty traffic. Therefore, research attention has turned back to throughput and delay. This article details the evolution of WSN MAC protocols in four categories: asynchronous, synchronous, frame-slotted, and multichannel. These designs are evaluated in terms of energy efficiency, data delivery performance, and overhead needed to maintain a protocol's mechanisms. With extensive analysis of the protocols many future directions are stated at the end of this survey. The performance of different classes of protocols could be substantially improved in future designs by taking into consideration the recent advances in technologies and application demands.
---
paper_title: Energy efficiency of MAC protocols in low data rate wireless multimedia sensor networks: A comparative study
paper_content:
Some new application scenarios for Wireless Sensor Networks (WSNs) such as urban resilience, smart house/building, smart agriculture and animal farming, among others, can be enhanced by adding multimedia sensors able to capture and transmit small multimedia samples such as still images or audio files. In these applications, Wireless Multimedia Sensor Networks (WMSNs) usually share two conflicting design goals. On the one hand, the goal of maximizing the network lifetime by saving energy, and on the other, the ability to successfully deliver packets to the sink. In this paper, we investigate the suitability of several WSNs MAC protocols from different categories for low data rate WMSNs by analyzing the effect of some network parameters, such as the sampling rate and the density of multimedia sensors on the energy consumption of nodes. First, we develop a general multi-class traffic model that allows us to integrate different types of sensors with different sampling rates. Then, we model, evaluate and compare the energy consumption of MAC protocols numerically. We illustrate how the MAC protocols put some constraints on network parameters like the sampling rates, the number of nodes, the size of the multimedia sample and the density of multimedia nodes in order to make collisions negligible and avoid long queuing delays. Numerical results show that in asynchronous MAC protocols, the receiver-initiated MAC protocols (RI-MAC and PW-MAC) consume less energy than the sender-initiated ones (B-MAC and X-MAC). B-MAC outperforms X-MAC when the sampling rates of multimedia nodes is very low and the polling periods are short. PW-MAC shows the lowest energy consumption between the selected asynchronous MAC protocols and it can be used in the considered WMSNs with a wider range of sampling rates. Regarding synchronous MAC protocols, results also show that they are only suitable for the considered WMSNs when the data rates are very low. In that situation, TreeMAC is the one that offers the lowest energy consumption in comparison to L-MAC and T-MAC. Finally, we compare the energy consumption of MAC protocols in four selected application scenarios related to Smart Cities and environment monitoring.
---
paper_title: Survey of MAC Protocol for Wireless Sensor Networks
paper_content:
Energy, being a key constraint in wireless sensor networks (WSN), has become a significant area of research interest in WSN. A lot of work on energy conservation at different layers of the protocol stack exists to date, of which energy conservation at the Medium Access Control (MAC) layer is attracting a lot of attention. Researchers have proposed several protocols for energy conservation in WSN at the MAC layer. This paper presents a survey of some of the popular MAC layer protocols. It also provides a brief analysis of these protocols, which could be helpful in future work in this direction. This paper also provides a reference for further research in this area, giving an insight into energy conservation at the MAC layer.
---
paper_title: A Survey on Energy Efficient Contention based and Hybrid MAC Protocols for Wireless Sensor Networks
paper_content:
Background/Objectives: The objective is to determine an energy-efficient MAC protocol for wireless sensor networks. Methods/Statistical Analysis: All the sensor nodes communicate through a Medium Access Control (MAC) protocol. Energy is wasted while communicating data among sensor nodes. Since wireless sensor nodes are unwired, they do not have any means of external power supply and are only battery operated. Hence, designing an energy-efficient MAC protocol to extend battery lifespan is very important. A thorough survey of various contention-based and hybrid protocols has been done in this paper. Findings: A hybrid MAC protocol combines the advantages of CSMA and TDMA. A clear comparison of some of the best hybrid MAC protocols and contention-based protocols is given in this paper. Applications: Based on the performance of various MAC protocols, it is found that contention-based MAC protocols can be implemented for low-traffic networks, while hybrid MAC protocols can be implemented for high-traffic networks, including industrial critical processes.
---
paper_title: MAC protocols for wireless sensor networks: a survey
paper_content:
Wireless sensor networks are appealing to researchers due to their wide range of application potential in areas such as target detection and tracking, environmental monitoring, industrial process monitoring, and tactical systems. However, low sensing ranges result in dense networks and thus it becomes necessary to achieve an efficient medium-access protocol subject to power constraints. Various medium-access control (MAC) protocols with different objectives have been proposed for wireless sensor networks. In this article, we first outline the sensor network properties that are crucial for the design of MAC layer protocols. Then, we describe several MAC protocols proposed for sensor networks, emphasizing their strengths and weaknesses. Finally, we point out open research issues with regard to MAC layer design.
---
paper_title: Energy Efficient MAC Protocol for Wireless Sensor Networks: A Survey
paper_content:
Wireless Sensor Networks (WSNs) have gained tremendous popularity in various practical applications, in which sensors are fundamentally battery-powered and significantly resource-limited. Well-designed Medium Access Control (MAC) protocols can make a great contribution to the performance of the networks. In this paper, we present a survey of some typical or newly proposed MAC protocols which aim at enhancing the energy efficiency of the networks. The classification and beneficial characteristics of MAC protocols are discussed. Furthermore, we analyze the protocols’ performance in various fields, and point out the open research issues.
---
paper_title: Design guidelines for wireless sensor networks: communication, clustering and aggregation
paper_content:
When sensor nodes are organized in clusters, they could use either single hop or multi-hop mode of communication to send their data to their respective cluster heads. We present a systematic cost-based analysis of both the modes, and provide results that could serve as guidelines to decide which mode should be used for given settings. We determine closed form expressions for the required number of cluster heads and the required battery energy of nodes for both the modes. We also propose a hybrid communication mode which is a combination of single hop and multi-hop modes, and which is more cost-effective than either of the two modes. Our problem formulation also allows for the application to be taken into account in the overall design problem through a data aggregation model.
---
paper_title: An application-specific protocol architecture for wireless microsensor networks
paper_content:
Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes. These networks require robust wireless communication protocols that are energy efficient and provide low latency. We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches.
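A minimal sketch of the randomized cluster-head rotation at the core of LEACH is given below, using the commonly cited election threshold T(n) = P / (1 - P * (r mod 1/P)) for nodes that have not served as cluster head in the last 1/P rounds; energy levels, advertisement and TDMA scheduling are deliberately omitted, and the helper names are assumptions.

    import random

    # Sketch of the LEACH-style randomized cluster-head election threshold
    # T(n) = P / (1 - P * (r mod 1/P)) for nodes that have not served as cluster
    # head in the last 1/P rounds (simplified; no energy model, advertisement or
    # TDMA scheduling).

    def leach_threshold(P, r):
        return P / (1 - P * (r % round(1 / P)))

    def elect_cluster_heads(node_ids, eligible, P, r):
        t = leach_threshold(P, r)
        return {n for n in node_ids if n in eligible and random.random() < t}

    nodes = range(100)
    eligible = set(nodes)                     # nobody has been cluster head yet
    heads = elect_cluster_heads(nodes, eligible, P=0.05, r=0)
    print(len(heads), "cluster heads elected this round")   # about 5 on average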
---
paper_title: A Delay-Bounded MAC Protocol for Mission- and Time-Critical Applications in Industrial Wireless Sensor Networks
paper_content:
Industrial wireless sensor networks (IWSNs) designed for mission- and time-critical applications require timely and deterministic data delivery within stringent deadline bounds. Exceeding delay limits for such applications can lead to system malfunction or ultimately dangerous situations that can threaten human safety. In this paper, we propose Slot Stealing Medium Access Control (SS-MAC), an efficient SS-MAC protocol to guarantee predictable and timely channel access for time-critical data in IWSNs. In the proposed SS-MAC, aperiodic time-critical traffic opportunistically steals time slots assigned to periodic non-critical traffic. Additionally, a dynamic deadline-based scheduling is introduced to provide guaranteed channel access in emergency and event-based situations, where multiple sensor nodes are triggered simultaneously to transmit time-critical data to the controller. The proposed protocol is evaluated mathematically to provide the worst-case delay bound for the time-critical traffic. Performance comparisons are carried out between the proposed SS-MAC and WirelessHART standard and they show that, for the time-critical traffic, the proposed SS-MAC can achieve, at least, a reduction of almost 30% in the worst-case delay with a significant channel utilization efficiency.
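The deadline-based scheduling mentioned above can be pictured with a tiny earliest-deadline-first ordering of pending time-critical packets; this is only an illustrative sketch of the general idea, not the SS-MAC slot-stealing mechanism, and the data layout is an assumption.

    import heapq

    # Simplified illustration of deadline-based (earliest-deadline-first)
    # ordering of time-critical packets, in the spirit of the dynamic
    # deadline-based scheduling described above; this is not the SS-MAC
    # slot-stealing logic itself.

    def schedule_critical(packets):
        """packets: iterable of (deadline_slot, node_id); earliest deadline first."""
        heap = list(packets)
        heapq.heapify(heap)
        order = []
        while heap:
            order.append(heapq.heappop(heap))
        return order

    print(schedule_critical([(12, "n3"), (5, "n7"), (9, "n1")]))
    # -> [(5, 'n7'), (9, 'n1'), (12, 'n3')]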
---
paper_title: Comprehensive review for energy efficient hierarchical routing protocols on wireless sensor networks
paper_content:
In recent years, wireless sensor networks (WSNs) have played a major role in applications such as tracking and monitoring in remote environments. Designing energy efficient protocols for routing of data events is a major challenge due to the dynamic topology and distributed nature of WSNs. The main aim of the paper is to discuss hierarchical routing protocols in order to improve the energy efficiency and network lifetime. This paper provides a discussion about hierarchical energy efficient routing protocols based on classical and swarm intelligence approaches. The routing protocols belonging to both categories can be summarized according to energy efficiency, data aggregation, location awareness, QoS, scalability, load balancing, fault tolerance, query based and multipath. A systematic literature review has been conducted for hierarchical energy efficient routing protocols reported from 2012 to 2017. This survey provides a technical direction for researchers on how to develop routing protocols. Finally, research gaps in the reviewed protocols and the potential future aspects have been discussed.
---
paper_title: Clustering based on the node health status in wireless sensor networks
paper_content:
One of the applications of wireless sensor networks is forest fire monitoring, which has characteristics different from other applications. In this application, the connectivity of nodes should not be destroyed just because nodes lose their energy or are burnt in the fire. Because of the wide monitoring area, clustering is considered an efficient routing approach to increase scalability as well as to reduce the energy consumed by nodes. Many clustering methods, mostly based on the LEACH protocol, have been proposed without considering node failure. Here, we propose the node health status as a parameter to select a Cluster Head and compare its performance with LEACH, MTE and the direct algorithm. Results show that the number of packets received and the number of alive nodes for the proposed method are higher than for the others. It also has the lowest average end-to-end delay, which is suitable for the forest fire application.
---
paper_title: TEEN: a routing protocol for enhanced efficiency in wireless sensor networks
paper_content:
Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.
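A simplified sketch of TEEN's threshold-sensitive reporting rule is shown below: a node transmits only when the sensed attribute reaches the hard threshold and has changed by at least the soft threshold since the last report; the threshold values and readings are made-up examples.

    # Simplified TEEN-style reporting rule: transmit only once the sensed value
    # reaches the hard threshold and has changed by at least the soft threshold
    # since the last transmitted value (threshold values and readings are
    # made-up examples).

    HARD_T, SOFT_T = 50.0, 2.0   # e.g. degrees Celsius (assumed)

    def teen_report(readings):
        last_sent = None
        for v in readings:
            if v >= HARD_T and (last_sent is None or abs(v - last_sent) >= SOFT_T):
                last_sent = v
                yield v          # would be transmitted to the cluster head

    print(list(teen_report([48.0, 50.5, 51.0, 53.1, 52.9, 49.0])))
    # -> [50.5, 53.1]  (51.0 is suppressed: change below the soft threshold)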
---
paper_title: An application-specific protocol architecture for wireless microsensor networks
paper_content:
Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes. These networks require robust wireless communication protocols that are energy efficient and provide low latency. We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches.
---
| Title: A Survey on Medium Access Control (MAC) for Clustering Wireless Sensor Network
Section 1: INTRODUCTION
Description 1: Write about the background and motivation for studying MAC in the context of Clustering Wireless Sensor Networks, including the advantages, limitations, and why clustering is significant.
Section 2: RELATED WORKS
Description 2: Summarize previous surveys on MAC protocols in WSN, highlighting their classification methods and focus areas, as well as the gaps that this survey aims to address.
Section 3: NETWORK TOPOLOGY OF WSN
Description 3: Discuss the general network topologies in WSNs, particularly focusing on flat and clustering topologies, and explain the impact of these topologies on MAC protocols.
Section 4: SOURCES OF ISSUES OF MAC
Description 4: Identify and explain the sources of issues in MAC protocols, including factors like energy efficiency, latency, throughput, and scalability, as well as elements influencing these metrics such as collision, overhearing, idle listening, and overhead.
Section 5: COMPARING MAC FOR CLUSTERING WSN
Description 5: Provide a detailed comparison of various MAC protocols specifically designed for clustering WSN, including their performance in terms of energy consumption, latency, throughput, and scalability.
Section 6: CONCLUSION
Description 6: Summarize the key findings of the survey, including the advantages and disadvantages of different clustering MAC protocols, and suggest potential areas for future research. |
Occurrence of Chiral Bioactive Compounds in the Aquatic Environment: A Review | 8 | ---
paper_title: Risk assessment of the endocrine-disrupting effects of nine chiral pesticides
paper_content:
The increased release of chiral pesticides into the environment has generated interest in the role of enantioselectivity in the environmental fate and ecotoxicological effects of these compounds. However, the information on the endocrine disrupting effects (EDEs) of chiral pesticides is still limited and discrepancies are also usually observed among different assays. In this study, we investigated the enantioselectivity of EDEs via estrogen and thyroid hormone receptors for nine chiral pesticides using in vitro and in silico approaches. The results of the luciferase reporter gene assays showed 7 chiral pesticides possessed enantioselective estrogenic activities and 2 chiral pesticides exerted thyroid hormone antagonistic effects. Proliferation assays in MCF-7 and GH3 cells were also used to verify the results of the dual-luciferase reporter gene assays. At last, the molecular docking results indicated that the enantioselective EDEs of chiral pesticides were partially due to enantiospecific binding affinities with receptors. Our data not only show enantioselective EDEs of nine chiral pesticides, but also would be helpful to better understanding the molecular biological mechanisms of enantioselectivity in EDEs of chiral pesticides.
---
paper_title: Selective degradation of ibuprofen and clofibric acid in two model river biofilm systems
paper_content:
A field survey indicated that the Elbe and Saale Rivers were contaminated with both clofibric acid and ibuprofen. In Elbe River water we could detect the metabolite hydroxy-ibuprofen. Analyses of the city of Saskatoon sewage effluent discharged to the South Saskatchewan River detected clofibric acid but neither ibuprofen nor any metabolite. Laboratory studies indicated that the pharmaceutical ibuprofen was readily degraded in a river biofilm reactor. Two metabolites were detected and identified as hydroxy– and carboxy–ibuprofen. Both metabolites were observed to degrade in the biofilm reactors. However, in human metabolism the metabolite carboxy–ibuprofen appears and degrades second, whereas the opposite occurs in biofilm systems. In biofilms, the pharmacologically inactive stereoisomer of ibuprofen is predominantly degraded. In contrast, clofibric acid was not biologically degraded during the experimental period of 21 days. Similar results were obtained using biofilms developed using waters from either the South Saskatchewan or Elbe River. In a sterile reactor no losses of ibuprofen were observed. These results suggested that abiotic losses and adsorption played only a minimal role in the fate of the pharmaceuticals in the river biofilm reactors.
---
paper_title: Enantiomeric composition of chiral polychlorinated biphenyl atropisomers in aquatic bed sediment.
paper_content:
Enantiomeric ratios (ERs) for eight polychlorinated biphenyl (PCB) atropisomers were measured in aquatic sediment from selected sites throughout the United States by using chiral gas chromatography/mass spectrometry. Nonracemic ERs for PCBs 91, 95, 132, 136, 149, 174, and 176 were found in sediment cores from Lake Hartwell, SC, which confirmed previous inconclusive reports of reductive dechlorination of PCBs at these sites on the basis of achiral measurements. Nonracemic ERs for many of the atropisomers were also found in bed-sediment samples from the Hudson and Housatonic Rivers, thus indicating that some of the PCB biotransformation processes identified at these sites are enantioselective. Patterns in ERs among congeners were consistent with known reductive dechlorination patterns at both river sediment basins. The enantioselectivity of PCB 91 is reversed between the Hudson and Housatonic River sites, which implies that the two sites have different PCB biotransformation processes with different enantiom...
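For reference, the enantiomer ratio (ER) and the now more common enantiomer fraction (EF) used in studies like this one are conventionally defined as below (when only the elution order rather than the optical rotation is known, EF = E1/(E1 + E2) is used instead); a racemic residue gives ER = 1 and EF = 0.5, and deviations point to enantioselective, typically biological, transformation.

    ER = \frac{[(+)]}{[(-)]} , \qquad
    EF = \frac{[(+)]}{[(+)] + [(-)]} = \frac{ER}{ER + 1}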
---
paper_title: Persistent organic pollutants in China's surface water systems.
paper_content:
Following recent rapid industrialization, China is now one of the largest producers and consumers of organic chemicals in the world. This is compounded by variable regulatory oversight with respect to storage, use and waste management of these chemicals and their byproducts. This review synthesizes the data on the distribution of selected persistent organic pollutants (POPs) in waters in China. Surface water heavily polluted with POPs is distributed in the Yangtze River Estuary, Pearl River Delta, Minjiang River Estuary, Jiulongjiang Estuary, Daya Bay, Taihu Lake, and the waterways of Zhejiang Province, where concentrations of Polycyclic aromatic hydrocarbons (PAHs), organochlorine pesticides (OCPs) and polychlorinated biphenyls (PCBs) frequently exceed both international and Chinese guideline values. These areas are mainly distributed along the southeast coast of China, within or downstream of major manufacturing districts, intensive agricultural basins, and other industrial centers. A comparison of the levels of OCPs in the aquatic environment of China with other indicative regions worldwide shows comparable levels of pollution (overall range from below detection limit (BDL) to 5104.8ng/L and regional means from 2.9-929.6ng/L). PAHs and PCBs pollution appear to be particularly serious in China (PAHs overall ranging from BDL to 474,000ng/L with regional means from 15.1-72,400ng/L; PCBs from BDL to 3161ng/L with regional means ranging from 0.2-985.2ng/L). There is as yet limited evidence of serious perfluorooctane sulfonate (PFOS) and perfluorooctanoate (PFOA) pollution. We discuss major sources and processes responsible for high POP occurrence using a range of measures (including diagnostic ratios of different compounds), regulatory oversight and policy gaps in the control of POPs in China, and potential long-term health and ecological effects. We argue that water quality guidelines, pollution control measures and cleanup strategies for POPs in China should be urgently improved.
---
paper_title: Concentrations, enantiomeric compositions, and sources of HCH, DDT and chlordane in soils from the Pearl River Delta, South China.
paper_content:
Concentrations and enantiomeric compositions of HCH, DDT and chlordane in 74 soils of the Pearl River Delta, South China were investigated. The mean concentrations of HCHs and DDTs descended in the order: crop soils>paddy soils>natural soils. The concentrations (ng/g dw) of p,p'-DDE, p,p'-DDT, p,p'-DDD and o,p'-DDT in crop soils were 0.14-231, 0.07-315, <DL-96.7 and 0.06-73.8, respectively, while those of chlordane were generally below 0.78 for trans-chlordane (TC) and 0.75 for cis-chlordane (CC). Enantiomeric factors (EF value) were determined for o,p'-DDT, alpha-HCH, TC and CC. Both preferential depletions of (-) enantiomer and (+) enantiomer were observed for o,p'-DDT and alpha-HCH, indicated by EF values either <0.5 or >0.5. An EF value >0.5 generally suggested a preferential degradation of the (-) enantiomers of both TC and CC. The racemic alpha-HCH observed in the soils with higher HCH concentrations indicated that the transformation from gamma-HCH (e.g. lindane) to alpha-HCH may be an important process in the soils. The isomer ratios of p,p'-DDT/(DDE+DDD), o,p'-DDT/p,p'-DDT and enantiomeric compositions of o,p'-DDT suggested that both illegal use of technical DDT and the DDT impurity in dicofol may be responsible for the fresh DDT input in the region. The sources of DDTs were drawn by principal component analysis-multiple linear regression (PCA-MLR). The relative contributions of dicofol-type DDT, residues, and fresh technical DDT were estimated to be 55%, 21% and 17%, respectively. In addition, CC was found to degrade faster than TC in soils from the Pearl River Delta. The study demonstrated that the combination of isomer ratios and enantiomeric composition analysis may provide critical information on the potential sources and fate of organochlorine pesticides in soil.
---
paper_title: Enantioselective degradation of warfarin in soils
paper_content:
Environmental enantioselectivity information is important to fate assessment of chiral contaminants. Warfarin, a rodenticide and prescription medicine, is a chiral chemical but used in racemic form. Little is known about its enantioselective behavior in the environment. In this study, enantioselective degradation of warfarin in a turfgrass and a groundcover soil was examined under aerobic and ambient temperature conditions. An enantioselective analytical method was established using a novel triproline chiral stationary phase in high performance liquid chromatography. Unusual peak profile patterns, i.e., first peak (S(−)) broadening/second peak (R(+)) compression with hexane (0.1%TFA)/2-propanol (92/8, v/v) mobile phase, and first peak compression/second peak broadening with the (96/4, v/v) mobile phase, were observed in enantioseparation. This unique tunable peak property was leveraged in evaluating warfarin enantioselective degradation in two types of soil. Warfarin was extracted in high recovery from soil using methylene chloride after an aqueous phase basic-acidic conversion. No apparent degradation of warfarin was observed in the sterile turfgrass and groundcover soils during the 28-day incubation, while it showed quick degradation (half-life <7 days) in the nonsterile soils after a short lag period, suggesting warfarin degradation in the soils was mainly caused by micro-organisms. Limited enantioselectivity was found in both soils: the R(+) enantiomer was preferentially degraded. The half-lives in turfgrass soil were 5.06 ± 0.13 and 5.97 ± 0.05 days, for the R(+) and the S(−) enantiomer, respectively. The corresponding values for the groundcover soil were 4.15 ± 0.11 and 4.47 ± 0.08 days.
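Assuming simple first-order decay and an initially racemic residue, the half-lives reported above for the turfgrass soil can be turned into a projected enantiomer-fraction time course; the short Python sketch below is only an extrapolation of the abstract's numbers, not data from the study.

    import math

    # Projection of the enantiomer fraction EF(t) = R/(R+S) from the first-order
    # half-lives reported above for the turfgrass soil (5.06 d for R-(+) and
    # 5.97 d for S-(-)), assuming exponential decay of an initially racemic
    # residue; this is an extrapolation, not data from the study.

    t_half_R, t_half_S = 5.06, 5.97                     # days (from the abstract)
    k_R, k_S = math.log(2) / t_half_R, math.log(2) / t_half_S

    def ef(t, r0=1.0, s0=1.0):
        r = r0 * math.exp(-k_R * t)
        s = s0 * math.exp(-k_S * t)
        return r / (r + s)

    for t in (0, 7, 14, 28):
        print(f"day {t:2d}: EF = {ef(t):.3f}")
    # EF drifts slowly below 0.5 because R-(+) is degraded slightly faster,
    # consistent with the limited enantioselectivity reported.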
---
paper_title: Enantioselectivity in environmental risk assessment of modern chiral pesticides.
paper_content:
Chiral pesticides comprise a new and important class of environmental pollutants. With the development of industry, more and more chiral pesticides will be introduced into the market, but their enantioselective ecotoxicology is not clear. Currently used synthetic pyrethroids, organophosphates, acylanilides, phenoxypropanoic acids and imidazolinones often behave enantioselectively in agricultural use and can pose unpredictable enantioselective ecological risks to non-target organisms or humans. It is therefore necessary to explore the enantioselective toxicology and ecological fate of these chiral pesticides in environmental risk assessment. The enantioselective toxicology and fate of these currently widely used pesticides are discussed in this review article.
---
paper_title: Prioritizing research for trace pollutants and emerging contaminants in the freshwater environment.
paper_content:
Organic chemicals have been detected at trace concentrations in the freshwater environment for decades. Though the term trace pollutant indicates low concentrations normally in the nanogram or microgram per liter range, many of these pollutants can exceed an acceptable daily intake (ADI) for humans. Trace pollutants referred to as emerging contaminants (ECs) have recently been detected in the freshwater environment and may have adverse human health effects. Analytical techniques continue to improve; therefore, the number and frequency of detections of ECs are increasing. It is difficult for regulators to restrict use of pollutants that are a human health hazard; scientists to improve treatment techniques for higher priority pollutants; and the public to modify consumption patterns due to the vast number of ECs and the breadth of literature on the occurrence, use, and toxicity. Hence, this paper examines literature containing occurrence and toxicity data for three broad classes of trace pollutants and ECs (industrials, pesticides, and pharmaceuticals and personal care products (PPCPs)), and assesses the relevance of 71 individual compounds. The evaluation indicates that widely used industrials (BPF) and PPCPs (AHTN, HHCB, ibuprofen, and estriol) occur frequently in samples from the freshwater environment but toxicity data were not available; thus, it is important to establish their ADI. Other widely used industrials (BDE-47, BDE-99) and pesticides (benomyl, carbendazim, aldrin, endrin, ethion, malathion, biphenthrin, and cypermethrin) have established ADI values but occurrence in the freshwater environment was not well documented. The highest priority pollutants for regulation and treatment should include industrials (PFOA, PFOS and DEHP), pesticides (diazinon, methoxychlor, and dieldrin), and PPCPs (EE2, carbamazepine, βE2, DEET, triclosan, acetaminophen, and E1) because they occur frequently in the freshwater environment and pose a human health hazard at environmental concentrations.
---
paper_title: Stereoselective biodegradation of amphetamine and methamphetamine in river microcosms
paper_content:
Here presented for the first time is the enantioselective biodegradation of amphetamine and methamphetamine in river microcosm bioreactors. The aim of this investigation was to test the hypothesis that mechanisms governing the fate of amphetamine and methamphetamine in the environment are mostly stereoselective and biological in nature. Several bioreactors were studied over the duration of 15 days (i) in both biotic and abiotic conditions, (ii) in the dark or exposed to light and (iii) in the presence or absence of suspended particulate matter. Bioreactor samples were analysed using SPE-chiral-LC-(QTOF)MS methodology. This investigation has elucidated the fundamental mechanism for degradation of amphetamine and methamphetamine as being predominantly biological in origin. Furthermore, stereoselectivity and changes in enantiomeric fraction (EF) were only observed under biotic conditions. Neither amphetamine nor methamphetamine appeared to demonstrate adsorption to suspended particulate matter. Our experiments also demonstrated that amphetamine and methamphetamine were photo-stable. Illicit drugs are present in the environment at low concentrations but due to their pseudo-persistence and non-racemic behaviour, with two enantiomers revealing significantly different potency (and potentially different toxicity towards aquatic organisms) the risk posed by illicit drugs in the environment should not be under- or over-estimated. The above results demonstrate the need for re-evaluation of the procedures utilised in environmental risk assessment, which currently do not recognise the importance of the phenomenon of chirality in pharmacologically active compounds.
---
paper_title: Environmental Fate of Chiral Pharmaceuticals: Determination, Degradation and Toxicity
paper_content:
Pollution of the aquatic environment by pharmaceuticals is of major concern. Indeed, pharmaceutical pollutants have several undesirable effects for many organisms, such as endocrine disruption and bacterial resistance. They are resistant to several degradation processes, making their removal difficult and slow. Pharmaceuticals reach the environment due to their inefficient removal by wastewater treatment plants (WWTP) and by improper disposal of unused medicines. In aquatic environments, pharmaceuticals reach trace-level concentrations in the ng L−1–μg L−1 range. Many pharmaceutical pollutants are chiral. They occur in nature as a single enantiomer or as mixtures of the two enantiomers, which have different spatial configurations and can thus be metabolized selectively. In spite of similar physical and chemical properties, enantiomers have different interactions with enzymes, receptors or other chiral molecules, leading to different biological responses. Therefore, they can affect living organisms in a different manner. The fate and effects of enantiomers of chiral pharmaceuticals in the environment are still largely unknown. Biodegradation and toxicity can be enantioselective, in contrast to abiotic degradation. Thus, accurate methods to measure enantiomeric fractions in the environment are crucial to better understand the biodegradation process and to estimate the toxicity of chiral pharmaceuticals. We review (1) general properties of chiral compounds, (2) current knowledge on chiral pharmaceuticals in the environment, (3) chiral analytical methods to determine the enantiomeric composition in environmental matrices, (4) degradation and removal processes of chiral pharmaceuticals in the environment and (5) their toxicity to aquatic organisms. The major analytical methods discussed are gas chromatography (GC), high performance liquid chromatography (HPLC), electrochemical sensors and biosensors. These chiral methods are crucial for the correct quantification of the enantiomers, given that if an enantiomer with more or less toxic effects is preferentially degraded, exposure assessed on the basis of achiral measurements would overestimate or underestimate ecotoxicity. Degradation and biodegradation are discussed using a few examples of important therapeutic classes usually detected in the aquatic environment. A few examples of ecotoxicity studies are also given on the occurrence of enantiomers and their fate in the environment, which differ with regard to undesirable effects and to biochemical processes.
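Since the enantiomeric fraction is the central quantity these analytical methods report, a trivial helper is sketched below; the function name and peak areas are hypothetical, and it assumes equal detector response for both enantiomers, something real methods verify during validation.

    # Hypothetical helper for the enantiomeric fraction computed from the two
    # enantiomer peak areas of a chiral chromatogram (E1 = first-eluting
    # enantiomer); it assumes equal detector response for both enantiomers,
    # something real methods check during validation.

    def enantiomeric_fraction(area_e1, area_e2):
        """EF = E1 / (E1 + E2); 0.5 indicates a racemic mixture."""
        return area_e1 / (area_e1 + area_e2)

    print(enantiomeric_fraction(1520.0, 1490.0))   # ~0.505, essentially racemic
    print(enantiomeric_fraction(880.0, 2310.0))    # ~0.276, E1 preferentially removed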
---
paper_title: Metabolism studies of chiral pesticides: A critical review
paper_content:
The consumption of pesticides worldwide has been growing in recent decades, and consequently the exposure of humans and other animals to them as well. However, even though it is known that chiral pesticides can behave stereoselectively, the knowledge about the risks to human health and the environment is scarce. Among the pesticides registered to date, approximately 30% have at least one center of asymmetry, and just 7% of them are currently marketed as a pure stereoisomer or as an enriched mixture of the active stereoisomer. There are several in vitro, in vivo, and in silico models available to evaluate the enantioselective metabolism of chiral pesticides aiming at ecotoxicological and risk assessment. Therefore, this paper intends to provide a critical view of the metabolism of chiral pesticides in non-target species, including humans, and discuss their implications, as well as to review the analytical techniques employed for in vitro and in vivo metabolism studies of chiral pesticides.
---
paper_title: Occurrence and removal of organic micropollutants: An overview of the watch list of EU Decision 2015/495.
paper_content:
Although there are no legal discharge limits for micropollutants into the environment, some regulations have been published in the last few years. Recently, a watch list of substances for European Union-wide monitoring was reported in the Decision 2015/495/EU of 20 March 2015. Besides the substances previously recommended to be included by the Directive 39/2013/EU, namely two pharmaceuticals (diclofenac and the synthetic hormone 17-alpha-ethinylestradiol (EE2)) and a natural hormone (17-beta-estradiol (E2)), the first watch list of 10 substances/groups of substances also refers three macrolide antibiotics (azithromycin, clarithromycin and erythromycin), other natural hormone (estrone (E1)), some pesticides (methiocarb, oxadiazon, imidacloprid, thiacloprid, thiamethoxam, clothianidin, acetamiprid and triallate), a UV filter (2-ethylhexyl-4-methoxycinnamate) and an antioxidant (2,6-di-tert-butyl-4-methylphenol) commonly used as food additive. Since little is known about the removal of most of the substances included in the Decision 2015/495/EU, particularly regarding realistic concentrations in aqueous environmental samples, this review aims to: (i) overview the European policy in the water field; (ii) briefly describe the most commonly used conventional and advanced treatment processes to remove micropollutants; (iii) summarize the relevant data published in the last decade, regarding occurrence and removal in aqueous matrices of the 10 substances/groups of substances that were recently included in the first watch list for European Union monitoring (Decision 2015/495/EU); and (iv) highlight the lack of reports concerning some substances of the watch list, the study of un-spiked aquatic matrices and the assessment of transformation by-products.
---
paper_title: Chiral Analysis of Pesticides and Drugs of Environmental Concern: Biodegradation and Enantiomeric Fraction
paper_content:
The importance of stereochemistry for medicinal chemistry and pharmacology is well recognized and the dissimilar behavior of enantiomers is fully documented. Regarding the environment, the significance is equivalent since enantiomers of chiral organic pollutants can also differ in biodegradation processes and fate, as well as in ecotoxicity. This review comprises designed biodegradation studies of several chiral drugs and pesticides followed by enantioselective analytical methodologies to accurately measure the enantiomeric fraction (EF). The enantioselective monitoring of microcosms and laboratory-scale experiments with different environmental matrices is herein reported. Thus, this review focuses on the importance of evaluating the EF variation during biodegradation studies of chiral pharmaceuticals, drugs of abuse, and agrochemicals and has implications for the understanding of the environmental fate of chiral pollutants.
---
paper_title: Priority Substances and Emerging Organic Pollutants in Portuguese Aquatic Environment: A Review.
paper_content:
Aquatic environments are among the most noteworthy ecosystems regarding chemical pollution due to anthropogenic pressure. In 2000, the European Commission implemented the Water Framework Directive, with the aim of progressively reducing aquatic chemical pollution of the European Union countries. Therefore, knowledge about the chemical and ecological status is imperative to determine the overall quality of water bodies. Concerning Portugal, some studies have demonstrated the presence of pollutants in the aquatic environment, but an overall report is not available yet. The aim of this paper is to provide a comprehensive review of the occurrence of priority substances included in the Water Framework Directive and some classes of emerging organic pollutants that have been found in the Portuguese aquatic environment. The most frequently studied compounds comprise industrial compounds, natural and synthetic estrogens, phytoestrogens, phytosterols, pesticides, pharmaceuticals and personal care products. Concentrations of these pollutants ranged from a few ng L−1 to higher values such as 30 μg L−1 for industrial compounds in surface waters and up to 106 μg L−1 for the pharmaceutical ibuprofen in wastewaters. Compounds already banned in Europe such as atrazine, alkylphenols and alkylphenol polyethoxylates are still found in surface waters; nevertheless, their origin is still poorly understood. Beyond the contamination of the Portuguese aquatic environment by priority substances and emerging organic pollutants, this review also highlights the need for more research on other classes of pollutants and emphasizes the importance of extending this research to other locations in Portugal, which have not been investigated yet.
---
paper_title: Enantioselective HPLC analysis and biodegradation of atenolol, metoprolol and fluoxetine
paper_content:
The accurate quantification of enantiomers is crucial for assessing the biodegradation of chiral pharmaceuticals in the environment. Methods to quantify enantiomers in environmental matrices are scarce. Here, we used an enantioselective method, high-performance liquid chromatography with fluorescence detection (HPLC-FD), to analyze two beta-blockers, metoprolol and atenolol, and the antidepressant fluoxetine in an activated sludge consortium from a wastewater treatment plant. The vancomycin-based chiral stationary phase was used under polar ionic mode to achieve the enantioseparation of target chiral pharmaceuticals in a single chromatographic run. The method was successfully validated over a concentration range of 20–800 ng/mL for each enantiomer of both beta-blockers and of 50–800 ng/mL for fluoxetine enantiomers. The limits of detection were between 5 and 20 ng/mL and the limits of quantification were between 20 and 50 ng/mL, for all enantiomers. The intra- and inter-batch precision was lower than 5.66 and 8.37 %, respectively. Accuracy values were between 103.03 and 117.92 %, and recovery rates were in the range of 88.48–116.62 %. Furthermore, the enantioselective biodegradation of atenolol, metoprolol and fluoxetine was followed during 15 days. The (S)-enantiomeric form of metoprolol was degraded at higher extents, whereas the degradation of atenolol and fluoxetine did not show enantioselectivity under the applied conditions.
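To show how validation figures of this kind are derived, the snippet below computes recovery and precision (%RSD) from a set of made-up replicate measurements; none of the values are from the study above.

    import statistics

    # Made-up replicate data illustrating how validation figures of the kind
    # quoted above (recovery %, precision as %RSD) are computed; the numbers
    # below are not the paper's data.

    nominal = 200.0                                   # ng/mL spiked concentration
    measured = [206.1, 198.4, 210.7, 203.9, 199.8]    # ng/mL replicate results

    mean = statistics.mean(measured)
    recovery = 100.0 * mean / nominal
    rsd = 100.0 * statistics.stdev(measured) / mean   # relative standard deviation

    print(f"recovery = {recovery:.1f}%, precision (RSD) = {rsd:.2f}%")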
---
paper_title: Stereochemistry of organic compounds
paper_content:
Structure; Stereoisomers; Symmetry; Configuration; Properties of Stereoisomers: Stereoisomer Discrimination; Separation of Stereoisomers, Resolution, Racemization; Heterotopic Ligands and Faces (Prostereoisomerism, Prochirality); Stereochemistry of Alkenes; Conformation of Acyclic Molecules; Configuration and Conformation of Cyclic Molecules; Stereoselective Synthesis; Chiroptical Properties; Chirality in Molecules Devoid of Chiral Centres.
---
paper_title: Removal of fluoxetine and its effects in the performance of an aerobic granular sludge sequential batch reactor.
paper_content:
Fluoxetine (FLX) is a chiral fluorinated pharmaceutical mainly indicated for the treatment of depression and is one of the most distributed drugs. There is clear evidence of environmental contamination with this drug. Aerobic granular sludge sequencing batch reactors constitute a promising technology for wastewater treatment; however, the removal of carbon and nutrients can be affected by micropollutants. In this study, the fate and effect of FLX on reactor performance and on the microbial population were investigated. FLX adsorption/desorption to the aerobic granules was observed. FLX shock loads (≤4 μM) did not show a significant effect on COD removal. Ammonium removal efficiency decreased at the beginning of the first shock load, but after 20 days, ammonia oxidizing bacteria became adapted. The nitrite concentration in the effluent was practically null, indicating that nitrite oxidizing bacteria were not inhibited, whereas nitrate accumulated in the effluent, indicating that denitrification was affected. Phosphate removal was affected at the beginning, showing a gradual adaptation, and the effluent concentration was <0.04 mM after 70 days. A shift in the microbial community occurred, probably due to FLX exposure, which induced adaptation/restructuring of the microbial population. This contributed to the robustness of the reactor, which was able to adapt to the FLX load.
---
paper_title: Effects of Engineered Nanoparticles on the Enantioselective Transformation of Metalaxyl Agent and Commercial Metalaxyl in Agricultural Soils.
paper_content:
The adsorption coefficient of racemic metalaxyl onto an agriculture soil was small and nonenantioselective. Biotransformation was the predominant pathway for the elimination of R-metalaxyl, while abiotic and biotransformation made a comparable contribution to the degradation of S-metalaxyl. Metalaxyl acid was the main transformation intermediate. The enantiomer fraction of metalaxyl decreased with an increase in its initial spike concentration or the presence of the co-constituents in metalaxyl commercial products. Under simulated solar irradiation, the presence of TiO2 promoted the overall transformation kinetics through enhanced biotransformation and extra photoinduced chemical reactions. The promotion was enantioselective and thereafter changed the enantiomer fraction. The results obtained in this study showed that some achiral parameters, although they have no direct impact on enantioselective reactions with enantiomers, can significantly affect the enantioselective transformation of racemic metalaxyl. Thus, our results indicate that the contribution of chemical interactions on the enantioselective transformation of chiral pesticides may be underestimated.
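The adsorption coefficient mentioned above is, in batch experiments, just a mass-balance ratio; the sketch below shows the arithmetic with made-up numbers, not the metalaxyl data.

    # Batch-sorption arithmetic behind a distribution (adsorption) coefficient
    # Kd = Cs / Cw, sketched with made-up numbers rather than the metalaxyl data
    # reported above.

    def kd(c_initial, c_aq, volume_l, soil_mass_kg):
        """Cs (mg/kg) from mass balance, Cw (mg/L); returns Kd in L/kg."""
        sorbed_mg = (c_initial - c_aq) * volume_l
        cs = sorbed_mg / soil_mass_kg
        return cs / c_aq

    print(kd(c_initial=1.00, c_aq=0.82, volume_l=0.05, soil_mass_kg=0.005))
    # -> about 2.2 L/kg, i.e. a small Kd, qualitatively in line with the weak,
    # non-enantioselective sorption described for racemic metalaxyl.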
---
paper_title: Unequal Activities of Enantiomers via Biological Receptors: Examples of Chiral Drug, Pesticide, and Fragrance Molecules
paper_content:
A molecule coming from outside an organism can form a ligand–receptor complex. Upon its formation, a message is transmitted, for example, to certain cells. In this way, two enantiomers can emit messages that differ, either quantitatively or qualitatively. In the present article, these facts are taken as a common basis for the actions of chiral drug, pesticide, and fragrance molecules. For each of these groups, a few examples of current interest were selected. At present, the demand for single enantiomers in medicine and agriculture is economically highly significant. We propose a two-hour lecture, emphasizing the similarity between the different types of receptor-mediated actions of enantiomers, thus connecting some seemingly unrelated subjects in the university curricula. The lecture is intended for medicinal chemistry students; in addition, it should be suitable for the third-year chemistry students who are interested in the applications presented here.
---
paper_title: Homochiral drugs: a demanding tendency of the pharmaceutical industry.
paper_content:
The issue of drug chirality is now a major theme in the design and development of new drugs, underpinned by a new understanding of the role of molecular recognition in many pharmacologically relevant events. In general, three methods are utilized for the production of a chiral drug: the chiral pool, separation of racemates, and asymmetric synthesis. Although the use of chiral drugs predates modern medicine, only since the 1980s has there been a significant increase in the development of chiral pharmaceutical drugs. An important commercial reason is that as patents on racemic drugs expire, pharmaceutical companies have the opportunity to extend patent coverage through development of the chiral switch enantiomers with desired bioactivity. Stimulated by the new policy statements issued by the regulatory agencies, the pharmaceutical industry has systematically begun to develop chiral drugs in enantiomerically pure forms. This new trend has caused a tremendous change in industrial small- and large-scale production of enantiomerically pure drugs, leading to the revisiting and updating of old technologies and to the development of new methodologies for their large-scale preparation (such as the use of stereoselective syntheses and biocatalyzed reactions). The final decision on whether a given chiral drug will be marketed in an enantiomerically pure form, or as a racemic mixture of both enantiomers, will be made by weighing all the medical, financial and social aspects of one or the other form. The kinetic, pharmacological and toxicological properties of individual enantiomers need to be characterized, independently of the final decision.
---
paper_title: Treatment of a simulated wastewater amended with a chiral pharmaceuticals mixture by an aerobic granular sludge sequencing batch reactor
paper_content:
An aerobic granular sludge-sequencing batch reactor (AGS-SBR) was fed for 28 days with a simulated wastewater containing a mixture of chiral pharmaceuticals (CPs) (alprenolol, bisoprolol, metoprolol, propranolol, venlafaxine, salbutamol, fluoxetine and norfluoxetine), each at 1.3 μg L−1. The AGS-SBR exhibited the highest removal efficiency for norfluoxetine, with preferential removal of the (R)-enantiomer, indicating that biologically mediated processes occurred. For all other CPs, removal was non-enantioselective, occurring through biosorption onto AGS. A gradual decline of CPs removal was observed, probably related to the decrease of AGS adsorption capacity. Moreover, the chemical oxygen demand (COD) content in the bulk liquid after anaerobic feeding increased, and P-release dropped, probably because the polyphosphate-accumulating organisms' activity was affected. Nitrification was also affected, as indicated by the increase in the ammonium effluent concentration. Moreover, CPs exposure promoted AGS disintegration, with decreasing granule size. After stopping CPs feeding, the AGS started to recover its compact structure, and the system returned to its normal performance concerning COD- and P-removal. N-removal seemed to be a more sensitive process: while the ammonium levels were fully restored at the end of operation, nitrite reduction was only partially restored. Results provide useful information on AGS performance during the treatment of wastewater containing pharmaceuticals, a frequent scenario in WWTPs.
---
paper_title: Mathematical model and experimental validation of the synergistic effect of selective enantioseparation of (S)-amlodipine from pharmaceutical wastewater using a HFSLM
paper_content:
A case study on the synergistic enantioseparation of (S)-amlodipine from pharmaceutical wastewater using a hollow fiber supported liquid membrane (HFSLM) was examined. A chiral reaction flux mathematical model was applied. Optimum conditions achieved the highest percentages of extraction and stripping, namely 84% and 80%, respectively. Relevant parameters affecting the enantioseparation efficiency of (S)-amlodipine were determined. Standard deviation percentages were 2.31% for extraction and 1.26% for stripping. The mathematical model was found to be in good agreement with the experimental data.
---
paper_title: Enantioselective biodegradation of pharmaceuticals, alprenolol and propranolol, by an activated sludge inoculum.
paper_content:
Biodegradation of chiral pharmaceuticals in the environment can be enantioselective. Thus quantification of enantiomeric fractions during the biodegradation process is crucial for assessing the fate of chiral pollutants. This work presents the biodegradation of alprenolol and propranolol using an activated sludge inoculum, monitored by a validated enantioselective HPLC method with fluorescence detection. The enantioseparation was optimized using a vancomycin-based chiral stationary phase under polar ionic mode. The method was validated using a minimal salts medium inoculated with activated sludge as matrix. The method was selective and linear in the range of 10-800 ng/ml, with R² > 0.99. The accuracy ranged from 85.0 percent to 103 percent, the recovery ranged from 79.9 percent to 103 percent, and the precision measured by the relative standard deviation (RSD) was <7.18 percent for intra-batch and <5.39 percent for inter-batch assays. The limits of quantification and detection for all enantiomers were 10 ng/ml and 2.5 ng/ml, respectively. The method was successfully applied to follow the biodegradation of the target pharmaceuticals using an activated sludge inoculum during a fifteen-day assay. The results indicated slightly higher biodegradation rates for the S-enantiomeric forms of both beta-blockers. The presence of another carbon source maintained the enantioselective degradation pattern while enhancing the biodegradation extent by up to fourteen percent.
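To make the validation figures quoted above concrete, the short sketch below shows how intra-batch precision (RSD) and recovery are typically computed from replicate spiked measurements; the replicate values and spike level are hypothetical, not taken from the study.

```python
# Illustrative sketch (not from the paper): computing precision (RSD) and
# recovery from replicate measurements of a spiked sample.
from statistics import mean, stdev

def rsd_percent(replicates):
    """Relative standard deviation (%) of replicate measurements."""
    return 100.0 * stdev(replicates) / mean(replicates)

def recovery_percent(measured, spiked):
    """Recovery (%) = measured concentration / nominal spiked concentration."""
    return 100.0 * measured / spiked

# Hypothetical intra-batch replicates for one enantiomer spiked at 400 ng/mL
replicates_ng_ml = [392.0, 405.5, 388.7, 410.2, 398.1]
print(f"RSD = {rsd_percent(replicates_ng_ml):.2f} %")
print(f"Recovery = {recovery_percent(mean(replicates_ng_ml), 400.0):.1f} %")
```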
---
paper_title: Enantioselectivity in environmental risk assessment of modern chiral pesticides.
paper_content:
Chiral pesticides comprise a new and important class of environmental pollutants. With the development of industry, more and more chiral pesticides will be introduced into the market, yet their enantioselective ecotoxicology is not clear. Currently used synthetic pyrethroids, organophosphates, acylanilides, phenoxypropanoic acids and imidazolinones often behave enantioselectively in agricultural use, and they pose unpredictable enantioselective ecological risks to non-target organisms and humans. It is therefore necessary to explore the enantioselective toxicology and ecological fate of these chiral pesticides in environmental risk assessment. The enantioselective toxicology and fate of these currently widely used pesticides are discussed in this review article.
---
paper_title: Integrated liquid chromatography method in enantioselective studies: Biodegradation of ofloxacin by an activated sludge consortium.
paper_content:
Ofloxacin is a chiral fluoroquinolone commercialized as racemate and as its enantiomerically pure form levofloxacin. This work presents an integrated liquid chromatography (LC) method with fluorescence detection (FD) and exact mass spectrometry (EMS) developed to assess the enantiomeric biodegradation of ofloxacin and levofloxacin in laboratory-scale microcosms. The optimized enantioseparation conditions were achieved using a macrocyclic antibiotic ristocetin A-bonded CSP (150 × 2.1 mm i.d.; particle size 5 μm) under reversed-phase elution mode. The method was validated using a mineral salts medium as matrix and presented selectivity and linearity over a concentration range from 5 μg L−1 (quantification limit) to 350 μg L−1 for each enantiomer. The method was successfully applied to evaluate biodegradation of ofloxacin enantiomers at 250 μg L−1 by an activated sludge inoculum. Ofloxacin (racemic mixture) and the (S)-enantiomer (levofloxacin) were degraded up to 58 and 52%, respectively. An additional degradable carbon source, acetate, enhanced biodegradation up to 23%. The (S)-enantiomer presented the highest extent of degradation (66.8%) when ofloxacin was supplied along with acetate. Results indicated slightly higher biodegradation extents for the (S)-enantiomer when supplementation was done with ofloxacin. Degradation occurred faster in the first 3 days and proceeded slowly until the end of the assays. The chromatographic results from LC-FD suggested the formation of the (R)-enantiomer during levofloxacin biodegradation, which was confirmed by LC-MS with an LTQ Orbitrap XL.
---
paper_title: Organic Stereochemistry: Guiding Principles and Bio-Medicinal Relevance
paper_content:
Foreword: D. Seebach
Preface: The Editors
Part 1: Symmetry Elements and Operations, Classification of Stereoisomers (B. Testa, G. Vistoli, and A. Pedretti)
Part 2: Stereoisomerism Resulting from One or Several Stereogenic Centers (B. Testa)
Part 3: Other Stereogenic Elements: Axes of Chirality, Planes of Chirality, Helicity, and (E,Z)-Diastereoisomerism (B. Testa)
Part 4: Isomerisms about Single Bonds and in Cyclic Systems (B. Testa, G. Vistoli, and A. Pedretti)
Part 5: Stereoselectivity in Molecular and Clinical Pharmacology (B. Testa, G. Vistoli, A. Pedretti, and J. Caldwell)
Part 6: The Conformation Factor in Molecular Pharmacology (G. Vistoli, B. Testa, and A. Pedretti)
Part 7: The Concept of Substrate Stereoselectivity in Biochemistry and Xenobiotic Metabolism (B. Testa)
Part 8: Prostereoisomerism and the Concept of Product Stereoselectivity in Biochemistry and Xenobiotic Metabolism (B. Testa)
Part 9: Molecular Chirality in Chemistry and Biology: Historical Milestones (J. Gal)
Glossary
Index
---
paper_title: Influence of pH on the Stereoselective Degradation of the Fungicides Epoxiconazole and Cyproconazole in Soils
paper_content:
Many pesticides are chiral and consist of two or more enantiomers/stereoisomers, which may differ in biological activity, toxicity, effects on nontarget organisms, and environmental fate. In the last few years, several racemic compounds have been substituted by enantiomer-enriched or single-isomer compounds (“chiral switch”). In this context, the stereoselective degradation in soils is an important part of a benefit−risk evaluation, but the understanding of the environmental factors affecting the chiral preferences is limited. In this study, the stereoselective degradation of the fungicides epoxiconazole and cyproconazole was investigated in different soils, selected to cover a wide range of soil properties. The fungicides were incubated under laboratory conditions and the degradation and configurational stability of the stereoisomers were followed over time using enantioselective GC−MS with a γ-cyclodextrin derivative as chiral selector. In alkaline and slightly acidic soils, the degradation of epoxicona...
---
paper_title: Environmental Fate of Chiral Pharmaceuticals: Determination, Degradation and Toxicity
paper_content:
Pollution of the aquatic environment by pharmaceuticals is of major concern. Indeed, pharmaceutical pollutants have several undesirable effects on many organisms, such as endocrine disruption and bacterial resistance. They are resistant to several degradation processes, making their removal difficult and slow. Pharmaceuticals reach the environment due to their inefficient removal by wastewater treatment plants (WWTP) and by improper disposal of unused medicines. In aquatic environments pharmaceuticals reach concentrations at trace levels in the ng L−1–μg L−1 range. Many pharmaceutical pollutants are chiral. They occur in nature as a single enantiomer or as mixtures of the two enantiomers, which have different spatial configurations and can thus be metabolized selectively. In spite of similar physical and chemical properties, enantiomers interact differently with enzymes, receptors or other chiral molecules, leading to different biological responses. Therefore they can affect living organisms in different ways. The fate and effects of enantiomers of chiral pharmaceuticals in the environment are still largely unknown. Biodegradation and toxicity can be enantioselective, in contrast to abiotic degradation. Thus accurate methods to measure enantiomeric fractions in the environment are crucial to better understand the biodegradation process and to estimate the toxicity of chiral pharmaceuticals. We review (1) general properties of chiral compounds, (2) current knowledge on chiral pharmaceuticals in the environment, (3) chiral analytical methods to determine the enantiomeric composition in environmental matrices, (4) degradation and removal processes of chiral pharmaceuticals in the environment and (5) their toxicity to aquatic organisms. The major analytical methods discussed are gas chromatography (GC), high performance liquid chromatography (HPLC), electrochemical sensors and biosensors. These chiral methods are crucial for the correct quantification of the enantiomers: if an enantiomer with more or less toxic effects is preferentially degraded, exposure assessed with achiral methodologies would overestimate or underestimate ecotoxicity. Degradation and biodegradation are discussed using a few examples of important therapeutic classes usually detected in the aquatic environment. A few examples of ecotoxicity studies are also given, as the occurrence of enantiomers and their fate in the environment differ with regard to undesirable effects and biochemical processes.
---
paper_title: Analytical separation of enantiomers by gas chromatography on chiral stationary phases.
paper_content:
Enantioselective GC is of widespread use in the enantiomeric analysis of volatile natural products such as pheromones, flavors, fragrances, and essential oils as well as synthetic products obtained from asymmetric syntheses and kinetic resolutions. Whereas enantioseparation of derivatized α-amino acids is performed on chiral stationary phases (CSPs) based on α-amino acid derivatives, alkylated/acylated cyclodextrins are employed as versatile CSPs for a multitude of volatile derivatized and nonderivatized enantiomers. Three main types of CSPs are reviewed and the miniaturization of enantioselective GC, the quantification of enantiomers, and validation issues are described. A list of commercially available fused silica capillary columns coated with various CSPs is compiled.
---
paper_title: Evolution of chiral stationary phase design in the Pirkle laboratories
paper_content:
An historical review of the design of chiral stationary phases (CSPs) in the Pirkle laboratories is presented. Beginning with the discovery of the non-equivalence of nuclear magnetic resonance signals arising from enantiomers in the presence of a chiral solvating agent more than 25 years ago, the Pirkle group has been at the forefront of the study of the enantioselective interactions of chiral molecules. Dozens of CSPs have been synthesized and evaluated, and several of these CSPs have subsequently been commercialized and are now widely used by researchers around the world. Several recently developed CSPs are also presented, and general principles which have guided the design of these CSPs are discussed.
---
paper_title: Solid-phase extraction combined with dispersive liquid-liquid microextraction and chiral liquid chromatography-tandem mass spectrometry for the simultaneous enantioselective determination of representative proton-pump inhibitors in water samples
paper_content:
This report describes, for the first time, the simultaneous enantioselective determination of proton-pump inhibitors (PPIs: omeprazole, lansoprazole, pantoprazole, and rabeprazole) in environmental water matrices based on solid-phase extraction combined with dispersive liquid-liquid microextraction (SPE-DLLME) and chiral liquid chromatography-tandem mass spectrometry. The best SPE-DLLME results were obtained with a PEP-2 column using methanol-acetonitrile (1/1, v/v) as elution solvent, and dichloroethane and acetonitrile as extractant and disperser solvent, respectively. The separation and determination were performed using reversed-phase chromatography on a cellulose chiral stationary phase, a Chiralpak IC (250 mm × 4.6 mm, 5 μm) column, under isocratic conditions at a 0.6 mL min−1 flow rate. The analytes were detected in multiple reaction monitoring (MRM) mode by triple quadrupole mass spectrometry. Isotopically labeled internal standards were used to compensate for matrix interferences. The method provided enrichment factors of around 500. Under optimal conditions, the mean recoveries for all eight enantiomers from the water samples were 89.3-107.3% with 0.9-10.3% intra-day RSD and 2.3-8.1% inter-day RSD at the 20 and 100 ng L−1 levels. Correlation coefficients (r²) ≥ 0.999 were achieved for all enantiomers within the range of 2-500 μg L−1. The method detection and quantification limits were at very low levels, within the ranges of 0.67-2.29 ng L−1 and 2.54-8.68 ng L−1, respectively. This method was successfully applied to the determination of the concentrations and enantiomeric fractions of the targeted analytes in wastewater and river water, making it applicable to the assessment of the enantiomeric fate of PPIs in the environment.
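As a rough illustration of where an enrichment factor of the order reported above (~500) comes from, the sketch below relates it to sample volume, final extract volume and recovery; the volumes and recovery used are assumptions for illustration only, not values from the paper.

```python
# Hypothetical sketch: enrichment factor from a preconcentration step.
# EF = (analyte concentration in extract) / (analyte concentration in sample)
#    = recovery * V_sample / V_extract
def enrichment_factor(recovery_fraction, sample_volume_ml, extract_volume_ml):
    """Enrichment factor for a given recovery and volume reduction."""
    return recovery_fraction * sample_volume_ml / extract_volume_ml

# Assumed values: 95% recovery, 500 mL water sample concentrated to 1 mL extract
print(enrichment_factor(0.95, 500.0, 1.0))  # ~475, i.e. of the order of 500
```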
---
paper_title: Assessment of the pharmaceutical active compounds removal in wastewater treatment systems at enantiomeric level. Ibuprofen and naproxen.
paper_content:
The enantioselective degradation of ibuprofen and naproxen enantiomers was evaluated in five different wastewater treatment systems, including three constructed wetlands (vertical- and horizontal-flow configurations), a sand filter and an activated sludge wastewater treatment plant. In addition, injection experiments were carried out with racemic ibuprofen at microcosm- and pilot-scale constructed wetlands. Ibuprofen and naproxen have an asymmetric carbon atom and, consequently, two enantiomeric forms (i.e. S and R). The enantiomeric fraction (EF=S/(S+R)) in the raw sewage and effluents of various wastewater treatments were found to be compound-dependent (i.e. ibuprofen: EF(influent)=0.73-0.90, EF(effluent)=0.60-0.76; naproxen: EF(influent)=0.88-0.90, EF(effluent)=0.71-0.86). Of the two chiral pharmaceuticals, naproxen was the only one whose effluent EF correlated with its removal efficiency (p<0.05). The lack of correlation found for ibuprofen was attributable to the fact that its enantioselective degradation kinetics were different under prevailing aerobic and anaerobic conditions. Injection experiments of ibuprofen in constructed wetlands at microcosm and pilot-scale followed similar trends. Hence, under prevailing aerobic conditions, S-ibuprofen degraded faster than R-ibuprofen, whereas under prevailing anaerobic conditions, the degradation was not enantioselective. In summary, the naproxen EF measurements in wastewater effluents show that naproxen is a suitable alternative for evaluating the removal efficiency of treatment systems because its enantioselective degradation is similar under prevailing aerobic and anaerobic conditions.
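The enantiomeric fraction used above is a simple ratio, EF = S/(S + R); the sketch below works through it with hypothetical concentrations to show how a racemic mixture gives EF = 0.5 and how enantioselective degradation shifts the value.

```python
# Worked example of the enantiomeric fraction defined in the abstract above,
# EF = S / (S + R). A racemic mixture gives EF = 0.5; deviation from 0.5
# indicates enantioselective processes. Concentrations are hypothetical.
def enantiomeric_fraction(conc_s, conc_r):
    """EF = S / (S + R); both concentrations in the same units."""
    return conc_s / (conc_s + conc_r)

print(enantiomeric_fraction(0.50, 0.50))  # 0.50 -> racemic
print(enantiomeric_fraction(0.80, 0.20))  # 0.80 -> strongly S-enriched
print(enantiomeric_fraction(0.65, 0.35))  # 0.65 -> partial S excess after treatment
```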
---
paper_title: Enantioselective HPLC analysis and biodegradation of atenolol, metoprolol and fluoxetine
paper_content:
The accurate quantification of enantiomers is crucial for assessing the biodegradation of chiral pharmaceuticals in the environment. Methods to quantify enantiomers in environmental matrices are scarce. Here, we used an enantioselective method, high-performance liquid chromatography with fluorescence detection (HPLC-FD), to analyze two beta-blockers, metoprolol and atenolol, and the antidepressant fluoxetine in an activated sludge consortium from a wastewater treatment plant. The vancomycin-based chiral stationary phase was used under polar ionic mode to achieve the enantioseparation of target chiral pharmaceuticals in a single chromatographic run. The method was successfully validated over a concentration range of 20–800 ng/mL for each enantiomer of both beta-blockers and of 50–800 ng/mL for fluoxetine enantiomers. The limits of detection were between 5 and 20 ng/mL and the limits of quantification were between 20 and 50 ng/mL, for all enantiomers. The intra- and inter-batch precision was lower than 5.66 and 8.37 %, respectively. Accuracy values were between 103.03 and 117.92 %, and recovery rates were in the range of 88.48–116.62 %. Furthermore, the enantioselective biodegradation of atenolol, metoprolol and fluoxetine was followed during 15 days. The (S)-enantiomeric form of metoprolol was degraded at higher extents, whereas the degradation of atenolol and fluoxetine did not show enantioselectivity under the applied conditions.
---
paper_title: Use of the chiral pharmaceutical propranolol to identify sewage discharges into surface waters.
paper_content:
The discharge of relatively small volumes of untreated sewage is a source of wastewater-derived contaminants in surface waters that is often ignored because it is difficult to discriminate from wastewater effluent. To identify raw sewage discharges, we analyzed the two enantiomers of the popular chiral pharmaceutical, propranolol, after derivitization to convert the enantiomers to diastereomers. The enantiomeric fraction (the ratio of the concentration of one of its isomers to the total concentration) of propranolol in the influent of five wastewater treatment plants was 0.50 +/- 0.02, while after secondary treatment it was 0.42 or less. In a laboratory study designed to simulate an activated sludge municipal wastewater treatment system, the enantiomeric fraction of propranolol decreased from 0.5 to 0.43 as the compound underwent biotransformation. In a similar system designed to simulate an effluent-dominanted surface water, the enantiomeric fraction of propranolol remained constant as it underwent biotransformation. Analysis of samples from surface waters with known or suspected discharges of untreated sewage contained propranolol with an enantiomeric fraction of approximately 0.50 whereas surface waters with large discharges of wastewater effluent contained propranolol with enantiomeric fractions similar to those observed in wastewater effluent. Measurement of enantiomers of propranolol may be useful in detecting and documenting contaminants related to leaking sewers and combined sewer overflows.
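The interpretation logic described above reduces to comparing a measured EF against the reported reference values (EF ≈ 0.50 ± 0.02 in raw sewage, ≤ 0.42 after secondary treatment); the helper below is an illustrative restatement of that comparison, not part of the study.

```python
# Illustrative helper: classify a water sample from the propranolol EF,
# using the reference values reported in the abstract above as thresholds.
def classify_propranolol_source(ef):
    """Return a qualitative interpretation of a measured enantiomeric fraction."""
    if 0.48 <= ef <= 0.52:
        return "consistent with untreated sewage (racemic propranolol)"
    if ef <= 0.42:
        return "consistent with treated wastewater effluent"
    return "intermediate / inconclusive"

print(classify_propranolol_source(0.50))
print(classify_propranolol_source(0.41))
```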
---
paper_title: Tutorial review on validation of liquid chromatography-mass spectrometry methods: part II.
paper_content:
This is the part II of a tutorial review intending to give an overview of the state of the art of method validation in liquid chromatography mass spectrometry (LC-MS) and discuss specific issues that arise with MS (and MS-MS) detection in LC (as opposed to the "conventional" detectors). The Part II starts with briefly introducing the main quantitation methods and then addresses the performance related to quantification: linearity of signal, sensitivity, precision, trueness, accuracy, stability and measurement uncertainty. The last section is devoted to practical considerations in validation. With every performance characteristic its essence and terminology are addressed, the current status of treating it is reviewed and recommendations are given, how to handle it, specifically in the case of LC-MS methods.
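As a minimal illustration of the linearity assessment discussed in the tutorial, the sketch below fits a calibration line of analyte/internal-standard response ratio versus concentration and back-calculates an unknown; the calibration data are hypothetical.

```python
# Minimal sketch (not from the tutorial): linearity check and back-calculation
# for an internal-standard calibration in LC-MS quantification.
import numpy as np

conc = np.array([5, 10, 50, 100, 250, 500], dtype=float)   # standard levels, ng/L
resp = np.array([0.11, 0.21, 1.02, 2.05, 5.10, 10.20])     # analyte/IS area ratios

slope, intercept = np.polyfit(conc, resp, 1)                # least-squares line
r = np.corrcoef(conc, resp)[0, 1]
print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, r^2 = {r**2:.4f}")

unknown_ratio = 3.40                                        # measured area ratio
print(f"back-calculated concentration = {(unknown_ratio - intercept) / slope:.1f} ng/L")
```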
---
paper_title: Enantiomeric determination of chiral persistent organic pollutants and their metabolites
paper_content:
Many persistent organic pollutants (POPs) are chiral and are generally released into the environment as racemates, but frequently undergo alterations in enantiomeric composition as soon as they are subjected to biochemical processes. Enantiospecific analysis of chiral POPs is important, since enantiomers of chiral compounds often exhibit differences in biological activity, and most biochemical processes in nature are stereospecific. The effects and the environmental fate of the enantiomers of chiral pollutants therefore need to be investigated separately. Chiral separation of enantiomers is one of the most challenging tasks for any analytical technique. We discuss different aspects of enantiospecific analysis of chiral POPs, including classical POPs and their metabolites, as well as some emerging POPs.
---
paper_title: Enantioselective determination of representative profens in wastewater by a single-step sample treatment and chiral liquid chromatography-tandem mass spectrometry.
paper_content:
This manuscript describes, for the first time, the simultaneous enantioselective determination of ibuprofen, naproxen and ketoprofen in wastewater based on liquid chromatography tandem mass spectrometry (LC-MS/MS). The method uses a single-step sample treatment based on microextraction with a supramolecular solvent made up of hexagonal inverted aggregates of decanoic acid, formed in situ in the wastewater sample through a spontaneous self-assembly process. Microextraction of profens was optimized and the analytical method validated. Isotopically labeled internal standards were used to compensate for both matrix interferences and recoveries. Apparent recoveries for the six enantiomers in influent and effluent wastewater samples were in the interval 97-103%. Low method detection limits (MDLs) were obtained (0.5-1.2 ng L−1) as a result of the high concentration factors achieved in the microextraction process (i.e. actual concentration factors of 469-736). No analyte derivatization or evaporation of extracts, as is required with GC-MS, was necessary. Relative standard deviations for enantiomers in wastewater were always below 8%. The method was applied to the determination of the concentrations and enantiomeric fractions of the targeted analytes in influents and effluents from three wastewater treatment plants. All the values found for profen enantiomers were consistent with those previously reported and confirmed again the suitability of using the enantiomeric fraction of ibuprofen as an indicator of the discharge of untreated or poorly treated wastewaters. Both the analytical and operational features of this method make it applicable to the assessment of the enantiomeric fate of profens in the environment.
---
paper_title: Field and laboratory studies of the fate and enantiomeric enrichment of venlafaxine and O-desmethylvenlafaxine under aerobic and anaerobic conditions.
paper_content:
The stereoselectivity of R,S-venlafaxine and its metabolites R,S-O-desmethylvenlafaxine, N-desmethylvenlafaxine, O,N-didesmethylvenlafaxine, N,N-didesmethylvenlafaxine and tridesmethylvenlafaxine was studied in three processes: (i) anaerobic and aerobic laboratory scale tests; (ii) six wastewater treatment plants (WWTPs) operating under different conditions; and (iii) a variety of wastewater treatments including conventional activated sludge, natural attenuation along a receiving river stream and storage in operational and seasonal reservoirs. In the laboratory and field studies, the degradation of venlafaxine yielded O-desmethylvenlafaxine as the dominant metabolite under aerobic and anaerobic conditions. Venlafaxine was almost exclusively converted to O-desmethylvenlafaxine under anaerobic conditions, but only a fraction of the drug was transformed to O-desmethylvenlafaxine under aerobic conditions. Degradation of venlafaxine involved only small stereoisomeric selectivity. In contrast, the degradation of O-desmethylvenlafaxine yielded remarkable S to R enrichment under aerobic conditions but none under anaerobic conditions. Determination of venlafaxine and its metabolites in the WWTPs agreed well with the stereoselectivity observed in the laboratory studies. Our results suggest that the levels of the drug and its metabolites and the stereoisomeric enrichment of the metabolite and its parent drug can be used for source tracking and for discrimination between domestic and nondomestic wastewater pollution. This was indeed demonstrated in the investigations carried out at the Jerusalem WWTP.
---
paper_title: Enantioselective simultaneous analysis of selected pharmaceuticals in environmental samples by ultrahigh performance supercritical fluid based chromatography tandem mass spectrometry.
paper_content:
In order to assess the true impact of each single enantiomer of pharmacologically active compounds (PACs) in the environment, highly efficient, fast and sensitive analytical methods are needed. For the first time, this paper focuses on the use of ultrahigh performance supercritical fluid based chromatography coupled to a triple quadrupole mass spectrometer to develop multi-residue enantioselective methods for chiral PACs in environmental matrices. This technique exploits the advantages of supercritical fluid chromatography, ultrahigh performance liquid chromatography and mass spectrometry. Two coated modified 2.5 μm polysaccharide-based chiral stationary phases were investigated: an amylose tris-3,5-dimethylphenylcarbamate column and a cellulose tris-3-chloro-4-methylphenylcarbamate column. The effect of different chromatographic variables on chiral recognition is highlighted. This novel approach resulted in the baseline resolution of the enantiomers of 13 PACs (aminorex, carprofen, chloramphenicol, 3-N-dechloroethylifosfamide, flurbiprofen, 2-hydroxyibuprofen, ifosfamide, imazalil, naproxen, ofloxacin, omeprazole, praziquantel and tetramisole) and partial resolution of the enantiomers of 2 PACs (ibuprofen and indoprofen) under fast-gradient conditions (<10 min analysis time). The overall performance of the methods was satisfactory. The applicability of the methods was tested on influent and effluent wastewater samples. To the best of our knowledge, this is the first feasibility study on the simultaneous separation of chemically diverse chiral PACs in environmental matrices using ultrahigh performance supercritical fluid based chromatography coupled with tandem mass spectrometry.
---
paper_title: Enantiomeric Profiling of Chiral Drugs in Wastewater and Receiving Waters
paper_content:
The aim of this paper is to discuss the enantiomer-specific fate of chiral drugs during wastewater treatment and in receiving waters. Several chiral drugs were studied: amphetamine-like drugs of abuse (amphetamine, methamphetamine, MDMA, MDA), ephedrines (ephedrine and pseudoephedrine), antidepressant venlafaxine, and beta-blocker atenolol. A monitoring program was undertaken in 7 WWTPs (utilizing mainly activated sludge and trickling filters technologies) and at 6 sampling points in receiving waters over the period of 9 months. The results revealed the enantiomer-specific fate of all studied drugs during both wastewater treatment and in the aqueous environment. The extent of stereoselectivity depended on several parameters including: type of chiral drug (high stereoselectivity was recorded for atenolol and MDMA), treatment technology used (activated sludge showed higher stereoselectivity than trickling filters), and season (higher stereoselectivity was observed in the aqueous environment over the spring/...
---
paper_title: Gas chromatographic enantioseparation of chiral pollutants—techniques and results
paper_content:
This chapter discusses the enantioseparation of chiral pollutants by gas chromatography. The phenomenon of enantioselective processes for a certain chiral pollutant in the environment can be compared with its interaction with different chiral stationary phases (CSPs). On one CSP the enantiomers are resolved, and on the next they are not. On one CSP the (+)-enantiomer elutes second (interacts more with the CSP); on another CSP the (+)-enantiomer elutes first. A prerequisite for enantioseparation is the interaction of the enantiomers with another chiral system, followed by the formation of associations that behave like diastereomers. The most complex point is the interpretation of the data with the aim of generalizing the findings. To obtain knowledge from the results, one should know the meaning of measured EFs in a general context. Nevertheless, approaches such as using EF determinations in closed but complex systems such as sewage treatment plants are a promising tool to find out more about the fate of chemicals therein. Despite all uncertainties in EF data, the positive aspects prevail, and the determination of EFs in environmental samples is today far more than the addition of a new parameter to monitoring data. The data compiled in this article may be helpful for stimulating new studies and eliciting some hidden aspects of this acknowledged field of research.
---
paper_title: Selective degradation of ibuprofen and clofibric acid in two model river biofilm systems
paper_content:
A field survey indicated that the Elbe and Saale Rivers were contaminated with both clofibric acid and ibuprofen. In Elbe River water we could detect the metabolite hydroxy-ibuprofen. Analyses of the city of Saskatoon sewage effluent discharged to the South Saskatchewan River detected clofibric acid but neither ibuprofen nor any metabolite. Laboratory studies indicated that the pharmaceutical ibuprofen was readily degraded in a river biofilm reactor. Two metabolites were detected and identified as hydroxy- and carboxy-ibuprofen. Both metabolites were observed to degrade in the biofilm reactors. However, in human metabolism the metabolite carboxy-ibuprofen appears and degrades second, whereas the opposite occurs in biofilm systems. In biofilms the pharmacologically inactive stereoisomer of ibuprofen is degraded predominantly. In contrast, clofibric acid was not biologically degraded during the experimental period of 21 days. Similar results were obtained using biofilms developed using waters from either the South Saskatchewan or Elbe River. In a sterile reactor no losses of ibuprofen were observed. These results suggested that abiotic losses and adsorption played only a minimal role in the fate of the pharmaceuticals in the river biofilm reactors.
---
paper_title: Attenuation of wastewater-derived contaminants in an effluent-dominated river.
paper_content:
Although wastewater-derived chemical contaminants undergo transformation through a variety of mechanisms, the relative importance of processes such as biotransformation and photolysis is poorly understood under conditions representative of large rivers. To assess attenuation rates under conditions encountered in such systems, samples from the Trinity River were analyzed for a suite of wastewater-derived contaminants during a period when wastewater effluent accounted for nearly the entire flow of the river over a travel time of approximately 2 weeks. While the concentration of total adsorbable organic iodide, a surrogate for recalcitrant X-ray phase contrast media in wastewater, was approximately constant throughout the river, concentrations of ethylenediamine tetraacetate, gemfibrozil, ibuprofen, metoprolol, and naproxen all decreased between 60% and 90% as the water flowed downstream. Comparison of attenuation rates estimated in the river with rates measured in laboratory-scale microcosms suggests that biotransformation was more important than photolysis for most of the compounds. Further evidence for biotransformation in the river was provided by measurements of the enantiomeric fraction of metoprolol, which showed a gradual decrease as the water moved downstream. Results of this study indicate that natural attenuation can result in significant decreases in concentrations of wastewater-derived contaminants in large rivers.
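If the in-stream losses reported above are approximated as first-order decay (an assumption of this sketch, not stated by the authors), the 60-90% decreases over roughly two weeks of travel time translate into the rate constants and half-lives computed below.

```python
# Hedged sketch: first-order attenuation, C(t) = C0 * exp(-k * t).
# The first-order assumption and the 14-day travel time are illustrative,
# chosen to match the ~2 weeks and 60-90% removal quoted in the abstract.
import math

def first_order_k(c0, c, travel_time_days):
    """Rate constant k (per day) from upstream/downstream concentrations."""
    return math.log(c0 / c) / travel_time_days

for removal in (0.60, 0.90):
    k = first_order_k(1.0, 1.0 - removal, 14.0)
    print(f"{int(removal * 100)}% removal over 14 d -> k ≈ {k:.3f} d^-1, "
          f"half-life ≈ {math.log(2) / k:.1f} d")
```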
---
paper_title: Chiral recognition by enantioselective liquid chromatography: Mechanisms and modern chiral stationary phases
paper_content:
An overview of the state of the art in LC enantiomer separation is presented. This tutorial review is mainly focused on mechanisms of chiral recognition and enantiomer distinction of popular chiral selectors and corresponding chiral stationary phases, including discussions of thermodynamics, the additivity principle of binding increments, site-selective thermodynamics, extrathermodynamic approaches, and methods employed for the investigation of dominating intermolecular interactions and complex structures, such as spectroscopic methods (IR, NMR), X-ray diffraction and computational methods. Modern chiral stationary phases are discussed with particular focus on those that are commercially available and broadly used. It is attempted to provide the reader with vivid images of molecular recognition mechanisms of selected chiral selector–selectand pairs on the basis of solid-state X-ray crystal structures and simulated computer models, respectively. Such snapshot images illustrated in this communication unfortunately cannot account for the molecular dynamics of the real world, but are supposed to be helpful for understanding. The exploding number of papers about applications of various chiral stationary phases in numerous fields of enantiomer separation is not covered systematically.
---
paper_title: Analysis of the chiral pollutants by chromatography
paper_content:
Many organic environmental pollutants are mixtures of their chiral isomers. Studies have revealed that these isomers have different enantioselective distribution, metabolism and toxicity. Today, analysis of chiral pollutants represents an urgent need. This review discusses the analysis of chiral pollutants using gas chromatography (GC), high performance liquid chromatography (HPLC), capillary electrochromatography (CEC), micellar electrokinetic chromatography (MEKC), supercritical fluid chromatography (SFC) and thin layer chromatography (TLC).
---
paper_title: Critical evaluation of monitoring strategy for the multi-residue determination of 90 chiral and achiral micropollutants in effluent wastewater.
paper_content:
It is essential to monitor the release of organic micropollutants from wastewater treatment plants (WWTPs) for developing environmental risk assessment and assessing compliance with legislative regulation. In this study the impact of sampling strategy on the quantitative determination of micropollutants in effluent wastewater was investigated. An extended list of 90 chiral and achiral micropollutants representing a broad range of biological and physico-chemical properties were studied simultaneously for the first time. During composite sample collection micropollutants can degrade resulting in the under-estimation of concentration. Cooling collected sub-samples to 4°C stabilised ≥81 of 90 micropollutants to acceptable levels (±20% of the initial concentration) in the studied effluents. However, achieving stability for all micropollutants will require an integrated approach to sample collection (i.e., multi-bottle sampling with more than one stabilisation method applied). Full-scale monitoring of effluent revealed time-paced composites attained similar information to volume-paced composites (influent wastewater requires a sampling mode responsive to flow variation). The option of monitoring effluent using time-paced composite samplers is advantageous as not all WWTPs have flow controlled samplers or suitable sites for deploying portable flow meters. There has been little research to date on the impact of monitoring strategy on the determination of chiral micropollutants at the enantiomeric level. Variability in wastewater flow results in a dynamic hydraulic retention time within the WWTP (and upstream sewerage system). Despite chiral micropollutants being susceptible to stereo-selective degradation, no diurnal variability in their enantiomeric distribution was observed. However, unused medication can be directly disposed into the sewer network creating short-term (e.g., daily) changes to their enantiomeric distribution. As enantio-specific toxicity is observed in the environment, similar resolution of enantio-selective analysis to more routinely applied achiral methods is needed throughout the monitoring period for accurate risk assessment.
---
paper_title: Enantiomeric analysis of polycyclic musks in water by chiral gas chromatography-tandem mass spectrometry.
paper_content:
Galaxolide (HHCB), tonalide (AHTN), phantolide (AHDI), traseolide (ATII) and cashmeran (DPMI) are synthetic polycyclic musks (PCMs). They are all commonly used in fragrance industries as racemic mixtures. A sensitive and robust enantioselective analytical method was developed to facilitate measurement of these chemicals in wastewater and environmental samples. The method is based on gas chromatography with tandem mass spectrometry (GC-MS/MS). Enantioseparation was assessed using four commercially available chiral capillary columns. Optimised resolution was achieved using a dual-column configuration of a chiral heptakis(2,3-di-O-methyl-6-O-t-butyldimethylsilyl)-β-cyclodextrin column combined with a (non-chiral) HP-5MS column. This configuration was demonstrated to be capable of effectively resolving all commercially manufactured enantiomers of these five PCMs. Method detection limits for single enantiomers in drinking water and surface water range between 1.01 and 2.39 ng L−1. Full validation of the application of this method in these aqueous matrices is provided.
---
paper_title: Role of Chirality and Macroring in Imprinted Polymers with Enantiodiscriminative Power
paper_content:
Enantioselective discrimination of chiral amines is of great importance as their biological properties often differ. Therefore, here we report the development of synthetic receptors for their enantioselective recognition and pH-sensitive drug release. This paper reports the preparation of three pyridine and two benzene derivatives containing an allyloxy group [(S,S)-5, 6–9] as well as their evaluation as functional monomer anchors for chiral imprinting of amines. The enantiomeric enriching ability and controlled release of the imprinted polymers (IPs) were evaluated using racemic mixture of 1-(1-naphthyl)ethylamine hydrogen perchlorate (1). The effect of the enantiomeric purity of the template on the enantioseparation performance was investigated. Racemic template in combination with enantiomerically pure macrocyclic anchors and vice versa yields IPs with excellent enantiomeric recognition. In vitro drug delivery, enantiomeric enrichment and pH-sensitive release were investigated through kinetic models.
---
paper_title: Chiral profiling of azole antifungals in municipal wastewater and recipient rivers of the Pearl River Delta, China
paper_content:
Enantiomeric compositions and fractions (EFs) of three chiral imidazole (econazole, ketoconazole, and miconazole) and one chiral triazole (tebuconazole) antifungals were investigated in wastewater, river water, and bed sediment of the Pearl River Delta, South China. The imidazole pharmaceuticals in the untreated wastewater were racemic to weakly nonracemic (EFs of 0.450-0.530) and showed weak enantioselectivity during treatment in the sewage treatment plant. The EFs of the dissolved azole antifungals were usually different from those of the sorbed azoles in the suspended particulate matter, suggesting different behaviors for the enantiomers of the chiral azole antifungals in the dissolved and particulate phases of the wastewater. The azole antifungals were widely present in the rivers. The bed sediment was a sink for the imidazole antifungals. The imidazoles were prevalently racemic, whereas tebuconazole was widely nonracemic in the rivers. Seasonal effects were observed on distribution and chirality of the azole antifungals. Concentrations of the azole antifungals in the river water were relatively higher in winter than in spring and summer while the EF of miconazole in the river water was higher in summer. The mechanism of enantiomeric behavior of the chiral azole antifungals in the environment warrants further research.
---
paper_title: Synthesis and Preliminary Structural and Binding Characterization of New Enantiopure Crown Ethers Containing an Alkyl Diarylphosphinate or a Proton-Ionizable Diarylphosphinic Acid Unit
paper_content:
New enantiopure crown ethers containing either an ethyl diarylphosphinate moiety [(S,S)-4 to (S,S)-7] or a proton-ionizable diarylphosphinic acid unit [(S,S)-8 to (S,S)-11] have been synthesized. Electronic circular dichroism (ECD) studies on the complexation of these new enantiopure crown ethers with the enantiomers of α-(1-naphthyl)ethylammonium perchlorate (1-NEA) and with α-(2-naphthyl)ethylammonium perchlorate (2-NEA) were also carried out. These studies showed appreciable enantiomeric recognition with heterochiral [(S,S)-crown ether plus either (R)-1- or (R)-2-NEA] preference. Theoretical calculations found three significant intermolecular hydrogen bonds in the complexes of (S,S)-9. Furthermore, preference for heterochiral complexes was also observed, in good agreement with ECD results. Complex formation constants were determined by NMR titration for four selected crown ether/NEA pairs.
---
paper_title: Enantiomer separation of polychlorinated biphenyl atropisomers and polychlorinated biphenyl retention behavior on modified cyclodextrin capillary gas chromatography columns.
paper_content:
Seven commercially available chiral capillary gas chromatography columns containing modified cyclodextrins were evaluated for their ability to separate enantiomers of the 19 stable chiral polychlorinated biphenyl (PCB) atropisomers, and for their ability to separate these enantiomers from achiral congeners, as required for trace environmental analysis of chiral PCBs. The enantiomers of each of the 19 chiral PCBs were at least partially separated on one or more of these columns. Enantiomeric ratios of eleven atropisomers could also be quantified on six columns, as they did not coelute with any other congener containing the same number of chlorine atoms and so could be quantified using gas chromatography–mass spectrometry. Analysis of a lake sediment heavily contaminated with PCBs showed enantioselective occurrence of PCB 91, positive proof of enantioselective in situ reductive dechlorination at the sampling site.
---
paper_title: Occurrence and Behavior of Pesticides in Rainwater, Roof Runoff, and Artificial Stormwater Infiltration
paper_content:
To prevent overloading of sewer systems and to ensure sufficient recharging of the groundwater underneath sealed urban areas, collection and artificial infiltration of roof runoff water has become very popular in many countries including Switzerland. However, there is still a considerable lack of knowledge concerning the quality of roof runoff, particularly with respect to the presence of pesticides. In this work, the occurrence and the temporal variations in concentration in rainwater and in roof runoff from different types of roofs (i.e., clay tile roofs, polyester roofs, flat gravel roofs) were determined for the most important members of three widely used classes of pesticides (i.e., triazines, acetamides, phenoxy acids). It is shown that in rain and roof runoff, maximum pesticide concentrations originating primarily from agricultural use occurred during and right after the application periods. Maximum average concentrations for single rain events and total loads per year were, for example, for atrazi...
---
paper_title: Using chiral liquid chromatography quadrupole time-of-flight mass spectrometry for the analysis of pharmaceuticals and illicit drugs in surface and wastewater at the enantiomeric level.
paper_content:
This paper presents and compares for the first time two chiral LC-QTOF-MS methodologies (utilising CBH and Chirobiotic V columns, with cellobiohydrolase and vancomycin as chiral selectors) for the quantification of amphetamine, methamphetamine, MDA (methylenedioxyamphetamine), MDMA (methylenedioxymethamphetamine), propranolol, atenolol, metoprolol, fluoxetine and venlafaxine in river water and sewage effluent. The lowest MDLs (0.3-5.0 ng L−1 and 1.3-15.1 ng L−1 for river water and sewage effluent, respectively) were observed using the chiral column Chirobiotic V, with the exception of methamphetamine and MDMA, which had lower MDLs using the CBH column. However, the CBH column resulted in better resolution of enantiomers (Rs = 2.5 for amphetamine compared with Rs = 1.2 with Chirobiotic V). Method recovery rates were typically >80% for both methodologies. Pharmaceuticals and illicit drugs detected and quantified in environmental samples were successfully identified using MS/MS confirmation. In sewage effluent, the total beta-blocker concentrations of propranolol, atenolol and metoprolol were on average 77.0, 1091.0 and 3.6 ng L−1, with EFs (enantiomeric fractions) of 0.43, 0.55 and 0.54, respectively. In river water, total propranolol and atenolol were quantified on average at <10.0 ng L−1. Differences in EF between sewage and river water matrices were evident: venlafaxine was observed with respective EFs of 0.43 ± 0.02 and 0.58 ± 0.02.
---
paper_title: Comparative HPLC methods for β-blockers separation using different types of chiral stationary phases in normal phase and polar organic phase elution modes. Analysis of propranolol enantiomers in natural waters.
paper_content:
The enantioselectivities of β-blockers (propranolol, metoprolol, atenolol and pindolol) on four different types of chiral stationary phases (CSPs), Chiralpak AD-H, Lux Cellulose-1, Chirobiotic T and Sumichiral OA-4900, were compared using polar organic (PO) and normal phase (NP) elution modes. Method optimization was demonstrated by modifying parameters such as the organic modifier composition (ethanol, 2-propanol and acetonitrile) and basic mobile phase additives (triethylamine, diethylamine, ethanolamine, and butylamine). In normal phase elution mode with Lux Cellulose-1, the four pairs of enantiomers can be separated in the same run in gradient elution mode. Additionally, a simple chiral HPLC–DAD method using a newly commercialized polysaccharide-based CSP from Phenomenex (Lux Cellulose-1) in NP elution mode, for the enantioselective determination of propranolol in water samples after highly selective molecularly imprinted polymer extraction, was validated. The optimized conditions were a mobile phase composed of n-hexane/ethanol/DEA (70/30/0.3, v/v/v) at a flow rate of 1.0 mL min−1 and 25 °C. The method is selective, precise and accurate and was found to be linear in the range of 0.125–50 μg mL−1 (R² > 0.999) with a method detection limit (MDL) of 0.4 μg mL−1 for both enantiomers. Recoveries achieved for both enantiomers ranged from 97 to 109%.
---
paper_title: Multi-residue enantiomeric analysis of pharmaceuticals and their active metabolites in the Guadalquivir River basin (South Spain) by chiral liquid chromatography coupled with tandem mass spectrometry
paper_content:
This paper describes the development and application of a multi-residue chiral liquid chromatography coupled with tandem mass spectrometry method for simultaneous enantiomeric profiling of 18 chiral pharmaceuticals and their active metabolites (belonging to several therapeutic classes including analgesics, psychiatric drugs, antibiotics, cardiovascular drugs and β-agonists) in surface water and wastewater. To the authors' knowledge, this is the first time an enantiomeric method including such a high number of pharmaceuticals and their metabolites has been reported. Some of the pharmaceuticals have never been studied before in environmental matrices. Among them are timolol, betaxolol, carazolol and clenbuterol. A monitoring programme of the Guadalquivir River basin (South Spain), including 24 sampling sites and five wastewater treatment plants along the basin, revealed that enantiomeric composition of studied pharmaceuticals is dependent on compound and sampling site. Several compounds such as ibuprofen, atenolol, sotalol and metoprolol were frequently found as racemic mixtures. On the other hand, fluoxetine, propranolol and albuterol were found to be enriched with one enantiomer. Such an outcome might be of significant environmental relevance as two enantiomers of the same chiral compound might reveal different ecotoxicity. For example, propranolol was enriched with S(-)-enantiomer, which is known to be more toxic to Pimephales promelas than R(+)-propranolol. Fluoxetine was found to be enriched with S(+)-enantiomer, which is more toxic to P. promelas than R(-)-fluoxetine.
---
paper_title: Small Molecules as Chromatographic Tools for HPLC Enantiomeric Resolution: Pirkle-Type Chiral Stationary Phases Evolution
paper_content:
This review focuses on the evolution of Pirkle-type chiral stationary phases (CSPs), based on the chiral recognition mechanism of small molecules and on applications directly related to Medicinal Chemistry. The strategies used to design these chiral selectors for the enantioseparation of diverse therapeutic classes of chiral drugs, and the understanding of the recognition mechanism, are emphasized. Pirkle and co-workers' design of different classes of CSPs was initially based on NMR studies, following the principle of reciprocity, together with chromatographic results and studies of chiral recognition phenomena. All those features are described and critically discussed in this review. Finally, based on the general principles established by Pirkle's work, it can be inferred that diverse chiral small molecules can be successfully used as chromatographic tools for enantiomeric resolution. In this context, several research groups were inspired by Pirkle's design to develop new CSPs. Xanthone derivatives bonded to chiral groups were also exploited as selectors for CSPs and are briefly reported.
---
paper_title: Enantiomeric determination of azole antifungals in wastewater and sludge by liquid chromatography–tandem mass spectrometry
paper_content:
A sensitive and reliable liquid chromatographic-tandem mass spectrometric method for enantiomeric determination of five chiral azole antifungals (econazole, ketoconazole, miconazole, tebuconazole, and propiconazole) in wastewater and sludge has been established and validated. An isotope-labeled internal standard was used for quantification. Recovery of the individual enantiomers was usually in the range of 77-102 % for wastewater and 71-95 % for sludge, with relative standard deviations within 20 %. No significant difference (p>0.05) was observed between recovery of pairs of enantiomers of the chiral azole antifungals except for those of tebuconazole. Method quantification limits for individual enantiomers were 0.3-10 ng L(-1) and 3-29 ng g(-1) dry weight for wastewater and sludge, respectively. The method was used to investigate the enantiomeric composition of the azole pharmaceuticals in wastewater and sludge samples from a sewage treatment plant in China. Enantiomers of miconazole, ketoconazole, and econazole were widely detected. The results showed that the azole antifungals in wastewater and sludge were generally racemic or marginally non-racemic. The method is a useful tool for investigation of the enantiomeric occurrence, behavior, and fate of the chiral azole antifungals in the environment.
---
paper_title: Simultaneous enantiomeric determination of propranolol, metoprolol, pindolol, and atenolol in natural waters by HPLC on new polysaccharide-based stationary phase using a highly selective molecularly imprinted polymer extraction.
paper_content:
A simple high-performance liquid chromatography (HPLC-UV) method for the simultaneous enantiomeric determination of propranolol, metoprolol, pindolol, and atenolol in natural water samples was developed and validated, using molecularly imprinted polymer solid-phase extraction. To achieve this purpose, a Lux® Cellulose-1/Sepapak-1 (cellulose tris-(3,5-dimethylphenylcarbamate)) (Phenomenex, Madrid, Spain) chiral stationary phase was used in gradient elution and normal phase mode at ambient temperature. The optimized gradient elution program consisted of a progressive change of the mobile phase polarity from n-hex/EtOH/DEA 90/10/0.5 (v/v/v) to 60/40/0.5 (v/v/v) in 13 min, delivered at a flow rate of 1.3 ml/min, followed by a sudden change of flow rate to 2.3 ml/min in 1 min. Critical steps in any molecularly imprinted polymer extraction protocol, such as the flow rate used to load the water sample onto the cartridges and the breakthrough volume, were optimized to obtain the highest extraction recoveries for all compounds. Under optimal conditions (100 ml breakthrough volume loaded at 2.0 ml/min), extraction recoveries for the four pairs of β-blockers were near 100%. The MIP-SPE-HPLC-UV method developed demonstrates good linearity (R² ≥ 0.99), precision, selectivity, and sensitivity. The method detection limit was 3.0 µg/l for propranolol and pindolol enantiomers, and 20.0 and 22.0 µg/l for metoprolol and atenolol enantiomers, respectively. The proposed methodology should be suitable for routine control of these emerging pollutants in natural waters for a better understanding of their environmental impact and fate.
---
paper_title: Functionalized Graphene as a Gatekeeper for Chiral Molecules: An Alternative Concept for Chiral Separation
paper_content:
We propose a new method of chiral separation using functionalized nanoporous graphene as an example. Computational simulations based on density functional theory show that the attachment of a suitable chiral "bouncer" molecule to the pore rim prevents the passage of the undesired enantiomer while letting its mirror image through.
---
paper_title: Enantioselective biodegradation of pharmaceuticals, alprenolol and propranolol, by an activated sludge inoculum.
paper_content:
Biodegradation of chiral pharmaceuticals in the environment can be enantioselective. Thus quantification of enantiomeric fractions during the biodegradation process is crucial for assessing the fate of chiral pollutants. This work presents the biodegradation of alprenolol and propranolol using an activated sludge inoculum, monitored by a validated enantioselective HPLC method with fluorescence detection. The enantioseparation was optimized using a vancomycin-based chiral stationary phase under polar ionic mode. The method was validated using a minimal salts medium inoculated with activated sludge as matrix. The method was selective and linear in the range of 10-800 ng/ml, with R² > 0.99. The accuracy ranged from 85.0% to 103%, the recovery ranged from 79.9% to 103%, and the precision, measured by the relative standard deviation (RSD), was <7.18% for intra-batch and <5.39% for inter-batch assays. The limits of quantification and detection for all enantiomers were 10 ng/ml and 2.5 ng/ml, respectively. The method was successfully applied to follow the biodegradation of the target pharmaceuticals using an activated sludge inoculum during a fifteen-day assay. The results indicated slightly higher biodegradation rates for the S-enantiomeric forms of both beta-blockers. The presence of another carbon source maintained the enantioselective degradation pattern while enhancing the extent of biodegradation by up to 14%.
---
paper_title: Simultaneous enantiomeric analysis of pharmacologically active compounds in environmental samples by chiral LC-MS/MS with a macrocyclic antibiotic stationary phase.
paper_content:
This paper presents a multi-residue method for direct enantioselective separation of chiral pharmacologically active compounds in environmental matrices. The method is based on chiral liquid chromatography and tandem mass spectrometry detection. Simultaneous chiral discrimination was achieved with a macrocyclic glycopeptide-based column with the antibiotic teicoplanin as chiral selector, operated in reversed-phase mode. For the first time, enantioresolution was reported for the metabolites of ibuprofen, carboxyibuprofen and 2-hydroxyibuprofen, with this chiral stationary phase. Moreover, enantiomers of chloramphenicol, ibuprofen, ifosfamide, indoprofen, ketoprofen, naproxen and praziquantel were also resolved. The overall performance of the method was satisfactory in terms of linearity, precision, accuracy and limits of detection. The method was successfully applied for monitoring of pharmacologically active compounds at the enantiomeric level in influent and effluent wastewater and in river water. In addition, the chiral recognition and analytical performance of the teicoplanin-based column was critically compared with that of the α1-acid glycoprotein chiral stationary phase.
---
paper_title: Enantioselective degradation of amphetamine-like environmental micropollutants (amphetamine, methamphetamine, MDMA and MDA) in urban water.
paper_content:
This paper aims to understand enantioselective transformation of amphetamine, methamphetamine, MDMA (3,4-methylenedioxy-methamphetamine) and MDA (3,4-methylenedioxyamphetamine) during wastewater treatment and in receiving waters. In order to undertake a comprehensive evaluation of the processes occurring, stereoselective transformation of amphetamine-like compounds was studied, for the first time, in controlled laboratory experiments: receiving water and activated sludge simulating microcosm systems. The results demonstrated that stereoselective degradation, via microbial metabolic processes favouring S-(+)-enantiomer, occurred in all studied amphetamine-based compounds in activated sludge simulating microcosms. R-(-)-enantiomers were not degraded (or their degradation was limited) which proves their more recalcitrant nature. Out of all four amphetamine-like compounds studied, amphetamine was the most susceptible to biodegradation. It was followed by MDMA and methamphetamine. Photochemical processes facilitated degradation of MDMA and methamphetamine but they were not, as expected, stereoselective. Preferential biodegradation of S-(+)-methamphetamine led to the formation of S-(+)-amphetamine. Racemic MDMA was stereoselectively biodegraded by activated sludge which led to its enrichment with R-(-)-enantiomer and formation of S-(+)-MDA. Interestingly, there was only mild stereoselectivity observed during MDMA degradation in rivers. This might be due to different microbial communities utilised during activated sludge treatment and those present in the environment. Kinetic studies confirmed the recalcitrant nature of MDMA.
---
paper_title: Concentrations, enantiomeric compositions, and sources of HCH, DDT and chlordane in soils from the Pearl River Delta, South China.
paper_content:
Concentrations and enantiomeric compositions of HCH, DDT and chlordane in 74 soils of the Pearl River Delta, South China, were investigated. The mean concentrations of HCHs and DDTs descended in the order: crop soils > paddy soils > natural soils. The concentrations (ng/g dw) of p,p'-DDE, p,p'-DDT, p,p'-DDD and o,p'-DDT in crop soils were 0.14-231, 0.07-315, <DL-96.7 and 0.06-73.8, respectively, while those of chlordane were generally below 0.78 for trans-chlordane (TC) and 0.75 for cis-chlordane (CC). Enantiomeric factors (EF values) were determined for o,p'-DDT, alpha-HCH, TC and CC. Preferential depletion of either the (-) enantiomer or the (+) enantiomer was observed for o,p'-DDT and alpha-HCH, indicated by EF values either <0.5 or >0.5. An EF value >0.5 generally suggested a preferential degradation of the (-) enantiomers of both TC and CC. The racemic alpha-HCH observed in the soils with higher HCH concentrations indicated that the transformation from gamma-HCH (e.g. lindane) to alpha-HCH may be an important process in the soils. The isomer ratios of p,p'-DDT/(DDE+DDD), o,p'-DDT/p,p'-DDT and the enantiomeric compositions of o,p'-DDT suggested that both illegal use of technical DDT and the DDT impurity in dicofol may be responsible for the fresh DDT input in the region. The sources of DDTs were apportioned by principal component analysis-multiple linear regression (PCA-MLR). The relative contributions of dicofol-type DDT, residues, and fresh technical DDT were estimated to be 55%, 21% and 17%, respectively. In addition, CC was found to degrade faster than TC in soils from the Pearl River Delta. The study demonstrated that the combination of isomer ratios and enantiomeric composition analysis may provide critical information on the potential sources and fate of organochlorine pesticides in soil.
---
paper_title: Enantiomer separation by enantioselective inclusion complexation–organic solvent nanofiltration
paper_content:
A novel chiral separation process, which utilizes a combination of enantioselective inclusion complexation (EIC) and organic solvent nanofiltration (OSN), was developed. Although EIC is an attractive way to resolve racemates, the difficulties associated with enantiomer recovery and chiral host recycle has limited large-scale applications. EIC coupled with OSN replaces distillation for the recovery of enantiomers from enantioenriched solid complex. A decomplexation solvent is employed to dissociate enantiomers from the complex, and subsequent separation of enantiomers from the chiral host is realized using OSN. The new process was investigated using racemic 1-phenylethanol as the guest and (R,R)-TADDOL as the chiral host. This novel technology expands the application of EIC to the resolution of nonvolatile racemates, and enables large-scale application.
---
paper_title: Enantiomeric profiling of a chemically diverse mixture of chiral pharmaceuticals in urban water.
paper_content:
Due to concerns regarding the release of pharmaceuticals into the environment and the understudied impact of stereochemistry of pharmaceuticals on their fate and biological potency, we focussed in this paper on stereoselective transformation pathways of selected chiral pharmaceuticals (16 pairs) at both microcosm (receiving waters and activated sludge wastewater treatment simulating microcosms) and macrocosm (wastewater treatment plant (WWTP) utilising activated sludge technology and receiving waters) scales in order to test the hypothesis that biodegradation of chiral drugs is stereoselective. Our monitoring programme of a full scale activated sludge WWTP and receiving environment revealed that several chiral drugs, those being marketed mostly as racemates, are present in wastewater and receiving waters enriched with one enantiomeric form (e.g. fluoxetine, mirtazapine, salbutamol, MDMA). This is most likely due to biological metabolic processes occurring in humans and other organisms. Both activated sludge and receiving waters simulating microcosms confirmed our hypothesis that chiral drugs are subject to stereoselective microbial degradation. It led, in this research, to preferential degradation of S-(+)-enantiomers of amphetamines, R-(+)-enantiomers of beta-blockers and S-(+)-enantiomers of antidepressants. In the case of three parent compound - metabolite pairs (venlafaxine - desmethylvenlafaxine, citalopram - desmethylcitalopram and MDMA - MDA), while parent compounds showed higher resistance to both microbial metabolism and photodegradation, their desmethyl metabolites showed much higher degradation rate both in terms of stereoselective metabolic and non-stereoselective photochemical processes. It is also worth noting that metabolites tend to be, as expected, enriched with enantiomers of opposite configuration to their parent compounds, which might have significant toxicological consequences when evaluating the metabolic residues of chiral pollutants.
---
paper_title: Enantioselective degradation of warfarin in soils
paper_content:
Environmental enantioselectivity information is important for the fate assessment of chiral contaminants. Warfarin, a rodenticide and prescription medicine, is a chiral chemical but is used in racemic form. Little is known about its enantioselective behavior in the environment. In this study, enantioselective degradation of warfarin in a turfgrass and a groundcover soil was examined under aerobic and ambient temperature conditions. An enantioselective analytical method was established using a novel triproline chiral stationary phase in high-performance liquid chromatography. Unusual peak profile patterns, i.e., first peak (S(−)) broadening/second peak (R(+)) compression with the hexane (0.1% TFA)/2-propanol (92/8, v/v) mobile phase, and first peak compression/second peak broadening with the (96/4, v/v) mobile phase, were observed in the enantioseparation. This unique tunable peak property was leveraged in evaluating warfarin enantioselective degradation in the two types of soil. Warfarin was extracted with high recovery from soil using methylene chloride after an aqueous-phase basic-acidic conversion. No apparent degradation of warfarin was observed in the sterile turfgrass and groundcover soils during the 28-day incubation, while it degraded quickly (half-life <7 days) in the nonsterile soils after a short lag period, suggesting that warfarin degradation in the soils was mainly caused by micro-organisms. Limited enantioselectivity was found in both soils, with the R(+) enantiomer preferentially degraded. The half-lives in the turfgrass soil were 5.06 ± 0.13 and 5.97 ± 0.05 days for the R(+) and the S(−) enantiomer, respectively. The corresponding values for the groundcover soil were 4.15 ± 0.11 and 4.47 ± 0.08 days.
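A minimal sketch of how the reported half-lives translate into enantiomer-specific first-order rate constants and residual fractions is given below; the assumption of first-order kinetics and the 14-day example time point are illustrative, not taken from the study.

```python
import math

def first_order_k(half_life_days: float) -> float:
    """Rate constant (1/day) from a first-order half-life."""
    return math.log(2) / half_life_days

def fraction_remaining(k: float, t_days: float) -> float:
    """Fraction of the enantiomer left after t_days of first-order decay."""
    return math.exp(-k * t_days)

# Half-lives for warfarin in the nonsterile turfgrass soil (values from the abstract)
k_R = first_order_k(5.06)   # R(+) enantiomer
k_S = first_order_k(5.97)   # S(-) enantiomer

print(f"k_R = {k_R:.3f} 1/d, k_S = {k_S:.3f} 1/d")
print(f"after 14 d: R(+) {fraction_remaining(k_R, 14):.1%} left, "
      f"S(-) {fraction_remaining(k_S, 14):.1%} left")
```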
---
paper_title: Enantioselective quantification of fluoxetine and norfluoxetine by HPLC in wastewater effluents.
paper_content:
Microbial degradation is the most important process for removing organic pollutants in wastewater treatment plants. For chiral compounds this process is normally enantioselective and requires suitable analytical methodology to follow the removal of both enantiomers accurately. Thus, this paper describes the development and validation of an enantioselective high-performance liquid chromatography method with fluorescence detection (HPLC-FD) for the simultaneous analysis of fluoxetine (FLX) and norfluoxetine (NFLX) in wastewater effluents. Briefly, this method preconcentrated a small volume of wastewater samples (50 mL) on 500 mg Oasis MCX cartridges and used HPLC-FD with a vancomycin-based chiral stationary phase under reversed mode for analyses. The optimized mobile phase was EtOH/aqueous ammonium acetate buffer (92.5/7.5, v/v) at pH 6.8. The effect of EtOH percentage, buffer concentration, pH, column oven temperature and flow rate on chromatographic parameters was systematically investigated. The developed method was validated in the wastewater effluent used in microcosm laboratory assays. Linearity (R(2)>0.99), selectivity and sensitivity were achieved in the range of 4.0-60 ng mL(-1) for the enantiomers of FLX and 2.0-30 ng mL(-1) for the enantiomers of NFLX. The limits of detection were between 0.8 and 2.0 ng mL(-1) and the limits of quantification were between 2.0 and 4.0 ng mL(-1) for the enantiomers of FLX and the enantiomers of its demethylated metabolite NFLX. The validated method was successfully applied and proved to be robust for following the degradation of both enantiomers of FLX in wastewater samples during 46 days.
---
paper_title: Distinct Enantiomeric Signals of Ibuprofen and Naproxen in Treated Wastewater and Sewer Overflow
paper_content:
Ibuprofen and naproxen are commonly used members of a class of pharmaceuticals known as 2-arylpropionic acids (2-APAs). Both are chiral chemicals and can exist as either of two (R)- and (S)-enantiomers. Enantioselective analyses of effluents from municipal wastewater treatment plants (WWTPs) and from untreated sewage overflow reveal distinctly different enantiomeric fractions for both pharmaceuticals. The (S)-enantiomers of both were dominant in untreated sewage overflow, but the relative proportions of the (R)-enantiomers were shown to be increased in WWTP effluents. (R)-naproxen was below method detection limits (<1 ng.L(-1)) in sewage overflow, but measurable at higher concentrations in WWTP effluents. Accordingly, enantiomeric fractions (EF) for naproxen were consistently 1.0 in sewage overflow, but ranged from 0.7–0.9 in WWTP effluents. Ibuprofen EF ranged from 0.6–0.8 in sewage overflow and receiving waters, and was 0.5 in two WWTP effluents. Strong evidence is provided to indicate that chiral inversion of (S)-2-APAs to produce (R)-2-APAs may occur during wastewater treatment processes. It is concluded that this characterization of the enantiomeric fractions for ibuprofen and naproxen in particular effluents could facilitate the distinction of treated and untreated sources of pharmaceutical contamination in surface waters.
---
paper_title: Fate of pharmaceuticals in rivers : Deriving a benchmark dataset at favorable attenuation conditions
paper_content:
Pharmaceutical residues are commonly detected organic micropollutants in the aquatic environment. Their actual fate in rivers is still incompletely understood, as their elimination is highly substance specific and studies often report contradictory results. To elucidate the ceiling of attenuation rates of pharmaceuticals in rivers, we carried out a study at a river with favorable conditions for the elimination of organic micropollutants. Experiments were carried out at a small stream in Germany. Composite samples were taken at both ends of a 12.5 km long river stretch located downstream of a sewage treatment plant and analyzed for 10 pharmaceuticals. Moreover, pore water samples were taken and in situ photolysis experiments at several sites within the river stretch were performed to assess the importance of these individual elimination mechanisms. Pharmaceutical concentrations in the surface water at the first sampling site ranged from 3.5 ng L−1 for propranolol to 1400 ng L−1 for diclofenac. In comparison to carbamazepine, which was used as a persistent tracer, all other pharmaceuticals were attenuated along the river stretch. Their elimination was higher in a sunny, dry weather period (period I) compared to a period with elevated discharge after a heavy rainfall (period II). Overall, the measured elimination rates ranged from 25% for sulfamethoxazole (period II) to 70% for propranolol (period I). Photolysis was only a relevant elimination process for diclofenac and potentially also for sotalol; for these compounds phototransformation half-lives of a few hours were determined in the unshaded parts of the river. Biotransformation in the sediments was also an important attenuation process since the concentrations of the other pharmaceuticals in the sediments decreased relative to carbamazepine with depth. For the chiral beta-blocker metoprolol this biotransformation was also confirmed by a decrease in the enantiomer fractionation from 0.49 at site A to 0.43 at site B and to
---
paper_title: Characterization of pharmaceutically active compounds in Dongting Lake, China: Occurrence, chiral profiling and environmental risk.
paper_content:
Twenty commonly used pharmaceuticals, including eight chiral drugs, were investigated in Dongting Lake, China. The contamination level was relatively low on a global scale. Twelve pharmaceuticals were identified. The most abundant compound was caffeine, followed by diclofenac, DEET, mefenamic acid, fluoxetine, ibuprofen and carbamazepine, with mean concentrations from 2.0 to 80.8 ng L(-1). Concentrations between East and West Dongting Lake showed spatial differences, with West Dongting Lake less polluted. The relatively high ratio of caffeine versus carbamazepine (over 50) may indicate possible direct discharge of domestic wastewater into the lake. This is the first study presenting a survey allowing for comprehensive analysis of multiclass achiral and chiral pharmaceuticals, including beta-blockers, antidepressants and anti-inflammatory drugs, in a freshwater lake. The enantiomeric compositions ranged from racemic to weakly non-racemic, with the highest enantiomeric fraction (EF) of 0.63 for fluoxetine. In addition, venlafaxine was identified and its environmental risk in surface water in China was evaluated for the first time. The results of the risk assessment suggested that fluoxetine, venlafaxine and diclofenac might pose a significant risk to aquatic organisms in Dongting Lake. The resulting data will be useful for enriching research on emerging pollutants in freshwater lakes and on stereochemistry in environmental investigations.
---
paper_title: Determination of the rotational barriers of atropisomeric polychlorinated biphenyls (PCBs) by a novel stopped-flow multidimensional gas chromatographic technique
paper_content:
The rotational barriers ΔG‡(T) of the four atropisomeric polychlorinated biphenyls (PCBs) 2,2′,3,5′,6-pentachlorobiphenyl (PCB 95), 2,2′,3,3′,4,6′-hexachlorobiphenyl (PCB 132), 2,2′,3,3′,6,6′-hexachlorobiphenyl (PCB 136), and 2,2′,3,4′,5′,6-hexachlorobiphenyl (PCB 149) were determined via on-line enantiomerization kinetics by a new stopped-flow multidimensional gas chromatographic technique (stopped-flow MDGC) employing Chirasil-Dex as chiral stationary phase for enantiomer separation. The calculated rotational barriers ΔG‡(T) of the trichloro-ortho-substituted atropisomers are 184 ± 2 kJ/mol for PCB 95, 189 ± 4 kJ/mol for PCB 132, and 184 ± 1 kJ/mol for PCB 149 at 300 °C. The rotational barrier ΔG‡(T) of the tetrachloro-ortho-substituted PCB 136 is at least 210 kJ/mol at 320 °C.
---
paper_title: Stereoisomer quantification of the β-blocker drugs atenolol, metoprolol, and propranolol in wastewaters by chiral high-performance liquid chromatography–tandem mass spectrometry
paper_content:
A chiral liquid chromatography–tandem mass spectrometry (HPLC-MS–MS) method was developed and validated for measuring individual enantiomers of three β-blocker drugs (atenolol, metoprolol, and propranolol) in wastewater treatment plant (WWTP) influents and effluents. Mean recoveries of the pharmaceuticals ranged from 67 to 106%, and the limits of detection of the analytes were 2–17 ng/L in wastewater effluents. The method was demonstrated by measuring, for the first time, the stereoisomer composition of target analytes in raw and treated wastewaters of two Canadian WWTPs. In these trials, racemic amounts of the three drugs were observed in influent of one wastewater treatment plant, but nonracemic amounts were observed in another. Effluents of the two plants contained nonracemic amounts of the drugs. These results indicate that biologically-mediated stereoselective processes that differ among WWTPs had occurred to eliminate individual enantiomers of the target analytes.
---
paper_title: Trace analysis of fluoxetine and its metabolite norfluoxetine. Part II : Enantioselective quantification and studies of matrix effects in raw and treated wastewater by solid phase extraction and liquid chromatography-tandem mass spectrometry
paper_content:
The isotope-labeled compounds fluoxetine-d5 and norfluoxetine-d5 were used to study matrix effects caused by co-eluting compounds originating from raw and treated wastewater samples, collected in U ...
---
paper_title: Occurrence and behavior of the chiral anti-inflammatory drug naproxen in an aquatic environment
paper_content:
The present study reports on the occurrence and chiral behavior of the anti-inflammatory drug (S)-naproxen (NAP), (S)-2-(6-methoxynaphthalen-2-yl)propionic acid, in an aquatic environment under both field and laboratory conditions. In influents and effluents of sewage treatment plants (STPs) in the Tama River basin (Tokyo), (S)-NAP was detected at concentrations of 0.03 µg L(-1) to 0.43 µg L(-1) and 0.01 µg L(-1) to 0.11 µg L(-1), respectively. The concentrations of a major metabolite, 6-O-desmethyl NAP (DM-NAP), were up to 0.47 µg L(-1) and 0.56 µg L(-1) in influents and effluents, respectively. (R)-naproxen was not detected in STP influents, although it was present in effluents, and the enantiomeric fraction (= S/[S + R]) of NAP ranged from 0.88 to 0.91. Under laboratory conditions with activated sludge from STPs, rapid degradation of (S)-NAP to DM-NAP and chiral inversion of (S)-NAP to (R)-NAP were observed. During river die-away experiments, degradation and chiral inversion of NAP were extremely slow. In addition, chiral inversion of (S)-NAP to (R)-NAP was not observed during photodegradation experiments. In the river receiving STP discharge, NAP and DM-NAP concentrations reached 0.08 µg L(-1) and 0.16 µg L(-1), respectively. The enantiomeric fraction of NAP in the river ranged from 0.84 to 0.98 and remained almost unchanged with the increasing contribution of rainfall to the river water. These results suggest that the absence and decrease of (R)-NAP in river waters could indicate the inflow of untreated sewage.
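The enantiomeric fraction used above, EF = S/(S + R), can be computed directly from measured enantiomer concentrations; the sketch below is illustrative (the example concentrations are assumptions, chosen only to fall within the reported EF range).

```python
def enantiomeric_fraction(conc_s: float, conc_r: float) -> float:
    """Enantiomeric fraction EF = S / (S + R); 0.5 indicates a racemate."""
    total = conc_s + conc_r
    if total == 0:
        raise ValueError("both enantiomer concentrations are zero")
    return conc_s / total

# Illustrative example: (S)-NAP = 0.10 ug/L and (R)-NAP = 0.012 ug/L in an effluent sample
print(round(enantiomeric_fraction(0.10, 0.012), 2))  # ~0.89, within the 0.88-0.91 range reported
```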
---
paper_title: Simultaneous determination of chiral pesticide flufiprole enantiomers in vegetables, fruits, and soil by high-performance liquid chromatography
paper_content:
A simple and reliable method for the simultaneous determination of the enantiomers of the chiral pesticide flufiprole using high-performance liquid chromatography has been established. The separation and determination were performed using reversed-phase chromatography on a carbamoyl-cellulose-type chiral stationary phase, a Lux Cellulose-2 column. The effects of different mobile phase compositions on the separation are discussed. The absolute configuration of the flufiprole enantiomers was determined by comparing experimental and predicted ECD spectra. An Alumina-N solid-phase extraction (SPE) column was used in the cleanup of the vegetable, fruit, and soil samples. The method was evaluated in terms of specificity, matrix effect, linearity, precision, accuracy and stability. The mean recoveries of the two enantiomers ranged from 86.8 to 98.9%, with 1.1-6.4% intra-day relative standard deviation (RSD) and 1.2-5.8% inter-day RSD. Good linearity (R2 > 0.998) was obtained for all analyte matrix calibration curves within the range of 0.2-20 mg L−1. The limit of detection for the two enantiomers in the six matrices was 0.007-0.008 mg kg−1, whereas the limit of quantification of the two enantiomers in fruits, vegetables, and soil was 0.021-0.025 mg kg−1. The results confirmed that this method is convenient and accurate for the simultaneous determination of flufiprole enantiomers in food and environmental samples.
---
paper_title: Pharmaceutical and biomedical applications of enantioseparations using liquid chromatographic techniques.
paper_content:
The chiral separation methods using liquid chromatographic techniques can be divided into two categories: one is a direct method, which is based on diastereomer formation on the stationary phase or in the mobile phase. The other is an indirect method, which is based on diastereomer formation by reaction with a homochiral reagent. Enantiomer separation on chiral stationary phases followed by derivatization with an achiral reagent is also dealt with in this review article as the indirect method. The pharmaceutical and biomedical applications of enantioseparations using the direct and indirect methods are considered in this review.
---
paper_title: A New Chiral Residue Analysis Method for Triazole Fungicides in Water Using Dispersive Liquid-Liquid Microextraction (DLLME)
paper_content:
A rapid, simple, reliable, and environment-friendly method for the residue analysis of the enantiomers of four chiral fungicides, hexaconazole, triadimefon, tebuconazole, and penconazole, in water samples was developed using dispersive liquid-liquid microextraction (DLLME) pretreatment followed by chiral high-performance liquid chromatography (HPLC)-DAD detection. The enantiomers were separated on a Chiralpak IC column by HPLC applying n-hexane or petroleum ether as mobile phase and ethanol or isopropanol as modifier. The influences of mobile phase composition and temperature on the resolution were investigated, and most of the enantiomers could be completely separated in 20 min under optimized conditions. The thermodynamic parameters indicated that the separation was enthalpy-driven. The elution orders were determined by both a circular dichroism detector (CD) and an optical rotatory dispersion detector (ORD). Parameters affecting the DLLME performance for pretreatment of the chiral fungicide residues in water samples, such as the extraction and dispersive solvents and their volumes, were studied and optimized. Under the optimum microextraction conditions the enrichment factors were over 121, the linearities were 30-1500 µg L(-1) with correlation coefficients (R(2)) over 0.9988, and the recoveries were between 88.7% and 103.7% at spiking levels of 0.5, 0.25, and 0.05 mg L(-1) (for each enantiomer) with relative standard deviations varying from 1.38% to 6.70% (n = 6). The limits of detection (LODs) ranged from 8.5 to 29.0 µg L(-1) (S/N = 3).
---
paper_title: Enantiomeric Fraction Determination of 2-Arylpropionic Acids in a Package Plant Membrane Bioreactor
paper_content:
Enantiomeric compositions of three 2-arylpropionic acid (2-APA) drugs, ibuprofen, naproxen, and ketoprofen, were monitored in a membrane bioreactor (MBR) treating municipal effluent in a small rural town in Australia. Specific enantiomers were determined as amide diastereomers using the chiral derivatizing reagent, (R)-1-phenylethylamine (PEA), followed by gas chromatography-tandem mass spectrometry (GC-MS/MS). The six individual enantiomers were quantified by isotope dilution and the enantiomeric fractions (EFs) were determined. Over four separate sampling events, ibuprofen EF ranged from 0.88 to 0.94 (median 0.93) in the influent and 0.38 to 0.40 (median 0.39) in the effluent. However, no significant change in ketoprofen EF was observed, with influent EFs of 0.56-0.60 (median 0.58) and effluent EFs 0.54-0.68 (median 0.56). This is the first report of enantiospecific analysis of ketoprofen in municipal wastewater and it is not yet clear why such different behavior was observed compared to ibuprofen. Naproxen EF was consistently measured at 0.99 in the influent and ranged from 0.86 to 0.94 (median 0.91) in the effluent. This study demonstrates that EF is a relatively stable parameter and does not fluctuate according to concentration or other short-term variables introduced by sampling limitations. The enantiospecific analysis of chiral chemicals presents a promising approach to elucidate a more thorough understanding of biological treatment processes and a potential tool for monitoring the performance of key biological pathways.
---
paper_title: Strategy for Correction of Matrix Effect on the Determination of Pesticides in Water Bodies Using SPME-GC-FID
paper_content:
This paper investigates a quality control strategy based on a surrogate standard and the determination of a relation factor (Rf) for the determination of the pesticides parathion-methyl, chlorpyriphos and cypermethrin in environmental aqueous matrices with distinct characteristics (river water, estuarine water, seawater and weir water), using solid-phase microextraction gas chromatography with flame ionization detection (SPME-GC-FID). The pesticides were very susceptible to matrix effects promoted by the environmental samples. Salinity and organic matter seem to have been the main sources of interference in the method. For chlorpyriphos, at medium and high levels, the Rf values for the estuarine, seawater and weir matrices were statistically similar. For cypermethrin, statistical equality occurred in the estuarine matrices at medium and high concentration levels. This indicates proportional behavior between the pesticide and the surrogate recovery, suggesting that a single Rf value can be used as a recovery correction factor for any of these matrices.
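The abstract does not spell out how the relation factor (Rf) is defined, so the sketch below is only one plausible interpretation: Rf as the ratio of analyte to surrogate recovery determined in spiked matrices, used afterwards to recovery-correct measured concentrations. All names, formulas and numbers here are assumptions, not the authors' procedure.

```python
def relation_factor(analyte_recovery: float, surrogate_recovery: float) -> float:
    """Hypothetical relation factor: ratio of analyte to surrogate recovery
    determined in spiked-matrix experiments (assumption, not the paper's definition)."""
    return analyte_recovery / surrogate_recovery

def corrected_concentration(measured: float, surrogate_recovery_sample: float,
                            rf: float) -> float:
    """Correct a measured concentration using the surrogate recovery observed
    in the sample and the previously determined relation factor."""
    return measured / (surrogate_recovery_sample * rf)

# Illustrative numbers only
rf = relation_factor(0.75, 0.80)               # from spiking experiments
print(corrected_concentration(1.2, 0.70, rf))  # ug/L, recovery-corrected estimate
```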
---
paper_title: Enantioseparation of chiral pharmaceuticals in biomedical and environmental analyses by liquid chromatography: An overview
paper_content:
This review aims to present the issues associated with the enantioseparation of chiral pharmaceuticals in biological and environmental matrices using chiral stationary phases (CSPs). It relates enantioselective methods in liquid chromatography (LC) and compares the importance given to chiral separation in the biomedical and environmental fields. For that purpose, the most used CSPs, the enantioselective chromatographic methods, and their advantages and drawbacks are swiftly revised and compared. The recent advances and the limitations of chiral analytical methods in LC are also discussed.
---
paper_title: Recent progresses in protein-based chiral stationary phases for enantioseparations in liquid chromatography.
paper_content:
Chiral stationary phases (CSPs) based on proteins or glycoproteins have been developed for the enantioseparation of various compounds. In 2001, a review article [J. Haginaka, J. Chromatogr. A, 906 (2001) 253] dealing with CSPs based on proteins and glycoproteins was published. Since then, serum albumin from other species, penicillin G-acylase, antibodies, fatty acid binding protein and streptavidin have been newly introduced as chiral selectors in CSPs. This review article deals with recent progress in CSPs based on proteins or glycoproteins in LC after 2001, focusing on their enantioselective properties and chiral recognition mechanisms.
---
paper_title: Chiral stationary phase optimized selectivity liquid chromatography: A strategy for the separation of chiral isomers
paper_content:
Chiral Stationary-Phase Optimized Selectivity Liquid Chromatography (SOSLC) is proposed as a tool to optimally separate mixtures of enantiomers on a set of commercially available coupled chiral columns. This approach allows the prediction of the separation profiles on any possible combination of the chiral stationary phases based on a limited number of preliminary analyses, followed by automated selection of the optimal column combination. Both the isocratic and gradient SOSLC approaches were implemented for prediction of the retention times of a mixture of 4 chiral pairs on all possible combinations of 5 commercial chiral columns. Predictions in isocratic and gradient mode were performed with a commercially available algorithm and with an in-house developed Microsoft Visual Basic algorithm, respectively. Optimal predictions in the isocratic mode required the coupling of 4 columns, whereby relative deviations between the predicted and experimental retention times ranged between 2 and 7%. Gradient predictions led to the coupling of 3 chiral columns allowing baseline separation of all solutes, whereby differences between predictions and experiments ranged between 0 and 12%. The methodology is a novel tool for optimizing the separation of mixtures of optical isomers.
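For isocratic SOSLC-type predictions, a common simplification is to treat the retention factor of a coupled column set as the column-length-weighted average of the retention factors measured on the individual stationary phases; the sketch below illustrates that idea under the assumptions of identical column dimensions and flow, and it is not the algorithm of the cited work.

```python
def predicted_retention_time(k_values, lengths, t0_total: float) -> float:
    """Predict the isocratic retention time on serially coupled columns from
    single-column retention factors, assuming the combined retention factor is
    the length-weighted average of the individual retention factors."""
    total_length = sum(lengths)
    k_combined = sum(k * L for k, L in zip(k_values, lengths)) / total_length
    return t0_total * (1.0 + k_combined)

# Retention factors of one enantiomer on three chiral columns (illustrative values)
k_on_columns = [1.8, 0.6, 3.2]
column_lengths_cm = [10.0, 10.0, 10.0]
print(predicted_retention_time(k_on_columns, column_lengths_cm, t0_total=2.5))  # minutes
```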
---
paper_title: Chiral analysis of organochlorine pesticides in Alabama soils.
paper_content:
The enantiomeric composition of organochlorine (OC) pesticide residues was investigated in 32 agricultural and 3 cemetery soils from Alabama. The enantiomeric signatures were similar to those from other soils in the US and Canada. The enantiomer fractions (EFs) of o,p'-DDT showed great variability, ranging from 0.41 to 0.57, while the EFs of chlordanes and chlordane metabolites were less variable and in general differed significantly from racemic. Enantioselective depletion of (+)trans-chlordane, (-)cis-chlordane and the first-eluting enantiomer of MC5, and enrichment of (+)heptachlor-exo-epoxide and (+)oxychlordane, were found in a large majority of the samples with detectable residues. The enantiomeric composition of alpha-hexachlorocyclohexane was racemic or close to racemic.
---
paper_title: Recent advances in gas chromatography for solid and liquid stationary phases containing metal ions.
paper_content:
This review is devoted to the application of metal complexes as column packings and liquid stationary phases in gas chromatography. Particular attention is paid to the stationary phases with nitrogen-containing functional groups (e.g., amine and ketoimine) and beta-diketonates on the modified silica surface. The review also concerns the results of the research on metallomesogenes and chiral stationary phases. The factors influencing the retention mechanism in complexation gas chromatography are discussed. Practical application of the metal chelate-containing chromatographic packings for analytical separation of organic substances is considered.
---
paper_title: Loadings, trends, comparisons, and fate of achiral and chiral pharmaceuticals in wastewaters from urban tertiary and rural aerated lagoon treatments.
paper_content:
A comparison of time-weighted average pharmaceutical concentrations, loadings and enantiomer fractions (EFs) was made among treated wastewater from one rural aerated lagoon and from two urban tertiary wastewater treatment plants (WWTPs) in Alberta, Canada. Passive samplers were deployed directly in treated effluent for nearly continuous monitoring of temporal trends between July 2007 and April 2008. In aerated lagoon effluent, concentrations of some drugs changed over time, with some higher concentrations in winter likely due to reduced attenuation from lower temperatures (e.g., less microbially mediated biotransformation) and reduced photolysis from ice cover over lagoons; however, concentrations of some drugs (e.g. antibiotics) may also be influenced by changing use patterns over the year. Winter loadings to receiving waters for the sum of all drugs were 700 and 400 g/day from the two urban plants, compared with 4 g/day from the rural plant. Per capita loadings were similar amongst all plants. This result indicates that measured loadings, weighted by population served by WWTPs, are a good predictor of other effluent concentrations, even among different treatment types. Temporal changes in chiral drug EFs were observed in the effluent of aerated lagoons, and some differences in EF were found among WWTPs. This result suggests that there may be some variation of microbial biotransformation of drugs in WWTPs among plants and treatment types, and that the latter may be a good predictor of EF for some, but not all drugs.
---
paper_title: Enantioselective analysis of ibuprofen, ketoprofen and naproxen in wastewater and environmental water samples.
paper_content:
A highly sensitive and reliable method for the enantioselective analysis of ibuprofen, ketoprofen and naproxen in wastewater and environmental water samples has been developed. These three pharmaceuticals are chiral molecules and the variable presence of their individual (R)- and (S)-enantiomers is of increasing interest for environmental analysis. An indirect method for enantioseparation was achieved by derivatization of the (R)- and (S)-enantiomers to amide diastereomers using (R)-1-phenylethylamine ((R)-1-PEA). After initial solid-phase extraction from aqueous samples, derivatization was undertaken at room temperature in less than 5 min. Optimum recovery and clean-up of the amide diastereomers from the derivatization solution was achieved by a second solid-phase extraction step. Separation and detection of the individual diastereomers was undertaken by gas chromatography-tandem mass spectrometry (GC-MS/MS). Excellent analyte separation and peak shapes were achieved for the derivatized (R)- and (S)-enantiomers of all three pharmaceuticals, with peak resolution (Rs) in the range of 2.87-4.02 for all diastereomer pairs. Furthermore, the calibration curves developed for the (S)-enantiomers revealed excellent linearity (r(2) ≥ 0.99) for all three compounds. Method detection limits were shown to be within the range of 0.2-3.3 ng L(-1) for individual enantiomers in ultrapure water, drinking water, surface water and a synthetic wastewater. Finally, the method was shown to perform well on a real tertiary treated wastewater sample, revealing measurable concentrations of both (R)- and (S)-enantiomers of ibuprofen, naproxen and ketoprofen. Isotope dilution using racemic D(3)-ibuprofen, racemic D(3)-ketoprofen and racemic D(3)-naproxen was shown to be an essential aspect of this method for accurate quantification and enantiomeric fraction (EF) determination. This approach produced excellent reproducibility for EF determination of triplicate tertiary treated wastewater samples.
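The peak resolution values quoted above follow the standard chromatographic definition Rs = 2(t2 - t1)/(w1 + w2); a minimal sketch with illustrative retention times and baseline peak widths is shown below.

```python
def peak_resolution(t1: float, t2: float, w1: float, w2: float) -> float:
    """Resolution between two peaks from retention times and baseline peak widths:
    Rs = 2 * (t2 - t1) / (w1 + w2)."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# Illustrative values for a pair of derivatized (R)- and (S)-diastereomers
print(round(peak_resolution(t1=21.4, t2=22.6, w1=0.40, w2=0.42), 2))  # ~2.93
```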
---
paper_title: Enantiomeric fraction evaluation of pharmaceuticals in environmental matrices by liquid chromatography-tandem mass spectrometry
paper_content:
The interest in environmental fate assessment of chiral pharmaceuticals is increasing and enantioselective analytical methods are mandatory. This study presents an enantioselective analytical method for the quantification of seven pairs of enantiomers of pharmaceuticals and one pair of a metabolite. The selected chiral pharmaceuticals belong to three different therapeutic classes, namely selective serotonin reuptake inhibitors (venlafaxine, fluoxetine and its metabolite norfluoxetine), beta-blockers (alprenolol, bisoprolol, metoprolol, propranolol) and a beta2-adrenergic agonist (salbutamol). The analytical method was based on solid-phase extraction followed by liquid chromatography-tandem mass spectrometry with a triple quadrupole analyser. Briefly, Oasis® MCX cartridges were used to preconcentrate 250 mL of water samples and the reconstituted extracts were analysed on a Chirobiotic™ V column under reversed mode. The effluent of a laboratory-scale aerobic granular sludge sequencing batch reactor (AGS-SBR) was used to validate the method. Linearity (r2 > 0.99), selectivity and sensitivity were achieved in the range of 20-400 ng L−1 for all enantiomers, except for the norfluoxetine enantiomers, whose range covered 30-400 ng L−1. The method detection limits were between 0.65 and 11.5 ng L−1 and the method quantification limits were between 1.98 and 19.7 ng L−1. The identity of all enantiomers was confirmed using two MS/MS transitions and their ion ratios, according to European Commission Decision 2002/657/EC. This method was successfully applied to evaluate effluents of wastewater treatment plants (WWTPs) in Portugal. Venlafaxine and fluoxetine were quantified as non-racemic mixtures (enantiomeric fraction ≠ 0.5). The validated enantioselective method was able to monitor chiral pharmaceuticals in WWTP effluents and has potential to assess enantioselective biodegradation in bioreactors. Further application to environmental matrices such as surface and estuarine waters can be exploited.
---
paper_title: Spatial and temporal occurrence of pharmaceuticals and illicit drugs in the aqueous environment and during wastewater treatment: new developments.
paper_content:
This paper presents, for the first time, spatial and temporal occurrence of a comprehensive set of >60 pharmaceuticals, illicit drugs and their metabolites in wastewater (7 wastewater treatment plants utilising different treatment technologies) and a major river in the UK over a 12 month period. This paper also undertakes a comparison of the efficiency of processes utilised during wastewater treatment and it discusses under-researched aspects of pharmaceuticals and illicit drugs in the environment, including sorption to solids and stereoselectivity in the fate of chiral drugs during wastewater treatment and in receiving waters. The removal efficiency of analytes strongly depended on the type of wastewater treatment technology employed and was <50% or >60% in the case of trickling filter and activated sludge treatment, respectively. It should be stressed, however, that the removal rate was highly variable for different groups of compounds. A clear increase in the cumulative concentration of all monitored compounds was observed in receiving waters, thus highlighting the impact of WWTP discharge on water quality and the importance of the removal efficiency of WWTPs. No seasonal variation was observed with regard to the total load of targeted compounds in the river each month. The concentration of each analyte was largely dependent on rainfall and the dilution factor of WWTP discharge. These results indicate that although the drugs of abuse are not present at very high concentrations in river water (typically low ng L(-1) levels), their occurrence and possible synergic action is of concern, and the study of multiple groups of drugs of abuse is of significant importance.
---
paper_title: Stereoselective biodegradation of amphetamine and methamphetamine in river microcosms
paper_content:
Here presented for the first time is the enantioselective biodegradation of amphetamine and methamphetamine in river microcosm bioreactors. The aim of this investigation was to test the hypothesis that mechanisms governing the fate of amphetamine and methamphetamine in the environment are mostly stereoselective and biological in nature. Several bioreactors were studied over the duration of 15 days (i) in both biotic and abiotic conditions, (ii) in the dark or exposed to light and (iii) in the presence or absence of suspended particulate matter. Bioreactor samples were analysed using SPE-chiral-LC-(QTOF)MS methodology. This investigation has elucidated the fundamental mechanism for degradation of amphetamine and methamphetamine as being predominantly biological in origin. Furthermore, stereoselectivity and changes in enantiomeric fraction (EF) were only observed under biotic conditions. Neither amphetamine nor methamphetamine appeared to demonstrate adsorption to suspended particulate matter. Our experiments also demonstrated that amphetamine and methamphetamine were photo-stable. Illicit drugs are present in the environment at low concentrations but due to their pseudo-persistence and non-racemic behaviour, with two enantiomers revealing significantly different potency (and potentially different toxicity towards aquatic organisms) the risk posed by illicit drugs in the environment should not be under- or over-estimated. The above results demonstrate the need for re-evaluation of the procedures utilised in environmental risk assessment, which currently do not recognise the importance of the phenomenon of chirality in pharmacologically active compounds.
---
paper_title: A column-switching method for quantification of the enantiomers of omeprazole in native matrices of waste and estuarine water samples.
paper_content:
This work reports the use of a two-dimensional liquid chromatography (2D-LC) system for quantification of the enantiomers of omeprazole in distinct native aqueous matrices. An octyl restricted-access media bovine serum albumin column (RAM-BSA C8) was used in the first dimension, while a polysaccharide-based chiral column was used in the second dimension with either ultraviolet (UV-vis) or ion-trap tandem mass spectrometry (IT-MS/MS) detection. An in-line configuration was employed to assess the exclusion capacity of the RAM-BSA columns towards humic substances. The excluded macromolecules had a molecular mass in the order of 18 kDa. Good selectivity, extraction efficiency, accuracy, and precision were achieved employing a very small amount (500 μL or 1.00 mL) of native water sample per injection, with detection limits of 5.00 μg L−1 using UV-vis and 0.0250 μg L−1 using IT-MS/MS. The total analysis time was only 35 min, with no time spent on sample preparation. The methods were successfully applied to analyze a series of waste and estuarine water samples. The enantiomers were detected in an estuarine water sample collected from the Douro River estuary (Portugal) and in an influent sample from the wastewater treatment plant (WWTP) of Sao Carlos (Brazil). To the best of our knowledge, this is the first report of the occurrence of (+)-omeprazole and (−)-omeprazole in native aqueous matrices.
---
paper_title: Enantiomeric analysis of drugs of abuse in wastewater by chiral liquid chromatography coupled with tandem mass spectrometry.
paper_content:
The manuscript concerns the development and validation of a method for enantiomeric analysis of structurally related amphetamines (amphetamine, methamphetamine, 4-methylenedioxymethamphetamine (MDMA), 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxy-N-ethylamphetamine (MDEA)), ephedrines (ephedrine, pseudoephedrine and norephedrine) and venlafaxine in wastewater by means of chiral chromatography coupled with tandem mass spectrometry. Solid-phase extraction on Oasis HLB sorbent used for sample clean-up and concentration of analytes resulted in very good recoveries accounting for >70%. Signal suppression during MS analysis was negligible for most studied analytes. Resolution of enantiomers of chiral drugs was found to be higher than 1. Preliminary assay validation was undertaken. The mean correlation coefficients of the calibration curves, which were on average higher than 0.997 for all studied analytes, showed good linearity of the method in the studied range. Intra- and inter-day repeatabilities were on average less than 5%. The method quantification limits in wastewater were at low ppt levels and varied from 2.25 to 11.75ng/L. The method was successfully applied for the analysis of raw and treated wastewater samples collected from four wastewater treatment plants. A common occurrence of 1R,2S (-)-ephedrine, 1S,2S (+)-pseudoephedrine and venlafaxine in both raw and treated wastewater samples was observed. Amphetamine, methamphetamine, MDMA and MDEA were also detected in several wastewater samples. The study of enantiomeric fractions of these chiral drugs proved their variable non-racemic composition. The influence of wastewater treatment processes on the enantiomeric composition of chiral drugs was also noted and might indicate enantioselective processes occurring during treatment, although more comprehensive research has to be undertaken to support this hypothesis.
---
paper_title: Occurrence and Environmental Behavior of the Chiral Pharmaceutical Drug Ibuprofen in Surface Waters and in Wastewater
paper_content:
Pharmaceutical compounds can reach detectable concentrations in rivers and lakes if production and use are sufficiently large and the compounds show some mobility and persistence in the aquatic environment. In this study, we report on the occurrence and on the enantiomer composition of the chiral pharmaceutical drug ibuprofen (IB) in surface waters and in samples from wastewater treatment plants (WWTPs). Enantioselective gas chromatography and detection by mass spectrometry/mass spectrometry was used for analysis. IB was present in influents of WWTPs at concentrations of up to 3 μg/L with a high enantiomeric excess of the pharmacologically active S enantiomer (S ≫ R), as from human urinary excretion. The principal human urinary metabolites of IB, hydroxy-IB and carboxy-IB, were observed in WWTP influents at even higher concentrations. In contrast to other pharmaceutical compounds such as clofibric acid and diclofenac, IB and its metabolites are then efficiently degraded (>95%) during treatment in WWTPs. ...
---
paper_title: Direct injection of native aqueous matrices by achiral–chiral chromatography ion trap mass spectrometry for simultaneous quantification of pantoprazole and lansoprazole enantiomers fractions
paper_content:
A two-dimensional liquid chromatography system coupled to an ion-trap tandem mass spectrometer (2DLC-IT-MS/MS) was employed for the simultaneous quantification of pantoprazole and lansoprazole enantiomer fractions. A restricted access media bovine serum albumin octyl column (RAM-BSA C8) was used in the first dimension for the exclusion of the humic substances, while a polysaccharide-based chiral column was used in the second dimension for the enantioseparation of both pharmaceuticals. The results described here show good selectivity, extraction efficiency, accuracy, and precision, with detection limits of 0.200 and 0.150 μg L−1 for the enantiomers of pantoprazole and lansoprazole, respectively, while employing a small amount (1.0 mL) of native water sample per injection. This work reports an innovative assay for monitoring work and for studies of biotic and abiotic enantioselective degradation and temporal changes of enantiomeric fractions.
---
paper_title: Enantioselective chromatography in drug discovery
paper_content:
Molecular chirality is a fundamental consideration in drug discovery, one necessary to understand and describe biological targets as well as to design effective pharmaceutical agents. Enantioselective chromatography has played an increasing role not only as an analytical tool for chiral analyses, but also as a preparative technique to obtain pure enantiomers from racemates quickly from a wide diversity of chemical structures. Different enantioselective chromatography techniques are reviewed here, with particular emphasis on the most widespread high performance liquid chromatography (HPLC) and the rapidly emerging supercritical fluid chromatography (SFC) techniques. This review focuses on the dramatic advances in the chiral stationary phases (CSPs) that have made HPLC and SFC indispensable techniques for drug discovery today. In addition, screening strategies for rapid method development and considerations for laboratory-scale preparative separation are discussed and recent achievements are highlighted.
---
paper_title: Multi-residue enantiomeric analysis of human and veterinary pharmaceuticals and their metabolites in environmental samples by chiral liquid chromatography coupled with tandem mass spectrometry detection
paper_content:
Enantiomeric profiling of chiral pharmacologically active compounds (PACs) in the environment has hardly been investigated. This manuscript describes, for the first time, a multi-residue enantioselective method for the analysis of human and veterinary chiral PACs and their main metabolites from different therapeutic groups in complex environmental samples such as wastewater and river water. Several analytes targeted in this paper have not been analysed in the environment at enantiomeric level before. These are aminorex, carboxyibuprofen, carprofen, cephalexin, 3-N-dechloroethylifosfamide, 10,11-dihydro-10-hydroxycarbamazepine, dihydroketoprofen, fenoprofen, fexofenadine, flurbiprofen, 2-hydroxyibuprofen, ifosfamide, indoprofen, mandelic acid, 2-phenylpropionic acid, praziquantel and tetramisole. The method is based on chiral liquid chromatography utilising a chiral α1-acid glycoprotein column and tandem mass spectrometry detection. Excellent chromatographic separation of enantiomers (Rs≥1.0) was achieved for chloramphenicol, fexofenadine, ifosfamide, naproxen, tetramisole, ibuprofen and their metabolites: aminorex and dihydroketoprofen (three of four enantiomers), and partial separation (Rs = 0.7-1.0) was achieved for ketoprofen, praziquantel and the following metabolites: 3-N-dechloroethylifosfamide and 10,11-dihydro-10-hydroxycarbamazepine. The overall performance of the method was satisfactory for most of the compounds targeted. Method detection limits were at low nanogram per litre for surface water and effluent wastewater. Method intra-day precision was on average under 20% and sample pre-concentration using solid phase extraction yielded recoveries >70% for most of the analytes. This novel, selective and sensitive method has been applied for the quantification of chiral PACs in surface water and effluent wastewater providing excellent enantioresolution of multicomponent mixtures in complex environmental samples. It will help with better understanding of the role of individual enantiomers in the environment and will enable more accurate environmental risk assessment.
---
paper_title: Trace analysis of fluoxetine and its metabolite norfluoxetine. Part I: development of a chiral liquid chromatography-tandem mass spectrometry method for wastewater samples.
paper_content:
An enantioselective method for the determination of fluoxetine (a selective serotonin reuptake inhibitor) and its pharmacologically active metabolite norfluoxetine has been developed for raw and treated wastewater samples.
---
paper_title: Stopped-flow multidimensional gas chromatography: A new method for the determination of enantiomerization barriers
paper_content:
With the stopped-flow multidimensional gas chromatographic technique, enantiomerization barriers of ΔG‡(T) = 70-200 kJ/mol can easily be determined. First, the racemic mixture is separated by gas chromatography on a chiral stationary phase, then one enantiomer is transferred into a heated empty reactor column where enantiomerization is performed on-line in an achiral environment in the gas phase, and finally the enantiomerized fraction is separated in a second chiral column. The enantiomerization barrier ΔG‡(T) can be calculated from the enantiomerization time, the observed enantiomeric ratio, and the enantiomerization temperature. Moreover, by calculating ΔG‡(T) at different temperatures, ΔH‡ and ΔS‡ are accessible. With this method atropisomeric polychlorinated biphenyls, a chiral allene, and a chiral aziridine were investigated.
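A minimal sketch of the type of calculation described above, assuming reversible first-order interconversion (so that the enantiomeric excess decays as ee(t) = exp(-2*k1*t) when starting from a single pure enantiomer) and the Eyring equation; rate-constant conventions for enantiomerization vary between authors, so this is an assumption rather than the exact formula of the cited work, and the example numbers are illustrative.

```python
import math

R  = 8.314462618      # gas constant, J mol^-1 K^-1
KB = 1.380649e-23     # Boltzmann constant, J K^-1
H  = 6.62607015e-34   # Planck constant, J s

def enantiomerization_barrier(ee: float, t_s: float, temp_k: float) -> float:
    """Estimate the barrier (kJ/mol) from the residual enantiomeric excess ee
    observed after time t_s (seconds) at temp_k, starting from one pure enantiomer.
    Assumes ee(t) = exp(-2*k1*t) for reversible first-order interconversion."""
    k1 = -math.log(ee) / (2.0 * t_s)                     # rate constant, s^-1
    dg = -R * temp_k * math.log(k1 * H / (KB * temp_k))  # Eyring equation
    return dg / 1000.0

# Example: ee = 0.80 remaining after 30 min at 573 K (300 C), illustrative values
print(round(enantiomerization_barrier(0.80, 30 * 60, 573.15), 1), "kJ/mol")  # ~190 kJ/mol
```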
---
paper_title: Chiral pharmaceuticals: A review on their environmental occurrence and fate processes
paper_content:
More than 50% of pharmaceuticals in current use are chiral compounds. Enantiomers of the same pharmaceutical have identical physicochemical properties, but may exhibit differences in pharmacokinetics, pharmacodynamics and toxicity. The advancement in separation and detection methods has made it possible to analyze trace amounts of chiral compounds in environmental media. As a result, interest on chiral analysis and evaluation of stereoselectivity in environmental occurrence, phase distribution and degradation of chiral pharmaceuticals has grown substantially in recent years. Here we review recent studies on the analysis, occurrence, and fate of chiral pharmaceuticals in engineered and natural environments. Monitoring studies have shown ubiquitous presence of chiral pharmaceuticals in wastewater, surface waters, sediments, and sludge, particularly β-receptor antagonists, analgesics, antifungals, and antidepressants. Selective sorption and microbial degradation have been demonstrated to result in enrichment of one enantiomer over the other. The changes in enantiomer composition may also be caused by biologically catalyzed chiral inversion. However, accurate evaluation of chiral pharmaceuticals as trace environmental pollutants is often hampered by the lack of identification of the stereoconfiguration of enantiomers. Furthermore, a systematic approach including occurrence, fate and transport in various environmental matrices is needed to minimize uncertainties in risk assessment of chiral pharmaceuticals as emerging environmental contaminants.
---
paper_title: Chiral signature of venlafaxine as a marker of biological attenuation processes.
paper_content:
The chiral signature of the antidepressant venlafaxine was used in this study to gain insight into biological attenuation processes and to differentiate abiotic and biotic transformation processes in water. Laboratory scale experiments revealed that sorption and phototransformation processes were not enantioselective while venlafaxine was enantioselectively biotransformed into O-desmethylvenlafaxine. The enantiomeric fraction (EF) variations of venlafaxine appeared to be proportional to its microbial fractional conversion. Enantioselective biotransformation of venlafaxine was also investigated in a eutrophic French river. Venlafaxine was found to be racemic at the output of the main wastewater treatment plant discharging into the river, independently of the sampling date during the year. An analysis of EF variations might provide evidence of biodegradation along a 30 km river stretch.
---
paper_title: Enantiomeric fractionation as a tool for quantitative assessment of biodegradation: The case of metoprolol.
paper_content:
An efficient chiral liquid chromatography high resolution mass spectrometry method has been developed for the determination of metoprolol (MTP) and three of its major metabolites, namely O-desmethylmetoprolol (O-DMTP), α-hydroxymetoprolol (α-HMTP) and metoprolol acid (MTPA) in wastewater treatment plant (WWTP) influents and effluents. The optimized analytical method has been validated with good quality parameters including resolution >1.3 and method quantification limits down to the ng/L range except for MTPA. On the basis of this newly developed analytical method, the stereochemistry of MTP and its metabolites was studied over time in effluent/sediment biotic and sterile microcosms under dark and light conditions and in influents and effluents of 5 different WWTPs. MTP stereoselective degradation was exclusively observed under biotic conditions, confirming the specificity of enantiomeric fraction variations to biodegradation processes. MTP was always biotransformed into MTPA with a (S)-enantiomer enrichment. The results of enantiomeric enrichment pointed the way for a quantitative assessment of in situ biodegradation processes due to a good fit (R(2) > 0.98) of the aerobic MTP biodegradation to the Rayleigh dependency in all the biotic microcosms and in WWTPs because both MTP enantiomers followed the same biodegradation kinetic profiles. These results demonstrate that enantiomeric fractionation constitutes a very interesting quantitative indicator of MTP biodegradation in WWTPs and probably in the environment.
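The link reported above between enantiomeric enrichment and fractional conversion can be illustrated with a minimal two-rate-constant model; the rate constants below are assumed for illustration, and the Rayleigh-type quantity ln(ER/ER0) is only sketched here, not fitted to the study's data.

import math

def simulate_ef(kS: float, kR: float, times, ef0: float = 0.5):
    """First-order, enantiomer-specific degradation of a chiral drug.

    kS, kR : assumed first-order rate constants (1/d) for the two enantiomers.
    Returns (remaining fraction f, enantiomeric fraction EF = S / (S + R)) at each time.
    """
    S0, R0 = ef0, 1.0 - ef0
    out = []
    for t in times:
        S = S0 * math.exp(-kS * t)
        R = R0 * math.exp(-kR * t)
        out.append(((S + R) / (S0 + R0), S / (S + R)))
    return out

# Illustrative rate constants only: one enantiomer degraded slightly faster than the other.
ef0 = 0.5
er0 = ef0 / (1.0 - ef0)
for f, ef in simulate_ef(kS=0.30, kR=0.25, times=[0, 2, 5, 10], ef0=ef0):
    # A Rayleigh-type treatment regresses ln(ER/ER0) against ln(f) to obtain an
    # enrichment factor usable for in situ biodegradation assessment.
    er = ef / (1.0 - ef)
    print(f"f = {f:.2f}  EF = {ef:.3f}  ln(ER/ER0) = {math.log(er / er0):+.3f}")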
---
paper_title: Enantioselective stopped-flow multidimensional gas chromatography. Determination of the inversion barrier of 1-chloro-2,2-dimethylaziridine.
paper_content:
Enantioselective stopped-flow multidimensional gas chromatography (stopped-flow MDGC) is a fast and simple technique to determine enantiomerization (inversion) barriers in the gas phase in a range of ΔG#gas(T) = 70-200 kJ mol(-1). After complete gas-chromatographic separation of the enantiomers in the first column, gas-phase enantiomerization of the heart-cut fraction of one single enantiomer is performed in the second (reactor) column at increased temperature and afterwards this fraction is separated into the enantiomers in the third column. From the observed de novo enantiomeric peak areas a(j), the enantiomerization time t and the enantiomerization temperature T, the enantiomerization (inversion) barrier ΔG#gas(T) is determined and from temperature-dependent experiments, the activation enthalpy ΔH#gas and the activation entropy ΔS#gas are obtained. Enantiomerization studies on chiral 1-chloro-2,2-dimethylaziridine by stopped-flow MDGC yielded activation parameters of nitrogen inversion in the gas phase, i.e., ΔG#gas(353 K) = 110.5 ± 0.5 kJ mol(-1), ΔH#gas = 71.0 ± 3.8 kJ mol(-1) and ΔS#gas = -109 ± 11 J mol(-1) K(-1). By the complementary method of dynamic gas chromatography (GC), the apparent enantiomerization (inversion) barrier of 1-chloro-2,2-dimethylaziridine in the gas-liquid biphase system was found to be ΔG#app(353 K) = 108 kJ mol(-1). The values obtained by stopped-flow MDGC in the gas phase were used to calculate the activation parameters of nitrogen inversion of 1-chloro-2,2-dimethylaziridine in the liquid phase in the presence of the chiral selector Chirasil-nickel(II), i.e., ΔG#liq(353 K) = 106.0 ± 0.4 kJ mol(-1), ΔH#liq = 68.3 ± 1.4 kJ mol(-1) and ΔS#liq = -106 ± 3.0 J mol(-1) K(-1).
---
paper_title: Stereoisomer analysis of wastewater-derived β-blockers, selective serotonin re-uptake inhibitors, and salbutamol by high-performance liquid chromatography–tandem mass spectrometry
paper_content:
A reversed-phase enantioselective liquid chromatography–tandem mass spectrometry (HPLC-MS-MS) method was developed to measure enantiomer fractions (EF) and concentrations of pharmaceuticals in wastewater. Enantiomer resolution of six β-blockers (atenolol, metoprolol, nadolol, pindolol, propranolol, and sotalol) along with two selective serotonin re-uptake inhibitors (citalopram, fluoxetine) and one β2-agonist (salbutamol) was achieved with the Chirobiotic V stationary phase. Analyte recovery averaged 86% in influent and 78% in effluent with limits of detection ranging from 0.2 to 7.5 ng/L. These results represent an improvement in wastewater EF measurement for atenolol, metoprolol and propranolol as well as the first EF measurements of citalopram, fluoxetine, nadolol, pindolol, salbutamol and sotalol in wastewaters. Changes in EF through treatment indicate biologically mediated stereoselective processes were likely occurring during wastewater treatment.
---
paper_title: Source discrimination of drug residues in wastewater: The case of salbutamol.
paper_content:
Analytical methods used for pharmaceuticals and drugs of abuse in sewage play a fundamental role in wastewater-based epidemiology (WBE) studies. Here quantitative analysis of drug metabolites in raw wastewaters is used to determine consumption by the general population. Its great advantage in public health studies is that it gives objective, real-time data about community use of chemicals, highlighting the relationship between environmental and human health. Within a WBE study on salbutamol use in a large population, we developed a procedure to distinguish human metabolic excretion from an external source of contamination, possibly industrial, in wastewaters. Salbutamol is mainly excreted as the sulphate metabolite, which is rapidly hydrolyzed to the parent compound in the environment, so the metabolite itself is currently not detected. When a molecule is either excreted un-metabolized or its metabolites are unstable in the environment, studies can be completed by monitoring the parent compound. In this case it is mandatory to assess whether the drug in wastewater is present because of population use or because of a specific source of contamination, such as industrial manufacturing waste. Because commercial salbutamol mainly occurs as a racemic mixture and its human metabolism is stereoselective, the enantiomeric relative fraction (EFrel) in wastewater samples should reflect excretion, being unbalanced towards one of the two enantiomers, if the drug is of metabolic origin. The procedure described involves chiral analysis of the salbutamol enantiomers by liquid chromatography-tandem mass spectrometry (LC–MS-MS) and calculation of EFrel, to detect samples where external contamination occurs. Samples were collected daily between October and December 2013 from the Milano Nosedo wastewater treatment plant. Carbamazepine and atenolol were measured in the sewage collector, as “control” drugs. Salbutamol EFrel was highly consistent in all samples during this three-month period, but a limited number of samples had unexpectedly high concentrations where the EFrel was close to that of the un-metabolized, commercially available drug, supporting the idea of an external source of contamination besides human metabolic excretion. Results showed that, when present, non-metabolic daily loads could be evaluated, indicating an average extra load of 4.12 g/day of salbutamol from non-metabolic sources. The stereoselectivity in metabolism and enantiomeric ratio analysis appears to be a useful approach in WBE studies to identify different sources of drugs in the environment when no metabolic products are present at useful analytical levels.
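The kind of source apportionment described above can be sketched as a two-source mixing calculation, assuming enantiomer concentrations from the metabolic and racemic (non-metabolic) sources add linearly; the EF values and total load below are placeholders, not the study's measurements.

def racemic_source_fraction(ef_observed: float, ef_metabolic: float, ef_racemic: float = 0.5) -> float:
    """Fraction of the total drug load attributable to a racemic (non-metabolic) source.

    Assumes the observed enantiomeric fraction is a concentration-weighted mixture:
        EF_obs = x * EF_racemic + (1 - x) * EF_metabolic
    which solves to x = (EF_obs - EF_metabolic) / (EF_racemic - EF_metabolic).
    """
    return (ef_observed - ef_metabolic) / (ef_racemic - ef_metabolic)

# Placeholder values: metabolically excreted drug assumed enriched at EF = 0.65,
# an anomalous wastewater sample observed at EF = 0.55, total load 12 g/day (hypothetical).
x = racemic_source_fraction(ef_observed=0.55, ef_metabolic=0.65)
total_load_g_per_day = 12.0
print(f"non-metabolic fraction = {x:.2f}; extra load = {x * total_load_g_per_day:.1f} g/day")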
---
paper_title: Determination of chiral pharmaceuticals and illicit drugs in wastewater and sludge using microwave assisted extraction, solid-phase extraction and chiral liquid chromatography coupled with tandem mass spectrometry.
paper_content:
This is the first study presenting a multi-residue method allowing for comprehensive analysis of several chiral pharmacologically active compounds (cPACs), including beta-blockers, antidepressants and amphetamines, in wastewater and digested sludge at the enantiomeric level. Analysis of both the liquid and solid matrices within wastewater treatment is crucial for carrying out a mass balance within these systems. The method developed comprises filtration, microwave assisted extraction and solid phase extraction followed by chiral liquid chromatography coupled with tandem mass spectrometry to analyse the enantiomers of 18 compounds within all three matrices. The method was successfully validated for 10 compounds within all three matrices (amphetamine, methamphetamine, MDMA, MDA, venlafaxine, desmethylvenlafaxine, citalopram, metoprolol, propranolol and sotalol), 7 compounds validated for the liquid matrices only (mirtazapine, salbutamol, fluoxetine, desmethylcitalopram, atenolol, ephedrine and pseudoephedrine) and 1 compound (alprenolol) passing the criteria for solid samples only. The method was then applied to wastewater samples; cPACs were found in the liquid matrices at concentrations ranging from 1.7 ng L(-1) (metoprolol) to 1321 ng L(-1) (tramadol) in influent.
---
paper_title: Estimation of community-wide drugs use via stereoselective profiling of sewage
paper_content:
This paper explores possibilities of applying enantiomeric profiling to solving problems related to estimation of drugs usage in communities via the sewage epidemiology approach: for the identification of whether drug residue results from consumption of illicit drug or metabolism of other drugs, verification of potency of used drugs and monitoring of changing patterns of drugs abuse. Due to the very complex nature of wastewater used in sewage epidemiology, which comes from the whole community rather than one individual, verification of the above is challenging but vital in accurate estimations of drugs abuse as well as providing comprehensive information regarding drug abuse trends. The results of this study indicated that amphetamine in raw wastewater was enriched with R(-)-enantiomer due to its abuse as racemate. Methamphetamine was found to be racemic or to be enriched with S(+)-enantiomer. MDMA was enriched with R(-)-MDMA, which was to be expected as MDMA is abused as racemate. MDA was enriched with S(+)-enantiomer, which suggests that its presence might be associated with MDMA abuse and not intentional MDA use. Out of the four possible isomers of ephedrine only natural 1R,2S(-)-ephedrine and 1S,2S(+)-pseudoephedrine were detected in raw wastewater and their diastereomeric fractions were found to be season dependent with higher contribution from 1S,2S(+)-pseudoephedrine over winter months and an enrichment with 1R,2S(-)-ephedrine during the spring and summer months. These findings were accompanied by a decrease of cumulative concentration of ephedrines throughout the sampling campaign between February and August. This is a very important finding indicating that non-enantioselective measurement of ephedrine concentrations cannot be a reliable indicator of actual potency of ephedrines used.
---
paper_title: Chiral analysis of metoprolol and two of its metabolites, α-hydroxymetoprolol and deaminated metoprolol, in wastewater using liquid chromatography–tandem mass spectrometry
paper_content:
An LC–MS/MS method for the chiral separation of metoprolol and two of its main metabolites, α-hydroxymetoprolol (α-OH-Met) and deaminated metoprolol (COOH-Met), in environmental water samples has been developed.
---
paper_title: Enantioselective and nonenantioselective degradation of organic pollutants in the marine ecosystem
paper_content:
Enantiomeric ratios of 11 chiral environmental pollutants determined in different compartments of the marine ecosystem by chiral capillary gas chromatography and chiral high-performance liquid chromatography allow discrimination between the following processes: enantioselective decomposition of both enantiomers with different velocities by marine microorganisms (α-HCH, β-PCCH, γ-PCCH); enantioselective decomposition of one enantiomer only by marine microorganisms (DCPP); enantioselective decomposition by enzymatic processes in marine biota (α-HCH, β-PCCH, trans-chlordane, cis-chlordane, octachlordane MC4, octachlordane MC5, octachlordane MC7, oxychlordane, heptachlor epoxide); enantioselective active transport through the “blood–brain barrier” (α-HCH); nonenantioselective photochemical degradation (α-HCH, β-PCCH).
---
paper_title: Enantiomers of α-Hexachlorocyclohexane as Tracers of Air−Water Gas Exchange in Lake Ontario
paper_content:
The technique of chiral phase capillary gas chromatography was applied to investigate the degradation and transport of the persistent chiral pesticide α-hexachlorocyclohexane (α-HCH) in the Lake Ontario environment. Chiral analysis gave the enantiomeric ratios (ERs) of α-HCH in samples taken May−October 1993 from the lake and its atmosphere. ERs of (+)α-HCH/(−)α-HCH for Lake Ontario surface and deep water samples were similar and averaged 0.85 ± 0.02 as compared with a value of 1.00 for the α-HCH standard. Higher ERs were observed in water samples from the Niagara River (0.91 ± 0.02) and from precipitation (1.00 ± 0.01). Air samples of α-HCH measured at 10 m above the lake show a seasonal variability with values near 1.00 in spring and fall and minimum values in individual samples near 0.90 in summer. A simple air−water gas transfer model demonstrates that enantiomeric ratios <1.0 in air are derived from equilibration of the air with the water during transport of the air mass over the lake. Based on the m...
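A rough way to read the reported enantiomer ratios is as a two-source mixture of racemic background air (ER = 1.00) and lake-derived α-HCH carrying the water signature (ER ≈ 0.85); the sketch below illustrates that reading only and is not the gas-exchange model used in the paper.

def ef_from_er(er: float) -> float:
    """Convert an enantiomer ratio (+)/(-) into an enantiomer fraction (+)/[(+)+(-)]."""
    return er / (1.0 + er)

def water_derived_fraction(er_air: float, er_water: float, er_background: float = 1.00) -> float:
    """Fraction of airborne alpha-HCH volatilised from the water, from a two-source EF mixture."""
    ef_air, ef_w, ef_bg = ef_from_er(er_air), ef_from_er(er_water), ef_from_er(er_background)
    return (ef_bg - ef_air) / (ef_bg - ef_w)

# Values quoted in the abstract: summer over-lake air ER near 0.90, lake water ER about 0.85.
print(f"~{water_derived_fraction(0.90, 0.85):.0%} of summer over-lake alpha-HCH consistent with volatilisation")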
---
paper_title: Variations in α-Hexachlorocyclohexane enantiomer ratios in relation to microbial activity in a temperate estuary
paper_content:
Changes in the enantiomer ratios (ERs) of chiral pollutants in the environment are often considered evidence of biological alteration despite the lack of data on causal or mechanistic relationships between microbial parameters and ER values. Enantiomer ratios that deviate from 1:1 in the environment provide evidence for the preferential microbial degradation of one enantiomer, whereas ER values equal to 1 provide no evidence for microbial degradation and may mistakenly be interpreted as evidence that biodegradation is not important. In an attempt to link biological and geochemical information related to enantioselective processes, we measured the ERs of the chiral pesticide α-hexachlorocyclohexane (α-HCH) and bacterial activity (normalized to abundance) in surface waters of the York River (VA, USA) bimonthly throughout one year. Despite lower overall α-HCH concentrations, α-HCH ER values were unexpectedly close to 1:1 in the freshwater region of the estuary with the highest bacterial activity. In contrast, ER values were nonracemic (ER ≠ 1) and α-HCH concentrations were significantly higher in the higher salinity region of the estuary, where bacterial activity was lower. Examination of these data may indicate that racemic environmental ER values are not necessarily reflective of a lack of biodegradation or recent input into the environment, and that nonenantioselective biodegradation may be important in certain areas.
---
paper_title: Terms for the quantitation of a mixture of stereoisomers.
paper_content:
Various terms for the quantitation of a mixture of enantiomers and diastereomers are discussed.
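Because the terms reviewed here (enantiomeric excess, enantiomer ratio, enantiomer fraction) recur throughout the studies listed in this section, a small helper illustrating the usual definitions from (response-corrected) peak areas may be useful; the example areas are arbitrary.

def enantiomer_metrics(a1: float, a2: float) -> dict:
    """Standard descriptors for a two-enantiomer mixture from response-corrected peak areas.

    a1: area of enantiomer 1 (e.g. the (+)- or (S)-form, by the convention of a given study)
    a2: area of enantiomer 2
    """
    return {
        "EF": a1 / (a1 + a2),                            # enantiomer fraction, 0.5 = racemic
        "ER": a1 / a2,                                   # enantiomer ratio, 1.0 = racemic
        "ee_percent": 100.0 * abs(a1 - a2) / (a1 + a2),  # enantiomeric excess
    }

print(enantiomer_metrics(75.0, 25.0))  # EF 0.75, ER 3.0, ee 50%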
---
paper_title: Solid-phase extraction combined with dispersive liquid-liquid microextraction and chiral liquid chromatography-tandem mass spectrometry for the simultaneous enantioselective determination of representative proton-pump inhibitors in water samples
paper_content:
This report describes, for the first time, the simultaneous enantioselective determination of proton-pump inhibitors (PPIs: omeprazole, lansoprazole, pantoprazole, and rabeprazole) in environmental water matrices based on solid-phase extraction combined with dispersive liquid-liquid microextraction (SPE-DLLME) and chiral liquid chromatography-tandem mass spectrometry. The optimized results of SPE-DLLME were obtained with a PEP-2 column using methanol-acetonitrile (1/1, v/v) as elution solvent, and dichloroethane and acetonitrile as extractant and disperser solvent, respectively. The separation and determination were performed using reversed-phase chromatography on a cellulose chiral stationary phase, a Chiralpak IC (250 mm × 4.6 mm, 5 μm) column, under isocratic conditions at 0.6 mL min(-1) flow rate. The analytes were detected in multiple reaction monitoring (MRM) mode by triple quadrupole mass spectrometry. Isotopically labeled internal standards were used to compensate for matrix interferences. The method provided enrichment factors of around 500. Under optimal conditions, the mean recoveries for all eight enantiomers from the water samples were 89.3-107.3% with 0.9-10.3% intra-day RSD and 2.3-8.1% inter-day RSD at 20 and 100 ng L(-1) levels. Correlation coefficients (r(2)) ≥ 0.999 were achieved for all enantiomers within the range of 2-500 μg L(-1). The method detection and quantification limits were at very low levels, within the range of 0.67-2.29 ng L(-1) and 2.54-8.68 ng L(-1), respectively. This method was successfully applied to the determination of the concentrations and enantiomeric fractions of the targeted analytes in wastewater and river water, making it applicable to the assessment of the enantiomeric fate of PPIs in the environment.
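The enrichment factors of around 500 quoted above are consistent with the usual relationship between sample volume, final extract volume and recovery; the volumes in the sketch below are assumptions for illustration, not the method's actual parameters.

def enrichment_factor(sample_volume_mL: float, extract_volume_mL: float, recovery: float) -> float:
    """Theoretical pre-concentration factor of an SPE (or SPE-DLLME) step.

    EF_enrich = recovery * V_sample / V_extract, i.e. the ratio of the analyte
    concentration in the final extract to that in the original sample.
    """
    return recovery * sample_volume_mL / extract_volume_mL

# Hypothetical workflow: 500 mL water sample, 95% recovery, final extract volume 1 mL.
print(enrichment_factor(500.0, 1.0, 0.95))  # ~475-fold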
---
paper_title: Enantioselective HPLC analysis and biodegradation of atenolol, metoprolol and fluoxetine
paper_content:
The accurate quantification of enantiomers is crucial for assessing the biodegradation of chiral pharmaceuticals in the environment. Methods to quantify enantiomers in environmental matrices are scarce. Here, we used an enantioselective method, high-performance liquid chromatography with fluorescence detection (HPLC-FD), to analyze two beta-blockers, metoprolol and atenolol, and the antidepressant fluoxetine in an activated sludge consortium from a wastewater treatment plant. The vancomycin-based chiral stationary phase was used under polar ionic mode to achieve the enantioseparation of target chiral pharmaceuticals in a single chromatographic run. The method was successfully validated over a concentration range of 20–800 ng/mL for each enantiomer of both beta-blockers and of 50–800 ng/mL for fluoxetine enantiomers. The limits of detection were between 5 and 20 ng/mL and the limits of quantification were between 20 and 50 ng/mL, for all enantiomers. The intra- and inter-batch precision was lower than 5.66 and 8.37 %, respectively. Accuracy values were between 103.03 and 117.92 %, and recovery rates were in the range of 88.48–116.62 %. Furthermore, the enantioselective biodegradation of atenolol, metoprolol and fluoxetine was followed during 15 days. The (S)-enantiomeric form of metoprolol was degraded at higher extents, whereas the degradation of atenolol and fluoxetine did not show enantioselectivity under the applied conditions.
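For validation figures such as those above (LOD, LOQ, intra- and inter-batch precision), one common set of working definitions is sketched below; the 3.3·s/slope and 10·s/slope conventions and the replicate values are generic illustrations, not the calculations used in the cited study.

import statistics

def lod_loq_from_calibration(sd_blank_or_intercept: float, slope: float):
    """Common ICH-style estimates: LOD = 3.3*s/slope, LOQ = 10*s/slope."""
    return 3.3 * sd_blank_or_intercept / slope, 10.0 * sd_blank_or_intercept / slope

def rsd_percent(replicates) -> float:
    """Precision expressed as relative standard deviation (%)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

lod, loq = lod_loq_from_calibration(sd_blank_or_intercept=1.2, slope=0.8)  # assumed units: ng/mL
print(f"LOD ~ {lod:.1f} ng/mL, LOQ ~ {loq:.1f} ng/mL")
print(f"intra-batch RSD ~ {rsd_percent([98.1, 101.4, 99.0, 102.3, 97.6]):.1f} %")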
---
paper_title: Use of the chiral pharmaceutical propranolol to identify sewage discharges into surface waters.
paper_content:
The discharge of relatively small volumes of untreated sewage is a source of wastewater-derived contaminants in surface waters that is often ignored because it is difficult to discriminate from wastewater effluent. To identify raw sewage discharges, we analyzed the two enantiomers of the popular chiral pharmaceutical, propranolol, after derivatization to convert the enantiomers to diastereomers. The enantiomeric fraction (the ratio of the concentration of one of its isomers to the total concentration) of propranolol in the influent of five wastewater treatment plants was 0.50 +/- 0.02, while after secondary treatment it was 0.42 or less. In a laboratory study designed to simulate an activated sludge municipal wastewater treatment system, the enantiomeric fraction of propranolol decreased from 0.5 to 0.43 as the compound underwent biotransformation. In a similar system designed to simulate an effluent-dominated surface water, the enantiomeric fraction of propranolol remained constant as it underwent biotransformation. Samples from surface waters with known or suspected discharges of untreated sewage contained propranolol with an enantiomeric fraction of approximately 0.50, whereas surface waters with large discharges of wastewater effluent contained propranolol with enantiomeric fractions similar to those observed in wastewater effluent. Measurement of enantiomers of propranolol may be useful in detecting and documenting contaminants related to leaking sewers and combined sewer overflows.
---
paper_title: Enantioselective simultaneous analysis of selected pharmaceuticals in environmental samples by ultrahigh performance supercritical fluid based chromatography tandem mass spectrometry.
paper_content:
In order to assess the true impact of each single enantiomer of pharmacologically active compounds (PACs) in the environment, highly efficient, fast and sensitive analytical methods are needed. For the first time this paper focuses on the use of ultrahigh performance supercritical fluid based chromatography coupled to a triple quadrupole mass spectrometer to develop multi-residue enantioselective methods for chiral PACs in environmental matrices. This technique exploits the advantages of supercritical fluid chromatography, ultrahigh performance liquid chromatography and mass spectrometry. Two coated modified 2.5 μm polysaccharide-based chiral stationary phases were investigated: an amylose tris-3,5-dimethylphenylcarbamate column and a cellulose tris-3-chloro-4-methylphenylcarbamate column. The effect of different chromatographic variables on chiral recognition is highlighted. This novel approach resulted in the baseline resolution of the enantiomers of 13 PACs (aminorex, carprofen, chloramphenicol, 3-N-dechloroethylifosfamide, flurbiprofen, 2-hydroxyibuprofen, ifosfamide, imazalil, naproxen, ofloxacin, omeprazole, praziquantel and tetramisole) and partial resolution of the enantiomers of 2 PACs (ibuprofen and indoprofen) under fast-gradient conditions (<10 min analysis time). The overall performance of the methods was satisfactory. The applicability of the methods was tested on influent and effluent wastewater samples. To the best of our knowledge, this is the first feasibility study on the simultaneous separation of chemically diverse chiral PACs in environmental matrices using ultrahigh performance supercritical fluid based chromatography coupled with tandem mass spectrometry.
---
paper_title: Selective degradation of ibuprofen and clofibric acid in two model river biofilm systems
paper_content:
A field survey indicated that the Elbe and Saale Rivers were contaminated with both clofibric acid and ibuprofen. In Elbe River water we could detect the metabolite hydroxy-ibuprofen. Analyses of the city of Saskatoon sewage effluent discharged to the South Saskatchewan River detected clofibric acid but neither ibuprofen nor any metabolite. Laboratory studies indicated that the pharmaceutical ibuprofen was readily degraded in a river biofilm reactor. Two metabolites were detected and identified as hydroxy- and carboxy-ibuprofen. Both metabolites were observed to degrade in the biofilm reactors. However, in human metabolism the metabolite carboxy-ibuprofen appears and degrades second, whereas the opposite occurs in biofilm systems. In biofilms the pharmacologically inactive stereoisomer of ibuprofen is degraded predominantly. In contrast, clofibric acid was not biologically degraded during the experimental period of 21 days. Similar results were obtained using biofilms developed using waters from either the South Saskatchewan or Elbe River. In a sterile reactor no losses of ibuprofen were observed. These results suggested that abiotic losses and adsorption played only a minimal role in the fate of the pharmaceuticals in the river biofilm reactors.
---
paper_title: Critical evaluation of monitoring strategy for the multi-residue determination of 90 chiral and achiral micropollutants in effluent wastewater.
paper_content:
It is essential to monitor the release of organic micropollutants from wastewater treatment plants (WWTPs) for developing environmental risk assessment and assessing compliance with legislative regulation. In this study the impact of sampling strategy on the quantitative determination of micropollutants in effluent wastewater was investigated. An extended list of 90 chiral and achiral micropollutants representing a broad range of biological and physico-chemical properties were studied simultaneously for the first time. During composite sample collection micropollutants can degrade resulting in the under-estimation of concentration. Cooling collected sub-samples to 4°C stabilised ≥81 of 90 micropollutants to acceptable levels (±20% of the initial concentration) in the studied effluents. However, achieving stability for all micropollutants will require an integrated approach to sample collection (i.e., multi-bottle sampling with more than one stabilisation method applied). Full-scale monitoring of effluent revealed time-paced composites attained similar information to volume-paced composites (influent wastewater requires a sampling mode responsive to flow variation). The option of monitoring effluent using time-paced composite samplers is advantageous as not all WWTPs have flow controlled samplers or suitable sites for deploying portable flow meters. There has been little research to date on the impact of monitoring strategy on the determination of chiral micropollutants at the enantiomeric level. Variability in wastewater flow results in a dynamic hydraulic retention time within the WWTP (and upstream sewerage system). Despite chiral micropollutants being susceptible to stereo-selective degradation, no diurnal variability in their enantiomeric distribution was observed. However, unused medication can be directly disposed into the sewer network creating short-term (e.g., daily) changes to their enantiomeric distribution. As enantio-specific toxicity is observed in the environment, similar resolution of enantio-selective analysis to more routinely applied achiral methods is needed throughout the monitoring period for accurate risk assessment.
---
paper_title: Chiral profiling of azole antifungals in municipal wastewater and recipient rivers of the Pearl River Delta, China
paper_content:
Enantiomeric compositions and fractions (EFs) of three chiral imidazole (econazole, ketoconazole, and miconazole) and one chiral triazole (tebuconazole) antifungals were investigated in wastewater, river water, and bed sediment of the Pearl River Delta, South China. The imidazole pharmaceuticals in the untreated wastewater were racemic to weakly nonracemic (EFs of 0.450-0.530) and showed weak enantioselectivity during treatment in the sewage treatment plant. The EFs of the dissolved azole antifungals were usually different from those of the sorbed azoles in the suspended particulate matter, suggesting different behaviors for the enantiomers of the chiral azole antifungals in the dissolved and particulate phases of the wastewater. The azole antifungals were widely present in the rivers. The bed sediment was a sink for the imidazole antifungals. The imidazoles were prevalently racemic, whereas tebuconazole was widely nonracemic in the rivers. Seasonal effects were observed on distribution and chirality of the azole antifungals. Concentrations of the azole antifungals in the river water were relatively higher in winter than in spring and summer while the EF of miconazole in the river water was higher in summer. The mechanism of enantiomeric behavior of the chiral azole antifungals in the environment warrants further research.
---
paper_title: Fast, simple and efficient supramolecular solvent-based microextraction of mecoprop and dichlorprop in soils prior to their enantioselective determination by liquid chromatography-tandem mass spectrometry.
paper_content:
A simple, sensitive, rapid and economic method was developed for the quantification of enantiomers of chiral pesticides such as mecoprop (MCPP) and dichlorprop (DCPP) in soil samples using supramolecular solvent-based microextraction (SUSME) combined with liquid chromatography coupled to mass spectrometry (LC-MS/MS). SUSME has been described for the extraction of chiral pesticides in water, but this is the first time it has been applied to soil samples. MCPP and DCPP are herbicides widely used in agriculture that have two enantiomeric forms (R- and S-) differing in environmental fate and toxicity. Therefore, it is essential to have analytical methods for monitoring individual DCPP and MCPP enantiomers in environmental samples. MCPP and DCPP were extracted in a supramolecular solvent (SUPRAS) made up of dodecanoic acid aggregates, the extract was dried under a nitrogen stream, the two herbicides were dissolved in acetate buffer and the aqueous extract was directly injected into the LC-MS/MS system. The recoveries obtained were independent of soil composition and age of herbicide residues. The detection and quantitation limits of the developed method for the determination of R- and S-MCPP and R- and S-DCPP in soils were 0.03 and 0.1 ng g(-1), respectively, and the precision, expressed as relative standard deviation (n=6), for enantiomer concentrations of 5 and 100 ng g(-1) was in the ranges 4.1-6.1% and 2.9-4.1%. Recoveries for soil samples spiked with enantiomer concentrations within the interval 5-180 ng g(-1) and enantiomeric ratios (ERs) of 1, 3 and 9 ranged between 93 and 104%, with standard deviations of the percent recovery varying between 0.3% and 6.0%. Because the SUPRAS can solubilize analytes through different types of interactions (dispersion, dipole-dipole and hydrogen bonds), it could be used to extract a great variety of pesticides (including both polar and non-polar) in soils.
---
paper_title: Enantioselective biodegradation of pharmaceuticals, alprenolol and propranolol, by an activated sludge inoculum.
paper_content:
Biodegradation of chiral pharmaceuticals in the environment can be enantioselective. Thus quantification of enantiomeric fractions during the biodegradation process is crucial for assessing the fate of chiral pollutants. This work presents the biodegradation of alprenolol and propranolol using an activated sludge inoculum, monitored by a validated enantioselective HPLC method with fluorescence detection. The enantioseparation was optimized using a vancomycin-based chiral stationary phase under polar ionic mode. The method was validated using a minimal salts medium inoculated with activated sludge as matrix. The method was selective and linear in the range of 10-800 ng/ml, with a R²>0.99. The accuracy ranged from 85.0 percent to 103 percent, the recovery ranged from 79.9 percent to 103 percent, and the precision measured by the relative standard deviation (RSD) was <7.18 percent for intra-batch and <5.39 percent for inter-batch assays. The limits of quantification and detection for all enantiomers were 10 ng/ml and 2.5 ng/ml, respectively. The method was successfully applied to follow the biodegradation of the target pharmaceuticals using an activated sludge inoculum during a fifteen days assay. The results indicated slightly higher biodegradation rates for the S-enantiomeric forms of both beta-blockers. The presence of another carbon source maintained the enantioselective degradation pattern while enhancing biodegradation extent up to fourteen percent.
---
paper_title: Simultaneous enantiomeric analysis of pharmacologically active compounds in environmental samples by chiral LC-MS/MS with a macrocyclic antibiotic stationary phase.
paper_content:
This paper presents a multi-residue method for direct enantioselective separation of chiral pharmacologically active compounds in environmental matrices. The method is based on chiral liquid chromatography and tandem mass spectrometry detection. Simultaneous chiral discrimination was achieved with a macrocyclic glycopeptide-based column with the antibiotic teicoplanin as chiral selector working under reversed-phase mode. For the first time, enantioresolution was reported with this chiral stationary phase for the metabolites of ibuprofen: carboxyibuprofen and 2-hydroxyibuprofen. Moreover, enantiomers of chloramphenicol, ibuprofen, ifosfamide, indoprofen, ketoprofen, naproxen and praziquantel were also resolved. The overall performance of the method was satisfactory in terms of linearity, precision, accuracy and limits of detection. The method was successfully applied for monitoring of pharmacologically active compounds at the enantiomeric level in influent and effluent wastewater and in river water. In addition, the chiral recognition and analytical performance of the teicoplanin-based column was critically compared with that of the α1-acid glycoprotein chiral stationary phase.
---
paper_title: Enantioselective degradation of warfarin in soils
paper_content:
Environmental enantioselectivity information is important for the fate assessment of chiral contaminants. Warfarin, a rodenticide and prescription medicine, is a chiral chemical but is used in racemic form. Little is known about its enantioselective behavior in the environment. In this study, enantioselective degradation of warfarin in a turfgrass and a groundcover soil was examined under aerobic and ambient temperature conditions. An enantioselective analytical method was established using a novel triproline chiral stationary phase in high performance liquid chromatography. Unusual peak profile patterns, i.e., first peak (S(−)) broadening/second peak (R(+)) compression with hexane (0.1% TFA)/2-propanol (92/8, v/v) mobile phase, and first peak compression/second peak broadening with the (96/4, v/v) mobile phase, were observed in enantioseparation. This unique tunable peak property was leveraged in evaluating warfarin enantioselective degradation in the two types of soil. Warfarin was extracted in high recovery from soil using methylene chloride after an aqueous-phase basic-acidic conversion. No apparent degradation of warfarin was observed in the sterile turfgrass and groundcover soils during the 28 days of incubation, while it showed quick degradation (half-life <7 days) in the nonsterile soils after a short lag period, suggesting that warfarin degradation in the soils was mainly caused by micro-organisms. Limited enantioselectivity was found in both soils, with the R(+) enantiomer preferentially degraded. The half-lives in the turfgrass soil were 5.06 ± 0.13 and 5.97 ± 0.05 days for the R(+) and the S(−) enantiomer, respectively. The corresponding values for the groundcover soil were 4.15 ± 0.11 and 4.47 ± 0.08 days.
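The degree of enantioselectivity implied by the half-lives quoted above can be expressed as first-order rate constants and a simple enantioselectivity index; the sketch below reuses the published turfgrass-soil half-lives under a first-order assumption, and the index ES = (k1 - k2)/(k1 + k2) is one common convention rather than the paper's own metric.

import math

def k_from_half_life(t_half_days: float) -> float:
    """First-order rate constant (1/d) from a half-life."""
    return math.log(2.0) / t_half_days

def enantioselectivity_index(k_fast: float, k_slow: float) -> float:
    """ES = (k1 - k2) / (k1 + k2); 0 = no enantioselectivity, +/-1 = fully enantioselective."""
    return (k_fast - k_slow) / (k_fast + k_slow)

k_R = k_from_half_life(5.06)  # R(+)-warfarin, turfgrass soil (half-life from the abstract)
k_S = k_from_half_life(5.97)  # S(-)-warfarin, turfgrass soil
print(f"kR = {k_R:.3f} 1/d, kS = {k_S:.3f} 1/d, ES = {enantioselectivity_index(k_R, k_S):.3f}")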
---
paper_title: Distinct Enantiomeric Signals of Ibuprofen and Naproxen in Treated Wastewater and Sewer Overflow
paper_content:
Ibuprofen and naproxen are commonly used members of a class of pharmaceuticals known as 2-arylpropionic acids (2-APAs). Both are chiral chemicals and can exist as either of two (R)- and (S)-enantiomers. Enantioselective analyses of effluents from municipal wastewater treatment plants (WWTPs) and from untreated sewage overflow reveal distinctly different enantiomeric fractions for both pharmaceuticals. The (S)-enantiomers of both were dominant in untreated sewage overflow, but the relative proportions of the (R)-enantiomers were shown to be increased in WWTP effluents. (R)-naproxen was below method detection limits (<1 ng.L(-1)) in sewage overflow, but measurable at higher concentrations in WWTP effluents. Accordingly, enantiomeric fractions (EF) for naproxen were consistently 1.0 in sewage overflow, but ranged from 0.7–0.9 in WWTP effluents. Ibuprofen EF ranged from 0.6–0.8 in sewage overflow and receiving waters, and was 0.5 in two WWTP effluents. Strong evidence is provided to indicate that chiral inversion of (S)-2-APAs to produce (R)-2-APAs may occur during wastewater treatment processes. It is concluded that this characterization of the enantiomeric fractions for ibuprofen and naproxen in particular effluents could facilitate the distinction of treated and untreated sources of pharmaceutical contamination in surface waters.
---
paper_title: Trace analysis of fluoxetine and its metabolite norfluoxetine. Part II : Enantioselective quantification and studies of matrix effects in raw and treated wastewater by solid phase extraction and liquid chromatography-tandem mass spectrometry
paper_content:
The isotope-labeled compounds fluoxetine-d5 and norfluoxetine-d5 were used to study matrix effects caused by co-eluting compounds originating from raw and treated wastewater samples, collected in U ...
---
paper_title: A New Chiral Residue Analysis Method for Triazole Fungicides in Water Using Dispersive Liquid-Liquid Microextraction (DLLME)
paper_content:
A rapid, simple, reliable, and environment-friendly method for the residue analysis of the enantiomers of four chiral fungicides, including hexaconazole, triadimefon, tebuconazole, and penconazole, in water samples was developed by dispersive liquid-liquid microextraction (DLLME) pretreatment followed by chiral high-performance liquid chromatography (HPLC)-DAD detection. The enantiomers were separated on a Chiralpak IC column by HPLC applying n-hexane or petroleum ether as mobile phase and ethanol or isopropanol as modifier. The influences of mobile phase composition and temperature on the resolution were investigated and most of the enantiomers could be completely separated in 20 min under optimized conditions. The thermodynamic parameters indicated that the separation was enthalpy-driven. The elution orders were detected by both a circular dichroism detector (CD) and an optical rotatory dispersion detector (ORD). Parameters affecting the DLLME performance for pretreatment of the chiral fungicide residues in water samples, such as the extraction and dispersive solvents and their volumes, were studied and optimized. Under the optimum microextraction conditions the enrichment factors were over 121, the linearities were 30-1500 µg L(-1) with correlation coefficients (R(2)) over 0.9988, and the recoveries were between 88.7% and 103.7% at spiking levels of 0.5, 0.25, and 0.05 mg L(-1) (for each enantiomer) with relative standard deviations varying from 1.38% to 6.70% (n = 6). The limits of detection (LODs) ranged from 8.5 to 29.0 µg L(-1) (S/N = 3).
---
paper_title: Enantiomeric Fraction Determination of 2-Arylpropionic Acids in a Package Plant Membrane Bioreactor
paper_content:
Enantiomeric compositions of three 2-arylpropionic acid (2-APA) drugs, ibuprofen, naproxen, and ketoprofen, were monitored in a membrane bioreactor (MBR) treating municipal effluent in a small rural town in Australia. Specific enantiomers were determined as amide diastereomers using the chiral derivatizing reagent, (R)-1-phenylethylamine (PEA), followed by gas chromatography-tandem mass spectrometry (GC-MS/MS). The six individual enantiomers were quantified by isotope dilution and the enantiomeric fractions (EFs) were determined. Over four separate sampling events, ibuprofen EF ranged from 0.88 to 0.94 (median 0.93) in the influent and 0.38 to 0.40 (median 0.39) in the effluent. However, no significant change in ketoprofen EF was observed, with influent EFs of 0.56-0.60 (median 0.58) and effluent EFs 0.54-0.68 (median 0.56). This is the first report of enantiospecific analysis of ketoprofen in municipal wastewater and it is not yet clear why such different behavior was observed compared to ibuprofen. Naproxen EF was consistently measured at 0.99 in the influent and ranged from 0.86 to 0.94 (median 0.91) in the effluent. This study demonstrates that EF is a relatively stable parameter and does not fluctuate according to concentration or other short-term variables introduced by sampling limitations. The enantiospecific analysis of chiral chemicals presents a promising approach to elucidate a more thorough understanding of biological treatment processes and a potential tool for monitoring the performance of key biological pathways.
---
paper_title: Loadings, trends, comparisons, and fate of achiral and chiral pharmaceuticals in wastewaters from urban tertiary and rural aerated lagoon treatments.
paper_content:
A comparison of time-weighted average pharmaceutical concentrations, loadings and enantiomer fractions (EFs) was made among treated wastewater from one rural aerated lagoon and from two urban tertiary wastewater treatment plants (WWTPs) in Alberta, Canada. Passive samplers were deployed directly in treated effluent for nearly continuous monitoring of temporal trends between July 2007 and April 2008. In aerated lagoon effluent, concentrations of some drugs changed over time, with some higher concentrations in winter likely due to reduced attenuation from lower temperatures (e.g., less microbially mediated biotransformation) and reduced photolysis from ice cover over lagoons; however, concentrations of some drugs (e.g. antibiotics) may also be influenced by changing use patterns over the year. Winter loadings to receiving waters for the sum of all drugs were 700 and 400 g/day from the two urban plants, compared with 4 g/day from the rural plant. Per capita loadings were similar amongst all plants. This result indicates that measured loadings, weighted by population served by WWTPs, are a good predictor of other effluent concentrations, even among different treatment types. Temporal changes in chiral drug EFs were observed in the effluent of aerated lagoons, and some differences in EF were found among WWTPs. This result suggests that there may be some variation of microbial biotransformation of drugs in WWTPs among plants and treatment types, and that the latter may be a good predictor of EF for some, but not all drugs.
---
paper_title: Enantioselective analysis of ibuprofen, ketoprofen and naproxen in wastewater and environmental water samples.
paper_content:
A highly sensitive and reliable method for the enantioselective analysis of ibuprofen, ketoprofen and naproxen in wastewater and environmental water samples has been developed. These three pharmaceuticals are chiral molecules and the variable presence of their individual (R)- and (S)-enantiomers is of increasing interest for environmental analysis. An indirect method for enantioseparation was achieved by the derivatization of the (R)- and (S)-enantiomers to amide diastereomers using (R)-1-phenylethylamine ((R)-1-PEA). After initial solid phase extraction from aqueous samples, derivatization was undertaken at room temperature in less than 5 min. Optimum recovery and clean-up of the amide diastereomers from the derivatization solution was achieved by a second solid phase extraction step. Separation and detection of the individual diastereomers was undertaken by gas chromatography-tandem mass spectrometry (GC-MS/MS). Excellent analyte separation and peak shapes were achieved for the derivatized (R)- and (S)-enantiomers for all three pharmaceuticals with peak resolution, R(s) is in the range of 2.87-4.02 for all diastereomer pairs. Furthermore, the calibration curves developed for the (S)-enantiomers revealed excellent linearity (r(2) ≥ 0.99) for all three compounds. Method detection limits were shown to be within the range of 0.2-3.3 ng L(-1) for individual enantiomers in ultrapure water, drinking water, surface water and a synthetic wastewater. Finally, the method was shown to perform well on a real tertiary treated wastewater sample, revealing measurable concentrations of both (R)- and (S)-enantiomers of ibuprofen, naproxen and ketoprofen. Isotope dilution using racemic D(3)-ibuprofen, racemic D(3)-ketoprofen and racemic D(3)-naproxen was shown to be an essential aspect of this method for accurate quantification and enantiomeric fraction (EF) determination. This approach produced excellent reproducibility for EF determination of triplicate tertiary treated wastewater samples.
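The isotope-dilution quantification highlighted above works, in outline, as sketched below; the peak areas, spike amount and unit response factor are hypothetical, and a real method would calibrate each enantiomer against its own labelled analogue.

def isotope_dilution_conc(area_analyte: float, area_labeled: float,
                          spiked_labeled_ng: float, sample_volume_L: float,
                          response_factor: float = 1.0) -> float:
    """Concentration (ng/L) of an enantiomer estimated by isotope dilution.

    Assumes the native enantiomer and its deuterated analogue behave identically
    through extraction and derivatisation, so losses cancel in the area ratio.
    """
    mass_ng = response_factor * (area_analyte / area_labeled) * spiked_labeled_ng
    return mass_ng / sample_volume_L

# Hypothetical numbers: 50 ng of the d3-labelled enantiomer spiked into a 0.5 L sample.
print(f"{isotope_dilution_conc(12000, 45000, 50.0, 0.5):.1f} ng/L")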
---
paper_title: Enantiomeric fraction evaluation of pharmaceuticals in environmental matrices by liquid chromatography-tandem mass spectrometry
paper_content:
The interest in the environmental fate assessment of chiral pharmaceuticals is increasing and enantioselective analytical methods are mandatory. This study presents an enantioselective analytical method for the quantification of seven pairs of enantiomers of pharmaceuticals and one pair of enantiomers of a metabolite. The selected chiral pharmaceuticals belong to three different therapeutic classes, namely selective serotonin reuptake inhibitors (venlafaxine, fluoxetine and its metabolite norfluoxetine), beta-blockers (alprenolol, bisoprolol, metoprolol, propranolol) and a beta2-adrenergic agonist (salbutamol). The analytical method was based on solid phase extraction followed by liquid chromatography tandem mass spectrometry with a triple quadrupole analyser. Briefly, Oasis® MCX cartridges were used to preconcentrate 250 mL of water samples and the reconstituted extracts were analysed with a Chirobiotic™ V column under reversed mode. The effluent of a laboratory-scale aerobic granular sludge sequencing batch reactor (AGS-SBR) was used to validate the method. Linearity (r2 > 0.99), selectivity and sensitivity were achieved in the range of 20–400 ng L−1 for all enantiomers, except for the norfluoxetine enantiomers, for which the range was 30–400 ng L−1. The method detection limits were between 0.65 and 11.5 ng L−1 and the method quantification limits were between 1.98 and 19.7 ng L−1. The identity of all enantiomers was confirmed using two MS/MS transitions and their ion ratios, according to European Commission Decision 2002/657/EC. This method was successfully applied to evaluate effluents of wastewater treatment plants (WWTP) in Portugal. Venlafaxine and fluoxetine were quantified as non-racemic mixtures (enantiomeric fraction ≠ 0.5). The enantioselective validated method was able to monitor chiral pharmaceuticals in WWTP effluents and has potential to assess enantioselective biodegradation in bioreactors. Further application to environmental matrices such as surface and estuarine waters can be exploited.
---
paper_title: Stereoselective biodegradation of amphetamine and methamphetamine in river microcosms
paper_content:
Here presented for the first time is the enantioselective biodegradation of amphetamine and methamphetamine in river microcosm bioreactors. The aim of this investigation was to test the hypothesis that mechanisms governing the fate of amphetamine and methamphetamine in the environment are mostly stereoselective and biological in nature. Several bioreactors were studied over the duration of 15 days (i) in both biotic and abiotic conditions, (ii) in the dark or exposed to light and (iii) in the presence or absence of suspended particulate matter. Bioreactor samples were analysed using SPE-chiral-LC-(QTOF)MS methodology. This investigation has elucidated the fundamental mechanism for degradation of amphetamine and methamphetamine as being predominantly biological in origin. Furthermore, stereoselectivity and changes in enantiomeric fraction (EF) were only observed under biotic conditions. Neither amphetamine nor methamphetamine appeared to demonstrate adsorption to suspended particulate matter. Our experiments also demonstrated that amphetamine and methamphetamine were photo-stable. Illicit drugs are present in the environment at low concentrations but due to their pseudo-persistence and non-racemic behaviour, with two enantiomers revealing significantly different potency (and potentially different toxicity towards aquatic organisms) the risk posed by illicit drugs in the environment should not be under- or over-estimated. The above results demonstrate the need for re-evaluation of the procedures utilised in environmental risk assessment, which currently do not recognise the importance of the phenomenon of chirality in pharmacologically active compounds.
---
paper_title: A column-switching method for quantification of the enantiomers of omeprazole in native matrices of waste and estuarine water samples.
paper_content:
This work reports the use of a two-dimensional liquid chromatography (2D-LC) system for quantification of the enantiomers of omeprazole in distinct native aqueous matrices. An octyl restricted-access media bovine serum albumin column (RAM-BSA C8) was used in the first dimension, while a polysaccharide-based chiral column was used in the second dimension with either ultraviolet (UV–vis) or ion-trap tandem mass spectrometry (IT-MS/MS) detection. An in-line configuration was employed to assess the exclusion capacity of the RAM-BSA columns to humic substances. The excluded macromolecules had a molecular mass in the order of 18 kDa. Good selectivity, extraction efficiency, accuracy, and precision were achieved employing a very small amount (500 μL or 1.00 mL) of native water sample per injection, with detection limits of 5.00 μg L−1, using UV–vis, and 0.0250 μg L−1, using IT-MS/MS. The total analysis time was only 35 min, with no time spent on sample preparation. The methods were successfully applied to analyze a series of waste and estuarine water samples. The enantiomers were detected in an estuarine water sample collected from the Douro River estuary (Portugal) and in an influent sample from the wastewater treatment plant (WWTP) of Sao Carlos (Brazil). To the best of our knowledge, this is the first report of the occurrence of (+)-omeprazole and (−)-omeprazole in native aqueous matrices.
---
paper_title: Occurrence and Environmental Behavior of the Chiral Pharmaceutical Drug Ibuprofen in Surface Waters and in Wastewater
paper_content:
Pharmaceutical compounds can reach detectable concentrations in rivers and lakes if production and use are sufficiently large and the compounds show some mobility and persistence in the aquatic environment. In this study, we report on the occurrence and on the enantiomer composition of the chiral pharmaceutical drug ibuprofen (IB) in surface waters and in samples from wastewater treatment plants (WWTPs). Enantioselective gas chromatography and detection by mass spectrometry/mass spectrometry was used for analysis. IB was present in influents of WWTPs at concentrations of up to 3 μg/L with a high enantiomeric excess of the pharmacologically active S enantiomer (S ≫ R), as from human urinary excretion. The principal human urinary metabolites of IB, hydroxy-IB and carboxy-IB, were observed in WWTP influents at even higher concentrations. In contrast to other pharmaceutical compounds such as clofibric acid and diclofenac, IB and its metabolites are then efficiently degraded (>95%) during treatment in WWTPs. ...
---
paper_title: Direct injection of native aqueous matrices by achiral–chiral chromatography ion trap mass spectrometry for simultaneous quantification of pantoprazole and lansoprazole enantiomers fractions
paper_content:
A two-dimensional liquid chromatography system coupled to an ion-trap tandem mass spectrometer (2DLC-IT–MS/MS) was employed for the simultaneous quantification of pantoprazole and lansoprazole enantiomer fractions. A restricted-access media bovine serum albumin octyl column (RAM-BSA C8) was used in the first dimension for the exclusion of the humic substances, while a polysaccharide-based chiral column was used in the second dimension for the enantioseparation of both pharmaceuticals. The results described here show good selectivity, extraction efficiency, accuracy, and precision, with detection limits of 0.200 and 0.150 μg L−1 for the enantiomers of pantoprazole and lansoprazole, respectively, while employing a small amount (1.0 mL) of native water sample per injection. This work reports an innovative assay for monitoring work and for studies of biotic and abiotic enantioselective degradation and temporal changes of enantiomeric fractions.
---
paper_title: Multi-residue enantiomeric analysis of human and veterinary pharmaceuticals and their metabolites in environmental samples by chiral liquid chromatography coupled with tandem mass spectrometry detection
paper_content:
Enantiomeric profiling of chiral pharmacologically active compounds (PACs) in the environment has hardly been investigated. This manuscript describes, for the first time, a multi-residue enantioselective method for the analysis of human and veterinary chiral PACs and their main metabolites from different therapeutic groups in complex environmental samples such as wastewater and river water. Several analytes targeted in this paper have not been analysed in the environment at enantiomeric level before. These are aminorex, carboxyibuprofen, carprofen, cephalexin, 3-N-dechloroethylifosfamide, 10,11-dihydro-10-hydroxycarbamazepine, dihydroketoprofen, fenoprofen, fexofenadine, flurbiprofen, 2-hydroxyibuprofen, ifosfamide, indoprofen, mandelic acid, 2-phenylpropionic acid, praziquantel and tetramisole. The method is based on chiral liquid chromatography utilising a chiral α1-acid glycoprotein column and tandem mass spectrometry detection. Excellent chromatographic separation of enantiomers (Rs≥1.0) was achieved for chloramphenicol, fexofenadine, ifosfamide, naproxen, tetramisole, ibuprofen and their metabolites: aminorex and dihydroketoprofen (three of four enantiomers), and partial separation (Rs = 0.7-1.0) was achieved for ketoprofen, praziquantel and the following metabolites: 3-N-dechloroethylifosfamide and 10,11-dihydro-10-hydroxycarbamazepine. The overall performance of the method was satisfactory for most of the compounds targeted. Method detection limits were at low nanogram per litre for surface water and effluent wastewater. Method intra-day precision was on average under 20% and sample pre-concentration using solid phase extraction yielded recoveries >70% for most of the analytes. This novel, selective and sensitive method has been applied for the quantification of chiral PACs in surface water and effluent wastewater providing excellent enantioresolution of multicomponent mixtures in complex environmental samples. It will help with better understanding of the role of individual enantiomers in the environment and will enable more accurate environmental risk assessment.
---
paper_title: Trace analysis of fluoxetine and its metabolite norfluoxetine. Part I: development of a chiral liquid chromatography-tandem mass spectrometry method for wastewater samples.
paper_content:
An enantioselective method for the determination of fluoxetine (a selective serotonin reuptake inhibitor) and its pharmacologically active metabolite norfluoxetine has been developed for raw and tr ...
---
paper_title: Enantiomeric fractionation as a tool for quantitative assessment of biodegradation: The case of metoprolol.
paper_content:
An efficient chiral liquid chromatography high resolution mass spectrometry method has been developed for the determination of metoprolol (MTP) and three of its major metabolites, namely O-desmethylmetoprolol (O-DMTP), α-hydroxymetoprolol (α-HMTP) and metoprolol acid (MTPA) in wastewater treatment plant (WWTP) influents and effluents. The optimized analytical method has been validated with good quality parameters including resolution >1.3 and method quantification limits down to the ng/L range except for MTPA. On the basis of this newly developed analytical method, the stereochemistry of MTP and its metabolites was studied over time in effluent/sediment biotic and sterile microcosms under dark and light conditions and in influents and effluents of 5 different WWTPs. MTP stereoselective degradation was exclusively observed under biotic conditions, confirming the specificity of enantiomeric fraction variations to biodegradation processes. MTP was always biotransformed into MTPA with a (S)-enantiomer enrichment. The results of enantiomeric enrichment pointed the way for a quantitative assessment of in situ biodegradation processes due to a good fit (R(2) > 0.98) of the aerobic MTP biodegradation to the Rayleigh dependency in all the biotic microcosms and in WWTPs because both MTP enantiomers followed the same biodegradation kinetic profiles. These results demonstrate that enantiomeric fractionation constitutes a very interesting quantitative indicator of MTP biodegradation in WWTPs and probably in the environment.
---
paper_title: Stereoisomer analysis of wastewater-derived β-blockers, selective serotonin re-uptake inhibitors, and salbutamol by high-performance liquid chromatography–tandem mass spectrometry
paper_content:
Abstract A reversed-phase enantioselective liquid chromatography–tandem mass spectrometry (HPLC-MS-MS) method was developed to measure enantiomer fractions (EF) and concentrations of pharmaceuticals in wastewater. Enantiomer resolution of six β-blockers (atenolol, metoprolol, nadolol, pindolol, propranolol, and sotalol) along with two selective serotonin re-uptake inhibitors (citalopram, fluoxetine) and one β2-agonist (salbutamol) was achieved with the Chirobiotic V stationary phase. Analyte recovery averaged 86% in influent and 78% in effluent with limits of detection ranging from 0.2 to 7.5 ng/L. These results represent an improvement in wastewater EF measurement for atenolol, metoprolol and propranolol as well as the first EF measurements of citalopram, fluoxetine, nadolol, pindolol, salbutamol and sotalol in wastewaters. Changes in EF through treatment indicate biologically mediated stereoselective processes were likely occurring during wastewater treatment.
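For readers unfamiliar with the enantiomer fraction (EF) metric used above, the minimal sketch below computes EF from integrated peak areas using the common convention EF = A_E1/(A_E1 + A_E2), where EF = 0.5 corresponds to a racemate; the peak areas are hypothetical.

```python
# Minimal sketch: enantiomer fraction (EF) from integrated peak areas of the
# two enantiomers in a chiral chromatogram. A common convention is
#   EF = A(+) / (A(+) + A(-)), or A_E1 / (A_E1 + A_E2) if elution order is unknown.
# EF = 0.5 corresponds to a racemate. Peak areas below are illustrative only.

def enantiomer_fraction(area_e1: float, area_e2: float) -> float:
    return area_e1 / (area_e1 + area_e2)

samples = {
    "influent_atenolol": (152_300, 149_800),
    "effluent_atenolol": (98_400, 80_200),
}
for name, (a1, a2) in samples.items():
    ef = enantiomer_fraction(a1, a2)
    note = "~ racemic" if abs(ef - 0.5) < 0.02 else "non-racemic"
    print(f"{name}: EF = {ef:.2f} ({note})")
```

A shift in EF between influent and effluent, as in the second hypothetical pair, is the kind of change interpreted above as evidence of stereoselective processes during treatment.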
---
paper_title: Source discrimination of drug residues in wastewater: The case of salbutamol.
paper_content:
Abstract Analytical methods used for pharmaceuticals and drugs of abuse in sewage play a fundamental role in wastewater-based epidemiology (WBE) studies. Here quantitative analysis of drug metabolites in raw wastewaters is used to determine consumption by the general population. Its great advantage in public health studies is that it gives objective, real-time data about community use of chemicals, highlighting the relationship between environmental and human health. Within a WBE study on salbutamol use in a large population, we developed a procedure to distinguish human metabolic excretion from an external source of contamination, possibly industrial, in wastewaters. Salbutamol is mainly excreted as the sulphate metabolite, which is rapidly hydrolyzed to the parent compound in the environment, so this is currently not detected. When a molecule is either excreted un-metabolized or its metabolites are unstable in the environment, studies can be completed by monitoring the parent compound. In this case it is mandatory to assess whether the drug in wastewater is present because of population use or because of a specific source of contamination, such as industrial manufacturing waste. Because commercial salbutamol mainly occurs as a racemic mixture and its human metabolism is stereoselective, the enantiomeric relative fraction (EFrel) in wastewater samples should reflect excretion, being unbalanced towards one of the two enantiomers, if the drug is of metabolic origin. The procedure described involves chiral analysis of the salbutamol enantiomers by liquid chromatography-tandem mass spectrometry (LC–MS-MS) and calculation of EFrel, to detect samples where external contamination occurs. Samples were collected daily between October and December 2013 from the Milano Nosedo wastewater treatment plant. Carbamazepine and atenolol were measured in the sewage collector as "control" drugs. Salbutamol EFrel was highly consistent in all samples during this three-month period, but a limited number of samples had unexpectedly high concentrations where the EFrel was close to that of the un-metabolized, commercially available drug, supporting the idea of an external source of contamination besides human metabolic excretion. Results showed that, when present, non-metabolic daily loads could be evaluated, indicating an average extra load of 4.12 g/day of salbutamol due to non-metabolic sources. The stereoselectivity of metabolism and enantiomeric ratio analysis appears to be a useful approach in WBE studies to identify different sources of drugs in the environment when no metabolic products are present at useful analytical levels.
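The source-discrimination logic described above can be sketched as follows: compute the daily mass load from concentration and flow, and flag samples whose enantiomeric composition is close to that of the formulated racemate (EF ≈ 0.5), which would point to a non-metabolic input on top of human excretion. All values, including the EF expected after human metabolism and the tolerance threshold, are invented placeholders, and the paper's EFrel definition may differ from the plain EF used here.

```python
# Simplified illustration of the screening logic: daily load from concentration
# and flow, plus an EF-based flag for suspected non-metabolic (e.g. industrial)
# inputs. All numbers and thresholds are hypothetical.

EF_RACEMATE = 0.50   # formulated salbutamol is racemic
EF_EXCRETED = 0.62   # hypothetical EF expected after stereoselective metabolism
TOLERANCE = 0.03

daily_samples = [
    # (date, concentration ng/L, flow m3/day, measured EF)
    ("2013-10-02", 4.5, 380_000, 0.61),
    ("2013-10-03", 31.0, 395_000, 0.51),   # high load, near-racemic -> suspect
]

for date, conc_ng_l, flow_m3, ef in daily_samples:
    # (ng/L) * (m3/day) * 1000 L/m3 / 1e9 ng/g = g/day
    load_g_day = conc_ng_l * flow_m3 * 1_000 / 1e9
    if abs(ef - EF_RACEMATE) <= TOLERANCE:
        flag = "EF near racemate -> possible non-metabolic input"
    elif abs(ef - EF_EXCRETED) <= TOLERANCE:
        flag = "EF consistent with human excretion"
    else:
        flag = "intermediate EF -> mixed sources possible"
    print(f"{date}: load = {load_g_day:.2f} g/day, EF = {ef:.2f} -> {flag}")
```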
---
paper_title: Chiral analysis of metoprolol and two of its metabolites, α-hydroxymetoprolol and deaminated metoprolol, in wastewater using liquid chromatography–tandem mass spectrometry
paper_content:
An LC–MS/MS method for the chiral separation of metoprolol and two of its main metabolites, α-hydroxymetoprolol (α-OH-Met) and deaminated metoprolol (COOH-Met), in environmental water samples has been developed.
---
paper_title: Enantioselective simultaneous analysis of selected pharmaceuticals in environmental samples by ultrahigh performance supercritical fluid based chromatography tandem mass spectrometry.
paper_content:
In order to assess the true impact of each single enantiomer of pharmacologically active compounds (PACs) in the environment, highly efficient, fast and sensitive analytical methods are needed. For the first time this paper focuses on the use of ultrahigh performance supercritical fluid based chromatography coupled to a triple quadrupole mass spectrometer to develop multi-residue enantioselective methods for chiral PACs in environmental matrices. This technique exploits the advantages of supercritical fluid chromatography, ultrahigh performance liquid chromatography and mass spectrometry. Two coated modified 2.5 μm polysaccharide-based chiral stationary phases were investigated: an amylose tris-3,5-dimethylphenylcarbamate column and a cellulose tris-3-chloro-4-methylphenylcarbamate column. The effect of different chromatographic variables on chiral recognition is highlighted. This novel approach resulted in the baseline resolution of the enantiomers of 13 chiral PACs (aminorex, carprofen, chloramphenicol, 3-N-dechloroethylifosfamide, flurbiprofen, 2-hydroxyibuprofen, ifosfamide, imazalil, naproxen, ofloxacin, omeprazole, praziquantel and tetramisole) and partial resolution of the enantiomers of 2 chiral PACs (ibuprofen and indoprofen) under fast-gradient conditions (<10 min analysis time). The overall performance of the methods was satisfactory. The applicability of the methods was tested on influent and effluent wastewater samples. To the best of our knowledge, this is the first feasibility study on the simultaneous separation of chemically diverse chiral PACs in environmental matrices using ultrahigh performance supercritical fluid based chromatography coupled with tandem mass spectrometry.
---
paper_title: Critical evaluation of monitoring strategy for the multi-residue determination of 90 chiral and achiral micropollutants in effluent wastewater.
paper_content:
It is essential to monitor the release of organic micropollutants from wastewater treatment plants (WWTPs) for developing environmental risk assessment and assessing compliance with legislative regulation. In this study the impact of sampling strategy on the quantitative determination of micropollutants in effluent wastewater was investigated. An extended list of 90 chiral and achiral micropollutants representing a broad range of biological and physico-chemical properties were studied simultaneously for the first time. During composite sample collection micropollutants can degrade resulting in the under-estimation of concentration. Cooling collected sub-samples to 4°C stabilised ≥81 of 90 micropollutants to acceptable levels (±20% of the initial concentration) in the studied effluents. However, achieving stability for all micropollutants will require an integrated approach to sample collection (i.e., multi-bottle sampling with more than one stabilisation method applied). Full-scale monitoring of effluent revealed time-paced composites attained similar information to volume-paced composites (influent wastewater requires a sampling mode responsive to flow variation). The option of monitoring effluent using time-paced composite samplers is advantageous as not all WWTPs have flow controlled samplers or suitable sites for deploying portable flow meters. There has been little research to date on the impact of monitoring strategy on the determination of chiral micropollutants at the enantiomeric level. Variability in wastewater flow results in a dynamic hydraulic retention time within the WWTP (and upstream sewerage system). Despite chiral micropollutants being susceptible to stereo-selective degradation, no diurnal variability in their enantiomeric distribution was observed. However, unused medication can be directly disposed into the sewer network creating short-term (e.g., daily) changes to their enantiomeric distribution. As enantio-specific toxicity is observed in the environment, similar resolution of enantio-selective analysis to more routinely applied achiral methods is needed throughout the monitoring period for accurate risk assessment.
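Two of the monitoring choices discussed above, time-paced versus flow(volume)-paced compositing and the ±20% stability criterion, reduce to simple arithmetic, sketched below with invented sub-sample data.

```python
# Sketch comparing a time-paced composite (simple mean of equal sub-samples)
# with a flow-weighted (volume-paced) composite for one analyte over a day,
# plus the +/-20% stability criterion used in the paper.
# Sub-sample concentrations and flows are invented.
import numpy as np

conc = np.array([120, 95, 80, 150, 210, 180])        # ng/L, 4-hourly sub-samples
flow = np.array([900, 700, 650, 1100, 1500, 1300])   # m3/h at collection time

time_paced = conc.mean()
flow_weighted = np.average(conc, weights=flow)
print(f"time-paced composite:    {time_paced:.1f} ng/L")
print(f"flow-weighted composite: {flow_weighted:.1f} ng/L")

# Stability check: the analyte is considered stable in the collection vessel
# if it stays within +/-20% of its initial concentration over 24 h.
c0, c24 = 100.0, 84.0   # invented spike concentrations at 0 h and 24 h
stable = abs(c24 - c0) / c0 <= 0.20
print(f"remaining after 24 h: {c24 / c0:.0%} -> {'stable' if stable else 'unstable'}")
```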
---
paper_title: Using chiral liquid chromatography quadrupole time-of-flight mass spectrometry for the analysis of pharmaceuticals and illicit drugs in surface and wastewater at the enantiomeric level.
paper_content:
This paper presents and compares for the first time two chiral LC-QTOF-MS methodologies (utilising CBH and Chirobiotic V columns with cellobiohydrolase and vancomycin as chiral selectors) for the quantification of amphetamine, methamphetamine, MDA (methylenedioxyamphetamine), MDMA (methylenedioxymethamphetamine), propranolol, atenolol, metoprolol, fluoxetine and venlafaxine in river water and sewage effluent. The lowest MDLs (0.3-5.0 ng L−1 and 1.3-15.1 ng L−1 for river water and sewage effluent respectively) were observed using the chiral column Chirobiotic V. This is with the exception of methamphetamine and MDMA, which had lower MDLs using the CBH column. However, the CBH column resulted in better resolution of enantiomers (Rs = 2.5 for amphetamine compared with Rs = 1.2 with Chirobiotic V). Method recovery rates were typically >80% for both methodologies. Pharmaceuticals and illicit drugs detected and quantified in environmental samples were successfully identified using MS/MS confirmation. In sewage effluent, the total beta-blocker concentrations of propranolol, atenolol and metoprolol were on average 77.0, 1091.0 and 3.6 ng L−1, with EFs (enantiomeric fractions) of 0.43, 0.55 and 0.54 respectively. In river water, total propranolol and atenolol were quantified on average at <10.0 ng L−1. Differences in EF between sewage and river water matrices were evident: venlafaxine was observed with respective EFs of 0.43 ± 0.02 and 0.58 ± 0.02.
---
paper_title: Multi-residue enantiomeric analysis of pharmaceuticals and their active metabolites in the Guadalquivir River basin (South Spain) by chiral liquid chromatography coupled with tandem mass spectrometry
paper_content:
This paper describes the development and application of a multi-residue chiral liquid chromatography coupled with tandem mass spectrometry method for simultaneous enantiomeric profiling of 18 chiral pharmaceuticals and their active metabolites (belonging to several therapeutic classes including analgesics, psychiatric drugs, antibiotics, cardiovascular drugs and β-agonists) in surface water and wastewater. To the authors' knowledge, this is the first time an enantiomeric method including such a high number of pharmaceuticals and their metabolites has been reported. Some of the pharmaceuticals have never been studied before in environmental matrices. Among them are timolol, betaxolol, carazolol and clenbuterol. A monitoring programme of the Guadalquivir River basin (South Spain), including 24 sampling sites and five wastewater treatment plants along the basin, revealed that enantiomeric composition of studied pharmaceuticals is dependent on compound and sampling site. Several compounds such as ibuprofen, atenolol, sotalol and metoprolol were frequently found as racemic mixtures. On the other hand, fluoxetine, propranolol and albuterol were found to be enriched with one enantiomer. Such an outcome might be of significant environmental relevance as two enantiomers of the same chiral compound might reveal different ecotoxicity. For example, propranolol was enriched with S(-)-enantiomer, which is known to be more toxic to Pimephales promelas than R(+)-propranolol. Fluoxetine was found to be enriched with S(+)-enantiomer, which is more toxic to P. promelas than R(-)-fluoxetine.
---
paper_title: Pharmaceutical trace analysis in aqueous environmental matrices by liquid chromatography-ion trap tandem mass spectrometry.
paper_content:
An analytical method based on solid-phase extraction followed by liquid chromatography tandem mass spectrometry with an ion trap analyser was developed and validated for the quantification of a series of pharmaceutical compounds with distinct physico-chemical characteristics in estuarine water samples. Method detection limits were between 0.03 and 16.4 ng/L. The sensitivity and accuracy obtained, together with the inherent confirmatory potential of ion trap tandem mass spectrometry (IT-MS/MS), validate its success as an environmental analysis tool. Two MS/MS transitions were used to confirm compound identity. Almost all pharmaceuticals were detected at ng/L level in at least one sampling site of the Douro River estuary, Portugal.
---
paper_title: Loadings, trends, comparisons, and fate of achiral and chiral pharmaceuticals in wastewaters from urban tertiary and rural aerated lagoon treatments.
paper_content:
A comparison of time-weighted average pharmaceutical concentrations, loadings and enantiomer fractions (EFs) was made among treated wastewater from one rural aerated lagoon and from two urban tertiary wastewater treatment plants (WWTPs) in Alberta, Canada. Passive samplers were deployed directly in treated effluent for nearly continuous monitoring of temporal trends between July 2007 and April 2008. In aerated lagoon effluent, concentrations of some drugs changed over time, with some higher concentrations in winter likely due to reduced attenuation from lower temperatures (e.g., less microbially mediated biotransformation) and reduced photolysis from ice cover over lagoons; however, concentrations of some drugs (e.g. antibiotics) may also be influenced by changing use patterns over the year. Winter loadings to receiving waters for the sum of all drugs were 700 and 400 g/day from the two urban plants, compared with 4 g/day from the rural plant. Per capita loadings were similar amongst all plants. This result indicates that measured loadings, weighted by population served by WWTPs, are a good predictor of other effluent concentrations, even among different treatment types. Temporal changes in chiral drug EFs were observed in the effluent of aerated lagoons, and some differences in EF were found among WWTPs. This result suggests that there may be some variation of microbial biotransformation of drugs in WWTPs among plants and treatment types, and that the latter may be a good predictor of EF for some, but not all drugs.
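The loading comparisons above rest on simple arithmetic: mass load (g/day) = effluent concentration times flow, and per-capita load = load / population served. The sketch below uses hypothetical plant data chosen only to illustrate how very different total loads can still give similar per-capita loads.

```python
# Back-of-envelope load calculation of the kind underlying the comparisons
# above. Plant names, flows, concentrations and populations are hypothetical.

plants = {
    # name: (effluent concentration ng/L, flow m3/day, population served)
    "urban_tertiary": (550.0, 250_000, 600_000),
    "rural_lagoon": (400.0, 2_500, 6_000),
}

for name, (conc_ng_l, flow_m3_day, population) in plants.items():
    # (ng/L) * (m3/day) * 1000 L/m3 / 1e9 ng/g = g/day
    load_g_day = conc_ng_l * flow_m3_day * 1_000 / 1e9
    per_capita_mg = load_g_day * 1_000 / population      # mg per person per day
    print(f"{name}: {load_g_day:.1f} g/day, {per_capita_mg:.3f} mg/person/day")
```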
---
paper_title: Enantiomeric fraction evaluation of pharmaceuticals in environmental matrices by liquid chromatography-tandem mass spectrometry
paper_content:
Abstract The interest in environmental fate assessment of chiral pharmaceuticals is increasing and enantioselective analytical methods are mandatory. This study presents an enantioselective analytical method for the quantification of seven pairs of enantiomers of pharmaceuticals and one metabolite pair. The selected chiral pharmaceuticals belong to three different therapeutic classes, namely selective serotonin reuptake inhibitors (venlafaxine, fluoxetine and its metabolite norfluoxetine), beta-blockers (alprenolol, bisoprolol, metoprolol, propranolol) and a β2-adrenergic agonist (salbutamol). The analytical method was based on solid phase extraction followed by liquid chromatography tandem mass spectrometry with a triple quadrupole analyser. Briefly, Oasis® MCX cartridges were used to preconcentrate 250 mL of water samples and the reconstituted extracts were analysed with a Chirobiotic™ V column in reversed-phase mode. The effluent of a laboratory-scale aerobic granular sludge sequencing batch reactor (AGS-SBR) was used to validate the method. Linearity (r² > 0.99), selectivity and sensitivity were achieved in the range of 20–400 ng L−1 for all enantiomers, except for the norfluoxetine enantiomers, whose range covered 30–400 ng L−1. The method detection limits were between 0.65 and 11.5 ng L−1 and the method quantification limits were between 1.98 and 19.7 ng L−1. The identity of all enantiomers was confirmed using two MS/MS transitions and their ion ratios, according to European Commission Decision 2002/657/EC. This method was successfully applied to evaluate effluents of wastewater treatment plants (WWTP) in Portugal. Venlafaxine and fluoxetine were quantified as non-racemic mixtures (enantiomeric fraction ≠ 0.5). The validated enantioselective method was able to monitor chiral pharmaceuticals in WWTP effluents and has potential to assess enantioselective biodegradation in bioreactors. Further application in environmental matrices such as surface and estuarine waters can be exploited.
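The ion-ratio confirmation step cited above (Decision 2002/657/EC) can be sketched as a tolerance check on the qualifier/quantifier transition ratio. The tolerance bands used below (±20% for relative intensities above 50%, ±25% for 20-50%, ±30% for 10-20%, ±50% at or below 10%) are the values commonly quoted for LC-MS/MS under that Decision, but the Decision text should be treated as authoritative; the example ratios are invented.

```python
# Hedged sketch of ion-ratio confirmation: the qualifier/quantifier ratio in a
# sample must fall within a tolerance of the ratio observed in standards.

def max_tolerance(relative_intensity: float) -> float:
    """Maximum permitted relative deviation for a given ion ratio (0-1 scale)."""
    if relative_intensity > 0.50:
        return 0.20
    if relative_intensity > 0.20:
        return 0.25
    if relative_intensity > 0.10:
        return 0.30
    return 0.50

def confirmed(ratio_standard: float, ratio_sample: float) -> bool:
    tol = max_tolerance(ratio_standard)
    return abs(ratio_sample - ratio_standard) / ratio_standard <= tol

# Invented example: ratio of 0.35 in standards, 0.41 in the sample.
print(confirmed(0.35, 0.41))   # True -> identity confirmed for this criterion
```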
---
paper_title: Enantiomeric analysis of drugs of abuse in wastewater by chiral liquid chromatography coupled with tandem mass spectrometry.
paper_content:
The manuscript concerns the development and validation of a method for enantiomeric analysis of structurally related amphetamines (amphetamine, methamphetamine, 3,4-methylenedioxymethamphetamine (MDMA), 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxy-N-ethylamphetamine (MDEA)), ephedrines (ephedrine, pseudoephedrine and norephedrine) and venlafaxine in wastewater by means of chiral chromatography coupled with tandem mass spectrometry. Solid-phase extraction on Oasis HLB sorbent, used for sample clean-up and concentration of analytes, resulted in very good recoveries accounting for >70%. Signal suppression during MS analysis was negligible for most studied analytes. Resolution of enantiomers of chiral drugs was found to be higher than 1. Preliminary assay validation was undertaken. The mean correlation coefficients of the calibration curves, which were on average higher than 0.997 for all studied analytes, showed good linearity of the method in the studied range. Intra- and inter-day repeatabilities were on average less than 5%. The method quantification limits in wastewater were at low ppt levels and varied from 2.25 to 11.75 ng/L. The method was successfully applied for the analysis of raw and treated wastewater samples collected from four wastewater treatment plants. A common occurrence of 1R,2S-(−)-ephedrine, 1S,2S-(+)-pseudoephedrine and venlafaxine in both raw and treated wastewater samples was observed. Amphetamine, methamphetamine, MDMA and MDEA were also detected in several wastewater samples. The study of enantiomeric fractions of these chiral drugs proved their variable non-racemic composition. The influence of wastewater treatment processes on the enantiomeric composition of chiral drugs was also noted and might indicate enantioselective processes occurring during treatment, although more comprehensive research has to be undertaken to support this hypothesis.
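Validation figures of the kind quoted above (recovery and intra-day repeatability) are typically derived from spiked replicates, as in the minimal sketch below with invented replicate measurements.

```python
# Minimal sketch of how recovery and relative standard deviation (RSD) are
# typically computed from spiked replicate samples. Values are invented.
import statistics

spiked_ng_l = 100.0
measured_replicates = [84.2, 88.9, 81.5, 86.3, 90.1]   # after SPE, same day

recoveries = [m / spiked_ng_l * 100 for m in measured_replicates]
mean_recovery = statistics.mean(recoveries)
intra_day_rsd = (statistics.stdev(measured_replicates)
                 / statistics.mean(measured_replicates) * 100)

print(f"mean recovery: {mean_recovery:.1f} %")
print(f"intra-day RSD: {intra_day_rsd:.1f} %")
```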
---
paper_title: Determination of chiral pharmaceuticals and illicit drugs in wastewater and sludge using microwave assisted extraction, solid-phase extraction and chiral liquid chromatography coupled with tandem mass spectrometry.
paper_content:
This is the first study presenting a multi-residue method allowing for comprehensive analysis of several chiral pharmacologically active compounds (cPACs) including beta-blockers, antidepressants and amphetamines in wastewater and digested sludge at the enantiomeric level. Analysis of both the liquid and solid matrices within wastewater treatment is crucial to being able to carry out mass balance within these systems. The method developed comprises filtration, microwave assisted extraction and solid phase extraction followed by chiral liquid chromatography coupled with tandem mass spectrometry to analyse the enantiomers of 18 compounds within all three matrices. The method was successfully validated for 10 compounds within all three matrices (amphetamine, methamphetamine, MDMA, MDA, venlafaxine, desmethylvenlafaxine, citalopram, metoprolol, propranolol and sotalol), 7 compounds validated for the liquid matrices only (mirtazapine, salbutamol, fluoxetine, desmethylcitalopram, atenolol, ephedrine and pseudoephedrine) and 1 compound (alprenolol) passing the criteria for solid samples only. The method was then applied to wastewater samples; cPACs were found in the liquid matrices at concentrations ranging from 1.7 ng L−1 (metoprolol) to 1321 ng L−1 (tramadol) in influent.
---
paper_title: Risk assessment of the endocrine-disrupting effects of nine chiral pesticides
paper_content:
The increased release of chiral pesticides into the environment has generated interest in the role of enantioselectivity in the environmental fate and ecotoxicological effects of these compounds. However, the information on the endocrine disrupting effects (EDEs) of chiral pesticides is still limited and discrepancies are also usually observed among different assays. In this study, we investigated the enantioselectivity of EDEs via estrogen and thyroid hormone receptors for nine chiral pesticides using in vitro and in silico approaches. The results of the luciferase reporter gene assays showed that 7 chiral pesticides possessed enantioselective estrogenic activities and 2 chiral pesticides exerted thyroid hormone antagonistic effects. Proliferation assays in MCF-7 and GH3 cells were also used to verify the results of the dual-luciferase reporter gene assays. Finally, the molecular docking results indicated that the enantioselective EDEs of chiral pesticides were partially due to enantiospecific binding affinities with receptors. Our data not only demonstrate enantioselective EDEs for nine chiral pesticides but should also be helpful for better understanding the molecular biological mechanisms of enantioselectivity in the EDEs of chiral pesticides.
---
paper_title: Air-water gas exchange of hexachlorocyclohexanes (HCHs) and the enantiomers of α-HCH in Arctic regions
paper_content:
In the summers of 1993 and 1994, air and water samples were taken in the Bering and Chukchi Seas and on a transect across the polar cap to the Greenland Sea to measure the air-sea gas exchange of hexachlorocyclohexanes (HCHs) and the enantiomers of α-HCH. Atmospheric concentrations of α- and γ-HCH have decreased threefold or more since the mid-1980s, whereas concentrations in surface water have shown little change. The saturation state of surface water (water/air fugacity ratio) was determined from the air and water concentrations of HCHs and their Henry's law constants as a function of temperature. Fugacity ratios >1.0 indicated net volatilization of α-HCH in all regions except the Greenland Sea, where concentrations in air and water were close to equilibrium. Net deposition of γ-HCH in the Chukchi Sea was indicated by fugacity ratios <1.0. In other regions, γ-HCH was volatilizing or near air-water equilibrium. Enantioselective degradation of (−)α-HCH was found in surface water of the Bering and Chukchi Seas. The ER was reversed in the Canada Basin and Greenland Sea, where (+)α-HCH was preferentially lost. The same order of enantioselective degradation was seen in air within the marine boundary layer of these regions, which provides direct evidence for sea-to-air transfer of α-HCH.
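The water/air fugacity ratio used above to infer the direction of gas exchange can be written f_w/f_a = C_w·H/(C_a·R·T), with concentrations in mol/m3 and H the temperature-corrected Henry's law constant in Pa·m3/mol; a ratio above 1 indicates net volatilisation and below 1 net deposition. The input values in the sketch below are illustrative, not measurements from the study.

```python
# Sketch of the water/air fugacity ratio for air-water gas exchange:
#   f_w / f_a = (C_w * H) / (C_a * R * T)
# with C in mol/m3 and H in Pa*m3/mol. Input values are illustrative only.
R_GAS = 8.314   # Pa*m3/(mol*K)

def fugacity_ratio(c_water_mol_m3: float, c_air_mol_m3: float,
                   henry_pa_m3_mol: float, temp_k: float) -> float:
    f_water = c_water_mol_m3 * henry_pa_m3_mol
    f_air = c_air_mol_m3 * R_GAS * temp_k
    return f_water / f_air

ratio = fugacity_ratio(c_water_mol_m3=1.0e-8, c_air_mol_m3=1.7e-13,
                       henry_pa_m3_mol=0.15, temp_k=273.0)
direction = "net volatilisation" if ratio > 1 else "net deposition (or near equilibrium)"
print(f"water/air fugacity ratio = {ratio:.2f} -> {direction}")
```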
---
paper_title: Differing estrogenic activities for the enantiomers of o, p'-DDT in immature female rats.
paper_content:
Female rats (20-21 days) were given single intraperitoneal injections of (±)-o,p'-DDT, (−)-o,p'-DDT, or (+)-o,p'-DDT. At 18 h their uteri were excised and the estrogen-sensitive parameters, uterine wet weight and uterine glycogen content, were measured. For o,p'-DDT, the levo enantiomer is the more active estrogen in immature female rats. Optical resolutions of other racemic environmental xenobiotics may be important in the evaluation of their biological effects.
---
paper_title: Bromocyclen contamination of surface water, waste water and fish from Northern Germany, and gas chromatographic chiral separation
paper_content:
The concentrations of bromocyclen were determined in the water and muscle tissue of trout and bream from the river Stor (northern Germany), to locate the source of this contaminant. The concentrations in water varied between n.d. and 261 pg/L, while in the fish samples concentrations between 0.01 and 0.24 mg/kg fat were determined. In addition, remarkably high concentrations of bromocyclen were found in waste water from sewage plants discharging into the river Stor and at the city of Hamburg. Sewage plants can be, therefore, assumed to be the main source of bromocyclen contamination. This is the first report about the contamination of river water and waste water with bromocyclen. Determination by chiral capillary gas chromatography using a modified cyclodextrin phase showed no significant enantiomeric excess in the water samples, but a preferential degradation of (+)-bromocyclen in the fish muscle tissue of breams.
---
paper_title: Occurrence and Transformation Reactions of Chiral and Achiral Phenoxyalkanoic Acid Herbicides in Lakes and Rivers in Switzerland
paper_content:
The occurrence of chiral and achiral phenoxyalkanoic acid herbicides in lakes and rivers in Switzerland is reported. The compounds most frequently detected were the chiral 2-(4-chloro-2-methylphenoxy)propionic acid (mecoprop, MCPP) and the achiral 4-chloro-2-methylphenoxyacetic acid (MCPA), 2,4-dichlorophenoxyacetic acid (2,4-D), and dicamba, a benzoic acid derivative. The compounds were generally present at concentrations well below the ECE recommended drinking water tolerance level (100 ng/L), even in lakes situated in areas with intense agricultural activities. The chiral 2-(2,4-dichlorophenoxy)propionic acid (dichlorprop, DCPP) was hardly present, and none of the compounds was detected in mountain lakes. In the case of MCPP, both enantiomers (R and S) were present, although only the technical material with the R enantiomer (mecoprop-P) is registered and used as a herbicide in Switzerland. Previous studies indicated significant enantiomerization of MCPP and DCPP in soil, leading to residues enriched in the R enantiomer.
---
paper_title: Occurrence and Behavior of Pesticides in Rainwater, Roof Runoff, and Artificial Stormwater Infiltration
paper_content:
To prevent overloading of sewer systems and to ensure sufficient recharging of the groundwater underneath sealed urban areas, collection and artificial infiltration of roof runoff water has become very popular in many countries including Switzerland. However, there is still a considerable lack of knowledge concerning the quality of roof runoff, particularly with respect to the presence of pesticides. In this work, the occurrence and the temporal variations in concentration in rainwater and in roof runoff from different types of roofs (i.e., clay tile roofs, polyester roofs, flat gravel roofs) were determined for the most important members of three widely used classes of pesticides (i.e., triazines, acetamides, phenoxy acids). It is shown that in rain and roof runoff, maximum pesticide concentrations originating primarily from agricultural use occurred during and right after the application periods. Maximum average concentrations for single rain events and total loads per year were, for example, for atrazi...
---
paper_title: Spatial and temporal distribution of chiral pesticides in Calanus spp. from three Arctic fjords.
paper_content:
Concentration and enantiomeric fractions (EFs) of chiral chlorinated pesticides (α-hexachlorocyclohexane (α-HCH), trans-, cis- and oxychlordane) were determined in Arctic zooplankton, mainly Calanus spp. collected in the period 2007-11 from Svalbard fjords and open pack-ice. The temporal and spatial enantiomer distribution varied considerably for all species and chiral pesticides investigated. An overall enantiomeric excess of (+)-oxychlordane (EF 0.53-0.86) were observed. Cis-chlordane was close to racemic (EF 0.46-0.55), while EF for trans-chlordane varied between 0.29 and 0.55, and between 0.38 and 0.59 for α-HCH. The biodegradation potential for trans-chlordane was higher compared to cis-chlordane. The comprehensive statistical evaluation of the data set revealed that the EF distribution of α-HCH was affected by ice cover to a higher extent compared to cis-chlordane. Potential impact from benthic processes on EFs in zooplankton is an interesting feature and should be further investigated. Enantiomeric selective analyses may be a suitable tool for investigations of climate change related influences on Arctic ecosystems.
---
paper_title: Sources of pesticides in surface waters in Switzerland: pesticide load through waste water treatment plants--current situation and reduction potential.
paper_content:
Concentrations of pesticides in Swiss rivers and lakes frequently exceed the Swiss quality goal of 0.1 µg/L for surface waters. In this study, concentrations of various pesticides (e.g., atrazine, diuron, mecoprop) were continuously measured in the effluents of waste water treatment plants and in two rivers during a period of four months. These measurements revealed that in the catchment of Lake Greifensee, farmers who did not perfectly comply with ‘good agricultural practice’ caused at least 14% of the measured agricultural herbicide load into surface waters. Up to 75% of the pesticides used for additional purposes in urban areas (i.e. protection of materials, conservation, etc.) entered surface waters through waste water treatment plants.
---
paper_title: Enantioselective degradation of metalaxyl in soils: chiral preference changes with soil pH.
paper_content:
Chiral pesticides are often degraded enantio-/stereoselectively in soils. Degradation is typically studied with one or a small number of soils so that it is not possible to extrapolate the findings on chiral preference to other soils. For this study, the fungicide metalaxyl was chosen as a "chiral probe" to investigate its enantioselective degradation in 20 different soils, selected primarily to cover a wide range of soil properties (e.g., acidic/alkaline, aerobic/anaerobic) rather than to consider soils of agricultural importance. Racemic metalaxyl was incubated in these soils under laboratory conditions, and the degradation of the enantiomers as well as the enantioselective formation/degradation of the primary major metabolite, metalaxyl acid, was followed over time, using enantioselective GC-MS after ethylation with diazoethane. In aerobic soils with pH > 5, the fungicidally active R-enantiomer was degraded faster than the S-enantiomer (k(R) > k(S)), leading to residues with a composition [S] > [R]. However, in aerobic soils with pH 4-5, both enantiomers were degraded at similar rates (k(R) ≈ k(S)), and in aerobic soils with pH < 4 and in most anaerobic soils, the enantioselectivity was reversed (k(R) < k(S)). These considerable soil-to-soil variations were observed with soils from locations close to each other, in one case even within a single soil profile. Liming and acidification of a "nonenantioselective" soil prior to incubation resulted in enantioselective degradation with k(R) > k(S) and k(R) < k(S), respectively. While the enantioselectivity (expressed as ES = (k(R) - k(S))/(k(R) + k(S))) of metalaxyl degradation in aerobic soils apparently correlated with soil pH, no such correlation was found for metalaxyl acid. Reevaluation of published kinetic data for the herbicides dichlorprop and mecoprop indicated similar correlations between soil pH and ES as for metalaxyl.
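The enantioselectivity index defined above, ES = (kR - kS)/(kR + kS), follows directly from first-order rate constants fitted to the enantiomer concentration time series of an incubation. The sketch below uses an invented data set.

```python
# Sketch of how the enantioselectivity index ES = (kR - kS)/(kR + kS) can be
# obtained from first-order fits to enantiomer concentrations measured during
# a soil incubation. The time series below is invented.
import numpy as np

days = np.array([0, 7, 14, 28, 56], dtype=float)
c_R = np.array([100.0, 62.0, 40.0, 16.0, 2.6])    # R-enantiomer, % of applied
c_S = np.array([100.0, 78.0, 60.0, 37.0, 13.0])   # S-enantiomer, % of applied

# First-order rate constants from ln(C/C0) = -k * t (linear least squares).
k_R = -np.polyfit(days, np.log(c_R / c_R[0]), 1)[0]
k_S = -np.polyfit(days, np.log(c_S / c_S[0]), 1)[0]

es = (k_R - k_S) / (k_R + k_S)
print(f"kR = {k_R:.3f} 1/day, kS = {k_S:.3f} 1/day, ES = {es:.2f}")
# ES > 0 corresponds to faster degradation of the R-enantiomer (kR > kS),
# the behaviour reported above for aerobic soils with pH > 5.
```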
---
paper_title: Enantioselective and nonenantioselective degradation of organic pollutants in the marine ecosystem
paper_content:
Enantiomeric ratios of 11 chiral environmental pollutants determined in different compartments of the marine ecosystem by chiral capillary gas chromatography and chiral high-performance liquid chromatography allow discrimination between the following processes: enantioselective decomposition of both enantiomers with different velocities by marine microorganisms (α-HCH, β-PCCH, γ-PCCH); enantioselective decomposition of one enantiomer only by marine microorganisms (DCPP); enantioselective decomposition by enzymatic processes in marine biota (α-HCH, β-PCCH, trans-chlordane, cis-chlordane, octachlordane MC4, octachlordane MC5, octachlordane MC7, oxychlordane, heptachlor epoxide); enantioselective active transport through the “blood–brain barrier” (α-HCH); nonenantioselective photochemical degradation (α-HCH, β-PCCH).
---
paper_title: Variations in α-Hexachlorocyclohexane enantiomer ratios in relation to microbial activity in a temperate estuary
paper_content:
Changes in the enantiomer ratios (ERs) of chiral pollutants in the environment are often considered evidence of biological alteration despite the lack of data on causal or mechanistic relationships between microbial parameters and ER values. Enantiomer ratios that deviate from 1:1 in the environment provide evidence for the preferential microbial degradation of one enantiomer, whereas ER values equal to 1 provide no evidence for microbial degradation and may mistakenly be interpreted as evidence that biodegradation is not important. In an attempt to link biological and geochemical information related to enantioselective processes, we measured the ERs of the chiral pesticide α-hexachlorocyclohexane (α-HCH) and bacterial activity (normalized to abundance) in surface waters of the York River (VA, USA) bimonthly throughout one year. Despite lower overall α-HCH concentrations, α-HCH ER values were unexpectedly close to 1:1 in the freshwater region of the estuary with the highest bacterial activity. In contrast, ER values were nonracemic (ER ≠ 1) and α-HCH concentrations were significantly higher in the higher salinity region of the estuary, where bacterial activity was lower. Examination of these data may indicate that racemic environmental ER values are not necessarily reflective of a lack of biodegradation or recent input into the environment, and that nonenantioselective biodegradation may be important in certain areas.
---
paper_title: External exposure and bioaccumulation of PCBs in humans living in a contaminated urban environment.
paper_content:
Humans are exposed to different mixtures of PCBs depending on the route of exposure. In this study we investigated the potential contribution of inhalation to the overall human exposure to PCBs in an urban area. For this purpose, the mechanistically based, non-steady state bioaccumulation model ACC-HUMAN was applied to predict the PCB body burden in an adult living in the Midwestern United States who eats a typical North American diet and inhales air contaminated with PCBs. Dietary exposure was estimated using measured data for eighteen PCB congeners in different food groups (fish, meat and egg, dairy products). Two scenarios for inhalation exposure were evaluated: one using air concentrations measured in Chicago, and a second using air measurements in a remote area on Lake Michigan, Sleeping Bear Dunes. The model predicted that exposure via inhalation increases the accumulated mass of PCBs in the body by up to 30% for lower chlorinated congeners, while diet is by far the dominant source of exposure for those PCB congeners that accumulate most in humans.
---
paper_title: Determination of atropisomeric and planar polychlorinated biphenyls, their enantiomeric fractions and tissue distribution in grey seals using comprehensive 2D gas chromatography
paper_content:
High prevalence of uterine occlusions and sterility is found among Baltic ringed and grey seal. Polychlorinated biphenyls (CBs) are suspected to be the main cause. The CB concentrations are higher in affected than in healthy animals, but the natural variation is considerable. Thus, it might be possible to assess the health status of seals by CB analysis. The ratios of chiral compounds (enantiomeric fractions (EFs)) such as atropisomeric CBs are of particular interest, since these may reflect differences in metabolic rates. An analytical procedure was developed and used to determine the levels of atropisomeric CBs, planar-CBs (WHO-PCBs) and total CBs in seals of different health status. Comprehensive 2D gas chromatography (GC×GC) was used to separate the target analytes from other CBs and interferences and a micro electron-capture detector (μECD) was used for detection. EFs of the atropisomeric CBs were difficult to determine as the levels were low and the interferences many. Two column combinations had to be used to avoid biased results—both had a chiral column as first-dimension column. The second-dimension column was coated with either a high-polarity cyanopropyl or a liquid crystal phase. EFs were determined for five atropisomeric CBs, i.e. CBs 91, 95, 132, 149 and 174. The results were verified by GC×GC–time-of-flight mass spectrometry (TOF-MS). Some atropisomeric CBs had EFs that deviated strongly from the racemic-mixture value. The deviations were larger in liver than blubber, which indicates enantioselective metabolism. However, there was no selective passage of the studied atropisomeric CBs across placenta and no selective blood–brain barrier. Similarly, no correlation between EFs and health status was observed, although there was a correlation between the total CB levels and health status.
---
paper_title: Enantioselective gas chromatography/mass spectrometry of methylsulfonyl PCBs with application to arctic marine mammals.
paper_content:
Four different commercially available cyclodextrin (CD) capillary gas chromatography (GC) columns were tested for the enantioselective separation of nine environmentally persistent atropisomeric 3- and 4-methylsulfonyl PCBs (MeSO2-CBs). The selected columns contained cyclodextrins with various cavity diameters (beta- or gamma-CD), which were methylated and/or tert-butyldimethylsilylated (TBDMS) in the 2,3,6-O-positions. The beta-CD column with TBDMS substituents in all of the 2,3,6-O-positions was by far the most selective column for the MeSO2-CBs tested. Enantiomers of congeners with 3-MeSO2 substitution were more easily separated than those with 4-MeSO2 substitution. The separation also seemed to be enhanced for congeners with the chlorine atoms on the non-MeSO2-containing ring and clustered on one side of the same ring. The 2,3-di-O-methyl-6-O-TBDMS-beta-CD was found to give somewhat better selectivity than the corresponding gamma-CD, in comparison between the two columns, which were identical in all other respects. Enantioselective analysis of arctic ringed seal (Phoca hispida) and polar bear (Ursus maritimus) adipose tissue revealed a strong dominance of certain enantiomers. For example, the enantiomer ratio (ER) of 3-MeSO2-CB149 was 0.32 and < 0.1 in ringed seal blubber and polar bear fat, respectively. These low ER values are indicative of highly enantioselective formation, enantioselective metabolism, enantioselective transport across cell membranes, or a combination of the three in both species. Comparable results for the enantiomeric analysis of MeSO2-CBs in biotic tissue extracts were obtained using two highly selective mass spectrometric techniques, ion trap mass spectrometry/mass spectrometry and electron capture negative ion low-resolution mass spectrometry.
---
paper_title: Metabolism and metabolites of polychlorinated biphenyls.
paper_content:
Abstract The metabolism of polychlorinated biphenyls (PCBs) is complex and has an impact on toxicity, and thereby on the assessment of PCB risks. A large number of reactive and stable metabolites are formed in the processes of biotransformation in biota in general, and in humans in particular. The aim of this document is to provide an overview of PCB metabolism, and to identify the metabolites of concern and their occurrence. Emphasis is given to mammalian metabolism of PCBs and their hydroxyl, methylsulfonyl, and sulfated metabolites, especially those that persist in human blood. Potential intracellular targets and health risks are also discussed.
---
paper_title: Enantiomeric composition of chiral polychlorinated biphenyl atropisomers in aquatic bed sediment.
paper_content:
Enantiomeric ratios (ERs) for eight polychlorinated biphenyl (PCB) atropisomers were measured in aquatic sediment from selected sites throughout the United States by using chiral gas chromatography/mass spectrometry. Nonracemic ERs for PCBs 91, 95, 132, 136, 149, 174, and 176 were found in sediment cores from Lake Hartwell, SC, which confirmed previous inconclusive reports of reductive dechlorination of PCBs at these sites on the basis of achiral measurements. Nonracemic ERs for many of the atropisomers were also found in bed-sediment samples from the Hudson and Housatonic Rivers, thus indicating that some of the PCB biotransformation processes identified at these sites are enantioselective. Patterns in ERs among congeners were consistent with known reductive dechlorination patterns at both river sediment basins. The enantioselectivity of PCB 91 is reversed between the Hudson and Housatonic River sites, which implies that the two sites have different PCB biotransformation processes with different enantiomer preferences.
---
paper_title: Enantiomeric signatures of chiral polychlorinated biphenyl atropisomers in livers of harbour porpoises (Phocoena phocoena) from the southern North Sea
paper_content:
The enantiomeric composition of polychlorinated biphenyl (PCB) atropisomers, including PCB 95, PCB 149 and PCB 132, was measured in 11 livers of harbour porpoises (Phocoena phocoena) from the southern North Sea. Non-racemic enantiomeric ratios (ERs) were found in some samples. The ERs in three of the four juvenile porpoises were equal to or almost equal to one, while the ERs in all adults differed from racemic and ranged from 1.31 to 2.54 for PCB 95, from 1.19 to 1.81 for PCB 149 and from 0.45 to 0.94 for PCB 132. There were no relationships between the total concentration of PCBs and ERs. To understand these phenomena, the relationships of the ER values of individual chiral congeners with age, the concentration of total PCBs and the PCB congener pattern were discussed. A model of intake and elimination kinetics was set up and tested using the ratio between the concentrations of PCB 153 and PCB 101 in the liver samples. There was a clear trend between the enantiomeric ratios and the ratio between PCB 153 and PCB 101. Considering that PCB 153 is one of the most persistent PCB congeners in marine mammals and PCB 101 is a relatively easily metabolised congener, this trend means that the enantiomeric ratio most likely reflects the proportion of the metabolised congener. The exposure period in contaminated conditions has a strong impact on ERs, and it is suggested that ERs in wildlife, combined with information on their anthropometric data, health status, diet and habitat conditions, might be good indicators of pollution in coastal ecosystems.
---
paper_title: Multidimensional gas chromatographic separation of selected PCB atropisomers in technical formulations and sediments
paper_content:
A simple dual-column gas chromatographic system with a six-port switching valve has been used to separate the atropisomers of PCB congeners 84, 91, and 95 in technical PCB formulations and in extracts of soil and river sediment. A capillary column coated with a methylphenylsiloxane stationary phase (CP-Sil 8) was used as the first column, for retention window selection, and a permethylated β-cyclodextrin (ChirasilDex) capillary column as the main separation column. Because peak overlap could not be eliminated by optimization of column temperature, the enantiomeric ratios of PCB congeners could not be determined from the original chromatograms. The correct enantiomer ratio was determined from the peak areas obtained by deconvolution of the chromatograms. Whereas the PCB atropisomers considered were present in equal concentrations in the technical PCB formulations, analysis of a river sediment sample confirmed different residual concentrations of the atropisomers of congener 95.
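The deconvolution step described above can be illustrated by fitting two overlapping Gaussian peaks to a chromatographic trace and taking the enantiomer ratio from the fitted areas. The sketch below generates a synthetic, partially overlapping doublet; it is not the deconvolution software used by the authors.

```python
# Minimal sketch of chromatographic peak deconvolution: two overlapping
# Gaussians are fitted to a (here synthetic) detector trace and the enantiomer
# ratio is taken from the fitted peak areas.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, t1, s1, a2, t2, s2):
    return (a1 * np.exp(-((t - t1) ** 2) / (2 * s1 ** 2))
            + a2 * np.exp(-((t - t2) ** 2) / (2 * s2 ** 2)))

# Synthetic, partially overlapping peaks with a little noise.
t = np.linspace(20.0, 24.0, 400)
rng = np.random.default_rng(0)
signal = two_gaussians(t, 1.0, 21.8, 0.12, 0.6, 22.1, 0.12)
signal += rng.normal(0, 0.01, t.size)

p0 = [1.0, 21.7, 0.1, 0.5, 22.2, 0.1]            # rough initial guesses
popt, _ = curve_fit(two_gaussians, t, signal, p0=p0)
a1, _, s1, a2, _, s2 = popt

area1 = a1 * abs(s1) * np.sqrt(2 * np.pi)        # area of a Gaussian peak
area2 = a2 * abs(s2) * np.sqrt(2 * np.pi)
print(f"enantiomer ratio E1/E2 = {area1 / area2:.2f}")
```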
---
paper_title: Enantioselective analysis of chiral polychlorinated biphenyls in sediment samples by multidimensional gas chromatography-electron-capture detection after steam distillation-solvent extraction and sulfur removal
paper_content:
Enantioselective analysis of the chiral polychlorinated biphenyls (PCBs) 95, 132 and 149 in river sediment samples was carried out by multidimensional gas chromatography with an achiral-chiral column combination and electron-capture detection. The investigations revealed that the PCB congeners analysed were racemic within experimental error. A micro simultaneous steam distillation-solvent extraction device was used as the extraction method. This technique proved to be a versatile tool for the extraction of organochlorine compounds from several matrices and was used for the extraction of PCBs from sediment samples. The removal of interfering elemental sulfur was carried out by treatment with copper powder during the extraction.
---
paper_title: Congener specific determination and enantiomeric ratios of chiral polychlorinated biphenyls in striped dolphins (Stenella coeruleoalba) from the Mediterranean Sea
paper_content:
Blubber and liver samples from six striped dolphins (Stenella coeruleoalba) found dead in the Mediterranean Sea in 1989–1990 were tested for 37 coplanar and chiral polychlorinated biphenyls (PCBs), including the enantiomeric ratios of 9 chiral PCBs. The method includes a fractionation step using HPLC (PYE column) for separating the PCBs according to the number of chlorine atoms in the ortho positions. HRGC/ECD and HRGC/LRMS with an achiral column (DB-5) were used to determine the PCB congeners. The enantiomeric ratios of nine chiral PCBs were determined by HRGC/LRMS (SIM) with a chiral column (Chirasil-Dex) and by MDGC as the confirmatory technique. The total PCB concentration (sum of 37 congeners) ranged from 7.2 to 89.6 µg/g (wet weight) and from 0.52 to 29.2 µg/g (wet weight) for blubber and liver samples, respectively. PCB profiles were dominated by congeners 138, 153, 170, and 180. The toxic equivalent values (TEQ) ranged from 0.17 to 3.93 ng/g (wet weight) and from 0.02 to 0.73 ng/g (wet weight) for blubber and liver samples, respectively. PCBs 95, 132, 135, 149, and 176 revealed an enantiomeric excess of the second eluted enantiomer in almost all of the samples, whereas PCBs 136 and 174 were racemic or almost racemic. PCBs 88 and 91 were below the detection limits of the methodology used.
---
paper_title: Concentrations and enantiomer fractions of organochlorine compounds in Baltic species hit by reproductive impairment
paper_content:
Concentrations and enantiomer fractions (EFs) of organochlorine compounds (OCs) were determined in tissues of gray seal (Halichoerus grypus) and salmon (Salmo salar) originating from the Baltic Sea. The selected seal specimens ranged from starved to unstarved animals, and some of them suffered from a disease complex, while the salmon samples originated from individuals, which were known to produce offspring with and without the M74 syndrome. Significant differences in residue levels and EFs were found between seal groups but not between M74 salmon and non-M74 salmon. The relations between chemical and biological variables of seal samples were investigated with multivariate statistics. Poor health status correlated strongly with age, while bad nutrition condition was associated mainly with high pollution loads and distinctively nonracemic chiral OC compositions. High biotransformation rate (as indicated by fraction of chlordane metabolites in relation to total level of chlordanes) was also associated with large deviations from racemic values and high contaminant levels.
---
paper_title: PCBs and OCPs in human milk in Eastern Siberia, Russia: Levels, temporal trends and infant exposure assessment.
paper_content:
The aim of our study is to investigate the spatial distribution of polychlorinated biphenyls (PCBs), hexachlorobenzene (HCB), dichlorodiphenyltrichloroethane (p,p'-DDT) and its metabolites, α- and γ-isomers of hexachlorocyclohexane (HCH) in 155 samples of human milk (HM) from Eastern Siberia (six towns and seven villages in Irkutsk Region, one village of the Republic of Buryatia and one town in Zabaikal'sk Region, Russia), and to examine the dietary and social factors influencing the human exposure to the organochlorines. The median and range of the concentration of six indicator PCBs in HM in 14 localities in Eastern Siberia (114 (19-655) ng g-1 lipids respectively) are similar to levels in the majority of European countries. However, in one village, Onguren, the median and range of levels of six indicator PCBs (1390 (300-3725) ng g-1 lipids) were comparable to levels measured in highly contaminated populations. The Lake Baikal seals are highly exposed to persistent organic pollutants (POPs) and could be a potential source of PCB and DDT exposure in the Onguren cohort via the consumption of the Lake Baikal seal tissue. The location of food production in areas exposed to the emissions of local POP sources can also significantly influence POP levels in HM samples from industrialized areas. Estimated daily intakes (EDI) of HCH and HCB for infants are considerably lower or close to acceptable daily intake (ADI). The EDI of total DDTs and total PCBs are higher than ADI.
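The estimated daily intake (EDI) comparison above follows the standard breast-milk exposure calculation, EDI = C_milk,lipid times lipid fraction times milk intake / body weight. The intake parameters in the sketch below (750 g milk per day, 3.5% lipid, 5 kg body weight) are common default assumptions rather than values taken from the paper; the concentrations are the medians quoted in the abstract.

```python
# Illustrative estimated daily intake (EDI) for a breastfed infant:
#   EDI (ng/kg bw/day) = C_milk_lipid (ng/g lipid) * lipid fraction
#                        * milk intake (g/day) / body weight (kg)
# The default intake parameters below are illustrative assumptions.

def infant_edi(conc_ng_per_g_lipid: float,
               lipid_fraction: float = 0.035,
               milk_g_per_day: float = 750.0,
               body_weight_kg: float = 5.0) -> float:
    return conc_ng_per_g_lipid * lipid_fraction * milk_g_per_day / body_weight_kg

median_pcb6 = 114.0      # ng/g lipid, regional median of six indicator PCBs
hotspot_pcb6 = 1390.0    # ng/g lipid, median reported for the Onguren village

for label, conc in [("regional median", median_pcb6), ("Onguren median", hotspot_pcb6)]:
    print(f"{label}: EDI = {infant_edi(conc):.0f} ng/kg bw/day")
```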
---
paper_title: Enantioselective Determination of Polycyclic Musks in River and Wastewater by GC/MS/MS
paper_content:
The separation of chiral compounds is an interesting and challenging topic in analytical chemistry, especially in environmental fields. Enantioselective degradation or bioaccumulation has been observed for several chiral pollutants. Polycyclic musks are chiral and are widely used as fragrances in a variety of personal care products such as soaps, shampoos, cosmetics and perfumes. In this study, the gas chromatographic separation of the chiral polycyclic musks 1,3,4,6,7,8-hexahydro-4,6,6,7,8,8-hexamethylcyclopenta-γ-2-benzopyran (HHCB), 7-acetyl-1,1,3,4,4,6-hexamethyl-1,2,3,4-tetrahydronaphthalene (AHTN), 6-acetyl-1,1,2,3,3,5-hexamethylindane (AHDI), 5-acetyl-1,1,2,6-tetramethyl-3-isopropylindane (ATII), and 6,7-dihydro-1,1,2,3,3-pentamethyl-4(5H)-indanone (DPMI) was achieved on a modified cyclodextrin stationary phase (heptakis(2,3-di-O-methyl-6-O-tert-butyldimethylsilyl)-β-CD in DV-1701). The separation was coupled to tandem mass spectrometry (MS/MS), as it provides the sensitivity and selectivity needed. River and wastewaters (influents and effluents of wastewater treatment plants (WWTPs)) in the Nakdong River were investigated with regard to the concentrations and the enantiomeric ratios of polycyclic musks. HHCB was most frequently detected in river and wastewaters, and an enantiomeric enrichment was observed in the effluents of one of the investigated WWTPs. We report the contamination of river and wastewaters in Korea by chiral polycyclic musks. The results of this investigation suggest that enantioselective transformation may occur during wastewater treatment.
---
paper_title: A review of personal care products in the aquatic environment: environmental concentrations and toxicity.
paper_content:
Considerable research has been conducted examining the occurrence and effects of human-use pharmaceuticals in the aquatic environment; however, relatively little research has been conducted examining personal care products, although they are found more often and in higher concentrations than pharmaceuticals. Personal care products are continually released into the aquatic environment and are biologically active and persistent. This article examines the acute and chronic toxicity data available for personal care products and highlights areas of concern. Toxicity and environmental data were combined to develop a preliminary hazard assessment in which only triclosan and triclocarban presented any hazard. However, numerous PCPs including triclosan, paraben preservatives, and UV filters have evidence suggesting endocrine effects in aquatic organisms and thus need to be investigated and incorporated in definitive risk assessments. Additional data pertaining to environmental concentrations of UV filters and parabens, in vivo toxicity data for parabens, and the potential for bioaccumulation of PCPs need to be obtained to develop definitive aquatic risk assessments.
---
paper_title: Considerations about the enantioselective transformation of polycyclic musks in wastewater, treated wastewater and sewage sludge and analysis of their fate in a sequencing batch reactor plant.
paper_content:
The present work consists of two distinct parts: in the first part, enantioselective GC was used to separate the different enantiomeric/diastereomeric polycyclic musks, PCMs (HHCB, AHTN, AHDI, ATII and DPMI), including the main transformation product of HHCB, HHCB-lactone, in wastewater and sewage sludge. After optimization, all PCMs were resolved on a cyclodextrin-containing Rt-BDEXcst capillary GC column. Enantiomeric ratios of PCMs in a technical mixture were determined and compared to those obtained from enantioselective separation of wastewater and sewage sludge samples. In general, enantiomeric ratios were similar for most materials in influent, effluent and stabilized sewage sludge. However, the ratios for HHCB, AHDI and particularly ATII suggest some stereospecific removal of these compounds. In the second part, a field study was conducted on a wastewater treatment plant comprising a sequencing batch reactor. Concentrations of HHCB, AHTN, ADBI, AHDI, ATII, DPMI and HHCB-lactone were determined by non-enantioselective GC in daily samples of influent, effluent and activated sludge during one week. Mean concentrations in influent were 6900 and 1520 ng/l for HHCB and AHTN, respectively. The other PCMs exhibited contents below 200 ng/l. Mean percent removal was between 61% (AHDI) and 87% (HHCB), resulting in mean effluent concentrations below 860 ng/l. The HHCB-lactone concentration increased during wastewater treatment, with means of 430 ng/l in the influent and 900 ng/l in the effluent, indicating degradation of HHCB.
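As a consistency check, percent removal follows from the influent and effluent concentrations,
\[ R = \left(1 - \frac{C_{\text{eff}}}{C_{\text{inf}}}\right)\times 100\% \approx \left(1 - \frac{860}{6900}\right)\times 100\% \approx 87\%, \]
in agreement with the mean removal reported for HHCB.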
---
| Title: Occurrence of Chiral Bioactive Compounds in the Aquatic Environment: A Review
Section 1: Introduction
Description 1: This section introduces the issue of chiral bioactive compounds in the environment and summarizes the objectives of the review.
Section 2: Basic Concepts of Chirality
Description 2: This section explains the fundamental concepts of chirality, including definitions and implications in an environmental context.
Section 3: Analytical Methodologies for Enantioseparation of Chiral Bioactive Compounds
Description 3: This section provides a review of the chromatographic methods used for the analysis of chiral bioactive drugs in environmental matrices.
Section 4: Chiral Bioactive Compounds of Environmental Concern
Description 4: This section describes reports on the occurrence of various chiral bioactive compounds such as illicit drugs, pharmaceuticals, pesticides, PCBs, and PCMs in aquatic environments.
Section 5: Illicit Drugs and Pharmaceuticals
Description 5: This section focuses on the enantioselective analysis of illicit drugs and pharmaceuticals in environmental matrices, discussing specific compounds and their enantiomeric behaviors.
Section 6: Multi-Class Enantioselective Analysis
Description 6: This section addresses the challenges and methodologies for multi-residue enantioselective analysis of pharmaceuticals and drugs of abuse in environmental samples.
Section 7: Environmental Chiral Analysis of Pesticides, PCBs, and PCMs
Description 7: This section covers the enantioselective occurrence, distribution, and biodegradation of pesticides, PCBs, and PCMs in various environmental matrices.
Section 8: Conclusions
Description 8: This section provides a summary of the findings on the occurrence and environmental impact of chiral bioactive compounds, and suggests areas for future research. |
A review of key development areas in low-cost packaging and integration of future E-band mm-wave transceivers | 5 | ---
paper_title: Study on high rate long range wireless communications in the 71–76 and 81–86 GHz bands
paper_content:
Performance of high data rate wireless line-of-sight communications in the E-band is analysed using an example of the spectrally efficient multi-gigabit system reported earlier. The paper discusses available technologies, the potential for further increases in communication range, and challenges in the development of future multi-gigabit wireless networks.
---
paper_title: A comparison of basic 94 GHz planar transmission line resonators in commercial BiCMOS back-end-of-line processes
paper_content:
Paper presented at the 2014 International Conference on Actual Problems of Electron Devices Engineering (APEDE), 25-26 Sept. 2014, Saratov.
---
paper_title: Energy harvesting for Wireless Sensors from electromagnetic fields around overhead power lines
paper_content:
In this paper, capacitive and inductive energy harvesting devices are proposed to extract energy from the electric and magnetic fields surrounding 132kV overhead transmission lines. Research has been done to determine what parameters affect these energy harvesters. In-lab testing has been done on both the inductive and capacitive energy harvesters and results look promising. A single IC solution for 3.3V voltage regulation and energy storage was also tested.
---
paper_title: A proposed communications infrastructure for the smart grid
paper_content:
This paper explores and fortifies the need for a robust communications infrastructure for the upcoming smart grid and computes the bandwidth requirement for a hypothetical grid infrastructure. It presents the architecture of the current distribution system and shows that even for a medium-sized grid, the latency requirements of messages on the smart grid will require optical fibers as the transmission medium.
---
paper_title: Viability of powerline communication for the smart grid
paper_content:
There is currently an ongoing debate surrounding what would be the best choice for smart grid communication technology. One of the promising communication technologies for smart grid realization is Powerline Communication (PLC). However, because of its noisy environment and the low capacity of Narrowband Powerline Communication, its viability for smart grid realization is being questioned. To investigate this issue, we studied smart grid communication network requirements. We categorize smart grid data traffic into two general traffic classes: home area network data traffic and distribution automation data traffic. Then, using network simulator-2, we simulate powerline communication and a smart grid communication network. To gain a better understanding of the viability of powerline communication for smart grid realization, some future advanced smart grid applications are considered. Latency and reliability are considered to be the main smart grid communication network requirements. In this paper, the delays of different traffic classes under different network infrastructures and traffic applications have been calculated. Furthermore, a viable powerline communication network infrastructure for a smart grid communication network is proposed.
---
paper_title: Space and frequency multiplexing for MM-wave multi-gigabit point-to-point transmission links
paper_content:
During the last 10 years, the use of the E-band frequencies from 71 to 76 GHz, from 81 to 86 GHz and from 92 to 95 GHz by licensed users has been regulated in the US, Europe, Australia and Japan. Due to the large amount of available bandwidth and reasonable atmospheric attenuation, these frequency bands are suitable for very high data rate radio communication over medium to long range wireless links. However, in order to convert the bandwidth availability into real capacity, suitable transmission techniques should be designed. In the present paper, we propose a space-frequency multiplexing technique using FDM, coded modulation and 4×4 MIMO spatial multiplexing for point-to-point multi-gigabit connections in the 81–86 GHz band. We tested the proposed system considering different link distances and different values of path loss and atmospheric and rain attenuation. Simulation results demonstrated the possibility of achieving a 48 Gb/s net capacity over a 5 GHz bandwidth (spectral efficiency 9.6 b/s/Hz) with 99.98% availability at link distances up to 1 km.
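The quoted figures are mutually consistent, since net capacity is simply the product of spectral efficiency and occupied bandwidth:
\[ C = \eta \cdot B = 9.6\ \mathrm{b/s/Hz} \times 5\ \mathrm{GHz} = 48\ \mathrm{Gb/s}. \]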
---
paper_title: Wireless Network Design for Transmission Line Monitoring in Smart Grid
paper_content:
In this paper, we develop a real-time situational awareness framework for the electrical transmission power grid using Wireless Sensor Network (WSN). While WSNs are capable of cost efficient monitoring over vast geographical areas, several technical challenges exist. The low power, low data rate devices cause bandwidth and latency bottlenecks. In this paper, our objective is to design a wireless network capable of real-time delivery of physical measurements for ideal preventive or corrective control action. For network design, we formulate an optimization problem with the objective of minimizing the installation and operational costs while satisfying the end-to-end latency and bandwidth constraints of the data flows. We study a hybrid hierarchical network architecture composed of a combination of wired, wireless and cellular technologies that can guarantee low cost real-time data monitoring. We formulate a placement problem to find the optimal location of cellular enabled transmission towers. Further, we present evaluation results of the optimization solution for diverse scenarios. Our formulation is generic and addresses real world scenarios with asymmetric sensor data generation, unreliable wireless link behavior, non-uniform cellular coverage, etc. Our analysis shows that a transmission line monitoring framework using WSN is indeed feasible using available technologies. Our results show that wireless link bandwidth can be a limiting factor for cost optimization.
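A generic sketch of such a placement problem (not the authors' exact formulation; all symbols are illustrative placeholders) is a binary program in which x_i = 1 if tower i is equipped with cellular backhaul, c_i is its installation and operating cost, D_f the end-to-end delay of flow f and λ_l the load offered to link l:
\[ \min_{x_i \in \{0,1\}} \sum_i c_i x_i \quad \text{s.t.}\quad D_f(x) \le D_{\max}\ \ \forall f, \qquad \lambda_l(x) \le B_l\ \ \forall l. \]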
---
paper_title: Smart Grid Technologies: Communication Technologies and Standards
paper_content:
For 100 years, there has been no change in the basic structure of the electrical power grid. Experiences have shown that the hierarchical, centrally controlled grid of the 20th Century is ill-suited to the needs of the 21st Century. To address the challenges of the existing power grid, the new concept of smart grid has emerged. The smart grid can be considered as a modern electric power grid infrastructure for enhanced efficiency and reliability through automated control, high-power converters, modern communications infrastructure, sensing and metering technologies, and modern energy management techniques based on the optimization of demand, energy and network availability, and so on. While current power systems are based on a solid information and communication infrastructure, the new smart grid needs a different and much more complex one, as its dimension is much larger. This paper addresses critical issues on smart grid technologies primarily in terms of information and communication technology (ICT) issues and opportunities. The main objective of this paper is to provide a contemporary look at the current state of the art in smart grid communications as well as to discuss the still-open research issues in this field. It is expected that this paper will provide a better understanding of the technologies, potential advantages and research challenges of the smart grid and provoke interest among the research community to further explore this promising research area.
---
paper_title: Expanding wireless bandwidth in a power-efficient way: developing a viable mm-wave radio technology
paper_content:
In recent years, demand for wireless bandwidth has been growing rapidly; a recent study has predicted that by 2015, demand will increase to twenty-six times that of today. However, the ability of wireless networks to handle even contemporary traffic is becoming problematic, with network overloading becoming a growing and widening problem; if data traffic increases by even a fraction of projections, wireless networks (as presently constructed) will be completely unable to cope with demand. This paper examines the current state of mm-wave radio technology for addressing this bandwidth/power-consumption crisis. The main promise of mm-wave radio is a route to power-efficient bandwidth - the ability to greatly increase the available data rates without increasing power consumption. To deliver successful mm-wave radio technology for general use, a number of multifaceted challenges will need to be addressed; these range from basic integrated circuit design to network design and network management.
---
paper_title: Millimeter Wave Mobile Communications for 5G Cellular: It Will Work!
paper_content:
The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices.
---
paper_title: ICT Barriers and Critical Success Factors in Developing Countries
paper_content:
Since the early 1990s, Information and Communication Technology (ICT) has been perceived as a catalyst for development. However, the UNICEF State of the World’s Children Report 2011 acknowledges that the poor in many developing countries remain largely excluded from ICT and its benefits. This paper aims to address three issues. Firstly, identify ICT barriers in the literature from 2000 to 2011. Secondly, identify ICT barriers through empirical findings and thirdly, categorize these barriers into critical success factors. These aims are achieved by comparing the findings in the literature to our recent empirical results. Two methodologies are used in this study, namely, a systematic literature review and a case study; the empirical data for our case study was collected from The Gambia in autumn of 2012. The systematic literature review covers 1107 studies (2000-2011) published in the top five ranked ICT4D journals in terms of journal citation ranking. The paper identifies a total of 43 ICT barriers. Forty of them are common to both studies while the remaining three were revealed in the case study, namely, lack of Internet exchange points, micromanaging and invisible hands. The barriers in both studies are grouped into eight possible critical success factors and their degrees of severity are then compared. This paper argues that lack of Internet exchange points is an important ICT barrier that is overlooked in our review pool.
---
paper_title: A Survey on Smart Grid Communication Infrastructures: Motivations, Requirements and Challenges
paper_content:
A communication infrastructure is an essential part of the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial in both the construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grids with communication infrastructures, we expect that traditional carbon-fuel-based power plants can cooperate with emerging distributed renewable energy sources such as wind and solar to reduce carbon fuel consumption and the consequent greenhouse gas emissions such as carbon dioxide. Consumers can minimize their energy expenses by adjusting their intelligent home appliance operations to avoid peak hours and utilize renewable energy instead. We further explore the challenges for a communication infrastructure as part of a complex smart grid system. Since a smart grid system might have millions of consumers and devices, the demands on its reliability and security are extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality and eliminate electricity blackouts. Security is a challenging issue, since ongoing smart grid systems face increasing vulnerabilities as more and more automation, remote monitoring/control and supervision entities are interconnected.
---
paper_title: Millimeter-wave measurement of complex permittivity using dielectric rod resonator excited by NRD-guide
paper_content:
A new method for measuring complex permittivity of low-loss dielectric materials at millimeter-wave frequencies has been developed. The method uses a dielectric rod resonator excited by a nonradiative dielectric waveguide. The effective conductivity of the conducting plates that short-circuit the resonator is determined from the difference in unloaded Q factors between TE0m1- and TE0mδ-mode resonators made of the same low-loss dielectric material. The complex permittivity of the dielectric rod is determined from the resonant frequency (f0) and unloaded Q factor (Qu) of the TE0m1-mode resonator. The complex permittivities of single-crystal sapphire, polycrystalline Ba(Mg1/2W1/2)O3 and Mg2Al4Si5O18 (cordierite) have been obtained at 60 and 77 GHz by the new method. These results were consistent with the values measured at microwave frequencies. It was also found that the frequency dependence of the dielectric loss tangent (tanδ) for sapphire can be expressed by frequency/tanδ = 1×10^6 GHz.
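The stated relation for sapphire, frequency/tanδ = 1×10^6 GHz (equivalently a dielectric Q·f product of about 10^6 GHz, since Q = 1/tanδ), implies
\[ \tan\delta(60\ \mathrm{GHz}) = \frac{60}{10^{6}} = 6.0\times 10^{-5}, \qquad \tan\delta(77\ \mathrm{GHz}) \approx 7.7\times 10^{-5}. \]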
---
paper_title: GaAs E Band Radio Chip-Set
paper_content:
A GaAs pHEMT radio chip-set, consisting of receiver, up-converter and power amplifier, for E-band applications demonstrates excellent conversion gain, linearity and output power over the entire 15 GHz bandwidth of the European Telecommunications Standards Institute (ETSI) E-band specification. The receiver's measured gain is 12 dB with an image rejection exceeding 10 dB, an IIP2 of 17 dBm and IIP3 of 5 dBm. For the up-converter, the measured conversion gain exceeds 10 dB and the OIP3 is approximately 26 dBm. The power amplifier has an average measured output power of 25.4 dBm and exceeds 24.5 dBm over the band. This amplifier has a measured small signal gain of 20 dB, OIP3 of approximately 32 dBm and the input and output return losses exceed 15 dB. The saturated output exceeds previous results for a power amplifier spanning the full 71 to 86 GHz span of the ETSI E bands for any semiconductor system. To the authors' knowledge this is the highest performance E-band full chipset solution realized in a commercially available GaAs foundry.
---
paper_title: Low-cost E-band receiver front-end development for gigabyte point-to-point wireless communications
paper_content:
This paper presents an attractive low cost E-band (81-86 GHz) receiver front-end system with a broad bandwidth (up to 5 GHz) based on substrate integrated waveguide (SIW) technology. The proposed subsystem is designed for gigabyte point-to-point wireless communication systems. Active components are surface-mounted on an alumina substrate utilizing miniature hybrid microwave-integrated circuit (MHMIC) technology. To increase the gain and bandwidth of the antenna, and also to reduce fabrication cost, the antenna and passive components are designed and fabricated on a low cost Rogers 6002 substrate with a low dielectric constant. To fabricate the passive components, a low cost printed circuit board (PCB) process is used. The two substrates are then integrated together by a simple wire bonding process. With this design and implementation technique, our E-band front-end receiver subsystem demonstrates attractive advantages and features such as broad bandwidth, low cost, compact size, light weight, and repeatable performance. The minimum detectable power density at the receiver point is 0.41 μW/cm2 and the dynamic range is 72 dB.
---
paper_title: An E-band transceiver with 5GHz IF bandwidth
paper_content:
This paper reports design and test results of a wideband low-complexity RF transceiver for full duplex multi-gigabit communication systems operating in 71–76 and 81–86 GHz frequency bands based on high-performance integrated multi-chip frequency conversion modules that use commercial GaAs MMICs.
---
paper_title: Passives partitioning for single package single chip SoC on 32nm RFCMOS technology
paper_content:
Using a multilayer organic package substrate, we demonstrate that future system-on-chip with co-existing digital and radio integrated circuits can be partitioned to have critical RF inductors/transformers on package and active devices and capacitors on silicon, which enables cost and form factor reduction as well as design of radios on low resistivity substrate. As proof of concept, we present an example of silicon/package co-design, where a WLAN bandpass filter has been designed using 3D inductors on-package and multi-finger-capacitors on a 32nm silicon process. The design is validated both experimentally and through full wave electromagnetic simulation.
---
paper_title: MMIC-based chipset for multi-Gigabit satellite links in E-band
paper_content:
A complete MMIC-based chipset for multi-Gigabit satellite communication links in the designated E-Band frequency ranges is shown. The chipset includes a broadband I/Q up- and downconverter, a frequency multiplier for LO generation, a low noise amplifier with a noise figure below 2 dB integrated within the receiver chip, as well as a power amplifier with a saturated output power above 22 dBm. A link budget analysis indicates the feasibility of a LEO satellite downlink with the presented fully integrated solid-state chipset at data rates up to 5 Gbit/s.
---
paper_title: Passive Circuit Technologies for mm-Wave Wireless Systems on Silicon
paper_content:
The performance characteristics of transmission lines, silicon integrated waveguides, tunable LC resonators and passive combiners/splitters and baluns are described in this paper. It is shown that Q-factor for an on-chip LC tank peaks between 20 and 40 GHz in a 65 nm RF-CMOS technology; well below the bands proposed for many mm-wave applications. Simulations also predict that the Q-factor of differential CPW transmission lines on-chip can exceed 20 at 60 GHz in RF-CMOS when a floating shield is applied, outperforming unshielded variants employing more advanced metal stacks. A PA circuit demonstrator for advanced on-chip passive power combiners, splitters and baluns realizes peak-PAE of 18% and Psat better than 20 dBm into a 50 Ω load at 62 GHz. An outlook to the enablement of digitally intensive mm-wave ICs and low-loss passive interconnections (0.15 dB/mm measured loss at 100 GHz) concludes the paper.
---
paper_title: Microwave, millimiter and submillimeter devices, instrumentation and equipment
paper_content:
We design and manufacture noise measurement systems for high-sensitivity measurement of the amplitude and phase noise of components of radar and communication systems (oscillators, local oscillators, amplifiers, mixers) in the frequency range from 1 to 180 GHz. For this purpose we have developed a set of microwave components (coaxial, microstrip, waveguide) in this frequency range: resonators, detectors, mixers, local oscillators, attenuators, phase shifters, directional couplers, switches, matched loads and others. The main advantages of these components are wide frequency range, low losses, low oscillator noise and very high resonator Q-factor. In addition to the components outlined above, we offer microwave measurement systems for measuring the phase and amplitude noise of microwave signals and for measuring the constitutive parameters of various materials in the cm- and mm-wave range (1-180 GHz) as well as the submm-wave range (180-405 GHz).
---
paper_title: An E-band transmitter module constructed with four WLCSP MMICs solder-reflowed On PCB
paper_content:
The first edition of an E-band transmitter module is demonstrated. The module consists of four WLCSP MMICs solder-reflowed onto a designated PCB 14 mm × 10 mm in size. Because WLCSP technology is incorporated, the module is mass producible and potentially very low cost compared with current E-band products. WLCSP MMICs in the E-band must be treated carefully to suppress undesired bulk modes; an electromagnetic absorber stabilizes the performance. To demonstrate a TX module, we newly designed an LNA as a driver amplifier and a PA, which are cascaded with tripler and up-converter WLCSP MMICs on the PCB. The LNA and PA have a variable gain function with a topology modified from the dual-HEMT amplifier. The conversion gain of the TX module is 19 dB and the saturated RF output power level is 17.5 dBm.
---
paper_title: Low insertion loss substrate integrated waveguide quasi-elliptic filters for V-band wireless personal area network applications
paper_content:
Novel V-band substrate integrated waveguide (SIW) filters are presented. Design procedures for the filter synthesis and the mechanisms providing the quasi-elliptic response are explained. The insertion loss of the filters has been measured to be below 2 dB, with the microstrip-to-SIW transitions included.
---
paper_title: High-Performance Shielded Coplanar Waveguides for the Design of CMOS 60-GHz Bandpass Filters
paper_content:
This paper presents optimized very high performance CMOS slow-wave shielded CPW transmission lines (S-CPW TLines). They are used to realize a 60-GHz bandpass filter, with T-junctions and open stubs. Owing to a strong slow-wave effect, the longitudinal length of the S-CPW is reduced by a factor up to 2.6 compared to a classical microstrip topology in the same technology. Moreover, the quality factor of the realized S-CPWs reaches 43 at 60 GHz, which is about two times higher than the microstrip one and corresponds to the state of the art concerning S-CPW TLines with moderate width. For a proof of concept of complex passive device realization, two millimeter-wave filters working at 60 GHz based on dual-behavior-resonator filters have been designed with these S-CPWs and measured up to 110 GHz. The measured insertion loss for the first-order (respectively, second-order) filter is -2.6 dB (respectively, -4.1 dB). The comparison with a classical microstrip topology and the state-of-the-art CMOS filter results highlights the very good performance of the realized filters in terms of unloaded quality factor. It also shows the potential of S-CPW TLines for the design of high-performance complex CMOS passive devices.
---
paper_title: A Millimeter-Wave System-on-Package Technology Using a Thin-Film Substrate With a Flip-Chip Interconnection
paper_content:
In this paper, a system-on-package (SOP) technology using a thin-film substrate with a flip-chip interconnection has been developed for compact and high-performance millimeter-wave (mm-wave) modules. The thin-film substrate consists of Si-bumps, ground-bumps, and multilayer benzocyclobutene (BCB) films on a lossy silicon substrate. The lossy silicon substrate is not only a base plate of the thin-film substrate, but also suppresses the parasitic substrate mode excited in the thin-film substrate. Suppression of the substrate mode was verified with measurement results. The multilayer BCB films and the ground-bumps provide the thin-film substrate with high-performance integrated passives for the SOP capability. A broadband port terminator and a V-band broad-side coupler based on thin-film microstrip (TFMS) circuits were fabricated and characterized as mm-wave integrated passives. The Si-bumps dissipate the heat generated during the operation of flipped chips as well as provide mechanical support. The power dissipation capability of the Si-bumps was confirmed with an analysis of DC-IV characteristics of GaAs pseudomorphic high electron-mobility transistors (PHEMTs) and radio-frequency performances of a V-band power amplifier (PA). In addition, the flip-chip transition between a TFMS line on the thin-film substrate and a coplanar waveguide (CPW) line on a flipped chip was optimized with a compensation network, which consists of a high-impedance and low-impedance TFMS line and a removed ground technique. As an implementation example of the mm-wave SOP technology, a V-band power combining module (PCM) was developed on the thin-film substrate with the flip-chip interconnection. The V-band PCM incorporating two PAs with broadside couplers showed a combining efficiency higher than 78%.
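Combining efficiency is commonly defined (the abstract does not state its exact definition) as the combined output power relative to the sum of the individual amplifier outputs,
\[ \eta_c = \frac{P_{\text{combined}}}{\sum_i P_{\text{out},i}}, \]
so η_c > 78% means, for example, that two amplifiers each delivering 100 mW would yield more than 156 mW after the on-package combiner.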
---
paper_title: Multilayer RF PCB for space applications: technological and interconnections trade-off
paper_content:
Multilayer RF printed circuit boards with embedded passives are complex structures to manufacture and to package while keeping good RF performance. In this paper, technological solutions aimed at addressing these issues are proposed: the use of thermoset materials instead of thermoplastic laminates and an original interconnection technique based on "RF openings" machined in the edges of the boards. Two breadboards based on these solutions have been developed and demonstrate very good RF performance between 5 and 15 GHz.
---
paper_title: A low-cost 24GHz Doppler radar sensor for traffic monitoring implemented in standard discrete-component technology
paper_content:
This paper deals with both the implementation and the real-life characterization of a low-cost 24 GHz Doppler radar sensor, purposely designed for traffic monitoring. To reduce industrial costs as much as possible, a discrete-components technology has been adopted for the microwave front-end. Plastic-packaged devices and a fiberglass-reinforced substrate are used in such a way as to fit with standard PCB manufacturing processes and automated assembly procedures. The signal manipulation is based on a state-machine algorithm and has been implemented in an 8051-family micro-controller unit. The realized sensor has a typical output power of 6 dBm and mounts a planar antenna with a 3 dB beam-width of ±4.5 degrees. The real-life measured performance shows a detection range in excess of 300 meters.
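For a monostatic CW radar the Doppler shift of a target moving radially at speed v is
\[ f_d = \frac{2 v f_0}{c}, \]
so at f_0 = 24 GHz a vehicle at an assumed 50 km/h (about 14 m/s, a value chosen only for illustration) gives f_d ≈ 2 × 14 × 24×10^9 / (3×10^8) ≈ 2.2 kHz, i.e. an audio-rate signal that a small micro-controller can readily process.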
---
paper_title: A 60GHz LC-VCO module using flip-chip on a laminate substrate
paper_content:
For emerging mm-wave consumer applications such as high data-rate wireless communications at 60GHz and car radar at 76–81GHz, it is important to investigate the impact of module assembly on IC performance. Flip-chip is a promising candidate to meet requirements like low reflections, low insertion loss and low costs for mm-wave applications. This paper addresses the design, modeling and evaluation results of a 60GHz LC-VCO module using flip-chip. The impact of the substrate on on-chip CPW transmission lines and spiral inductors is studied based on the performance of a 60GHz LC-VCO. Since the inductor is part of the VCO resonator, a remarkable 10% increase in oscillation frequency occurs due to the nearby top-metal layer of the substrate. The IC is realized in a 0.25µm SiGe BiCMOS process. The 0.44mm thick substrate offers four copper signal layers.
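To first order, and assuming the tank capacitance is unchanged (an assumption, not a statement from the paper), the LC-tank oscillation frequency is
\[ f_{\mathrm{osc}} = \frac{1}{2\pi\sqrt{LC}}, \]
so the reported 10% frequency increase corresponds to the LC product falling by a factor of 1/1.1^2 ≈ 0.83, i.e. roughly a 17% reduction in effective tank inductance due to the nearby package metal.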
---
paper_title: A fully integrated SiGe E-BAND transceiver chipset for broadband point-to-point communication
paper_content:
Fully integrated chipsets at E-band frequencies in a superheterodyne architecture covering the lower 71–76 GHz and upper 81–86 GHz bands were designed and fabricated in 0.13 μm SiGe technology. The receiver chips include an image-reject low-noise amplifier (LNA), RF-to-IF mixer, variable-gain IF amplifier, quadrature IF-to-baseband demodulators, tunable baseband filter, phase-locked loop (PLL), and frequency multiplier by four (quadrupler). The receiver chips achieve 60 dB gain, 8.5 dB noise figure and −30 dBm IIP3, and consume 600 mW. The transmitter chips include a power amplifier, image-reject driver, IF-to-RF up-converting mixer, variable-gain IF amplifier, quadrature baseband-to-IF modulator, PLL, and frequency multiplier by four (quadrupler). The transmitter achieves an output P1dB of 0 to 11 dBm, a Psat of 3.3 to 14 dBm, and consumes 850 mW.
---
paper_title: A Ka-band transceiver front-end module for wide band wireless communication
paper_content:
In this paper, a Ka-band transceiver (T/R) front-end module for wide band and high speed wireless communication is presented. A new multilayer PCB structure is proposed. With this structure, an entire millimeter wave system can be integrated on one board. The performance of the T/R front-end module is measured. In order to verify the feasibility of the T/R front-end module for practical wireless communication applications, a complete wireless communication system with two T/R front-end modules is designed and implemented. A 16QAM signal with a data rate of 20 Ms/s and a 512QAM signal with a data rate of 5 Ms/s are used to evaluate the system performance. Experimental results are satisfactory.
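Since the Ms/s figures are symbol rates, the corresponding raw (uncoded) bit rates follow directly from the constellation sizes:
\[ 16\text{-QAM: } \log_2 16 \times 20\ \mathrm{Ms/s} = 80\ \mathrm{Mb/s}, \qquad 512\text{-QAM: } \log_2 512 \times 5\ \mathrm{Ms/s} = 45\ \mathrm{Mb/s}, \]
before any coding or framing overhead.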
---
paper_title: A hybrid fabricated 40 GHz low phase noise SiGe push-push oscillator
paper_content:
We present a 40 GHz push-push oscillator based on SiGe HBTs. The circuit is fabricated in thin film technology on an alumina substrate. It provides an output power of -9 dBm at 40 GHz and two differential outputs with -6 dBm each at the fundamental frequency of 20 GHz. The oscillator shows excellent phase noise performance and reaches -108 dBc/Hz at the second harmonic frequency of 40 GHz and -114 dBc/Hz at the fundamental frequency of 20 GHz. Both values are measured at an offset frequency of 1 MHz.
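The two values are consistent with the ideal behaviour of a push-push topology, in which extracting the second harmonic degrades phase noise by 20 log10(2) ≈ 6 dB relative to the fundamental:
\[ \mathcal{L}(2f_0) \approx \mathcal{L}(f_0) + 6\ \mathrm{dB} \;\Rightarrow\; -114\ \mathrm{dBc/Hz} + 6\ \mathrm{dB} = -108\ \mathrm{dBc/Hz}. \]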
---
paper_title: Oscillator phase noise: a tutorial
paper_content:
Linear time-invariant (LTI) phase noise theories provide important qualitative design insights but are limited in their quantitative predictive power. Part of the difficulty is that device noise undergoes multiple frequency translations to become oscillator phase noise. A quantitative understanding of this process requires abandoning the principle of time invariance assumed in most older theories of phase noise. Fortunately, the noise-to-phase transfer function of oscillators is still linear, despite the existence of the nonlinearities necessary for amplitude stabilization. In addition to providing a quantitative reconciliation between theory and measurement, the time-varying phase noise model presented in this tutorial identifies the importance of symmetry in suppressing the upconversion of 1/f noise into close-in phase noise, and provides an explicit appreciation of cyclostationary effects and AM-PM conversion. These insights allow a reinterpretation of why the Colpitts oscillator exhibits good performance, and suggest new oscillator topologies. Tuned LC and ring oscillator circuit examples are presented to reinforce the theoretical considerations developed. Simulation issues and the accommodation of amplitude noise are considered in appendixes.
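A representative LTI result of the kind contrasted here is Leeson's heuristic model (quoted in its commonly cited form, so indicative rather than exact), which predicts the single-sideband phase noise at offset Δf from a carrier at f_0 as
\[ \mathcal{L}(\Delta f) \approx 10\log_{10}\!\left[\frac{2FkT}{P_{\mathrm{sig}}}\left(1+\left(\frac{f_0}{2Q_L\,\Delta f}\right)^{2}\right)\left(1+\frac{\Delta f_{1/f^{3}}}{\Delta f}\right)\right], \]
with F an empirical noise factor, P_sig the carrier power and Q_L the loaded tank Q; the time-varying theory replaces the empirical F and the 1/f^3 corner with quantities derived from the impulse sensitivity function.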
---
paper_title: Radiated emissions and immunity of microstrip transmission lines: theory and reverberation chamber measurements
paper_content:
The increasing complexity of electronic systems has introduced an increased potential for electromagnetic interference (EMI) between electronic systems. We analyze the radiation from a microstrip transmission line and calculate the total radiated power by numerical integration. Reverberation chamber methods for measuring radiated emissions and immunity are reviewed and applied to three microstrip configurations. Measurements from 200 to 2000 MHz are compared with theory, and excellent agreement is obtained for two configurations that minimize feed cable and finite ground plane effects. Emissions measurements are found to be more accurate than immunity measurements because the impedance mismatch of the receiving antenna cancels when the ratio of the microstrip and reference radiated power measurements is taken. The use of two different receiving antenna locations for emissions measurements illustrates good field uniformity within the chamber.
---
paper_title: Packaging Effects of Multiple X-Band SiGe LNAs Embedded in an Organic LCP Substrate
paper_content:
Interconnects in radio frequency (RF) packages have a strong tendency to deteriorate RF performance, especially in high-frequency systems. In this paper, comparison is made between the wirebonded and embedded flip-chip packages. X-band silicon-germanium low-noise amplifiers are used to evaluate the performance of these interconnects. Measured results show that the embedded flip-chip packages have better RF performance than the wirebonded packages for X-band applications. At 9.5 GHz, the flip-chip interconnects contribute only 0.4 dB of insertion loss, while the wirebond interconnects contribute 2.2 dB of insertion loss. The flip-chip and wirebond interconnects are modeled and validated against measured results from 8 to 20 GHz. For the first time, multiple dies are put together in a single liquid crystal polymer package to compare the packaging effects, and to demonstrate the feasibility of embedding multiple dies within a single package for highly integrated solutions.
---
paper_title: A full E-band low noise amplifier realized by using novel wafer-level chip size package technology suitable for reliable flip-chip reflow-soldering
paper_content:
Cost effective E-band Low Noise Amplifier (LNA) using a three-dimensional (3-D) MMIC technology and wafer level chip size package (WLCSP) technology is presented. The reflow-soldering compatibility of the technology makes MMIC assembly on PCB very simple and significantly contributes to mass production of receivers and transmitters. The applied 3-D MMIC design effectively shrinks the die sizes. The newly designed LNA exhibited an on-chip gain of 22.5 ± 1 dB and a noise figure of 4.0 ± 0.3 dB in the full E-band. The measurements of the assembled LNA, including extended transmission lines on PCB, exhibited a gain of 21 ± 1 dB and a noise figure of 4.6 ± 0.2 dB. Eliminating the extended transmission line losses of about 0.3 dB × 2, the noise figure increase due to assembly was only 0.3 dB. These results show that the LNA using the 3-D WLCSP technology is valuable for E-band application in terms of not only cost reduction but also practical performance achievement.
---
paper_title: A high gain E-band MMIC LNA in GaAs 0.1-µm pHEMT process for radio astronomy applications
paper_content:
In this paper, we present an E-band MMIC low noise amplifier (LNA) using 0.1-μm GaAs pHEMT technology operating at 1 V and 2 V drain voltage. The E-band LNA shows a small signal gain of 28 dB from 62 to 77 GHz with a DC power consumption of 44 mW. Noise measurements conducted in the package show an average noise figure of about 4.5 dB from 75 to 90 GHz. The figure-of-merit (FOM) is 212.5 (GHz/mW), which is the highest compared with other LNAs using 0.1-μm GaAs pHEMT technology.
---
paper_title: Design and performance of a 60-GHz multi-chip module receiver employing substrate integrated waveguides
paper_content:
The design, fabrication and performance of a 60-GHz multi-chip module receiver employing a substrate integrated waveguide filter and antenna are presented. The receiver is integrated onto a single multilayer substrate, fabricated using photoimageable thick-film technology. The module incorporates a GaAs monolithic microwave integrated circuit low-noise amplifier and downconverter, with lumped elements for intermediate frequency (IF) filtering embedded into the substrate. The chip cavities for mounting of the MMICs are photo-imaged as part of the standard process, giving precise dimensional control for short bond-wire lengths. The complete module including integrated antenna measures only 22.5 mm x 5.4 mm x 0.3 mm and works from 58 to 64 GHz.
---
paper_title: Development of millimeter-wave passive components and system-in-packages by LTCC technology
paper_content:
This paper addresses the recent development of millimeter-wave passive components for radar and wireless communication applications using low-temperature co-fired ceramic (LTCC) technology. Several new design ideas that fully utilize the LTCC multilayer features and exploit the existence of multiple cavity modes to realize special performance are summarized, including multi-band filters with flexible band assignments, compact diplexers with single-branch configurations, and couplers and antennas with filtering characteristics. Finally, two system-in-package application examples are demonstrated: a 60 GHz front-end phased array for dense wireless communications and a micro-radar module for noncontact vital sign detection.
---
paper_title: 94 GHz Substrate Integrated Monopulse Antenna Array
paper_content:
A planar W-band monopulse antenna array is designed based on substrate integrated waveguide (SIW) technology. The sum-difference comparator, 16-way divider and 32 × 32 slot array antenna are all integrated on a single dielectric substrate in a compact layout through a low-cost PCB process. Such a substrate integrated monopulse array is able to operate over 93-96 GHz with a narrow beam and high gain. The maximum gain is measured to be 25.8 dBi, while the maximum null depth is measured to be -43.7 dB. This SIW monopulse antenna not only has the advantages of low cost, light weight and easy fabrication, but also has good performance validated by measurements. It is an excellent candidate for W-band direction-finding systems.
---
paper_title: An L-band tapered-ridge SIW-to-CPW transition
paper_content:
A tapered ridge transition between coplanar waveguide and substrate-integrated waveguide is presented. The taper is implemented through staircase metallization across 10 layers of conventional RF substrate with sidewall expansion and upper cut-out using Vivaldi-type exponential tapers. Fractional bandwidth of 36% is achieved in the L-band, with insertion loss of 0.5 dB and return loss of 15 dB.
---
paper_title: A Low Phase-Noise VCO Using an Electronically Tunable Substrate Integrated Waveguide Resonator
paper_content:
In this paper, an X-band low phase-noise voltage-controlled oscillator (VCO) using a novel electronically tunable substrate integrated waveguide (SIW) resonator is proposed and developed for RF/microwave applications on the basis of the substrate integrated circuits concept. In this case, the resonant frequency of the SIW cavity resonator is tuned by different dc-biasing voltages applied over a varactor coupled to the cavity. Measured results show that the tuning range of the resonator is about 630 MHz with an unloaded QU of 138. Subsequently, a novel reflection-type low phase-noise VCO is developed by taking advantage of the proposed tunable resonator. Measured results demonstrate a frequency tuning range of 460 MHz and a phase noise of -88 dBc/Hz at a 100-kHz offset over all oscillation frequencies. The VCO is also able to deliver an output power from 6.5 to 10 dBm. This type of VCO is very suitable for low-cost microwave and millimeter-wave applications.
---
paper_title: Investigation of substrate integrated waveguide in LTCC technology for mm-wave applications
paper_content:
This paper presents the electromagnetic modeling and design of substrate integrated waveguides (SIW) up to W band (75 – 110 GHz) for Low-Temperature Co-fired Ceramics (LTCC) technology. The commercial software package CST Microwave Studio was used to optimize a coplanar waveguide (CPW) to SIW transition. Two test structures with two CPW to SIW transitions were designed, fabricated and measured up to 110 GHz. The comparison between simulated and measured data for the test structures shows a good agreement. The effective permittivity was extracted from both simulated and measured data and show good agreement.
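A commonly used empirical design rule (not taken from this paper) treats an SIW of width w, via diameter d and via pitch s as a rectangular waveguide of effective width
\[ w_{\mathrm{eff}} \approx w - \frac{d^{2}}{0.95\,s}, \qquad f_{c,\mathrm{TE_{10}}} \approx \frac{c}{2\,w_{\mathrm{eff}}\sqrt{\varepsilon_r}}, \]
which gives a convenient starting point when scaling such LTCC structures toward W-band.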
---
paper_title: Low-Loss Integrated-Waveguide Passive Circuits Using Liquid-Crystal Polymer System-on-Package (SOP) Technology for Millimeter-Wave Applications
paper_content:
In this paper, we show a low-loss integrated waveguide (IWG), microstrip line-to-IWG transition, IWG bandpass filter (BPF), and system-on-package (SOP) using a liquid-crystal polymer (LCP) substrate, which can be used toward SOP technology for millimeter-wave applications. The proposed IWG can be used as a low-loss millimeter-wave transmission line on this substrate. The measured insertion loss of the IWG is -0.12 dB/mm and the measured insertion loss of two microstrip line-to-IWG transitions is -0.14 dB at 60 GHz. The evaluated IWG filter is demonstrated as the pre-select filter for RF front-end modules at the millimeter-wave band. The fabricated three-pole BPF at a center frequency of 61.1 GHz has specifications: a 3-dB bandwidth of approximately 13.4% (~8.4 GHz), an insertion loss of -1.8 dB at the center frequency of 61.1 GHz, and a rejection of >15 dB over the passband. The proposed IWG can also be used as a low-loss millimeter-wave feed-through transition and interconnection between the monolithic microwave integrated circuit and the module instead of the vertical via structure. In terms of a SOP on LCP for millimeter-wave applications, the top face of the IWG does not have any electromagnetic effects, and a package lid can be attached to provide a hermetic sealing. These low-loss IWG circuits on LCP can easily be used in many millimeter-wave packaging applications
---
paper_title: X-Band Substrate Integrated Waveguide Amplifier
paper_content:
Design and realization of a compact X-band single-transistor amplifier with substrate integrated waveguide (SIW)-based input and output matching networks is presented. The overall size of the proposed SIW amplifier is only 1.5 λg at the center frequency. Using a calibration technique, we extract the S-parameters of the fabricated amplifier with reference to its SIW ports. Measurements show that the amplifier features 10 dB of power gain with less than 2 dB of ripple and more than 10 dB of input and output return losses on the SIW ports in the entire frequency band. Due to an appropriate modeling of the constituent blocks of the amplifier, a good agreement between the simulation and measurement results is observed.
---
paper_title: A Wire Bonding Structure Directly Based on Substrate Integrated Waveguide Technology
paper_content:
When integrating an integrated circuit (IC) chip with a substrate integrated waveguide (SIW), the usual practice is first to convert the SIW to a microstrip and then wire bond between the IC chip and the microstrip. Consequently, the transition between the SIW and the microstrip enlarges the circuit size and introduces extra loss. In this letter, a novel wire bonding structure directly between the SIW and the IC chip is proposed, which not only reduces the circuit size and loss but also reduces the sensitivity. Two W-band low noise amplifier (LNA) prototypes are designed and fabricated, with and without SIW-to-microstrip transitions, respectively. Compared with the microstrip wire bonding structure, the direct SIW wire bonding structure efficiently reduces the circuit size and improves the performance.
---
| Title: A review of key development areas in low-cost packaging and integration of future E-band mm-wave transceivers
Section 1: INTRODUCTION
Description 1: Introduce the significance of ICT development, the limitations faced in developing regions, and the potential of mm-wave transceivers for future wireless communications.
Section 2: APPROACHES
Description 2: Discuss different integration approaches for E-band transceivers, including system-on-chip (SoC) and off-the-shelf component integration, along with their benefits and drawbacks.
Section 3: Hybrid component integration in soft substrates
Description 3: Review the method of hybrid packaging which integrates on-chip semiconductor components with off-chip passive components, and explore potential configurations and their advantages.
Section 4: SOFT SUBSTRATE INTEGRATED WAVEGUIDE SYSTEMS
Description 4: Explore the use of substrate integrated waveguide (SIW) for E-band passive devices, the challenges faced when integrating active components, and potential solutions for full system integration.
Section 5: CONCLUSION
Description 5: Summarize the current state of low-cost E-band system integration, the benefits of discrete semiconductors, and the future directions for development in conventional RF substrates. |
MAC and Routing Layer Supports for QoS in MANET: A Survey | 9 | ---
paper_title: Mobile ad-hoc networking
paper_content:
Mobile ad-hoc networks are multihop wireless networks. Wireless communications and the lack of fixed infrastructure generate new research problems compared with fixed multihop networks: dynamic topologies (because nodes can move arbitrarily, the network topology, which is typically multihop, can change randomly and rapidly); bandwidth-constrained, variable-capacity, possibly asymmetric links (wireless links will continue to have significantly lower capacity than wired links and hence congestion is more problematic); energy-constrained operation (some or all nodes in a mobile ad-hoc network may rely on batteries for energy, and for these nodes power conservation is a critical design criterion); and wireless vulnerabilities and limited physical security (mobile wireless networks are generally more prone to information and physical security threats than are fixed, hardwired nets). Network technologies for single-hop wireless networks are beginning to appear on the market. The IEEE 802.11 standard is a good platform to implement a one-level multi-hop architecture because of its extreme. The Hiperlan standard is the ETSI standard for wireless LANs. Bluetooth technology is an open specification (supported by several hundreds of IT companies) for wireless communication of data and voice. Soon, mobile ad-hoc networks that cover areas of several square kilometres could be built from WLAN technologies such as IEEE 802.11. Furthermore, wireless-network technologies are the building blocks to construct multi-hop mobile ad-hoc networks. In the IETF, the Mobile Ad-hoc NETwork (MANET) working group was set up for developing and evolving mobile ad-hoc network routing specification(s) and introducing them to the Internet Standards track. The goal is to support networks scaling up to hundreds of routers. The step toward a large network (larger than a thousand nodes) consisting of nodes with limited resources is not straightforward and presents many challenges that are still to be solved in areas such as: high-capacity wireless technologies, configuration management, addressing, routing, location management, interoperability, and security. The minitrack presents six papers related to the design, modeling and performance evaluation of architectures and
---
paper_title: A survey of QoS routing solutions for mobile ad hoc networks
paper_content:
In mobile ad hoc networks (MANETs), the provision of quality of service (QoS) guarantees is much more challenging than in wireline networks, mainly due to node mobility, multihop communications, contention for channel access, and a lack of central coordination. QoS guarantees are required by most multimedia and other time- or error-sensitive applications. The difficulties in the provision of such guarantees have limited the usefulness of MANETs. However, in the last decade, much research attention has focused on providing QoS assurances in MANET protocols. The QoS routing protocol is an integral part of any QoS solution since its function is to ascertain which nodes, if any, are able to serve applications' requirements. Consequently, it also plays a crucial role in data session admission control. This document offers an up-to-date survey of most major contributions to the pool of QoS routing solutions for MANETs published in the period 1997–2006. We include a thorough overview of QoS routing metrics, resources, and factors affecting performance and classify the protocols found in the literature. We also summarize their operation and describe their interactions with the medium access control (MAC) protocol, where applicable. This provides the reader with insight into their differences and allows us to highlight trends in protocol design and identify areas for future research.
---
paper_title: Quality of service challenges for wireless mobile ad hoc networks
paper_content:
Wireless mobile ad hoc networks consist of mobile nodes interconnected by wireless multi-hop communication paths. Unlike conventional wireless networks, ad hoc networks have no fixed network infrastructure or administrative support. The topology of such networks changes dynamically as mobile nodes join or depart the network or radio links between nodes become unusable. Supporting appropriate quality of service for mobile ad hoc networks is a complex and difficult issue because of the dynamic nature of the network topology and generally imprecise network state information, and has become an intensely active area of research in the last few years. This paper presents the basic concepts of quality of service support in ad hoc networks for unicast communication, reviews the major areas of current research and results, and addresses some new issues. The principal focus is on routing and security issues associated with quality of service support. The paper concludes with some observations on the open areas for further investigation. Copyright © 2004 John Wiley & Sons, Ltd.
---
paper_title: Distributed Quality-of-Service Routing in Ad Hoc Networks
paper_content:
In an ad hoc network, all communication is done over wireless media, typically by radio through the air, without the help of wired base stations. Since direct communication is allowed only between adjacent nodes, distant nodes communicate over multiple hops. The quality-of-service (QoS) routing in an ad hoc network is difficult because the network topology may change constantly, and the available state information for routing is inherently imprecise. In this paper, we propose a distributed QoS routing scheme that selects a network path with sufficient resources to satisfy a certain delay (or bandwidth) requirement in a dynamic multihop mobile environment. The proposed algorithms work with imprecise state information. Multiple paths are searched in parallel to find the most qualified one. Fault-tolerance techniques are brought in for the maintenance of the routing paths when the nodes move, join, or leave the network. Our algorithms consider not only the QoS requirement, but also the cost optimality of the routing path to improve the overall network performance. Extensive simulations show that high call admission ratio and low-cost paths are achieved with modest routing overhead. The algorithms can tolerate a high degree of information imprecision.
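The constrained path search described above can be illustrated with a minimal, centralized sketch (the paper's actual scheme is distributed, ticket-based and tolerant of imprecise state, none of which is modelled here): links that cannot carry the requested bandwidth are pruned, the minimum-delay path over the remaining links is computed, and the path is accepted only if it meets the delay bound. The graph encoding, the function name qos_path and its parameters are assumptions made for this sketch.

```python
import heapq

def qos_path(graph, src, dst, bw_req, delay_bound):
    """Illustrative centralized QoS path search.
    graph: {u: [(v, link_delay, link_bandwidth), ...]}"""
    dist, prev, visited = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, delay, bw in graph.get(u, []):
            if bw < bw_req:                      # bandwidth-infeasible link: prune
                continue
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist or dist[dst] > delay_bound:
        return None                              # no path meets both constraints
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]
```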
---
paper_title: Generalized quality-of-service routing with resource allocation
paper_content:
We present a general framework for the problem of quality-of-service (QoS) routing with resource allocation for data networks. The framework represents the QoS parameters as functions rather than static metrics. The formulation incorporates the hardware/software implementation and its relation to the allocated resources into a single framework. The proposed formulation allows intelligent adaptation of QoS parameters and allocated resources during a path search, rather than decoupling the path search process from resource allocation. We present a dynamic programming algorithm that, under certain conditions, finds an optimal path between a source and destination node and computes the amount of resources needed at each node so that the end-to-end QoS requirements are satisfied. We present jitter and data droppage analyses of various rate-based service disciplines and use the dynamic programming algorithm to solve the problem of QoS routing with resource allocation for networks that employ these service disciplines.
---
paper_title: QoS enabled routing in mobile ad hoc networks
paper_content:
The aim of this work is to present a QoS enabled routing protocol in ad hoc networks and compare it with a normal routing protocol. Results demonstrate how a QoS enable routing mechanism makes the resource consumption more efficient by minimising the unnecessary signalling and stopping the sessions that cannot meet the demanded QoS requirement. The performance of both routing and QoS routing protocols are evaluated in the presence of information achieved from the link layer. The QoS enabled routing protocol shows a significant improvement in the protocol performance metrics applied in our measurements, such as packet delay and protocol overhead. The results achieved from this experiment lead us to analyse our proposed mechanism to add link layer information in control routing messages. Measuring wireless resources, such as bandwidth, with the help of periodic signalling packets makes both the routing and the QoS routing protocols react slower to wireless link changes. This problem can be overcome by more frequent advertisements.
---
paper_title: Unified Support for Quality of Service Metrics Management in Mobile Ad Hoc Networks Using OLSR
paper_content:
This article focuses on technical issues related to quality of service provisioning at the routing layer in ad hoc networks. It describes the design and implementation of a unified support for quality of service (QoS) metrics within the routing protocol OLSR. This is achieved by extending both the signalling messages and the route calculation process. Major benefits of the proposed approach are to allow dynamic enforcement and adaptation of QoS metrics according to policies defined in the network. QoS metric information is inserted in a generic way within OLSR signalling messages, taking advantage of Linux kernel plugins. These messages are used by the routing process in order to compute routes with respect to the chosen QoS metrics.
---
paper_title: Formulation of Distributed Coordination Function of IEEE 802.11 for Asynchronous Networks: Mixed Data Rate and Packet Size
paper_content:
...throughput of the high data-rate stations. We introduce a simple and standard-compliant algorithm to fairly utilize the channel. We first provide a formulation for the throughput with mixed data-rate connections. To alleviate the low performance of the high data-rate stations, we introduce a mechanism that implements an adaptive scheme to adjust the packet size according to the data rate. With this scheme, stations occupy the channel for equal amounts of time. We then extend the scheme to a frame-aggregation scheme to show how different packet sizes affect performance. Index Terms—Distributed coordination function, fairness, frame aggregation, IEEE 802.11, Markov model, QoS, wireless voice over Internet protocol (VoIP).
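The equal-airtime idea in this abstract can be sketched in a few lines: each station's payload is sized so that, at its own PHY rate, a frame occupies the channel for the same target time. The function name, the fixed per-frame overhead and the 1 ms target below are example assumptions, not values taken from the paper.

```python
def equal_airtime_payloads(rates_mbps, target_airtime_ms=1.0, overhead_us=50.0):
    """Return, per PHY rate, the payload size (bytes) whose transmission time
    plus a fixed per-frame overhead equals the same target airtime."""
    sizes = {}
    for rate in rates_mbps:
        payload_time_us = target_airtime_ms * 1000.0 - overhead_us
        bits = payload_time_us * rate            # Mb/s * us = bits
        sizes[rate] = int(bits // 8)
    return sizes

# Stations at 1, 2, 5.5 and 11 Mb/s all occupy roughly 1 ms of airtime per frame.
print(equal_airtime_payloads([1, 2, 5.5, 11]))
```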
---
paper_title: Performance analysis for IEEE 802.11e EDCF service differentiation
paper_content:
Having been originally developed as an extension of the wired local area networks, IEEE 802.11 lacks support for quality-of-service (QoS) and differential services. Since its introduction, various extensions and modifications have been studied to address this current need and the IEEE 802.11 Task Group E is responsible for developing a QoS-aware MAC protocol that considers several service differentiation mechanisms. However, the performance of service differentiation has only been evaluated by simulation. The analytical model that calculates the differential service performance corresponding to the contention parameter configuration has not been found yet. In this paper, we first briefly explain the enhanced distributed coordination function (EDCF) access method of IEEE 802.11e. We then introduce an analytical model, which can be used to calculate the traffic priority, throughput, and delay corresponding to the configuration of multiple DCF contention parameters under the saturation condition. A detailed simulation is provided to validate the proposed model. Finally, using the analytical model, we analyze the effect on service differentiation for each contention parameter. The contention parameters can be configured appropriately at each station to achieve specific needs of service differentiation for applications.
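The differentiation mechanism that the model analyses can be sketched as follows: each access category waits its own arbitration time (AIFS) and then draws a random backoff from its own contention window, so categories with smaller parameters win the channel more often on average. The category names and numeric values below are illustrative assumptions for the sketch.

```python
import random

ACCESS_CATEGORIES = {                     # smaller AIFS/CW => higher priority
    "voice":       {"aifs": 2, "cw_min": 3,  "cw_max": 7},
    "video":       {"aifs": 2, "cw_min": 7,  "cw_max": 15},
    "best_effort": {"aifs": 3, "cw_min": 15, "cw_max": 1023},
    "background":  {"aifs": 7, "cw_min": 15, "cw_max": 1023},
}

def deferral_us(ac, retries=0, slot_time_us=20):
    """Time a station of access category `ac` defers before transmitting:
    its AIFS plus a random backoff whose window doubles on each retry."""
    p = ACCESS_CATEGORIES[ac]
    cw = min((p["cw_min"] + 1) * (2 ** retries) - 1, p["cw_max"])
    return (p["aifs"] + random.randint(0, cw)) * slot_time_us
```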
---
paper_title: Admission control schemes for 802.11-based multi-hop mobile ad hoc networks: a survey
paper_content:
Mobile ad hoc networks (MANETs) promise unique communication opportunities. The IEEE 802.11 standard has allowed affordable MANETs to be realised. However, providing quality of service (QoS) assurances to MANET applications is difficult due to the unreliable wireless channel, the lack of centralised control, contention for channel access and node mobility. One of the most crucial components of a system for providing QoS assurances is admission control (AC). It is the job of the AC mechanism to estimate the state of the network's resources and thereby to decide which application data sessions can be admitted without promising more resources than are available and thus violating previously made guarantees. Unfortunately, due to the aforementioned difficulties, estimating the network resources and maintaining QoS guarantees are non-trivial tasks. Accordingly, a large body of work has been published on AC protocols for addressing these issues. However, as far as it is possible to tell, no wide-ranging survey of these approaches exists at the time of writing. This paper thus aims to provide a comprehensive survey of the salient unicast AC schemes designed for IEEE 802.11-based multi-hop MANETs, which were published in the peer-reviewed open literature during the period 2000–2007. The relevant considerations for the design of such protocols are discussed and several methods of classifying the schemes found in the literature are proposed. A brief outline of the operation, reaction to route failures, as well as the strengths and weaknesses of each protocol is given. This enables patterns in the design and trends in the development of AC protocols to be identified. Finally, directions for possible future work are provided.
---
paper_title: A QoS routing and admission control scheme for 802.11 ad hoc networks
paper_content:
This paper presents an admission control mechanism for multirate wireless ad hoc networks. Admission control depends on precise estimates of bandwidth available in the network and the bandwidth required by a new flow. Estimating these parameters in wireless ad hoc networks is challenging due to the shared and open nature of the wireless channel. Available bandwidth can only be determined by also considering interference at neighboring nodes. Also, due to self-interference of flows, the required bandwidth of a flow varies for each link of a route. The proposed admission control mechanism is integrated with a hop-by-hop ad hoc routing protocol, thus enabling it to identify alternate routes if the shortest path is congested. Each node measures available channel bandwidth through passive monitoring of the channel. The mechanism improves estimation accuracy by using a formula that considers possible spatial reuse from parallel transmissions. The protocol also uses temporal accounting to enable bandwidth estimation across links using different bit-rates. Simulation results show that the admission control mechanism can effectively control the traffic load and that considering parallel transmission leads to improved bandwidth estimation accuracy. The admission control mechanism can admit more traffic while maintaining QoS.
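A minimal sketch of the flow admission test implied above: because a multi-hop flow interferes with itself, each node must budget roughly the flow rate multiplied by the number of route hops inside its carrier-sense range, and the flow is rejected if any node's estimated available bandwidth would be exceeded. Approximating the contention count by hop-index distance, and the names admit_flow, available_bw and cs_range, are assumptions of this sketch, not the paper's exact formula.

```python
def admit_flow(route, flow_rate, available_bw, cs_range=2):
    """route: list of node ids along the path; available_bw: {node: idle bandwidth}.
    Returns (admitted, bottleneck_node)."""
    hops = len(route) - 1
    for i, node in enumerate(route):
        # hops whose transmitter is (approximately) within carrier-sense range
        contending = sum(1 for j in range(hops) if abs(j - i) <= cs_range)
        if flow_rate * contending > available_bw.get(node, 0.0):
            return False, node               # admitting would saturate this node
    return True, None
```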
---
paper_title: Channel-aware packet scheduling for MANETs
paper_content:
In this work, we present CaSMA, a packet scheduling mechanism for mobile ad hoc networks (MANETs) that takes into account both the congestion state and the end-to-end path duration. We show that CaSMA approximates an ideal scheduling mechanism in terms of maximizing the goodput and sharing the throughput (losses) fairly among the contending flows. Further, the simulation results show that both the average delay for CBR flows and the throughput for TCP can be improved substantially compared to FIFO.
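One way to picture a scheduler that accounts for end-to-end path duration is sketched below: packets whose route is about to expire are served first so they are not wasted on a stale path. This weighting rule is an assumption for illustration and is not CaSMA's exact policy, which also folds in the congestion state.

```python
import heapq

class PathAwareQueue:
    """Serve first the packet with the smallest residual path lifetime."""
    def __init__(self):
        self._heap, self._seq = [], 0
    def enqueue(self, packet, residual_path_lifetime_s):
        heapq.heappush(self._heap, (residual_path_lifetime_s, self._seq, packet))
        self._seq += 1                        # FIFO tie-break for equal lifetimes
    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```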
---
| Title: MAC and Routing Layer Supports for QoS in MANET: A Survey
Section 1: INTRODUCTION
Description 1: Provide an overview of Mobile Ad Hoc Networks (MANETs), their characteristics, and the importance of Quality of Service (QoS) in these networks. Discuss the challenges and goals associated with achieving QoS in MANETs.
Section 2: RELATED WORKS
Description 2: Summarize previous research and surveys related to the provisioning of QoS in MANETs. Highlight key studies and their contributions to the field.
Section 3: PROBLEM FOR PROVISIONING QOS IN MANET
Description 3: Discuss various challenges and issues encountered when provisioning QoS in MANETs, including unreliable wireless channels, lack of centralized coordination, channel contention, and dynamic network topology.
Section 4: QoS METRICS
Description 4: Detail the QoS metrics used to evaluate protocol performance in different layers: Physical Layer, Data Link Layer, and Network Layer.
Section 5: MAC PROTOCOLS
Description 5: Describe different MAC layer protocols and mechanisms that support QoS, including Distributed Coordination Function (DCF) and Enhanced Distributed Coordination Function (EDCF).
Section 6: NETWORK LAYER SOLUTION
Description 6: Explain the various QoS-aware routing protocols in the network layer, categorized into proactive, reactive, and hybrid routing protocols. Discuss the principles and examples of these protocols.
Section 7: ADMISSION CONTROL
Description 7: Discuss the mechanisms of admission control used to estimate network resources and decide on admitting new data sessions without violating QoS guarantees.
Section 8: SCHEDULING
Description 8: Describe different scheduling algorithms and techniques used to enhance QoS in MANETs, including no-priority scheduling, priority scheduling, weighted-hop scheduling, weighted distance scheduling, cluster-based multi-channel scheduling, and channel-aware packet scheduling.
Section 9: FUTURE WORKS
Description 9: Present potential future research directions and areas of improvement in QoS provisioning for MANETs, emphasizing the importance of admission control, scheduling mechanisms, and reservation techniques. |
A survey of survivability in mobile ad hoc networks | 6 | ---
paper_title: A survey of security issues in mobile ad hoc and sensor networks
paper_content:
Security in mobile ad hoc networks is difficult to achieve, notably because of the vulnerability of wireless links, the limited physical protection of nodes, the dynamically changing topology, the absence of a certification authority, and the lack of a centralized monitoring or management point. Earlier studies on mobile ad hoc networks (MANETs) aimed at proposing protocols for some fundamental problems, such as routing, and tried to cope with the challenges imposed by the new environment. These protocols, however, fully trust all nodes and do not consider the security aspect. They are consequently vulnerable to attacks and misbehavior. More recent studies focused on security problems in MANETs, and proposed mechanisms to secure protocols and applications. This article surveys these studies. It presents and discusses several security problems along with the currently proposed solutions (as of July 2005) at different network layers of MANETs. Security issues involved in this article include routing and data forwarding, medium access, key management and intrusion detection systems (IDSs). This survey also includes an overview of security in a particular type of MANET, namely, wireless sensor networks (WSNs).
---
paper_title: The Sybil Attack
paper_content:
Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.
---
paper_title: Ad hoc networking: an introduction
paper_content:
In recent years, mobile computing has enjoyed a tremendous rise in popularity. The continued miniaturization of mobile computing devices and the extraordinary rise of processing power available in mobile laptop computers combine to put more and better computer-based applications into the hands of a growing segment of the population. At the same time, the markets for wireless telephones and communication devices are experiencing rapid growth. Projections have been made that, by the year 2002, there will be more than a billion wireless communication devices in use, and more than 200 million wireless telephone handsets will be purchased annually. The rise of wireless telephony will change what it means to be “in touch”; already many people use their office telephone for taking messages while they are away and rely on their mobile telephone for more important or timely messages. Indeed, mobile phones are used for tasks as simple and as convenient as finding one’s associates in a crowded shopping mall or at a conference. A similar transformation awaits mobile computer users, and we can expect new applications to be built for equally mundane but immediately convenient uses. Much of the context for the transformation has to do with keeping in touch with the Internet. We expect to have “the network” at our disposal for the innumerable little conveniences that we have begun to integrate into our professional lives. We might wish to download a roadmap on the spur of the moment so that we can see what is available in the local area. We might wish to have driving suggestions sent to us, based on information from the global positioning system (GPS) in our car, using the services offered by various web sites. The combination of sufficiently fast and inexpensive wireless communication links and cheap mobile computing devices makes this a reality for many people today. In the future, the average traveler is likely to take such services for granted.
---
paper_title: Internet Security: An Intrusion-Tolerance Approach
paper_content:
The Internet has become essential to most enterprises and many private individuals. However, both the network and computer systems connected to it are still too vulnerable and attacks are becoming ever more frequent. To face this situation, traditional security techniques are insufficient and fault-tolerance techniques are becoming increasingly cost-effective. Nevertheless, intrusions are very special faults, and this has to be taken into account when selecting the fault-tolerance techniques.
---
paper_title: Survivable mobile wireless networks: issues, challenges, and research directions
paper_content:
In this paper we survey issues and challenges in enhancing the survivability of mobile wireless networks, with particular emphasis on military requirements. Research focus on three key aspects can significantly enhance network survivability: (i) establishing and maintaining survivable topologies that strive to keep the network connected even under attack, (ii) design for end-to-end communication in challenging environments in which the path from source to destination is not wholly available at any given instant of time, and (iii) the use of technology to enhance survivability, such as adaptive networks and satellites.
---
paper_title: Dependability and Its Threats: A Taxonomy
paper_content:
This paper gives the main definitions relating to dependability, a generic concept including as special case such attributes as reliability, availability, safety, confidentiality, integrity, maintainability, etc. Basic definitions are given first. They are then commented upon, and supplemented by additional definitions, which address the threats to dependability (faults, errors, failures), and the attributes of dependability. The discussion on the attributes encompasses the relationship of dependability with security, survivability and trustworthiness.
---
paper_title: Secure Routing for Mobile Ad hoc Networks
paper_content:
Buttyan L. et al. found a security flaw in Ariadne and proposed a secure routing protocol, EndairA, which can resist active-1-1 attacks according to ref. [9]. But unfortunately we discover an as yet unknown active-0-1 attack, the "man-in-the-middle attack", which EndairA cannot resist. So we propose a new secure routing protocol, EndairALoc. Analysis shows that EndairALoc can not only inherit the security of EndairA, but also resist the "man-in-the-middle attack" and even the wormhole attack. Furthermore, EndairALoc uses pairwise secret keys instead of the public keys EndairA used, so compared with EndairA, EndairALoc can save more energy in the process of route establishment.
---
paper_title: Global Positioning System Theory And Practice
paper_content:
---
paper_title: Mitigating routing misbehavior in mobile ad hoc networks
paper_content:
This paper describes two techniques that improve throughput in an ad hoc network in the presence of nodes that agree to forward packets but fail to do so. To mitigate this problem, we propose categorizing nodes based upon their dynamically measured behavior. We use a watchdog that identifies misbehaving nodes and a pathrater that helps routing protocols avoid these nodes. Through simulation we evaluate watchdog and pathrater using packet throughput, percentage of overhead (routing) transmissions, and the accuracy of misbehaving node detection. When used together in a network with moderate mobility, the two techniques increase throughput by 17% in the presence of 40% misbehaving nodes, while increasing the percentage of overhead transmissions from the standard routing protocol's 9% to 17%. During extreme mobility, watchdog and pathrater can increase network throughput by 27%, while increasing the overhead transmissions from the standard routing protocol's 12% to 24%.
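The two mechanisms can be sketched compactly: the watchdog counts next hops that are not overheard forwarding, and the pathrater scores routes so that paths through flagged nodes are avoided when alternatives exist. The threshold, the rating scale and the penalty value below are assumptions for this sketch.

```python
class Watchdog:
    """Count, per next hop, packets that were handed over but never overheard
    being retransmitted within the timeout; flag nodes past a threshold."""
    def __init__(self, threshold=5):
        self.failures, self.threshold = {}, threshold
    def report_not_overheard(self, next_hop):
        self.failures[next_hop] = self.failures.get(next_hop, 0) + 1
    def misbehaving(self):
        return {n for n, c in self.failures.items() if c >= self.threshold}

def path_rating(path, node_ratings, misbehaving):
    """Average per-node rating; flagged nodes get a strongly negative score."""
    scores = [-100.0 if n in misbehaving else node_ratings.get(n, 0.5) for n in path]
    return sum(scores) / len(scores)
```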
---
paper_title: SEAD: secure efficient distance vector routing for mobile wireless ad hoc networks
paper_content:
An ad hoc network is a collection of wireless computers (nodes), communicating among themselves over possibly multihop paths, without the help of any infrastructure such as base stations or access points. Although many previous ad hoc network routing protocols have been based in part on distance vector approaches, they have generally assumed a trusted environment. We design and evaluate the Secure Efficient Ad hoc Distance vector routing protocol (SEAD), a secure ad hoc network routing protocol based on the design of the Destination-Sequenced Distance-Vector routing protocol (DSDV). In order to support use with nodes of limited CPU processing capability, and to guard against denial-of-service (DoS) attacks in which an attacker attempts to cause other nodes to consume excess network bandwidth or processing time, we use efficient one-way hash functions and do not use asymmetric cryptographic operations in the protocol. SEAD performs well over the range of scenarios we tested, and is robust against multiple uncoordinated attackers creating incorrect routing state in any other node, even in spite of any active attackers or compromised nodes in the network.
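The one-way hash chain at the heart of this design can be sketched as below: a node commits to the last element of the chain and later releases earlier elements, which a receiver verifies by re-hashing toward the trusted anchor. Because hashing only moves forward along the chain, a forwarder can increase an authenticated metric but never decrease it. How chain indices are bound to (sequence number, metric) pairs is left abstract here.

```python
import hashlib

def hash_chain(seed: bytes, length: int):
    """h[0] = H(seed), h[i] = H(h[i-1]); h[-1] is published as the anchor."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(anchor: bytes, element: bytes, steps: int) -> bool:
    """Accept `element` if hashing it `steps` times reproduces the anchor."""
    h = element
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h == anchor

chain = hash_chain(b"secret-seed", 10)
assert verify(chain[-1], chain[6], 3)    # an element 3 steps before the anchor
```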
---
paper_title: Struggling against selfishness and black hole attacks in MANETs
paper_content:
Since mobile ad hoc networks (MANETs) are infrastructureless and multi-hop by nature, transmitting packets from any node to another usually relies on services provided by intermediate nodes. This reliance introduces a new vulnerability; one node could launch a Black Hole DoS attack by participating in the routing protocol and including itself in routes, then simply dropping packets it receives to forward. Another motivation for dropping packets in self-organized MANETs is resource preservation. Some solutions for detecting and isolating packet droppers have been recently proposed, but almost all of them employ the promiscuous mode monitoring approach (watchdog (WD)) which suffers from many problems, especially when employing the power control technique. In this paper we propose a novel monitoring approach that overcomes some of the WD's shortcomings, and improves the efficiency in detection. To overcome false detections due to node mobility and channel conditions we propose a Bayesian technique for the judgment, allowing node redemption before judgment. Finally, we suggest a social-based approach for the detection approval and isolation of guilty nodes. We analyze our solution and assess its performance by simulation. The results show a large improvement in detection by our monitoring solution compared with the WD, as well as the efficiency of our judgment and isolation techniques. Copyright © 2007 John Wiley & Sons, Ltd.
---
paper_title: Using Directional Antennas to Prevent Wormhole Attacks
paper_content:
Wormhole attacks enable an attacker with limited resources and no cryptographic material to wreak havoc on wireless networks. To date, no general defenses against wormhole attacks have been proposed. This paper presents an analysis of wormhole attacks and proposes a countermeasure using directional antennas. We present a cooperative protocol whereby nodes share directional information to prevent wormhole endpoints from masquerading as false neighbors. Our defense greatly diminishes the threat of wormhole attacks and requires no location information or clock synchronization.
---
paper_title: Secure Multipath Routing for Mobile Ad Hoc Networks
paper_content:
Multipath routing minimizes the consequences of security attacks deriving from collaborating malicious nodes in MANET, by maximizing the number of nodes that an adversary must compromise in order to take control of the communication. In this paper, we identify several attacks that render multipath routing protocols more vulnerable than it is expected, to collaborating malicious nodes. We propose a novel on-demand multipath routing protocol, the Secure Multipath Routing protocol (SecMR) and we analyze its security properties. The SecMR protocol can be easily integrated in a wide range of on-demand routing protocols, such as DSR and AODV.
---
paper_title: Packet leashes: a defense against wormhole attacks in wireless networks
paper_content:
As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a new, general mechanism, called packet leashes, for detecting and thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes.
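A geographic leash check can be sketched as follows: the receiver upper-bounds the true sender-receiver distance from the stamped position, the timestamps and known error bounds, and drops the packet if it could not have travelled over a single radio hop. The numeric bounds used here (node speed, clock error, position error, radio range) are example assumptions; the paper also defines temporal leashes and the TIK protocol, which are not shown.

```python
def geographic_leash_ok(sender_pos, receiver_pos, t_send, t_recv,
                        max_speed=30.0, clock_err=0.001, pos_err=5.0,
                        radio_range=250.0):
    """Return True if the packet could plausibly have crossed one radio hop."""
    dx = sender_pos[0] - receiver_pos[0]
    dy = sender_pos[1] - receiver_pos[1]
    measured = (dx * dx + dy * dy) ** 0.5
    # allow for movement since the position was stamped, plus stated errors
    bound = measured + 2 * max_speed * (t_recv - t_send + clock_err) + 2 * pos_err
    return bound <= radio_range
```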
---
paper_title: Mitigating Byzantine Attacks in Ad Hoc Wireless Networks
paper_content:
Attacks where adversaries have full control of a number of authenticated devices and behave arbitrarily to disrupt the network are referred to as Byzantine attacks. Traditional secure routing protocols are vulnerable to this class of attacks since they usually assume that once authenticated, a node can be trusted to execute the protocol correctly. We present a detailed description of several Byzantine attacks (black hole, flood rushing, wormhole and overlay network wormhole), analyze their mechanisms and describe the major mitigation techniques. Through simulation, we perform a quantitative evaluation of the impact of these attacks on an insecure on-demand routing protocol. The relative strength of the attacks is analyzed in terms of the magnitude of disruption caused per adversary. An implementation of the On-Demand Secure Byzantine Routing protocol (ODSBR) was created in order to quantify its ability to mitigate the considered attacks. ODSBR was chosen because its design addresses a wide range of Byzantine attacks.
---
paper_title: Ad-hoc on-demand distance vector routing
paper_content:
An ad-hoc network is the cooperative engagement of a collection of mobile nodes without the required intervention of any centralized access point or existing infrastructure. We present Ad-hoc On Demand Distance Vector Routing (AODV), a novel algorithm for the operation of such ad-hoc networks. Each mobile host operates as a specialized router, and routes are obtained as needed (i.e., on-demand) with little or no reliance on periodic advertisements. Our new routing algorithm is quite suitable for a dynamic self starting network, as required by users wishing to utilize ad-hoc networks. AODV provides loop-free routes even while repairing broken links. Because the protocol does not require global periodic routing advertisements, the demand on the overall bandwidth available to the mobile nodes is substantially less than in those protocols that do necessitate such advertisements. Nevertheless we can still maintain most of the advantages of basic distance vector routing mechanisms. We show that our algorithm scales to large populations of mobile nodes wishing to form ad-hoc networks. We also include an evaluation methodology and simulation results to verify the operation of our algorithm.
---
paper_title: DSR : The Dynamic Source Routing Protocol for Multi-Hop Wireless Ad Hoc Networks
paper_content:
The Dynamic Source Routing protocol (DSR) is a simple and efficient routing protocol designed specifically for use in multi-hop wireless ad hoc networks of mobile nodes. DSR allows the network to be completely self-organizing and self-configuring, without the need for any existing network infrastructure or administration. The protocol is composed of the two mechanisms of Route Discovery and Route Maintenance, which work together to allow nodes to discover and maintain source routes to arbitrary destinations in the ad hoc network. The use of source routing allows packet routing to be trivially loop-free, avoids the need for up-to-date routing information in the intermediate nodes through which packets are forwarded, and allows nodes forwarding or overhearing packets to cache the routing information in them for their own future use. All aspects of the protocol operate entirely on-demand, allowing the routing packet overhead of DSR to scale automatically to only that needed to react to changes in the routes currently in use. We have evaluated the operation of DSR through detailed simulation on a variety of movement and communication patterns, and through implementation and significant experimentation in a physical outdoor ad hoc networking testbed we have constructed in Pittsburgh, and have demonstrated the excellent performance of the protocol. In this chapter, we describe the design of DSR and provide a summary of some of our simulation and testbed implementation results for the protocol.
---
paper_title: Requirements definition for survivable network systems
paper_content:
Pervasive societal dependency on large scale, unbounded network systems, the substantial risks of such dependency, and the growing sophistication of system intruders, have focused increased attention on how to ensure network system survivability. Survivability is the capacity of a system to provide essential services even after successful intrusion and compromise, and to recover full services in a timely manner. Requirements for survivable systems must include definitions of essential and non essential services, plus definitions of new survivability services for intrusion resistance, recognition, and recovery. Survivable system requirements must also specify both legitimate and intruder usage scenarios, and survivability practices for system development, operation, and evolution. The paper defines a framework for survivable systems requirements definition and discusses requirements for several emerging survivability strategies. Survivability must be designed into network systems, beginning with effective survivability requirements analysis and definition.
---
paper_title: Intrusion detection in wireless ad hoc networks
paper_content:
Intrusion detection has, over the last few years, assumed paramount importance within the broad realm of network security, more so in the case of wireless ad hoc networks. These are networks that do not have an underlying infrastructure; the network topology is constantly changing. The inherently vulnerable characteristics of wireless ad hoc networks make them susceptible to attacks, and it may be too late before any counter action can take effect. Second, with so much advancement in hacking, if attackers try hard enough they will eventually succeed in infiltrating the system. This makes it important to constantly (or at least periodically) monitor what is taking place on a system and look for suspicious behavior. Intrusion detection systems (IDSs) do just that: monitor audit data, look for intrusions to the system, and initiate a proper response (e.g., email the systems administrator, start an automatic retaliation). As such, there is a need to complement traditional security mechanisms with efficient intrusion detection and response. In this article we present a survey on the work that has been done in the area of intrusion detection in mobile ad hoc networks.
---
paper_title: Secure ad hoc on-demand distance vector routing
paper_content:
This article gives an overview of different approaches to provide security features to routing protocols in mobile ad hoc networks (MANET). It also describes Secure AODV (an extension to AODV that provides security features) giving a summary of its operation and talking about future enhancements to the protocol.
---
paper_title: Fault and intrusion tolerance in wireless ad hoc networks
paper_content:
Current algorithms for distributed applications, such as the wireless GRID, for wireless ad hoc networks (WAHN) contain few mechanisms for providing robust/tolerant network operation in the face of security attacks launched on the network by intruders. One approach to address this issue is to design these applications for WAHNs in a way that they can handle intruder-induced malicious faults. However, this presents several drawbacks, including the considerable investment that it can induce. We present a new approach for building intrusion-tolerant WAHNs. The approach relies on extending the capabilities of existing applications to handle intruders without modifying their structure. We describe a new network mechanism for resource allocation using capabilities for detecting and recovering from intruder-induced malicious faults. We also present a wireless router component that allows these mechanisms to be added to existing wireless nodes.
---
paper_title: Techniques for intrusion-resistant ad hoc routing algorithms (TIARA)
paper_content:
Architecture Technology Corporation (ATC) has developed a new approach for building intrusion resistant ad hoc networks called TIARA (Techniques for Intrusion-Resistant Ad Hoc Routing Algorithms). The approach, developed with funding from DARPA's Fault Tolerant Networks program, relies on extending the capabilities of existing ad hoc routing algorithms to handle intruders without modifying these algorithms. TIARA implements new network layer survivability mechanisms for detecting and recovering from intruder induced malicious faults that work in concert with existing ad hoc routing algorithms and augment their capabilities. The TIARA implementation architecture is designed to allow these survivability mechanisms to be "plugged" into existing wireless routers with little effort.
---
paper_title: Bootstrapping security associations for routing in mobile ad-hoc networks
paper_content:
To date, most solutions proposed for secure routing in mobile ad-hoc networks (MANETs), assume that secure associations between pairs of nodes can be established on-line; e.g., by a trusted third party, by distributed trust establishment. However, establishing such security associations, with or without trusted third parties, requires reliance on routing layer security. In this paper, we eliminate this apparent cyclic dependency between security services and secure routing in MANETs and show how to bootstrap security for the routing layer. We use the notion of statistically unique and cryptographically verifiable (SUCV) identifiers to implement a secure binding between IP addresses and keys that is independent of any trusted security service. We illustrate our solution with the dynamic source routing (DSR) protocol and compare it with other solutions for secure routing.
---
paper_title: ODSBR: An on-demand secure Byzantine resilient routing protocol for wireless ad hoc networks
paper_content:
Ad hoc networks offer increased coverage by using multihop communication. This architecture makes services more vulnerable to internal attacks coming from compromised nodes that behave arbitrarily to disrupt the network, also referred to as Byzantine attacks. In this work, we examine the impact of several Byzantine attacks performed by individual or colluding attackers. We propose ODSBR, the first on-demand routing protocol for ad hoc wireless networks that provides resilience to Byzantine attacks caused by individual or colluding nodes. The protocol uses an adaptive probing technique that detects a malicious link after log n faults have occurred, where n is the length of the path. Problematic links are avoided by using a route discovery mechanism that relies on a new metric that captures adversarial behavior. Our protocol never partitions the network and bounds the amount of damage caused by attackers. We demonstrate through simulations ODSBR's effectiveness in mitigating Byzantine attacks. Our analysis of the impact of these attacks versus the adversary's effort gives insights into their relative strengths, their interaction, and their importance when designing multihop wireless routing protocols.
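The log n fault localization mentioned in this abstract can be pictured as a binary search over the probed path, sketched below. The probe_ok callback stands in for the protocol's authenticated per-probe acknowledgements and is an assumption of this sketch; it must be true for the source end of the path and false for the destination end.

```python
def locate_faulty_link(path, probe_ok):
    """Binary-search the first link on `path` past which acknowledgements stop.
    path: [source, ..., destination]; probe_ok(node) -> bool.
    Returns the suspected faulty link as a (node, next_node) pair."""
    lo, hi = 0, len(path) - 1       # invariant: path[lo] answers, path[hi] does not
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if probe_ok(path[mid]):
            lo = mid
        else:
            hi = mid
    return path[lo], path[hi]
```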
---
paper_title: Stimulating Cooperation in Self-Organizing Mobile Ad Hoc Networks
paper_content:
In military and rescue applications of mobile ad hoc networks, all the nodes belong to the same authority; therefore, they are motivated to cooperate in order to support the basic functions of the network. In this paper, we consider the case when each node is its own authority and tries to maximize the benefits it gets from the network. More precisely, we assume that the nodes are not willing to forward packets for the benefit of other nodes. This problem may arise in civilian applications of mobile ad hoc networks. In order to stimulate the nodes for packet forwarding, we propose a simple mechanism based on a counter in each node. We study the behavior of the proposed mechanism analytically and by means of simulations, and detail the way in which it could be protected against misuse.
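The counter mechanism can be sketched very simply: forwarding a packet for another node earns one unit of credit, while originating a packet that needs n intermediate hops costs n units, so a node cannot keep sending its own traffic without also relaying. The class name and the exact cost model are simplifications; the paper analyses several counter variants and how to protect the counter from tampering.

```python
class NugletCounter:
    def __init__(self, initial_credit=0):
        self.credit = initial_credit
    def on_forward_for_other(self):
        self.credit += 1                       # relaying earns credit
    def try_send_own(self, n_intermediate_hops):
        if self.credit >= n_intermediate_hops:
            self.credit -= n_intermediate_hops # own traffic spends credit
            return True
        return False                           # must relay more before sending
```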
---
paper_title: Cooperation in wireless ad hoc networks
paper_content:
In wireless ad hoc networks, nodes communicate with far off destinations using intermediate nodes as relays. Since wireless nodes are energy constrained, it may not be in the best interest of a node to always accept relay requests. On the other hand, if all nodes decide not to expend energy in relaying, then network throughput will drop dramatically. Both these extreme scenarios (complete cooperation and complete noncooperation) are inimical to the interests of a user. In this paper we address the issue of user cooperation in ad hoc networks. We assume that nodes are rational, i.e., their actions are strictly determined by self interest, and that each node is associated with a minimum lifetime constraint. Given these lifetime constraints and the assumption of rational behavior, we are able to determine the optimal throughput that each node should receive. We define this to be the rational Pareto optimal operating point. We then propose a distributed and scalable acceptance algorithm called generous tit-for-tat (GTFT). The acceptance algorithm is used by the nodes to decide whether to accept or reject a relay request. We show that GTFT results in a Nash equilibrium and prove that the system converges to the rational and optimal operating point.
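The acceptance rule can be sketched as a generous tit-for-tat comparison of help given versus help received, as below; the generosity margin and the bookkeeping are assumptions for illustration, since the paper's GTFT operates on normalized relay ratios tied to each node's lifetime constraint.

```python
def gtft_accept(relays_given, relays_received, generosity=0.05):
    """Accept a relay request as long as the node has not already relayed
    noticeably more than it has been helped, being slightly generous so that
    cooperation can bootstrap from zero."""
    allowance = relays_received + generosity * max(relays_received, 1)
    return relays_given <= allowance
```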
---
paper_title: Performance Analysis of the CONFIDANT Protocol: Cooperation of Nodes – Fairness In Dynamic Ad Hoc Networks
paper_content:
Mobile ad-hoc networking works properly only if the participating nodes cooperate in routing and forwarding. However, it may be advantageous for individual nodes not to cooperate. We propose a protocol, called CONFIDANT, for making misbehavior unattractive; it is based on selective altruism and utilitarianism. It aims at detecting and isolating misbehaving nodes, thus making it unattractive to deny cooperation. Trust relationships and routing decisions are based on experienced, observed, or reported routing and forwarding behavior of other nodes. The detailed implementation of CONFIDANT in this paper assumes that the network layer is based on the Dynamic Source Routing (DSR) protocol. We present a performance analysis of DSR fortified by CONFIDANT and compare it to regular defenseless DSR. It shows that a network with CONFIDANT and up to 60% of misbehaving nodes behaves almost as well as a benign network, in sharp contrast to a defenseless network. All simulations have been implemented and performed in GloMoSim.
---
paper_title: On designing MAC protocols for wireless networks using directional antennas
paper_content:
We investigate the possibility of using directional antennas for medium access control in wireless ad hoc networks. Previous research in ad hoc networks typically assumes the use of omnidirectional antennas at all nodes. With omnidirectional antennas, while two nodes are communicating using a given channel, MAC protocols such as IEEE 802.11 require all other nodes in the vicinity to remain silent. With directional antennas, two pairs of nodes located in each other's vicinity may potentially communicate simultaneously, increasing spatial reuse of the wireless channel. Range extension due to higher gain of directional antennas can also be useful in discovering fewer hop routes. However, new problems arise when using directional beams that simple modifications to 802.11 may not be able to mitigate. This paper identifies these problems and evaluates the tradeoffs associated with them. We also design a directional MAC protocol (MMAC) that uses multihop RTSs to establish links between distant nodes and then transmits CTS, DATA, and ACK over a single hop. While MMAC does not address all the problems identified with directional communication, it is an attempt to exploit the primary benefits of beamforming in the presence of some of these problems. Results show that MMAC can perform better than IEEE 802.11, although we find that the performance is dependent on the topology and flow patterns in the system.
---
paper_title: Resisting Malicious Packet Dropping in Wireless Ad Hoc Networks
paper_content:
Most of the routing protocols in wireless ad hoc networks, such as DSR, assume nodes are trustworthy and cooperative. This assumption renders wireless ad hoc networks vulnerable to various types of Denial of Service (DoS) attacks. We present a distributed probing technique to detect and mitigate one type of DoS attacks, namely malicious packet dropping, in wireless ad hoc networks. A malicious node can promise to forward packets but in fact fails to do so. In our distributed probing technique, every node in the network will probe the other nodes periodically to detect if any of them fail to perform the forwarding function. Subsequently, node state information can be utilized by the routing protocol to bypass those malicious nodes. Our experiments show that in a moderately changing network, the probing technique can detect most of the malicious nodes with a relatively low false positive rate. The packet delivery rate in the network can also be increased accordingly.
---
paper_title: Friends and foes: preventing selfishness in open mobile ad hoc networks
paper_content:
Technological advances are leveraging the widespread deployment of mobile ad hoc networks. An interesting characteristic of ad hoc networks is their self-organization and their dependence on the behavior of individual nodes. Until recently, most research on ad hoc networks has assumed that all nodes were cooperative. This assumption is no longer valid in spontaneous networks formed by individuals with diverse goals and interests. In such an environment, the presence of selfish nodes may significantly degrade the performance of the ad hoc network. This paper proposes a novel algorithm that aims to discourage selfish behavior in mobile ad hoc networks.
---
paper_title: Struggling against selfishness and black hole attacks in MANETs
paper_content:
Since mobile ad hoc networks (MANETs) are infrastructureless and multi-hop by nature, transmitting packets from any node to another usually relies on services provided by intermediate nodes. This reliance introduces a new vulnerability; one node could launch a Black Hole DoS attack by participating in the routing protocol and including itself in routes, then simply dropping the packets it receives to forward. Another motivation for dropping packets in self-organized MANETs is resource preservation. Some solutions for detecting and isolating packet droppers have recently been proposed, but almost all of them employ the promiscuous-mode monitoring approach (watchdog, WD), which suffers from many problems, especially when the power control technique is employed. In this paper we propose a novel monitoring approach that overcomes some of the WD's shortcomings and improves the efficiency of detection. To overcome false detections due to node mobility and channel conditions, we propose a Bayesian technique for the judgment, allowing node redemption before judgment. Finally, we suggest a social-based approach for the detection approval and isolation of guilty nodes. We analyze our solution and assess its performance by simulation. The results show a large improvement in detection for our monitoring solution compared with the WD, as well as the efficiency of our judgment and isolation techniques.
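The Bayesian judgment step can be illustrated with a simple Beta-Bernoulli sketch; this is an assumption made for illustration (the paper's actual model, evidence threshold and redemption rules differ): observed forwards and drops update a posterior dropping rate, and a monitored neighbour is judged guilty only once enough evidence has accumulated, which gives an honest but mobile node a chance of redemption.

class BayesianWatch:
    """Illustrative Beta-Bernoulli judgment of a monitored neighbour (not the paper's model)."""
    def __init__(self, alpha=1.0, beta=1.0, drop_threshold=0.5, min_obs=20):
        self.alpha = alpha                  # pseudo-count of observed drops
        self.beta = beta                    # pseudo-count of observed forwards
        self.drop_threshold = drop_threshold
        self.min_obs = min_obs              # evidence required before any judgment

    def observe(self, packet_dropped):
        if packet_dropped:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def posterior_drop_rate(self):
        return self.alpha / (self.alpha + self.beta)

    def judged_guilty(self):
        enough_evidence = (self.alpha + self.beta) >= self.min_obs
        return enough_evidence and self.posterior_drop_rate() > self.drop_threshold

w = BayesianWatch()
for dropped in [False] * 15 + [True] * 10:   # mostly forwards, then a burst of drops
    w.observe(dropped)
print(round(w.posterior_drop_rate(), 2), w.judged_guilty())   # 0.41 False: not yet condemned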
---
paper_title: CORE: A Collaborative Reputation Mechanism to enforce node cooperation in Mobile Ad hoc Networks
paper_content:
Countermeasures for node misbehavior and selfishness are mandatory requirements in MANET. Selfishness that causes lack of node activity cannot be solved by classical security means that aim at verifying the correctness and integrity of an operation. We suggest a generic mechanism based on reputation to enforce cooperation among the nodes of a MANET to prevent selfish behavior. Each network entity keeps track of other entities’ collaboration using a technique called reputation. The reputation is calculated based on various types of information on each entity’s rate of collaboration. Since there is no incentive for a node to maliciously spread negative information about other nodes, simple denial of service attacks using the collaboration technique itself are prevented. The generic mechanism can be smoothly extended to basic network functions with little impact on existing protocols.
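A hedged sketch of a CORE-style aggregate (the weights, the value range and the function name are illustrative assumptions rather than the protocol's exact formulation): subjective, indirect and functional reputation values are combined into a bounded score, and only non-negative indirect reports are admitted, so spreading negative gossip about other nodes brings no advantage.

def combined_reputation(subjective, indirect_reports, functional,
                        w_subj=0.5, w_ind=0.2, w_func=0.3):
    """Toy CORE-style reputation aggregation; all values lie in [-1, 1]."""
    # Only non-negative indirect information is accepted, so a node cannot be
    # slandered through maliciously spread negative reports.
    positive = [r for r in indirect_reports if r >= 0.0]
    indirect = sum(positive) / len(positive) if positive else 0.0
    value = w_subj * subjective + w_ind * indirect + w_func * functional
    return max(-1.0, min(1.0, value))

# A neighbour observed to forward well, with mixed gossip about it:
rep = combined_reputation(subjective=0.8, indirect_reports=[0.6, -0.9, 0.4], functional=0.7)
print(round(rep, 2))   # requests from nodes whose reputation falls below 0 would be refused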
---
paper_title: Data Security in MANETs using Multipath Routing and Directional Transmission
paper_content:
A cross-layer approach is investigated to improve data security in Mobile Ad Hoc Networks (MANETs). The use of directional antennas and intelligent multipath routing is proposed to enhance end-to-end data confidentiality and data availability with respect to outsider attacks. The goal is to impede rogue attempts to gain unauthorized access to classified information or disrupt the information flow. The interplay between the physical, link, and network layers is considered. A novel simulator is developed to accurately quantify the data confidentiality benefits of these approaches. This study leverages the existence of multiple paths between end-nodes to statistically improve data confidentiality and data availability in hostile MANET environments, where both insider and outsider adversaries may be present. Simulation results show that the proposed mechanisms can greatly improve data confidentiality as compared to existing schemes. These mechanisms can also improve data availability.
---
paper_title: Sprite: a simple, cheat-proof, credit-based system for mobile ad-hoc networks
paper_content:
Mobile ad hoc networking has been an active research area for several years. How to stimulate cooperation among selfish mobile nodes, however, is not well addressed yet. In this paper, we propose Sprite, a simple, cheat-proof, credit-based system for stimulating cooperation among selfish nodes in mobile ad hoc networks. Our system provides incentive for mobile nodes to cooperate and report actions honestly. Compared with previous approaches, our system does not require any tamper-proof hardware at any node. Furthermore, we present a formal model of our system and prove its properties. Evaluations of a prototype implementation show that the overhead of our system is small. Simulations and analysis show that mobile nodes can cooperate and forward each other's messages, unless the resource of each node is extremely low.
---
paper_title: Secure data transmission in mobile ad hoc networks
paper_content:
The vision of nomadic computing with its ubiquitous access has stimulated much interest in the Mobile Ad Hoc Networking (MANET) technology. However, its proliferation strongly depends on the availability of security provisions, among other factors. In the open, collaborative MANET environment practically any node can maliciously or selfishly disrupt and deny communication of other nodes. In this paper, we present and evaluate the Secure Message Transmission (SMT) protocol, which safeguards the data transmission against arbitrary malicious behavior of other nodes. SMT is a lightweight, yet very effective, protocol that can operate solely in an end-to-end manner. It exploits the redundancy of multi-path routing and adapts its operation to remain efficient and effective even in highly adverse environments. SMT is capable of delivering up to 250% more data messages than a protocol that does not secure the data transmission. Moreover, SMT outperforms an alternative single-path protocol, a secure data forwarding protocol we term Secure Single Path (SSP) protocol. SMT imposes up to 68% less routing overhead than SSP, delivers up to 22% more data packets and achieves end-to-end delays that are up to 94% lower than those of SSP. Thus, SMT is better suited to support QoS for real-time communications in the ad hoc networking environment. The security of data transmission is achieved without restrictive assumptions on the network nodes' trust and network membership, without the use of intrusion detection schemes, and at the expense of moderate multi-path transmission overhead only.
---
paper_title: SPREAD: enhancing data confidentiality in mobile ad hoc networks
paper_content:
Security is a critical issue in a mobile ad hoc network (MANET). We propose and investigate a novel scheme, security protocol for reliable data delivery (SPREAD), to enhance the data confidentiality service in a mobile ad hoc network. The proposed SPREAD scheme aims to provide further protection to secret messages from being compromised (or eavesdropped) when they are delivered across the insecure network. The basic idea is to transform a secret message into multiple shares by secret sharing schemes and then deliver the shares via multiple independent paths to the destination so that even if a small number of nodes that are used to relay the message shares are compromised, the secret message as a whole is not compromised. We present the overall system architecture and investigate the major design issues. We first describe how to obtain message shares using the secret sharing schemes. Then we study the appropriate choice of the secret sharing schemes and the optimal allocation of the message shares onto each path in order to maximize the security. The results show that the SPREAD is more secure and also provides a certain degree of reliability without sacrificing the security. Thirdly, the multipath routing techniques are discussed and the path set optimization algorithm is developed to find the multiple paths with the desired property, i.e., the overall path set providing maximum security. Finally, we present the simulation results to justify the feasibility and evaluate the effectiveness of SPREAD.
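The share-generation step can be conveyed with a generic Shamir (k, n) threshold scheme; the paper's actual sharing scheme and its share-allocation optimization are not reproduced here, and the prime modulus and parameters below are illustrative. The property that matters for SPREAD is visible in the sketch: fewer than k captured shares, i.e. fewer than k compromised paths, reveal nothing about the secret.

import random

P = 2**61 - 1   # a large prime modulus; adequate for a toy demonstration

def make_shares(secret, k, n):
    """Shamir (k, n) sharing over GF(P): any k of the n shares reconstruct the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation of the polynomial at x
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# Split a message (as an integer) into 5 shares sent over 5 node-disjoint paths;
# the destination rebuilds it even if two of the paths are dropped or compromised.
shares = make_shares(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]) == 123456789)
print(reconstruct([shares[0], shares[2], shares[4]]) == 123456789)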
---
paper_title: Diversity coding for transparent self-healing and fault-tolerant communication networks
paper_content:
A channel coding approach called diversity coding is introduced for self-healing and fault tolerance in digital communication networks, enabling nearly instantaneous recovery from link failures. To achieve this goal, the problem of link failures is treated as an erasure channel problem. Implementation details of this technique in existing and future communication networks are discussed.
---
paper_title: Efficient dispersal of information for security, load balancing, and fault tolerance
paper_content:
An Information Dispersal Algorithm (IDA) is developed that breaks a file F of length L = |F| into n pieces F_i, 1 ≤ i ≤ n, each of length |F_i| = L/m, so that every m pieces suffice for reconstructing F. Dispersal and reconstruction are computationally efficient. The sum of the lengths |F_i| is (n/m)·L. Since n/m can be chosen to be close to 1, the IDA is space efficient. IDA has numerous applications to secure and reliable storage of information in computer networks and even on single disks, to fault-tolerant and efficient transmission of information in networks, and to communications between processors in parallel computers. For the latter problem, provably time-efficient and highly fault-tolerant routing on the n-cube is achieved, using just constant-size buffers.
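A toy dispersal/reconstruction pair in the spirit of the IDA, written over the prime field GF(257) for brevity (a practical implementation would normally work over GF(2^8), and the parameters here are illustrative): each of the n pieces carries |F|/m symbols, and any m pieces rebuild the file by solving a small Vandermonde system.

def ida_split(data, m, n, p=257):
    """Toy Rabin-style dispersal: n pieces of len(data)/m symbols each; any m rebuild the data."""
    data = list(data) + [0] * ((-len(data)) % m)        # pad to a multiple of m
    blocks = [data[i:i + m] for i in range(0, len(data), m)]
    return {x: [sum(b[j] * pow(x, j, p) for j in range(m)) % p for b in blocks]
            for x in range(1, n + 1)}

def ida_join(pieces, m, p=257):
    xs = sorted(pieces)[:m]
    rows = [[pow(x, j, p) for j in range(m)] + list(pieces[x]) for x in xs]
    # Gauss-Jordan elimination mod p on the Vandermonde system recovers every block.
    for col in range(m):
        piv = next(r for r in range(col, m) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], -1, p)
        rows[col] = [v * inv % p for v in rows[col]]
        for r in range(m):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[col])]
    out = []
    for k in range(len(rows[0]) - m):
        out.extend(rows[j][m + k] for j in range(m))
    return bytes(out)

msg = b"dispersal demo"
pieces = ida_split(msg, m=3, n=5)
subset = {x: pieces[x] for x in (1, 3, 5)}              # any 3 of the 5 pieces suffice
print(ida_join(subset, m=3).rstrip(b"\x00") == msg)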
---
paper_title: A survey of security issues in mobile ad hoc and sensor networks
paper_content:
Security in mobile ad hoc networks is difficult to achieve, notably because of the vulnerability of wireless links, the limited physical protection of nodes, the dynamically changing topology, the absence of a certification authority, and the lack of a centralized monitoring or management point. Earlier studies on mobile ad hoc networks (MANETs) aimed at proposing protocols for some fundamental problems, such as routing, and tried to cope with the challenges imposed by the new environment. These protocols, however, fully trust all nodes and do not consider the security aspect. They are consequently vulnerable to attacks and misbehavior. More recent studies focused on security problems in MANETs, and proposed mechanisms to secure protocols and applications. This article surveys these studies. It presents and discusses several security problems along with the currently proposed solutions (as of July 2005) at different network layers of MANETs. Security issues involved in this article include routing and data forwarding, medium access, key management and intrusion detection systems (IDSs). This survey also includes an overview of security in a particular type of MANET, namely, wireless sensor networks (WSNs).
---
paper_title: Self-Organized Public-Key Management for Mobile Ad Hoc Networks
paper_content:
In contrast with conventional networks, mobile ad hoc networks usually do not provide online access to trusted authorities or to centralized servers, and they exhibit frequent partitioning due to link and node failures and to node mobility. For these reasons, traditional security solutions that require online trusted authorities or certificate repositories are not well-suited for securing ad hoc networks. We propose a fully self-organized public-key management system that allows users to generate their public-private key pairs, to issue certificates, and to perform authentication regardless of the network partitions and without any centralized services. Furthermore, our approach does not require any trusted authority, not even in the system initialization phase.
---
paper_title: Secure, redundant, and fully distributed key management scheme for mobile ad hoc networks: an analysis
paper_content:
Security poses a major challenge in ad hoc networks today due to the lack of fixed or organizational infrastructure. This paper proposes a modification to the existing "fully distributed certificate authority" scheme for ad hoc networks. In the proposed modification, redundancy is introduced by allocating more than one share to each node in order to increase the probability of creating the certificate for a node in a highly mobile network. A probabilistic analysis is carried out to analyze the trade-offs between the ease of certificate creation and the security provided by the proposed scheme. The analysis carried out from the intruder's perspective suggests that in the worst-case scenario, the intruder is just "one node" away from a legitimate node in compromising the certificate. The analysis also outlines the parameter selection criteria for a legitimate node to maintain a margin of advantage over an intruder in creating the certificate.
---
paper_title: The official PGP user's guide
paper_content:
A pretty bad problem, by John Perry Barlow. Part 1, Essential topics: quick overview; why do you need PGP?; how public key cryptography works; installing PGP; using PGP; managing keys; advanced topics; beware of snake oil. Part 2, Special topics: useful details; setting configuration parameters; a peek under the hood; vulnerabilities; legal issues; compatibility with previous and future versions of PGP; sources of information on PGP. Appendix: PGP quick reference.
---
| Title: A Survey of Survivability in Mobile Ad Hoc Networks
Section 1: INTRODUCTION
Description 1: Introduce the importance and applications of mobile ad hoc networks (MANETs) and outline the challenges in ensuring their survivability against attacks.
Section 2: SURVIVABILITY CONCEPTS
Description 2: Define survivability, discuss key properties of survivable systems, and present a classification of defense lines including preventive, reactive, and intrusion tolerance mechanisms.
Section 3: ISSUES AND MECHANISMS FOR SECURITY IN MANETS
Description 3: Summarize the characteristics of MANETs, their susceptibility to various security issues, and conventional countermeasures applied to address these issues.
Section 4: SURVIVABILITY REQUIREMENTS FOR MANETS
Description 4: Analyze the requirements for achieving survivability in MANETs, with a focus on maintaining essential services and addressing the constraints and dynamic nature of these networks.
Section 5: SURVIVABLE INITIATIVES FOR MANETS
Description 5: Describe and categorize several initiatives aimed at building survivable MANETs, divided into three main groups: route discovery, data forwarding, and key management and access control.
Section 6: CONCLUSION
Description 6: Summarize the findings of the survey, highlighting the need for cooperative defense lines and multi-layered approaches to enhance survivability in MANETs. Discuss future directions for research in this field. |
An overview of recent remote sensing and GIS based research in ecological informatics | 10 | ---
paper_title: Non-stationarity and local approaches to modelling the distributions of wildlife
paper_content:
Despite a growing interest in species distribution modelling, relatively little attention has been paid to spatial autocorrelation and non-stationarity. Both spatial autocorrelation (the tendency for adjacent locations to be more similar than distant ones) and non-stationarity (the variation in modelled relationships over space) are likely to be common properties of ecological systems. This paper focuses on non-stationarity and uses two local techniques, geographically weighted regression (GWR) and varying coefficient modelling (VCM), to assess its impact on model predictions. We extend two published studies, one on the presence–absence of calandra larks in Spain and the other on bird species richness in Britain, to compare GWR and VCM with the more usual global generalized linear modelling (GLM) and generalized additive modelling (GAM). For the calandra lark data, GWR and VCM produced better-fitting models than GLM or GAM. VCM in particular gave significantly reduced spatial autocorrelation in the model residuals. GWR showed that individual predictors became stationary at different spatial scales, indicating that distributions are influenced by ecological processes operating over multiple scales. VCM was able to predict occurrence accurately on independent data from the same geographical area as the training data but not beyond, whereas the GAM produced good results on all areas. Individual predictions from the local methods often differed substantially from the global models. For the species richness data, VCM and GWR produced far better predictions than ordinary regression. Our analyses suggest that modellers interpolating data to produce maps for practical actions (e.g. conservation) should consider local methods, whereas they should not be used for extrapolation to new areas. We argue that local methods are complementary to global methods, revealing details of habitat associations and data properties which global methods average out and miss.
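The core of GWR is an ordinary weighted least-squares fit repeated at every prediction site, with weights that decay with geographical distance. The sketch below uses synthetic coordinates, a Gaussian kernel and a fixed bandwidth (bandwidth calibration, the VCM variant and the bird data themselves are omitted; all values are invented), and shows the locally estimated slope drifting across the study area, i.e. non-stationarity:

import numpy as np

def gwr_coefficients(coords, X, y, site, bandwidth):
    """Local regression coefficients at one site, using Gaussian distance-decay weights."""
    d = np.linalg.norm(coords - site, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xd = np.column_stack([np.ones(len(X)), X])           # add an intercept column
    W = np.diag(w)
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)  # [intercept, local slopes...]

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))              # synthetic sample locations
elev = rng.uniform(0, 1000, size=200)                    # synthetic predictor
# Non-stationary response: the effect of the predictor drifts from west to east.
richness = 5 + (0.01 + 0.00005 * coords[:, 0]) * elev + rng.normal(0, 1, 200)

for x0 in (10.0, 90.0):
    b = gwr_coefficients(coords, elev.reshape(-1, 1), richness,
                         np.array([x0, 50.0]), bandwidth=20.0)
    print(f"site x={x0:.0f}: local slope = {b[1]:.5f}")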
---
paper_title: Environmental sensor networks in ecological research
paper_content:
Environmental sensor networks offer a powerful combination of distributed sensing capacity, real-time data visualization and analysis, and integration with adjacent networks and remote sensing data streams. These advances have become a reality as a combined result of the continuing miniaturization of electronics, the availability of large data storage and computational capacity, and the pervasive connectivity of the Internet. Environmental sensor networks have been established and large new networks are planned for monitoring multiple habitats at many different scales. Projects range in spatial scale from continental systems designed to measure global change and environmental stability to those involved with the monitoring of only a few meters of forest edge in fragmented landscapes. Temporal measurements have ranged from the evaluation of sunfleck dynamics at scales of seconds, to daily CO2 fluxes, to decadal shifts in temperatures. Above-ground sensor systems are partnered with subsurface soil measurement networks for physical and biological activity, together with aquatic and riparian sensor networks to measure groundwater fluxes and nutrient dynamics. More recently, complex sensors, such as networked digital cameras and microphones, as well as newly emerging sensors, are being integrated into sensor networks for hierarchical methods of sensing that promise a further understanding of our ecological systems by revealing previously unobservable phenomena.
---
paper_title: Individual tree-based species classification in high spatial resolution aerial images of forests using fuzzy sets
paper_content:
This paper presents an application of fuzzy set theory for classification of individual tree crowns into species groups, in high spatial resolution colour infrared aerial photographs. In this type of digital image, the trees are visible as individual objects. The number of individuals to classify might be very large in the acquired set of photographs, but the applied grade of membership (GoM) model, which this paper focuses on, is suitable for dealing with large datasets. The extent of each tree crown in the image is defined using a previously published procedure. Based on colour information (hue), an optimal fuzzy thresholding technique divides the tree crown universal set into a dominant set and its minor complement. Nine different features of each image object are estimated, and transformed using principal component analysis (PCA). The first three or four PCs are subsequently used in the GoM model. Furthermore, the concept of fuzzy relation is applied to one of the descriptors: to predict a centroid of the star-shaped pattern of Norway spruce. The GoM model needs initial membership values, which are estimated using an unsupervised fuzzy clustering approach of small subareas (branches in the tree crowns) and their corresponding digital numbers in each colour band (RGB images). The complete classification system comprises three independent components: decisions on coniferous/deciduous, Scots pine/Norway spruce, and birch/aspen. The accuracies (ground patches excluded), using the supervised GoM model with cross-validation, are 87%, 76%, and 79%, respectively. The accuracy for the compounded system is 67%.
---
paper_title: Merging hyperspectral and panchromatic image data: qualitative and quantitative analysis
paper_content:
Image fusion is one of the most commonly used image enhancement techniques in remote sensing for improving the spatial quality of the source image with minimal spectral distortion. To date, data fusion algorithms have mostly been developed and applied to improve the spatial resolution of multispectral images, and their performance has been evaluated on source images such as Landsat Enhanced Thematic Mapper Plus (ETM+), Landsat Multispectral (MS)/Panchromatic (PAN), Satellite pour l'Observation de la Terre (SPOT) XS/PAN and IKONOS MS/PAN datasets. This paper assesses whether hyperspectral images, which have very narrow bands compared to multispectral images, can be fused with high spatial resolution panchromatic images using common and recent algorithms including Intensity-Hue-Saturation (IHS), Principal Component Substitution (PCS), Gram-Schmidt Transformation (GST), Smoothing Filter-based Intensity Modulation (SFIM), Discrete Wavelet Transform (DWT), wavelet-based IHS (DWT-IHS) and PCS (DWT-PCS), and Fast Fourier Transform (FFT)-enhanced IHS. We also examine the performance of the fused hyperspectral images with respect to the fused multispectral images. For this purpose, two different source datasets (EO1 Hyperion/ALI PAN and EO1 ALI MS/PAN) were used. Qualitative and quantitative analyses were carried out to assess the spatial and spectral quality of the fused images. The results show that it is possible to fuse a narrow-band hyperspectral image with a high spatial resolution panchromatic image. The fusion of the EO1 Hyperion/ALI PAN and EO1 ALI MS/PAN datasets using the SFIM, DWT-PCS, DWT-IHS and FFT-IHS algorithms produces better results than the other techniques. The results also show that the fusion methods performed similarly on both datasets, except for the DWT algorithm, which has a lower performance for the hyperspectral image than for the multispectral image. Therefore, the DWT algorithm should be studied further to improve the spectral quality of a fused hyperspectral image based on wavelet transformation.
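As a point of reference for the substitution-type methods compared above, a minimal IHS-style fusion on synthetic arrays might look like the sketch below; this is not the implementation evaluated in the paper, and the mean/std adjustment stands in for proper histogram matching:

import numpy as np

def ihs_fuse(ms_upsampled, pan):
    """Additive IHS-style fusion: swap the intensity of a (3, H, W) stack for the pan band."""
    intensity = ms_upsampled.mean(axis=0)
    # Match the pan band to the intensity first to limit spectral distortion.
    pan_adj = (pan - pan.mean()) / (pan.std() + 1e-9) * intensity.std() + intensity.mean()
    return ms_upsampled + (pan_adj - intensity)

rng = np.random.default_rng(1)
pan = rng.random((128, 128))                            # synthetic fine-detail panchromatic band
ms = np.stack([0.5 * pan + 0.5 * rng.random((128, 128)) for _ in range(3)])
fused = ihs_fuse(ms, pan)
print(fused.shape, round(float(np.corrcoef(fused.mean(axis=0).ravel(), pan.ravel())[0, 1]), 3))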
---
paper_title: Super‐resolution mapping of the waterline from remotely sensed data
paper_content:
Methods for mapping the waterline at a subpixel scale from a soft image classification of remotely sensed data are evaluated. Unlike approaches based on hard classification, these methods allow the waterline to run through rather than between image pixels and so have the potential to derive accurate and realistic representations of the waterline from imagery with relatively large pixels. The most accurate predictions of waterline location were made from a geostatistical approach applied to the output of a soft classification (RMSE = 2.25 m) which satisfied the standards for mapping at 1 : 5000 scale from imagery with a 20 m spatial resolution.
---
paper_title: Detection and analysis of individual leaf-off tree crowns in small footprint, high sampling density lidar data from the eastern deciduous forest in North America
paper_content:
Leaf-off individual trees in a deciduous forest in the eastern USA are detected and analysed in small footprint, high sampling density lidar data. The data were acquired on February 1, 2001, using a SAAB TopEye laser profiling system, with a sampling density of approximately 12 returns per square meter. The sparse and complex configuration of the branches of the leaf-off forest provides sufficient returns to allow the detection of the trees as individual objects and the analysis of their vertical structures. For the detection of the individual trees, the lidar data are first inserted into a 2D digital image, with the height as the pixel value or brightness level. The empty pixels are interpolated, and height outliers are removed. Gaussian smoothing at different scales is performed to create a three-dimensional scale-space structure. Blob signatures based on second-order image derivatives are calculated, and then normalised so they can be compared at different scale levels. The grey-level blobs with the strongest normalised signatures are selected within the scale-space structure. The support regions of the blobs are marked one at a time in the segmentation result image, with higher priority for stronger blobs. The segmentation results of six individual hectare plots are assessed by a computerised, objective method that makes use of a ground reference data set of the individual tree crowns. For the analysis of individual trees, a subset of the original laser returns is selected within each tree crown region of the canopy reference map. Indices based on moments of the first four orders, the maximum value, and the number of canopy and ground returns are estimated. The indices are derived separately for height and laser reflectance of branches for the two echoes. Significant differences (p<0.05) are detected for numerous indices for three major native species groups: oaks (Quercus spp.), red maple (Acer rubrum) and yellow poplar (Liriodendron tulipifera). Tree species classification results for different indices suggest a moderate to high degree of accuracy using single or multiple variables. Furthermore, the maximum tree height is compared to ground reference tree height for 48 sample trees, with a 1.1-m standard error (R² = 68%).
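The detection step (a Gaussian scale space plus scale-normalised blob signatures) can be sketched compactly with scipy's filters on a synthetic canopy height model; the priority marking of blob support regions and the leaf-off specifics are simplified away, and the scales and threshold used here are assumptions:

import numpy as np
from scipy import ndimage

def detect_blobs(chm, sigmas=(1.5, 2.5, 4.0), strength=0.2):
    """Scale-normalised Laplacian-of-Gaussian blob detection on a height image."""
    responses = np.stack([-(s ** 2) * ndimage.gaussian_laplace(chm, s) for s in sigmas])
    best = responses.max(axis=0)                        # strongest response over all scales
    local_max = ndimage.maximum_filter(best, size=5) == best
    return np.argwhere(local_max & (best > strength * best.max()))

# Synthetic canopy height model containing three Gaussian "crowns".
y, x = np.mgrid[0:80, 0:80].astype(float)
chm = sum(h * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * r ** 2))
          for cx, cy, r, h in [(20, 20, 4, 18), (50, 30, 6, 22), (60, 65, 5, 15)])
print(detect_blobs(chm))                                # approximate crown apex (row, col) positions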
---
paper_title: Multitemporal spectral analysis for cheatgrass (Bromus tectorum) classification
paper_content:
Operational satellite remote sensing data can provide the temporal repeatability necessary to capture phenological differences among species. This study develops a multitemporal stacking method coupled with spectral analysis for extracting information from Landsat imagery to provide species-level information. Temporal stacking can, in an approximate mathematical sense, effectively increase the 'spectral' resolution of the system by adding spectral bands of several multitemporal images. As a demonstration, multitemporal linear spectral unmixing is used to successfully delineate cheatgrass (Bromus tectorum) from soil and surrounding vegetation (77% overall accuracy). This invasive plant is an ideal target for exploring multitemporal methods because of its phenological differences with other vegetation in early spring and, to a lesser degree, in late summer. The techniques developed in this work are directly applicable for other targets with temporally unique spectral differences.
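A small sketch of the temporal-stacking idea with invented endmember reflectances (six bands per date, two dates): the spring and summer signatures are concatenated into a single 12-band signature per class before unmixing, and a soft sum-to-one constraint stands in for a fully constrained solver; none of the numbers are the study's endmembers.

import numpy as np

def unmix(pixel, endmembers, sum_weight=100.0):
    """Least-squares abundance estimate with a soft sum-to-one constraint.
    endmembers: (n_bands, n_classes); pixel: (n_bands,)."""
    A = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, sum_weight)
    frac, *_ = np.linalg.lstsq(A, b, rcond=None)
    return frac

spring = {"cheatgrass": [.05, .07, .06, .45, .30, .20], "soil": [.12, .15, .18, .25, .30, .32],
          "other_veg": [.04, .06, .05, .30, .25, .18]}
summer = {"cheatgrass": [.15, .18, .20, .22, .28, .30], "soil": [.12, .15, .18, .25, .30, .32],
          "other_veg": [.05, .07, .06, .42, .28, .20]}   # by late summer cheatgrass has senesced
names = list(spring)
E = np.array([spring[n] + summer[n] for n in names]).T   # (12 stacked bands, 3 endmembers)
pixel = E @ np.array([0.6, 0.3, 0.1]) + np.random.default_rng(2).normal(0, 0.005, 12)
print(dict(zip(names, np.round(unmix(pixel, E), 2))))    # recovers roughly 60/30/10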
---
paper_title: The pixel: a snare and a delusion
paper_content:
The pixel is an explicit feature of remotely-sensed imagery, and a primary concept of the raster GIS (Geographical Information System) which is the usual vehicle for integration. This Letter addresses the underlying spatial conceptualization of the pixel, which is the parallel of the grid cell in spatial analysis, and the regular grid in sampling. It is argued that integration of remote sensing and GIS can only possibly advance if we develop methods to address the conceptual short-comings of the pixel as a spatial entity, and stop pretending that it is a true geographical object. Three major strands of research which address this issue are highlighted, including mixture modelling, geostatistics and fuzzy classification.
---
paper_title: Spectrally segmented principal component analysis of hyperspectral imagery for mapping invasive plant species
paper_content:
Principal component analysis (PCA) is one of the most commonly adopted feature reduction techniques in remote sensing image analysis. However, it may overlook subtle but useful information if applied directly to the analysis of hyperspectral data, especially for discriminating between different vegetation types. In order to accurately map an invasive plant species (horse tamarind, Leucaena leucocephala) in southern Taiwan using Hyperion hyperspectral imagery, this study developed a spectrally segmented PCA based on the spectral characteristics of vegetation over different wavelength regions. The developed algorithm can not only reduce the dimensionality of hyperspectral imagery but also extracts helpful information for differentiating more effectively the target plant species from other vegetation types. Experiments conducted in this study demonstrated that the developed algorithm performs better than correlation-based segmented principal component transformation (SPCT) and conventional PCA (overall accuracy: 86%, 76%, 66%; kappa value: 0.81, 0.69, 0.57) in detecting the target plant species, as well as mapping other vegetation covers.
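The mechanics of spectrally segmented PCA are straightforward to sketch: a separate PCA is run within each wavelength region and the leading components are concatenated. The band groupings, component counts and data below are synthetic placeholders, not the Hyperion configuration used in the paper:

import numpy as np

def segmented_pca(cube, segments, n_components=3):
    """PCA run separately on each spectral segment; the leading components are stacked.
    cube: (n_pixels, n_bands); segments: list of (start, stop) band-index ranges."""
    features = []
    for start, stop in segments:
        X = cube[:, start:stop]
        Xc = X - X.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))      # eigenvectors, ascending order
        features.append(Xc @ vecs[:, -n_components:][:, ::-1])  # keep the top components first
    return np.hstack(features)

rng = np.random.default_rng(3)
cube = rng.random((500, 150))                  # 500 pixels x 150 hyperspectral bands (synthetic)
segments = [(0, 35), (35, 90), (90, 150)]      # e.g. visible / NIR / SWIR groupings
print(segmented_pca(cube, segments).shape)     # (500, 9): three components per segment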
---
paper_title: Mapping nonnative plants using hyperspectral imagery
paper_content:
Nonnative plant species are causing enormous ecological and environmental impacts from local to global scale. Remote sensing images have had mixed success in providing spatial information on land cover characteristics to land managers that increase effective management of invasions into native habitats. However, there has been limited evaluation of the use of hyperspectral data and processing techniques for mapping specific invasive species based on their spectral characteristics. This research evaluated three different methods of processing hyperspectral imagery: minimum noise fraction (MNF), continuum removal, and band ratio indices for mapping iceplant (Carpobrotus edulis) and jubata grass (Cortaderia jubata) in California's coastal habitat. Validation with field sampling data showed high mapping accuracies for all methods for identifying presence or absence of iceplant (97%), with the MNF procedure producing the highest accuracy (55%) when the classes were divided into four different densities of iceplant.
---
paper_title: Global mean values in linear spectral unmixing: double fallacy!
paper_content:
Almost all conventional linear spectral unmixing techniques are based on the principle of least squares. The global mean digital number (DN) of an end-member is taken as the representative (i.e. contributory) DN for the end-member. This paper sets out to prove that the notion is a fallacy, and will always lead to negative percentages, super-positive percentages and non-100% sum of percentages if the unmixed pixel is not composed of, to within some tolerance, the global mean DNs only. Three sets of spectral end-members (two, three and four spectral end-members) are generated from Landsat ETM+ data. Practical percentages (between 0% and 100% and totalling 100%) of the end-members are returned by pixels in which the local mean DNs of the spectral end-members do not differ from the global mean DNs by, on average, 4.
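The argument can be reproduced numerically in a few lines. In the sketch below (the spectra are invented), a pixel is a genuine 90/10 mixture of endmembers whose local means are darker than the global means, yet least-squares unmixing against the global mean signatures returns a super-positive and a negative fraction:

import numpy as np

E_global = np.array([[0.10, 0.40],      # global mean spectra: 4 bands x 2 endmembers
                     [0.15, 0.45],
                     [0.30, 0.55],
                     [0.50, 0.60]])
E_local = E_global.copy()
E_local[:, 0] -= 0.08                   # locally, endmember 1 is darker than its global mean
pixel = E_local @ np.array([0.9, 0.1])  # a genuine 90% / 10% mixture of the local spectra

A = np.vstack([E_global, 100.0 * np.ones(2)])   # unmix against the GLOBAL means,
b = np.append(pixel, 100.0)                     # with sum-to-one (heavily) enforced
frac, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(frac, 2))                # about [ 1.17 -0.17]: super-positive and negative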
---
paper_title: Approaches for the production and evaluation of fuzzy land cover classifications from remotely-sensed data
paper_content:
Abstract Remote sensing is an attractive source of data for land cover mapping applications. Mapping is generally achieved through the application of a conventional statistical classification, which allocates each image pixel to a land cover class. Such approaches are inappropriate for mixed pixels, which contain two or more land cover classes, and a fuzzy classification approach is required. When pixels may have multiple and partial class membership measures of the strength of class membership may be output and, if strongly related to the land cover composition, mapped to represent such fuzzy land cover. This type of representation can be derived by softening the output of a conventional ‘hard’ classification or using a fuzzy classification. The accuracy of the representation provided by a fuzzy classification is, however, difficult to evaluate. Conventional measures of classification accuracy cannot be used as they are appropriate only for ‘hard’ classifications. The accuracy of a classification may, ho...
---
paper_title: Mapping Chinese tallow with color-infrared photography
paper_content:
Airborne color-infrared (CIR) photography (1:12,000 scale) was used to map localized occurrences of the widespread and aggressive Chinese tallow (Sapium sebiferum), an invasive species. Photography was collected during senescence, when Chinese tallow's bright red leaves presented a high spectral contrast within the native bottomland hardwood and upland forests and marsh land-cover types. Mapped occurrences were conservative because not all senescing tallow leaves are bright red simultaneously. To simulate low spectral but high spatial resolution satellite/airborne image and digital video data, the CIR photography was transformed into raster images at spatial resolutions approximating 0.5 m and 1.0 m. The image data were then spectrally classified for the occurrence of bright red leaves associated with senescing Chinese tallow. Classification accuracies were greater than 95 percent at both spatial resolutions. In either forest, there was no significant difference between the two spatial resolutions in the detection of tallow or the inclusion of non-tallow trees. In marshes, slightly more tallow occurrences were mapped with the lower spatial resolution, but there were also more misclassifications of native land covers as tallow. Combining all land covers, there was no difference in detecting tallow occurrences (equal omission errors) between the two resolutions, but the higher spatial resolution was associated with less inclusion of non-tallow land covers as tallow (lower commission error). Overall, these results confirm that high spatial (≤1 m) but low spectral resolution remote sensing data can be used for mapping Chinese tallow trees in dominant environments found in coastal and adjacent upland landscapes.
---
paper_title: Multiframe demosaicing and super-resolution of color images
paper_content:
In the last two decades, two related categories of problems have been studied independently in the image restoration literature: super-resolution and demosaicing. A closer look at these problems reveals the relation between them, and, as conventional color digital cameras suffer from both low spatial resolution and color-filtering, it is reasonable to address them in a unified context. In this paper, we propose a fast and robust hybrid method of super-resolution and demosaicing, based on a maximum a posteriori estimation technique that minimizes a multi-term cost function. The L1 norm is used for measuring the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation. Bilateral regularization is used for spatially regularizing the luminance component, resulting in sharp edges and forcing interpolation along the edges and not across them. Simultaneously, Tikhonov regularization is used to smooth the chrominance components. Finally, an additional regularization term is used to force similar edge location and orientation in different color channels. We show that the minimization of the total cost function is relatively easy and fast. Experimental results on synthetic and real data sets confirm the effectiveness of our method.
---
paper_title: Towards an operational MODIS continuous field of percent tree cover algorithm: examples using AVHRR and MODIS data
paper_content:
The continuous fields Moderate Resolution Imaging Spectroradiometer (MODIS) land cover products are 500-m sub-pixel representations of basic vegetation characteristics including tree, herbaceous and bare ground cover. Our previous approach to deriving continuous fields used a linear mixture model based on spectral endmembers of forest, grassland and bare ground training. We present here a new approach for estimating percent tree cover employing continuous training data over the whole range of tree cover. The continuous training data set is derived by aggregating high-resolution tree cover to coarse scales and is used with multi-temporal metrics based on a full year of coarse resolution satellite data. A regression tree algorithm is used to predict the dependent variable of tree cover based on signatures from the multitemporal metrics. The automated algorithm was tested globally using Advanced Very High Resolution Radiometer (AVHRR) data, as a full year of MODIS data has not yet been collected. A root mean square error (rmse) of 9.06% tree cover was found from the global training data set. Preliminary MODIS products are also presented, including a 250-m map of the lower 48 United States and 500-m maps of tree cover and leaf type for North America. Results show that the new approach used with MODIS data offers an improved characterization of land cover.
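A toy version of the regression-tree step, using synthetic multitemporal metrics and scikit-learn's tree regressor (the operational algorithm, its phenological metrics and the aggregated training data are far more elaborate than this sketch):

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
n = 5000
# Invented annual metrics per coarse pixel: peak NDVI, NDVI amplitude, minimum SWIR reflectance.
ndvi_peak = rng.uniform(0.1, 0.9, n)
ndvi_amp = rng.uniform(0.0, 0.5, n)
swir_min = rng.uniform(0.05, 0.35, n)
X = np.column_stack([ndvi_peak, ndvi_amp, swir_min])
# Invented "reference" percent tree cover, loosely increasing with peak greenness.
tree_cover = np.clip(120 * ndvi_peak - 80 * swir_min - 30 * ndvi_amp + rng.normal(0, 8, n), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, tree_cover, random_state=0)
model = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"held-out RMSE: {rmse:.1f}% tree cover")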
---
paper_title: Variability in Soft Classification Prediction and its implications for Sub-pixel Scale Change Detection and Super Resolution Mapping
paper_content:
The impact of intra-class spectral variability on the estimation of sub-pixel land-cover class composition with a linear mixture model is explored. It is shown that the nature of intra-class variation present has a marked impact on the accuracy of sub-pixel class composition estimation, as it violates the assumption that a class can be represented by a single spectral endmember. It is suggested that a distribution of possible class compositions can be derived from pixels instead of a single class composition prediction. This distribution provides a richer indication of possible subpixel class compositions and highlights a limitation for super-resolution mapping. Moreover, the class composition distribution information may be used to derive different scenarios of changes when used in a post-classification comparison type approach to change detection. This latter issue is illustrated with an example of forest cover change in Brazil from Landsat TM data.
---
paper_title: High Spatial Resolution Remotely Sensed Data for Ecosystem Characterization
paper_content:
Abstract Characterization of ecosystem structure, diversity, and function is increasingly desired at finer spatial and temporal scales than have been derived in the past. Many ecological applications require detailed data representing large spatial extents, but these data are often unavailable or are impractical to gather using field-based techniques. Remote sensing offers an option for collecting data that can represent broad spatial extents with detailed attribute characterizations. Remotely sensed data are also appropriate for use in studies across spatial scales, in conjunction with field-collected data. This article presents the pertinent technical aspects of remote sensing for images at high spatial resolution (i.e., with a pixel size of 16 square meters or less), existing and future options for the processing and analysis of remotely sensed data, and attributes that can be estimated with these data for forest ecosystems.
---
paper_title: Using remote sensing to assess biodiversity
paper_content:
This review paper evaluates the potential of remote sensing for assessing species diversity, an increasingly urgent task. Existing studies of species distribution patterns using remote sensing can be essentially categorized into three types. The first involves direct mapping of individual plants or associations of single species in relatively large, spatially contiguous units. The second technique involves habitat mapping using remotely sensed data, and predictions of species distribution based on habitat requirements. Finally, establishment of direct relationships between spectral radiance values recorded from remote sensors and species distribution patterns recorded from field observations may assist in assessing species diversity. Direct mapping is applicable over smaller extents, for detailed information on the distribution of certain canopy tree species or associations. Estimations of relationships between spectral values and species distributions may be useful for the limited purpose of indicating areas with higher levels of species diversity, and can be applied over spatial extents of hundreds of square kilometres. Habitat maps appear most capable of providing information on the distributions of large numbers of species in a wider variety of habitat types. This is strongly limited by variation in species composition, and best applied over limited spatial extents of tens of square kilometres.
---
paper_title: Classifying Eucalyptus forests with high spatial and spectral resolution imagery: an investigation of individual species and vegetation communities
paper_content:
Mapping the spatial distribution of individual species is an important ecological and forestry issue that requires continued research to coincide with advances in remote-sensing technologies. In this study, we investigated the application of high spatial resolution (80 cm) Compact Airborne Spectrographic Imager 2 (CASI-2) data for mapping both spectrally complex species and species groups (subgenus grouping) in an Australian eucalypt forest. The relationships between spectral reflectance curves of individual tree species and identified statistical differences among species were analysed with ANOVA. Supervised maximum likelihood classifications were then performed to assess tree species separability in CASI-2 imagery. Results indicated that turpentine (Syncarpia glomulifera Smith), mesic vegetation (primarily rainforest species), and an amalgamated group of eucalypts could be readily distinguished. The discrimination of S. glomulifera was particularly robust, with consistently high classification accuracies. Eucalypt classification as a broader species group, rather than individual species, greatly improved classification performance. However, separating sunlit and shaded aspects of tree crowns did not increase classification accuracy.
---
paper_title: Examining pine spectral separability using hyperspectral data from an airborne sensor : An extension of field-based results
paper_content:
Three southern USA forestry species, loblolly pine (Pinus taeda), Virginia pine (Pinus virginiana), and shortleaf pine (Pinus echinata), were previously shown to be spectrally separable (83% accuracy) using data from a full-range spectroradiometer (400-2500 nm) acquired above tree canopies. This study focused on whether these same species are also separable using hyperspectral data acquired using the airborne visible/infrared imaging spectrometer (AVIRIS). Stepwise discriminant techniques were used to reduce data dimensionality to a maximum of 10 spectral bands, followed by discriminant techniques to measure separability. Discriminatory variables were largely located in the visible and near-infrared regions of the spectrum. Cross-validation accuracies ranged from 65% (1 pixel radiance data) to as high as 85% (3×3 pixel radiance data), indicating that these species have strong potential to be classified accurately using hyperspectral data from air-or space-borne sensors.
---
paper_title: Issues of uncertainty in super-resolution mapping and their implications for the design of an inter-comparison study
paper_content:
Super-resolution mapping is a relatively new field in remote sensing whereby classification is undertaken at a finer spatial resolution than that of the input remotely sensed multiple-waveband imagery. A variety of different methods for super-resolution mapping have been proposed, including spatial pixel-swapping, spatial simulated annealing, Hopfield neural networks, feed-forward back-propagation neural networks and geostatistical methods. The accuracy of all of these new approaches has been tested, but the tests have tended to focus on the new technique (i.e. with little benchmarking against other techniques) and have used different measures of accuracy. There is, therefore, a need for greater inter-comparison between the various methods available, and a super-resolution inter-comparison study would be a welcome step towards this goal. This paper describes some of the issues that should be considered in the design of such a study.
---
paper_title: Spatial patterns in species distributions reveal biodiversity change
paper_content:
Interpretation of global biodiversity change is hampered by a lack of information on the historical status of most species in most parts of the world. Here we show that declines and increases can be deduced from current species distributions alone, using spatial patterns of occupancy combined with distribution size. Declining species show sparse, fragmented distributions for their distribution size, reflecting the extinction process; expanding species show denser, more aggregated distributions, reflecting colonization. Past distribution size changes for British butterflies were deduced successfully from current distributions, and former distributions had some power to predict future change. What is more, the relationship between distribution pattern and change in British butterflies independently predicted distribution change for butterfly species in Flanders, Belgium, and distribution change in British rare plant species is similarly related to spatial distribution pattern. This link between current distribution patterns and processes of distribution change could be used to assess relative levels of threat facing different species, even for regions and taxa lacking detailed historical and ecological information.
---
paper_title: The factor of scale in remote sensing
paper_content:
Abstract Thanks to such second- and third-generation sensor systems as Thematic Mapper, SPOT, and AVHRR, a user of digital satellite imagery for remote sensing of the earth's surface now has a choice of image scales ranging from 10 m to 1 km. The choice of an appropriate scale, or spatial resolution, for a particular application depends on several factors. These include the information desired about the ground scene, the analysis methods to be used to extract the information, and the spatial structure of the scene itself. A graph showing how the local variance of a digital image for a scene changes as the resolution-cell size changes can help in selecting an appropriate image scale. Such graphs are obtained by imaging the scene at fine resolution and then collapsing the image to successively coarser resolutions while calculating a measure of local variance. The local variance/resolution graphs for the forested, agricultural, and urban/suburban environments examined in this paper reveal the spatial structure of each type of scene, which is a function of the sizes and spatial relationships of the objects the scene contains. At the spatial resolutions of SPOT and Thematic Mapper imagery, local image variance is relatively high for forested and urban/suburban environments, suggesting that information-extracting techniques utilizing texture, context, and mixture modeling are appropriate for these sensor systems. In agricultural environments, local variance is low, and the more traditional classifiers are appropriate.
---
paper_title: APPLICATION OF 1-M AND 4-M RESOLUTION SATELLITE DATA TO ECOLOGICAL STUDIES OF TROPICAL RAIN FORESTS
paper_content:
Understanding the current status of the world's tropical rain forests (TRF) can be greatly advanced by global coverage of remotely sensed data at the scale of individual tree crowns. In 1999 the IKONOS satellite began offering worldwide 1-m panchromatic and 4-m multispectral data. Here we show that these data can be used to address diverse aspects of forest ecology and land-use classification in the tropics. Using crowns of emergent trees as control points, we georeferenced a 600-ha subset of IKONOS 1-m and 4-m data from an August 2000 image of the La Selva Biological Station, Costa Rica (root mean square error = 4.3 m). Crown area measured on the image was highly correlated with crown area for the same tree measured from the ground. Using a 1988 aerial photograph as a baseline, all trees ≥1 m diameter in a long-term study that died over the ensuing 12-year period, and that could be located in the photograph, were detected as missing in the IKONOS image (N = 7). Crown growth for large trees visible on both images averaged 12 m²/yr (N = 16). We thus demonstrate that IKONOS imagery can provide data on four variables necessary for doing demographic research: tree size, location, mortality, and growth. Stand basal area, estimated aboveground biomass, and percentage of the canopy ≥15 m tall for 18 0.5-ha permanent forest inventory plots in old growth were all highly significantly correlated with different indices derived from the IKONOS data. We used summary statistics from the original IKONOS data as well as derived indices to characterize nine areas with well-documented land-use histories. Secondary forests were clearly separable from the other sites. One of the secondary forests was 40 years old, suggesting that IKONOS data can be used to detect significantly older secondary forest than is possible with coarser resolution satellite data. The selectively logged forest was distinguishable by measuring the size of the largest crowns on the 1-m image. This suggests a range of applications for detecting and quantifying biomass degradation due to selective logging and edge effects. Satellite data at 1-m and 4-m resolution make possible a truly global approach to fine spatial resolution remote-sensing studies of TRF ecology and land use.
---
paper_title: Identifying species of individual trees using airborne laser scanner
paper_content:
Individual trees can be detected using high-density airborne laser scanner data. Also, variables characterizing the detected trees, such as tree height, crown area, and crown base height, can be measured. The Scandinavian boreal forest mainly consists of Norway spruce (Picea abies L. Karst.), Scots pine (Pinus sylvestris L.), and deciduous trees. It is possible to separate coniferous from deciduous trees using near-infrared images, but pine and spruce give similar spectral signals. Airborne laser scanning, which measures the structure and shape of tree crowns, could be used for discriminating between spruce and pine. The aim of this study was to test classification of Scots pine versus Norway spruce at an individual tree level using features extracted from airborne laser scanning data. Field measurements were used for training and validation of the classification. The positions of all trees on 12 rectangular plots (50 × 20 m²) were measured in the field and the tree species was recorded. The dominating species (>80%) was Norway spruce for six of the plots and Scots pine for the other six plots. The field-measured trees were automatically linked to the laser-measured trees. The laser-detected trees on each plot were classified into species classes using all laser-detected trees on the other plots as training data. The proportion of correctly classified trees on all plots was 95%. Crown base height estimates for individual trees were also evaluated (r = 0.84). The classification results in this study demonstrate the ability to discriminate between pine and spruce using laser data. This method could be applied in an operational context. In the first step, a segmentation of individual tree crowns is performed using laser data. In the second step, tree species classification is performed based on the segments. Methods could be developed in the future that combine laser data with digital near-infrared photographs for classification into three classes: Norway spruce, Scots pine, and deciduous trees.
---
paper_title: Remote Sensing for Sustainable Forest Management
paper_content:
INTRODUCTION: Forest Management Questions; Remote Sensing Data and Methods; Categories of Applications of Remote Sensing; Organization of the Book. SUSTAINABLE FOREST MANAGEMENT: Definition of Sustainable Forest Management; Ecosystem Management; Criteria and Indicators of Sustainable Forest Management; Information Needs of Forest Managers; Role of Remote Sensing. ACQUISITION OF IMAGERY: Field, Aerial, and Satellite Imagery; Data Characteristics; Resolution and Scale; Aerial Platforms and Sensors; Satellite Platforms and Sensors; General Limits of Airborne and Satellite Remote Sensing Data. IMAGE CALIBRATION AND PROCESSING: Georadiometric Effects and Spectral Response; Image Processing Systems and Functionality; Image Analysis Support Functions; Image Information Extraction; Image Understanding. FOREST MODELING AND GIS: Geographical Information Science; Ecosystem Process Models; Spatial Pattern Modeling. FOREST CLASSIFICATION: Information on Forest Classes; Classification Systems for Use with Remote Sensing Data; Level I Classes; Level II Classes; Level III Classes. FOREST STRUCTURE ESTIMATION: Information on Forest Structure; Forest Inventory Variables; Biomass; Volume and Growth Assessment. FOREST CHANGE DETECTION: Information on Forest Change; Harvesting and Silviculture Activity; Natural Disturbances; Change in Spatial Structure. CONCLUSION: The Technological Approach - Revisited; References.
---
paper_title: Tree Species Classification using Semi-automatic Delineation of Trees on Aerial Images
paper_content:
The purpose of this study was to develop a method for classifying tree species from remote sensing images by combining a semi-automatic pattern recognition technique and spectral properties of trees. Five stands in southern Finland were studied. Individual trees in the digital colour infrared (CIR) aerial photographs were segmented by a method based on the recognition of tree crown patterns at subpixel accuracy. The images were filtered with the Gaussian N-by-N smoothing operator and local maxima above a threshold level were segmented. The segments were classified into three tree species classes. The kappa coefficients for stands varied from 0.43 to 0.86 when the training data and test data were from the same aerial photograph. When training data from other photographs were used as reference data, the kappa coefficients ranged from 0.40 to 0.75. The method described provides an interesting approach for detecting tree species semi-automatically in digital aerial data.
---
paper_title: Landscape as a continuum: an examination of the urban landscape structures and dynamics of Indianapolis City, 1991-2000, by using satellite images
paper_content:
The majority of the vast literature on remote sensing of urban landscapes has adopted a 'hard classification' approach, in which each image pixel is assigned a single land use and land cover category. Owing to the nature of urban landscapes, the confusion between land use and land cover definitions and the constraints of widely applied medium spatial resolution satellite images, high classification accuracy has been difficult to achieve with the conventional 'hard' classifiers. The prevalence of the mixed pixel problem in urban landscapes indicates a crucial need for an alternative approach to urban analyses. Identification, description and quantification, rather than classification, may provide a better understanding of the compositions and processes of heterogeneous landscapes such as urban areas. This study applied the Vegetation-Impervious Surface-Soil (V-I-S) model for characterizing urban landscapes and analysing their dynamics in Indianapolis, USA, between 1991 and 2000. To extract these landscape components from three dates of Landsat Thematic Mapper/Enhanced Thematic Mapper Plus (TM/ETM+) images in 1991, 1995 and 2000, we used the technique of linear spectral mixture analysis (LSMA). These components were further classified into urban thematic classes, and used for analysis of the landscape patterns and dynamics. The results indicate that LSMA provides a suitable technique for detecting and mapping urban materials and V-I-S component surfaces in repetitive and consistent ways, and for resolving the spectral mixing of medium spatial resolution satellite imagery. The reconciliation of the V-I-S model with LSMA for Landsat imagery allowed this continuum landscape model to be an alternative, effective approach to characterizing and quantifying the spatial and temporal changes of the urban landscape compositions in Indianapolis from 1991 to 2000. It is suggested that the model developed in this study offers a more realistic and robust representation of the true nature of urban landscapes, as compared with the conventional method based on 'hard classification' of satellite imagery. The general applicability of this continuum model, especially its spectral, spatial and temporal variability, is discussed.
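The linear spectral mixture analysis used to derive V-I-S fractions can be illustrated with a small constrained least-squares solver. The endmember spectra and the sum-to-one weighting trick below are illustrative assumptions, not the unmixing chain used in the study.

```python
# Sketch: linear spectral mixture analysis (LSMA) of a multispectral pixel into
# vegetation / impervious-surface / soil fractions (illustrative endmembers).
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers, weight=100.0):
    """pixel: (n_bands,) reflectances; endmembers: (n_bands, n_classes) matrix
    whose columns are endmember spectra. Non-negativity comes from NNLS; the
    sum-to-one constraint is approximated by a heavily weighted extra row."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.concatenate([pixel, [weight]])
    fractions, residual = nnls(A, b)
    return fractions, residual

# Hypothetical 6-band endmembers (columns: vegetation, impervious, soil)
E = np.array([[0.04, 0.20, 0.15],
              [0.06, 0.22, 0.18],
              [0.05, 0.24, 0.22],
              [0.45, 0.26, 0.28],
              [0.25, 0.30, 0.35],
              [0.12, 0.28, 0.33]])
fractions, residual = unmix_pixel(np.array([0.10, 0.12, 0.11, 0.38, 0.27, 0.18]), E)
```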
---
paper_title: Evaluation of global land cover data sets over the tundra-taiga transition zone in northernmost Finland
paper_content:
The remote sensing-based continental to global scale land cover data sets provide several land cover depictions over the circumpolar tundra-taiga transition zone. The aim of this study was to evaluate three data sets in northernmost Finland: the Global Land Cover 2000 Northern Eurasia map (GLC2000-NE), the MODIS global land cover map (MODIS-IGBP) and the tree cover layer of the MODIS vegetation continuous fields product (MODIS-VCF). The data sets were first compared both visually and statistically to biotope inventory data including tree cover, height, species composition and shrub cover information as continuous variables. The agreement with reference data was poor because the classifications do not correspond to the class descriptions. The MODIS-VCF tree cover overestimates the tree cover in the low values and underestimates it in the high values. The agreement was relatively good when the global data sets were aggregated to a forest-non-forest level and compared to the Finnish CORINE Land Cover 2000 map over a larger area. However, the inaccurate mapping of the deciduous broadleaf forests and mires reduced the agreement at the forest-non-forest level. The vegetation transitions are difficult to map using low-resolution satellite data and further improvements to the land cover characterization over the tundra-taiga transition zone are required.
---
paper_title: Global composites of the MERIS Terrestrial Chlorophyll Index
paper_content:
From the year 2006, the European Space Agency (ESA) supported the production of the global composite (Level 3) of a unique terrestrial chlorophyll product called the MERIS Terrestrial Chlorophyll Index (MTCI) (Dash and Curran 2004). The MTCI is calculated using three red/near infrared bands of Envisat MERIS data (Rast et al. 1999). This index estimates the relative location of the reflectance 'red edge' of vegetation and is more sensitive than red edge position to canopy chlorophyll content, notably at high chlorophyll contents. This product effectively combines information on leaf area index and the chlorophyll concentration of leaves to produce an image of chlorophyll content (i.e. the amount of chlorophyll per unit area of ground). Chlorophyll content plays an important role in determining the physiological status of a plant, is related to photosynthetic rate and varies temporally and spatially. MTCI global composites can be used to estimate relative and land-cover-specific chlorophyll content in space and time, and this in turn can be a key input to models of terrestrial productivity, gas exchange and vegetation health. Two global monthly MTCI composites for March and August 2003 are presented on the cover. These images display the MTCI on a nominal scale of 0 to 6, with higher values indicating higher chlorophyll content. These images clearly capture the phenology of global vegetation. In March, a major part of the southern hemisphere (e.g. South Africa, South America) had high MTCI values during the peak of their growing season, whereas a major part of the northern hemisphere had low MTCI values in March. In August, the situation was reversed. A major part of the northern hemisphere (e.g. Europe, North America) had high MTCI values when the coverage of green leaves was at a maximum. In both images the tropical rain forests had relatively high MTCI values. It is interesting to note that even at the centre of these forests there is a change in MTCI values between March and August. This global MTCI product will be produced as weekly and monthly composites and is the only terrestrial chlorophyll product available from space. The MTCI, along with oceanographic chlorophyll concentration estimates also from MERIS, can be used to generate a 'global chlorophyll map' for the estimation of global productivity.
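For reference, the index described above is a ratio of three MERIS band reflectances; a compact statement of the formula, with the band centres as commonly cited for MERIS bands 8, 9 and 10 (treat the exact wavelengths as nominal), is:

```latex
% MERIS Terrestrial Chlorophyll Index from three band reflectances
\[
\mathrm{MTCI} \;=\; \frac{R_{753.75} - R_{708.75}}{R_{708.75} - R_{681.25}}
\;=\; \frac{R_{\mathrm{band\,10}} - R_{\mathrm{band\,9}}}{R_{\mathrm{band\,9}} - R_{\mathrm{band\,8}}}
\]
```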
---
paper_title: Remote sensing in physical geography: a twenty-first-century perspective
paper_content:
Timely, high-quality data from remote sensing can benefit the study of physical geography in many ways (Wolman, 2004). Rapid developments in remote sensing, particularly from satellite platforms, have generated much, often well-placed, enthusiasm about its potential as a powerful research tool (Skole, 2004). Indeed, remote sensing is viewed by many as having come of age (Tatem et al., 2008) and can be deemed a mature discipline with its physical principles well understood and its range of applications a showcase of its versatility (Warner et al., 2009). Historically, physical geography has benefited greatly from the emergence of new technologies, like remote sensing, and is likely to continue to reap benefits from further technological advances in the foreseeable future (Rhoads, 2004). The beginning of the twenty-first century promised to be a period of technological innovation, with remote sensing identified as one of 21 technologies well placed to meet contemporary issues and challenges facing society (AllBusiness, 2001). Given that many of these challenges involve the interaction between society and the physical environment (Demeritt, 2009), and that there is an increasing need for all areas of science, including physical geography, to iterate their value to society (Rediscovering Geography Committee, NRC, 1997), the close of the first decade of the twenty-first century is a suitable point at which to take stock of current trends and developments in remote sensing within physical geography and interrelated fields. Remote sensing is useful for the physical geographer not only as a tool for data collection, with superiority over other techniques by virtue of its spatial and temporal coverage, but also because of the logic implicit in the reasoning process employed to analyse the data (Estes et al., 1980). In addition to the experimental, hypothetical and case-study style of reasoning that tends to characterize the use of remote sensing within physical geography, there are increasingly operational modes of usage (eg, within meteorology and climatology, hazard management). Since the turn of the twenty-first century more than 100 satellite sensors have been launched (www.itc.nl/research/products/sensordb/searchsat.aspx) and numerous airborne and terrestrial sensors manufactured. Many of these sensors are the result of emerging technologies (eg, Krabach, 2000), with the processing and interpretation of the data acquired benefiting from advanced pro-
---
paper_title: Comparison of IKONOS and QuickBird images for mapping mangrove species on the Caribbean coast of Panama
paper_content:
Mangrove stands of differing species composition are hard to distinguish in conventional, coarse resolution satellite images. The new generation of meter-level satellite imagery provides a unique opportunity to achieve this goal. In this study, an IKONOS Geo bundle image and a QuickBird Standard bundle image were acquired for a study area located at Punta Galeta on the Caribbean coast of Panama. The two images cover the same area and were acquired under equivalent conditions. Three comparison tests were designed and implemented, each with separate objectives. First, a comparison was conducted band by band by examining their spectral statistics and species by species by inspecting their textural roughness. The IKONOS image had a higher variance and entropy value in all the compared bands, whereas the QuickBird image displayed a finer textural roughness in the forest canopy. Second, maximum likelihood classification (MLC) was executed with two different band selections. When examining only multispectral bands, the IKONOS image had better spectral discrimination than QuickBird while the inclusion of panchromatic bands had no effect on the classification accuracy of either the IKONOS or QuickBird image. Third, first- and second-order texture features were extracted from the panchromatic images at different window sizes and with different grey level (GL) quantization levels and were compared through MLC classification. Results indicate that the consideration of image texture enhances classifications based on the IKONOS panchromatic band more than it does classifications based on comparable QuickBird imagery. An object-based classification was also utilized to compare underlying texture in both panchromatic and multispectral bands. On the whole, both IKONOS and QuickBird images produced promising results in classifying mangrove species.
---
paper_title: Two improvement schemes of PAN modulation fusion methods for spectral distortion minimization
paper_content:
Fusion of panchromatic (PAN) and multispectral (MS) images is one of the most promising issues in remote sensing. PAN modulation fusion methods are usually based on an assumption that a ratio of two different-resolution versions of an MS band is equal to a ratio of two different-resolution versions of a PAN image. In such fusion methods, image haze is rarely taken into account, and it may produce serious spectral distortion in synthetic images. In this paper, assuming that the previous ratio relationship only holds for haze-free images, two relevant improvement schemes are proposed to better express the ratio relationship of haze-included images. In a test on a spatially degraded IKONOS dataset, the first scheme synthesizes an image with minimum spectral distortion, and the second modifies several current PAN modulation fusion methods and generates high-quality synthetic products. The experiment results confirm that image haze can seriously impact the quality of fused images obtained by using PAN modulation fusion methods, and it should be taken into account in relevant image fusion.
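The ratio assumption that underlies PAN modulation fusion, and a haze-adjusted form of the kind the abstract argues for, can be written compactly. The notation below is ours, not the paper's: MS and PAN denote a multispectral band and the panchromatic image, superscripts h and l their high- and low-resolution versions, and h_MS, h_PAN per-image haze (path-radiance) offsets.

```latex
% Standard PAN-modulation assumption (left) and a haze-adjusted variant (right)
\[
\frac{MS^{h}}{MS^{l}} = \frac{PAN^{h}}{PAN^{l}}
\;\Rightarrow\;
MS^{h} = MS^{l}\,\frac{PAN^{h}}{PAN^{l}},
\qquad
MS^{h} = h_{MS} + \bigl(MS^{l} - h_{MS}\bigr)\,\frac{PAN^{h} - h_{PAN}}{PAN^{l} - h_{PAN}}.
\]
```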
---
paper_title: Estimating area errors for fine‐scale feature‐based ecological mapping
paper_content:
High spatial resolution feature‐based approaches are especially useful for ecological mapping in densely populated landscapes. This paper evaluates errors in estimating ecological map class areas from fine‐scale current (∼2002) and historical (∼1945) feature‐based ecological mapping by a set of trained interpreters across densely populated rural sites in China based on field‐validated interpretation of high spatial resolution (1 m) imagery. Median overall map accuracy, corrected for chance, was greater than 85% for mapping by trained interpreters, with greater accuracy for current versus historical mapping. An error model based on feature perimeter proved as reliable in predicting 90% confidence intervals for map class areas as did models derived from the conventional error matrix. A conservative error model combining these approaches was developed and tested for statistical reliability in predicting confidence intervals for ecological map class areas from fine‐scale feature‐based mapping by a set of tra...
---
paper_title: Exploitation of Very High Resolution Satellite Data for Tree Species Identification
paper_content:
With the emergence of very high spatial resolution satellite images, the spatial resolution gap which existed between satellite images and aerial photographs has decreased. A study of the potential of these images for identifying tree species in "monoculture stands" was conducted. Two Ikonos images were acquired, one in June 2000 and the other in October 2000, for an 11- by 11-km area covering the Sonian Forest in the southeastern part of the Brussels-Capital region (Belgium). The two images were orthorectified using a digital elevation model and 1256 geodetic control points. The identification of the tree species was carried out utilizing a supervised maximum-likelihood classification on a pixel-by-pixel basis. Classifications were performed on the orthorectified data, NDVI transformed data, and principal components imagery. In order to decrease the intraclass variance, a mean filter was applied to all the spectral bands and neo-channels used in the classification process. Training and validation areas were selected and digitized using detailed geographical databases of the tree species. The selection of the relevant bands and neo-channels was carried out by successive addition of information in order to improve the classification results. Seven different tree species of one to two different age classes were identified with an overall accuracy of 86 percent. The seven identified tree species or species groups are Oaks (Quercus sp.), Beech (Fagus sylvatica L.), Purple Beech (Fagus sylvatica purpurea), Douglas Fir (Pseudotsuga menziesii (Mirb.) Franco), Scots Pine (Pinus sylvestris L.), Corsican Pine (Pinus nigra Arn. subsp. laricio (Poir.) Maire var. corsican), and Larch (Larix decidua Mill.).
---
paper_title: One-Class Classification for Mapping a Specific Land-Cover Class: SVDD Classification of Fenland
paper_content:
Remote sensing is a major source of land-cover information. Commonly, interest focuses on a single land-cover class. Although a conventional multiclass classifier may be used to provide a map depicting the class of interest, the analysis is not focused on that class and may be suboptimal in terms of the accuracy of its classification. With a conventional classifier, considerable effort is directed on the classes that are not of interest. Here, it is suggested that a one-class-classification approach could be appropriate when interest focuses on a specific class. This is illustrated with the classification of fenland, a habitat of considerable conservation value, from Landsat Enhanced Thematic Mapper Plus imagery. A range of one-class classifiers is evaluated, but attention focuses on the support-vector data description (SVDD). The SVDD was used to classify fenland with an accuracy of 97.5% and 93.6% from the user's and producer's perspectives, respectively. This classification was trained upon only the fenland class and was substantially more accurate in fen classification than a conventional multiclass maximum-likelihood classification provided with the same amount of training data, which classified fen with an accuracy of 90.0% and 72.0% from the user's and producer's perspectives, respectively. The results highlight the ability to classify a single class using only training data for that class. With a one-class classification, the analysis focuses tightly on the class of interest, with resources and effort not directed on other classes, and there are opportunities to derive highly accurate classifications from small training sets
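As an illustration of the one-class idea, the sketch below trains scikit-learn's OneClassSVM, which with an RBF kernel is closely related to the SVDD used in the study; it is not the authors' code, and the nu and gamma settings are placeholders.

```python
# Sketch: one-class classification of a single land-cover class trained on
# pixels of that class only (illustrative parameters).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def train_one_class(X_target, nu=0.05, gamma="scale"):
    """X_target: (n_pixels, n_bands) spectra of the class of interest only."""
    scaler = StandardScaler().fit(X_target)
    model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma)
    model.fit(scaler.transform(X_target))
    return scaler, model

def classify(scaler, model, X):
    """Returns True where a pixel is accepted as belonging to the target class."""
    return model.predict(scaler.transform(X)) == 1
```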
---
paper_title: Status of land cover classification accuracy assessment
paper_content:
The production of thematic maps, such as those depicting land cover, using an image classification is one of the most common applications of remote sensing. Considerable research has been directed at the various components of the mapping process, including the assessment of accuracy. This paper briefly reviews the background and methods of classification accuracy assessment that are commonly used and recommended in the research literature. It is, however, evident that the research community does not universally adopt the approaches that are often recommended to it, perhaps a reflection of the problems associated with accuracy assessment, and typically fails to achieve the accuracy targets commonly specified. The community often tends to use, unquestioningly, techniques based on the confusion matrix for which the correct application and interpretation requires the satisfaction of often untenable assumptions (e.g., perfect coregistration of data sets) and the provision of rarely conveyed information (e.g., sampling design for ground data acquisition). Eight broad problem areas that currently limit the ability to appropriately assess, document, and use the accuracy of thematic maps derived from remote sensing are explored. The implications of these problems are that it is unlikely that a single standardized method of accuracy assessment and reporting can be identified, but some possible directions for future research that may facilitate accuracy assessment are highlighted.
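The accuracy measures discussed above (overall, producer's and user's accuracy and the kappa coefficient) follow directly from the confusion matrix; a small numpy sketch, using the convention that rows are reference classes and columns are map classes (conventions vary between studies), is:

```python
# Sketch: standard accuracy measures from an error (confusion) matrix.
# Convention here: rows = reference classes, columns = map classes.
import numpy as np

def accuracy_measures(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / n
    producers = diag / cm.sum(axis=1)   # per reference class
    users = diag / cm.sum(axis=0)       # per map class
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2  # chance agreement
    kappa = (overall - pe) / (1.0 - pe)
    return overall, producers, users, kappa

# Hypothetical three-class example
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 40]]
overall, producers, users, kappa = accuracy_measures(cm)
```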
---
paper_title: Optimizing Remote Sensing and GIS Tools for Mapping and Managing the Distribution of an Invasive Mangrove (Rhizophora mangle) on South Molokai, Hawaii
paper_content:
In 1902, the Florida red mangrove, Rhizophora mangle L., was introduced to the island of Molokai, Hawaii, and has since colonized nearly 25% of the south coast shoreline. By classifying three kinds of remote sensing imagery, we compared abilities to detect invasive mangrove distributions and to discriminate mangroves from surrounding terrestrial vegetation. Using three analytical techniques, we compared mangrove mapping accuracy for various sensor-technique combinations. ANOVA of accuracy assessments demonstrated significant differences among techniques, but no significant differences among the three sensors. We summarize advantages and disadvantages of each sensor and technique for mapping mangrove distributions in tropical coastal environments.
---
paper_title: Tree species mapping with Airborne hyper-spectral MIVIS data: the Ticino Park study case
paper_content:
The present work describes the procedure developed for mapping the spatial distribution of tree forest communities in the Ticino Park, located in Northern Italy. Ten overlapping airborne runs of the Multispectral Infrared Visible Imaging Spectrometer (MIVIS) were acquired to cover the entire park extent (920 km²). An integrated supervised classification procedure was developed using band ratios in the red edge portion (REP) of the spectrum and training data collected by field survey and visual interpretation. Validation, performed with a robust random stratified sampling scheme and taking into account the unequal distribution of the classes, showed that, in large-scale applications, high-resolution remotely sensed images can generate accurate (overall accuracy 75%) local-scale thematic products in a cost-effective manner.
---
paper_title: Remote Sensing Research Priorities in Tropical Dry Forest Environments
paper_content:
Abstract Satellite multi- and hyper-spectral sensors have evolved over the past three decades into powerful monitoring tools for ecosystem processes. Research in temperate environments, however, has tended to keep pace with new remote sensing technologies more so than in tropical environments. Here, we identify what we consider to be three priority areas for remote sensing research in Neotropical dry forests. The first priority is the use of improved sensor capabilities, which should allow for better characterization of tropical secondary forests than has been achieved. Secondary forests are of key interest due to their potential for sequestering carbon in relatively short periods of time. The second priority is the need to characterize leaf area index (LAI) and other biophysical variables by means of bidirectional reflectance function models. These biophysical parameters have importance linkages with net primary productivity and may be estimated through remote sensing. The third priority is to identify t...
---
paper_title: Estimates of forest canopy height and aboveground biomass using ICESat
paper_content:
Exchange of carbon between forests and the atmosphere is a vital component of the global carbon cycle. Satellite laser altimetry has a unique capability for estimating forest canopy height, which has a direct and increasingly well understood relationship to aboveground carbon storage. While the Geoscience Laser Altimeter System (GLAS) onboard the Ice, Cloud and land Elevation Satellite (ICESat) has collected an unparalleled dataset of lidar waveforms over terrestrial targets, processing of ICESat data to estimate forest height is complicated by the pulse broadening associated with large-footprint, waveform-sampling lidar. We combined ICESat waveforms and ancillary topography from the Shuttle Radar Topography Mission to estimate maximum forest height in three ecosystems: tropical broadleaf forests in Brazil, temperate broadleaf forests in Tennessee, and temperate needleleaf forests in Oregon. Final models for each site explained between 59% and 68% of variance in field-measured forest canopy height (RMSE between 4.85 and 12.66 m). In addition, ICESat-derived heights for the Brazilian plots were correlated with field estimates of aboveground biomass (r² = 73%, RMSE = 58.3 Mg ha⁻¹).
---
paper_title: Modelling butterfly distribution based on remote sensing data
paper_content:
Aim: We tested the usefulness of satellite-based remote sensing data and geographical information system (GIS) techniques (1) in explaining the observed distribution of the threatened clouded apollo butterfly (Parnassius mnemosyne) and (2) in predicting the occurrence of the butterfly in two independent test areas with different landscape structure.
Location: The three study areas are located along the rivers Rekijoki and Halikonjoki in a boreal agricultural landscape in south-western Finland (60°40′ N; 23°20′ E).
Methods: Landsat satellite images were used to generate habitat maps of the three study areas. Topographical variables were calculated from a digital elevation model. These data were used to construct a multiple logistic regression model fitted with the observed occurrence of clouded apollo in 126 grid squares of 0.25 km² within an area of 31.5 km² (model building area). The parameterized model was used to predict the occurrence of the butterfly in an adjacent test area with a similar landscape structure as in the model building area, as well as in a more distant test area with a more fragmented pattern of semi-natural grassland.
Results: In the model building area, the probability of clouded apollo occurrence increased with connectivity of semi-natural grassland, cover of deciduous forest, cover of semi-natural slope grassland and topographical heterogeneity. The model accuracy was high, the correct classification rate being 98.4% overall and 95.0% for the butterfly presence squares. In both test areas the overall correct classification of the model prediction was high (c. 92%). However, in predicting the actual butterfly presence squares the model succeeded substantially better in the adjacent than in the distant test area (correct classification rates 91.3% and 66.7%, respectively).
Main conclusions: The results showed that the distribution of a habitat-specialized butterfly may be quite successfully explained and predicted based on pure satellite imagery and topographical data. Nevertheless, the decrease in accuracy of the model prediction when applied to a different landscape structure than the one in which the model was parameterized suggests that the useful application of such models is limited by the environmental variability of the original model-building data.
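A multiple logistic regression of the kind fitted above can be sketched as follows; the grid-square table, predictor names and scikit-learn estimator are illustrative stand-ins, not the study's data or model.

```python
# Sketch: logistic regression of species presence/absence on habitat
# predictors derived from satellite and topographic data (illustrative data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical table: one row per 0.25 km^2 grid square
df = pd.DataFrame({
    "grassland_connectivity": rng.random(126),
    "deciduous_cover":        rng.random(126),
    "slope_grassland_cover":  rng.random(126),
    "topo_heterogeneity":     rng.random(126),
    "presence":               rng.integers(0, 2, 126),
})
X = df.drop(columns="presence").to_numpy()
y = df["presence"].to_numpy()

model = LogisticRegression(max_iter=1000).fit(X, y)
p_occurrence = model.predict_proba(X)[:, 1]    # predicted probability of occurrence
correct_rate = (model.predict(X) == y).mean()  # correct classification rate
```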
---
paper_title: Ground-based Laser Imaging for Assessing Three-dimensional Forest Canopy Structure
paper_content:
Improved understanding of the role of forests in carbon, nutrient, and water cycling can be facilitated with improved assessments of canopy structure, better linking leaf-level processes to canopy structure and forest growth. We examined the use of high-resolution, ground-based laser imaging for the spatially explicit assessment of forest canopies. Multiple range images were obtained and aligned during both leaf-off and leaf-on conditions on a 20 m × 40 m plot. The plot location was within a mixed species broadleaved deciduous forest in western North Carolina. Digital terrain and canopy height models were created for a 0.25 m square grid. Horizontal, vertical, and three-dimensional distributions of plant area index, created using gap-fraction based estimation, had 0.5 m resolution for a cubic lattice. Individual tree measurements, including tree positions and diameter at breast height, were made from the scanner data with positions, on average, within 0.43 m and diameters within 5 cm of independent measurements, respectively. Our methods and results confirm that applications of ground-based laser scanning provide high-resolution, spatially-explicit measures of plot-level forest canopy structure.
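Gap-fraction-based estimation of plant area index is usually an inversion of a Beer-Lambert-type extinction model; a compact, generic statement (not necessarily the exact formulation used in the study) is the following, with P_gap the fraction of pulses passing through a voxel column, theta the scan zenith angle and G(theta) a projection coefficient often taken as about 0.5 for randomly oriented foliage.

```latex
% Gap-fraction inversion for (effective) plant area index in a voxel column
\[
P_{\mathrm{gap}}(\theta) \;=\; \exp\!\left(-\,\frac{G(\theta)\,\mathrm{PAI}}{\cos\theta}\right)
\quad\Longrightarrow\quad
\mathrm{PAI} \;=\; -\,\frac{\cos\theta}{G(\theta)}\,\ln P_{\mathrm{gap}}(\theta).
\]
```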
---
paper_title: Land use/cover changes using Landsat TM/ETM images in a tropical and biodiverse mountainous area of central-eastern Mexico
paper_content:
Land use/cover (LUC) and changes between 1990 and 2003 in a tropical mountainous watershed were analysed with Landsat TM images using a GIS-RS approach. The La Antigua River upper catchment is a 1325 km2, biodiverse hydrological region in central Veracruz, Mexico. A large set of training pixels was used to optimize the representation of environmental heterogeneity. Classification accuracy was assessed with spectral and field-checked error matrices. Overall classification accuracy for the 1990 (78.2%) and 2003 (79.7%) images was satisfactory. Ancillary data (DEM) was incorporated to improve discrimination between LUC categories. The Landsat TM sensor proved sensitive enough to separate the different spectral patterns related to the LUC classes in this complex landscape. The time interval and scale selected are suitable for strategic planning purposes. Depletion of tropical montane cloud forest, and its conversion to pasture and agriculture, was by far the most important LUC change over the period of study.
---
paper_title: Can Error Explain Map Differences Over Time?
paper_content:
This paper presents methods to test whether map error can explain the observed differences between two points in time among categories of land cover in maps. Such differences may be due to two reasons: error in the maps and change on the ground. Our methods use matrix algebra: (1) to determine whether error can explain specific types of observed categorical transitions between two maps, (2) to represent visually the differences between the maps that error cannot explain, and (3) to examine how the results are sensitive to possible variation in map error. The methods complement conventional accuracy assessment because they rely on standard confusion matrices that use either a random or a stratified sampling design. We illustrate the methods with maps from 1971 and 1999, which show seven land-cover categories for central Massachusetts. The methods detect four transitions from agriculture, range, forest, and barren in 1971 to built in 1999, which a 15 percent error cannot explain. Sensitivity analysis reveals that if the accuracy of the maps were less than 77 percent, then error could explain virtually all of the observed differences between the maps. The paper discusses the assumptions behind the methods and articulates priorities for future research.
---
paper_title: Refining Biodiversity Conservation Priorities
paper_content:
Although there is widespread agreement about conservation priorities at large scales (i.e., biodiversity hotspots), their boundaries remain too coarse for setting practical conservation goals. Refining hotspot conservation means identifying specific locations (individual habitat patches) of realistic size and scale for managers to protect and politicians to support. Because hotspots have lost most of their original habitat, species endemic to them rely on what remains. The issue now becomes identifying where this habitat is and these species are. We accomplished this by using straightforward remote sensing and GIS techniques, identifying specific locations in Brazil's Atlantic Forest hotspot important for bird conservation. Our method requires a regional map of current forest cover, so we explored six popular products for mapping and quantifying forest: MODIS continuous fields and a MODIS land cover (preclassified products), AVHRR, SPOT VGT, MODIS (satellite images), and a GeoCover Landsat thematic mapper mosaic (jpg). We compared subsets of these forest covers against a forest map based on a Landsat enhanced thematic mapper. The SPOT VGT forest cover predicted forest area and location well, so we combined it with elevation data to refine coarse distribution maps for forest endemic birds. Stacking these species distribution maps enabled identification of the subregion richest in threatened birds: the lowland forests of Rio de Janeiro State. We highlighted eight priority fragments, focusing on one with finer resolved imagery for detailed study. This method allows prioritization of areas for conservation from a region >1 million km² to forest fragments of tens of square kilometers. To set priorities for biodiversity conservation, coarse biological information is sufficient. Hence, our method is attractive for tropical and biologically rich locations, where species location information is sparse.
---
paper_title: The use of airborne lidar to assess avian species diversity, density, and occurrence in a pine/aspen forest
paper_content:
Abstract Vegetation structure is an important factor that influences wildlife-habitat selection, reproduction, and survival. However, field-based measurements of vegetation structure can be time consuming, costly, and difficult to undertake in areas that are remote and/or contain rough terrain. Light detection and ranging (lidar) is an active remote sensing technology that can quantify three-dimensional vegetation structure over large areas and thus holds promise for examining wildlife-habitat relationships. We used discrete-return airborne lidar data acquired over the Black Hills Experimental Forest in South Dakota, USA in combination with field-collected vegetation and bird data to assess the utility of lidar data in quantifying vegetation structural characteristics that relate to avian diversity, density, and occurrence. Indices of foliage height diversity calculated from lidar data were positively and significantly correlated with indices of bird species diversity, with the highest correlations observed when foliage height diversity categories contained proportionally more foliage layers near the forest floor (
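Foliage height diversity indices of the kind correlated with bird diversity above are typically Shannon indices over the proportion of returns in a set of height layers; the layer boundaries in the sketch are illustrative, not those used in the study.

```python
# Sketch: foliage height diversity (FHD) as a Shannon index over the
# proportion of lidar returns falling in discrete height layers.
import numpy as np

def foliage_height_diversity(return_heights, bins=(0.0, 0.5, 2.0, 5.0, 10.0, 30.0)):
    """return_heights: heights above ground (m) of vegetation returns in a plot."""
    counts, _ = np.histogram(return_heights, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                   # empty layers contribute nothing
    return -np.sum(p * np.log(p))  # Shannon diversity of foliage layers
```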
---
paper_title: An environmental domain classification of Canada using earth observation data for biodiversity assessment
paper_content:
Abstract Broad ecosystem-based classifications are increasingly applied as a context to consider, understand, and manage biodiversity. The need for more spatially explicit, repeatable, transferable, transparent, and defensible environmental regionalization has become apparent. Increased computing power, sophisticated analysis software, and the availability of spatially explicit descriptions of the environment, principally derived from Earth observation data, have facilitated the development of statistical ecosystem regionalizations. These regionalizations are desired to produce environmentally unique ecoregions to provide the basis for stratification for ongoing biodiversity monitoring efforts. Using a suite of indicators of the physical environment, available energy such as vegetation production, and habitat suitability, all derived from remote sensing technology at 1 km spatial resolution, we undertook an environmental regionalization using a two-stage multivariate classification of terrestrial Canada. A relatively large number of classes were initially derived (100) and a hierarchical clustering approach was then applied to derive a 40 level classification. These clusters were then used to assess which clusters were the most dissimilar to the majority, thus providing an indication of the most unique environmental domains across Canada. Secondly, a 14 class stratification was then produced to emulate the current ecozone stratification commonly used in Canada. Results indicated that a number of unique clusters exist across Canada, specifically the forest/urban-industrial/cropland mosaic in the southern portion of Ontario, the mixed wood forests in south–central Ontario and western Quebec, the foothills of south western Alberta, regions of the southern Arctic and the northern Boreal shield (particularly the areas south of Hudson Bay and Labrador). A resemblance between the 14 class stratification and the ecozone classification for Canada is evident; locations of within and between ecozone heterogeneity are also indicated. A key benefit of utilising ecoregions quantitatively using key indicators, such as those derived from remote sensing observations, is the capacity to establish, and quantify, how well particular networks of sites, or plot locations, represent the overall environment. As such, the incorporation of these types of methods, and remotely derived indicators, into biodiversity assessment is an important area of ongoing research.
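A two-stage regionalization of the kind described above can be sketched with off-the-shelf clustering: a k-means pass producing many fine clusters whose centroids are then grouped hierarchically into fewer domains. The indicator stack, cluster counts and library choices are placeholders, not the processing chain used for the Canadian data.

```python
# Sketch: two-stage environmental regionalization: k-means into many fine
# clusters, then hierarchical grouping of the cluster centroids (illustrative).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering

def regionalize(indicators, n_fine=100, n_domains=40, seed=0):
    """indicators: (n_cells, n_variables) stack of per-cell environmental layers."""
    X = StandardScaler().fit_transform(indicators)
    km = KMeans(n_clusters=n_fine, random_state=seed, n_init=10).fit(X)
    # Group the fine-cluster centroids into broader environmental domains
    agg = AgglomerativeClustering(n_clusters=n_domains, linkage="ward")
    domain_of_cluster = agg.fit_predict(km.cluster_centers_)
    return domain_of_cluster[km.labels_]  # domain label for every grid cell
```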
---
paper_title: Identifying and quantifying structural characteristics of heterogeneous boreal forests using laser scanner data
paper_content:
Abstract Structural characteristics of forest stands, e.g. in relation to carbon content and biodiversity, are of special interest. It has been stated in numerous publications that discrete return laser scanner data produce accurate tree canopy information since the quantiles of the height distribution of laser scanner data are related to the vertical structure of the tree canopy. Since some of the laser pulses will penetrate under the dominant tree layer, it is also possible to analyse multi-layered stands. In this study, the existence and number of understory trees were examined. This was carried out by analysing the height distributions of reflected laser pulses. In the laser data of this study (Toposys Falcon, survey May 2003 in Kalkkinen, Finland, flight altitude 400 m AGL), the average number of laser pulse hits per m² was 12. The reference data consisted of 28 accurately measured field sample plots. These plots include highly heterogeneous structures of boreal forests. The existence of lower canopy layers, i.e. understory trees, was analysed visually by viewing the plotwise 3D images of laser scanner-based canopy height point data and examining distributions of canopy densities, which were computed as proportions of laser hits above different height quantiles. Furthermore, a histogram thresholding method developed for this purpose (HistMod) was applied to the height distribution of laser hits in order to separate different tree storeys. Finally, the number and Lorey's mean height of understory trees were predicted with estimated regression models. The results show that multi-layered stand structures can be recognised and quantified using quantiles of laser scanner height distribution data. However, the accuracy of the results is dependent on the density of the dominant tree layer.
---
paper_title: Analysis of full waveform LIDAR data for the classification of deciduous and coniferous trees
paper_content:
The paper describes a methodology for tree species classification using features that are derived from small-footprint full waveform Light Detection and Ranging (LIDAR) data. First, 3-dimensional coordinates of the laser beam reflections, the intensity, and the pulse width are extracted by a waveform decomposition, which fits a series of Gaussian pulses to the waveform. Since multiple reflections are detected, and even overlapping pulse reflections are distinguished, a much higher point density is achieved compared to the conventional first/last-pulse technique. Secondly, tree crowns are delineated from the canopy height model (CHM) using the watershed algorithm. The CHM posts are equally spaced and robustly interpolated from the highest reflections in the canopy. Thirdly, tree features computed from the 3-dimensional coordinates of the reflections, the intensity and the pulse width are used to detect coniferous and deciduous trees by an unsupervised classification. The methodology is applied to datasets that have been captured with the TopEye MK II scanner and the Riegl LMS-Q560 scanner in the Bavarian Forest National Park in leaf-on and leaf-off conditions for Norway spruces, European beeches and Sycamore maples. The classification, which groups the data into two clusters (coniferous, deciduous), leads in the best case to an overall accuracy of 85% in a leaf-on situation and 96% in a leaf-off situation.
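The waveform-decomposition step can be illustrated by fitting a sum of Gaussian pulses to a recorded waveform, which yields per-echo amplitude (an intensity proxy), position and pulse width. The two-pulse example and the initial guesses are placeholders and do not reproduce the processing chain used for the TopEye/Riegl data.

```python
# Sketch: decompose a full-waveform lidar return into Gaussian pulses.
import numpy as np
from scipy.optimize import curve_fit

def gaussians(t, *params):
    """Sum of Gaussian pulses; params = (A1, mu1, sigma1, A2, mu2, sigma2, ...)."""
    y = np.zeros_like(t, dtype=float)
    for A, mu, sigma in zip(params[0::3], params[1::3], params[2::3]):
        y += A * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return y

def decompose_waveform(t, w, p0):
    """t: sample times (ns); w: recorded waveform; p0: initial (A, mu, sigma) triples."""
    popt, _ = curve_fit(gaussians, t, w, p0=p0)
    return popt.reshape(-1, 3)  # one row (amplitude, position, width) per echo

# Hypothetical two-echo waveform with a little noise
t = np.linspace(0, 60, 240)
w = gaussians(t, 0.9, 20.0, 2.5, 0.5, 38.0, 3.0) \
    + 0.02 * np.random.default_rng(0).normal(size=t.size)
echoes = decompose_waveform(t, w, p0=[1.0, 18.0, 3.0, 0.4, 40.0, 3.0])
```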
---
paper_title: Change detection techniques for canopy height growth measurements using airborne laser scanner data
paper_content:
This paper analyzes the potential of airborne laser scanner data for measuring individual tree height growth in a boreal forest using 82 sample trees of Scots pine. Point clouds (10 points/m², beam size 40 cm) illuminating 50 percent of the treetops were acquired in September 1998 and May 2003 with the Toposys 83 kHz lidar system. The reference height and height growth of pines were measured with a tacheometer in the field. Three different types of features were extracted from the point clouds representing each tree; they were the difference between the highest z values, the difference between the DSMs of the tree crown, and the differences between the 85th, 90th and 95th percentiles of the canopy height histograms corresponding to the crown. The best correspondence with the field measurements was achieved with an R² value of 0.68 and an RMSE of 43 cm. The results indicate that it is possible to measure the growth of an individual tree with multi-temporal laser surveys. We also demonstrated a new algorithm for tree-to-tree matching. It is needed in operational growth estimation based on individual trees, especially in dense spruce forests. The method is based on minimizing the distances between treetops in the N-dimensional data space. The experiments showed that the use of the location (derived from laser data) and height of the trees were together adequate to provide reliable tree-to-tree matching. In the future, a fourth dimension (the crown area) should also be included in the matching.
---
paper_title: Effect of coregistration error on patchy target detection using high-resolution imagery
paper_content:
Abstract Many factors influence classification accuracy and a typical error budget includes uncertainty arising from the 1) selection of processing algorithms, 2) selection of training sites, 3) quality of orthorectification, and 4) atmospheric effects. With the development of high spatial resolution imagery, the impact of errors in geographic coregistration between imagery and field sites has become apparent – and potentially limiting – for classification applications, especially those involving patchy target detection. The goal of this study was to document and quantify the effect of coregistration error between imagery and field sites on classification accuracy. Artificial patchy targets were randomly placed over a study area covered by a QuickBird image. Classification accuracy of these targets was assessed at two levels of coregistration. Results showed that producer's accuracy of target classification increased from 37.5% to 100% between low and high levels of coregistration respectively. In addition, “Error due to Location”, a measure of how well pixels were located within respective classes, decreased to zero at high coregistration levels. This study highlights the importance of considering coregistration between imagery and field sites in the error budget, especially with studies involving high spatial resolution imagery and patchy target detection.
---
paper_title: Linking spatial patterns of bird and butterfly species richness with Landsat TM derived NDVI
paper_content:
The ability to predict spatial patterns of species richness using a few easily measured environmental variables would facilitate timely evaluation of potential impacts of anthropogenic and natural disturbances on biodiversity and ecosystem functions. Two common hypotheses maintain that faunal species richness can be explained in part by either local vegetation heterogeneity or primary productivity. Although remote sensing has long been identified as a potentially powerful source of information on the latter, its principal application to biodiversity studies has been to develop classified vegetation maps at relatively coarse resolution, which then have been used to estimate animal diversity. Although classification schemes can be delineated on the basis of species composition of plants, these schemes generally do not provide information on primary productivity. Furthermore, the classification procedure is a time- and labour-intensive process, yielding results with limited accuracy. To meet decision-making ...
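The NDVI used above as a productivity surrogate is a simple band ratio; the sketch computes it from red and near-infrared reflectance and correlates plot-level means with field species richness (all input arrays are hypothetical).

```python
# Sketch: NDVI from red and near-infrared reflectance, and its Pearson
# correlation with species richness per plot (illustrative inputs).
import numpy as np

def ndvi(nir, red, eps=1e-6):
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

# Hypothetical per-plot mean reflectances and richness counts
nir = np.array([0.42, 0.35, 0.50, 0.28, 0.46])
red = np.array([0.08, 0.12, 0.06, 0.15, 0.07])
richness = np.array([18, 12, 22, 9, 20])

plot_ndvi = ndvi(nir, red)
r = np.corrcoef(plot_ndvi, richness)[0, 1]  # Pearson correlation coefficient
```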
---
paper_title: Laser scanning of forest resources: the nordic experience
paper_content:
This article reviews the research and application of airborne laser scanning for forest inventory in Finland, Norway and Sweden. The first experiments with scanning lasers for forest inventory were conducted in 1991 using the FLASH system, a full-waveform experimental laser developed by the Swedish Defence Research Institute. In Finland at the same time, the HUTSCAT profiling radar provided experiences that inspired the following laser scanning research. Since 1995, data from commercially operated time-of-flight scanning lasers (e.g. TopEye, Optech ALTM and TopoSys) have been used. Especially in Norway, the main objective has been to develop methods that are directly suited for practical forest inventory at the stand level. Mean tree height, stand volume and basal area have been the most important forest mensurational parameters of interest. Laser data have been related to field training plot measurements using regression techniques, and these relationships have been used to predict corresponding properti...
---
paper_title: POVERTY AND CORRUPTION COMPROMISE TROPICAL FOREST RESERVES
paper_content:
We used the global fire detection record provided by the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) to determine the number of fires detected inside 823 tropical and subtropical moist forest reserves and for contiguous buffer areas 5, 10, and 15 km wide. The ratio of fire detection densities (detections per square kilometer) inside reserves to their contiguous buffer areas provided an index of reserve effectiveness. Fire detection density was significantly lower inside reserves than in paired, contiguous buffer areas but varied by five orders of magnitude among reserves. The buffer : reserve detection ratio varied by up to four orders of magnitude among reserves within a single country, and median values varied by three orders of magnitude among countries. Reserves tended to be least effective at reducing fire frequency in many poorer countries and in countries beset by corruption. Countries with the most successful reserves include Costa Rica, Jamaica, Malaysia, and Taiwan and the Indonesian island of Java. Countries with the most problematic reserves include Cambodia, Guatemala, Paraguay, and Sierra Leone and the Indonesian portion of Borneo. We provide fire detection density for 3964 tropical and subtropical reserves and their buffer areas in the hope that these data will expedite further analyses that might lead to improved management of tropical reserves.
---
paper_title: A less‐is‐more approach to geovisualization – enhancing knowledge construction across multidisciplinary teams
paper_content:
The 'less-is-more' concept in interface design for computer applications has recently gained ground. In this article, the concept is adopted for the user-centered design of a geovisualization application. The premise is that using simple and clear design can lead to successful applications with improved ease of use. Over the last three decades, the development of GIS and geovisualization has seen a marked increase in the levels of interaction between the user, the system and the information. However, these enthusiastic advances in technology have not resulted in a significant increase in the number of users. This article suggests that types of user interaction should not simply emphasize traditional GIS functions such as zooming and panning but move towards interaction based on facilitating the knowledge construction process. Considerations are made for the complexity of the system, the task at hand and the skills and limitations of the users. These elements are particularly important when maps act as the mediators in collaboration with users across disciplinary backgrounds. In such cases, the emphasis on simplicity and usability becomes as important as functionality. In these situations a geovisualization application designed for specific uses can maximize effective development of geographic knowledge. In this article, a minimalistic design approach to geovisualization is adopted by creating a geographic profiling tool which shifts the emphasis from technological advances or interaction with the map to the interaction elements key to building the spatial knowledge of GIS experts and non-experts alike. To evaluate this notion of 'less-is-more geovisualization', the profiling tool is evaluated according to usability metrics: efficiency, effectiveness and learnability. How well the Suburban Profiler contributes to these elements is assessed by conducting a video analysis of the types and forms of user interaction available. The video analysis demonstrates the usefulness and usability of the Suburban Profiler, providing proof of concept for 'less-is-more geovisualization'.
---
paper_title: A comparison of biophysical parameter retrieval for forestry using airborne and satellite LiDAR
paper_content:
This paper compares vegetation height metrics and fractional cover derived from coincident small footprint, discrete return airborne Light Detection and Ranging (LiDAR) scanning data (Optech Airborne Laser Terrain Mapper (ALTM)) with those estimated from large footprint, full waveform LiDAR profiling using the Geoscience Laser Altimeter System (GLAS). Estimates of maximum canopy height showed correspondence between the two methods with R² = 0.68 (root-mean-square error (RMSE) = 4.4 m). The relationship between 99th percentiles (often associated with forestry top height) showed R² = 0.75, RMSE = 3.5 m. Detection of surface elevation limits corresponded well (R² = 0.71, RMSE = 5.0 m). Correlations between satellite waveform and airborne LiDAR canopy cover estimates gave R² = 0.41 and R² = 0.63 for dominant cover of conifers or broadleaf species, respectively. The results suggest that the broad Ice, Cloud and land Elevation Satellite (ICESat)/GLAS footprints can provide estimates of mixed vegetation canopy height which...
---
paper_title: Progress in the use of remote sensing for coral reef biodiversity studies
paper_content:
Coral reefs are hotspots of marine biodiversity, and their global decline is a threat to our natural heritage. Conservation management of these precious ecosystems relies on accurate and up-to-date information about ecosystem health and the distribution of species and habitats, but such information can be costly to gather and interpret in the field. Remote sensing has proven capable of collecting information on geomorphologic zones and substrate types for coral reef environments, and is cost-effective when information is needed for large areas. Remote sensing-based mapping of coral habitat variables known to influence biodiversity has only recently been undertaken and new sensors and improved data processing show great potential in this area. This paper reviews coral reef biodiversity, the influence of habitat variables on its local spatial distribution, and the potential for remote sensing to produce maps of these habitat variables, thus indirectly mapping coral reef biodiversity and fulfilling information needs of coral reef managers.
---
paper_title: Accurate prediction of bird species richness patterns in an urban environment using Landsat‐derived NDVI and spectral unmixing
paper_content:
Urban landscapes are expanding rapidly and are reshaping the distribution of many animal and plant species. With these changes, the need to understand and to include urban biodiversity patterns in research and management programmes is becoming vital. Recent studies have shown that remote sensing tools can be useful in studies examining biodiversity patterns in natural landscapes. The present study aimed to explore whether remote sensing tools can be applied in biodiversity research in an urban landscape. More specifically, the study examined whether the Landsat-derived Normalized Difference Vegetation Index (NDVI) and linear spectral unmixing of urban land cover can predict bird richness in the city of Jerusalem. Bird richness was sampled in 40 1-ha sites over a range of urban environments in 329 surveys. NDVI and the per cent cover of built-up area were strongly and negatively correlated with each other, and were both very successful in explaining the number of bird species in the study sites. Mean NDVI in each site was positively correlated with the site bird species richness. A hump-shaped relationship between NDVI and species richness was observed (when calculated over increasing spatial scales), with a maximum value (Pearson's R = 0.87, p<0.001, n = 40) at a scale of 15 ha. We suggest that remote sensing approaches may provide planners and conservation biologists with an efficient and cost-effective method to study and estimate biodiversity across urban environments that range between densely built-up areas, residential neighbourhoods, urban parks and the peri-urban environment.
---
paper_title: Monitoring conservation effectiveness in a global biodiversity hotspot: the contribution of land cover change assessment
paper_content:
Tropical forests, which play critical roles in global biogeochemical cycles, radiation budgets and biodiversity, have undergone rapid changes in land cover in the last few decades. This study examines the complex process of land cover change in the biodiversity hotspot of Western Ghats, India, specifically investigating the effects of conservation measures within the Indira Gandhi Wildlife Sanctuary. Current vegetation patterns were mapped using an IRS P6 LISS III image and this was used together with Landsat MSS data from 1973 to map land cover transitions. Two major and divergent trends were observed. A dominant degradational trend can be attributed to agricultural expansion and infrastructure development while a successional trend, resulting from protection of the area, showed the resilience of the system after prolonged disturbances. The sanctuary appears susceptible to continuing disturbances under the current management regime but at lower rates than in surrounding unprotected areas. The study demonstrates that remotely sensed land cover assessments can have important contributions to monitoring land management strategies, understanding processes underpinning land use changes and helping to inform future conservation strategies.
---
paper_title: Land use and biodiversity relationships.
paper_content:
Abstract The relationships between land use and biodiversity are fundamental to understanding the links between people and their environment. Biodiversity can be measured in many ways. The concept covers not only the overall richness of species present in a particular area but also the diversity of genotypes, functional groups, communities, habitats and ecosystems there. As a result, the relationships between biodiversity in its broadest sense and land use can be complex and highly context dependent. Moreover, the relationships between them are often two-way, so that simple relationships between cause and effect can be difficult to identify. In some places, specific land uses or land management practices may be important in sustaining particular patterns of biodiversity. Elsewhere, the uses to which land can be put are highly dependent on the biodiversity resources present. The review will consider how changes in the quantity, quality and spatial configuration of different aspects of land use can impact on different components of biodiversity, and what direct and indirect factors might drive these changes. The need to distinguish between land cover and land use will be discussed in relation to the economic and social drivers of land use change. The review will also consider whether framing biodiversity objectives involves society in placing constraints upon the types of land use and management practice that are possible, and will consider such arguments in relation to assessments of the costs of biodiversity loss. It would seem that while considerable progress has been made in mapping out plausible futures for land use and biodiversity at global and regional scales, closer integration of modelling, scenario and field-based monitoring is needed to strengthen the evidence base available to decision makers. Challenges that face us include how we take account of the qualitative changes in land cover, and the impacts of such modifications on biodiversity and ecosystem services. Broader perspectives on the value of biodiversity and ecosystem services are also needed as the basis for developing adaptive and flexible approaches to policy and management.
---
paper_title: Integrating Habitat Status, Human Population Pressure, and Protection Status into Biodiversity Conservation Priority Setting
paper_content:
Priority setting is an essential component of biodiversity conservation. Existing methods to identify priority areas for conservation have focused almost entirely on biological factors. We suggest a new relative ranking method for identifying priority conservation areas that integrates both biological and social aspects. It is based on the following criteria: the habitat's status, human population pressure, human efforts to protect habitat, and number of endemic plant and vertebrate species. We used this method to rank 25 hotspots, 17 megadiverse countries, and the hotspots within each megadiverse country. We used consistent, comprehensive, georeferenced, and multiband data sets and analytical remote sensing and geographic information system tools to quantify habitat status, human population pressure, and protection status. The ranking suggests that the Philippines, Atlantic Forest, Mediterranean Basin, Caribbean Islands, Caucasus, and Indo-Burma are the hottest hotspots and that China, the Philippines, and India are the hottest megadiverse countries. The great variation in terms of habitat, protected areas, and population pressure among the hotspots, the megadiverse countries, and the hotspots within the same country suggests the need for hotspot- and country-specific conservation policies.
---
paper_title: Landscape change and the dynamics of open formations in a natural reserve
paper_content:
Remote sensing, when used in conjunction with landscape pattern metrics, is a powerful method for the study of ecological dynamics at the landscape scale by means of multi-temporal analysis. In this paper, we examine temporal change in open formations in the natural reserve of Poggio all’Olmo (central Italy). This area has undergone rural depopulation and the cessation of traditional methods of agriculture, resulting in the subsequent re-establishment and spread of other vegetation formations. Aerial photographs taken in 1954, 1977 and 1998 were orthorectified and classified based on the physiognomic characteristics of the vegetation. An objective definition of the minimum mapping unit (MMU) was guaranteed by using vector format grids for this classification. We applied landscape pattern metrics based on landscape composition, the shape and size of patches and patch isolation. Our results demonstrate the key roles of shrubland, woodland and coniferous plantations in the ongoing fragmentation of open formations in the landscape. Multi-temporal landscape analyses, and, in particular, a restricted suite of landscape metrics, proved useful for detecting and quantitatively characterizing dynamic ecological processes. We conclude with some recommendations on the management alternatives feasible for the protection of the remaining grassland formations in the natural reserve of Poggio all’Olmo.
---
paper_title: Amazonian biodiversity and protected areas: do they meet?
paper_content:
Protected areas are crucial for Amazonian nature conservation. Many Amazonian reserves have been selected systematically to achieve biodiversity representativeness. We review the role natural-scientific understanding has played in reserve selection, and evaluate the theoretical potential of the existing reserves to cover a complete sample of the species diversity of the Amazonian rainforest biome. In total, 108 reserves (604,832 km2) are treated as strictly protected and Amazonian; 87 of these can be seen as systematically selected to sample species diversity (75.3% of total area). Because direct knowledge on all species distributions is unavailable, surrogates have been used to select reserves: direct information on some species distributions (15 reserves, 14.8% of total area); species distribution patterns predicted on the basis of conceptual models, mainly the Pleistocene refuge hypothesis, (5/10.3%); environmental units (46/27.3%); or a combination of distribution patterns and environmental units (21/22.9%). None of these surrogates are reliable: direct information on species distributions is inadequate; the Pleistocene refuge hypothesis is highly controversial; and environmental classifications do not capture all relevant ecological variation, and their relevance for species distribution patterns is undocumented. Hence, Amazonian reserves cannot be safely assumed to capture all Amazonian species. To improve the situation, transparency and an active dialogue with the scientific community should be integral to conservation planning. We suggest that the best currently available approach for sampling Amazonian species diversity in reserve selection is to simultaneously inventory indicator plant species and climatic and geological conditions, and to combine field studies with remote sensing.
---
paper_title: GIS-Based Multicriteria Evaluation and Fuzzy Sets to Identify Priority Sites for Marine Protection
paper_content:
There is an increasing momentum within the marine conservation community to develop representative networks of marine protected areas (MPAs) covering up to 30% of global marine habitats. However, marine conservation initiatives are perceived as uncoordinated at most levels of planning and decision-making. These initiatives also face the challenge of being in conflict with ongoing drives for sustained or increased resource extraction. Hence, there is an urgent need to develop large scale theoretical frameworks that explicitly address conflicting objectives that are embedded in the design and development of a global MPA network. Further, the frameworks must be able to guide the implementation of smaller scale initiatives within this global context. This research examines the applicability of an integrated spatial decision support framework based on geographic information systems (GIS), multicriteria evaluation (MCE) and fuzzy sets to objectively identify priority locations for future marine protection. MCE is a well-established optimisation method used extensively in land use resource allocation and decision support, and which has to date been underutilised in marine planning despite its potential to guide such efforts. The framework presented here was implemented in the Pacific Canadian Exclusive Economic Zone (EEZ) using two conflicting objectives - biodiversity conservation and fisheries profit-maximisation. The results indicate that the GIS-based MCE framework supports the objective identification of priority locations for future marine protection. This is achieved by integrating multi-source spatial data, facilitating the simultaneous combination of multiple objectives, explicitly including stakeholder preferences in the decisions, and providing visualisation capabilities to better understand how global MPA networks might be developed under conditions of uncertainty and complexity.
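As an illustration of the weighted-linear-combination step that underlies this kind of GIS-based MCE, the sketch below standardises two hypothetical criterion rasters with linear fuzzy membership functions and combines them with invented stakeholder weights; the actual criteria, weights and fuzzy functions used for the Pacific Canadian EEZ analysis are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical criterion rasters on a 50 x 50 planning grid
# (a biodiversity score and a fisheries-conflict cost; values are synthetic).
biodiversity = rng.random((50, 50)) * 100     # arbitrary units
fishing_cost = rng.random((50, 50)) * 10      # arbitrary units

def fuzzy_linear(x, low, high, increasing=True):
    """Rescale a raster to 0-1 membership with a linear fuzzy function."""
    m = np.clip((x - low) / (high - low), 0.0, 1.0)
    return m if increasing else 1.0 - m

# Standardise: high biodiversity is desirable, high fisheries conflict is not.
b = fuzzy_linear(biodiversity, biodiversity.min(), biodiversity.max(), increasing=True)
f = fuzzy_linear(fishing_cost, fishing_cost.min(), fishing_cost.max(), increasing=False)

# Illustrative stakeholder weights (must sum to 1); not the study's weights.
weights = {"biodiversity": 0.6, "fisheries": 0.4}
suitability = weights["biodiversity"] * b + weights["fisheries"] * f

# Flag the top 10% of cells as candidate priority sites for protection.
threshold = np.quantile(suitability, 0.90)
priority = suitability >= threshold
print(f"candidate priority cells: {priority.sum()} of {priority.size}")
```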
---
paper_title: Land cover mapping of large areas from satellites: status and research priorities
paper_content:
Although land cover mapping is one of the earliest applications of remote sensing technology, routine mapping over large areas has only relatively recently come under consideration. This change has resulted from new information requirements as well as from new developments in remote sensing science and technology. In the near future, new data types will become available that will enable marked progress to be made in land cover mapping over large areas at a range of spatial resolutions. This paper is concerned with mapping strategies based on 'coarse' and 'fine' resolution satellite data as well as their combinations. The status of land cover mapping is discussed in relation to requirements, data sources and analysis methodologies - including pixel or scene compositing, radiometric corrections, classification and accuracy assessment. The overview sets the stage for identifying research priorities in data pre-processing and classification in relation to forthcoming improvements in data sources as well as ne...
---
paper_title: Development of a large area biodiversity monitoring system driven by remote sensing
paper_content:
Biodiversity is a multifaceted concept that often eludes simple operational definitions. As a result, a variety of definitions have been proposed, each with varying levels of complexity and scope. While different definitions of biodiversity exist, the basic unit of measurement for the vast majority of studies is the species. Traditional approaches to measuring species richness provide useful, yet spatially constrained information. Remote sensing offers the opportunity for large area characterizations of biodiversity in a systematic, repeatable, and spatially exhaustive manner. Based on this review, we examine the potential for a remote sensing driven national biodiversity monitoring system for Canada, a country approaching 1 billion ha in area, with the aim of producing recommendations that are transferable to regional or continental applications. A combination of direct and indirect approaches is proposed, with four selected key indicators of diversity that can be derived from Earth ...
---
paper_title: A stochastic approach to marine reserve design: Incorporating data uncertainty
paper_content:
Abstract Marine reserves, or protected areas, are used to meet an array of biodiversity and conservation objectives. The design of regional networks of marine reserves is concerned with the problem of where to place the marine protected areas and how to spatially configure them. Quantitative methods for doing this provide important decision support tools for marine managers. The central problem is to balance the costs and benefits of the reserve network, whilst satisfying conservation objectives (hence solving a constrained optimization problem). Current optimization algorithms for reserve design are widely used, but none allow for the systematic incorporation of data uncertainty and its effect on the reserve design solutions. The central purpose of this study is to provide a framework for incorporating uncertain ecological input data into algorithms for designing networks of marine reserves. In order to do this, a simplified version of the marine reserve design optimization problem is considered. A Metropolis–Hastings random search procedure is introduced to systematically sample the model solution space and converge on an optimal reserve design. Incorporation of the uncertain input data builds on this process and relies on a parametric bootstrapping procedure. This allows for the solution (i.e. the marine reserve design) to be expressed as the probability of any planning unit being included in the marine reserve network. Spatial plots of this acceptance probability are easily interpretable for decision making under uncertainty. The bootstrapping methodology is also readily adapted to existing comprehensive reserve design algorithms. Here, a preliminary application of the algorithm is made to the Mesoamerican Barrier Reef System (in the Caribbean Sea) based on satellite-derived and mapped conservation features (from Landsat).
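The following toy sketch is not the authors' algorithm; it only illustrates, on synthetic planning units, the two ideas the abstract combines: a Metropolis-style random search that trades off reserve cost against a representation target, and a parametric bootstrap over uncertain feature data that turns the result into a per-unit selection probability. The unit costs, feature amounts, 30% target and shortfall penalty are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 60
cost = rng.uniform(1, 5, n_units)             # cost of including each planning unit
feature_mean = rng.uniform(0, 10, n_units)    # estimated amount of a conservation feature
feature_sd = 0.2 * feature_mean               # assumed uncertainty of those estimates
target = 0.3 * feature_mean.sum()             # protect ~30% of the feature

def objective(selected, feature, penalty=50.0):
    """Reserve cost plus a penalty for missing the representation target."""
    shortfall = max(0.0, target - feature[selected].sum())
    return cost[selected].sum() + penalty * shortfall

def metropolis_search(feature, n_iter=5000, temp=1.0):
    selected = rng.random(n_units) < 0.3      # random initial reserve
    score = objective(selected, feature)
    for _ in range(n_iter):
        cand = selected.copy()
        cand[rng.integers(n_units)] ^= True   # flip one unit in or out
        cand_score = objective(cand, feature)
        # Always accept improvements; accept worse moves with Metropolis probability.
        if cand_score < score or rng.random() < np.exp((score - cand_score) / temp):
            selected, score = cand, cand_score
    return selected

# Parametric bootstrap: resample the uncertain feature data, re-run the search,
# and express the solution as each unit's probability of being selected.
n_boot = 50
counts = np.zeros(n_units)
for _ in range(n_boot):
    feature_draw = np.clip(rng.normal(feature_mean, feature_sd), 0, None)
    counts += metropolis_search(feature_draw)
selection_prob = counts / n_boot
print("units selected in >80% of bootstrap runs:", np.where(selection_prob > 0.8)[0])
```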
---
paper_title: Hyperspectral Remote Sensing of Canopy Biodiversity in Hawaiian Lowland Rainforests
paper_content:
Mapping biological diversity is a high priority for conservation research, management and policy development, but few studies have provided diversity data at high spatial resolution from remote sensing. We used airborne imaging spectroscopy to map woody vascular plant species richness in lowland tropical forest ecosystems in Hawai’i. Hyperspectral signatures spanning the 400–2,500 nm wavelength range acquired by the NASA Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) were analyzed at 17 forest sites with species richness values ranging from 1 to 17 species per 0.1–0.3 ha. Spatial variation (range) in the shape of the AVIRIS spectra (derivative reflectance) in wavelength regions associated with upper-canopy pigments, water, and nitrogen content was well correlated with species richness across field sites. An analysis of leaf chlorophyll, water, and nitrogen content within and across species suggested that increasing spectral diversity was linked to increasing species richness by way of increasing biochemical diversity. A linear regression analysis showed that species richness was predicted by a combination of four biochemically-distinct wavelength observations centered at 530, 720, 1,201, and 1,523 nm (r2 = 0.85, p < 0.01). This relationship was used to map species richness at approximately 0.1 ha resolution in lowland forest reserves throughout the study region. Future remote sensing studies of biodiversity will benefit from explicitly connecting chemical and physical properties of the organisms to remotely sensed data.
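The abstract reports a four-band linear model of richness (r2 = 0.85) but not its coefficients. The sketch below only shows how such a model can be fitted and then applied pixel by pixel to map richness; all data and coefficients are synthetic stand-ins for the derivative-reflectance observations at 530, 720, 1201 and 1523 nm.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic derivative-reflectance features at four wavelengths for 17 field plots.
X = rng.normal(size=(17, 4))
true_coef = np.array([3.0, -2.0, 1.5, 2.5])            # invented for illustration
richness = 8 + X @ true_coef + rng.normal(scale=1.0, size=17)

model = LinearRegression().fit(X, richness)
print("R^2 on the training plots:", round(model.score(X, richness), 2))

# Apply the fitted model to every pixel of a (synthetic) image cube
# to map predicted species richness across a reserve.
image = rng.normal(size=(100, 100, 4))                  # rows x cols x 4 features
richness_map = model.predict(image.reshape(-1, 4)).reshape(100, 100)
print(f"predicted richness range: {richness_map.min():.1f} - {richness_map.max():.1f}")
```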
---
paper_title: Behavioral barriers to non-migratory movements of birds
paper_content:
Although effects of physical barriers to animal movement are well established, the behavioral inhibition of individuals moving across habitat gaps, ecotones, and interpatch (matrix) habitat has received little attention. Birds are often cited as a taxon in which movements should not be disrupted by gaps in landscape connectivity. Here we synthesize evidence from the literature for behavioral inhibition of movements by birds, and find that a wide variety of behavioral inhibitions to movements have been observed. We also present a model for describing edge or gap permeability that incorporates the propensity of an individual to cross an ecotone or enter a gap, and the effect of gap width. From published observations, we propose five ecologically based patterns of behavioral inhibition of bird movements as hypotheses: that habitat specialists, understory-dwellers, tropical species, solitary species, and non-migratory species are more inhibited than are species that are their ecological counterparts. Understanding what animals perceive as impediments to movement will contribute to efforts to maintain populations through landscape design, and will allow us to predict the types and degrees of habitat fragmentation that will cause persistence problems for various species.
---
paper_title: Crop classification by support vector machine with intelligently selected training data for an operational application
paper_content:
The accuracy of a supervised classification is dependent to a large extent on the training data used. The aim in training is often to capture a large training set to fully describe the classes spectrally, commonly with the requirements of a conventional statistical classifier in mind. However, it is not always necessary to provide a complete description of the classes, especially if using a support vector machine (SVM) as the classifier. An SVM seeks to fit an optimal hyperplane between the classes and uses only some of the training samples that lie at the edge of the class distributions in feature space (support vectors). This should allow the definition of the most informative training samples prior to the analysis. An approach to identify informative training samples was demonstrated for the classification of agricultural classes in the south-western part of Punjab state, India. A small, intelligently selected, training dataset was acquired in the field with the aid of ancillary information. This dataset contained the data from training sites that were predicted before the classification to be amongst the most informative for an SVM classification. The intelligent training collection scheme yielded a classification of comparable accuracy, ∼91%, to one derived using a larger training set acquired by a conventional approach. Moreover, from inspection of the training sets it was apparent that the intelligently defined training set contained a greater proportion of support vectors (0.70), useful training sites, than that acquired by the conventional approach (0.41). By focusing on the most informative training samples, the intelligent scheme required less investment in training than the conventional approach and its adoption would have reduced the total financial outlay in classification production and evaluation by ∼26%. Additionally, the analysis highlighted the possibility to further reduce the training set size without any significant negative impact on classification accuracy.
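A minimal sketch of the point being made: an SVM retains only the support vectors from its training set, so the proportion of support vectors indicates how much of the training effort was actually informative. The data below are synthetic, not the Punjab crop dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic spectral training data standing in for agricultural classes.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=7)

svm = SVC(kernel="rbf", C=10, gamma="scale").fit(X_train, y_train)
print("overall accuracy:", round(svm.score(X_test, y_test), 3))

# Fraction of training samples that ended up as support vectors: only these
# define the class boundaries, which is why a small, well-chosen training set
# can match a much larger conventionally collected one.
frac_sv = len(svm.support_) / len(X_train)
print("fraction of training samples used as support vectors:", round(frac_sv, 2))
```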
---
paper_title: Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner
paper_content:
In this study we use a technique referred to as Gaussian decomposition for processing and calibrating data acquired with a novel small-footprint airborne laser scanner that digitises the complete waveform of the laser pulses scattered back from the Earth's surface. This paper presents the theoretical basis for modelling the waveform as a series of Gaussian pulses. In this way the range, amplitude, and width are provided for each pulse. Using external reference targets it is also possible to calibrate the data. The calibration equation takes into account the range, the amplitude, and pulse width and provides estimates of the backscatter cross-section of each target. The applicability of this technique is demonstrated based on RIEGL LMS-Q560 data acquired over the city of Vienna.
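The calibration equation is not reproduced in the abstract, so the sketch below covers only the decomposition step: modelling a synthetic returned waveform as a sum of Gaussian pulses and estimating each pulse's position, amplitude and width by non-linear least squares (a two-echo case, assuming SciPy's curve_fit as the solver).

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def two_pulse_waveform(t, a1, m1, s1, a2, m2, s2):
    """Waveform modelled as the sum of two Gaussian echoes (e.g. canopy + ground)."""
    return gaussian(t, a1, m1, s1) + gaussian(t, a2, m2, s2)

# Synthetic digitised waveform: a canopy echo and a ground echo plus noise.
t = np.arange(0, 60, 1.0)                      # sample bins
rng = np.random.default_rng(3)
signal = two_pulse_waveform(t, 40, 18, 2.5, 70, 42, 1.8) + rng.normal(0, 2, t.size)

# Initial guesses, e.g. from local maxima of the smoothed waveform.
p0 = [30, 15, 2, 60, 40, 2]
params, _ = curve_fit(two_pulse_waveform, t, signal, p0=p0)

a1, m1, s1, a2, m2, s2 = params
print(f"echo 1: amplitude {a1:.1f}, position {m1:.1f}, width {s1:.1f}")
print(f"echo 2: amplitude {a2:.1f}, position {m2:.1f}, width {s2:.1f}")
```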
---
paper_title: Mapping snags and understory shrubs for a LiDAR-based assessment of wildlife habitat suitability
paper_content:
Abstract The lack of maps depicting forest three-dimensional structure, particularly as pertaining to snags and understory shrub species distribution, is a major limitation for managing wildlife habitat in forests. Developing new techniques to remotely map snags and understory shrubs is therefore an important need. To address this, we first evaluated the use of LiDAR data for mapping the presence/absence of understory shrub species and different snag diameter classes important for birds (i.e. ≥ 15 cm, ≥ 25 cm and ≥ 30 cm) in a 30,000 ha mixed-conifer forest in Northern Idaho (USA). We used forest inventory plots, LiDAR-derived metrics, and the Random Forest algorithm to achieve classification accuracies of 83% for the understory shrubs and 86% to 88% for the different snag diameter classes. Second, we evaluated the use of LiDAR data for mapping wildlife habitat suitability using four avian species (one flycatcher and three woodpeckers) as case studies. For this, we integrated LiDAR-derived products of forest structure with available models of habitat suitability to derive a variety of species-habitat associations (and therefore habitat suitability patterns) across the study area. We found that the value of LiDAR resided in the ability to quantify 1) ecological variables that are known to influence the distribution of understory vegetation and snags, such as canopy cover, topography, and forest succession, and 2) direct structural metrics that indicate or suggest the presence of shrubs and snags, such as the percent of vegetation returns in the lower strata of the canopy (for the shrubs) and the vertical heterogeneity of the forest canopy (for the snags). When applied to wildlife habitat assessment, these new LiDAR-based maps refined habitat predictions in ways not previously attainable using other remote sensing technologies. This study highlights new value of LiDAR in characterizing key forest structure components important for wildlife, and warrants further applications to other forested environments and wildlife species.
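A hedged sketch of the classification step described: a Random Forest trained on plot-level LiDAR metrics to predict snag presence/absence. The metrics, the rule generating the synthetic labels and the resulting accuracy are invented for illustration and are not the Idaho results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_plots = 300

# Synthetic plot-level LiDAR metrics standing in for the predictors described
# (canopy cover, height variability, low-stratum return fraction, slope).
canopy_cover = rng.uniform(0, 1, n_plots)
height_std = rng.uniform(0, 15, n_plots)
low_returns = rng.uniform(0, 0.6, n_plots)
slope = rng.uniform(0, 40, n_plots)
X = np.column_stack([canopy_cover, height_std, low_returns, slope])

# Invented rule generating presence/absence of large snags for the example only.
prob = 1 / (1 + np.exp(-(0.15 * height_std - 3 * canopy_cover + 1)))
snag_present = rng.random(n_plots) < prob

rf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(rf, X, snag_present, cv=5).mean()
print("cross-validated accuracy:", round(acc, 2))

rf.fit(X, snag_present)
for name, imp in zip(["canopy_cover", "height_std", "low_returns", "slope"],
                     rf.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```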
---
paper_title: Assessing forest metrics with a ground-based scanning lidar
paper_content:
A ground-based scanning lidar (light detection and ranging) system was evaluated to assess its potential utility for tree-level forest mensuration data extraction. Ground-based-lidar and field-mens...
---
paper_title: COUPLING ECOLOGY AND GIS TO EVALUATE EFFICACY OF MARINE PROTECTED AREAS IN HAWAII
paper_content:
In order to properly determine the efficacy of marine protected areas (MPAs), a seascape perspective that integrates ecosystem elements at the appropriate ecological scale is necessary. Over the past four decades, Hawaii has developed a system of 11 Marine Life Conservation Districts (MLCDs) to conserve and replenish marine resources around the state. Initially established to provide opportunities for public interaction with the marine environment, these MLCDs vary in size, habitat quality, and management regimes, providing an excellent opportunity to test hypotheses concerning MPA design and function using multiple discrete sampling units. Digital benthic habitat maps for all MLCDs and adjacent habitats were used to evaluate the efficacy of existing MLCDs using a spatially explicit stratified random sampling design. Analysis of benthic cover validated the a priori classification of habitat types and provided justification for using these habitat strata to conduct stratified random sampling and analyses of fish habitat utilization patterns. Results showed that a number of fish assemblage characteristics (e.g., species richness, biomass, diversity) vary among habitat types, but were significantly higher in MLCDs compared with adjacent fished areas across all habitat types. Overall fish biomass was 2.6 times greater in the MLCDs compared to open areas. In addition, apex predators and other species were more abundant and larger in the MLCDs, illustrating the effectiveness of these closures in conserving fish populations within their boundaries. Habitat type, protected area size, and level of protection from fishing were all important determinants of MLCD effectiveness with respect to their associated fish assemblages. Although size of these protected areas was positively correlated with a number of fish assemblage characteristics, all appear too small to have any measurable influence on the adjacent fished areas. These protected areas were not designed for biodiversity conservation or fisheries enhancement yet still provide varying degrees of protection for fish populations within their boundaries. Implementing this type of biogeographic process, using remote sensing technology and sampling across the range of habitats present within the seascape, provides a robust evaluation of existing MPAs and can help to define ecologically relevant boundaries for future MPA design in a range of locations.
---
paper_title: Remote sensing and land cover area estimation
paper_content:
This article gives an overview of different ways to use satellite images for land cover area estimation. Approaches are grouped into three categories. (1) Estimates coming essentially from remote sensing. Ground data are used as an auxiliary tool, mainly as training data for image classification, or sub-pixel analysis. Area estimates from pixel counting are sometimes used without a solid statistical justification. (2) Methods, such as regression, calibration and small area estimators, combining exhaustive but inaccurate information (from satellite images) with accurate information on a sample (most often ground surveys). (3) Satellite images can support area frame surveys in several ways: to define sampling units, for stratification; as graphic documents for the ground survey, or for quality control. Cost-efficiency is discussed. Operational use of remote sensing is easier now with cheaper Landsat Thematic Mapper images and computing, but many administrations are reluctant to integrate remote sensing in ...
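Category (2) can be illustrated with the classical regression estimator, which corrects the exhaustive but biased image-based estimate using accurate ground data on a sample. The sketch below uses entirely synthetic segment data, and the estimator form shown (sample mean plus slope times the gap between population and sample means of the auxiliary variable) is the standard textbook one, not necessarily the exact variant discussed in the article.

```python
import numpy as np

rng = np.random.default_rng(11)

# Each segment has a "true" crop fraction (known only on the ground-surveyed
# sample) and a classified fraction from imagery (known everywhere); both invented.
n_population = 2000
true_frac = np.clip(rng.normal(0.35, 0.15, n_population), 0, 1)
classified_frac = np.clip(0.8 * true_frac + 0.1 + rng.normal(0, 0.05, n_population), 0, 1)

sample_idx = rng.choice(n_population, size=100, replace=False)
y = true_frac[sample_idx]          # accurate ground data, sample only
x = classified_frac[sample_idx]    # image classification on the same segments

# Regression estimator: correct the biased image-based mean using the sample relationship.
b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
y_reg = y.mean() + b * (classified_frac.mean() - x.mean())

print("pixel-counting estimate    :", round(classified_frac.mean(), 3))
print("ground-sample-only estimate:", round(y.mean(), 3))
print("regression estimate        :", round(y_reg, 3))
print("true population mean       :", round(true_frac.mean(), 3))
```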
---
paper_title: Mediterranean ecosystems: problems and tools for conservation
paper_content:
Mediterranean ecosystems rival tropical ecosystems in terms of plant biodiversity. The Mediterranean Basin (MB) itself hosts 25 000 plant species, half of which are endemic. This rich biodiversity and the complex biogeographical and political issues make conservation a difficult task in the region. Species, habitat, ecosystem and landscape approaches have been used to identify conservation targets at various scales: i.e., European, national, regional and local. Conservation decisions require adequate information at the species, community and habitat level. Nevertheless, and despite recent improvements and efforts, this information is still incomplete, fragmented and varies from one country to another. This paper reviews the biogeographic data, the problems arising from current conservation efforts and methods for conservation assessment and prioritization using GIS. GIS has an important role to play in managing spatial and attribute information on the ecosystems of the MB and in facilitating interactions with existing databases. Where limited information is available it can be used for prediction when directly or indirectly linked to externally built models. As well as being a predictive tool, GIS today incorporates spatial techniques that can improve the level of information, such as fuzzy logic and geostatistics, or provide insight into landscape change, such as 3D visualization. Where resources are limited, it can assist with identifying sites of conservation priority or the resolution of environmental conflicts (scenario building). Although not a panacea, GIS is an invaluable tool for improving the understanding of Mediterranean ecosystems and their dynamics and for practical management in a region that is under increasing pressure from human impact.
---
paper_title: Interannual variability of NDVI and species richness in Kenya
paper_content:
Ecologists have long recognized the spatial variability of species richness. In an attempt to identify the factors responsible for this variability, ecologists have traditionally used environmental data obtained from sparse point samples (such as meteorological stations). However, remotely sensed data also provide a means of estimating relevant environmental factors and thereby improving predictions of species richness. The Advanced Very High Resolution Radiometer Normalized Difference Vegetation Index (AVHRR NDVI) has been shown to be related to net primary productivity (NPP) and actual evapotranspiration (AET) for many vegetation types. NPP and AET have frequently been used as surrogate measures for species richness. Local spatial variability of NPP and AET, indicating habitat heterogeneity, is hypothesized as another influence on species richness. We examined the relationship between interannual maximum NDVI parameters and species richness of vascular plants and mammals. The study was done at a landsca...
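NDVI itself is the standard (NIR − red) / (NIR + red) ratio; the interannual maximum-NDVI parameters and their relation to richness are specific to the Kenyan landscapes, so the snippet below only shows, on synthetic multi-year data, how such per-site metrics might be assembled before correlating them with richness.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

rng = np.random.default_rng(8)
n_sites, n_years = 25, 10

# Synthetic annual-maximum reflectances for each site and year.
red = rng.uniform(0.03, 0.12, (n_sites, n_years))
nir = rng.uniform(0.25, 0.55, (n_sites, n_years))
annual_max_ndvi = ndvi(nir, red)

# Per-site interannual metrics analogous to those used as richness predictors.
mean_max_ndvi = annual_max_ndvi.mean(axis=1)
cv_max_ndvi = annual_max_ndvi.std(axis=1) / mean_max_ndvi

# Invented species richness loosely tied to productivity, for illustration only.
richness = 20 + 60 * mean_max_ndvi + rng.normal(0, 4, n_sites)
r = np.corrcoef(mean_max_ndvi, richness)[0, 1]
print("correlation between mean annual-max NDVI and richness:", round(r, 2))
```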
---
paper_title: Climate: Counting carbon in the Amazon
paper_content:
If the next climate treaty tackles deforestation, tropical nations will need to monitor the biomass of their forests. One ecologist has worked out a way to do that from the sky, finds Jeff Tollefson.
---
paper_title: The first detailed land‐cover map of Socotra Island by Landsat/ETM+ data
paper_content:
The present study has produced the first detailed land-cover map of Socotra Island. A Landsat 7 ETM+ dataset was used as the main source of remotely sensed data. From more than 250 reference points collected during ground data verification, a set of training fields and a set of evaluation fields were digitised. As the classification method, supervised maximum likelihood classification without prior probabilities was used in combination with rule-based post-classification sorting, providing results of sufficient accuracy and subject resolution. Estimates of the area and degree of coverage of particular land-cover classes within Socotra Island provide an excellent overview of the state of island biotopes. The overall accuracy of the map is more than 80%, and 19 terrestrial land-cover classes (including three types of Shrublands, three types of Woodlands, two types of Forests and Mangroves) have been distinguished. It consequently allows estimates of the current and potential occurrence of endemic plant p...
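A minimal sketch of a supervised maximum likelihood classifier of the kind described: each class is modelled by a multivariate Gaussian fitted to its training pixels, and each pixel is assigned to the class with the highest likelihood (equal priors, as in the abstract). The bands, classes and pixel values below are synthetic, not the Socotra training fields.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(21)

# Synthetic training pixels (6 ETM+ bands) for three invented land-cover classes.
means = [np.array([30, 40, 35, 90, 70, 40]),    # "shrubland"
         np.array([25, 30, 20, 120, 60, 30]),   # "woodland"
         np.array([60, 70, 80, 95, 110, 90])]   # "bare ground"
train = {c: rng.normal(m, 5, size=(200, 6)) for c, m in enumerate(means)}

# Fit one multivariate Gaussian per class from its training fields.
models = {c: multivariate_normal(x.mean(axis=0), np.cov(x, rowvar=False))
          for c, x in train.items()}

def ml_classify(pixels):
    """Assign each pixel to the class with the maximum likelihood (equal priors)."""
    loglik = np.column_stack([m.logpdf(pixels) for m in models.values()])
    return loglik.argmax(axis=1)

# Classify a synthetic image and report class areas as pixel counts.
image = rng.normal(means[1], 8, size=(5000, 6))     # mostly "woodland"-like pixels
labels = ml_classify(image)
print("pixel counts per class:", np.bincount(labels, minlength=3))
```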
---
paper_title: Deforestation in Central Africa: Estimates at regional, national and landscape levels by advanced processing of systematically-distributed Landsat extracts
paper_content:
Accurate land cover change estimates are among the headline indicators set by the Convention on Biological Diversity to evaluate the progress toward its 2010 target concerning habitat conservation. Tropical deforestation is of prime interest since it threatens the terrestrial biomes hosting the highest levels of biodiversity. Local forest change dynamics, detected over very large extents, are necessary to derive regional and national figures for multilateral environmental agreements and sustainable forest management. Current deforestation estimates in Central Africa are derived either from coarse to medium resolution imagery or from wall-to-wall coverage of limited areas. Whereas the first approach cannot detect small forest changes widely spread across a landscape, operational costs limit the mapping extent in the second approach. This research developed and implemented a new cost-effective approach to derive area estimates of land cover change by combining a systematic regional sampling scheme based on high spatial resolution imagery with object-based unsupervised classification techniques. A multi-date segmentation is obtained by grouping pixels with similar land cover change trajectories which are then classified by unsupervised procedures. The interactive part of the processing chain is therefore limited to land cover class labelling of object clusters. The combination of automated image processing and interactive labelling renders this method cost-efficient. The approach was operationally applied to the entire Congo River basin to accurately estimate deforestation at regional, national and landscape levels. The survey was composed of 10 x 10 km sampling sites systematically-distributed every 0.5 degrees over the whole forest domain of Central Africa, corresponding to a sampling rate of 3.3%. For each of the 571 sites, subsets were extracted from both Landsat TM and ETM+ imagery acquired in 1990 and 2000 respectively. Approximately 60% of the 390 cloud-free samples do not show any forest cover change. For the other 165 sites, the results are depicted by a change matrix for every sample site describing four land cover change processes: deforestation, reforestation, forest degradation and forest recovery. This unique exercise estimates the deforestation rate at 0.21% per year, while the forest degradation rate is close to 0.15% per year. However, these figures are less reliable for the coastal region where there is a lack of cloud-free imagery. The results also show that the Landscapes designated after 2000 as high priority conservation zones by the Congo Basin Forest Partnership had undergone significantly less deforestation and forest degradation between 1990 and 2000 than the rest of the Central African forest.
---
paper_title: Bird distributions relative to remotely sensed habitats in Great Britain : Towards a framework for national modelling
paper_content:
Abstract This paper develops a comprehensive and objective picture of bird distributions relative to habitats across Britain. Bird species presence/absence data from an extensive field survey and habitat data from the remotely sensed UK Land Cover Map 2000 were analysed in 36,920 tetrads (2 km×2 km) across Britain (a 65% sample of Britain's c. 240 000 km2). Cluster analysis linked birds to generalised landscapes based on distinctive habitat assemblages. Maps of the clusters showed strong regional patterns associated with the habitat assemblages. Cluster centroid coordinates for each bird species and each habitat were combined across clusters to derive individualised bird–habitat preference indices and examine the importance of individual habitats for each bird species. Even rare species and scarce habitats showed successful linkages. Results were assessed against published accounts of bird–habitat relations. Objective corroboration strongly supported the associations. Relatively scarce coastal and wetland habitats proved particularly important for many birds. However, extensive arable farmland and woodland habitats were also favoured by many species, despite reported declines in bird numbers in these habitats. The fact that habitat-specialists do not or cannot move habitat is perhaps a reason for declining numbers where habitats have become unsuitable. This study showed that there are unifying principles determining bird–habitat relations which apply and can be quantified at the national scale, and which corroborate and complement the cumulative knowledge of many and varied surveys and ecological studies. This ‘generality’ suggests that we may be able, reliably and objectively, to integrate and scale up such disparate studies to the national scale, using this generalised framework. It also suggests the potential for a landscape ecology approach to bird–habitat analyses. Such developments will be important steps in building models to develop and test the sustainable management of landscapes for birds.
---
paper_title: Advanced full-waveform lidar data echo detection: Assessing quality of derived terrain and tree height models in an alpine coniferous forest
paper_content:
Small footprint full-waveform airborne lidar systems offer large opportunities for improved forest characterization. To take advantage of full-waveform information, this paper presents a new processing method based on the decomposition of waveforms into a sum of parametric functions. The method consists of an enhanced peak detection algorithm combined with advanced echo modelling including Gaussian and generalized Gaussian models. The study focuses on the qualification of the extracted geometric information. Resulting 3D point clouds were compared to the point cloud provided by the operator. 40 to 60% additional points were detected mainly in the lower part of the canopy and in the low vegetation. Their contribution to Digital Terrain Models (DTMs) and Canopy Height Models (CHMs) was then analysed. The quality of DTMs and CHM-based heights was assessed using field measurements on black pine plots under various topographic and stand characteristics. Results showed only slight improvements, up to 5 cm bias ...
---
paper_title: Mapping and indicator approaches for the assessment of habitats at different scales using remote sensing and GIS methods
paper_content:
The paper presents a case study for the application of satellite remote sensing and GIS data and methods in the context of habitat monitoring and landscape assessment at different scales. The range of work covers the production of overview maps of land cover, techniques of classification for detailed habitat maps, change detection as a management support tool for the updating of existing habitat databases and an integrative GIS model to delineate habitat suitability for key species. Furthermore, the role of comprehensive indicators and historical satellite data in investigating landscape change over two decades on a regional scale is discussed. Future activities for transferring the respective approaches onto a pan-European scale are presented in a concluding discussion.
---
paper_title: A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon
paper_content:
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
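The entropy and second-moment (angular second moment) textures mentioned are derived from a grey-level co-occurrence matrix computed in a moving window (9 × 9 pixels here). The sketch below computes both measures manually with NumPy for a single synthetic window; the quantisation level and pixel offset are assumptions, and in the study these textures were combined with HRG spectral bands in a maximum likelihood classifier.

```python
import numpy as np

def glcm_entropy_asm(window, levels=16, offset=(0, 1)):
    """Entropy and angular second moment from a grey-level co-occurrence matrix."""
    # Quantise the window to a small number of grey levels.
    rng_span = window.max() - window.min() + 1e-9
    q = np.floor(levels * (window - window.min()) / rng_span).astype(int)
    q = np.clip(q, 0, levels - 1)

    # Count co-occurrences of grey levels at the given pixel offset.
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm /= glcm.sum()

    nonzero = glcm[glcm > 0]
    entropy = -(nonzero * np.log2(nonzero)).sum()
    asm = (glcm ** 2).sum()          # angular second moment
    return entropy, asm

rng = np.random.default_rng(4)
window = rng.integers(0, 255, size=(9, 9)).astype(float)   # one 9 x 9 pixel window
entropy, asm = glcm_entropy_asm(window)
print(f"entropy: {entropy:.2f}, angular second moment: {asm:.4f}")
```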
---
paper_title: Landscape pattern and species richness; regional scale analysis from remote sensing
paper_content:
Concern about the future of biodiversity in the wider countryside is stimulating the development of methods for species and ecosystem monitoring over large areas. The objective of this paper is to explore the potential of remotely sensed data for measuring landscape structure as an important determinant of species diversity. Data from the satellite Land Cover Map of Great Britain, a land cover classification of Landsat Thematic Mapper scenes, were used to derive a set of simple measures of landscape structure within 2km x 2km tetrads for three vascular plant families. Results from a model to predict plant diversity from landscape structure alone proved difficult to interpret ecologically and highlighted the need to obtain data on both landscape quality and landscape structure.
---
paper_title: Landsat's Role in Ecological Applications of Remote Sensing
paper_content:
Remote sensing, geographic information systems, and modeling have combined to produce a virtual explosion of growth in ecological investigations and applications that are explicitly spatial and temporal. Of all remotely sensed data, those acquired by Landsat sensors have played the most pivotal role in spatial and temporal scaling. Modern terrestrial ecology relies on remote sensing for modeling biogeochemical cycles and for characterizing land cover, vegetation biophysical attributes, forest structure, and fragmentation in relation to biodiversity. Given the more than 30-year record of Landsat data, mapping land and vegetation cover change and using the derived surfaces in ecological models is becoming commonplace. In this article, we summarize this large body of work, highlighting the unique role of Landsat.
---
paper_title: Associations between wasp communities and forest structure: Do strong local patterns hold across landscapes?
paper_content:
Forest structure and habitat complexity have been used extensively to predict the distribution and abundance of insect assemblages in forest ecosystems. We tested empirically derived predictions of strong, consistent relationships between wasp assemblages and habitat complexity, using both field assessments and vegetation indices from remote sensing as measures of habitat complexity. Wasp samples from 26 paired 'high and low' complexity sites in two forests approximately 70 km apart were compared with normalized difference vegetation indices (NDVIs) derived from multispectral videography of the survey sites. We describe a strong, unequivocal link between habitat complexity and wasp communities, the patterns holding over coarse and fine landscape scales. NDVIs were also excellent predictors of habitat complexity and hence wasp community patterns. Sites with greater NDVIs consistently supported a greater abundance and species richness, and a different composition of wasps to sites with low NDVIs. Using vegetation indices from remote sensing to gauge habitat complexity has significant potential for ecosystem modelling and rapid biodiversity assessment.
---
paper_title: PREDICTING WOODY-PLANT SPECIES RICHNESS IN TROPICAL DRY FORESTS: A CASE STUDY FROM SOUTH FLORIDA, USA
paper_content:
Tropical dry forests are one of the world's most endangered forest types. Currently there are no comparative data on extent or levels of species richness for remaining forest fragments. This research identifies landscape metrics and spectral indices that can be applied at the stand and patch level to predict woody-plant species richness in tropical dry forests. This study was undertaken in 18 stands of tropical dry forest with nine sites in the Florida Keys and nine sites within an urban–agricultural matrix in mainland Florida, USA. Woody-plant species richness was quantified at the stand level (belt transects totaling 500 m2) and patch level (presence/absence data for 65 native tropical plants ≤2.5 cm dbh) for all study sites. Landsat Enhanced Thematic Mapper Plus (ETM+) satellite images (pixel resolution 30 × 30 m) were used to assess the utility of landscape metrics (forest patch area, nearest neighbor distance, shape index, boundary complexity) and spectral indices (normalized-difference vegetation index [NDVI] for nine pixels and 500 pixels directly over transects, and all pixels in the forest patch area) for predicting stand- and patch-level species richness. The 18 stands of tropical dry forest sampled in this study included 4248 woody plants, representing 71 species. Islands in the Florida Keys had higher levels of woody-plant species richness than mainland sites. There was a significant positive relationship between mean NDVI for the nine pixels over each stand and stand species richness and a significant negative relationship between species richness and standard deviation of NDVI for nine pixels over each stand. The density of evergreen plants explained 66% of the variability in mean NDVI. At the patch level, forest patch area and mean NDVI at the stand, 500-pixel, and patch level were all positively associated with patch species richness. However, combining forest patch area with NDVI significantly improved the prediction of patch species richness. Results from this study support the species–energy theory at the level of a forest stand and patch and suggest that a first-order approximation of woody-plant species richness in stands and patches of tropical dry forest is possible in biodiversity hot spots.
---
paper_title: Bias in land cover change estimates due to misregistration
paper_content:
Land cover change may be overestimated due to positional error in multi-temporal images. To assess the potential magnitude of this bias, we introduced random positional error to identical classified images and then subtracted them. False land cover change ranged from less than 5% for a 5-class AVHRR classification, to more than 33% for a 20-class Landsat TM classification. The potential for false change was higher with more classes. However, false change could not be reliably estimated simply by number of classes, since false change varied significantly by simulation trial when class size remained constant. Registration model root mean squared (rms) error may underestimate the actual image co-registration accuracy. In simulations with 5 to 50 ground control locations, the mean model rms error was always less than the actual population rms error. The model rms error was especially unreliable when small sample sizes were used to develop second order rectification models. We introduce a bootstrap resampling...
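A toy re-creation of the simulation idea (not the AVHRR/TM data themselves): shift a classified map against an identical copy by a small random offset and count the pixels whose class appears to change purely because of misregistration. Patch size, number of classes and shift range are invented, so the false-change percentages will not match the 5-33% reported.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic classified map: coarse random patches upsampled to pixel resolution.
n_classes, block = 10, 8
coarse = rng.integers(0, n_classes, size=(30, 30))
classified = np.kron(coarse, np.ones((block, block), dtype=int))   # 240 x 240 map

def false_change_fraction(cls_map, max_shift=2):
    """Fraction of pixels that 'change' when an identical map is misregistered."""
    dr, dc = rng.integers(1, max_shift + 1, size=2)     # random positional error
    shifted = np.roll(np.roll(cls_map, dr, axis=0), dc, axis=1)
    # Ignore the wrapped-around border rows/columns.
    core = (slice(max_shift, -max_shift), slice(max_shift, -max_shift))
    return np.mean(cls_map[core] != shifted[core])

trials = [false_change_fraction(classified) for _ in range(100)]
print(f"false change from misregistration alone: "
      f"{np.mean(trials):.1%} (range {min(trials):.1%} - {max(trials):.1%})")
```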
---
paper_title: Remotely sensed habitat diversity predicts butterfly species richness and community similarity in Canada.
paper_content:
Although there is no shortage of potential explanations for the large-scale patterns of biological diversity, the hypothesis that energy-related factors are the primary determinants is perhaps most extensively supported, especially in cold-temperate regions. By using unusually high-resolution biodiversity and environmental data that have not previously been available, we demonstrate that habitat heterogeneity, as measured by remotely sensed land cover variation, explains Canadian butterfly richness better than any energy-related variable we measured across spatial scales. Although species-richness predictability declines with progressively smaller quadrat sizes, as expected, we demonstrate that most variability (>90%) in butterfly richness may be explained by habitat heterogeneity with secondary contributions from climatic energy. We also find that patterns of community similarity across Canada are strongly related to patterns of habitat composition but not to differences in energy-related factors. Energy should still be considered significant but its main role may be through its effects on within-habitat diversity and perhaps, indirectly, on the sorts of habitats that may be found in a region. Effects of sampling intensity and spatial autocorrelation do not alter our findings.
---
paper_title: EDGE RESPONSES OF TROPICAL AND TEMPERATE BIRDS
paper_content:
Abstract Tropical birds may differ from temperate birds in their sensitivity to forest edges. We provide predictions about the proportions of tropical and temperate species that should avoid or exploit edges, and relationships between natural-history characters and edge responses. We conducted exploratory meta-analyses from 11 studies using 287 records of 220 neotropical and temperate species' responses to edges to address our predictions. A higher proportion of neotropical species were edge-avoiders compared with temperate species and a higher proportion of temperate species were edge-exploiters compared with neotropical species. Edge-avoiding responses were positively associated with being an insectivore for neotropical birds, and with being of small body mass and a latitudinal migrant for temperate birds. Temperate edge-exploiters were less likely to be insectivores and migrants than temperate birds that were not edge-exploiters. A greater proportion of neotropical birds than temperate birds may be at ...
---
paper_title: PREDICTING BIRD SPECIES RICHNESS USING REMOTE SENSING IN BOREAL AGRICULTURAL‐FOREST MOSAICS
paper_content:
One of the main goals in nature conservation and land use planning is to identify areas important for biodiversity. One possible cost-effective surrogate for deriving appropriate estimates of spatial patterns of species richness is provided by predictive modeling based on remote sensing and topographic data. Using bird species richness data from a spatial grid system (105 squares of 0.25 km2 within an area of 26.25 km2), we tested the usefulness of Landsat TM satellite-based remote sensing and topographic data in bird species richness modeling in a boreal agricultural-forest mosaic in southwestern Finland. We built generalized linear models for the bird species richness and validated the accuracy of the models with an independent test area of 50 grid squares (12.5 km2). We evaluated separately the modeling performance of habitat structure, habitat composition, topographical-moisture variables and all variables in the model-building and model-test areas. Areas of high observed and predicted bird species ri...
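A minimal sketch of the generalized-linear-model step, assuming a Poisson error structure for richness counts and statsmodels as the fitting library; the Finnish predictors, coefficients and validation procedure are not given in the abstract, so all values below are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
n_squares = 105

# Synthetic per-square predictors standing in for Landsat-derived habitat
# composition and topography (proportions and a relief index; values invented).
forest = rng.uniform(0, 1, n_squares)
field = np.clip(1 - forest - rng.uniform(0, 0.3, n_squares), 0, 1)
relief = rng.uniform(0, 50, n_squares)
X = sm.add_constant(np.column_stack([forest, field, relief]))

# Invented Poisson response for illustration only.
mu = np.exp(2.0 + 1.5 * forest + 1.0 * field + 0.005 * relief)
richness = rng.poisson(mu)

model = sm.GLM(richness, X, family=sm.families.Poisson()).fit()
print("fitted coefficients (intercept, forest, field, relief):",
      np.round(model.params, 3))

# Predictions for an independent hold-out grid would be obtained with
# model.predict(X_test), then compared against observed richness there.
```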
---
paper_title: A method to compare and improve land cover datasets: application to the GLC-2000 and MODIS land cover products
paper_content:
This paper presents a methodology for the comparison of different land cover datasets and illustrates how this can be extended to create a hybrid land cover product. The datasets used in this paper are the GLC-2000 and MODIS land cover products. The methodology addresses: 1) the harmonization of legend classes from different global land cover datasets and 2) the uncertainty associated with the classification of the images. The first part of the methodology involves mapping the spatial disagreement between the two land cover products using a combination of fuzzy logic and expert knowledge. Hotspots of disagreement between the land cover datasets are then identified to determine areas where other sources of data such as TM/ETM images or detailed regional and national maps can be used in the creation of a hybrid land cover dataset
---
paper_title: Laser remote sensing of canopy habitat heterogeneity as a predictor of bird species richness in an eastern temperate forest, USA
paper_content:
Habitat heterogeneity has long been recognized as a fundamental variable indicative of species diversity, in terms of both richness and abundance. Satellite remote sensing data sets can be useful for quantifying habitat heterogeneity across a range of spatial scales. Past remote sensing analyses of species diversity have largely been limited to correlative studies based on the use of vegetation indices or derived land cover maps. A relatively new form of laser remote sensing (lidar) provides another means to acquire information on habitat heterogeneity. Here we examine the efficacy of lidar metrics of canopy structural diversity as predictors of bird species richness in the temperate forests of Maryland, USA. Canopy height, topography and the vertical distribution of canopy elements were derived from lidar imagery of the Patuxent National Wildlife Refuge and compared to bird survey data collected at referenced grid locations. The canopy vertical distribution information was consistently found to be the strongest predictor of species richness, and this was predicted best when stratified into guilds dominated by forest, scrub, suburban and wetland species. Similar lidar variables were selected as primary predictors across guilds. Generalized linear and additive models, as well as binary hierarchical regression trees produced similar results. The lidar metrics were also consistently better predictors than traditional remotely sensed variables such as canopy cover, indicating that lidar provides a valuable resource for biodiversity research applications.
---
paper_title: Global land surface phenology trends from GIMMS database
paper_content:
A double logistic function has been used to describe the yearly evolution of the global inventory mapping and monitoring studies (GIMMS) normalized difference vegetation index (NDVI) for the 1981 to 2003 period, in order to estimate land surface phenology parameters. A principal component analysis on the resulting time series indicates that the first components explain 36, 53 and 37% of the variance for the start, end and length of growing season, respectively, and shows generally good spatial homogeneity. Mann-Kendall trend tests have been carried out, and trends were estimated by linear regression. Maps of these trends show a global advance in spring dates of 0.38 days per year, a global delay in autumn dates of 0.45 days per year and a global increase of 0.8 days per year in growing season length, validated by comparison with previous works. Correlations between retrieved phenological parameters and climate indices generally showed good spatial coherence.
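The abstract does not write out the double logistic form; a commonly used parameterisation (assumed here) is NDVI(t) = base + amp * [1 / (1 + exp(-m1 (t - s1))) - 1 / (1 + exp(-m2 (t - s2)))], where s1 and s2 approximate the start and end of the growing season. The sketch fits that form to one synthetic annual profile with SciPy; repeating the fit per pixel and year would give the time series on which Mann-Kendall tests and linear trends are then computed.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(doy, base, amp, s1, m1, s2, m2):
    """Annual NDVI profile: rise at spring inflection s1, fall at autumn inflection s2."""
    spring = 1.0 / (1.0 + np.exp(-m1 * (doy - s1)))
    autumn = 1.0 / (1.0 + np.exp(-m2 * (doy - s2)))
    return base + amp * (spring - autumn)

# Synthetic 15-day composite NDVI for one pixel-year.
doy = np.arange(8, 366, 15.0)
rng = np.random.default_rng(17)
ndvi = double_logistic(doy, 0.15, 0.55, 120, 0.08, 280, 0.06) + rng.normal(0, 0.02, doy.size)

p0 = [0.1, 0.5, 100, 0.1, 260, 0.1]          # rough initial guesses
params, _ = curve_fit(double_logistic, doy, ndvi, p0=p0, maxfev=10000)
base, amp, s1, m1, s2, m2 = params
print(f"start of season ~ day {s1:.0f}, end of season ~ day {s2:.0f}, "
      f"season length ~ {s2 - s1:.0f} days")
```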
---
paper_title: Global vegetation phenology from Moderate Resolution Imaging Spectroradiometer (MODIS): Evaluation of global patterns and comparison with in situ measurements
paper_content:
In the last two decades the availability of global remote sensing data sets has provided a new means of studying global patterns and dynamics in vegetation. The vast majority of previous work in this domain has used data from the Advanced Very High Resolution Radiometer, which until recently was the primary source of global land remote sensing data. In recent years, however, a number of new remote sensing data sources have become available that have significantly improved the capability of remote sensing to monitor global ecosystem dynamics. In this paper, we describe recent results using data from NASA's Moderate Resolution Imaging Spectroradiometer to study global vegetation phenology. Using a novel method based on fitting piecewise logistic models to time series data from MODIS, key transition dates in the annual cycle(s) of vegetation growth can be estimated in an ecologically realistic fashion. Using this method we have produced global maps of seven phenological metrics at 1-km spatial resolution for all ecosystems exhibiting identifiable annual phenologies. These metrics include the date of year for (1) the onset of greenness increase (greenup), (2) the onset of greenness maximum (maturity), (3) the onset of greenness decrease (senescence), and (4) the onset of greenness minimum (dormancy). The three remaining metrics are the growing season minimum, maximum, and summation of the enhanced vegetation index derived from MODIS. Comparison of vegetation phenology retrieved from MODIS with in situ measurements shows that these metrics provide realistic estimates of the four transition dates identified above. More generally, the spatial distribution of phenological metrics estimated from MODIS data is qualitatively realistic, and exhibits strong correspondence with temperature patterns in mid- and high-latitude climates, with rainfall seasonality in seasonally dry climates, and with cropping patterns in agricultural areas.
---
paper_title: Monitoring vegetation phenology using MODIS
paper_content:
Abstract Accurate measurements of regional to global scale vegetation dynamics (phenology) are required to improve models and understanding of inter-annual variability in terrestrial ecosystem carbon exchange and climate–biosphere interactions. Since the mid-1980s, satellite data have been used to study these processes. In this paper, a new methodology to monitor global vegetation phenology from time series of satellite data is presented. The method uses a series of piecewise logistic functions, which are fit to remotely sensed vegetation index (VI) data, to represent intra-annual vegetation dynamics. Using this approach, transition dates for vegetation activity within annual time series of VI data can be determined from satellite data. The method allows vegetation dynamics to be monitored at large scales in a fashion that is ecologically meaningful and does not require pre-smoothing of data or the use of user-defined thresholds. Preliminary results based on an annual time series of Moderate Resolution Imaging Spectroradiometer (MODIS) data for the northeastern United States demonstrate that the method is able to monitor vegetation phenology with good success.
---
paper_title: The MERIS terrestrial chlorophyll index
paper_content:
The long wavelength edge of the major chlorophyll absorption feature in the spectrum of a vegetation canopy moves to longer wavelengths with an increase in chlorophyll content. The position of this red-edge has been used successfully to estimate, by remote sensing, the chlorophyll content of vegetation canopies. Techniques used to estimate this red-edge position (REP) have been designed for use on small volumes of continuous spectral data rather than the large volumes of discontinuous spectral data recorded by contemporary satellite spectrometers. Also, each technique produces a different value of REP from the same spectral data and REP values are relatively insensitive to chlorophyll content at high values of chlorophyll content. This paper reports on the design and indirect evaluation of a surrogate REP index for use with spectral data recorded at the standard band settings of the Medium Resolution Imaging Spectrometer (MERIS). This index, termed the MERIS terrestrial chlorophyll index (MTCI), was evaluated using model spectra, field spectra and MERIS data. It was easy to calculate (and so can be automated), was correlated strongly with REP but unlike REP was sensitive to high values of chlorophyll content. As a result this index became an official MERIS level-2 product of the European Space Agency in March 2004. Further direct evaluation of the MTCI is proposed, using both greenhouse and field data.
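As commonly reported for this index, MTCI is computed from the MERIS standard-band reflectances centred near 681.25, 708.75 and 753.75 nm (bands 8, 9 and 10) as MTCI = (R753.75 − R708.75) / (R708.75 − R681.25); treat the band numbering here as an assumption rather than a quotation of the paper. A one-function NumPy sketch with invented reflectances:

```python
import numpy as np

def mtci(r753, r708, r681):
    """MERIS Terrestrial Chlorophyll Index from band reflectances (assumed band centres)."""
    return (r753 - r708) / (r708 - r681)

# Synthetic reflectances for three canopies with increasing chlorophyll content:
r681 = np.array([0.05, 0.04, 0.03])     # red absorption deepens
r708 = np.array([0.18, 0.15, 0.12])     # red edge shifts to longer wavelengths
r753 = np.array([0.42, 0.44, 0.46])     # NIR plateau

print("MTCI values:", np.round(mtci(r753, r708, r681), 2))   # increases with chlorophyll
```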
---
paper_title: Characterising the spatial pattern of phenology for the tropical vegetation of India using multi-temporal MERIS chlorophyll data
paper_content:
The annual growth cycles of terrestrial ecosystems are related to long-term regional/global climatic patterns. Understanding vegetation phenology and its spatio-temporal variation is required to reveal and predict ongoing changes in Earth system dynamics. The study attempts to characterize the phenology of the major tropical vegetation types in India, since such information is not yet available. Multi-temporal Medium Resolution Imaging Spectrometer (MERIS) Terrestrial Chlorophyll Index (MTCI) data were utilized to derive onset of greenness (OG) and end of senescence (ES) for four major tropical vegetation types. The study found that Fourier-smoothed results using the first four components adequately revealed the annual phenological variation of the natural vegetation types in India. From these smoothed data, inflection points were located iteratively through a per-pixel spatio-temporal search spanning 18 months of 8-day composite data, so as to derive the OG and ES. The median OG and ES were extracted from the available annual results for the years 2003–04, 2004–05, 2005–06 and 2006–07. The GLC2000 land cover map (1 km spatial resolution) was utilized to determine the locations of the major vegetation types. The percentage of each vegetation type falling beneath an MTCI composite pixel (4.6 km spatial resolution) was calculated. MTCI composite pixels with homogeneity of ≥80% vegetative cover were used for examining patterns of phenology in different regions, different years and at different latitudes. The most common dates for the occurrence of OG for the tropical evergreen, semi-evergreen, moist-deciduous, and dry-deciduous vegetation types were found to be during February–April, January–April, March–May, and February–May, respectively. Similarly, for ES the most common dates were in February–April, January–April, February–April, and December–April, respectively. The phenological pattern was uniquely different for each vegetation type, as expected, and also differed with regions and latitudes. A general trend of early occurrence of OG in the lower latitudes was observed.
---
paper_title: Intercalibration of vegetation indices from different sensor systems
paper_content:
Spectroradiometric measurements were made over a range of crop canopy densities, soil backgrounds and foliage colour. The reflected spectral radiances were convoluted with the spectral response functions of a range of satellite instruments to simulate their responses. When Normalised Difference Vegetation Indices (NDVI) from the different instruments were compared, they varied by a few percent, but the values were strongly linearly related, allowing vegetation indices from one instrument to be intercalibrated against another. A table of conversion coefficients is presented for AVHRR, ATSR-2, Landsat MSS, TM and ETM+, SPOT-2 and SPOT-4 HRV, IRS, IKONOS, SEAWIFS, MISR, MODIS, POLDER, Quickbird and MERIS (see Appendix A for glossary of acronyms). The same set of coefficients was found to apply, within the margin of error of the analysis, for the Soil Adjusted Vegetation Index SAVI. The relationships for SPOT vs. TM and for ATSR-2 vs. AVHRR were directly validated by comparison of atmospherically corrected image data. The results indicate that vegetation indices can be interconverted to a precision of 1–2%. This result offers improved opportunities for monitoring crops through the growing season and the prospects of better continuity of long-term monitoring of vegetation responses to environmental change.
---
paper_title: Landsat continuity : Issues and opportunities for land cover monitoring
paper_content:
Initiated in 1972, the Landsat program has provided a continuous record of earth observation for 35 years. The assemblage of Landsat spatial, spectral, and temporal resolutions, over a reasonably sized image extent, results in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is absolutely unique and indispensable for monitoring, management, and scientific activities. Recent technical problems with the two existing Landsat satellites, and delays in the development and launch of a successor, increase the likelihood that a gap in Landsat continuity may occur. In this communication, we identify the key features of the Landsat program that have resulted in the extensive use of Landsat data for large area land cover mapping and monitoring. We then augment this list of key features by examining the data needs of existing large area land cover monitoring programs. Subsequently, we use this list as a basis for reviewing the current constellation of earth observation satellites to identify potential alternative data sources for large area land cover applications. Notions of a virtual constellation of satellites to meet large area land cover mapping and monitoring needs are also presented. Finally, research priorities that would facilitate the integration of these alternative data sources into existing large area land cover monitoring programs are identified. Continuity of the Landsat program and the measurements provided are critical for scientific, environmental, economic, and social purposes. It is difficult to overstate the importance of Landsat; there are no other systems in orbit, or planned for launch in the short-term, that can duplicate or approach replication, of the measurements and information conferred by Landsat. While technical and political options are being pursued, there is no satellite image data stream poised to enter the National Satellite Land Remote Sensing Data Archive should system failures occur to Landsat-5 and -7.
---
paper_title: Responses of spring phenology to climate change
paper_content:
Summary: Climate change effects on seasonal activity in terrestrial ecosystems are significant and well documented, especially in the middle and higher latitudes. Temperature is a main driver of many plant developmental processes, and in many cases higher temperatures have been shown to speed up plant development and lead to earlier switching to the next ontogenetic stage. Qualitatively consistent advancement of vegetation activity in spring has been documented using three independent methods, based on ground observations, remote sensing, and analysis of the atmospheric CO2 signal. However, estimates of the trends for advancement obtained using the same method differ substantially. We propose that a high fraction of this uncertainty is related to the time frame analysed and changes in trends at decadal time scales. Furthermore, the correlation between estimates of the initiation of spring activity derived from ground observations and remote sensing at interannual time scales is often weak. We propose that this is caused by qualitative differences in the traits observed using the two methods, as well as the mixture of different ecosystems and species within the satellite scenes.
---
paper_title: Measuring phenological variability from satellite imagery
paper_content:
Vegetation phenological phenomena are closely related to seasonal dynamics of the lower atmosphere and are therefore important elements in global models and vegetation monitoring. Normalized difference vegetation index (NDVI) data derived from the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer (AVHRR) satellite sensor offer a means of efficiently and objectively evaluating phenological characteristics over large areas. Twelve metrics linked to key phenological events were computed based on time-series NDVI data collected from 1989 to 1992 over the conterminous United States. These measures include the onset of greenness, time of peak NDVI, maximum NDVI, rate of greenup, rate of senescence, and integrated NDVI. Measures of central tendency and variability of the measures were computed and analyzed for various land cover types. Results from the analysis showed strong coincidence between the satellite-derived metrics and predicted phenological characteristics. In particular, the metrics identified interannual variability of spring wheat in North Dakota, characterized the phenology of four types of grasslands, and established the phenological consistency of deciduous and coniferous forests. These results have implications for large-area land cover mapping and monitoring. The utility of remotely sensed data as input to vegetation mapping is demonstrated by showing the distinct phenology of several land cover types. More stable information contained in ancillary data should be incorporated into the mapping process, particularly in areas with high phenological variability. In a regional or global monitoring system, an increase in variability in a region may serve as a signal to perform more detailed land cover analysis with higher resolution imagery.
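A few of the twelve metrics listed above (onset of greenness, time of peak NDVI, maximum NDVI, rates of green-up and senescence, integrated NDVI) can be computed from an annual NDVI series with a short sketch like the one below; the half-amplitude threshold used to mark onset is an illustrative choice, not the authors' rule.

```python
import numpy as np

def phenology_metrics(ndvi, threshold=0.5):
    """Compute a handful of phenological metrics from one annual NDVI series.

    `threshold` is a fraction of the seasonal amplitude (illustrative choice).
    """
    base, peak = ndvi.min(), ndvi.max()
    level = base + threshold * (peak - base)
    above = np.where(ndvi >= level)[0]
    onset = int(above[0]) if above.size else None    # onset of greenness
    end = int(above[-1]) if above.size else None     # end of the green season
    return {
        "onset_of_greenness": onset,
        "time_of_peak": int(np.argmax(ndvi)),
        "maximum_ndvi": float(peak),
        "rate_of_greenup": float(np.max(np.diff(ndvi))),
        "rate_of_senescence": float(np.min(np.diff(ndvi))),
        "integrated_ndvi": float(np.trapz(ndvi)),
        "season_length": (end - onset) if onset is not None else None,
    }

t = np.arange(46)  # roughly weekly-to-biweekly composites over a year (illustrative)
series = 0.2 + 0.5 * np.exp(-((t - 22) / 8.0) ** 2)
print(phenology_metrics(series))
```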
---
paper_title: MERIS: the re-branding of an ocean sensor
paper_content:
MERIS (Medium Resolution Imaging Spectrometer) is a fine spectral and medium spatial resolution satellite sensor and is part of the core instrument payload of Envisat, the European Space Agency's (ESA) environmental research satellite, launched in March 2002. Designed primarily for ocean (‘MER’) and coastal zone remote sensing, this imaging spectrometer (‘IS’) now has a much broader environmental remit covering also land and atmospheric applications. This paper reviews (i) MERIS's development history, focusing on its changing mission objectives; (ii) MERIS's technical specification, including its radiometric, spectral and geometric characteristics, programmability and onboard calibration; (iii) decisions that led to modifications of MERIS's spectral, geometric and radiometric performance for land applications; (iv) MERIS's data products; and (v) some of the ways in which MERIS data might be used to provide information on terrestrial vegetation.
---
paper_title: Climate controls on vegetation phenological patterns in northern mid- and high latitudes inferred from MODIS data
paper_content:
Recent studies using both field measurements and satellite-derived vegetation indices have demonstrated that global warming is influencing vegetation growth and phenology. To accurately predict the future response of vegetation to climate variation, a thorough understanding of vegetation phenological cycles and their relationship to temperature and precipitation is required. In this paper, vegetation phenological transition dates identified using data from the moderate-resolution imaging spectroradiometer (MODIS) in 2001 are linked with MODIS land surface temperature (LST) data from the northern hemisphere between 35°N and 70°N. The results show well-defined patterns dependent on latitude, in which vegetation greenup gradually migrates northward starting in March, and dormancy spreads southward from late September. Among natural vegetation land-cover types, the growing-season length for forests is strongly correlated with variation in mean annual LST. For urban areas, the onset of greenup is 4–9 days earlier on average, and the onset of dormancy is about 2–16 days later, relative to adjacent natural vegetation. This difference (especially for urban vs. forests) is apparently related to urban heat island effects that result in both the average spring temperature and the mean annual temperature in urban areas being about 1–3°C higher relative to rural areas. The results also indicate that urban heat island effects on vegetation phenology are stronger in North America than in Europe and Asia. Finally, the onset of forest greenup at continental scales can be effectively described using a thermal time-chilling model, which can be used to infer the delay or advance of greenup onset in relation to climatic warming at global scale.
---
paper_title: The MERIS Global Vegetation Index (MGVI): Description and preliminary application
paper_content:
This paper describes the physical and mathematical approach followed to design a vegetation index optimized for the Medium Resolution Imaging Spectrometer (MERIS) sensor, i.e. the MERIS Global Vegetation Index (MGVI). It complements an earlier feasibility study presented elsewhere in this issue by Govaerts and collaborators. Specifically, the crucial issue of the dependency of the vegetation index on changes in illumination and observing geometries is addressed, together with the atmospheric contamination problem. The derivation of the optimal MGVI index formulae allows a comparison of its performance with that of the widely used Normalized Difference Vegetation Index (NDVI), both from a theoretical and an experimental point of view. Data collected by the MOS/IRS-P3 instrument since March 1996 in spectral bands analogous to those that will be available from MERIS can be used to evaluate the MGVI.
---
paper_title: Evaluation of optical satellite remote sensing for rice paddy phenology in monsoon Asia using a continuous in situ dataset
paper_content:
In monsoon Asia, optical satellite remote sensing for rice paddy phenology suffers from atmospheric contamination, mainly due to frequent cloud cover. We evaluated the quality of satellite remote sensing of paddy phenology: (1) through continuous in situ observations of a paddy field in Japan for 1.5 years, we investigated phenological signals in the reflectance spectrum of the paddy field; (2) we tested daily satellite data taken by Terra/Aqua MODIS (MOD09 and L1B products) with regard to the agreement with the in situ data and the influence of cloud contamination. As a result, the in situ spectral characteristics evidently indicated some phenological changes in the rice paddy field, such as irrigation start, puddling, heading, harvest and ploughing. The Enhanced Vegetation Index (EVI) was the best vegetation index in terms of agreement with the in situ data. More than 65% of MODIS observations were contaminated with clouds in this region. However, the combined use of Terra and Aqua decreased the rate of ...
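For reference, the Enhanced Vegetation Index that the study found to agree best with the in situ data is commonly computed with the MODIS coefficients shown in this sketch; the reflectance values used here are hypothetical.

```python
def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the commonly used MODIS coefficients."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Illustrative surface reflectances for a rice paddy around heading
print(round(evi(nir=0.45, red=0.07, blue=0.04), 3))
```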
---
paper_title: Variations in satellite-derived phenology in China's temperate vegetation
paper_content:
The relationship between vegetation phenology and climate is a crucial topic in global change research because it indicates dynamic responses of terrestrial ecosystems to climate changes. In this study, we investigate the possible impact of recent climate changes on growing season duration in the temperate vegetation of China, using the advanced very high resolution radiometer (AVHRR)/normalized difference vegetation index (NDVI) biweekly time-series data collected from January 1982 to December 1999 and concurrent mean temperature and precipitation data. The results show that over the study period, the growing season duration has lengthened by 1.16 days yr⁻¹ in the temperate region of China. The green-up of vegetation has advanced in spring by 0.79 days yr⁻¹ and the dormancy delayed in autumn by 0.37 days yr⁻¹. The dates of onset for phenological events are most significantly related to the mean temperature during the preceding 2–3 months. A warming in the early spring (March to early May) by 1°C could cause an earlier onset of green-up of 7.5 days, whereas the same increase of mean temperature during autumn (mid-August through early October) could lead to a delay of 3.8 days in vegetation dormancy. Variations in precipitation also influenced the duration of the growing season, but such influence differed among vegetation types and phenological phases.
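Trends such as the 1.16 days yr⁻¹ lengthening and the sensitivity of green-up timing to spring temperature are, in essence, slopes of least-squares fits. A hedged sketch with synthetic annual data (the numbers below are simulated, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1982, 2000)

# Hypothetical green-up dates (day of year); a negative slope means earlier spring onset
greenup_doy = 120 - 0.8 * (years - 1982) + rng.normal(0, 3, years.size)
slope, intercept = np.polyfit(years, greenup_doy, deg=1)
print(f"Trend in green-up date: {slope:.2f} days per year")

# Sensitivity to temperature: regress onset date on mean spring temperature
spring_temp = 8.0 + 0.05 * (years - 1982) + rng.normal(0, 0.5, years.size)
sensitivity = np.polyfit(spring_temp, greenup_doy, deg=1)[0]
print(f"Approximate shift per 1 °C of spring warming: {sensitivity:.1f} days")
```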
---
paper_title: A comparative study of satellite and ground-based phenology
paper_content:
Long time series of ground-based plant phenology, as well as more than two decades of satellite-derived phenological metrics, are currently available to assess the impacts of climate variability and trends on terrestrial vegetation. Traditional plant phenology provides very accurate information on individual plant species, but with limited spatial coverage. Satellite phenology allows monitoring of terrestrial vegetation on a global scale and provides an integrative view at the landscape level. Linking the strengths of both methodologies has high potential value for climate impact studies. We compared a multispecies index from ground-observed spring phases with two types (maximum slope and threshold approach) of satellite-derived start-of-season (SOS) metrics. We focus on Switzerland from 1982 to 2001 and show that temporal and spatial variability of the multispecies index correspond well with the satellite-derived metrics. All phenological metrics correlate with temperature anomalies as expected. The slope approach proved to deviate strongly from the temporal development of the ground observations as well as from the threshold-defined SOS satellite measure. The slope spring indicator is considered to indicate a different stage in vegetation development and is therefore less suited as a SOS parameter for comparative studies in relation to ground-observed phenology. Satellite-derived metrics are, however, very susceptible to snow cover, and it is suggested that this snow cover should be better accounted for by the use of newer satellite sensors.
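The two satellite start-of-season (SOS) definitions compared in the study, a threshold on the seasonal NDVI amplitude versus the date of maximum slope, can be contrasted on a synthetic green-up curve; the logistic curve and the 50% threshold are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

t = np.arange(0, 365, 8)                                  # 8-day composites
ndvi = 0.25 + 0.45 / (1 + np.exp(-(t - 120) / 12.0))      # synthetic spring green-up

# Threshold approach: first date NDVI exceeds a fixed fraction of the seasonal amplitude
amp_level = ndvi.min() + 0.5 * (ndvi.max() - ndvi.min())
sos_threshold = t[np.argmax(ndvi >= amp_level)]

# Maximum-slope approach: date of the steepest increase in NDVI
sos_maxslope = t[np.argmax(np.diff(ndvi))]

print("SOS (threshold):", sos_threshold, "DOY; SOS (max slope):", sos_maxslope, "DOY")
```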
---
paper_title: A web-based GIS tool for exploring the world's biodiversity: The Global Biodiversity Information Facility Mapping and Analysis Portal Application (GBIF-MAPA)
paper_content:
Abstract Legacy biodiversity data from natural history and survey collections are rapidly becoming available in a common format over the Internet. Over 110 million records are already being served from the Global Biodiversity Information Facility (GBIF). However, our ability to use this information effectively for ecological research, management and conservation lags behind. A solution is a web-based Geographic Information System for enabling visualization and analysis of this rapidly expanding data resource. In this paper we detail a case study system, GBIF Mapping and Analysis Portal Application (MAPA), developed for deployment at distributed database portals. Building such a system requires overcoming a series of technical and research challenges. These challenges include: assuring fast speed of access to the vast amounts of data available through these distributed biodiversity databases; developing open standards based access to suitable environmental data layers for analyzing biodiversity distribution; building suitably flexible and intuitive map interfaces for refining the scope and criteria of an analysis; and building appropriate web-services based analysis tools that are of primary importance to the ecological community and make manifest the value of online biodiversity GBIF data. After discussing how we overcome these challenges, we provide case studies showing two examples of the use of GBIF-MAPA analysis tools.
---
paper_title: Fieldservers and Sensor Service Grid as Real-time Monitoring Infrastructure for Ubiquitous Sensor Networks
paper_content:
The fieldserver is an Internet-based observation robot that can provide an outdoor solution for monitoring environmental parameters in real time. The data from its sensors can be collected into a central server infrastructure and published on the Internet. The information from the sensor network will contribute to monitoring and modeling of various environmental issues in Asia, including agriculture, food, pollution, disaster and climate change. An initiative called Sensor Asia is developing an infrastructure called Sensor Service Grid (SSG), which integrates fieldservers and Web GIS to realize easy and low-cost installation and operation of ubiquitous field sensor networks.
---
paper_title: Avian Information Systems: Developing Web-Based Bird Avoidance Models
paper_content:
Collisions between aircraft and birds, so-called "bird strikes," can result in serious damage to aircraft and even in the loss of lives. Information about the distribution of birds in the air and on the ground can be used to reduce the risk of bird strikes and their impact on operations en route and in and around airfields. Although a wealth of bird distribution and density data is collected by numerous organizations, these data are neither readily available nor interpretable by aviation. This paper presents two national efforts, one in the Netherlands and one in the United States, to develop bird avoidance models for aviation. These models integrate data and expert knowledge on bird distributions and migratory behavior to provide hazard maps in the form of GIS-enabled Web services. Both models are in operational use for flight planning and flight alteration and for airfield and airfield vicinity management. These models and their presentation on the Internet are examples of the type of service that would be very useful in other fields interested in species distribution and movement information, such as conservation, disease transmission and prevention, or assessment and mitigation of anthropogenic risks to nature. We expect that developments in cyber-technology, a transition toward an open source philosophy, and higher demand for accessible biological data will result in an increase in the number of biological information systems available on the Internet.
---
paper_title: Integrated research of parallel computing: Status and future
paper_content:
In the past twenty years, the research group at the University of Science and Technology of China has developed an integrated research method for parallel computing, which is a combination of “Architecture-Algorithm-Programming-Application”. This method is also called the ecological environment of parallel computing research. In this paper, we survey the current status of the integrated research method for parallel computing and, considering the impact of multi-core systems, cloud computing and personal high-performance computers, we present our outlook on the future development of parallel computing.
---
paper_title: Environmental sensor networks in ecological research
paper_content:
Environmental sensor networks offer a powerful combination of distributed sensing capacity, real-time data visualization and analysis, and integration with adjacent networks and remote sensing data streams. These advances have become a reality as a combined result of the continuing miniaturization of electronics, the availability of large data storage and computational capacity, and the pervasive connectivity of the Internet. Environmental sensor networks have been established and large new networks are planned for monitoring multiple habitats at many different scales. Projects range in spatial scale from continental systems designed to measure global change and environmental stability to those involved with the monitoring of only a few meters of forest edge in fragmented landscapes. Temporal measurements have ranged from the evaluation of sunfleck dynamics at scales of seconds, to daily CO2 fluxes, to decadal shifts in temperatures. Above-ground sensor systems are partnered with subsurface soil measurement networks for physical and biological activity, together with aquatic and riparian sensor networks to measure groundwater fluxes and nutrient dynamics. More recently, complex sensors, such as networked digital cameras and microphones, as well as newly emerging sensors, are being integrated into sensor networks for hierarchical methods of sensing that promise a further understanding of our ecological systems by revealing previously unobservable phenomena.
---
paper_title: Developing a grid-enabled spatial Web portal for Internet GIServices and geospatial cyberinfrastructure
paper_content:
Geospatial cyberinfrastructure integrates distributed geographic information processing (DGIP) technology, high-performance computing resources, interoperable Web services, and sharable geographic knowledge to facilitate the advancement of geographic information science (GIScience) research, geospatial technology, and geographic education. This article addresses three major development issues of geospatial cyberinfrastructure: the performance of grid-enabled DGIP services, the integration of Internet GIService resources, and the technical challenges of spatial Web portal implementation. A four-tier grid-enabled Internet GIService framework was designed for geospatial cyberinfrastructure. The advantages of the grid-enabled framework were demonstrated by a spatial Web portal. The spatial Web portal was implemented based on current available Internet technologies and utilizes multiple computing resources and high-performance systems, including local PC clusters and the TeraGrid. By comparing their performance testing results, we found that grid computing (TeraGrid) is more powerful and flexible than local PC clusters. However, job queuing time and relatively poor performance of cross-site computation are the major obstacles of grid computing for geospatial cyberinfrastructure. Detailed analysis of different computational settings and performance testing contributes to a deeper understanding of the improvements of DGIP services and geospatial cyberinfrastructure. This research demonstrates that resource/service integration and performance improvement can be accomplished by deploying the new four-tier grid-enabled Internet GIService framework. This article also identifies four research priorities for developing geospatial cyberinfrastructure: the design of GIS middleware, high-performance geovisualization methods, semantic GIService, and the integration of multiple GIS grid applications.
---
paper_title: XML Web Service‐based development model for Internet GIS applications
paper_content:
Most of the current Internet Geographic Information System (GIS) applications cannot be shared and are not interoperable because of their heterogeneous environments. With the growth of Internet GIS, many difficulties have occurred in integrating GIS components because of their diversity. The main objective of this study is to suggest a new development model for dynamic and interoperable Internet GIS applications. The model is based mainly on the dynamic integration of Internet GIS components by applying Extensible Markup Language (XML), XML Web Service, Geography Markup Language (GML), and Scalable Vector Graphics (SVG), etc. The relevant technologies of Internet GIS were reviewed thoroughly, and then a new model was designed. During the design of the new model, typical examples of GIS Web Service components were suggested, together with practical structures for applications using these components. Six examples of components and four types of applications were suggested, and they were experimentally implemented for model validation and improvement. The suggested model and components will enable easier and more rapid development of Internet GIS applications through the dynamic integration of distributed GIS components. Users will be able to avoid redundancy and consequently reduce both cost and time during each GIS project.
---
paper_title: A global organism detection and monitoring system for non-native species
paper_content:
Abstract Harmful invasive non-native species are a significant threat to native species and ecosystems, and the costs associated with non-native species in the United States is estimated at over $120 Billion/year. While some local or regional databases exist for some taxonomic groups, there are no effective geographic databases designed to detect and monitor all species of non-native plants, animals, and pathogens. We developed a web-based solution called the Global Organism Detection and Monitoring (GODM) system to provide real-time data from a broad spectrum of users on the distribution and abundance of non-native species, including attributes of their habitats for predictive spatial modeling of current and potential distributions. The four major subsystems of GODM provide dynamic links between the organism data, web pages, spatial data, and modeling capabilities. The core survey database tables for recording invasive species survey data are organized into three categories: “Where, Who & When, and What.” Organisms are identified with Taxonomic Serial Numbers from the Integrated Taxonomic Information System. To allow users to immediately see a map of their data combined with other user's data, a custom geographic information system (GIS) Internet solution was required. The GIS solution provides an unprecedented level of flexibility in database access, allowing users to display maps of invasive species distributions or abundances based on various criteria including taxonomic classification (i.e., phylum or division, order, class, family, genus, species, subspecies, and variety), a specific project, a range of dates, and a range of attributes (percent cover, age, height, sex, weight). This is a significant paradigm shift from “map servers” to true Internet-based GIS solutions. The remainder of the system was created with a mix of commercial products, open source software, and custom software. Custom GIS libraries were created where required for processing large datasets, accessing the operating system, and to use existing libraries in C++, R, and other languages to develop the tools to track harmful species in space and time. The GODM database and system are crucial for early detection and rapid containment of invasive species.
---
paper_title: GOBLET: An open-source geographic overlaying database and query module for spatial targeting in agricultural systems
paper_content:
We describe the development of an open-source mapping and query utility. The impetus for its development came originally from the need to be able to locate, spatially, target populations of resource-poor livestock keepers in the developing world, and to identify livestock management interventions that may be appropriate for these households. The tool was developed using open-source software, to facilitate its transfer and use. It brings together a considerable amount of spatial data from many sources, and allows users to overlay these geographic areas to identify a “domain” where characteristics like human population, area, and the number of livestock may vary. The tool serves as the core of several applications, including an ex ante impact assessment framework to help answer questions relating to improved targeting of fodder interventions in Kenya, and a communication tool for poverty-related information in Africa.
---
paper_title: Free and Open Source Geographic Information Tools for Landscape Ecology
paper_content:
Abstract Geographic Information tools (GI tools) have become an essential component of research in landscape ecology. In this article we review the use of GIS (Geographic Information Systems) and GI tools in landscape ecology, with an emphasis on free and open source software (FOSS) projects. Specifically, we introduce the background and terms related to the free and open source software movement, then compare eight FOSS desktop GIS with proprietary GIS to analyse their utility for landscape ecology research. We also provide a summary of related landscape analysis FOSS applications, and extensions. Our results indicate that (i) all eight GIS provide the basic GIS functionality needed in landscape ecology, (ii) they all facilitate customisation, and (iii) they all provide good support via forums and email lists. Drawbacks that have been identified are related to the fact that most projects are relatively young. This currently affects the size of their user and developer communities, and their ability to include advanced spatial analysis functions and up-to-date documentation. However, we expect these drawbacks to be addressed over time, as systems mature. In general, we see great potential for the use of free and open source desktop GIS in landscape ecology research and advocate concentrated efforts by the landscape ecology community towards a common, customisable and free research platform.
---
paper_title: An overview on current free and open source desktop GIS developments
paper_content:
Over the past few years the world of free and open source geospatial software has experienced some major changes. For instance, the website FreeGIS.org currently lists 330 GIS‐related projects. Besides the advent of new software projects and the growth of established projects, a new organisation known as the OSGeo Foundation has been established to offer a point of contact. This paper will give an overview on existing free and open source desktop GIS projects. To further the understanding of the open source software development, we give a brief explanation of associated terms and introduce the two most established software license types: the General Public License (GPL) and the Lesser General Public License (LGPL). After laying out the organisational structures, we describe the different desktop GIS software projects in terms of their main characteristics. Two main tables summarise information on the projects and functionality of the currently available software versions. Finally, the advantages and disad...
---
paper_title: The GIS Weasel: An interface for the development of geographic information used in environmental simulation modeling
paper_content:
The GIS Weasel is a freely available, open-source software package built on top of ArcInfo Workstation® [ESRI, Inc., 2001, ArcInfo Workstation (8.1 ed.), Redlands, CA] for creating maps and parameters of geographic features used in environmental simulation models. The software has been designed to minimize the need for GIS expertise and automate the preparation of the geographic information as much as possible. Although many kinds of data can be exploited with the GIS Weasel, the only information required is a raster dataset of elevation for the user's area of interest (AOI). The user-defined AOI serves as a starting point from which to create maps of many different types of geographic features, including sub-watersheds, streams, elevation bands, land cover patches, land parcels, or anything else that can be discerned from the available data. The GIS Weasel has a library of over 200 routines that can be applied to any raster map of geographic features to generate information about shape, area, or topological association with other features of the same or different maps. In addition, a wide variety of parameters can be derived using ancillary data layers such as soil and vegetation maps.
---
paper_title: Web service based spatial forest information system using an open source software approach
paper_content:
For technical and other reasons there is a dilemma: data providers cannot find an appropriate way to redistribute spatial forest data, and data users who need spatial data cannot access and integrate available forest resources information. To overcome this dilemma, this paper proposed a spatial forest information system based on Web services using an open source software approach. With a Web service based architecture, the system can enable interoperability, integrate Web services from other application servers, reuse code, and shorten development time and cost. At the same time, it is possible to extend the local system to a regional or national spatial forest information system. The growth of Open Source Software (OSS) provides an alternative choice to proprietary software for operating systems, web servers, Web-based GIS applications and database management systems. Using open source software to develop spatial forest information systems can greatly reduce the cost while providing high performance and sharing spatial forest information. We chose open source software to build a prototype system for Xixia County, Henan Province, China. By integrating the OSS packages Deegree and UMN MapServer, which are compliant with the OGC open specifications, the prototype system enables users to access spatial forest information and travelling information of Xixia County from two different data servers via a standard Web browser and promotes spatial forest information sharing.
---
paper_title: A less‐is‐more approach to geovisualization – enhancing knowledge construction across multidisciplinary teams
paper_content:
The 'less-is-more' concept in interface design for computer applications has recently gained ground. In this article, the concept is adopted for a user-centered design of a geovisualization application. The premise is that using simple and clear design can lead to successful applications with improved ease of use. Over the last three decades, the development of GIS and geovisualization has seen a marked increase in the levels of interaction between the user, the system and the information. However, these enthusiastic advances in technology have not resulted in a significant increase in the number of users. This article suggests that types of user interaction should not simply emphasize traditional GIS functions such as zooming and panning but move towards interaction based on facilitating the knowledge construction process. Considerations are made for the complexity of the system, the task at hand and the skills and limitations of the users. These elements are particularly important when maps act as the mediators in collaboration with users across disciplinary backgrounds. In such cases, the emphasis on simplicity and usability becomes as important as functionality. In these situations a geovisualization application designed for specific uses can maximize effective development of geographic knowledge. In this article, a minimalistic design approach to geovisualization is adopted by creating a geographic profiling tool which shifts the emphasis from technological advances or interaction with the map to the interaction elements key to building the spatial knowledge of GIS experts and non-experts alike. To evaluate this notion of 'less-is-more geovisualization' the profiling tool is evaluated according to usability metrics: efficiency, effectiveness and learnability. How well the Suburban Profiler contributes to these elements is assessed by conducting a video analysis of the types and forms of user interaction available. The video analysis demonstrates the usefulness and usability of the Suburban Profiler, providing proof of concept for 'less-is-more geovisualization'.
---
paper_title: Mobile environmental visualization
paper_content:
Abstract Environmental processes are a major point of concern for researchers, environmental engineers and the general public. This paper presents a multi-user mobile system to visualize environmental processes. This system enables multiple users to visualize and simulate environmental processes, and also retrieve additional information in real time, while moving through the environmental area under analysis. Each user is able to contribute to the simulation and assess the impact of adding or removing agents to or from the model, respectively. This system uses a modular approach, allowing its deployment through different platforms. Two main modules compose the system under development: a geo-referenced model and an Augmented Reality (AR) composition module. The geo-referenced model describes the environmental processes being modelled and tracks all users. The AR composition module allows users to visualize the geo-referenced model evolution and to interact with the model through two different views, namely ...
---
paper_title: GIS Visualization and Analysis of River Operations Impacts on Endangered Species Habitat
paper_content:
Many river systems include natural and manmade backwater areas providing habitat for a diverse community of aquatic and avian species, including several listed as endangered by the U.S. Fish and Wildlife Service. Concern has been raised over ecological impacts of river operations on these species and their habitat in these backwater areas. A methodology is presented that utilizes a geographic information system (GIS) in association with a numerical hydraulic model to assess these impacts. The GIS provides geostatistical estimates of water surface elevations within the backwaters during passage of a hydrograph created by reservoir releases, and then quantifies and provides animated visualization of the changes in habitat for many species dwelling in these areas. This generalized tool is applied to a portion of the Lower Colorado River in Arizona/California, which includes several dams and diversion structures controlling flow for a variety of important purposes. It is demonstrated that this GIS-based tool provides effective support for river system operators in assessing the impacts of operations on endangered species habitat and evaluating remedial measures.
---
paper_title: SFMN GeoSearch: An interactive approach to the visualization and exchange of point-based ecological data
paper_content:
Abstract Recent advances in computer networks and information technologies have created exciting new possibilities for sharing and analyzing scientific research data. Although individual datasets can be studied efficiently, many scientists are still largely limited to considering data collected by themselves, their students, or closely affiliated research groups. Increasingly widespread high-speed network connections and the existence of large, coordinated research programs suggest the potential for scientists to access and learn from data from outside their immediate research circle. We are developing a web-based application that facilitates the sharing of scientific data within a research network using the now-common “virtual globe” in combination with advanced visualization methods designed for geographically distributed scientific data. Two major components of the system enable the rapid assessment of geographically distributed scientific data: a database built from information submitted by network members, and a module featuring novel and sophisticated geographic data visualization techniques. By enabling scientists to share results with each other and view their shared data through a common virtual-globe interface, the system provides a new platform for important meta-analyses and the analysis of broad-scale patterns. Here we present the design and capabilities of the SFMN GeoSearch platform for the Sustainable Forest Management Network, a pan-Canadian network of forest researchers who have accumulated data for more than a decade. Through the development and dissemination of this new tool, we hope to help scientists, students, and the general public to understand the depth and breadth of scientific data across potentially large areas.
---
paper_title: Integration of augmented reality and GIS : A new approach to realistic landscape visualisation
paper_content:
This paper describes the development of a photo-realistic visualisation method which uses the combination of a geographic information system (GIS) with off-line augmented reality (AR) techniques. The technique makes a linkage between GIS-based modelling and realistic panoramic video frames to dynamically augment a landscape view with modelled temporal changes. By using real object textures, computer-generated objects can be well matched to background frames and become hardly detectable as computer objects. The technique has been tested with data from an area in northeast Victoria, Australia and has successfully represented the dynamic spread of weeds (blackberries) and their effects on landscape over a period of 14 years. The proposed approach has the potential to improve the communication between policy makers and non-experts and to improve decision-making process. Extensions to other model types and to real-time usage are envisaged.
---
paper_title: Exploratory spatio-temporal visualization: an analytical review
paper_content:
Abstract Current software tools for visualization of spatio-temporal data, on the one hand, utilize the opportunities provided by modern computer technologies, on the other hand, incorporate the legacy from the conventional cartography. We have considered existing visualization-based techniques for exploratory analysis of spatio-temporal data from two perspectives: (1) what types of spatio-temporal data they are applicable to; (2) what exploratory tasks they can potentially support. The technique investigation has been based on an operational typology of spatio-temporal data and analytical tasks we specially devised for this purpose. The result of the study is a structured inventory of existing exploratory techniques related to the types of data and tasks they are appropriate for. This result is potentially helpful for data analysts—users of geovisualization tools: it provides guidelines for selection of proper exploratory techniques depending on the characteristics of data to analyze and the goals of analysis. At the same time the inventory as well as the suggested typology of tasks could be useful for tool designers and developers of various domain-specific geovisualization applications. The designers can, on the one hand, see what task types are insufficiently supported by the existing tools and direct their creative activities towards filling the gaps, on the other hand, use the techniques described as basic elements for building new, more sophisticated ones. The application developers can, on the one hand, use the task and data typology in the analysis of potential user needs, on the other hand, appropriately select and combine existing tools in order to satisfy these needs.
---
paper_title: Refining predictions of climate change impacts on plant species distribution through the use of local statistics
paper_content:
Abstract Bioclimate envelope models are often used to predict changes in species distribution arising from changes in climate. These models are typically based on observed correlations between current species distribution and climate data. One limitation of this basic approach is that the relationship modelled is assumed to be constant in space; the analysis is global with the relationship assumed to be spatially stationary. Here, it is shown that by using a local regression analysis, which allows the relationship under study to vary in space, rather than conventional global regression analysis it is possible to increase the accuracy of bioclimate envelope modelling. This is demonstrated for the distribution of Spotted Meddick in Great Britain using data relating to three time periods, including predictions for the 2080s based on two climate change scenarios. Species distribution and climate data were available for two of the time periods studied and this allowed comparison of bioclimate envelope model outputs derived using the local and global regression analyses. For both time periods, the area under the receiver operating characteristics curve derived from the analysis based on local statistics was significantly higher than that from the conventional global analysis; the curve comparisons were also undertaken with an approach that recognised the dependent nature of the data sets compared. Marked differences in the future distribution of the species predicted from the local and global based analyses were evident and highlight a need for further consideration of local issues in modelling ecological variables.
---
paper_title: Measurement and meaningfulness in conservation science.
paper_content:
Incomplete databases often require conservation scientists to estimate data either through expert judgment or other scoring, rating, and ranking procedures. At the same time, ecosystem complexity has led to the use of increasingly sophisticated algorithms and mathematical models to aid in conservation theorizing, planning, and decision making. Understanding the limitations imposed by the scales of measurement of conservation data is important for the development of sound conservation theory and policy. In particular, biodiversity valuation methods, systematic conservation planning algorithms, geographic information systems (GIS), and other conservation metrics and decision-support tools, when improperly applied to estimated data, may lead to conclusions based on numerical artifact rather than empirical evidence. The representational theory of measurement is described here, and the description includes definitions of the key concepts of scale, scale type, and meaningfulness. Representational measurement is the view that measurement entails the faithful assignment of numbers to empirical entities. These assignments form scales that are organized into a hierarchy of scale types. A statement involving scales is meaningful if its truth value is invariant under changes of scale within scale type. I apply these concepts to three examples of measurement practice in the conservation literature. The results of my analysis suggest that conservation scientists do not always investigate the scale type of estimated data and hence may derive results that are not meaningful. Recognizing the complexity of observation and measurement in conservation biology, and the constraints that measurement theory imposes, the examples are accompanied by suggestions for informal estimation of the scale type of conservation data and for conducting meaningful analysis and synthesis of this information.
---
paper_title: Non-stationarity and local approaches to modelling the distributions of wildlife
paper_content:
Despite a growing interest in species distribution modelling, relatively little attention has been paid to spatial autocorrelation and non-stationarity. Both spatial autocorrelation (the tendency for adjacent locations to be more similar than distant ones) and non-stationarity (the variation in modelled relationships over space) are likely to be common properties of ecological systems. This paper focuses on non-stationarity and uses two local techniques, geographically weighted regression (GWR) and varying coefficient modelling (VCM), to assess its impact on model predictions. We extend two published studies, one on the presence–absence of calandra larks in Spain and the other on bird species richness in Britain, to compare GWR and VCM with the more usual global generalized linear modelling (GLM) and generalized additive modelling (GAM). For the calandra lark data, GWR and VCM produced better-fitting models than GLM or GAM. VCM in particular gave significantly reduced spatial autocorrelation in the model residuals. GWR showed that individual predictors became stationary at different spatial scales, indicating that distributions are influenced by ecological processes operating over multiple scales. VCM was able to predict occurrence accurately on independent data from the same geographical area as the training data but not beyond, whereas the GAM produced good results on all areas. Individual predictions from the local methods often differed substantially from the global models. For the species richness data, VCM and GWR produced far better predictions than ordinary regression. Our analyses suggest that modellers interpolating data to produce maps for practical actions (e.g. conservation) should consider local methods, whereas they should not be used for extrapolation to new areas. We argue that local methods are complementary to global methods, revealing details of habitat associations and data properties which global methods average out and miss.
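A minimal sketch of the geographically weighted regression idea discussed above: each location gets its own weighted least-squares fit, with Gaussian distance-decay weights, so the fitted coefficient can vary across the study area. The bandwidth, the synthetic data and the drifting coefficient are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def gwr_coefficients(coords, X, y, site, bandwidth):
    """Weighted least-squares fit centred on one location (Gaussian kernel).

    coords: (n, 2) locations; X: (n, p) predictors incl. intercept; y: (n,) response.
    """
    d = np.linalg.norm(coords - site, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)            # Gaussian distance-decay weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # (X'WX)^-1 X'Wy

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 100, (n, 2))
climate = rng.normal(10, 3, n)
# Non-stationary effect: the climate coefficient drifts with the x-coordinate
richness = 5 + (0.5 + 0.01 * coords[:, 0]) * climate + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), climate])
for site in [(10, 50), (90, 50)]:
    beta = gwr_coefficients(coords, X, richness, np.array(site), bandwidth=20.0)
    print(f"site {site}: local climate coefficient = {beta[1]:.2f}")
```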
---
paper_title: Modelling species’ range shifts in a changing climate: The impacts of biotic interactions, dispersal distance and the rate of climate change
paper_content:
Abstract There is an urgent need for accurate prediction of climate change impacts on species ranges. Current reliance on bioclimatic envelope approaches ignores important biological processes such as interactions and dispersal. Although much debated, it is unclear how such processes might influence range shifting. Using individual-based modelling we show that interspecific interactions and dispersal ability interact with the rate of climate change to determine range-shifting dynamics in a simulated community with two growth forms—mutualists and competitors. Interactions determine spatial arrangements of species prior to the onset of rapid climate change. These lead to space-occupancy effects that limit the rate of expansion of the fast-growing competitors but which can be overcome by increased long-distance dispersal. As the rate of climate change increases, lower levels of long-distance dispersal can drive the mutualists to extinction, demonstrating the potential for subtle process balances, non-linear dynamics and abrupt changes from species coexistence to species loss during climate change.
---
paper_title: Use and misuse of the IUCN Red List Criteria in projecting climate change impacts on biodiversity
paper_content:
Recent attempts at projecting climate change impacts on biodiversity have used the IUCN Red List Criteria to obtain estimates of extinction rates based on projected range shifts. In these studies, the Criteria are often misapplied, potentially introducing substantial bias and uncertainty. These misapplications include arbitrary changes to temporal and spatial scales; confusion of the spatial variables; and the assumption of a linear relationship between abundance and range area. Using the IUCN Red List Criteria to identify which species are threatened by climate change presents special problems and uncertainties, especially for shorter-lived species. Responses of most species to future climate change are not understood well enough to estimate extinction risks based solely on climate change scenarios and projections of shifts and/or reductions in range areas. One way to further such understanding would be to analyze the interactions among habitat shifts, landscape structure and demography for a number of species, using a combination of models. Evaluating the patterns in the results might allow the development of guidelines for assigning species to threat categories, based on a combination of life history parameters, characteristics of the landscapes in which they live, and projected range changes.
---
paper_title: Forecasting the Effects of Global Warming on Biodiversity
paper_content:
Abstract The demand for accurate forecasting of the effects of global warming on biodiversity is growing, but current methods for forecasting have limitations. In this article, we compare and discuss the different uses of four forecasting methods: (1) models that consider species individually, (2) niche-theory models that group species by habitat (more specifically, by environmental conditions under which a species can persist or does persist), (3) general circulation models and coupled ocean–atmosphere–biosphere models, and (4) species–area curve models that consider all species or large aggregates of species. After outlining the different uses and limitations of these methods, we make eight primary suggestions for improving forecasts. We find that greater use of the fossil record and of modern genetic studies would improve forecasting methods. We note a Quaternary conundrum: While current empirical and theoretical ecological results suggest that many species could be at risk from global warming, during t...
---
paper_title: Predicting the impacts of climate change on the distribution of species : are bioclimate envelope models useful ?
paper_content:
Modelling strategies for predicting the potential impacts of climate change on the natural distribution of species have often focused on the characterization of a species’ bioclimate envelope. A number of recent critiques have questioned the validity of this approach by pointing to the many factors other than climate that play an important part in determining species distributions and the dynamics of distribution changes. Such factors include biotic interactions, evolutionary change and dispersal ability. This paper reviews and evaluates criticisms of bioclimate envelope models and discusses the implications of these criticisms for the different modelling strategies employed. It is proposed that, although the complexity of the natural system presents fundamental limits to predictive modelling, the bioclimate envelope approach can provide a useful first approximation as to the potentially dramatic impact of climate change on biodiversity. However, it is stressed that the spatial scale at which these models are applied is of fundamental importance, and that model results should not be interpreted without due consideration of the limitations involved. A hierarchical modelling framework is proposed through which some of these limitations can be addressed within a broader, scale-dependent
---
paper_title: Inferring habitat-suitability areas with ecological modelling techniques and GIS: A contribution to assess the conservation status of Vipera latastei
paper_content:
Some snakes are highly vulnerable to extinction due to several life history traits. However, the elusive behavior and secretive habits of some widespread species constrain the collection of demographic and ecological data necessary for the identification of extinction-prone species. In this scenario, the enhancement of ecological modelling techniques in Geographical Information Systems (GIS) is providing researchers with robust tools to apply to such species. This study has identified the environmental factors that limit the current distribution of Vipera latastei, a species with secretive behavior, and has evaluated how human activities affect its current conservation status, identifying areas of best habitat suitability in the Iberian Peninsula. Ecological-niche factor analysis (ENFA) indicated low marginality (0.299) and high tolerance (0.887) scores, suggesting strong tendency for the species to live in average conditions throughout the study area and to inhabit any of the environmental conditions. The analysis also revealed that this viper tends to select particular Mediterranean habitats, although topographic factors (altitude and slope) were the major environmental constraints for the Iberian distribution pattern of the species. The presence of other parapatric viper species in the north of the Iberian Peninsula (V. aspis and V. seoanei) and two human-related variables (landscape transformation and human density) also had a negative relation with the occurrence of V. latastei. All factors can explain its absence in northern Iberia and its fragmented distribution as currently is found mostly in mountains and relatively undisturbed low-altitude areas. The historical destruction and alteration of natural Mediterranean habitats and several life-history traits of the species contribute to its vulnerability to extinction. The ENFA analysis proved to be an outstanding method to evaluate the factors that limit the distribution range of secretive and widespread species such as V. latastei, updating evaluation of their conservation status.
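ENFA is a multivariate factor analysis, but the two summary scores reported above (marginality and tolerance) have a simple single-variable analogue that conveys the intuition: how far the conditions at presence locations depart from the average conditions available, and how broad a slice of those conditions the species uses. The altitude distributions below are hypothetical, and the scaling of marginality is one common convention rather than the exact ENFA computation.

```python
import numpy as np

def enfa_scores_1d(global_values, presence_values):
    """Single-variable analogue of ENFA marginality and tolerance (illustration only)."""
    mg, sg = global_values.mean(), global_values.std()
    ms, ss = presence_values.mean(), presence_values.std()
    marginality = abs(ms - mg) / (1.96 * sg)   # departure from average available conditions
    tolerance = ss / sg                        # breadth of conditions used vs. available
    return marginality, tolerance

rng = np.random.default_rng(1)
altitude_all = rng.normal(600, 300, 10_000)      # conditions available in the study area
altitude_presence = rng.normal(680, 260, 500)    # conditions at (hypothetical) presence records

m, t = enfa_scores_1d(altitude_all, altitude_presence)
print(f"marginality ~ {m:.3f}, tolerance ~ {t:.3f}")
```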
---
paper_title: The role of land cover in bioclimatic models depends on spatial resolution
paper_content:
Aim We explored the importance of climate and land cover in bird species distribution models on multiple spatial scales. In particular, we tested whether the integration of land cover data improves the performance of pure bioclimatic models. Location Finland, northern Europe. Methods The data of the bird atlas survey carried out in 1986–89 using a 10 × 10 km uniform grid system in Finland were employed in the analyses. Land cover and climatic variables were compiled using the same grid system. The dependent and explanatory variables were resampled to 20-km, 40-km and 80-km resolutions. Generalized additive models (GAM) were constructed for each of the 88 land bird species studied in order to estimate the probability of occurrence as a function of (1) climate and (2) climate and land cover variables. Model accuracy was measured by a cross-validation approach using the area under the curve (AUC) of a receiver operating characteristic (ROC) plot. Results In general, the accuracies of the 88 bird–climate models were good at all studied resolutions. However, the inclusion of land cover increased the performance of 79 and 78 of the 88 bioclimatic models at 10-km and 20-km resolutions, respectively. There was no significant improvement at the 40-km resolution. In contrast to the finer resolutions, the inclusion of land cover variables decreased the modelling accuracy at 80-km resolution. Main conclusions Our results suggest that the determinants of bird species distributions are hierarchically structured: climatic variables are large-scale determinants, followed by land cover at finer resolutions. The majority of the land bird species in Finland are rather clearly correlated with climate, and bioclimate envelope models can provide useful tools for identifying the relationships between these species and the environment at resolutions ranging from 10 km to 80 km. However, the notable contribution of land cover to the accuracy of bioclimatic models at 10–20-km resolutions indicates that the integration of climate and land cover information can improve our understanding and model predictions of biogeographical patterns under global change.
---
paper_title: Maximising the natural capital benefits of habitat creation : Spatially targeting native woodland using GIS
paper_content:
Abstract The establishment of the Common Agricultural Policy has dramatically transformed the relationship between the natural environment and agriculture in the UK. Accordingly, the Government now acknowledges that our stock of ‘natural capital’ is being managed unsustainably and is undertaking Common Agricultural Policy reforms to provide a more sustainable form of agriculture. Such reforms will be based on the economic rationale of payment in return for the provision of natural capital benefits such as biodiversity, carbon sequestration, landscape and recreation. A basic lower tier payment is proposed for general environmental practices, with higher tiers of payments being available for ‘benefit generating’ habitat maintenance and creation. In order to maximise the benefits of such habitat creation, some form of spatial targeting is required. Using geographical information systems (GIS), a suite of spatially explicit criteria are adopted to measure how the potential benefits of native woodland creation vary across the agricultural landscape of the Chilterns natural area. Rather than regarding habitat conservation solely in terms of biodiversity benefits, a more holistic natural capital benefit approach is thus adopted. Public preference on the provision of each benefit is integrated into the GIS-based suitability analysis through multicriteria evaluation. We demonstrate how such a targeted approach leads to large improvements in the delivery of natural capital benefits, with the attainment of biodiversity, landscape and recreation benefits being particularly complementary. As such, the targeted pursuit of natural capital benefits does not compromise the attainment of biodiversity goals, but actually aids in their achievement. However, due to limitations in data availability and accuracy, GIS should be regarded as a decision support tool, with validation of targeted sites being undertaken through a farm audit system.
---
paper_title: GBD-Explorer: Extending open source java GIS for exploring ecoregion-based biodiversity data
paper_content:
Abstract Biodiversity and ecosystem data are both geo-referenced and “species-referenced”. Ecoregion classification systems are relevant to basic ecological research and have been increasingly used for making policy and management decisions. There are practical needs to integrate taxonomic data with ecoregion data in a GIS to visualize and explore species distribution conveniently. In this study, we represent the species distributed in an ecoregion as a taxonomic tree and extend the classic GIS data model to incorporate operations on taxonomic trees. A prototype called GBD-Explorer was developed on top of the open source JUMP GIS. We use the World Wildlife Fund (WWF) terrestrial ecoregion and WildFinder species databases as an example to demonstrate the rich capabilities implemented in the prototype.
---
paper_title: Modelling breeding habitat preferences of Bonelli’s eagle (Hieraaetus fasciatus) in relation to topography, disturbance, climate and land use at different spatial scales
paper_content:
Predictive models on breeding habitat preferences of Bonelli’s eagle (Hieraaetus fasciatus; Aves: Accipitridae) have been performed at four different spatial scales in Castellon province, East of Iberian Peninsula. The scales considered were: (1) nest site scale (1×1 km2 Universal Transverse Mercator (UTM) square containing the nest); (2) near nest environment (3×3 km2 UTM square); (3) home range scale (5×5 km2 UTM square); and (4) landscape level scale (9×9 km2 UTM square containing the above mentioned ones). Topographic, disturbance, climatic and land use factors were measured on a geographic information system (GIS) at occupied and unoccupied UTM squares. Logistic regression was performed by means of a stepwise addition procedure. We tested whether inclusion of new subset of variables improved the models by increasing the area under the receiver operator characteristic plot. At nest site scale, only topographic factors were considered as the most parsimonious predictors. Probability of species occurrence increases with slope in craggy areas at lower altitudes. At the 3×3 km2 scale, climate and disturbance variables were included. At home range and landscape level scales, models included climate, disturbance, topographic and land use factors. Higher temperatures in January, temperate ones in July, higher rainfall in June, lower altitudes and higher slope in the sample unit increase probability of occurrence of Bonelli’s eagle at broadest scales. The species seems to prefer disperse forests, scrubland and agricultural areas. From our results, we consider that there is a hierarchical framework on habitat selection procedure. We suggest that it is necessary to analyse what key factors are affecting Bonelli’s eagle nest-site selection at every study area to take steps to ensure appropriate conservation measures. The combination of regression modelling and GIS will become a powerful tool for biodiversity and conservation studies, taking into account that application depends on sampling design and the model assumptions of the statistical methods employed. Finally, predictive models obtained could be used for the efficient monitoring of this scarce species, to predict range expansions or identify suitable locations for reintroductions, and also to design protected areas and to help on wildlife management.
---
paper_title: SPECIES: A Spatial Evaluation of Climate Impact on the Envelope of Species
paper_content:
A model, A Spatial Evaluation of Climate Impact on the Envelope of Species (SPECIES), is presented which has been developed to evaluate the impacts of climate change on the bioclimatic envelope of plant species in Great Britain. SPECIES couples an artificial neural network with a climate–hydrological process model. The hybrid model has been successfully trained to estimate current species distributions using climate and soils data at the European scale before application at a finer resolution national scale. Using this multi-scale approach ensures encapsulation of the full extent of future climate scenarios within Great Britain without extrapolating outside of the model's training dataset. Application of the model to 32 plant species produced a mean Pearson correlation coefficient of 0.841 and a mean Kappa statistic of 0.772 between observed and simulated distributions. Simulations of four climate change scenarios revealed that changes to suitable climate space in Great Britain is highly species dependent and that distribution changes may be multidirectional and temporally non-linear. Analysis of the SPECIES results suggests that the neural network methodology can provide a feasible alternative to more classical spatial statistical techniques.
---
paper_title: Mediterranean ecosystems: problems and tools for conservation
paper_content:
Mediterranean ecosystems rival tropical ecosystems in terms of plant biodiversity. The Mediterranean Basin (MB) itself hosts 25 000 plant species, half of which are endemic. This rich biodiversity and the complex biogeographical and political issues make conservation a difficult task in the region. Species, habitat, ecosystem and landscape approaches have been used to identify conservation targets at various scales: ie, European, national, regional and local. Conservation decisions require adequate information at the species, community and habitat level. Nevertheless and despite recent improvements/efforts, this information is still incomplete, fragmented and varies from one country to another. This paper reviews the biogeographic data, the problems arising from current conservation efforts and methods for the conservation assessment and prioritization using GIS. GIS has an important role to play for managing spatial and attribute information on the ecosystems of the MB and to facilitate interactions with existing databases. Where limited information is available it can be used for prediction when directly or indirectly linked to externally built models. As well as being a predictive tool today GIS incorporate spatial techniques which can improve the level of information such as fuzzy logic, geostatistics, or provide insight about landscape changes such as 3D visualization. Where there are limited resources it can assist with identifying sites of conservation priority or the resolution of environmental conflicts (scenario building). Although not a panacea, GIS is an invaluable tool for improving the understanding of Mediterranean ecosystems and their dynamics and for practical management in a region that is under increasing pressure from human impact.
---
paper_title: Surveyor consistency in presence/absence sampling for monitoring vegetation in a boreal forest
paper_content:
Vegetation assessments are a central part of many large-scale monitoring programmes. To accurately estimate change over time, consistent field methods are important. Presence/absence (P/A) sampling is considered to be less susceptible to judgement and measurement errors in comparison with visual cover assessment, although errors also occur with this method in complete species inventories. Few studies have evaluated surveyor consistency in P/A sampling with a limited list of species. In this study, the consistency of results in P/A sampling was evaluated in a field test, with different surveyors assessing the same sample plots. The results indicated a good consistency between surveyors and high accuracy according to a reference survey for many of the tested species, although some differences both between surveyors and in comparison with the reference survey were found. Comparing a group of surveyors with a larger experience of vegetation assessments with a group of surveyors with less experience indicated that the former were more accurate and consistent. No clear differences were found between different plot sizes tested. The main conclusion from the study is that P/A sampling is slightly affected by observer judgement bias, but that in comparison with the consistency of visual cover assessment observed in other studies, the difference between surveyors for many species is reasonably low.
---
paper_title: Species distribution models and ecological theory: A critical assessment and some possible new approaches
paper_content:
Given the importance of knowledge of species distribution for conservation and climate change management, continuous and progressive evaluation of the statistical models predicting species distributions is necessary. Current models are evaluated in terms of ecological theory used, the data model accepted and the statistical methods applied. Focus is restricted to Generalised Linear Models (GLM) and Generalised Additive Models (GAM). Certain currently unused regression methods are reviewed for their possible application to species modelling. A review of recent papers suggests that ecological theory is rarely explicitly considered. Current theory and results support species responses to environmental variables to be unimodal and often skewed though process-based theory is often lacking. Many studies fail to test for unimodal or skewed responses and straight-line relationships are often fitted without justification. Data resolution (size of sampling unit) determines the nature of the environmental niche models that can be fitted. A synthesis of differing ecophysiological ideas and the use of biophysical processes models could improve the selection of predictor variables. A better conceptual framework is needed for selecting variables. Comparison of statistical methods is difficult. Predictive success is insufficient and a test of ecological realism is also needed. Evaluation of methods needs artificial data, as there is no knowledge about the true relationships between variables for field data. However, use of artificial data is limited by lack of comprehensive theory. Three potentially new methods are reviewed. Quantile regression (QR) has potential and a strong theoretical justification in Liebig’s law of the minimum. Structural equation modelling (SEM) has an appealing conceptual framework for testing causality but has problems with curvilinear relationships. Geographically weighted regression (GWR) intended to examine spatial non-stationarity of ecological processes requires further evaluation before being used.
---
paper_title: Where will species go? Incorporating new advances in climate modelling into projections of species distributions
paper_content:
Bioclimatic models are the primary tools for simulating the impact of climate change on species distributions. Part of the uncertainty in the output of these models results from uncertainty in projections of future climates. To account for this, studies often simulate species responses to climates predicted by more than one climate model and/or emission scenario. One area of uncertainty, however, has remained unexplored: internal climate model variability. By running a single climate model multiple times, but each time perturbing the initial state of the model slightly, different but equally valid realizations of climate will be produced. In this paper, we identify how ongoing improvements in climate models can be used to provide guidance for impacts studies. In doing so we provide the first assessment of the extent to which this internal climate model variability generates uncertainty in projections of future species distributions, compared with variability between climate models. We obtained data on 13 realizations from three climate models (three from CSIRO Mark2 v3.0, four from GISS AOM, and six from MIROC v3.2) for two time periods: current (1985–1995) and future (2025–2035). Initially, we compared the simulated values for each climate variable (P, Tmax, Tmin, and Tmean) for the current period to observed climate data. This showed that climates simulated by realizations from the same climate model were more similar to each other than to realizations from other models. However, when projected into the future, these realizations followed different trajectories and the values of climate variables differed considerably within and among climate models. These had pronounced effects on the projected distributions of nine Australian butterfly species when modelled using the BIOCLIM component of DIVA-GIS. Our results show that internal climate model variability can lead to substantial differences in the extent to which the future distributions of species are projected to change. These can be greater than differences resulting from between-climate model variability. Further, different conclusions regarding the vulnerability of species to climate change can be reached due to internal model variability. Clearly, several climate models, each represented by multiple realizations, are required if we are to adequately capture the range of uncertainty associated with projecting species distributions in the future.
---
paper_title: An improved approach for predicting the distribution of rare and endangered species from occurrence and pseudo-absence data
paper_content:
Summary 1. Few examples of habitat-modelling studies of rare and endangered species exist in the literature, although from a conservation perspective predicting their distribution would prove particularly useful. Paucity of data and lack of valid absences are the probable reasons for this shortcoming. Analytic solutions to accommodate the lack of absence include the ecological niche factor analysis (ENFA) and the use of generalized linear models (GLM) with simulated pseudo-absences. 2. In this study we tested a new approach to generating pseudo-absences, based on a preliminary ENFA habitat suitability (HS) map, for the endangered species Eryngium alpinum. This method of generating pseudo-absences was compared with two others: (i) use of a GLM with pseudo-absences generated totally at random, and (ii) use of an ENFA only. 3. The influence of two different spatial resolutions (i.e. grain) was also assessed for tackling the dilemma of quality (grain) vs. quantity (number of occurrences). Each combination of the three above-mentioned methods with the two grains generated a distinct HS map. 4. Four evaluation measures were used for comparing these HS maps: total deviance explained, best kappa, Gini coefficient and minimal predicted area (MPA). The last is a new evaluation criterion proposed in this study. 5. Results showed that (i) GLM models using ENFA-weighted pseudo-absence provide better results, except for the MPA value, and that (ii) quality (spatial resolution and locational accuracy) of the data appears to be more important than quantity (number of occurrences). Furthermore, the proposed MPA value is suggested as a useful measure of model evaluation when used to complement classical statistical measures. 6. Synthesis and applications. We suggest that the use of ENFA-weighted pseudo-absence is a possible way to enhance the quality of GLM-based potential distribution maps and that data quality (i.e. spatial resolution) prevails over quantity (i.e. number of data). Increased accuracy of potential distribution maps could help to define better suitable areas for species protection and reintroduction.
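The ENFA-weighted pseudo-absence idea described above can be sketched as follows: pseudo-absences are drawn preferentially from cells that a preliminary habitat-suitability (HS) map scores as unsuitable. This is a simplified illustration, not the authors' exact procedure; the suitability values, the (1 - HS) weighting and the sample sizes are invented for the example.

```python
import numpy as np

def sample_pseudo_absences(hs, presence_idx, n_absences, rng=None):
    """Draw pseudo-absence cell indices with probability proportional to (1 - HS),
    excluding cells that already hold presences."""
    if rng is None:
        rng = np.random.default_rng()
    weights = 1.0 - hs
    weights[presence_idx] = 0.0            # never draw a pseudo-absence at a presence cell
    weights = weights / weights.sum()
    return rng.choice(hs.size, size=n_absences, replace=False, p=weights)

rng = np.random.default_rng(1)
hs = rng.beta(2, 2, size=5000)                       # preliminary ENFA-style suitability in [0, 1]
presences = rng.choice(5000, size=60, replace=False)
pseudo_abs = sample_pseudo_absences(hs, presences, n_absences=300, rng=rng)
print(hs[pseudo_abs].mean(), hs.mean())              # pseudo-absences sit in less suitable cells
```

A GLM could then be fitted to the presences and these weighted pseudo-absences exactly as if they were observed absences.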
---
paper_title: Spatially explicit models to analyze forest loss and fragmentation between 1976 and 2020 in southern Chile
paper_content:
Forest fragmentation threatens biodiversity in one of the last remaining temperate rainforests that occur in South America. We study the current and future impacts of fragmentation on spatial configuration of forest habitats at the landscape level over time in southern Chile. For this purpose, we identify the geophysical variables (“pattern drivers”) that explain the spatial patterns of forest loss and fragmentation between 1976 and 1999 using both a GIS-based land-use change model (GEOMOD) and spatially explicit logistic regression. Then, we project where and how much forest fragmentation will occur in the future by extrapolation of the current rate of deforestation to 2010 and 2020. Both modeling approaches showed consistent and complementary results in terms of the pattern drivers that were most related to deforestation. Between 1976 and 1999, forest fragmentation has occurred mainly from the edges of small fragments situated on gentle slopes (less than 10°) and far away from rivers. We predict that patch density will decline from 2010 to 2020, and that total forest interior area and patch proximity will further decline as a result of forest fragmentation. Drivers identified by these approaches suggest that deforestation is associated with observed local socio-economic activities such as clearance of forest for pasture and crops and forest logging for fuelwood.
---
paper_title: Uncertainty of bioclimate envelope models based on the geographical distribution of species
paper_content:
Aim We explored the effects of prevalence, latitudinal range and spatial autocorrelation of species distribution patterns on the accuracy of bioclimate envelope models of butterflies. Location Finland, northern Europe. Methods The data of a national butterfly atlas survey (NAFI) carried out in 1991–2003 with a resolution of 10 × 10 km were used in the analyses. Generalized additive models (GAM) were constructed, for each of 98 species, to estimate the probability of occurrence as a function of climate variables. Model performance was measured using the area under the curve (AUC) of a receiver operating characteristic (ROC) plot. Observed differences in modelling accuracy among species were related to the species’ geographical attributes using multivariate GAM. Results Accuracies of the climate–butterfly models varied from low to very high (AUC values 0.59–0.99), with a mean of 0.79. The modelling performance was related negatively to the latitudinal range and prevalence, and positively to the spatial autocorrelation of the species distribution. These three factors accounted for 75.2% of the variation in the modelling accuracy. Species at the margin of their range or with low prevalence were better predicted than widespread species, and species with clumped distributions better than scattered dispersed species. Main conclusions The results from this study indicate that species’ geographical attributes highly influence the behaviour and uncertainty of species–climate models, which should be taken into account in biogeographical modelling studies and assessments of climate change impacts.
---
paper_title: Designing the core zone in a biosphere reserve based on suitable habitats: Yancheng Biosphere Reserve and the red crowned crane (Grus japonensis)
paper_content:
Although much research has been undertaken to design nature reserves, there are few practical methods to determine the interior structure of a reserve. A procedure for design of the core zone in reserves is proposed. As a case study, the core zone in Yancheng Biosphere Reserve, People's Republic of China, which was established to preserve the endangered red crowned crane (Grus japonensis) is designed. A statistical habitat model using geographic information system (GIS) is developed to predict crane presence or absence. Based on predicted suitable habitats, the minimum core zone is defined. More suitable habitats can be contained in the designated core zone, and this will be beneficial to the conservation and restoration for crane habitats.
---
paper_title: Methods and uncertainties in bioclimatic envelope modelling under climate change
paper_content:
Potential impacts of projected climate change on biodiversity are often assessed using single-species bioclimatic 'envelope' models. Such models are a special case of species distribution models in which the current geographical distribution of species is related to climatic variables so as to enable projections of distributions under future climate change scenarios. This work reviews a number of critical methodological issues that may lead to uncertainty in predictions from bioclimatic modelling. Particular attention is paid to recent developments of bioclimatic modelling that address some of these issues as well as to the topics where more progress needs to be made. Developing and applying bioclimatic models in an informative way requires good understanding of a wide range of methodologies, including the choice of modelling technique, model validation, collinearity, autocorrelation, biased sampling of explanatory variables, scaling and impacts of non-climatic factors. A key challenge for future research is integrating factors such as land cover, direct CO2 effects, biotic interactions and dispersal mechanisms into species-climate models. We conclude that, although bioclimatic envelope models have a number of important advantages, they need to be applied only when users of models have a thorough understanding of their limitations and uncertainties.
---
paper_title: Network connectivity and dispersal barriers: using geographical information system (GIS) tools to predict landscape scale distribution of a key predator (Esox lucius) among lakes
paper_content:
The keystone piscivore northern pike Esox lucius can structure fish communities, and models predicting pike-focused connectivity will be important for management of many waters. We explored the ability of pike to colonize upstream locations and modelled presence-absence in lakes based on landscape features derived from maps. An upstream connectivity model (UC model) was generated using data from 87 lakes. We validated the UC model with retrospective whole-lake experiments involving introductions (n = 49) and extirpations (by rotenone) of pike (n = 96), as well as with the natural distribution of pike in lakes (n = 1365) within 26 drainage basin networks in northern Sweden. The UC model predicted the incidence of pike in lakes with stream-connections with 95.4% accuracy, based mainly on a single variable, S-V5max, that measures the minimum distance found between 5 m elevation intervals (= maximum stream slope) along watercourses from nearest downstream source of potential immigrants. Recolonizations of pike in rotenone lakes generated a near-identical classification tree, as in the UC model. The classification accuracy of pike presence in the external validation procedure ranged from 88.7 to 98.7% between different drainage basins. Predictions of pike absence were not as accurate, due possibly to undetected introductions, but still lead to 86.6% overall accuracy of the external validation. Most lakes lacking pike, but misclassified as having pike based on low S-V5max, were isolated from downstream sources of pike by subsurface stream flow through bouldery areas (SSB). Synthesis and applications. The variable S-V5max provides managers with a tool for revealing the location and severity of natural dispersal barriers to pike (and logically also barriers to other species with equivalent or less dispersal capacity). Because presented models only require map-based information, and have high predictive power, they may have the potential to be of fundamental use in predicting distribution of freshwater fish. These predictions may provide the means for prioritizing in risk assessment and control programmes to combat pike invasions, as well as contribute to determining a reference state of species incidence in specific lakes. Our results also point towards a possibility that, even where stream slope is low, long-term effective barriers may be designed that mimic natural SSB.
---
paper_title: POTENTIAL EFFECTS OF CLIMATE CHANGE ON ECOSYSTEM AND TREE SPECIES DISTRIBUTION IN BRITISH COLUMBIA
paper_content:
A new ecosystem-based climate envelope modeling approach was applied to assess potential climate change impacts on forest communities and tree species. Four orthogonal canonical discriminant functions were used to describe the realized climate space for British Columbia's ecosystems and to model portions of the realized niche space for tree species under current and predicted future climates. This conceptually simple model is capable of predicting species ranges at high spatial resolutions far beyond the study area, including outlying populations and southern range limits for many species. We analyzed how the realized climate space of current ecosystems changes in extent, elevation, and spatial distribution under climate change scenarios and evaluated the implications for potential tree species habitat. Tree species with their northern range limit in British Columbia gain potential habitat at a pace of at least 100 km per decade, common hardwoods appear to be generally unaffected by climate change, and some of the most important conifer species in British Columbia are expected to lose a large portion of their suitable habitat. The extent of spatial redistribution of realized climate space for ecosystems is considerable, with currently important sub-boreal and montane climate regions rapidly disappearing. Local predictions of changes to tree species frequencies were generated as a basis for systematic surveys of biological response to climate change.
---
paper_title: Accessible habitat: an improved measure of the effects of habitat loss and roads on wildlife populations
paper_content:
Habitat loss is known to be the main cause of the current global decline in biodiversity, and roads are thought to affect the persistence of many species by restricting movement between habitat patches. However, measuring the effects of roads and habitat loss separately means that the configuration of habitat relative to roads is not considered. We present a new measure of the combined effects of roads and habitat amount: accessible habitat. We define accessible habitat as the amount of habitat that can be reached from a focal habitat patch without crossing a road, and make available a GIS tool to calculate accessible habitat. We hypothesize that accessible habitat will be the best predictor of the effects of habitat loss and roads for any species for which roads are a major barrier to movement. We conducted a case study of the utility of the accessible habitat concept using a data set of anuran species richness from 27 ponds near a motorway. We defined habitat as forest in this example. We found that accessible habitat was not only a better predictor of species richness than total habitat in the landscape or distance to the motorway, but also that by failing to consider accessible habitat we would have incorrectly concluded that there was no effect of habitat amount on species richness.
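The accessible-habitat measure described above (habitat reachable from a focal patch without crossing a road) can be approximated on a raster by labelling connected components of habitat after removing road cells. The sketch below is an assumption-laden stand-in for the GIS tool mentioned in the abstract; the grids, 4-neighbour connectivity rule and function name are illustrative.

```python
import numpy as np
from scipy import ndimage

def accessible_habitat(habitat, roads, focal_cell):
    """Amount of habitat (cell count) reachable from focal_cell without
    crossing a road, using 4-neighbour connectivity on boolean rasters."""
    passable = habitat & ~roads
    labels, _ = ndimage.label(passable)      # connected components of road-free habitat
    focal_label = labels[focal_cell]
    if focal_label == 0:                     # focal cell itself is road or non-habitat
        return 0
    return int(np.sum(labels == focal_label))

habitat = np.ones((100, 100), dtype=bool)
roads = np.zeros((100, 100), dtype=bool)
roads[50, :] = True                          # a road bisecting the landscape
print(accessible_habitat(habitat, roads, focal_cell=(10, 10)))   # 5000 cells, not 10000
```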
---
paper_title: Bioclimatic Analysis to Enhance Reintroduction Biology of the Endangered Helmeted Honeyeater (Lichenostomus melanops cassidix) in Southeastern Australia
paper_content:
Reintroduction programs are a high-risk conservation strategy for restoring populations of endangered species. The success of these programs often depends on the ability to identify suitable habitat within the species' former range. Bioclimatic analysis offers an empirical, explicit, robust, and repeatable method to analyze large areas rapidly using a small number of locality records, and in turn predicting (and/or reconstructing) its potential distribution limits. This approach therefore can estimate the broad limits of the distribution of a taxon, using data that may be inadequate for standard forms of statistical analysis. We illustrate the potential value of bioclimatic modeling for reintroduction biology using a case study of the highly endangered Helmeted Honeyeater (Lichenostomus melanops cassidix) from Victoria, southeastern Australia. The results of our analyses assisted us to both predict the former range limits of the Helmeted Honeyeater and determine the broad limits of those areas that may contain potentially suitable sites for future reintroduction programs for the subspecies. The analysis predicted that the range of the Helmeted Honeyeater extends from the Yarra River district east of Melbourne, south to the Western Port Bay and east as far as the Morwell area of Victoria. The climatic characteristics of habitat occupied by the extant population of the Helmeted Honeyeater were found to be unique within its predicted range. We recommend that reintroduction efforts therefore be concentrated within this small area, as has occurred to date.
---
paper_title: Making better biogeographical predictions of species' distributions
paper_content:
Summary 1. Biogeographical models of species’ distributions are essential tools for assessing impacts of changing environmental conditions on natural communities and ecosystems. Practitioners need more reliable predictions to integrate into conservation planning (e.g. reserve design and management). 2. Most models still largely ignore or inappropriately take into account important features of species’ distributions, such as spatial autocorrelation, dispersal and migration, biotic and environmental interactions. Whether distributions of natural communities or ecosystems are better modelled by assembling individual species’ predictions in a bottom-up approach or modelled as collective entities is another important issue. An international workshop was organized to address these issues. 3. We discuss more specifically six issues in a methodological framework for generalized regression: (i) links with ecological theory; (ii) optimal use of existing data and artificially generated data; (iii) incorporating spatial context; (iv) integrating ecological and environmental interactions; (v) assessing prediction errors and uncertainties; and (vi) predicting distributions of communities or collective properties of biodiversity. 4. Synthesis and applications. Better predictions of the effects of impacts on biological communities and ecosystems can emerge only from more robust species’ distribution models and better documentation of the uncertainty associated with these models. An improved understanding of causes of species’ distributions, especially at their range limits, as well as of ecological assembly rules and ecosystem functioning, is necessary if further progress is to be made. A better collaborative effort between theoretical and functional ecologists, ecological modellers and statisticians is required to reach these goals.
---
paper_title: A global organism detection and monitoring system for non-native species
paper_content:
Abstract Harmful invasive non-native species are a significant threat to native species and ecosystems, and the costs associated with non-native species in the United States is estimated at over $120 Billion/year. While some local or regional databases exist for some taxonomic groups, there are no effective geographic databases designed to detect and monitor all species of non-native plants, animals, and pathogens. We developed a web-based solution called the Global Organism Detection and Monitoring (GODM) system to provide real-time data from a broad spectrum of users on the distribution and abundance of non-native species, including attributes of their habitats for predictive spatial modeling of current and potential distributions. The four major subsystems of GODM provide dynamic links between the organism data, web pages, spatial data, and modeling capabilities. The core survey database tables for recording invasive species survey data are organized into three categories: “Where, Who & When, and What.” Organisms are identified with Taxonomic Serial Numbers from the Integrated Taxonomic Information System. To allow users to immediately see a map of their data combined with other user's data, a custom geographic information system (GIS) Internet solution was required. The GIS solution provides an unprecedented level of flexibility in database access, allowing users to display maps of invasive species distributions or abundances based on various criteria including taxonomic classification (i.e., phylum or division, order, class, family, genus, species, subspecies, and variety), a specific project, a range of dates, and a range of attributes (percent cover, age, height, sex, weight). This is a significant paradigm shift from “map servers” to true Internet-based GIS solutions. The remainder of the system was created with a mix of commercial products, open source software, and custom software. Custom GIS libraries were created where required for processing large datasets, accessing the operating system, and to use existing libraries in C++, R, and other languages to develop the tools to track harmful species in space and time. The GODM database and system are crucial for early detection and rapid containment of invasive species.
---
paper_title: Interactions Among Spatial Scales Constrain Species Distributions in Fragmented Urban Landscapes
paper_content:
Understanding species' responses to habitat loss is a major challenge for ecologists and conservation biologists, who need quantitative, yet practical, frameworks to design landscapes better able to sustain native species. I here develop one such framework by synthesizing two ecological paradigms: scale-dependence and constraint-like interactions in biological phenomena. I develop a model and apply it to birds around Tucson, USA, investigating the manner in which spatial scales interact to constrain species distributions in fragmented urban landscapes. Species' responses vary in interesting ways. Surprisingly, most show situations in which habitat at one spatial scale constrains the influence of habitat at another scale. I discuss the implications of this work for conservation in human-dominated landscapes, and the need to recognize constraint-like interactions among processes and spatial scales in ecology.
---
paper_title: Effect of species rarity on the accuracy of species distribution models for reptiles and amphibians in southern California
paper_content:
Aim Several studies have found that more accurate predictive models of species’ occurrences can be developed for rarer species; however, one recent study found the relationship between range size and model performance to be an artefact of sample prevalence, that is, the proportion of presence versus absence observations in the data used to train the model. We examined the effect of model type, species rarity class, species’ survey frequency, detectability and manipulated sample prevalence on the accuracy of distribution models developed for 30 reptile and amphibian species. Location Coastal southern California, USA. Methods Classification trees, generalized additive models and generalized linear models were developed using species presence and absence data from 420 locations. Model performance was measured using sensitivity, specificity and the area under the curve (AUC) of the receiver-operating characteristic (ROC) plot based on twofold cross-validation, or on bootstrapping. Predictors included climate, terrain, soil and vegetation variables. Species were assigned to rarity classes by experts. The data were sampled to generate subsets with varying ratios of presences and absences to test for the effect of sample prevalence. Join count statistics were used to characterize spatial dependence in the prediction errors. Results Species in classes with higher rarity were more accurately predicted than common species, and this effect was independent of sample prevalence. Although positive spatial autocorrelation remained in the prediction errors, it was weaker than was observed in the species occurrence data. The differences in accuracy among model types were slight. Main conclusions Using a variety of modelling methods, more accurate species distribution models were developed for rarer than for more common species. This was presumably because it is difficult to discriminate suitable from unsuitable habitat for habitat generalists, and not as an artefact of the effect of sample prevalence on model estimation.
---
paper_title: A comparison of the performance of threshold criteria for binary classification in terms of predicted prevalence and kappa
paper_content:
Modelling techniques used in binary classification problems often result in a predicted probability surface, which is then translated into a presence–absence classification map. However, this translation requires a (possibly subjective) choice of threshold above which the variable of interest is predicted to be present. The selection of this threshold value can have dramatic effects on model accuracy as well as the predicted prevalence for the variable (the overall proportion of locations where the variable is predicted to be present). The traditional default is to simply use a threshold of 0.5 as the cut-off, but this does not necessarily preserve the observed prevalence or result in the highest prediction accuracy, especially for data sets with very high or very low observed prevalence. Alternatively, the thresholds can be chosen to optimize map accuracy, as judged by various criteria. Here we examine the effect of 11 of these potential criteria on predicted prevalence, prediction accuracy, and the resulting map output. Comparisons are made using output from presence–absence models developed for 13 tree species in the northern mountains of Utah. We found that species with poor model quality or low prevalence were most sensitive to the choice of threshold. For these species, a 0.5 cut-off was unreliable, sometimes resulting in substantially lower kappa and underestimated prevalence, with possible detrimental effects on a management decision. If a management objective requires a map to portray unbiased estimates of species prevalence, then the best results were obtained from thresholds deliberately chosen so that the predicted prevalence equaled the observed prevalence, followed closely by thresholds chosen to maximize kappa. These were also the two criteria with the highest mean kappa from our independent test data. For particular management applications the special cases of user-specified required accuracy may be most appropriate. Ultimately, maps will typically have multiple and somewhat conflicting management applications. Therefore, providing users with a continuous probability surface may be the most versatile and powerful method, allowing threshold choice to be matched with each map's intended use.
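Two of the threshold criteria compared above can be written compactly: the cut-off that maximises kappa, and the cut-off at which predicted prevalence equals observed prevalence. The sketch below uses simulated probabilities rather than the tree-species data of the study; the grid of candidate thresholds and the simulated data are assumptions.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def kappa_max_threshold(y_true, p_pred, grid=None):
    """Cut-off that maximises Cohen's kappa over a grid of candidate thresholds."""
    grid = np.linspace(0.01, 0.99, 99) if grid is None else grid
    kappas = [cohen_kappa_score(y_true, (p_pred >= t).astype(int)) for t in grid]
    return grid[int(np.argmax(kappas))]

def prevalence_threshold(y_true, p_pred):
    """Cut-off at which predicted prevalence equals observed prevalence,
    i.e. the (1 - prevalence) quantile of the predicted probabilities."""
    return np.quantile(p_pred, 1.0 - y_true.mean())

rng = np.random.default_rng(2)
y = rng.binomial(1, 0.15, size=2000)                                   # low-prevalence species
p = np.clip(0.15 + 0.5 * (y - 0.15) + rng.normal(0, 0.2, y.size), 0, 1)
print(kappa_max_threshold(y, p), prevalence_threshold(y, p))
print("predicted prevalence at 0.5:", (p >= 0.5).mean(), "observed:", y.mean())
```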
---
paper_title: Mapping urban areas on a global scale: which of the eight maps now available is more accurate?
paper_content:
Eight groups from government and academia have created 10 global maps that offer a ca. 2000 portrait of land in urban use. Our initial investigation found that their estimates of the total amount of urban land differ by as much as an order of magnitude (0.27–3.52 × 10⁶ km²). Since it is not possible for these heterogeneous maps to all represent urban areas accurately, we undertake the first global accuracy assessment of these maps using a two-tiered approach that draws on a stratified random sample of 10 000 high-resolution Google Earth validation sites and 140 medium-resolution Landsat-based city maps. Employing a wide range of accuracy measures at different spatial scales, we conclude that the new MODIS 500 m resolution global urban map has the highest accuracy, followed by a thresholded version of the Global Impervious Surface Area map based on the Night-time Lights and LandScan datasets.
---
paper_title: Harshness in image classification accuracy assessment
paper_content:
Thematic mapping via a classification analysis is one of the most common applications of remote sensing. The accuracy of image classifications is, however, often viewed negatively. Here, it is suggested that the approach to the evaluation of image classification accuracy typically adopted in remote sensing may often be unfair, commonly being rather harsh and misleading. It is stressed that the widely used target accuracy of 85% can be inappropriate and that the approach to accuracy assessment adopted commonly in remote sensing is pessimistically biased. Moreover, the maps produced by other communities, which are often used unquestioningly, may have a low accuracy if evaluated from the standard perspective adopted in remote sensing. A greater awareness of the problems encountered in accuracy assessment may help ensure that perceptions of classification accuracy are realistic and reduce unfair criticism of thematic maps derived from remote sensing.
---
paper_title: Sample size determination for image classification accuracy assessment and comparison
paper_content:
Many factors influence the quality and value of a classification accuracy assessment and evaluation programme. This paper focuses on the size of the testing set(s) used with particular regard to the impacts on accuracy assessment and comparison. Testing set size is important as the use of an inappropriately large or small sample could lead to limited and sometimes erroneous assessments of accuracy and of differences in accuracy. Here, some of the basic statistical principles of sample size determination are outlined, including a discussion of Type II errors and their control. The paper provides a discussion on some of the basic issues of sample size determination for accuracy assessment and includes factors linked to accuracy comparison. With the latter, the researcher should specify the effect size (minimum meaningful difference in accuracy), significance level and power used in an analysis and ideally also fit confidence limits to derived estimates. This will help design a study and aid the use of appro...
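A basic version of the sample-size reasoning discussed above is the standard formula for estimating a proportion, n = z^2 p(1 - p) / d^2, applied to overall accuracy. The target accuracy, confidence level and half-width below are illustrative assumptions, not values taken from the paper.

```python
import math
from scipy.stats import norm

def sample_size_for_accuracy(expected_accuracy, half_width, confidence=0.95):
    """Testing-set size needed so that a confidence interval for overall accuracy
    has roughly +/- half_width, using the normal approximation for a proportion."""
    z = norm.ppf(1.0 - (1.0 - confidence) / 2.0)
    p = expected_accuracy
    return math.ceil(z**2 * p * (1.0 - p) / half_width**2)

# e.g. expecting ~85% accuracy and wanting a CI of roughly +/- 3 percentage points
print(sample_size_for_accuracy(0.85, 0.03))   # about 545 validation sites
```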
---
paper_title: The abuse of power: The pervasive fallacy of power calculations for data analysis
paper_content:
It is well known that statistical power calculations can be valuable in planning an experiment. There is also a large literature advocating that power calculations be made whenever one performs a statistical test of a hypothesis and one obtains a statistically nonsignificant result. Advocates of such post-experiment power calculations claim the calculations should be used to aid in the interpretation of the experimental results. This approach, which appears in various forms, is fundamentally flawed. We document that the problem is extensive and present arguments to demonstrate the flaw in the logic.
---
paper_title: More complex distribution models or more representative data
paper_content:
Distribution models for species are increasingly used to summarize species’ geography in conservation analyses. These models use increasingly sophisticated modeling techniques, but often lack detailed examination of the quality of the biological occurrence data on which they are based. I analyze the results of the best comparative study of the performance of different modeling techniques, which used pseudo-absence data selected at random. I provide an example of variation in model accuracy depending on the type of absence information used, showing that good model predictions depend most critically on better biological data.
---
paper_title: Statistical Power of Presence-Absence Data to Detect Population Declines
paper_content:
Abstract: Population declines may be inferred from a decrease in the number of sites at which a species is detected. Although such presence-absence data often are interpreted informally, it is simple to test the statistical significance of changes in the number of sites occupied by a species. I used simulations to examine the statistical power (i.e., the probability of making the Type II error that no population decline has occurred when the population actually has declined) of presence-absence designs. Most presence-absence designs have low power to detect declines of <20–50% in populations but have adequate power to detect steeper declines. Power was greater if the population disappeared entirely from a subset of formerly occupied sites than if it declined evenly over its entire range. Power also rose with (1) increases in the number of sites surveyed; (2) increases in population density or sampling effort at a site; and (3) decreases in spatial variance in population density. Because of potential problems with bias and inadequate power, presence-absence designs should be used and interpreted cautiously.
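The kind of power calculation described above can be reproduced with a small Monte Carlo sketch: simulate site occupancy before and after a proportional decline, test the change in the number of occupied sites, and record how often the decline is detected. The occupancy level, decline sizes, number of sites and the pooled two-proportion z-test below are illustrative assumptions, not the simulation design of the paper.

```python
import numpy as np
from scipy.stats import norm

def two_prop_pvalue(x1, n1, x2, n2):
    """One-sided pooled z-test p-value for H1: occupancy in survey 1 > survey 2."""
    p1, p2, p = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return 1.0 if se == 0 else 1.0 - norm.cdf((p1 - p2) / se)

def power_presence_absence(n_sites, occ_before, decline, alpha=0.05, n_sims=5000, seed=0):
    """Monte Carlo power: how often a decline in the number of occupied sites
    between two presence/absence surveys is detected at level alpha."""
    rng = np.random.default_rng(seed)
    occ_after = occ_before * (1.0 - decline)
    hits = 0
    for _ in range(n_sims):
        before = rng.binomial(n_sites, occ_before)    # occupied sites in survey 1
        after = rng.binomial(n_sites, occ_after)      # occupied sites in survey 2
        hits += two_prop_pvalue(before, n_sites, after, n_sites) < alpha
    return hits / n_sims

for decline in (0.2, 0.5):
    print(f"decline {decline:.0%}: power ~ {power_presence_absence(50, 0.4, decline):.2f}")
```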
---
paper_title: A review of methods for the assessment of prediction errors in conservation presence/absence models
paper_content:
Summary Predicting the distribution of endangered species from habitat data is frequently perceived to be a useful technique. Models that predict the presence or absence of a species are normally judged by the number of prediction errors. These may be of two types: false positives and false negatives. Many of the prediction errors can be traced to ecological processes such as unsaturated habitat and species interactions. Consequently, if prediction errors are not placed in an ecological context the results of the model may be misleading. The simplest, and most widely used, measure of prediction accuracy is the number of correctly classified cases. There are other measures of prediction success that may be more appropriate. Strategies for assessing the causes and costs of these errors are discussed. A range of techniques for measuring error in presence/absence models, including some that are seldom used by ecologists (e.g. ROC plots and cost matrices), are described. A new approach to estimating prediction error, which is based on the spatial characteristics of the errors, is proposed. Thirteen recommendations are made to enable the objective selection of an error assessment technique for ecological presence/absence models.
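One of the less familiar tools mentioned above, the cost matrix, simply weights the two error types differently when scoring a presence/absence model; for a rare species, for example, a false negative (an occupied site predicted absent) may be judged costlier than a false positive. The confusion matrix and cost values below are hypothetical.

```python
import numpy as np

def expected_cost(conf_matrix, cost_matrix):
    """Average misclassification cost.
    Rows of both matrices = observed class, columns = predicted class
    (order: [absence, presence])."""
    conf = np.asarray(conf_matrix, dtype=float)
    return float((conf * np.asarray(cost_matrix)).sum() / conf.sum())

conf = [[70, 10],    # observed absence: 70 true negatives, 10 false positives
        [ 5, 15]]    # observed presence: 5 false negatives, 15 true positives
costs_equal    = [[0, 1], [1, 0]]   # both error types cost the same
costs_fn_heavy = [[0, 1], [5, 0]]   # missing an occupied site costs 5x more
print(expected_cost(conf, costs_equal), expected_cost(conf, costs_fn_heavy))
```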
---
paper_title: Threshold criteria for conversion of probability of species presence to either–or presence–absence
paper_content:
ABSTRACT For many applications the continuous prediction afforded by species distribution modeling must be converted to a map of presence or absence, so a threshold probability indicative of species presence must be fixed. Because of the bias in probability outputs due to frequency of presences (prevalence), a fixed threshold value, such as 0.5, does not usually correspond to the threshold above which the species is more likely to be present. In this paper four threshold criteria are compared for a wide range of sample sizes and prevalences, modeling a virtual species in order to avoid the omnipresent error sources that the use of real species data implies. In general, sensitivity–specificity difference minimizer and sensitivity–specificity sum maximizer criteria produced the most accurate predictions. The widely-used 0.5 fixed threshold and Kappa-maximizer criteria are the worst ones in almost all situations. Nevertheless, whatever the criteria used, the threshold value chosen and the research goals that determined its choice must be stated.
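The two best-performing criteria in this comparison, minimising |sensitivity - specificity| and maximising sensitivity + specificity, can be evaluated over a grid of candidate cut-offs as sketched below. The data are simulated (a low-prevalence virtual species in the spirit of the study, but not its actual simulation design).

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from binary observations and predictions."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def best_thresholds(y_true, p_pred):
    grid = np.linspace(0.01, 0.99, 99)
    stats = np.array([sens_spec(y_true, (p_pred >= t).astype(int)) for t in grid])
    sens, spec = stats[:, 0], stats[:, 1]
    return {
        "min |sens - spec|": grid[np.argmin(np.abs(sens - spec))],
        "max sens + spec":   grid[np.argmax(sens + spec)],
    }

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.1, size=3000)                  # prevalence 0.1: far from 0.5
p = np.clip(rng.beta(2, 8, y.size) + 0.35 * y, 0, 1)
print(best_thresholds(y, p), "vs the fixed 0.5 default")
```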
---
paper_title: Effect of errors in ground truth on classification accuracy
paper_content:
The effect of errors in ground truth on the estimated thematic accuracy of a classifier is considered. A relationship is derived between the true accuracy of a classifier relative to ground truth without errors, the actual accuracy of the ground truth used, and the measured accuracy of the classifier as a function of the number of classes. We show that if the accuracy of the ground truth is known or can be estimated, the true accuracy of a classifier can be estimated from the measured accuracy. In a series of simulations our method is shown to produce unbiased estimates of the true accuracy of the classifier with an uncertainty that depends on the number of samples and the accuracy of the ground truth. A method for determining the relative performance of two or more classifiers over the same area is then discussed. The results indicate that, as the number of samples increases, the performance of the classifiers can be effectively differentiated using inaccurate ground truth. It is argued that relative acc...
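The effect described above, inaccurate reference data deflating a classifier's measured accuracy, can be explored with a small simulation: corrupt a fraction of the ground-truth labels at random and compare the measured agreement with the error-free case. The class count, accuracy values and the assumption that wrong labels are spread uniformly over the other classes are illustrative; this is not the paper's analytical derivation.

```python
import numpy as np

def measured_accuracy(true_acc, ground_truth_acc, n_classes, n=200_000, seed=0):
    """Simulate agreement between a classifier of accuracy `true_acc` and a
    reference data set of accuracy `ground_truth_acc`, with wrong labels
    spread uniformly over the other classes."""
    rng = np.random.default_rng(seed)
    truth = rng.integers(n_classes, size=n)

    def corrupt(labels, acc):
        wrong = rng.random(n) >= acc
        offset = rng.integers(1, n_classes, size=n)   # shift to a different class
        return np.where(wrong, (labels + offset) % n_classes, labels)

    classifier = corrupt(truth, true_acc)
    reference = corrupt(truth, ground_truth_acc)
    return np.mean(classifier == reference)

print(measured_accuracy(true_acc=0.90, ground_truth_acc=1.00, n_classes=5))  # ~0.90
print(measured_accuracy(true_acc=0.90, ground_truth_acc=0.85, n_classes=5))  # noticeably lower
```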
---
paper_title: Effects of species and habitat positional errors on the performance and interpretation of species distribution models
paper_content:
Aim A key assumption in species distribution modelling is that both species and environmental data layers contain no positional errors, yet this will rarely be true. This study assesses the effect of introduced positional errors on the performance and interpretation of species distribution models. Location Baixo Alentejo region of Portugal. Methods Data on steppe bird occurrence were collected using a random stratified sampling design on a 1-km² pixel grid. Environmental data were sourced from satellite imagery and digital maps. Error was deliberately introduced into the species data as shifts in a random direction of 0–1, 2–3, 4–5 and 0–5 pixels. Whole habitat layers were shifted by 1 pixel to cause mis-registration, and the cumulative effect of one to three shifted layers investigated. Distribution models were built for three species using three algorithms with three replicates. Test models were compared with controls without errors. Results Positional errors in the species data led to a drop in model performance (larger errors having larger effects – typically up to 10% drop in area under the curve on average), although not enough for models to be rejected. Model interpretation was more severely affected with inconsistencies in the contributing variables. Errors in the habitat layers had similar although lesser effects. Main conclusions Models with species positional errors are hard to detect, often statistically good, ecologically plausible and useful for prediction, but interpreting them is dangerous. Mis-registered habitat layers produce smaller effects probably because shifting entire layers does not break down the correlation structure to the same extent as random shifts in individual species observations. Spatial autocorrelation in the habitat layers may protect against species positional errors to some extent but the relationship is complex and requires further work. The key recommendation must be that positional errors should be minimised through careful field design and data processing.
---
paper_title: Measuring the accuracy of species distribution models: a review
paper_content:
Species distribution models (SDMs) are empirical models relating species occurrence to environmental variables based on statistical or other response surfaces. Species distribution modeling can be used as a tool to solve many theoretical and applied ecological and environmental problems, which include testing biogeographical, ecological and evolutionary hypotheses, assessing species invasion and climate change impact, and supporting conservation planning and reserve selection. The utility of SDM in real world applications requires the knowledge of the model’s accuracy. The accuracy of a model includes two aspects: discrimination capacity and reliability. The former is the power of the model to differentiate presences from absences; and the latter refers to the capability of the predicted probabilities to reflect the observed proportion of sites occupied by the subject species. Similar methodology has been used for model accuracy assessment in different fields, including medical diagnostic test, weather forecasting and machine learning, etc. Some accuracy measures are used in all fields, e.g. the overall accuracy and the area under the receiver operating characteristic curve; while the use of other measures is largely restricted to specific fields, e.g. F-measure is mainly used in machine learning field, or is referred to by different names in different fields, e.g. “true skill statistic” is used in atmospheric science and it is called “Youden’s J” in medical diagnostic field. In this paper we review those accuracy measures typically used in ecology. Generally, the measures can be divided into two groups: threshold-dependent and threshold-independent. Measures in the first group are used for binary predictions, and those in the second group are used for continuous predictions. Continuous predictions may be transformed to binary ones if a specific threshold is employed. In such cases, the threshold-dependent accuracy measures can also be used. The threshold-dependent indices used in or introduced to SDM field include overall accuracy, sensitivity, specificity, positive predictive value, negative predictive value, odds ratio, true skill statistic, F-measure, Cohen’s kappa, and normalized mutual information (NMI). However, since NMI only measures the agreement between two patterns, it cannot differentiate the worse-than-random models from the better-than-random models, which reduces its utility as an accuracy measure. The threshold-independent indices used in or introduced to the SDM field include the area under the receiver operating characteristic curve (AUC), Gini index, and point biserial correlation coefficient. The proportion of explained deviance D² and its adjusted form have been also introduced into SDM field. But this adjusted metric has no theoretical foundation in the context of generalized linear modeling. Therefore, we provide another adjusted form, which was proposed by H. V. Houwelingen based on the asymptotic χ² distribution of the log-likelihood statistics. Its superiority over other related measures has been found through previous simulation studies. We also provide another analogous measure, the coefficient of determination R², which has had a long history in weather forecast verification and was also recommended for use in medical diagnosis. Though these measures D² and R² are routinely used to evaluate generalized linear models (GLMs), we argue that nothing prevents them from being applied to other GLM-like models.
In SDM accuracy assessment, discrimination capacity is often considered, but model reliability is frequently ignored. The primary reason for this is that no reliability measure has been introduced into the ecological literature. To meet this need we also suggest that root mean square error be used as a reliability measure. Its squared form, mean square error, has been used in meteorology for a long time, and is called Brier’s score. We also discuss the effect of prevalence dependence of accuracy measures and the precision of accuracy estimates.
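The threshold-dependent measures listed above all derive from the same 2×2 confusion matrix. As a rough illustration (not code from the paper), the Python sketch below computes overall accuracy, sensitivity, specificity, TSS and Cohen's kappa from presence/absence counts, plus the Brier score (mean square error) whose square root is the reliability measure suggested above; the function names and example counts are invented.

```python
import numpy as np

def confusion_counts(observed, predicted):
    """Tally the 2x2 confusion matrix from binary observed/predicted vectors."""
    observed = np.asarray(observed, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    a = int(np.sum(predicted & observed))    # true presences
    b = int(np.sum(predicted & ~observed))   # false presences
    c = int(np.sum(~predicted & observed))   # false absences
    d = int(np.sum(~predicted & ~observed))  # true absences
    return a, b, c, d

def threshold_dependent_measures(a, b, c, d):
    """Overall accuracy, sensitivity, specificity, TSS and Cohen's kappa."""
    n = a + b + c + d
    sensitivity = a / (a + c)
    specificity = d / (b + d)
    overall_accuracy = (a + d) / n
    tss = sensitivity + specificity - 1          # true skill statistic
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (overall_accuracy - p_chance) / (1 - p_chance)
    return dict(overall_accuracy=overall_accuracy, sensitivity=sensitivity,
                specificity=specificity, tss=tss, kappa=kappa)

def brier_score(observed, predicted_prob):
    """Mean square error of predicted probabilities; its square root is the
    RMSE suggested above as a reliability measure."""
    observed = np.asarray(observed, dtype=float)
    predicted_prob = np.asarray(predicted_prob, dtype=float)
    return float(np.mean((predicted_prob - observed) ** 2))

# Invented counts: 40 true presences, 10 false presences, 15 false absences,
# 135 true absences.
print(threshold_dependent_measures(40, 10, 15, 135))
print(brier_score([1, 1, 0, 0], [0.8, 0.6, 0.3, 0.1]))
```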
---
paper_title: Assessing the accuracy of species distribution models: prevalence, kappa and the true skill statistic (TSS)
paper_content:
Summary 1. In recent years the use of species distribution models by ecologists and conservation managers has increased considerably, along with an awareness of the need to provide accuracy assessment for predictions of such models. The kappa statistic is the most widely used measure for the performance of models generating presence–absence predictions, but several studies have criticized it for being inherently dependent on prevalence, and argued that this dependency introduces statistical artefacts to estimates of predictive accuracy. This criticism has been supported recently by computer simulations showing that kappa responds to the prevalence of the modelled species in a unimodal fashion. 2. In this paper we provide a theoretical explanation for the observed dependence of kappa on prevalence, and introduce into ecology an alternative measure of accuracy, the true skill statistic (TSS), which corrects for this dependence while still keeping all the advantages of kappa. We also compare the responses of kappa and TSS to prevalence using empirical data, by modelling distribution patterns of 128 species of woody plant in Israel. 3. The theoretical analysis shows that kappa responds in a unimodal fashion to variation in prevalence and that the level of prevalence that maximizes kappa depends on the ratio between sensitivity (the proportion of correctly predicted presences) and specificity (the proportion of correctly predicted absences). In contrast, TSS is independent of prevalence. 4. When the two measures of accuracy were compared using empirical data, kappa showed a unimodal response to prevalence, in agreement with the theoretical analysis. TSS showed a decreasing linear response to prevalence, a result we interpret as reflecting true ecological phenomena rather than a statistical artefact. This interpretation is supported by the fact that a similar pattern was found for the area under the ROC curve, a measure known to be independent of prevalence. 5. Synthesis and applications. Our results provide theoretical and empirical evidence that kappa, one of the most widely used measures of model performance in ecology, has serious limitations that make it unsuitable for such applications. The alternative we suggest, TSS, compensates for the shortcomings of kappa while keeping all of its advantages. We therefore recommend the TSS as a simple and intuitive measure for the performance of species distribution models when predictions are expressed as presence–absence maps.
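The prevalence dependence described here is easy to reproduce numerically. The sketch below is an illustration (not the authors' code): it fixes sensitivity and specificity at 0.8 and computes the expected kappa and TSS across a range of prevalences, showing that kappa varies while TSS stays constant at sensitivity + specificity − 1.

```python
def expected_kappa_and_tss(se, sp, prevalence):
    """Expected kappa and TSS for a model with fixed sensitivity (se) and
    specificity (sp) applied to a species with the given prevalence."""
    a = se * prevalence                  # true presences (as fractions of n)
    c = (1 - se) * prevalence            # false absences
    d = sp * (1 - prevalence)            # true absences
    b = (1 - sp) * (1 - prevalence)      # false presences
    po = a + d                                    # observed agreement
    pe = (a + b) * (a + c) + (c + d) * (b + d)    # chance agreement
    kappa = (po - pe) / (1 - pe)
    tss = se + sp - 1
    return kappa, tss

for prev in (0.05, 0.25, 0.50, 0.75, 0.95):
    k, t = expected_kappa_and_tss(se=0.8, sp=0.8, prevalence=prev)
    print(f"prevalence={prev:.2f}  kappa={k:.3f}  TSS={t:.3f}")
```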
---
paper_title: WAS IT THERE? DEALING WITH IMPERFECT DETECTION FOR SPECIES PRESENCE/ABSENCE DATA†
paper_content:
Summary. Species presence/absence surveys are commonly used in monitoring programs, metapopulation studies and habitat modelling, yet they can never be used to confirm that a species is absent from a location. Was the species there but not detected, or was the species genuinely absent? Not accounting for imperfect detection of the species leads to misleading conclusions about the status of the population under study. Here some recent modelling developments are reviewed that explicitly allow for the detection process, enabling unbiased estimation of occupancy, colonization and local extinction probabilities. The methods are illustrated with a simple analysis of presence/absence data collected on larvae and metamorphs of tiger salamander (Ambystoma tigrinum) in 2000 and 2001 from Minnesota farm ponds, which highlights that misleading conclusions can result from naive analyses that do not explicitly account for imperfect detection.
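A minimal numerical illustration of why ignoring detection matters (hypothetical values, not from the study): if a species occupies a fraction psi of sites and is detected on each of k visits to an occupied site with probability p, the expected "naive" occupancy estimate is psi multiplied by the probability of at least one detection.

```python
def prob_detected_at_least_once(p, k):
    """Probability that an occupied site yields at least one detection in k
    surveys, each with per-survey detection probability p."""
    return 1 - (1 - p) ** k

def expected_naive_occupancy(psi, p, k):
    """Expected fraction of sites with at least one detection when true
    occupancy is psi and detection is imperfect."""
    return psi * prob_detected_at_least_once(p, k)

# True occupancy 0.6, per-survey detection 0.3, three surveys per site:
print(expected_naive_occupancy(psi=0.6, p=0.3, k=3))   # ~0.39, well below 0.6
```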
---
paper_title: Statistical Techniques for Evaluating the Diagnostic Utility of Laboratory Tests
paper_content:
Clinical laboratory data is used to help classify patients into diagnostic disease categories so that appropriate therapy may be implemented and prognosis estimated. Unfortunately, the process of correctly classifying patients with respect to disease status is often difficult. Patients may have several concurrent disease processes and the clinical signs and symptoms of many diseases lack specificity. In addition, results of laboratory tests and other diagnostic procedures from healthy and diseased individuals often overlap. Finally, advances in computer technology and laboratory automation have resulted in an extraordinary increase in the amount of information produced by the clinical laboratory; information which must be correctly evaluated and acted upon so that appropriate treatment and additional testing, if necessary, can be implemented. Clinical informatics refers to a broad array of statistical methods used for the evaluation and management of diagnostic information necessary for appropriate patient care. Within the realm of clinical chemistry, clinical informatics may be used to indicate the acquisition, evaluation, representation and interpretation of clinical chemistry data. This review discusses some of the techniques that should be used for the evaluation of the diagnostic utility of clinical laboratory data. The major topics to be covered include probabilistic approaches to data evaluation, and information theory. The latter topic will be discussed in some detail because it introduces important concepts useful in providing for cost-effective, quality patient care. In addition, an example illustrating how the informational value of diagnostic tests can be determined is shown.
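One of the probabilistic tools alluded to above is the conversion of a test's sensitivity and specificity into predictive values at a given disease prevalence via Bayes' theorem. A small illustrative sketch (the numbers are hypothetical, not taken from this review):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' theorem."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# A hypothetical test with 90% sensitivity and 95% specificity used at 5%
# disease prevalence yields a PPV of only ~0.49 but an NPV of ~0.99.
print(predictive_values(0.90, 0.95, 0.05))
```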
---
paper_title: On the accuracy of landscape pattern analysis using remote sensing data
paper_content:
Advances in remote sensing technologies have provided practical means for land use and land cover mapping which is critically important for landscape ecological studies. However, all classifications of remote sensing data are subject to different kinds of errors, and these errors can be carried over or propagated in subsequent landscape pattern analysis. When these uncertainties go unreported, as they do commonly in the literature, they become hidden errors. While this is apparently an important issue in the study of landscapes from either a biophysical or socioeconomic perspective, limited progress has been made in resolving this problem. Here we discuss how errors of mapped data can affect landscape metrics and possible strategies which can help improve the reliability of landscape pattern analysis.
---
paper_title: The influence of spatial errors in species occurrence data used in distribution models
paper_content:
Summary 1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species’ environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data sharing initiatives involving species’ occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment where models were calibrated with original, accurate data and (2) an error treatment where data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we moved each coordinate with a random number drawn from the normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications. To use the vast array of occurrence data that exists currently for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error. Journal of Applied Ecology (2007)
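The error treatment described above is straightforward to emulate. The sketch below is illustrative only; it assumes occurrence coordinates are already in a projected coordinate system with units of km (the points and random seed are invented) and jitters each coordinate with Gaussian noise of mean zero and a 5 km standard deviation.

```python
import numpy as np

rng = np.random.default_rng(42)

def degrade_coordinates(xy_km, sd_km=5.0):
    """Add independent Gaussian noise (mean 0, standard deviation sd_km) to
    each coordinate to simulate locational error."""
    xy_km = np.asarray(xy_km, dtype=float)
    return xy_km + rng.normal(loc=0.0, scale=sd_km, size=xy_km.shape)

# Ten hypothetical occurrence points on a projected grid (units of km).
occurrences = rng.uniform(0, 100, size=(10, 2))
degraded = degrade_coordinates(occurrences)
print(np.round(degraded - occurrences, 2))   # per-point displacement in km
```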
---
paper_title: Obtaining Environmental Favourability Functions from Logistic Regression
paper_content:
Logistic regression is a statistical tool widely used for predicting species’ potential distributions starting from presence/absence data and a set of independent variables. However, logistic regression equations compute probability values based not only on the values of the predictor variables but also on the relative proportion of presences and absences in the dataset, which does not adequately describe the environmental favourability for or against species presence. A few strategies have been used to circumvent this, but they usually imply an alteration of the original data or the discarding of potentially valuable information. We propose a way to obtain from logistic regression an environmental favourability function whose results are not affected by an uneven proportion of presences and absences. We tested the method on the distribution of virtual species in an imaginary territory. The favourability models yielded similar values regardless of the variation in the presence/absence ratio. We also illustrate with the example of the Pyrenean desman’s (Galemys pyrenaicus) distribution in Spain. The favourability model yielded more realistic potential distribution maps than the logistic regression model. Favourability values can be regarded as the degree of membership of the fuzzy set of sites whose environmental conditions are favourable to the species, which enables applying the rules of fuzzy logic to distribution modelling. They also allow for direct comparisons between models for species with different presence/absence ratios in the study area. This makes them more useful to estimate the conservation value of areas, to design ecological corridors, or to select appropriate areas for species reintroductions.
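The favourability correction proposed in this line of work is usually written in terms of the odds of the predicted probability and the presence/absence ratio of the training data, F = (P/(1−P)) / (n1/n0 + P/(1−P)). The sketch below follows that commonly cited form and should be read as an illustration rather than a transcription of the paper's equations; the sample sizes and probabilities are invented.

```python
import numpy as np

def favourability(p, n_presences, n_absences):
    """Convert logistic-regression probabilities p into favourability values
    that do not depend on the presence/absence ratio of the training data."""
    p = np.asarray(p, dtype=float)
    odds = p / (1 - p)
    return odds / (n_presences / n_absences + odds)

# With 100 presences and 900 absences, a predicted probability equal to the
# species' prevalence (0.1) maps to F = 0.5, and P = 0.5 maps to F = 0.9.
print(favourability([0.1, 0.5, 0.9], n_presences=100, n_absences=900))
```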
---
paper_title: Evaluating Diagnostic Tests with Imperfect Standards
paper_content:
New diagnostic tests frequently are evaluated against gold standards that are assumed to classify patients with unerring accuracy according to the presence or absence of disease. In practice, gold standards rarely are perfect predictors of disease and tend to misclassify a small number of patients. When an imperfect standard is used to evaluate a diagnostic test, many commonly used measures of test performance are distorted. It is not widely appreciated that these distortions occur in predictable directions and that they may be of considerable magnitude, even when the gold standard has a high degree of accuracy. The diagnostic powers of clinical tests will be more accurately reported if consideration is given to the types of biases that result from the use of imperfect standards. Several different approaches may be used to minimize these distortions when evaluating new tests.
---
paper_title: Classification accuracy comparison: hypothesis tests and the use of confidence intervals in evaluations of difference, equivalence and non-inferiority
paper_content:
The comparison of classification accuracy statements has generally been based upon tests of difference or inequality when other scenarios and approaches may be more appropriate. Procedures for evaluating two scenarios with interest focused on the similarity in accuracy values, non-inferiority and equivalence, are outlined following a discussion of tests of difference (inequality). It is also suggested that the confidence interval of the difference in classification accuracy may be used as well as or instead of conventional hypothesis testing to reveal more information about the disparity in the classification accuracy values compared.
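As a rough sketch of the confidence-interval approach advocated here (assuming two independent test samples; paired designs need a different variance term), the following computes an approximate normal-theory CI for the difference between two classification accuracies and shows how it could be read against difference, non-inferiority or equivalence margins. All counts and margins are hypothetical.

```python
import math

def diff_accuracy_ci(k1, n1, k2, n2, z=1.96):
    """Approximate normal-theory CI for the difference between two independent
    classification accuracies, each given as k correct cases out of n."""
    p1, p2 = k1 / n1, k2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = diff_accuracy_ci(460, 500, 440, 500)
print(f"difference = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# Difference test: CI excluding 0 suggests the accuracies differ.
# Non-inferiority (margin 0.05): require lo > -0.05.
# Equivalence (margin +/-0.05): require -0.05 < lo and hi < 0.05.
```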
---
paper_title: Surveyor consistency in presence/absence sampling for monitoring vegetation in a boreal forest
paper_content:
Vegetation assessments are a central part of many large-scale monitoring programmes. To accurately estimate change over time, consistent field methods are important. Presence/absence (P/A) sampling is considered to be less susceptible to judgement and measurement errors in comparison with visual cover assessment, although errors also occur with this method in complete species inventories. Few studies have evaluated surveyor consistency in P/A sampling with a limited list of species. In this study, the consistency of results in P/A sampling was evaluated in a field test, with different surveyors assessing the same sample plots. The results indicated a good consistency between surveyors and high accuracy according to a reference survey for many of the tested species, although some differences both between surveyors and in comparison with the reference survey were found. Comparing a group of surveyors with a larger experience of vegetation assessments with a group of surveyors with less experience indicated that the former were more accurate and consistent. No clear differences were found between different plot sizes tested. The main conclusion from the study is that P/A sampling is slightly affected by observer judgement bias, but that in comparison with the consistency of visual cover assessment observed in other studies, the difference between surveyors for many species is reasonably low.
---
paper_title: Statistical methods to correct for verification bias in diagnostic studies are inadequate when there are few false negatives: a simulation study
paper_content:
Background: A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives. Methods: A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the area-under-the-curve (AUC) corrected for verification bias varying both the rate and mechanism of verification. Results: In a single simulated data set, varying false negatives from 0 to 4 led to verification bias corrected AUCs ranging from 0.550 to 0.852. Excess variation associated with low numbers of false negatives was confirmed in simulation studies and by analyses of published studies that incorporated verification bias correction. The 2.5th – 97.5th centile range constituted as much as 60% of the possible range of AUCs for some simulations. Conclusion: Screening programs are designed such that there are few false negatives. Standard statistical methods for verification bias correction are inadequate in this circumstance.
---
paper_title: The impact of imperfect ground reference data on the accuracy of land cover change estimation
paper_content:
Error in the ground reference data set used in studies of land cover change can be a source of bias in the estimation of land cover change and of change detection accuracy. The magnitude of the bias introduced may be very large even if the ground reference data set is of a high accuracy. Sometimes the bias is of a predictable systematic nature and so may be reduced or even removed. The impacts of ground reference data error on the accuracy of estimates of the extent of change and on change detection accuracy were explored with simulated data. In one scenario illustrated, the producer's accuracy of change detection was estimated to be ∼61% when in reality it was 80%, the substantial underestimation of accuracy arising through the use of a ground reference data set with an accuracy of 90%. In the same scenario, the extent of change was also substantially overestimated at 26%, when in reality a change of only 20% had occurred. Reducing the effect of error in ground reference data will enable more accurate estimation of land cover change and a more realistic appraisal of the quality of remote sensing as a source of data on land cover change.
---
paper_title: Ecology and geography of avian influenza (HPAI H5N1) transmission in the Middle East and northeastern Africa
paper_content:
Background: The emerging highly pathogenic avian influenza strain H5N1 ("HPAI-H5N1") has spread broadly in the past decade, and is now the focus of considerable concern. We tested the hypothesis that spatial distributions of HPAI-H5N1 cases are related consistently and predictably to coarse-scale environmental features in the Middle East and northeastern Africa. We used ecological niche models to relate virus occurrences to 8 km resolution digital data layers summarizing parameters of monthly surface reflectance and landform. Predictive challenges included a variety of spatial stratification schemes in which models were challenged to predict case distributions in broadly unsampled areas. Results: In almost all tests, HPAI-H5N1 cases were indeed occurring under predictable sets of environmental conditions, generally predicted absent from areas with low NDVI values and minimal seasonal variation, and present in areas with a broad range of and appreciable seasonal variation in NDVI values. Although we documented significant predictive ability of our models, even between our study region and West Africa, case occurrences in the Arabian Peninsula appear to follow a distinct environmental regime. Conclusion: Overall, we documented a variable environmental "fingerprint" for areas suitable for HPAI-H5N1 transmission.
---
paper_title: Improved abundance prediction from presence-absence data
paper_content:
Aim: Many ecological surveys record only the presence or absence of species in the cells of a rectangular grid. Ecologists have investigated methods for using these data to predict the total abundance of a species from the number of grid cells in which the species is present. Our aim is to improve such predictions by taking account of the spatial pattern of occupied cells, in addition to the number of occupied cells. Innovation: We extend existing prediction models to include a spatial clustering variable. The extended models can be viewed as combining two macroecological regularities, the abundance-occupancy regularity and a spatial clustering regularity. The models are estimated using data from five tropical forest censuses, including three Panamanian censuses (4, 6 and 50 ha), one Costa Rican census (16 ha) and one Puerto Rican census (16 ha). A serpentine grassland census (8 × 8 m) from northern California is also studied. Main conclusions: Taking account of the spatial clustering of occupied cells improves abundance prediction from presence-absence data, reducing the mean square error of log-predictions by roughly 54% relative to a benchmark Poisson predictor and by roughly 34% relative to current prediction methods. The results have high statistical significance.
---
paper_title: Evaluating presence-absence models in ecology: the need to account for prevalence
paper_content:
1. Models for predicting the distribution of organisms from environmental data are widespread in ecology and conservation biology. Their performance is invariably evaluated from the percentage success at predicting occurrence at test locations. 2. Using logistic regression with real data from 34 families of aquatic invertebrates in 180 Himalayan streams, we illustrate how this widespread measure of predictive accuracy is affected systematically by the prevalence (i.e. the frequency of occurrence) of the target organism. Many evaluations of presence-absence models by ecologists are inherently misleading. 3. With the same invertebrate models, we examined alternative performance measures used in remote sensing and medical diagnostics. We particularly explored receiver-operating characteristic (ROC) plots, from which were derived (i) the area under each curve (AUC), considered an effective indicator of model performance independent of the threshold probability at which the presence of the target organism is accepted, and (ii) optimized probability thresholds that maximize the percentage of true absences and presences that are correctly identified. We also evaluated Cohen's kappa, a measure of the proportion of all possible cases of presence or absence that are predicted correctly after accounting for chance effects. 4. AUC measures from ROC plots were independent of prevalence, but highly significantly correlated with the much more easily computed kappa. Moreover, when applied in predictive mode to test data, models with thresholds optimized by ROC erroneously overestimated true occurrence among scarcer organisms, often those of greatest conservation interest. We advocate caution in using ROC methods to optimize thresholds required for real prediction. 5. Our strongest recommendation is that ecologists reduce their reliance on prediction success as a performance measure in presence-absence modelling. Cohen's kappa provides a simple, effective, standardized and appropriate statistic for evaluating or comparing presence-absence models, even those based on different statistical algorithms. None of the performance measures we examined tests the statistical significance of predictive accuracy, and we identify this as a priority area for research and development.
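The AUC discussed above can be computed without explicitly tracing the ROC curve, using its rank-based (Mann-Whitney) interpretation: the probability that a randomly chosen presence receives a higher predicted score than a randomly chosen absence. A small illustrative sketch with made-up data:

```python
import numpy as np

def auc_mann_whitney(observed, scores):
    """AUC via its rank-based interpretation (ties counted as one half)."""
    observed = np.asarray(observed, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[observed], scores[~observed]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

obs = [1, 0, 1, 1, 0, 0, 0, 1]
prob = [0.9, 0.4, 0.65, 0.8, 0.3, 0.55, 0.2, 0.7]
print(auc_mann_whitney(obs, prob))   # 1.0: every presence outscores every absence
```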
---
paper_title: Sensitivity of species-distribution models to error, bias, and model design : An application to resource selection functions for woodland caribou
paper_content:
Models that predict distribution are now widely used to understand the patterns and processes of plant and animal occurrence as well as to guide conservation and management of rare or threatened species. Application of these methods has led to corresponding studies evaluating the sensitivity of model performance to requisite data and other factors that may lead to imprecise or false inferences. We expand upon these works by providing a relative measure of the sensitivity of model parameters and prediction to common sources of error, bias, and variability. We used a one-at-a-time sample design and GPS location data for woodland caribou (Rangifer tarandus caribou) to assess one common species-distribution model: a resource selection function. Our measures of sensitivity included change in coefficient values, prediction success, and the area of mapped habitats following the systematic introduction of geographic error and bias in occurrence data, thematic misclassification of resource maps, and variation in model design. Results suggested that error, bias and model variation have a large impact on the direct interpretation of coefficients. Prediction success and definition of important habitats were less responsive to the perturbations we introduced to the baseline model. Model coefficients, prediction success, and area of ranked habitats were most sensitive to positional error in species locations followed by sampling bias, misclassification of resources, and variation in model design. We recommend that researchers report, and practitioners consider, levels of error and bias introduced to predictive species-distribution models. Formal sensitivity and uncertainty analyses are the most effective means for evaluating and focusing improvements on input data and considering the range of values possible from imperfect models.
---
paper_title: Frequentist and Bayesian approaches to prevalence estimation using examples from Johne's disease.
paper_content:
Although frequentist approaches to prevalence estimation are simple to apply, there are circumstances where it is difficult to satisfy assumptions of asymptotic normality and nonsensical point estimates (greater than 1 or less than 0) may result. This is particularly true when sample sizes are small, test prevalences are low and imperfect sensitivity and specificity of diagnostic tests need to be incorporated into calculations of true prevalence. Bayesian approaches offer several advantages including direct computation of range-respecting interval estimates (e.g. intervals between 0 and 1 for prevalence) without the requirement of transformations or large-sample approximations, direct probabilistic interpretation, and the flexibility to model in a straightforward manner the probability of zero prevalence. In this review, we present frequentist and Bayesian methods for animal- and herd-level true prevalence estimation based on individual and pooled samples. We provide statistical methods for detecting differences between population prevalence and frequentist methods for sample size and power calculations. All examples are motivated using Mycobacterium avium subspecies paratuberculosis infection and we provide WinBUGS code for all examples of Bayesian estimation.
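For the frequentist side, the classical adjustment of apparent to true prevalence (often attributed to Rogan and Gladen) is a one-line calculation; the sketch below truncates the estimate to [0, 1], which is exactly the kind of boundary problem the Bayesian formulation avoids. The counts and test characteristics are hypothetical.

```python
def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Frequentist point estimate of true prevalence adjusted for an imperfect
    test; the raw estimate can fall outside [0, 1], so it is truncated here."""
    tp = (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(tp, 0.0), 1.0)

# 12 test-positive animals out of 300 sampled, using a test assumed to be
# 85% sensitive and 98% specific:
print(rogan_gladen(12 / 300, sensitivity=0.85, specificity=0.98))   # ~0.024
```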
---
paper_title: The effects of species’ range sizes on the accuracy of distribution models: ecological phenomenon or statistical artefact?
paper_content:
Summary 1. Conservation scientists and resource managers increasingly employ empirical distribution models to aid decision-making. However, such models are not equally reliable for all species, and range size can affect their performance. We examined to what extent this effect reflects statistical artefacts arising from the influence of range size on the sample size and sampling prevalence (proportion of samples representing species presence) of data used to train and test models. 2. Our analyses used both simulated data and empirical distribution models for 32 bird species endemic to South Africa, Lesotho and Swaziland. Models were built with either logistic regression or non-linear discriminant analysis, and assessed with four measures of model accuracy: sensitivity, specificity, Cohen's kappa and the area under the curve (AUC) of receiver-operating characteristic (ROC) plots. Environmental indices derived from Fourier-processed satellite imagery served as predictors. 3. We first followed conventional modelling practice to illustrate how range size might influence model performance, when sampling prevalence reflects species' natural prevalences. We then demonstrated that this influence is primarily artefactual. Statistical artefacts can arise during model assessment, because Cohen's kappa responds systematically to changes in prevalence. AUC, in contrast, is largely unaffected, and thus a more reliable measure of model performance. Statistical artefacts also arise during model fitting. Both logistic regression and discriminant analysis are sensitive to the sample size and sampling prevalence of training data. Both perform best when sample size is large and prevalence intermediate. 4. Synthesis and applications. Species' ecological characteristics may influence the performance of distribution models. Statistical artefacts, however, can confound results in comparative studies seeking to identify these characteristics. To mitigate artefactual effects, we recommend careful reporting of sampling prevalence, AUC as the measure of accuracy, and fixed, intermediate levels of sampling prevalence in comparative studies.
---
paper_title: Land change in the Brazilian Savanna (Cerrado), 1986–2002: Comparative analysis and implications for land-use policy
paper_content:
The Brazilian Cerrado, a biodiverse savanna ecoregion covering ∼1.8 million km2 south and east of the Amazon rainforest, is in rapid decline because of the expansion of modern agriculture. Previous studies of Cerrado land-use and land-cover (LULC) change imply spatial homogeneity, report widely varying rates of land conversion, use ambiguous LULC categories, and generally do not attempt to validate results. This study addresses this gap in the literature by analyzing moderate-resolution, multi-spectral satellite remote sensing data from 1986 to 2002 in two regions with identical underlying drivers. Unsupervised classification by the ISODATA algorithm indicates that Cerrado was converted to agro-pastoral land covers in 31% (3646 km2) of the study region in western Bahia and 24% (3011 km2) of the eastern Mato Grosso study region, while nearly 40% (4688 km2 and 5217 km2, respectively) of each study region remained unchanged. Although aggregate land change is similar, large and contiguous fragments persist in western Bahia, while smaller fragments remain in eastern Mato Grosso. These findings are considered in the current context of Cerrado land-use policy, which is dominated by the conservation set-aside and command-control policy models. The spatial characteristics of Cerrado remnants create considerable obstacles to implement the models; an alternative approach, informed by countryside biogeography, may encourage collaboration between state officials and farmer-landowners toward conservation land-use policies.
---
paper_title: Bias and prevalence effects on kappa viewed in terms of sensitivity and specificity
paper_content:
Abstract Paradoxical effects of bias and prevalence on the kappa coefficient are examined using the concepts of sensitivity and specificity. Results that appear paradoxical when viewed as a 2 × 2 table of frequencies do not appear paradoxical when viewed as a pair of sensitivity and specificity measures where each observer is treated as a predictor of the other observer. An adjusted kappa value can be obtained from these sensitivity/specificity measures but simulation studies indicate that it would result in substantial overestimation of reliability when bias or prevalence effects are observed. It is suggested that investigators concentrate on obtaining populations with trait prevalence near 50% rather than searching for statistical indices to rescue or excuse inefficient experiments.
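The paradox can be made concrete by reporting kappa alongside the prevalence and bias indices of the 2×2 agreement table; the prevalence- and bias-adjusted kappa (PABAK = 2·po − 1) shown below is one widely cited adjustment, not necessarily the one evaluated in this paper, and the counts are invented.

```python
def kappa_diagnostics(a, b, c, d):
    """Kappa plus prevalence and bias indices for a 2x2 agreement table
    (a = both raters positive, d = both negative, b and c = disagreements)."""
    n = a + b + c + d
    po = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    prevalence_index = abs(a - d) / n
    bias_index = abs(b - c) / n
    pabak = 2 * po - 1    # prevalence- and bias-adjusted kappa
    return dict(kappa=kappa, prevalence_index=prevalence_index,
                bias_index=bias_index, pabak=pabak)

# High raw agreement (85%) but a very uneven trait distribution:
print(kappa_diagnostics(a=80, b=5, c=10, d=5))   # kappa ~0.32, PABAK 0.70
```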
---
paper_title: Effects of sample size on accuracy of species distribution models
paper_content:
Abstract Given increasing access to large amounts of biodiversity information, a powerful capability is that of modeling ecological niches and predicting geographic distributions. Because, sampling species’ distributions is costly, we explored sample size needs for accurate modeling for three predictive modeling methods via re-sampling of data for well-sampled species, and developed curves of model improvement with increasing sample size. In general, under a coarse surrogate model, and machine-learning methods, average success rate at predicting occurrence of a species at a location, or accuracy, was 90% of maximum within ten sample points, and was near maximal at 50 data points. However, a fine surrogate model and logistic regression model had significantly lower rates of increase in accuracy with increasing sample size, reaching similar maximum accuracy at 100 data points. The choice of environmental variables also produced unpredictable effects on accuracy over the range of sample sizes on the logistic regression method, while the machine-learning method had robust performance throughout. Examining correlates of model performance across species, extent of geographic distribution was the only significant ecological factor.
---
paper_title: A method to compare and improve land cover datasets: application to the GLC-2000 and MODIS land cover products
paper_content:
This paper presents a methodology for the comparison of different land cover datasets and illustrates how this can be extended to create a hybrid land cover product. The datasets used in this paper are the GLC-2000 and MODIS land cover products. The methodology addresses: 1) the harmonization of legend classes from different global land cover datasets and 2) the uncertainty associated with the classification of the images. The first part of the methodology involves mapping the spatial disagreement between the two land cover products using a combination of fuzzy logic and expert knowledge. Hotspots of disagreement between the land cover datasets are then identified to determine areas where other sources of data such as TM/ETM images or detailed regional and national maps can be used in the creation of a hybrid land cover dataset
---
paper_title: Land cover and global productivity: A measurement strategy for the NASA programme
paper_content:
NASA's Earth science programme is developing an improved understanding of terrestrial productivity and its relationship to global environmental change. Environmental change includes changes that are anthropogenic, caused for example by increasing population and resource use, as well as those that are natural, caused by interannual or decadal variability in climate and intrinsic vegetation dynamics. In response to current science and policy concerns, the Earth science programme has carbon and the major biogeochemical cycles as a primary focus but is broad enough to include related topics such as land-atmosphere interactions associated with the hydrological cycle and the chemical composition of the atmosphere. The research programme includes the study of ecosystems both as respondents to change and as mediators of feedback to the atmosphere. Underlying all the research elements are important questions of natural resources and sustainable land management. The land cover and land use change element of the pro...
---
paper_title: Intercalibration of vegetation indices from different sensor systems
paper_content:
Spectroradiometric measurements were made over a range of crop canopy densities, soil backgrounds and foliage colour. The reflected spectral radiances were convoluted with the spectral response functions of a range of satellite instruments to simulate their responses. When Normalised Difference Vegetation Indices (NDVI) from the different instruments were compared, they varied by a few percent, but the values were strongly linearly related, allowing vegetation indices from one instrument to be intercalibrated against another. A table of conversion coefficents is presented for AVHRR, ATSR-2, Landsat MSS, TM and ETM+, SPOT-2 and SPOT-4 HRV, IRS, IKONOS, SEAWIFS, MISR, MODIS, POLDER, Quickbird and MERIS (see Appendix A for glossary of acronyms). The same set of coefficients was found to apply, within the margin of error of the analysis, for the Soil Adjusted Vegetation Index SAVI. The relationships for SPOT vs. TM and for ATSR-2 vs. AVHRR were directly validated by comparison of atmospherically corrected image data. The results indicate that vegetation indices can be interconverted to a precision of 1–2%. This result offers improved opportunities for monitoring crops through the growing season and the prospects of better continuity of long-term monitoring of vegetation responses to environmental change.
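NDVI itself is computed identically for every sensor, so intercalibration reduces to a linear conversion between instruments. The sketch below uses the standard NDVI definition; the gain and offset values are placeholders for illustration, not the sensor-pair coefficients tabulated in the paper.

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index from near-infrared and red
    reflectance values."""
    return (nir - red) / (nir + red)

def intercalibrate(ndvi_source, gain, offset):
    """Linear conversion of an NDVI value from one sensor to the scale of
    another: NDVI_target = offset + gain * NDVI_source."""
    return offset + gain * ndvi_source

v = ndvi(nir=0.45, red=0.08)
# Placeholder gain/offset; the actual coefficients depend on the sensor pair.
print(round(v, 3), round(intercalibrate(v, gain=1.02, offset=-0.005), 3))
```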
---
paper_title: Satellite remote sensing of forest resources: three decades of research development
paper_content:
Three decades have passed since the launch of the first international satellite sensor programme designed for monitoring Earth’s resources. Over this period, forest resources have come under increasing pressure, thus their management and use should be underpinned by information on their properties at a number of levels. This paper provides a comprehensive review of how satellite remote sensing has been used in forest resource assessment since the launch of the first Earth resources satellite sensor (ERTS) in 1972. The use of remote sensing in forest resource assessment provides three levels of information; namely (1) the spatial extent of forest cover, which can be used to assess the spatial dynamics of forest cover; (2) forest type and (3) biophysical and biochemical properties of forests. The assessment of forest information over time enables the comprehensive monitoring of forest resources. This paper provides a comprehensive review of how satellite remote sensing has been used to date and, building on...
---
paper_title: Landsat continuity : Issues and opportunities for land cover monitoring
paper_content:
Initiated in 1972, the Landsat program has provided a continuous record of earth observation for 35 years. The assemblage of Landsat spatial, spectral, and temporal resolutions, over a reasonably sized image extent, results in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is absolutely unique and indispensable for monitoring, management, and scientific activities. Recent technical problems with the two existing Landsat satellites, and delays in the development and launch of a successor, increase the likelihood that a gap in Landsat continuity may occur. In this communication, we identify the key features of the Landsat program that have resulted in the extensive use of Landsat data for large area land cover mapping and monitoring. We then augment this list of key features by examining the data needs of existing large area land cover monitoring programs. Subsequently, we use this list as a basis for reviewing the current constellation of earth observation satellites to identify potential alternative data sources for large area land cover applications. Notions of a virtual constellation of satellites to meet large area land cover mapping and monitoring needs are also presented. Finally, research priorities that would facilitate the integration of these alternative data sources into existing large area land cover monitoring programs are identified. Continuity of the Landsat program and the measurements provided are critical for scientific, environmental, economic, and social purposes. It is difficult to overstate the importance of Landsat; there are no other systems in orbit, or planned for launch in the short-term, that can duplicate or approach replication, of the measurements and information conferred by Landsat. While technical and political options are being pursued, there is no satellite image data stream poised to enter the National Satellite Land Remote Sensing Data Archive should system failures occur to Landsat-5 and -7.
---
paper_title: Modelling coral reef habitat trajectories: Evaluation of an integrated timed automata and remote sensing approach
paper_content:
The rapid degradation of many reefs worldwide calls for more effective monitoring and predictions of the trajectories of coral reef habitats as they cross cycles of disturbance and recovery. Current approaches include in situ monitoring, computer modelling, and remote sensing observations. We aimed to combine these three sources of information for Aboré
---
paper_title: Analysis of remotely sensed data: the formative decades and the future
paper_content:
Developments in the field of image understanding in remote sensing over the past four decades are reviewed, with an emphasis, initially, on the contributions of David Landgrebe and his colleagues at the Laboratory for Applications of Remote Sensing, Purdue University. The differences in approach required for multispectral, hyperspectral and radar image data are emphasised, culminating with a commentary on methods commonly adopted for multisource image analysis. The treatment concludes by examining the requirements of an operational multisource thematic mapping process, in which it is suggested that the most practical approach is to analyze each data type separately, by techniques optimized to that data's characteristics, and then to fuse at the label level.
---
paper_title: Remote sensing in the coming decade: the vision and the reality
paper_content:
Investment in understanding the Earth pays off twice. It enables pursuit of scientific questions that rank among the most interesting and profound of our time. It also serves society's practical need for increased prosperity and security. Over the last half-century, we have built a sophisticated network of satellites, aircraft, and ground-based remote sensing systems to provide the raw information from which we derive Earth knowledge. This network has served us well in the development of science and the provision of operational services. In the next decade, the demand for such information will grow dramatically. New remote sensing capabilities will emerge. Rapid evolution of Internet geospatial and location-based services will make communication and sharing of Earth knowledge much easier. Governments, businesses, and consumers will all benefit. But this exciting future is threatened from many directions. Risks range from technology and market uncertainties in the private sector to budget cuts and project setbacks in the public sector. The coming decade will see a dramatic confrontation between the vision of what needs to be accomplished in Earth remote sensing and the reality of our resources and commitment. The outcome will have long-term implications for both the remote sensing community and society as a whole.
---
paper_title: Multiple Methods in the Study of Driving Forces of Land Use and Land Cover Change: A Case Study of SE Kajiado District, Kenya
paper_content:
This landscape-scale study combines analysis of multitemporal satellite imagery spanning 30 years and information from field studies extending over 25 years to assess the extent and causes of land use and land cover change in the Loitokitok area, southeast Kajiado District, Kenya. Rain fed and irrigated agriculture, livestock herding, and wildlife and tourism have all experienced rapid change in their structure, extent, and interactions over the past 30 years in response to a variety of economic, cultural, political, institutional, and demographic processes. Land use patterns and processes are explored through a complementary application of interpretation of satellite imagery and case study analysis that explicitly addresses the local–national spatial scale over a time frame appropriate to the identification of fundamental causal processes. The results illustrate that this combination provides an effective basis for describing and explaining patterns of land use and land cover change and their root causes.
---
paper_title: Landsat's Role in Ecological Applications of Remote Sensing
paper_content:
Remote sensing, geographic information systems, and modeling have combined to produce a virtual explosion of growth in ecological investigations and applications that are explicitly spatial and temporal. Of all remotely sensed data, those acquired by Landsat sensors have played the most pivotal role in spatial and temporal scaling. Modern terrestrial ecology relies on remote sensing for modeling biogeochemical cycles and for characterizing land cover, vegetation biophysical attributes, forest structure, and fragmentation in relation to biodiversity. Given the more than 30-year record of Landsat data, mapping land and vegetation cover change and using the derived surfaces in ecological models is becoming commonplace. In this article, we summarize this large body of work, highlighting the unique role of Landsat.
---
| Title: An overview of recent remote sensing and GIS based research in ecological informatics
Section 1: Background
Description 1: Provide an introduction to geotechnology and its significance in the 21st century, emphasizing recent developments in GIS and remote sensing and their impact on ecological research.
Section 2: Remote sensing: overcoming limitations in spatial resolution
Description 2: Discuss the advancements in remote sensing technology, particularly in relation to overcoming spatial resolution limitations, and the challenges and solutions associated with mixed pixels and image classification.
Section 3: Remote sensing: targeted mapping
Description 3: Explain the use of remote sensing for tailored mapping of specific environmental features or species, and discuss the benefits and challenges of using airborne and spaceborne lidar systems.
Section 4: Remote sensing: capturing the temporal dimension
Description 4: Describe the importance of temporal data in remote sensing for ecological studies, particularly for monitoring vegetative phenological events and understanding climate change impacts.
Section 5: GIS: the internet revolution
Description 5: Explore the recent developments in WebGIS, mobile GIS, and sensor webs/networks, highlighting the benefits of internet technology for geographic data processing and sharing.
Section 6: Free and Open Source GIS Software and data
Description 6: Discuss the rise of Free and Open Source GIS software, including examples of projects, their benefits for ecological informatics, and the growing adoption within the GIS community.
Section 7: Geovisualization
Description 7: Outline the advancements in geovisualization technologies, their applications in ecological research, and the integration of approaches from various domains to enhance spatial data analysis.
Section 8: Species distribution modelling
Description 8: Provide an overview of the use of GIS in species distribution modelling, highlighting recent research trends, tools, and the impact of environmental changes on species distributions.
Section 9: Accuracy and comparison
Description 9: Discuss the importance of accuracy in ecological data and modelling, the challenges of data quality and quantity, and the methods used for accuracy assessment and validation.
Section 10: Prospects
Description 10: Summarize the future prospects of remote sensing and GIS in ecological informatics, considering potential advancements in sensor technology, data continuity, and the integration of multidisciplinary data. |
A Review of Standards and Statistics Used to Describe Blood Glucose Monitor Performance | 17 | ---
paper_title: Criteria for Judging Precision and Accuracy in Method Development and Evaluation
paper_content:
We describe an approach for formulating criteria that can be used to judge whether an analytical method has acceptable precision and accuracy. We derive criteria for several experiments that are commonly used in method-evaluation studies: precision or replicates, recovery, interference, and comparison of patient values between the new method and a proven method. These criteria are based on the medical usefulness of the test results, thus the acceptability of the method is judged with respect to the clinical requirements.
---
paper_title: Recommendation to treat continuous variable errors like attribute errors
paper_content:
Clinical laboratory errors can be considered as either belonging to attribute or continuous variables. Attribute errors are usually considered to be pre- or post-analytical errors, whereas continuous variable errors are analytical. Goals for each error type are different. Error goals for continuous variables are often specified as limits that contain 95% of the results, whereas attribute error goals are specified as allowed error rates for serious events. This leads to a discrepancy, because for a million results, there can be up to 50,000 medically unacceptable analytical errors, but allowable pre- and post-analytical error rates are much lower than 5%. Steps to remedy this are to classify analytical error rates into severity categories, exemplified by existing glucose error grids. The results in each error grid zone are then counted, as has been recommended by the Food and Drug Administration (FDA). This in effect transforms the continuous variable errors into attribute errors. This is an improvement over current practices for analytical errors, whereby the use of uncertainty intervals is recommended that include only 95% of the results (i.e., leaves out the worst 5%), and it is precisely this 5% of results that are likely to be in the most severe zones of an error grid.
---
paper_title: Instruments for Self-Monitoring of Blood Glucose: Comparisons of Testing Quality Achieved by Patients and a Technician
paper_content:
Background: Instruments for self-monitoring of blood glucose (SMBG) are increasingly used by patients with diabetes. The analytical quality of meters in routine use is poorly characterized. Methods: We compared SMBG performance achieved by patients and by a medical laboratory technician. Imprecision was calculated from duplicate measurements, and deviation as the difference between the first measurement and the mean of duplicate laboratory-method results (calibrated with NIST material). Analytical quality for five groups of SMBG instruments was compared with quality specifications for BG measurements. All participants completed a questionnaire assessing both SMBG training and use of the meters. Results: We recruited 159 SMBG users from a hospital outpatient clinic and 263 others from 65 randomly selected general practices (total of 422). Most (two thirds) used insulin. CVs for the five meter types were 7%, 11%, 18%, 18%, and 20% in the hands of patients and 2.5–5.9% for the technician. For three of five meter types, patients’ BG measurements had larger deviations from the laboratory results than did the technician’s results. The technician’s performance could not predict the patients’. No instrument when used by patients (but two operated by the technician) met published quality specifications. The analytical quality of patients’ results was not related to whether they had chosen the instruments on advice from healthcare personnel (one-third of patients), were only self-educated in SMBG (50%), or performed SMBG fewer than seven times/week (62%). Conclusions: The analytical quality of SMBG among patients was poorer than, and could not be predicted from, the performance of the meters in the hands of a technician. We suggest that new instruments be tested in the hands of patients who are trained on meter use in a routine way.
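Imprecision from duplicate measurements is conventionally estimated from the within-pair differences (SD = sqrt(Σd²/2n)); the sketch below uses that standard formula with hypothetical glucose readings and may differ in detail from the exact calculation used in the study.

```python
import math

def imprecision_from_duplicates(pairs):
    """Within-pair SD and CV from duplicate measurements:
    SD = sqrt(sum(d_i^2) / (2 * n)), CV expressed against the overall mean."""
    n = len(pairs)
    sum_d2 = sum((x1 - x2) ** 2 for x1, x2 in pairs)
    sd = math.sqrt(sum_d2 / (2 * n))
    mean = sum(x1 + x2 for x1, x2 in pairs) / (2 * n)
    return sd, 100 * sd / mean

# Hypothetical duplicate glucose readings (mmol/L) from five patients:
duplicates = [(5.2, 5.6), (7.1, 6.8), (10.4, 9.9), (4.8, 5.1), (12.0, 12.6)]
sd, cv = imprecision_from_duplicates(duplicates)
print(f"SD = {sd:.2f} mmol/L, CV = {cv:.1f}%")
```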
---
paper_title: Assuring the accuracy of home glucose monitoring.
paper_content:
BACKGROUND: An estimated 2.5 million diabetic patients in the United States practice self-monitoring of blood glucose (SMBG). The validity of the glucose values they obtain is in doubt. An American Diabetes Association consensus panel reported that up to 50% of SMBG determinations might vary more than 20% from their true value. Accurate glucose values are an integral part of intensive treatment and reduction of long-term complications. The objective of this study was to determine the technical skill and accuracy of SMBG in an outpatient population. METHODS: This study was conducted in two family practice residency sites where 111 patients with type 1 and type 2 adult diabetes were observed testing their blood glucose values on their own glucose monitors. Patient-measured glucose levels were immediately compared with a laboratory value obtained from a calibrated hand-held glucose monitor. RESULTS: Fifty-three percent of patient glucose values were within 10% of the control value, 84% were within 20% of the control value, and 16% varied 20% or more from the control value. Two patients had dangerously inaccurate glucose determinations. Four glucose monitors required replacement. The patients were observed using a 13-point checklist of critical steps in calibration and operation of their glucose monitor. Only 1 patient made no errors in testing. CONCLUSIONS: Despite multiple technical errors when using SMBG, most patients obtained clinically useful values. This project can be easily introduced into a medical office.
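The accuracy statistics quoted above (percentages of readings within 10% or 20% of the reference value) are simple to compute from paired data; an illustrative sketch with hypothetical readings follows.

```python
def percent_within(meter, reference, tolerance):
    """Percentage of meter readings whose relative deviation from the paired
    reference value is within the given tolerance (e.g. 0.10 or 0.20)."""
    within = sum(abs(m - r) / r <= tolerance for m, r in zip(meter, reference))
    return 100 * within / len(meter)

# Hypothetical paired readings (mg/dL): patient meter vs. laboratory reference.
meter = [102, 180, 65, 240, 95, 150]
ref = [98, 160, 75, 250, 96, 170]
print(percent_within(meter, ref, 0.10), percent_within(meter, ref, 0.20))
```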
---
paper_title: A new consensus error grid to evaluate the clinical significance of inaccuracies in the measurement of blood glucose
paper_content:
OBJECTIVE: The objectives of this study were 1) to construct new error grids (EGs) for blood glucose (BG) self-monitoring by using the expertise of a large panel of clinicians and 2) to use the new EGs to evaluate the accuracy of BG measurements made by patients. RESEARCH DESIGN AND METHODS: To construct new EGs for type 1 and type 2 diabetic patients, a total of 100 experts of diabetes were asked to assign any error in BG measurement to 1 of 5 risk categories. We used these EGs to evaluate the accuracy of self-monitoring of blood glucose (SMBG) levels in 152 diabetic patients. The SMBG data were used to compare the new type 1 diabetes EG with a traditional EG. RESULTS: Both the type 1 and type 2 diabetes EGs divide the risk plane into 8 concentric zones with no discontinuities. The new EGs are similar to each other, but they differ from the traditional EG in several significant ways. When used to evaluate a data set of measurements made by a sample of patients experienced in SMBG, the new type 1 diabetes EG rated 98.6% of their measurements as clinically acceptable, compared with 95% for the traditional EG. CONCLUSIONS: The consensus EGs furnish a new tool for evaluating errors in the measurement of BG for patients with type 1 and type 2 diabetes.
---
paper_title: Toward metrological traceability in the determination of prostate-specific antigen (PSA): calibrating Beckman Coulter Hybritech Access PSA assays to WHO standards compared with the traditional Hybritech standards
paper_content:
Background: The metrological traceability of prostate-specific antigen (PSA) assay calibration to WHO standards is desirable to potentially improve the comparability between PSA assays. A method comparison was performed between the traditionally standardized Beckman Coulter Hybritech Access PSA and free PSA (fPSA) assays and a new alternate calibration of assays aligned to the WHO standards 96/670 and 96/668, respectively. Methods: Sera from 641 men with and without prostate cancer, various control materials and mixtures of different proportions of the WHO standards were measured with both assay calibrations. Results: Excellent comparability between the corresponding assay calibrations was observed, with correlation coefficients of at least 0.996. The Passing-Bablok slopes were 0.747 for total PSA (tPSA), 0.776 for fPSA and 1.02 for the percentage ratio of fPSA to tPSA (%fPSA), while the corresponding percentages of the new WHO-aligned assay results related to the traditional assays were 76.2%, 77% and 102.2%. Receiver operating characteristics revealed no differences between the two PSA assay calibrations. Conclusions: The WHO calibration yields results approximately 25% lower for tPSA and fPSA values when compared with the conventional Hybritech calibration. Using the WHO-aligned PSA assay, a tPSA cut-off of 3 μg/L should be considered in clinical practice, while %fPSA cut-offs could be retained.
---
paper_title: Multi-factor designs. IV. How multi-factor designs improve the estimate of total error by accounting for protocol-specific biases
paper_content:
Total error is often calculated as a combination of random error and fixed bias. However, the specific protocols used to estimate random error and fixed bias are themselves variable factors that can affect the estimate of total error. We refer to biases such as assay drift, sample-to-sample carryover, and reagent carryover as examples of fixed biases that are protocol-specific and distinguish them from other fixed biases. Failing to account for protocol-specific biases that are present will lead to incorrect estimates of total error when routine use of the assay involves a protocol different from that used to estimate total error. Multi-factor protocols are recommended to determine protocol-specific biases, which, if present, should be included in the estimate of total error.
---
paper_title: Estimating total analytical error and its sources. Techniques to improve method evaluation.
paper_content:
The process of method evaluation starts with identifying goals either to demonstrate the clinical validity of an assay or to identify assay error sources that require improvement. Taguchi's idea of continual quality improvement vs the notion of meeting or failing specification has been applied to clinical chemistry. In this article, I propose a model of assay performance that includes the terms random interferences and protocol-specific biases (a series of systematic errors). I explain these terms, as well as the consequences of failing to consider them. To validate an assay clinically, I recommend direct estimation of total analytical error from a method comparison. To identify assay error sources that require improvement, I recommend a multifactor protocol (in addition to a method comparison). Individual error sources are related to total analytical error with the use of an error propagation technique. Much of the proposed data analysis techniques are straightforward but not routinely practiced. I demonstrate principles with the use of a cholesterol assay.
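Two of the estimation routes discussed above can be sketched briefly: the familiar additive model that combines bias and imprecision, and a direct, distribution-free estimate taken from method-comparison differences. The multipliers and the 95th-percentile choice below are common conventions assumed for illustration, not values prescribed by the paper.

```python
def total_error_model(bias, sd, z=1.96):
    """Additive total-error point estimate: |bias| + z * SD."""
    return abs(bias) + z * sd

def total_error_direct(differences, pct=95.0):
    """Direct estimate from a method comparison: a chosen percentile of the
    absolute field-minus-reference differences (distribution-free)."""
    d = sorted(abs(x) for x in differences)
    k = min(len(d) - 1, max(0, int(round(pct / 100.0 * len(d))) - 1))
    return d[k]
```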
---
paper_title: Glucose: A Simple Molecule That Is Not Simple to Quantify
paper_content:
Small increments in blood glucose substantially increase the risk of developing diabetes mellitus; but preanalytical and analytical variables, such as the absence of harmonization for glucose assays, make it difficult to correctly apply these epidemiological insights to individual patients. Harmonization can be improved if 3 variables are addressed: changing proficiency test grading from consensus based to accuracy based, effectively controlling glycolysis, and taking into account the time of day blood was collected. ::: ::: The continuous and graded quantitative relationship of fasting glucose measurements to the risk of developing diabetes was well documented recently by Tirosh et al.(1). They found an increased risk of type 2 diabetes across quintiles of fasting plasma glucose (FPG) concentrations within the newly defined reference range, <5.55 mmol/L (<100 mg/dL). For example, a person with an FPG between 4.83 and 5.00 mmol/L (87 and 90 mg/dL) has an age-adjusted risk of developing diabetes that is 1.81 times that of a person with an FPG <4.55 mmol/L (82 mg/dL; 95% CI 1.16–2.83). Thus, a difference as small as 0.28 mmol/L (5 mg/dL) nearly doubles the risk. Higher concentrations of FPG …
---
paper_title: Performance of Four Homogeneous Direct Methods for LDL-Cholesterol
paper_content:
Background: Homogeneous LDL-cholesterol methods from Genzyme, Reference Diagnostics, Roche, and Sigma were evaluated for precision, accuracy, and specificity for LDL in the presence of abnormal lipoproteins. ::: ::: Methods: Each homogeneous method was performed by a Roche/Hitachi 911 according to the vendors’ instructions, and the results were compared with the β-quantification reference method. We measured precision over 20 days using quality-control and frozen serum specimens. Sera from 100 study participants, including 60 with hyperlipidemias, were assayed by each method. Accuracy was evaluated from regression and total error analysis. Specificity was evaluated from the bias (as a percentage) vs concentration of triglycerides. ::: ::: Results: The total CV was low for each method; the percentages of results with >12% bias were as follows: Genzyme, 8.0%; Reference Diagnostics, 11.0%; Roche, 10.0%; and Sigma, 30.0%. Total error calculated from mean systematic bias and all-sources random bias was as follows: Genzyme, 12.6%; Reference Diagnostics, 16.5%; Roche, 41.6%; and Sigma, 38.3%. Slopes of bias (as a percentage) vs triglycerides were significant (P <0.001) for all methods except the Roche method, for which P = 0.094. ::: ::: Conclusions: The evaluated methods show nonspecificity toward abnormal lipoproteins, thus compromising their ability to satisfy the National Cholesterol Education Program goal for a total error of <12%. These homogeneous LDL-cholesterol results do not improve on the performance of LDL-cholesterol calculated by the Friedewald equation at triglyceride concentrations <4000 mg/L.
---
paper_title: How to Improve Total Error Modeling by Accounting for Error Sources Beyond Imprecision and Bias
paper_content:
Boyd and Bruns (1) have used Monte Carlo simulations to assess glucose meter specifications. This letter suggests that their modeling methods do not account for all possible error types and thus their conclusions may not follow. A more realistic modeling method is reviewed as well as an alternative to modeling. ::: ::: The error simulation method chosen for glucose by Boyd and Bruns (1) was also used in principle to generate the National Cholesterol Education Program goals for cholesterol analytical performance (2). Boyd and Bruns (1) generate glucose error by adding various levels of assay imprecision to various levels of assay bias. This method is intuitively appealing as a way of simulating total analytical error. Although the method is a simulation, it is helpful to consider how the data relate to an actual experiment. Thus, if one measures glucose in a set of patient specimens in both a field and a reference method, one can imagine obtaining a bias as …
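A minimal version of the bias-plus-imprecision simulation being debated above can be written in a few lines; the letter's point is precisely that such a model omits other error sources, so this sketch shows the baseline being criticised. The error limit, sample size and parameter values are illustrative assumptions, not those used by Boyd and Bruns.

```python
import random

def fraction_exceeding(true_glucose, bias_pct, cv_pct,
                       limit_pct=10.0, n=100_000, seed=1):
    """Simulate meter results as the true value shifted by a fixed percentage
    bias with Gaussian imprecision (CV), and return the fraction of results
    deviating from the true value by more than limit_pct percent."""
    rng = random.Random(seed)
    mean = true_glucose * (1.0 + bias_pct / 100.0)
    sd = mean * cv_pct / 100.0
    limit = true_glucose * limit_pct / 100.0
    misses = sum(abs(rng.gauss(mean, sd) - true_glucose) > limit
                 for _ in range(n))
    return misses / n
```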
---
paper_title: Guidelines and recommendations for laboratory analysis in the diagnosis and management of diabetes mellitus.
paper_content:
BACKGROUND ::: Multiple laboratory tests are used in the diagnosis and management of patients with diabetes mellitus. The quality of the scientific evidence supporting the use of these assays varies substantially. ::: ::: ::: APPROACH ::: An expert committee drafted evidence-based recommendations for the use of laboratory analysis in patients with diabetes. An external panel of experts reviewed a draft of the guidelines, which were modified in response to the reviewers' suggestions. A revised draft was posted on the Internet and was presented at the AACC Annual Meeting in July, 2000. The recommendations were modified again in response to oral and written comments. The guidelines were reviewed by the Professional Practice Committee of the American Diabetes Association. ::: ::: ::: CONTENT ::: Measurement of plasma glucose remains the sole diagnostic criterion for diabetes. Monitoring of glycemic control is performed by the patients, who measure their own plasma or blood glucose with meters, and by laboratory analysis of glycated hemoglobin. The potential roles of noninvasive glucose monitoring, genetic testing, autoantibodies, microalbumin, proinsulin, C-peptide, and other analytes are addressed. ::: ::: ::: SUMMARY ::: The guidelines provide specific recommendations based on published data or derived from expert consensus. Several analytes are of minimal clinical value at the present time, and measurement of them is not recommended.
---
paper_title: Statistical Comparison of Multiple Analytic Procedures: Application to Clinical Chemistry
paper_content:
The basic sciences all require an ability to measure the amounts of substances under study. With new methods of measurement constantly being proposed there is a need for techniques for comparing these methods in terms of their precision and accuracy. Of particular interest is the case in which none of the individual methods are known to measure “truth”. A multiple methods comparison technique for this case is proposed in this paper, and is illustrated by an example from the field of clinical chemistry. Estimates of the components of variance for each method are developed, and some of their properties explored.
---
paper_title: Performance of Four Homogeneous Direct Methods for LDL-Cholesterol
paper_content:
Background: Homogeneous LDL-cholesterol methods from Genzyme, Reference Diagnostics, Roche, and Sigma were evaluated for precision, accuracy, and specificity for LDL in the presence of abnormal lipoproteins. ::: ::: Methods: Each homogeneous method was performed by a Roche/Hitachi 911 according to the vendors’ instructions, and the results were compared with the β-quantification reference method. We measured precision over 20 days using quality-control and frozen serum specimens. Sera from 100 study participants, including 60 with hyperlipidemias, were assayed by each method. Accuracy was evaluated from regression and total error analysis. Specificity was evaluated from the bias (as a percentage) vs concentration of triglycerides. ::: ::: Results: The total CV was low for each method; the percentages of results with >12% bias were as follows: Genzyme, 8.0%; Reference Diagnostics, 11.0%; Roche, 10.0%; and Sigma, 30.0%. Total error calculated from mean systematic bias and all-sources random bias was as follows: Genzyme, 12.6%; Reference Diagnostics, 16.5%; Roche, 41.6%; and Sigma, 38.3%. Slopes of bias (as a percentage) vs triglycerides were significant (P <0.001) for all methods except the Roche method, for which P = 0.094. ::: ::: Conclusions: The evaluated methods show nonspecificity toward abnormal lipoproteins, thus compromising their ability to satisfy the National Cholesterol Education Program goal for a total error of <12%. These homogeneous LDL-cholesterol results do not improve on the performance of LDL-cholesterol calculated by the Friedewald equation at triglyceride concentrations <4000 mg/L.
---
paper_title: Laboratory process specifications for assuring quality in the U.S. National Cholesterol Education Program
paper_content:
We have assessed the laboratory specifications necessary for ensuring that cholesterol testing processes satisfy the quality required by the U.S. National Cholesterol Education Program (NCEP). A model for setting process specifications has been developed to relate the NCEP guidelines for medical interpretation of a cholesterol test to the pre-analytical and analytical variables that can affect a test result. Using this model, we derived specifications for the imprecision (coefficient of variation, CV, or standard deviation, s) and inaccuracy (bias) that are allowable under stable operation, as well as the quality-control procedures (control rules and number of control measurements) that are necessary to detect unstable operation. The NCEP goals of an allowable CV less than or equal to 3% and an allowable bias no greater than +/- 3% are inadequate for assuring the quality of an individual or single cholesterol test when monitoring performance with many of the statistical quality-control procedures currently used in the U.S. With quality-control procedures having two control measurements per run, a CV of 3% is allowable only when bias is zero; a CV less than or equal to 2% is necessary if bias is +/- 3%. With quality-control procedures having four control measurements per run, a CV of 3% is allowable when bias is +/- 1.5%; a CV less than or equal to 2.5% is required if bias is as large as +/- 3%. For two serial tests, the NCEP 3% goals are adequate for current quality-control procedures having four control measurements per run.
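The trade-off described above, in which a smaller bias leaves room for a larger allowable imprecision, follows from an additive error budget; a simple sketch is given below. The z multiplier and the purely additive form are simplifying assumptions and do not reproduce the paper's derivation, which depends on the specific QC rules and numbers of control measurements.

```python
def allowable_cv(total_error_goal_pct, bias_pct, z=1.65):
    """Imprecision budget left after part of the total allowable error has
    been spent on bias: CV_allow = (TEa - |bias|) / z, floored at zero."""
    return max(0.0, (total_error_goal_pct - abs(bias_pct)) / z)

# e.g. allowable_cv(9.0, 0.0) > allowable_cv(9.0, 3.0): less bias leaves
# more room for imprecision, mirroring the trade-off described above.
```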
---
paper_title: Estimating total analytical error and its sources. Techniques to improve method evaluation.
paper_content:
The process of method evaluation starts with identifying goals either to demonstrate the clinical validity of an assay or to identify assay error sources that require improvement. Taguchi's idea of continual quality improvement vs the notion of meeting or failing specification has been applied to clinical chemistry. In this article, I propose a model of assay performance that includes the terms random interferences and protocol-specific biases (a series of systematic errors). I explain these terms, as well as the consequences of failing to consider them. To validate an assay clinically, I recommend direct estimation of total analytical error from a method comparison. To identify assay error sources that require improvement, I recommend a multifactor protocol (in addition to a method comparison). Individual error sources are related to total analytical error with the use of an error propagation technique. Much of the proposed data analysis techniques are straightforward but not routinely practiced. I demonstrate principles with the use of a cholesterol assay.
---
paper_title: STATISTICAL METHODS FOR ASSESSING AGREEMENT BETWEEN TWO METHODS OF CLINICAL MEASUREMENT
paper_content:
In clinical measurement comparison of a new measurement technique with an established one is often needed to see whether they agree sufficiently for the new to replace the old. Such investigations are often analysed inappropriately, notably by using correlation coefficients. The use of correlation is misleading. An alternative approach, based on graphical techniques and simple calculations, is described, together with the relation between this analysis and the assessment of repeatability.
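The calculation behind the approach described above is short enough to show directly; the 1.96 multiplier for the 95% limits of agreement is the usual convention.

```python
from statistics import mean, stdev

def bland_altman_limits(method_a, method_b):
    """Mean difference (bias) and 95% limits of agreement for paired
    measurements from two methods: mean difference +/- 1.96 SD of the
    differences. The values are normally shown on a difference-vs-mean plot."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    d_bar = mean(diffs)
    sd = stdev(diffs)
    return d_bar, d_bar - 1.96 * sd, d_bar + 1.96 * sd
```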
---
paper_title: A Simple, Graphical Method to Evaluate Laboratory Assays
paper_content:
Evaluation methods of laboratory assays often fail to predict the large, infrequent errors that are a major source of clinician complaints. We present a simple, graphical method to evaluate laboratory assays, which focuses on detecting large, infrequent errors. Our method, the folded empirical cumulative distribution plot or, more simply, mountain plot, is prepared by computing a percentile for each ranked difference between the new and reference method. To get a folded plot, one performs the following subtraction for all percentiles over 50: percentile = 100 - percentile. Percentiles (y axis) are then plotted against differences or percent differences (x axis). The calculations and plots are simple enough to perform in a spreadsheet. We also offer Windows-based software to perform all calculations and plots. The mountain plot, compared to the difference plot, focuses attention on two features of the data: the center and the tails. We prefer the mountain plot over other graphical techniques because: 1. It is easier to find the central 95% of the data. 2. It is easier to estimate percentiles for large differences (e.g., percentiles greater than 95%). 3. Unlike a histogram, the plot shape is not a function of the intervals. 4. Comparing different distributions is easier. 5. The plot is easier to interpret than a standard empirical cumulative distribution plot. Difference and mountain plots each provide complementary perspectives on the data. We recommend both plots. This method can also be used with data from a wide variety of other applications, such as clinical trials and quality control.
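Following the recipe in the abstract, the folded percentiles can be computed in a few lines; the exact percentile formula (rank/n here) is an assumption, since several conventions exist.

```python
def mountain_plot_points(differences):
    """Return (difference, folded percentile) pairs: rank the differences,
    convert ranks to percentiles, and fold percentiles above 50 as
    100 - percentile, so the plot peaks near the median difference."""
    d = sorted(differences)
    n = len(d)
    pts = []
    for rank, x in enumerate(d, start=1):
        pct = 100.0 * rank / n
        pts.append((x, 100.0 - pct if pct > 50.0 else pct))
    return pts  # plot x = difference (or % difference), y = folded percentile
```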
---
paper_title: Misuse of correlation and regression in three medical journals
paper_content:
Errors relating to the use of the correlation coefficient and bivariate linear regression are often to be found in medical publications. This paper reports a literature search to define the problems. All the papers and letters published in the British Medical Journal, The Lancet and the New England Journal of Medicine during 1997 were screened for examples. Fifteen categories of errors were identified of which eight were important or common. These included: failure to define clearly the relevant sample number; the display of potentially misleading scatterplots; attachment of unwarranted importance to significance levels; and the omission of confidence intervals for correlation coefficients and around regression lines.
---
paper_title: Evaluation of point-of-care glucose testing accuracy using locally-smoothed median absolute difference curves.
paper_content:
BACKGROUND ::: We introduce locally-smoothed (LS) median absolute difference (MAD) curves for the evaluation of hospital point-of-care (POC) glucose testing accuracy. ::: ::: ::: METHODS ::: Arterial blood samples (613) were obtained from a university hospital blood gas laboratory. Four hospital glucose meter systems (GMS) were tested against the YSI 2300 glucose analyzer for paired reference observations. We made statistical comparisons using conventional methods (e.g., linear regression, mean absolute differences). ::: ::: ::: RESULTS ::: Difference plots with superimposed ISO 15197 tolerance bands showed bias, scatter, heteroscedasticity, and erroneous results well. LS MAD curves readily revealed GMS accuracy patterns. Performance in hypoglycemic and hyperglycemic ranges erratically exceeded the recommended LS MAD error tolerance limit (5 mg/dl). Some systems showed acceptable (within LS MAD tolerance) or nearly acceptable performance in and around a tight glycemic control (TGC) interval of 80-110 mg/dl. Performance patterns varied in this interval, creating potential for discrepant therapeutic decisions. ::: ::: ::: CONCLUSIONS ::: Erroneous results demonstrated by ISO 15197-difference plots must be carefully considered. LS MAD curves draw on the unique human ability to recognize patterns quickly and discriminate accuracy visually. Performance standards should incorporate LS MAD curves and the recommended error tolerance limit of 5 mg/dl for hospital bedside glucose testing. Each GMS must be considered individually when assessing overall performance for therapeutic decision making in TGC.
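A bare-bones version of an LS MAD curve is sketched below: slide a window along the reference glucose axis and take the median absolute meter-minus-reference difference in each window. The window width, step size and lack of further smoothing are illustrative assumptions and simpler than the published procedure.

```python
def ls_mad_curve(reference, meter, window=20.0, step=5.0):
    """Locally smoothed median absolute difference versus reference glucose
    (all values in mg/dl). Returns (window centre, median |difference|) pairs."""
    pairs = sorted(zip(reference, meter))
    lo, hi = pairs[0][0], pairs[-1][0]
    curve = []
    centre = lo
    while centre <= hi:
        diffs = sorted(abs(m - r) for r, m in pairs
                       if abs(r - centre) <= window / 2)
        if diffs:
            n = len(diffs)
            med = diffs[n // 2] if n % 2 else 0.5 * (diffs[n // 2 - 1] + diffs[n // 2])
            curve.append((centre, med))
        centre += step
    return curve
```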
---
| Title: A Review of Standards and Statistics Used to Describe Blood Glucose Monitor Performance
Section 1: Introduction
Description 1: Introduce the importance of blood glucose testing in diabetes management and provide an overview of the different types and sources of errors associated with glucose measurements.
Section 2: Published Glucose Standards
Description 2: Detail the different standards published regarding glucose monitoring, including discussions on ISO limits and the implications of these standards.
Section 3: Standards That Specify 100% of Data
Description 3: Discuss standards from organizations like ADA that specify stringent error limits for glucose measurements and analyze the implications of these standards.
Section 4: Error Grid Details
Description 4: Explore the Clarke and Parkes error grids and other grids used for assessing glucose monitor accuracy, including their distinctions and applications.
Section 5: Analytical Error Sources
Description 5: Describe different sources of analytical error, including imprecision, patient interferences, and biases, and their contributions to total error.
Section 6: Methods Used to Estimate Total Error
Description 6: Review various methods for estimating total error in glucose measurements, including modeling, parametric, and nonparametric approaches.
Section 7: Westgard
Description 7: Describe the Westgard model for assessing glucose measurement errors and discuss its use and limitations.
Section 8: Boyd and Bruns
Description 8: Review the proposals by Boyd and Bruns for glucose requirements and the criticisms of their model.
Section 9: Lawton
Description 9: Explain the Lawton model for accounting for non-specificity and interferences in total error estimation.
Section 10: Bland-Altman
Description 10: Detail the Bland-Altman approach to comparing glucose measurement methods and its usability.
Section 11: Mountain Plots
Description 11: Describe mountain plots and their use in analyzing glucose measurement differences.
Section 12: Error Grids (Revisited)
Description 12: Revisit error grids as a method for estimating total error, emphasizing their practical applications and limitations.
Section 13: Methods That Estimate Total Error Components
Description 13: Discuss Clinical and Laboratory Standards Institute protocols developed for various analytical performance parameters related to glucose.
Section 14: Correlation Coefficient
Description 14: Explain the use and limitations of the correlation coefficient in the context of glucose measurement accuracy.
Section 15: Locally Smoothed Median Absolute Differences (LS MAD)
Description 15: Describe LS MAD curves and their application in evaluating performance over different glucose ranges.
Section 16: Total Error Evaluation Protocol
Description 16: Outline the protocols for estimating total error in routine use, emphasizing the need for comprehensive error assessments.
Section 17: Conclusions
Description 17: Summarize the paper's findings and argue for the need for comprehensive glucose specifications that encompass total error limits and detailed testing protocols. |
Survey of security services on group communications | 7 | ---
paper_title: Secure group communications using key graphs
paper_content:
Many emerging applications (e.g., teleconference, real-time information services, pay per view, distributed interactive simulation, and collaborative work) are based upon a group communications model, i.e., they require packet delivery from one or more authorized senders to a very large number of authorized receivers. As a result, securing group communications (i.e., providing confidentiality, integrity, and authenticity of messages delivered between group members) will become a critical networking issue.In this paper, we present a novel solution to the scalability problem of group/multicast key management. We formalize the notion of a secure group as a triple (U,K,R) where U denotes a set of users, K a set of keys held by the users, and R a user-key relation. We then introduce key graphs to specify secure groups. For a special class of key graphs, we present three strategies for securely distributing rekey messages after a join/leave, and specify protocols for joining and leaving a secure group. The rekeying strategies and join/leave protocols are implemented in a prototype group key server we have built. We present measurement results from experiments and discuss performance comparisons. We show that our group key management service, using any of the three rekeying strategies, is scalable to large groups with frequent joins and leaves. In particular, the average measured processing time per join/leave increases linearly with the logarithm of group size.
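The logarithmic rekeying cost reported above comes from the tree structure of the key graph: a membership change only invalidates the keys on one leaf-to-root path. The sketch below counts those keys for a complete key tree; the node-indexing scheme is a simplifying assumption rather than the paper's exact construction.

```python
import math

def keys_to_replace_on_leave(n_members, leaving_member, degree=4):
    """(level, node) indices of the auxiliary keys that must be replaced when
    one member leaves a complete key tree of the given degree, i.e. the keys
    on the path from that member's leaf to the root (including the group key)."""
    depth = math.ceil(math.log(max(n_members, 2), degree))
    node = leaving_member
    path = []
    for level in range(depth, 0, -1):
        path.append((level, node))
        node //= degree
    path.append((0, 0))  # the root / group key
    return path

# For 4096 members and degree 4, a single leave touches about 7 keys instead
# of forcing a new group key to be sent individually to 4095 members.
```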
---
paper_title: Scalable Secure Group Communication over IP Multicast
paper_content:
We introduce and analyze a scalable rekeying scheme for implementing secure group communications Internet protocol multicast. We show that our scheme incurs constant processing, message, and storage overhead for a rekey operation when a single member joins or leaves the group, and logarithmic overhead for bulk simultaneous changes to the group membership. These bounds hold even when group dynamics are not known a priori. Our rekeying algorithm requires a particular clustering of the members of the secure multicast group. We describe a protocol to achieve such clustering and show that it is feasible to efficiently cluster members over realistic Internet-like topologies. We evaluate the overhead of our own rekeying scheme and also of previously published schemes via simulation over an Internet topology map containing over 280 000 routers. Through analysis and detailed simulations, we show that this rekeying scheme performs better than previous schemes for a single change to group membership. Further, for bulk group changes, our algorithm outperforms all previously known schemes by several orders of magnitude in terms of actual bandwidth usage, processing costs, and storage requirements.
---
paper_title: Secure group communication using robust contributory key agreement
paper_content:
Contributory group key agreement protocols generate group keys based on contributions of all group members. Particularly appropriate for relatively small collaborative peer groups, these protocols are resilient to many types of attacks. Unlike most group key distribution protocols, contributory group key agreement protocols offer strong security properties such as key independence and perfect forward secrecy. We present the first robust contributory key agreement protocol resilient to any sequence of group changes. The protocol, based on the Group Diffie-Hellman contributory key agreement, uses the services of a group communication system supporting virtual synchrony semantics. We prove that it provides both virtual synchrony and the security properties of Group Diffie-Hellman, in the presence of any sequence of (potentially cascading) node failures, recoveries, network partitions, and heals. We implemented a secure group communication service, Secure Spread, based on our robust key agreement protocol and Spread group communication system. To illustrate its practicality, we compare the costs of establishing a secure group with the proposed protocol and a protocol based on centralized group key management, adapted to offer equivalent security properties.
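The contributory property (every member's secret exponent enters the group key) can be illustrated with a toy computation. The real Group Diffie-Hellman protocols distribute the intermediate values through upflow/downflow message rounds and use large, carefully chosen primes; here the intermediates are computed directly with a small, insecure modulus just to show that all members derive the same key.

```python
P = 2**127 - 1   # illustrative prime modulus; far too small for real use
G = 3

def contributory_group_key(secret_exponents):
    """Each member i receives G raised to the product of everyone else's
    exponents and then applies its own, so all members end up with
    G**(x1*x2*...*xn) mod P without any single party choosing the key."""
    order = P - 1
    keys = []
    for i, xi in enumerate(secret_exponents):
        others = 1
        for j, xj in enumerate(secret_exponents):
            if j != i:
                others = (others * xj) % order
        partial = pow(G, others, P)        # what member i would receive
        keys.append(pow(partial, xi, P))   # member i folds in its own secret
    assert len(set(keys)) == 1             # everyone computes the same key
    return keys[0]
```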
---
paper_title: Hierarchical group access control for secure multicast communications
paper_content:
Many group communications require a security infrastructure that ensures multiple levels of access control for group members. While most existing group key management schemes are designed for single level access control, we present a multi-group key management scheme that achieves hierarchical group access control. In particular, we design an integrated key graph that maintains keying material for all members with different access privileges. It also incorporates new functionalities that are not present in conventional multicast key management, such as user relocation on the key graph. Analysis is performed to evaluate the storage and communication overhead associated with key management. Comprehensive simulations are performed for various application scenarios where users' statistical behavior is modelled using a discrete Markov chain. Compared with applying existing key management schemes directly to the hierarchical access control problem, the proposed scheme significantly reduces the overhead associated with key management and achieves better scalability.
---
paper_title: A centralized key management scheme for hierarchical access control
paper_content:
Key management schemes are used to provide access control to data streams for legitimate users. The users often have certain partially ordered relations, while data streams also form some partially ordered relations. Previous key management schemes have failed to take into consideration either the user relations or data stream relations. We propose a centralized key management scheme for hierarchical access control that considers both partially ordered users and partially ordered data streams. Our scheme improves the efficiency of key management by encrypting multiple equivalent data streams with a single data encryption key, instead of encrypting each data stream with a unique data encryption key in the multi-group key management scheme (Sun, Y. and Ray Liu, K.J., IEEE INFOCOM, 2004). We develop a simulation model to evaluate the performance of our proposed scheme. Simulation results show that our scheme reduces at least 20% of storage overhead at every user and rekey overhead compared to the multi-group key management scheme.
---
paper_title: A Practical and Provably Secure Coalition-Resistant Group Signature Scheme
paper_content:
A group signature scheme allows a group member to sign messages anonymously on behalf of the group. However, in the case of a dispute, the identity of a signature's originator can be revealed (only) by a designated entity. The interactive counterparts of group signatures are identity escrow schemes or group identification scheme with revocable anonymity. This work introduces a new provably secure group signature and a companion identity escrow scheme that are significantly more efficient than the state of the art. In its interactive, identity escrow form, our scheme is proven secure and coalition-resistant under the strong RSA and the decisional Diffie-Hellman assumptions. The security of the non-interactive variant, i.e., the group signature scheme, relies additionally on the Fiat-Shamir heuristic (also known as the random oracle model).
---
paper_title: Threshold signature scheme with multiple signing policies
paper_content:
Based on one cryptographic assumption, a basic threshold scheme is derived which allows a group of users to share multiple secrets. The major advantage of the threshold scheme is that only one secret shadow needs to be kept by each user. The basic threshold scheme is applied to design an efficient threshold signature scheme with multiple signing policies. The threshold signature scheme with multiple signing policies allows multiple secret keys to be shared among a group of users, and each secret key has its specific threshold value. Different secret keys can be used to sign documents depending on the significance of the documents. Once the number of cooperating users is greater than or equal to the threshold value of the group secret key, they can cooperate to create group signatures. Similarly, each user keeps only one secret shadow.
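The threshold behaviour that such schemes build on can be illustrated with plain Shamir secret sharing: any t of n shares reconstruct a secret, while fewer reveal nothing. The sketch below shows only that underlying idea; the paper's contribution (one shadow per user serving several secrets with different thresholds) is not reproduced here, and the prime modulus is an illustrative assumption.

```python
import random

P = 2**127 - 1  # illustrative prime field

def make_shares(secret, t, n, rng=random.Random(0)):
    """Split `secret` into n shares with threshold t (Shamir): evaluate a
    random degree-(t-1) polynomial with constant term `secret` at x = 1..n."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret from any t shares by Lagrange interpolation at 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```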
---
paper_title: Provably secure and ID-based group signature scheme
paper_content:
There are two important directions in group signature schemes: ID-based group signature schemes and provably secure group signature schemes. We first analyze the advantages and flaws of these two directions, and then propose a provably secure ID-based group signature scheme based on the ACJT group signature scheme. Compared with prior ID-based group signature schemes, our scheme is provably secure in the random oracle model. Compared with the ACJT scheme, our scheme is ID-based and offers strong non-repudiation.
---
paper_title: A Practical and Provably Secure Coalition-Resistant Group Signature Scheme
paper_content:
A group signature scheme allows a group member to sign messages anonymously on behalf of the group. However, in the case of a dispute, the identity of a signature's originator can be revealed (only) by a designated entity. The interactive counterparts of group signatures are identity escrow schemes or group identification scheme with revocable anonymity. This work introduces a new provably secure group signature and a companion identity escrow scheme that are significantly more efficient than the state of the art. In its interactive, identity escrow form, our scheme is proven secure and coalition-resistant under the strong RSA and the decisional Diffie-Hellman assumptions. The security of the non-interactive variant, i.e., the group signature scheme, relies additionally on the Fiat-Shamir heuristic (also known as the random oracle model).
---
paper_title: Framework for anonymity in IP-multicast environments
paper_content:
The importance of the global Internet for conferencing and entertainment increases with the expanding availability of multicast-capable networks. As with other applications and services, security concerns in multicast environments have become a major topic for the research community. While many publications analyse the problems of group authentication and privacy, the aspect of anonymity in multicast environments has seldom been considered. This paper gives a fundamental overview of this topic, introduces a concept for providing anonymity for both senders and receivers in a multicast scenario, and presents an optimisation concept for the system.
---
paper_title: Threshold signature scheme with multiple signing policies
paper_content:
Based on one cryptographic assumption, a basic threshold scheme is derived which allows a group of users to share multiple secrets. The major advantage of the threshold scheme is that only one secret shadow needs to be kept by each user. The basic threshold scheme is applied to design an efficient threshold signature scheme with multiple signing policies. The threshold signature scheme with multiple signing policies allows multiple secret keys to be shared among a group of users, and each secret key has its specific threshold value. Different secret keys can be used to sign documents depending on the significance of the documents. Once the number of cooperating users is greater than or equal to the threshold value of the group secret key, they can cooperate to create group signatures. Similarly, each user keeps only one secret shadow.
---
paper_title: Hierarchical group access control for secure multicast communications
paper_content:
Many group communications require a security infrastructure that ensures multiple levels of access control for group members. While most existing group key management schemes are designed for single level access control, we present a multi-group key management scheme that achieves hierarchical group access control. In particular, we design an integrated key graph that maintains keying material for all members with different access privileges. It also incorporates new functionalities that are not present in conventional multicast key management, such as user relocation on the key graph. Analysis is performed to evaluate the storage and communication overhead associated with key management. Comprehensive simulations are performed for various application scenarios where users' statistical behavior is modelled using a discrete Markov chain. Compared with applying existing key management schemes directly to the hierarchical access control problem, the proposed scheme significantly reduces the overhead associated with key management and achieves better scalability.
---
paper_title: Secure group communications using key graphs
paper_content:
Many emerging applications (e.g., teleconference, real-time information services, pay per view, distributed interactive simulation, and collaborative work) are based upon a group communications model, i.e., they require packet delivery from one or more authorized senders to a very large number of authorized receivers. As a result, securing group communications (i.e., providing confidentiality, integrity, and authenticity of messages delivered between group members) will become a critical networking issue.In this paper, we present a novel solution to the scalability problem of group/multicast key management. We formalize the notion of a secure group as a triple (U,K,R) where U denotes a set of users, K a set of keys held by the users, and R a user-key relation. We then introduce key graphs to specify secure groups. For a special class of key graphs, we present three strategies for securely distributing rekey messages after a join/leave, and specify protocols for joining and leaving a secure group. The rekeying strategies and join/leave protocols are implemented in a prototype group key server we have built. We present measurement results from experiments and discuss performance comparisons. We show that our group key management service, using any of the three rekeying strategies, is scalable to large groups with frequent joins and leaves. In particular, the average measured processing time per join/leave increases linearly with the logarithm of group size.
---
paper_title: A centralized key management scheme for hierarchical access control
paper_content:
Key management schemes are used to provide access control to data streams for legitimate users. The users often have certain partially ordered relations, while data streams also form some partially ordered relations. Previous key management schemes have failed to take into consideration either the user relations or data stream relations. We propose a centralized key management scheme for hierarchical access control that considers both partially ordered users and partially ordered data streams. Our scheme improves the efficiency of key management by encrypting multiple equivalent data streams with a single data encryption key, instead of encrypting each data stream with a unique data encryption key in the multi-group key management scheme (Sun, Y. and Ray Liu, K.J., IEEE INFOCOM, 2004). We develop a simulation model to evaluate the performance of our proposed scheme. Simulation results show that our scheme reduces at least 20% of storage overhead at every user and rekey overhead compared to the multi-group key management scheme.
---
paper_title: Secure group communication using robust contributory key agreement
paper_content:
Contributory group key agreement protocols generate group keys based on contributions of all group members. Particularly appropriate for relatively small collaborative peer groups, these protocols are resilient to many types of attacks. Unlike most group key distribution protocols, contributory group key agreement protocols offer strong security properties such as key independence and perfect forward secrecy. We present the first robust contributory key agreement protocol resilient to any sequence of group changes. The protocol, based on the Group Diffie-Hellman contributory key agreement, uses the services of a group communication system supporting virtual synchrony semantics. We prove that it provides both virtual synchrony and the security properties of Group Diffie-Hellman, in the presence of any sequence of (potentially cascading) node failures, recoveries, network partitions, and heals. We implemented a secure group communication service, Secure Spread, based on our robust key agreement protocol and Spread group communication system. To illustrate its practicality, we compare the costs of establishing a secure group with the proposed protocol and a protocol based on centralized group key management, adapted to offer equivalent security properties.
---
paper_title: Xor-trees for efficient anonymous multicast and reception
paper_content:
We examine the problem of efficient anonymous multicast and reception in general communication networks. We present algorithms that achieve anonymous communication, are protected against traffic analysis, and require O(1) amortized communication complexity on each link and low computational complexity. The algorithms support sender anonymity, receiver(s) anonymity, or sender-receiver anonymity.
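The low per-link cost of such schemes rests on simple XOR combining: a message split into XOR shares can only be recovered by combining all of them, and each share on its own looks random. The sketch below shows just that combining step, not the tree construction or routing of the paper; the function names are illustrative.

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_into_shares(message, n_shares):
    """Produce n_shares byte strings whose XOR equals `message`:
    n_shares - 1 random pads plus the message XORed with all pads."""
    pads = [os.urandom(len(message)) for _ in range(n_shares - 1)]
    last = message
    for pad in pads:
        last = xor_bytes(last, pad)
    return pads + [last]

def combine_shares(shares):
    out = shares[0]
    for share in shares[1:]:
        out = xor_bytes(out, share)
    return out  # equals the original message
```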
---
paper_title: Provably secure and ID-based group signature scheme
paper_content:
There are two important directions in group signature schemes: ID-based group signature schemes and provably secure group signature schemes. We first analyze the advantages and flaws of these two directions, and then propose a provably secure ID-based group signature scheme based on the ACJT group signature scheme. Compared with prior ID-based group signature schemes, our scheme is provably secure in the random oracle model. Compared with the ACJT scheme, our scheme is ID-based and offers strong non-repudiation.
---
paper_title: Framework for anonymity in IP-multicast environments
paper_content:
The importance of the global Internet for conferencing and entertainment increases with the expanding availability of multicast-capable networks. As with other applications and services, security concerns in multicast environments have become a major topic for the research community. While many publications analyse the problems of group authentication and privacy, the aspect of anonymity in multicast environments has seldom been considered. This paper gives a fundamental overview of this topic, introduces a concept for providing anonymity for both senders and receivers in a multicast scenario, and presents an optimisation concept for the system.
---
paper_title: KHIP—a scalable protocol for secure multicast routing
paper_content:
We present Keyed HIP (KHIP), a secure, hierarchical multicast routing protocol. We show that other shared-tree multicast routing protocols are subject to attacks against the multicast routing infrastructure that can isolate receivers or domains or introduce loops into the structure of the multicast routing tree. KHIP changes the multicast routing model so that only trusted members are able to join the multicast tree. This protects the multicast routing against attacks that could form branches to unauthorized receivers, prevents replay attacks and limits the effects of flooding attacks. Untrusted routers that are present on the path between trusted routers cannot change the routing and can mount no denial-of-service attack stronger than simply dropping control messages. KHIP also provides a simple mechanism for distributing data encryption keys while adding little overhead to the protocol.
---
paper_title: Xor-trees for efficient anonymous multicast and reception
paper_content:
We examine the problem of efficient anonymous multicast and reception in general communication networks. We present algorithms that achieve anonymous communication, are protected against traffic analysis, and require O(1) amortized communication complexity on each link and low computational complexity. The algorithms support sender anonymity, receiver(s) anonymity, or sender-receiver anonymity.
---
paper_title: A secure group solution for multi-agent ec system
paper_content:
Mobile agent technology applied to electronic commerce (EC) faces many problems, and security should be the first concern. In large-scale multi-agent systems, the most challenging problems are locating and communicating with these autonomously executing agents. Previous approaches have used several different methods to solve these problems, but they were unsatisfactory because of their complexity, poor performance, or lack of applicability. We propose a secure group communication solution to the security, locating, and communication problems. We differentiate the "Agent-Based" multicast concept from the traditional "Host-Based" multicast, describe the requirements for providing secure group services, and introduce the key distribution method. A cryptographic method is added to further protect the agents. Finally, we analyze the security and performance issues and draw conclusions about our solution.
---
paper_title: A Secure Group Communication Framework in Private Personal Area Networks (P-PANs)
paper_content:
One of the most promising next-generation networks is the personal network, where a user can form ad-hoc networks with his/her personal devices. However, present security mechanisms do not consider what happens whenever a mobile node (device) is compromised, lost or stolen. Since a user may hold many different types of devices, the leakage of stored secrets sometimes results in the complete breakdown of the intended security level. For that reason, we propose two Leakage-Resilient and Forward-Secure Authenticated Key Exchange protocols (LRFS-AKE1 and LRFS-AKE2), where the former is used to authenticate a device when its owner is present and the latter is used otherwise. These protocols guarantee not only forward secrecy of the key shared between a device and its server but also a new additional layer of security against leakage of stored secrets. Furthermore, we give a secure group communication framework in Private Personal Area Networks (P-PANs), based on the LRFS-AKE1 and LRFS-AKE2 protocols, which provides group key privacy against the involved server as long as the server does not deviate from the protocol.
---
paper_title: Efficient secure group management for SSM
paper_content:
We propose an approach to channel key management in the S-SSM architecture, which we designed to secure SSM communication. S-SSM defines two mechanisms for access control and content protection. The first is carried out through subscriber authentication and access permission. The second is realized through the management of a unique key, called the channel key, k_ch, shared among the sender and subscribers. The management of k_ch is based on a novel distributed encryption scheme that enables an entity to efficiently add and remove a subscriber without affecting other subscribers.
---
paper_title: A Secure Group Communication Framework in Private Personal Area Networks (P-PANs)
paper_content:
One of the most promising next-generation networks is the personal network, where a user can form ad-hoc networks with his/her personal devices. However, present security mechanisms do not consider what happens whenever a mobile node (device) is compromised, lost or stolen. Since a user may hold many different types of devices, the leakage of stored secrets sometimes results in the complete breakdown of the intended security level. For that reason, we propose two Leakage-Resilient and Forward-Secure Authenticated Key Exchange protocols (LRFS-AKE1 and LRFS-AKE2), where the former is used to authenticate a device when its owner is present and the latter is used otherwise. These protocols guarantee not only forward secrecy of the key shared between a device and its server but also a new additional layer of security against leakage of stored secrets. Furthermore, we give a secure group communication framework in Private Personal Area Networks (P-PANs), based on the LRFS-AKE1 and LRFS-AKE2 protocols, which provides group key privacy against the involved server as long as the server does not deviate from the protocol.
---
paper_title: Efficient secure group management for SSM
paper_content:
We propose an approach to channel key management in the S-SSM architecture, which we designed to secure SSM communication. S-SSM defines two mechanisms for access control and content protection. The first is carried out through subscriber authentication and access permission. The second is realized through the management of a unique key, called the channel key, k_ch, shared among the sender and subscribers. The management of k_ch is based on a novel distributed encryption scheme that enables an entity to efficiently add and remove a subscriber without affecting other subscribers.
---
| Title: Survey of Security Services on Group Communications
Section 1: Introduction
Description 1: Provide an overview of group communication systems, the need for security services, and the role of cryptographic materials.
Section 2: Security requirements and services in group communications
Description 2: Discuss the six security requirements in group communications and the five security services required to meet these needs.
Section 3: Performance attributes for evaluating secure GCSs
Description 3: Explain the fundamental and service-specific attributes used to evaluate and compare different secure group communication systems (SGCs).
Section 4: Security services for group communications
Description 4: Detail the essential security services that meet the security requirements discussed earlier, including group key management and group access control.
Section 5: Group communication-oriented networks
Description 5: Review some SGC frameworks implemented on existing networks, such as multi-agent systems, personal area networks, and IP multicast networks.
Section 6: Challenging factors in designing secure GCSs
Description 6: Summarize various attributes and considerations in designing secure and high-performance group communication systems.
Section 7: Conclusions
Description 7: Provide a summary and better understanding of security requirements and services in GCSs. Mention the evaluation attributes used and the comparisons made in the study. |
Medical imaging fusion applications: An overview | 6 | ---
paper_title: Adaptive neural network imaging in medical systems
paper_content:
Recent technological advances in medicine have facilitated the development of sophisticated equipment enabling the better delivery of health care services. In parallel, artificial neural networks have emerged as promising tools for the application and implementation of intelligent systems. The aim of this paper is to provide a snapshot of the application of neural network systems in medical imaging. The paper highlights neural network applications in the analysis of cervicovaginal smears, mammography, microscopy, ultrasound imaging, and lesion placement in pallidotomy. It is anticipated that the application of neural network systems in medicine will provide the framework for the development of emerging medical systems, enabling the better delivery of health care.
---
paper_title: Some terms of reference in data fusion
paper_content:
The concept of data fusion is easy to understand. However its exact meaning varies from one scientist to another. A working group, set up by the European Association of Remote Sensing Laboratories (EARSeL) and the French Society for Electricity and Electronics (SEE, French affiliate of the IEEE), devoted most of its efforts to establish a lexicon or terms of reference, which is presented in this communication. A new definition of the data fusion is proposed which better fits the remote sensing domain. Data fusion should be seen as a framework, not merely as a collection of tools and means. This definition emphasizes the concepts and the fundamentals in remote sensing. The establishment of a lexicon or terms of reference allows the scientific community to express the same ideas using the same words, and also to disseminate their knowledge towards the industry and 'customers' communities. Moreover it is a sine qua non condition to set up clearly the concept of data fusion and the associated formal framework. Such a framework is mandatory for a better understanding of data fusion fundamentals and of its properties. It allows a better description and formalization of the potentials of synergy between the remote sensing data, and accordingly, a better exploitation of these data. Finally, the introduction of the concept of data fusion into the remote sensing domain should raise the awareness of our colleagues on the whole chain ranging from the sensor to the decision, including the management, assessment and control of the quality of the information. The problem of alignment of the information to be fused is very difficult to tackle. It is a pre-requisite to any fusion process and should be considered with great care.
---
paper_title: Adaptive neural network imaging in medical systems
paper_content:
Recent technological advances in medicine have facilitated the development of sophisticated equipment enabling the better delivery of health care services. In parallel, artificial neural networks have emerged as promising tools for the application and implementation of intelligent systems. The aim of this paper is to provide a snapshot of the application of neural network systems in medical imaging. The paper highlights neural network applications in the analysis of cervicovaginal smears, mammography, microscopy, ultrasound imaging, and lesion placement in pallidotomy. It is anticipated that the application of neural network systems in medicine will provide the framework for the development of emerging medical systems, enabling the better delivery of health care.
---
paper_title: Fusion of ultrasonic and radiographic images of the breast
paper_content:
The diagnosis of breast pathologies requires in most cases a mammographic and an echographic examination. Radiographic and ultrasonic images are complementary for the elaboration of a diagnosis. To realize both examinations quasi simultaneously, a device has been developed to acquire data from the same volume of interest in the breast: 2D radiographic images and a series of ultrasonic images which constitutes a volume of data. From this volume, an ultrasonic image, similar to a X-ray image is synthesized by a conical projection of the volume. The important information provided by each modality is then merged by data fusion techniques, involving fuzzy logic, to provide an image which is a tool for diagnostic aid.
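As a point of comparison for the fuzzy-logic merging described above, the simplest pixel-level fusion rules combine two co-registered images directly. The maximum rule sketched below is such a generic baseline, not the paper's method, and it assumes the projected ultrasonic image and the radiograph are already spatially aligned.

```python
import numpy as np

def fuse_max_rule(xray_img, us_img):
    """Normalise two co-registered images to [0, 1] and keep, per pixel,
    the stronger response of the two modalities (a common baseline rule)."""
    def normalise(img):
        img = img.astype(float)
        return (img - img.min()) / (np.ptp(img) + 1e-9)
    return np.maximum(normalise(xray_img), normalise(us_img))
```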
---
paper_title: Automatic assessment of myocardial viability based on PET-MRI data fusion
paper_content:
In this paper, a fusion system which combines data generated by tagged magnetic resonance imaging (MRI) and F18-FDG positron emission tomography (PET) is presented. MRI and PET complementarity leads to a very accurate assessment of the myocardial viability in patients with coronary heart disease. An accurate viability analysis allows a better prediction of the successes of the revascularization procedure. The fusion system is based on soft computing techniques. It is a modular network which consists of four adaptive network-based fuzzy inference systems (ANFIS) organized in a hierarchical way. The network is able to learn and adapt itself and integrates expert knowledge. It is considered a valuable tool for clinical and research applications.
---
| Title: Medical Imaging Fusion Applications: An Overview
Section 1: Introduction
Description 1: Present the overall objective of Computer Aided Diagnostic (CAD) systems and introduce the concept of medical imaging fusion, highlighting its benefits and applications.
Section 2: Literature Review and a Snapshot of Selected Applications
Description 2: Summarize the literature search results on data fusion in medical imaging, citing relevant statistics and providing examples of its application in various medical fields.
Section 3: Case Studies
Description 3: Discuss detailed case studies showcasing image fusion techniques in oncology, microscopy, ultrasound imaging, and lesion placement in pallidotomy.
Section 4: A Modular Neural Network System for the Analysis of Nuclei in Histopathological Sections
Description 4: Describe the modular neural network system used for detecting and classifying breast cancer nuclei, detailing the methods employed and the results obtained.
Section 5: A Multi-feature Multi-classifier System for the Classification of Atherosclerotic Carotid Plaques
Description 5: Explain the computer-aided system developed for characterizing carotid plaques using texture features, and discuss its effectiveness in identifying patients at risk of stroke.
Section 6: Concluding Remarks
Description 6: Summarize the concluding remarks, discussing the general issues in medical imaging and data fusion, and providing insights and recommendations for future research. |
A Review of Personal Communications Services | 9 | ---
paper_title: Personal Communications Services Through the Evolution of Fixed and Mobile Communications and the Intelligent Network Concept
paper_content:
New telecommunications services tend to consider fixed network subscribers' requirements as well as mobile network subscribers' requirements. On one hand, subscribers of fixed networks would like to benefit from the mobility offered in mobile networks. On the other hand, mobile subscribers would like to access to services inherent in fixed networks. Personal communications services (PCS) meet this trend while allowing fixed and mobile convergence. In this environment, the application of intelligent networks (INs) to fixed and mobile networks is very convenient to realize PCS. Thus, the natural advancement of telecommunications systems (fixed and mobile) consists in the definition of new telecommunications architectures which take into account technologies from both fixed and mobile environments. This article studies how the IN is used to support mobility and interworking for PCS. Although mobility management already exists in cellular networks like GSM, it is desirable to use the IN concept to introduce flexibility. In addition, the IN allows the introduction of new supplementary services in PCS. Furthermore, the IN concept can be utilized to provide necessary networking functions for the integration of fixed and mobile networks. This article also highlights the involvement of IN in the definition of the global communications systems such as Telecommunication Information Networking Architecture (TINA), Universal Mobile Telecommunications System (UMTS), and International Mobile Telecommunications in the year 2000 (IMT2000).
---
paper_title: Global development of PCS
paper_content:
A brief survey of international efforts to address spectrum and incumbency issues and implement new personal communications services (PCS) is presented. Worldwide frequency allocations agreed to in the 1992 World Administrative Radio Conference (WARC 92) are discussed. The development of personal communication services in European, Asian, and North American countries is reviewed.
---
paper_title: Network evolution to support personal communications services
paper_content:
Personal communications services (PCS) are a set of capabilities that allow terminal mobility, personal mobility, and service mobility. The PCS concept is part of such initiatives as universal personal telecommunications (UPT) promoted by standards bodies. PCS is generally segmented into low tier and high tier services, significantly lower in cost than today's cellular service, providing significantly more mobility than today's landline service, and having voice quality and feature sets similar to today's landline service. The paper addresses how communications networks may evolve in supporting the vision of personal communications networks (PCN), which is to create a single seamless network to provide services associated with a mobile individual or a personal identifier, and not simply a terminal. Specifically addressed is the evolution of two of the most common networks in the United States - the cellular and landline networks. Discussed are several new technologies that facilitate the creation of a PCN; among the most important are SS7 signaling, intelligent network, digital switching, digital radio, portable terminal, and operations support technologies. Also addressed is an evolution of wireless communications to broadband networks.
---
paper_title: Toward a framework for power control in cellular systems
paper_content:
Efficiently sharing the spectrum resource is of paramount importance in wireless communication systems, in particular in Personal Communications where large numbers of wireless subscribers are to be served. Spectrum resource sharing involves protecting other users from excessive interference as well as making receivers more tolerant to this interference. Transmitter power control techniques fall into the first category. In this paper we describe the power control problem, discuss its major factors, objective criteria, measurable information and algorithm requirements. We attempt to put the problem in a general framework and propose an evolving knowledge-bank to share, study and compare between algorithms.
---
paper_title: Personal communications networks bridging the gap between cellular and cordless phones
paper_content:
Today's cellular radiotelephone systems currently serve some 12 million subscribers, but at average costs of $70/month for service, cellular remains a business, not a consumer, service. On the other hand, cordless phones are already a consumer product in over 40% of US households and annual sales of new cordless phones are already greater than sales of regular corded phones. Personal Communications Networks (PCN) providing Personal Communications Services (PCS) are designed to bridge the gap between expensive public cellular and private cordless services. In this paper we explore PCN/PCS topics including: a definition of the service, identification of the underlying technologies, and discussion of tradeoffs between the technologies.
---
paper_title: Personal Communications Services Through the Evolution of Fixed and Mobile Communications and the Intelligent Network Concept
paper_content:
New telecommunications services tend to consider fixed network subscribers' requirements as well as mobile network subscribers' requirements. On one hand, subscribers of fixed networks would like to benefit from the mobility offered in mobile networks. On the other hand, mobile subscribers would like to access services inherent in fixed networks. Personal communications services (PCS) meet this trend while allowing fixed and mobile convergence. In this environment, the application of intelligent networks (INs) to fixed and mobile networks is very convenient to realize PCS. Thus, the natural advancement of telecommunications systems (fixed and mobile) consists in the definition of new telecommunications architectures which take into account technologies from both fixed and mobile environments. This article studies how the IN is used to support mobility and interworking for PCS. Although mobility management already exists in cellular networks like GSM, it is desirable to use the IN concept to introduce flexibility. In addition, the IN allows the introduction of new supplementary services in PCS. Furthermore, the IN concept can be utilized to provide necessary networking functions for the integration of fixed and mobile networks. This article also highlights the involvement of IN in the definition of the global communications systems such as Telecommunication Information Networking Architecture (TINA), Universal Mobile Telecommunications System (UMTS), and International Mobile Telecommunications in the year 2000 (IMT2000).
---
paper_title: Heterogeneous personal communications services: integration of PCS systems
paper_content:
Personal communications services (PCS) are being introduced to offer ubiquitous communication. In its first phase PCS consists of a plethora of systems that address cellular, vehicular, cordless phone, and a variety of other services. The integration of these different systems is referred to as "heterogeneous PCS (HPCS)". We describe the various PCS systems available and address in detail the issue of PCS systems integration. Key implementation issues for integrating PCS systems are defined and discussed.
---
paper_title: Network architecture and signaling for wireless personal communications
paper_content:
A distributed microcellular architecture based on the IEEE 802.6 Metropolitan Area Network (MAN) is proposed, and is shown to meet anticipated personal communications service (PCS) needs. A method is presented to calculate MAN coverage in urban areas, and is used to demonstrate coverage of approximately 50 city blocks per MAN. A distributed subscriber database architecture is proposed to facilitate call setup, tracking of roamers and handoffs. To fully utilize MAN bandwidth, a quick method for the head stations to switch on/off isochronous slots is proposed to facilitate adaptation to PCS traffic level variations. Call setup and handoff procedures are detailed. The PCS signaling overhead is calculated to be 15% of the capacity required to carry voice traffic.
---
paper_title: Comparison of signaling loads for PCS systems
paper_content:
We present a comparison of the control signaling load of two vastly different architectures for providing personal communication services (PCSs). One architecture is based on current cellular networks. The other architecture, called the wireless distributed call processing architecture (WDCPA), distributes processing from the mobile switching centers and cell sites and executes new procedures for tracking mobile users and locating mobile users to deliver calls. We determine the signaling load generated within each system to support mobility management and call control based on standard assumptions about the operating parameters of a cellular network. Our results show that, when compared to current cellular systems, for simple single-connection services, WDCPA has marginally reduced cross-network signaling loads. For multiconnection calls, WDCPA incurs 35% less total signaling load for mobility management, has reduced cross-network signaling load for mobility management by up to 65%, and depending on the user model (e.g., data or telecommunication), has reduced total cross-network signaling load, including procedures for call/connection and mobility management, by up to 55% when compared to current cellular systems, while more flexibly supporting services.
---
paper_title: Intelligent network requirements for personal communications services
paper_content:
It is shown that a moderate set of capabilities within the service switching point (SSP), service control point (SCP), and intelligent peripheral (IP), which are the intelligent network (IN) components directly responsible for the real-time execution and control of end-user services, can engender a wide range of end-user personal communication service (PCS) features. These capabilities could be used as the starting point for an economic analysis of IN implementation costs versus service worth. From a very large target set of call model trigger check points (TCPs), the dozen or so identified in the CCITT Capability Set 1 are shown to be sufficient. The SCP and IP functional entity actions identified are also sufficient to support PCS core network functions.
---
paper_title: IN architectures for implementing universal personal telecommunications
paper_content:
This article focuses on architectures for providing universal personal telecommunications (UPT) service to wireline users. Although UPT services could be provided to users of wireless phones, thereby giving those users personal communication services (PCS), the wireline environment introduces certain important complications. Unlike "smart" cellular phones, which can register themselves and the user automatically, wireline telephones are unable to automatically detect and register a UPT user. UPT therefore includes a manual registration procedure to associate a PTN with the phone where calls will be received or placed. Also, unlike personal communications terminals that are typically used by only one person, wireline phones are likely to be shared among other users. Therefore, the network must keep track of who is using the phone, so it can provide the appropriate telecommunications services. It would be difficult or impossible to implement UPT as a switch-based service. Fortunately, an intelligent network (IN) architecture that is well suited for implementing UPT is being deployed by many local exchange carriers (LECs) and interexchange carriers (IXCs).
---
paper_title: Intelligent network: a key platform for PCS interworking and interoperability
paper_content:
The T1 and TIA standards committees in the United States have worked jointly on the development of the first phase of personal communication services (PCS) standards, which were approved in December 1995. PCS systems based on these standards are currently under development. As these systems are deployed, the variety of wireless systems will grow, making interworking and interoperability a key challenge. This article provides an overview of PCS standards and explores how the different types of wireless systems (PCS and cellular) will utilize the capabilities of the intelligent network to provide seamless roaming.
---
paper_title: Personal Communications Services Through the Evolution of Fixed and Mobile Communications and the Intelligent Network Concept
paper_content:
New telecommunications services tend to consider fixed network subscribers' requirements as well as mobile network subscribers' requirements. On one hand, subscribers of fixed networks would like to benefit from the mobility offered in mobile networks. On the other hand, mobile subscribers would like to access services inherent in fixed networks. Personal communications services (PCS) meet this trend while allowing fixed and mobile convergence. In this environment, the application of intelligent networks (INs) to fixed and mobile networks is very convenient to realize PCS. Thus, the natural advancement of telecommunications systems (fixed and mobile) consists in the definition of new telecommunications architectures which take into account technologies from both fixed and mobile environments. This article studies how the IN is used to support mobility and interworking for PCS. Although mobility management already exists in cellular networks like GSM, it is desirable to use the IN concept to introduce flexibility. In addition, the IN allows the introduction of new supplementary services in PCS. Furthermore, the IN concept can be utilized to provide necessary networking functions for the integration of fixed and mobile networks. This article also highlights the involvement of IN in the definition of the global communications systems such as Telecommunication Information Networking Architecture (TINA), Universal Mobile Telecommunications System (UMTS), and International Mobile Telecommunications in the year 2000 (IMT2000).
---
paper_title: Intelligent networks: a key to provide personal communications services
paper_content:
Personal communications services provide communication services to any user anywhere, whichever network he uses, at any time and in any form. This concept could be symbolized by a small telephone used indoors (home, office) and outdoors (in the street). Such attractive features impose some requirements on the network. The intelligent network is a concept characterized by a flexible and open architecture. Thus, the intelligent network has been introduced in fixed networks to facilitate the development of new services and is being used to integrate mobility in this type of network. The intelligent network has also been introduced in mobile networks to increase the flexibility of such networks, allowing improved service provision. As a consequence, the IN has the characteristics needed to provide PCS efficiently in both fixed and mobile networks. Since PCS can be offered in fixed and mobile networks thanks to the IN, future networks will have to consider this concept in order to offer personal services to users.
---
| Title: A Review of Personal Communications Services
Section 1: Introduction
Description 1: Introduce the concept of Personal Communications Services (PCS) and discuss its significance and general features.
Section 2: Technological Evolution for Personal Communications
Description 2: Review the history and technological development of personal communication systems, including mobile radio, cellular telephone, and cordless telephone.
Section 3: Spectrum Allocation
Description 3: Discuss the spectrum allocation for PCS, including the decisions made by the World Administrative Radio Conference (WARC) and the role of the FCC.
Section 4: Mobility
Description 4: Explain the different types of mobility supported by PCS, namely terminal mobility, personal mobility, and service mobility, and their relevant functionalities.
Section 5: Standardization Efforts
Description 5: Explore the international standards and organizations involved in PCS standardization, such as ITU, ETSI, TIA, and Committee T1.
Section 6: Heterogeneous PCS (HPCS)
Description 6: Describe the concept and advantages of integrating different PCS systems, such as cellular and cordless telephony, into a heterogeneous PCS.
Section 7: Distributed Network Architecture
Description 7: Detail the proposed distributed network architectures for PCS deployment, including distributed microcellular architecture and Wireless Distributed Call Processing Architecture (WDCPA).
Section 8: Intelligent Network
Description 8: Outline the application of intelligent network (IN) concepts in PCS to provide flexible and cost-effective service provisioning and support seamless roaming.
Section 9: Conclusion
Description 9: Summarize the unique characteristics and benefits of PCS, and emphasize the importance of integrating wireless and wireline systems for ubiquitous PCS. |
A Survey of parallel intrusion detection on graphical processors | 14 | ---
paper_title: Offloading IDS Computation to the GPU
paper_content:
Signature-matching Intrusion Detection Systems can experience significant decreases in performance when the load on the IDS-host increases. We propose a solution that off-loads some of the computation performed by the IDS to the Graphics Processing Unit (GPU). Modern GPUs are programmable stream processors capable of high-performance computing that in recent years have been used in non-graphical computing tasks. The major operation in a signature-matching IDS is matching values seen in operation to known black-listed values; as such, our solution implements the string-matching on the GPU. The results show that as the CPU load on the IDS host system increases, PixelSnort's performance is significantly more robust and is able to outperform conventional Snort by up to 40%.
---
paper_title: Offloading IDS Computation to the GPU
paper_content:
Signature-matching Intrusion Detection Systems can experience significant decreases in performance when the load on the IDS-host increases. We propose a solution that off-loads some of the computation performed by the IDS to the Graphics Processing Unit (GPU). Modern GPUs are programmable stream processors capable of high-performance computing that in recent years have been used in non-graphical computing tasks. The major operation in a signature-matching IDS is matching values seen in operation to known black-listed values; as such, our solution implements the string-matching on the GPU. The results show that as the CPU load on the IDS host system increases, PixelSnort's performance is significantly more robust and is able to outperform conventional Snort by up to 40%.
---
paper_title: Offloading IDS Computation to the GPU
paper_content:
Signature-matching Intrusion Detection Systems can experience significant decreases in performance when the load on the IDS-host increases. We propose a solution that off-loads some of the computation performed by the IDS to the Graphics Processing Unit (GPU). Modern GPUs are programmable stream processors capable of high-performance computing that in recent years have been used in non-graphical computing tasks. The major operation in a signature-matching IDS is matching values seen in operation to known black-listed values; as such, our solution implements the string-matching on the GPU. The results show that as the CPU load on the IDS host system increases, PixelSnort's performance is significantly more robust and is able to outperform conventional Snort by up to 40%.
---
paper_title: A framework for network traffic analysis using GPUs
paper_content:
In recent years, computer networks have become an important part of our society. Networks have kept growing in size and complexity, making their management and traffic monitoring and analysis processes more complex, due to the huge amount of data and calculations involved. In the last decade, several researchers found it effective to use graphics processing units (GPUs) rather than traditional processors (CPUs) to boost the execution of some algorithms not related to graphics (GPGPU). In 2006 the GPU chip manufacturer NVIDIA launched CUDA, a library that allows software developers to use their GPUs to perform general-purpose algorithm calculations, using the C programming language. This thesis presents a framework that tries to simplify the task of programming network traffic analysis with CUDA for software developers. The objectives of the framework have been to abstract the task of obtaining network packets, to simplify the task of creating network analysis programs using CUDA, and to offer an easy way to reuse the analysis code. Several network traffic analyses have also been developed.
---
paper_title: Parallelizing a network intrusion detection system using a GPU.
paper_content:
As network speeds continue to increase and attacks get increasingly more complicated, there is a need for improved detection algorithms and improved performance of Network Intrusion Detection Systems (NIDS). Recently, several attempts have been made to use the underutilized parallel processing capabilities of GPUs to offload the costly NIDS pattern matching algorithms. This thesis presents an interface for the NIDS Snort that allows porting of the pattern-matching algorithm to run on a GPU. The analysis shows that this system can achieve up to four times speedup over the existing Snort implementation and that GPUs can be effectively utilized to perform intensive computational processes like pattern matching.
---
paper_title: A framework for network traffic analysis using GPUs
paper_content:
In recent years, computer networks have become an important part of our society. Networks have kept growing in size and complexity, making their management and traffic monitoring and analysis processes more complex, due to the huge amount of data and calculations involved. In the last decade, several researchers found it effective to use graphics processing units (GPUs) rather than traditional processors (CPUs) to boost the execution of some algorithms not related to graphics (GPGPU). In 2006 the GPU chip manufacturer NVIDIA launched CUDA, a library that allows software developers to use their GPUs to perform general-purpose algorithm calculations, using the C programming language. This thesis presents a framework that tries to simplify the task of programming network traffic analysis with CUDA for software developers. The objectives of the framework have been to abstract the task of obtaining network packets, to simplify the task of creating network analysis programs using CUDA, and to offer an easy way to reuse the analysis code. Several network traffic analyses have also been developed.
---
paper_title: Accelerating the local outlier factor algorithm on a GPU for intrusion detection systems
paper_content:
The Local Outlier Factor (LOF) is a very powerful anomaly detection method available in machine learning and classification. The algorithm defines the notion of local outlier in which the degree to which an object is outlying is dependent on the density of its local neighborhood, and each object can be assigned an LOF which represents the likelihood of that object being an outlier. Although this concept of a local outlier is a useful one, the computation of LOF values for every data object requires a large number of k-nearest neighbor queries -- this overhead can limit the use of LOF due to the computational overhead involved. Due to the growing popularity of Graphics Processing Units (GPU) in general-purpose computing domains, and equipped with a high-level programming language designed specifically for general-purpose applications (e.g., CUDA), we look to apply this parallel computing approach to accelerate LOF. In this paper we explore how to utilize a CUDA-based GPU implementation of the k-nearest neighbor algorithm to accelerate LOF classification. We achieve more than a 100X speedup over a multi-threaded dual-core CPU implementation. We also consider the impact of input data set size, the neighborhood size (i.e., the value of k) and the feature space dimension, and report on their impact on execution time.
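To make the CPU-side computation concrete, the following minimal sketch scores synthetic data with scikit-learn's LocalOutlierFactor; it is an illustrative baseline of the LOF step the paper accelerates (the k-nearest-neighbor queries), not the authors' CUDA code, and the data set and parameter choices are our own.
```python
# A minimal CPU-side sketch of Local Outlier Factor (LOF) scoring with scikit-learn.
# The GPU work described above parallelizes the k-nearest-neighbor queries that
# dominate this computation; the data here are synthetic and illustrative only.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 8))    # benign traffic features
outliers = rng.uniform(low=-8.0, high=8.0, size=(10, 8))  # anomalous records
X = np.vstack([normal, outliers])

lof = LocalOutlierFactor(n_neighbors=20)      # k controls the local neighborhood
labels = lof.fit_predict(X)                   # -1 = outlier, +1 = inlier
scores = -lof.negative_outlier_factor_        # larger score => more anomalous

print("flagged as outliers:", int((labels == -1).sum()))
print("top-5 LOF scores:", np.sort(scores)[-5:])
```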
---
paper_title: A GPU-Based Multiple-Pattern Matching Algorithm for Network Intrusion Detection Systems
paper_content:
With the development of network applications, network security issues are becoming more and more important. This paper proposes a multiple-pattern matching algorithm for network intrusion detection systems based on the GPU (Graphics Processing Unit). The high parallelism of the GPU's computational power is used to inspect the packet content in parallel. The performance of the proposed approach is analyzed through evaluations such as using various texture formats and different implementations. Experimental results indicate that the performance of the proposed approach is twice that of the modified Wu-Manber algorithm used in Snort. The proposed approach turns a cheap commodity GPU card into a high-performance pattern matching co-processor.
---
paper_title: Efficient Packet Pattern Matching for Gigabit Network Intrusion Detection Using GPUs
paper_content:
With the rapid development of network hardware technologies and network bandwidth, the high link speeds and huge number of threats pose challenges to network intrusion detection systems, which must handle the higher network traffic and perform more complicated packet processing. In general, pattern matching is a highly computationally intensive part of network intrusion detection systems. In this paper, we present an efficient GPU-based pattern matching algorithm that leverages the computational power of GPUs to accelerate the pattern matching operations and increase the overall processing throughput. In the experimental results, the proposed algorithm achieved a maximum traffic processing throughput of 2.4 Gbit/s. The results demonstrate that GPUs can be used effectively to speed up intrusion detection systems.
---
paper_title: Efficient string matching: an aid to bibliographic search
paper_content:
This paper describes a simple, efficient algorithm to locate all occurrences of any of a finite number of keywords in a string of text. The algorithm consists of constructing a finite state pattern matching machine from the keywords and then using the pattern matching machine to process the text string in a single pass. Construction of the pattern matching machine takes time proportional to the sum of the lengths of the keywords. The number of state transitions made by the pattern matching machine in processing the text string is independent of the number of keywords. The algorithm has been used to improve the speed of a library bibliographic search program by a factor of 5 to 10.
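Because this automaton underlies much of the GPU pattern-matching work surveyed here, a compact pure-Python sketch of the construction may help: build a keyword trie, add failure links by breadth-first search, then scan the text in a single pass. This is an illustrative reimplementation of the classic algorithm (without the dense goto-table optimizations), not code from the paper.
```python
# Minimal Aho-Corasick sketch: trie + BFS failure links + single-pass search.
from collections import deque

def build_automaton(patterns):
    trie = [{"next": {}, "fail": 0, "out": []}]          # node 0 is the root
    for pat in patterns:
        node = 0
        for ch in pat:
            if ch not in trie[node]["next"]:
                trie.append({"next": {}, "fail": 0, "out": []})
                trie[node]["next"][ch] = len(trie) - 1
            node = trie[node]["next"][ch]
        trie[node]["out"].append(pat)
    queue = deque(trie[0]["next"].values())              # depth-1 nodes fail to the root
    while queue:
        node = queue.popleft()
        for ch, child in trie[node]["next"].items():
            queue.append(child)
            fail = trie[node]["fail"]
            while fail and ch not in trie[fail]["next"]:
                fail = trie[fail]["fail"]
            trie[child]["fail"] = trie[fail]["next"].get(ch, 0)
            trie[child]["out"] += trie[trie[child]["fail"]]["out"]
    return trie

def search(trie, text):
    node, hits = 0, []
    for i, ch in enumerate(text):
        while node and ch not in trie[node]["next"]:
            node = trie[node]["fail"]
        node = trie[node]["next"].get(ch, 0)
        for pat in trie[node]["out"]:
            hits.append((i - len(pat) + 1, pat))         # (start position, pattern)
    return hits

trie = build_automaton(["he", "she", "his", "hers"])
print(search(trie, "ushers"))   # [(1, 'she'), (2, 'he'), (2, 'hers')]
```
The GPU implementations discussed later in this survey essentially parallelize the search phase of exactly this kind of automaton across packets or packet chunks.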
---
paper_title: Offloading IDS Computation to the GPU
paper_content:
Signature-matching Intrusion Detection Systems can experience significant decreases in performance when the load on the IDS-host increases. We propose a solution that off-loads some of the computation performed by the IDS to the Graphics Processing Unit (GPU). Modern GPUs are programmable stream processors capable of high-performance computing that in recent years have been used in non-graphical computing tasks. The major operation in a signature-matching IDS is matching values seen in operation to known black-listed values; as such, our solution implements the string-matching on the GPU. The results show that as the CPU load on the IDS host system increases, PixelSnort's performance is significantly more robust and is able to outperform conventional Snort by up to 40%.
---
paper_title: Gnort : High Performance Network Intrusion Detection Using Graphics Processors
paper_content:
The constant increase in link speeds and number of threats poses challenges to network intrusion detection systems (NIDS), which must cope with higher traffic throughput and perform even more complex per-packet processing. In this paper, we present an intrusion detection system based on the Snort open-source NIDS that exploits the underutilized computational power of modern graphics cards to offload the costly pattern matching operations from the CPU, and thus increase the overall processing throughput. Our prototype system, called Gnort, achieved a maximum traffic processing throughput of 2.3 Gbit/s using synthetic network traces, while when monitoring real traffic using a commodity Ethernet interface, it outperformed unmodified Snort by a factor of two. The results suggest that modern graphics cards can be used effectively to speed up intrusion detection systems, as well as other systems that involve pattern matching operations.
---
paper_title: A distributed network intrusion detection system architecture based on computer stations using GPGPU
paper_content:
This article deals with the proposal of a distributed parallel network system architecture for intrusion detection. The aim is to design a model of the architecture that will be used for searching, detecting and analyzing intrusions. This proposed network detection system will use the enormous performance of GPGPU highly parallel data computing technology, which can substitute for the most complicated part of the intrusion detection process in a NIDS. The model is highly scalable and allows the usage of several of the most recent trends in the IDS field with GPGPU support. To enable fast reaction to attacks, the system analyzes all the data, in different formats, across the whole set of attacks and system alerts.
---
| Title: A Survey of Parallel Intrusion Detection on Graphical Processors
Section 1: Introduction
Description 1: This section introduces the importance of intrusion detection systems (IDS) in an organization's security infrastructure, highlighting the increasing vulnerability to cyber threats and how IDS serves as a critical layer of defense.
Section 2: Background of IDS
Description 2: This section provides a fundamental understanding of IDS, discussing the approaches of anomaly detection and misuse detection, different steps involved in the process, and various types of IDS such as host-based and network-based systems.
Section 3: Detection Techniques
Description 3: This section elaborates on the different techniques used for intrusion detection, including anomaly detection and misuse detection, along with details on various methods like statistical analysis, neural networks, data mining, and genetic algorithms.
Section 4: IDS Performance Issues
Description 4: This section discusses the performance challenges IDS faces, particularly focusing on CPU-bound and I/O-bound limitations, and other factors that affect the efficiency of IDS, such as encryption and new sophisticated attacks.
Section 5: CPU-Bound Limitations
Description 5: This section delves deeper into the CPU-bound limitations in IDS, discussing the impact of system load, string-matching operations, and approaches to improve the runtime of packet comparisons.
Section 6: I/O-Bound Limitations
Description 6: This section focuses on the I/O-bound limitations in IDS, analyzing the bottleneck caused by packet reading operations and other constraints that hinder IDS performance.
Section 7: Advantages of Parallel Computing Implementation on GPU
Description 7: This section highlights the benefits of using GPU for parallel computing in IDS, detailing improved performance, better memory bandwidth, high scalability, and cost-effectiveness.
Section 8: Related Parallel Computing Implementations
Description 8: This section reviews various implementations and parallelization methods for intrusion detection using GPUs, examining several algorithms and techniques that enhance IDS performance.
Section 9: Intrusion Detection Tools Using GPU
Description 9: This section introduces specific tools and systems that utilize GPU for intrusion detection, describing their functionalities and effectiveness.
Section 10: Snort
Description 10: This section provides an overview of Snort, an open-source IDS solution using GPU technologies, discussing its components, architecture, and advantages.
Section 11: Pixel-snort
Description 11: This section discusses Pixel-Snort, an extension of Snort, focusing on offloading packet processing tasks to GPU to enhance performance.
Section 12: Gnort
Description 12: This section details Gnort, an IDS tool that leverages GPGPU technologies for improved packet processing and comparison performance.
Section 13: Suricata
Description 13: This section introduces Suricata, an advanced IDS project that supports GPU and multithreading for efficient load distribution and high performance.
Section 14: Conclusion
Description 14: This section concludes the survey by summarizing the advancements in GPGPU technologies for IDS, discussing current prototypes, and emphasizing the potential for future developments in parallel computing for intrusion detection. |
A Survey on Resilient Machine Learning | 7 | ---
paper_title: Stealing Machine Learning Models via Prediction APIs
paper_content:
Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service ("predictive analytics") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., "steal") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures.
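To make the extraction idea concrete, the hedged sketch below recovers the parameters of a logistic regression from d + 1 confidence-returning queries. The victim_api function is a local stand-in that we define ourselves, not a real ML-as-a-service endpoint, and the equation-solving step shown here is only the simplest of the attacks the paper describes.
```python
# Equation-solving extraction of a logistic regression through a confidence-returning API.
import numpy as np

rng = np.random.default_rng(1)
d = 5
w_true, b_true = rng.normal(size=d), 0.3

def victim_api(x):
    """Stand-in for a pay-per-query API that returns P(y = 1 | x)."""
    return 1.0 / (1.0 + np.exp(-(x @ w_true + b_true)))

# Each confidence value gives one linear equation in (w, b), because
# log(p / (1 - p)) = w.x + b for a logistic regression model.
X = rng.normal(size=(d + 1, d))                 # d + 1 attacker-chosen queries
probs = victim_api(X)
logits = np.log(probs / (1.0 - probs))
A = np.hstack([X, np.ones((d + 1, 1))])
theta = np.linalg.solve(A, logits)              # exact with d + 1 independent queries
w_hat, b_hat = theta[:d], theta[d]

print("max weight error:", np.max(np.abs(w_hat - w_true)))
print("bias error:      ", abs(b_hat - b_true))
```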
---
paper_title: Intriguing properties of neural networks
paper_content:
Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.
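The core idea, that a small input perturbation aligned with the loss gradient flips a prediction, can be shown with a very simple differentiable model. The sketch below is our own illustration on a synthetic logistic regression, not the paper's box-constrained L-BFGS procedure on deep networks; all weights and inputs are randomly generated.
```python
# Flipping a prediction with a small gradient-aligned perturbation (logistic regression).
import numpy as np

rng = np.random.default_rng(2)
d = 100
w, b = rng.normal(size=d), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=d)                      # a "clean" input
y = 1.0 if (x @ w + b) >= 0 else 0.0        # the model's own prediction on x

# Gradient of the cross-entropy loss with respect to the *input* is (p - y) * w.
p = sigmoid(x @ w + b)
grad_x = (p - y) * w

# Smallest uniform step along sign(grad) that pushes the logit just past the boundary;
# it stays small relative to the unit-scale features.
eps = (abs(x @ w + b) + 0.5) / np.sum(np.abs(w))
x_adv = x + eps * np.sign(grad_x)

print("per-feature perturbation eps:", round(float(eps), 4))
print("clean prediction     :", round(float(sigmoid(x @ w + b)), 4))
print("perturbed prediction :", round(float(sigmoid(x_adv @ w + b)), 4))
```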
---
paper_title: Stackelberg games for adversarial prediction problems
paper_content:
The standard assumption of identically distributed training and test data is violated when test data are generated in response to a predictive model. This becomes apparent, for example, in the context of email spam filtering, where an email service provider employs a spam filter and the spam sender can take this filter into account when generating new emails. We model the interaction between learner and data generator as a Stackelberg competition in which the learner plays the role of the leader and the data generator may react on the leader's move. We derive an optimization problem to determine the solution of this game and present several instances of the Stackelberg prediction game. We show that the Stackelberg prediction game generalizes existing prediction models. Finally, we explore properties of the discussed models empirically in the context of email spam filtering.
---
paper_title: Adversarial learning
paper_content:
Many classification tasks, such as spam filtering, intrusion detection, and terrorism detection, are complicated by an adversary who wishes to avoid detection. Previous work on adversarial classification has made the unrealistic assumption that the attacker has perfect knowledge of the classifier [2]. In this paper, we introduce the adversarial classifier reverse engineering (ACRE) learning problem, the task of learning sufficient information about a classifier to construct adversarial attacks. We present efficient algorithms for reverse engineering linear classifiers with either continuous or Boolean features and demonstrate their effectiveness using real data from the domain of spam filtering.
---
paper_title: The random subspace method for constructing decision forests
paper_content:
Much of previous attention on decision trees focuses on the splitting criteria and optimization of tree sizes. The dilemma between overfitting and achieving maximum accuracy is seldom resolved. A method to construct a decision tree based classifier is proposed that maintains highest accuracy on training data and improves on generalization accuracy as it grows in complexity. The classifier consists of multiple trees constructed systematically by pseudorandomly selecting subsets of components of the feature vector, that is, trees constructed in randomly chosen subspaces. The subspace method is compared to single-tree classifiers and other forest construction methods by experiments on publicly available datasets, where the method's superiority is demonstrated. We also discuss independence between trees in a forest and relate that to the combined classification accuracy.
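A minimal scikit-learn sketch of the random subspace construction follows; the synthetic data and parameter choices are our own illustrative setup, not the paper's experiments. Each tree sees every training sample but only a random half of the features.
```python
# Random subspace method: a forest of trees, each trained on a random feature subset.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=40, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=50,
    max_samples=1.0,      # keep every training sample ...
    bootstrap=False,
    max_features=0.5,     # ... but give each tree a random half of the features
    random_state=0,
)
forest.fit(X_tr, y_tr)

single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("single tree accuracy:  ", single.score(X_te, y_te))
print("random subspace forest:", forest.score(X_te, y_te))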
---
paper_title: Polymorphic blending attacks
paper_content:
A very effective means to evade signature-based intrusion detection systems (IDS) is to employ polymorphic techniques to generate attack instances that do not share a fixed signature. Anomaly-based intrusion detection systems provide good defense because existing polymorphic techniques can make the attack instances look different from each other, but cannot make them look like normal. In this paper we introduce a new class of polymorphic attacks, called polymorphic blending attacks, that can effectively evade byte frequency-based network anomaly IDS by carefully matching the statistics of the mutated attack instances to the normal profiles. The proposed polymorphic blending attacks can be viewed as a subclass of the mimicry attacks. We take a systematic approach to the problem and formally describe the algorithms and steps required to carry out such attacks. We not only show that such attacks are feasible but also analyze the hardness of evasion under different circumstances. We present detailed techniques using PAYL, a byte frequency-based anomaly IDS, as a case study and demonstrate that these attacks are indeed feasible. We also provide some insight into possible countermeasures that can be used as defense.
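The "blending" step can be illustrated numerically. The toy sketch below, entirely our own construction, only shows how padding shifts a payload's byte-frequency histogram toward a learned normal profile; it ignores everything that makes the real attack hard, such as keeping the payload functional under encoding and protocol constraints.
```python
# Toy byte-frequency blending: pad an attack payload with profile-sampled bytes.
import numpy as np

rng = np.random.default_rng(3)

def byte_hist(data: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

# Byte-frequency profile a detector might learn from "normal" traffic.
normal_profile = byte_hist(b"GET /index.html HTTP/1.1\r\nHost: example.test\r\n\r\n" * 50)
normal_profile = normal_profile / normal_profile.sum()

attack = rng.integers(0, 256, size=200, dtype=np.uint8).tobytes()   # stand-in payload

def l1_distance(payload: bytes) -> float:
    return float(np.abs(byte_hist(payload) - normal_profile).sum())

# "Blend" by appending padding bytes sampled from the normal profile.
padding = rng.choice(256, size=4 * len(attack), p=normal_profile)
blended = attack + padding.astype(np.uint8).tobytes()

print("L1 distance to normal profile, before:", round(l1_distance(attack), 3))
print("L1 distance to normal profile, after: ", round(l1_distance(blended), 3))
```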
---
paper_title: Nightmare at test time: robust learning by feature deletion
paper_content:
When constructing a classifier from labeled data, it is important not to assign too much weight to any single input feature, in order to increase the robustness of the classifier. This is particularly important in domains with nonstationary feature distributions or with input sensor failures. A common approach to achieving such robustness is to introduce regularization which spreads the weight more evenly between the features. However, this strategy is very generic, and cannot induce robustness specifically tailored to the classification task at hand. In this work, we introduce a new algorithm for avoiding single feature over-weighting by analyzing robustness using a game theoretic formalization. We develop classifiers which are optimally resilient to deletion of features in a minimax sense, and show how to construct such classifiers using quadratic programming. We illustrate the applicability of our methods on spam filtering and handwritten digit recognition tasks, where feature deletion is indeed a realistic noise model.
---
paper_title: Bagging Predictors
paper_content:
Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.
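A short sketch of bootstrap aggregation with decision trees (synthetic data, our own illustrative code) is shown below; each tree is trained on a bootstrap replicate of the learning set and predictions are combined by plurality vote, exactly the construction the abstract describes.
```python
# Bagging: bootstrap replicates of the training set, aggregated by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

trees = []
for _ in range(30):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))    # bootstrap replicate
    trees.append(DecisionTreeClassifier().fit(X_tr[idx], y_tr[idx]))

votes = np.mean([t.predict(X_te) for t in trees], axis=0)
bagged_pred = (votes >= 0.5).astype(int)                # plurality vote

single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("single tree accuracy:", single.score(X_te, y_te))
print("bagged accuracy:     ", np.mean(bagged_pred == y_te))
```
The gain comes from the instability of the base learner: perturbing the training set changes each tree substantially, and averaging those unstable predictors reduces variance.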
---
paper_title: Static prediction games for adversarial learning problems
paper_content:
The standard assumption of identically distributed training and test data is violated when the test data are generated in response to the presence of a predictive model. This becomes apparent, for example, in the context of email spam filtering. Here, email service providers employ spam filters, and spam senders engineer campaign templates to achieve a high rate of successful deliveries despite the filters. We model the interaction between the learner and the data generator as a static game in which the cost functions of the learner and the data generator are not necessarily antagonistic. We identify conditions under which this prediction game has a unique Nash equilibrium and derive algorithms that find the equilibrial prediction model. We derive two instances, the Nash logistic regression and the Nash support vector machine, and empirically explore their properties in a case study on email spam filtering.
---
paper_title: Can machine learning be secure?
paper_content:
Machine learning systems offer unparalleled flexibility in dealing with evolving input in a variety of applications, such as intrusion detection systems and spam e-mail filtering. However, machine learning algorithms themselves can be a target of attack by a malicious adversary. This paper provides a framework for answering the question, "Can machine learning be secure?" Novel contributions of this paper include a taxonomy of different types of attacks on machine learning techniques and systems, a variety of defenses against those attacks, a discussion of ideas that are important to security for machine learning, an analytical model giving a lower bound on the attacker's work function, and a list of open problems.
---
paper_title: Robustness of multimodal biometric fusion methods against spoof attacks
paper_content:
In this paper, we address the security of multimodal biometric systems when one of the modes is successfully spoofed. We propose two novel fusion schemes that can increase the security of multimodal biometric systems. The first is an extension of the likelihood ratio based fusion scheme and the other uses fuzzy logic. Besides the matching score and sample quality score, our proposed fusion schemes also take into account the intrinsic security of each biometric system being fused. Experimental results have shown that the proposed methods are more robust against spoof attacks when compared with traditional fusion methods.
---
paper_title: Adversarial classification
paper_content:
Essentially all data mining algorithms assume that the data-generating process is independent of the data miner's activities. However, in many domains, including spam detection, intrusion detection, fraud detection, surveillance and counter-terrorism, this is far from the case: the data is actively manipulated by an adversary seeking to make the classifier produce false negatives. In these domains, the performance of a classifier can degrade rapidly after it is deployed, as the adversary learns to defeat it. Currently the only solution to this is repeated, manual, ad hoc reconstruction of the classifier. In this paper we develop a formal framework and algorithms for this problem. We view classification as a game between the classifier and the adversary, and produce a classifier that is optimal given the adversary's optimal strategy. Experiments in a spam detection domain show that this approach can greatly outperform a classifier learned in the standard way, and (within the parameters of the problem) automatically adapt the classifier to the adversary's evolving manipulations.
---
paper_title: Nash equilibria of static prediction games
paper_content:
The standard assumption of identically distributed training and test data is violated when an adversary can exercise some control over the generation of the test data. In a prediction game, a learner produces a predictive model while an adversary may alter the distribution of input data. We study single-shot prediction games in which the cost functions of learner and adversary are not necessarily antagonistic. We identify conditions under which the prediction game has a unique Nash equilibrium, and derive algorithms that will find the equilibrial prediction models. In a case study, we explore properties of Nash-equilibrial prediction models for email spam filtering empirically.
---
paper_title: Multiple classifier systems for robust classifier design in adversarial environments
paper_content:
Pattern recognition systems are increasingly being used in adversarial environments like network intrusion detection, spam filtering and biometric authentication and verification systems, in which an adversary may adaptively manipulate data to make a classifier ineffective. Current theory and design methods of pattern recognition systems do not take into account the adversarial nature of such kind of applications. Their extension to adversarial settings is thus mandatory, to safeguard the security and reliability of pattern recognition systems in adversarial environments. In this paper we focus on a strategy recently proposed in the literature to improve the robustness of linear classifiers to adversarial data manipulation, and experimentally investigate whether it can be implemented using two well known techniques for the construction of multiple classifier systems, namely, bagging and the random subspace method. Our results provide some hints on the potential usefulness of classifier ensembles in adversarial classification tasks, which is different from the motivations suggested so far in the literature.
---
paper_title: Paragraph: Thwarting Signature Learning by Training Maliciously
paper_content:
Defending a server against Internet worms and defending a user's email inbox against spam bear certain similarities. In both cases, a stream of samples arrives, and a classifier must automatically determine whether each sample falls into a malicious target class (e.g., worm network traffic, or spam email). A learner typically generates a classifier automatically by analyzing two labeled training pools: one of innocuous samples, and one of samples that fall in the malicious target class. Learning techniques have previously found success in settings where the content of the labeled samples used in training is either random, or even constructed by a helpful teacher, who aims to speed learning of an accurate classifier. In the case of learning classifiers for worms and spam, however, an adversary controls the content of the labeled samples to a great extent. In this paper, we describe practical attacks against learning, in which an adversary constructs labeled samples that, when used to train a learner, prevent or severely delay generation of an accurate classifier. We show that even a delusive adversary, whose samples are all correctly labeled, can obstruct learning. We simulate and implement highly effective instances of these attacks against the Polygraph [15] automatic polymorphic worm signature generation algorithms.
---
paper_title: Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
paper_content:
Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications such as predicting lifestyle choices, making medical diagnoses, and facial recognition. In a model inversion attack, recently introduced in a case study of linear classifiers in personalized medicine by Fredrikson et al., adversarial access to an ML model is abused to learn sensitive genomic information about individuals. Whether model inversion attacks apply to settings outside theirs, however, is unknown. We develop a new class of model inversion attack that exploits confidence values revealed along with predictions. Our new attacks are applicable in a variety of settings, and we explore two in depth: decision trees for lifestyle surveys as used on machine-learning-as-a-service systems and neural networks for facial recognition. In both cases confidence values are revealed to those with the ability to make prediction queries to models. We experimentally show attacks that are able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and, in the other context, show how to recover recognizable images of people's faces given only their name and access to the ML model. We also initiate experimental exploration of natural countermeasures, investigating a privacy-aware decision tree training algorithm that is a simple variant of CART learning, as well as revealing only rounded confidence values. The lesson that emerges is that one can avoid these kinds of MI attacks with negligible degradation to utility.
---
paper_title: Privacy in pharmacogenetics: an end-to-end case study of personalized warfarin dosing
paper_content:
We initiate the study of privacy in pharmacogenetics, wherein machine learning models are used to guide medical treatments based on a patient's genotype and background. Performing an in-depth case study on privacy in personalized warfarin dosing, we show that suggested models carry privacy risks, in particular because attackers can perform what we call model inversion: an attacker, given the model and some demographic information about a patient, can predict the patient's genetic markers. As differential privacy (DP) is an oft-proposed solution for medical settings such as this, we evaluate its effectiveness for building private versions of pharmacogenetic models. We show that DP mechanisms prevent our model inversion attacks when the privacy budget is carefully selected. We go on to analyze the impact on utility by performing simulated clinical trials with DP dosing models. We find that for privacy budgets effective at preventing attacks, patients would be exposed to increased risk of stroke, bleeding events, and mortality. We conclude that current DP mechanisms do not simultaneously improve genomic privacy while retaining desirable clinical efficacy, highlighting the need for new mechanisms that should be evaluated in situ using the general methodology introduced by our work.
---
paper_title: Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
paper_content:
Machine learning (ML) enables computers to learn how to recognise patterns, make unintended decisions, or react to a dynamic environment. The effectiveness of trained machines varies because of more suitable ML algorithms or because of superior training sets. Although ML algorithms are known and publicly released, training sets may not be reasonably ascertainable and, indeed, may be guarded as trade secrets. In this paper we focus our attention on ML classifiers and on the statistical information that can be unconsciously or maliciously revealed from them. We show that it is possible to infer unexpected but useful information from ML classifiers. In particular, we build a novel meta-classifier and train it to hack other classifiers, obtaining meaningful information about their training sets. Such information leakage can be exploited, for example, by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor's apparatus, potentially violating its intellectual property rights.
---
paper_title: Membership Inference Attacks Against Machine Learning Models
paper_content:
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
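A much-simplified sketch of the shadow-model idea follows: train a shadow model on attacker-controlled data, label its confidence outputs as member or non-member, fit an attack classifier on those outputs, and apply it to a separate target model. It uses a single shadow model and a one-feature attack classifier on synthetic data, so it only illustrates the mechanism (overfitting leaks membership), not the paper's per-class pipeline or its reported results.
```python
# Simplified membership inference with one shadow model and a confidence-based attack model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10, random_state=0)
X_tgt_in, y_tgt_in = X[:1000], y[:1000]           # target's private training set
X_tgt_out = X[1000:2000]                          # records the target never saw
X_sh_in, y_sh_in = X[2000:3000], y[2000:3000]     # attacker-controlled shadow data
X_sh_out = X[3000:]

target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tgt_in, y_tgt_in)
shadow = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_sh_in, y_sh_in)

def confidence(model, data):
    return model.predict_proba(data).max(axis=1, keepdims=True)  # predicted-class prob.

# Train the attack model on the shadow model's behaviour (member vs. non-member).
att_X = np.vstack([confidence(shadow, X_sh_in), confidence(shadow, X_sh_out)])
att_y = np.concatenate([np.ones(len(X_sh_in)), np.zeros(len(X_sh_out))])
attack = LogisticRegression().fit(att_X, att_y)

# Apply it to the target model.
test_X = np.vstack([confidence(target, X_tgt_in), confidence(target, X_tgt_out)])
test_y = np.concatenate([np.ones(len(X_tgt_in)), np.zeros(len(X_tgt_out))])
print("membership inference accuracy (0.5 = chance):", attack.score(test_X, test_y))
```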
---
paper_title: Improving Generalization with Active Learning
paper_content:
Active learning differs from "learning from examples" in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples. In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers "useful." We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization.
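The selective-sampling idea can be illustrated with a simple pool-based uncertainty-sampling loop. This is our own sketch on synthetic data, not the SG-network of the paper: the learner repeatedly queries the label of the pool example it is least certain about instead of learning from randomly drawn examples.
```python
# Pool-based uncertainty sampling: query labels only where the learner is least certain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_pool, y_pool = X[:1500], y[:1500]     # "unlabeled" pool; labels revealed on query
X_test, y_test = X[1500:], y[1500:]

rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X_pool), size=10, replace=False))   # small seed set

for _ in range(40):                                 # 40 label queries to the oracle
    clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(proba - 0.5)              # closest to 0.5 = most uncertain
    uncertainty[labeled] = -np.inf                  # never re-query a labeled point
    labeled.append(int(np.argmax(uncertainty)))

clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
print("labels queried:", len(labeled))
print("test accuracy :", clf.score(X_test, y_test))
```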
---
paper_title: Stealing Machine Learning Models via Prediction APIs
paper_content:
Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service ("predictive analytics") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., "steal") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures.
---
paper_title: Practical Black-Box Attacks against Machine Learning
paper_content:
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
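The substitute-model strategy can be sketched end-to-end on synthetic data. The code below is our own simplified illustration: a local random forest stands in for the remotely hosted victim, the substitute is a logistic regression (as in the paper's Amazon/Google experiments), and the paper's Jacobian-based synthetic data augmentation is omitted for brevity.
```python
# Black-box attack via a local substitute trained only on the oracle's labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=20, n_informative=12, random_state=0)
victim = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:2000], y[:2000])

# 1. Query the "oracle" (labels only) on attacker-chosen inputs.
rng = np.random.default_rng(0)
X_query = X[2000:2400] + rng.normal(scale=0.1, size=(400, 20))
y_oracle = victim.predict(X_query)

# 2. Train a local substitute on the oracle's labels.
substitute = LogisticRegression(max_iter=1000).fit(X_query, y_oracle)
w = substitute.coef_[0]

# 3. Craft fast-gradient-sign-style perturbations against the substitute ...
X_clean, y_true = X[2400:2600], y[2400:2600]
pred = victim.predict(X_clean)
step = np.sign(np.where(pred[:, None] == 1, -w, w))   # push each input across the boundary
X_adv = X_clean + 0.5 * step

# 4. ... and check whether they transfer to the victim.
print("victim accuracy on clean inputs    :", victim.score(X_clean, y_true))
print("victim accuracy on perturbed inputs:", victim.score(X_adv, y_true))
```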
---
paper_title: DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
paper_content:
State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.1
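For an affine binary classifier the DeepFool step has a closed form, which the short sketch below illustrates with synthetic weights of our own choosing: the minimal L2 perturbation that reaches the decision boundary of f(x) = w.x + b is r = -f(x) w / ||w||^2. The full algorithm iterates this linearization for deep, multiclass networks.
```python
# The linear case behind DeepFool: project the input onto the decision hyperplane.
import numpy as np

rng = np.random.default_rng(4)
d = 50
w, b = rng.normal(size=d), 0.1
x = rng.normal(size=d)

fx = x @ w + b
r = -(fx / np.dot(w, w)) * w             # orthogonal step onto the hyperplane
x_adv = x + 1.02 * r                     # overshoot slightly to actually cross it

print("f(x)     =", round(float(fx), 4))
print("f(x + r) =", round(float(x_adv @ w + b), 4))   # sign has flipped
print("||r||_2  =", round(float(np.linalg.norm(r)), 4))
```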
---
paper_title: The Limitations of Deep Learning in Adversarial Settings
paper_content:
Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
---
paper_title: Practical Black-Box Attacks against Machine Learning
paper_content:
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
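The sketch below mirrors the substitute-training loop at a toy scale: `oracle` is a hypothetical callable standing in for the remote, label-only victim, the substitute is a scikit-learn logistic regression (as in the Amazon/Google experiments) rather than a DNN, and both the augmentation step and the crafting step are simplified, binary-label approximations of the procedure described above.

```python
# Toy substitute-model attack against a label-only black-box classifier.
# `oracle` is assumed to take an (n, d) array and return 0/1 labels for each row.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_substitute(oracle, X_seed, n_rounds=3, lam=0.1):
    """Iteratively label synthetic points with the oracle and refit a local substitute."""
    X = X_seed.astype(float).copy()
    sub = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        y = np.asarray(oracle(X))                 # the only access to the victim: labels
        sub.fit(X, y)
        w = sub.coef_.ravel()
        # Simplified Jacobian-based augmentation: push each point toward its assigned class.
        step = np.where(y[:, None] == 1, np.sign(w), -np.sign(w))
        X = np.vstack([X, X + lam * step])
    return sub

def craft_adversarial(sub, X, y, eps=0.25):
    """FGSM-style step on the substitute; the outputs are then submitted to the black box."""
    w = sub.coef_.ravel()
    p = sub.predict_proba(X)[:, 1]                # probability of class 1 (labels assumed 0/1)
    grad = (p - y)[:, None] * w                   # d(cross-entropy)/dx for logistic regression
    return X + eps * np.sign(grad)
```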
---
paper_title: Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
paper_content:
Many machine learning models are vulnerable to adversarial examples: inputs that are specially crafted to cause a machine learning model to produce an incorrect output. Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. An attacker may therefore train their own substitute model, craft adversarial examples against the substitute, and transfer them to a victim model, with very little information about the victim. Recent work has further developed a technique that uses the victim model as an oracle to label a synthetic training set for the substitute, so the attacker need not even collect a training set to mount the attack. We extend these recent techniques using reservoir sampling to greatly enhance the efficiency of the training procedure for the substitute model. We introduce new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees. We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19% misclassification rate) and Google (88.94%) using only 800 queries of the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure.
---
paper_title: Intriguing properties of neural networks
paper_content:
Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.
---
paper_title: Explaining and Harnessing Adversarial Examples
paper_content:
Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
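The "simple and fast method" referred to above is the fast gradient sign method, x' = x + eps * sign(grad_x J(x, y)). The sketch below writes it out for a linear softmax classifier, where the input gradient of the cross-entropy loss has a closed form; with a neural network the gradient would come from backpropagation, and the [0, 1] clipping assumes image-like inputs.

```python
# Fast gradient sign method (FGSM) for a linear softmax classifier.
import numpy as np

def fgsm_softmax(X, y, W, b, eps=0.1):
    """X: (n, d) inputs, y: (n,) integer labels, W: (K, d), b: (K,)."""
    y = np.asarray(y)
    logits = X @ W.T + b
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0                        # p - one_hot(y)
    grad = p @ W                                          # d(cross-entropy)/dx
    return np.clip(X + eps * np.sign(grad), 0.0, 1.0)     # perturb, keep valid pixel range
```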
---
paper_title: Autoencoding beyond pixels using a learned similarity metric
paper_content:
We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
---
paper_title: Adversarial Examples for Generative Models
paper_content:
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
---
paper_title: Query Strategies for Evading Convex-Inducing Classifiers
paper_content:
Classifiers are often used to detect miscreant activities. We study how an adversary can systematically query a classifier to elicit information that allows the adversary to evade detection while incurring a near-minimal cost of modifying their intended malfeasance. We generalize the theory of Lowd and Meek (2005) to the family of convex-inducing classifiers that partition input space into two sets, one of which is convex. We present query algorithms for this family that construct undetected instances of approximately minimal cost using only polynomially-many queries in the dimension of the space and in the level of approximation. Our results demonstrate that near-optimal evasion can be accomplished without reverse-engineering the classifier's decision boundary. We also consider general lp costs and show that near-optimal evasion on the family of convex-inducing classifiers is generally efficient for both positive and negative convexity for all levels of approximation if p=1.
---
paper_title: Optimal randomized classification in adversarial settings
paper_content:
The problem of learning to distinguish good inputs from malicious has come to be known as adversarial classification emphasizing the fact that, unlike traditional classification, the adversary can manipulate input instances to avoid being so classified. We offer the first general theoretical analysis of the problem of adversarial classification, resolving several important open questions in the process. First, we significantly generalize previous results on adversarial classifier reverse engineering (ACRE), showing that if a classifier can be efficiently learned, it can subsequently be efficiently reverse engineered with arbitrary precision. We extend this result to randomized classification schemes, but now observe that reverse engineering is imperfect, and its efficacy depends on the defender's randomization scheme. Armed with this insight, we proceed to characterize optimal randomization schemes in the face of adversarial reverse engineering and classifier manipulation. What we find is quite surprising: in all the model variations we consider, the defender's optimal policy tends to be either to randomize uniformly (ignoring baseline classification accuracy), which is the case for targeted attacks, or not to randomize at all, which is typically optimal when attacks are indiscriminate.
---
paper_title: DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
paper_content:
State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.
---
paper_title: The Limitations of Deep Learning in Adversarial Settings
paper_content:
Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
---
paper_title: Deep Text Classification Can be Fooled
paper_content:
In this paper, we present an effective method to craft text adversarial samples, revealing one important yet underestimated fact that DNN-based text classifiers are also prone to adversarial sample attack. Specifically, confronted with different adversarial scenarios, the text items that are important for classification are identified by computing the cost gradients of the input (white-box attack) or generating a series of occluded test samples (black-box attack). Based on these items, we design three perturbation strategies, namely insertion, modification, and removal, to generate adversarial samples. The experiment results show that the adversarial samples generated by our method can successfully fool both state-of-the-art character-level and word-level DNN-based text classifiers. The adversarial samples can be perturbed to any desirable classes without compromising their utilities. At the same time, the introduced perturbation is difficult to be perceived.
---
paper_title: Explaining and Harnessing Adversarial Examples
paper_content:
Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
---
paper_title: Evasion Attacks against Machine Learning at Test Time
paper_content:
In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
---
paper_title: Security Evaluation of Pattern Classifiers under Attack
paper_content:
Pattern classification systems are commonly used in adversarial applications, like biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities, whose exploitation may severely affect their performance, and consequently limit their practical utility. Extending pattern classification theory and design methods to adversarial settings is thus a novel and very relevant research direction, which has not yet been pursued in a systematic way. In this paper, we address one of the main open issues: evaluating at design phase the security of pattern classifiers, namely, the performance degradation under potential attacks they may incur during operation. We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier's behavior in adversarial environments, and lead to better design choices.
---
paper_title: ANTIDOTE: understanding and defending against poisoning of anomaly detectors
paper_content:
Statistical machine learning techniques have recently garnered increased popularity as a means to improve network design and security. For intrusion detection, such methods build a model for normal behavior from training data and detect attacks as deviations from that model. This process invites adversaries to manipulate the training data so that the learned model fails to detect subsequent attacks. We evaluate poisoning techniques and develop a defense, in the context of a particular anomaly detector - namely the PCA-subspace method for detecting anomalies in backbone networks. For three poisoning schemes, we show how attackers can substantially increase their chance of successfully evading detection by only adding moderate amounts of poisoned data. Moreover such poisoning throws off the balance between false positives and false negatives thereby dramatically reducing the efficacy of the detector. To combat these poisoning activities, we propose an antidote based on techniques from robust statistics and present a new robust PCA-based detector. Poisoning has little effect on the robust model, whereas it significantly distorts the model produced by the original PCA method. Our technique substantially reduces the effectiveness of poisoning for a variety of scenarios and indeed maintains a significantly better balance between false positives and false negatives than the original method when under attack.
---
paper_title: Security Analysis of Online Centroid Anomaly Detection
paper_content:
Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). In such cases, learning algorithms may have to cope with manipulated data aimed at hampering decision making. Although some previous work addressed the issue of handling malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method--online centroid anomaly detection--in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of learning and attack processes, derivation of an optimal attack, and analysis of attack efficiency and limitations. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly detection under different conditions: attacker's full or limited control over the traffic and bounded false positive rate. Our bounds show that whereas a poisoning attack can be effectively staged in the unconstrained case, it can be made arbitrarily difficult (a strict upper bound on the attacker's gain) if external constraints are properly used. Our experimental evaluation, carried out on real traces of HTTP and exploit traffic, confirms the tightness of our theoretical bounds and the practicality of our protection mechanisms.
---
paper_title: Poisoning Attacks against Support Vector Machines
paper_content:
We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM's test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM's decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM's optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier's test error.
---
paper_title: Adversarial Label Flips Attack on Support Vector Machines
paper_content:
To develop a robust classification algorithm in the adversarial setting, it is important to understand the adversary's strategy. We address the problem of label flips attack where an adversary contaminates the training set through flipping labels. By analyzing the objective of the adversary, we formulate an optimization framework for finding the label flips that maximize the classification error. An algorithm for attacking support vector machines is derived. Experiments demonstrate that the accuracy of classifiers is significantly degraded under the attack.
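A brute-force stand-in for the optimization framework described above: greedily flip, one at a time, the training label whose flip most degrades a retrained SVM on held-out data. The flip budget, the linear kernel, and the use of scikit-learn's SVC are assumptions made only for illustration.

```python
# Greedy label-flip poisoning sketch against a binary SVM (labels assumed to be 0/1).
import numpy as np
from sklearn.svm import SVC

def greedy_label_flips(X_tr, y_tr, X_val, y_val, budget=5):
    y_pois = np.asarray(y_tr).copy()
    for _ in range(budget):
        best_err, best_i = -1.0, None
        for i in range(len(y_pois)):
            y_try = y_pois.copy()
            y_try[i] = 1 - y_try[i]                   # tentatively flip one label
            clf = SVC(kernel="linear").fit(X_tr, y_try)
            err = 1.0 - clf.score(X_val, y_val)       # validation error after retraining
            if err > best_err:
                best_err, best_i = err, i
        y_pois[best_i] = 1 - y_pois[best_i]           # commit the most damaging flip
    return y_pois                                     # poisoned label vector
```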
---
paper_title: Data Poisoning Attacks on Factorization-Based Collaborative Filtering
paper_content:
Recommendation and collaborative filtering systems are important in modern information and e-commerce applications. As these systems are becoming increasingly popular in the industry, their outputs could affect business decision making, introducing incentives for an adversarial party to compromise the availability or integrity of such systems. We introduce a data poisoning attack on collaborative filtering systems. We demonstrate how a powerful attacker with full knowledge of the learner can generate malicious data so as to maximize his/her malicious objectives, while at the same time mimicking normal user behavior to avoid being detected. While the complete knowledge assumption seems extreme, it enables a robust assessment of the vulnerability of collaborative filtering schemes to highly motivated attacks. We present efficient solutions for two popular factorization-based collaborative filtering algorithms: the alternative minimization formulation and the nuclear norm minimization method. Finally, we test the effectiveness of our proposed algorithms on real-world data and discuss potential defensive strategies.
---
paper_title: Distilling the Knowledge in a Neural Network
paper_content:
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
---
paper_title: Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
paper_content:
Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
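The mechanism hinges on a temperature-scaled softmax: the first network's softened predictions at temperature T become the training labels for the distilled network, which is then deployed at T = 1 so that the gradients available to an adversary are strongly attenuated. A minimal sketch of the temperature softmax, with T = 20 used only as an illustrative value:

```python
# Temperature-scaled softmax, the core ingredient of (defensive) distillation.
import numpy as np

def softmax_T(logits, T=20.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)            # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Usage sketch: soft_labels = softmax_T(teacher_logits, T=20.0); the distilled
# network is trained against soft_labels (also at temperature T) and evaluated at T=1.
```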
---
paper_title: Adversarial Training Methods for Semi-Supervised Text Classification
paper_content:
Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that while training, the model is less prone to overfitting.
---
| Title: A Survey on Resilient Machine Learning
Section 1: INTRODUCTION
Description 1: Provide an overview of the importance of machine learning across various application domains and introduce the concept of adversarial attacks.
Section 2: EARLIER RELATED WORK
Description 2: Discuss early research on attacking machine learning models, particularly in domains like spam filtering, anti-malware, and biometric verification.
Section 3: EXPLORATORY ATTACKS
Description 3: Explore different methodologies adversaries use to extract information from machine learning models without altering the training process.
Section 4: EVASION ATTACKS
Description 4: Examine techniques used to craft inputs that evade detection by making machine learning algorithms misclassify them as benign.
Section 5: POISONING ATTACKS
Description 5: Analyze strategies for poisoning machine learning models by altering their training data to degrade performance or introduce specific biases.
Section 6: SUMMARY
Description 6: Summarize early work, exploratory attacks, evasion attacks, and poisoning attacks, providing concise overviews of each approach and application.
Section 7: EMERGING RESEARCH DIRECTIONS
Description 7: Discuss future research directions and open problems in making machine learning algorithms resilient to adversarial and poisoning attacks. |
Resampling Detection in Digital Images: A Survey | 12 | ---
paper_title: Fast and reliable resampling detection by spectral analysis of fixed linear predictor residue
paper_content:
This paper revisits the state-of-the-art resampling detector, which is based on periodic artifacts in the residue of a local linear predictor. Inspired by recent findings from the literature, we take a closer look at the complex detection procedure and model the detected artifacts in the spatial and frequency domain by means of the variance of the prediction residue. We give an exact formulation on how transformation parameters influence the appearance of periodic artifacts and analytically derive the expected position of characteristic resampling peaks. We present an equivalent accelerated and simplified detector, which is orders of magnitudes faster than the conventional scheme and experimentally shown to be comparably reliable.
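A simplified version of the accelerated detector can be prototyped in a few lines: filter the image with a fixed linear predictor, map the absolute residue through a contrast function, and look for strong off-center peaks in its magnitude spectrum. The predictor weights below are the commonly used fixed 3x3 coefficients; the contrast function and the peak threshold are assumptions for this sketch.

```python
# Simplified fixed-linear-predictor resampling detector (spectrum of the residue).
import numpy as np
from scipy.signal import convolve2d

PREDICTOR = np.array([[-0.25, 0.5, -0.25],
                      [ 0.50, 0.0,  0.50],
                      [-0.25, 0.5, -0.25]])

def residue_spectrum(img):
    img = img.astype(float)
    residue = img - convolve2d(img, PREDICTOR, mode="same", boundary="symm")
    pmap = np.exp(-np.abs(residue) / (residue.std() + 1e-12))   # contrast function
    spec = np.abs(np.fft.fftshift(np.fft.fft2(pmap - pmap.mean())))
    return spec / (spec.max() + 1e-12)

def has_resampling_peaks(img, thresh=0.25):
    spec = residue_spectrum(img)
    h, w = spec.shape
    spec[h // 2 - 2:h // 2 + 3, w // 2 - 2:w // 2 + 3] = 0.0    # suppress the DC region
    return spec.max() > thresh                                   # strong off-center peak?
```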
---
paper_title: Blind Authentication Using Periodic Properties of Interpolation
paper_content:
In this paper, we analyze and analytically describe the specific statistical changes brought into the covariance structure of signal by the interpolation process. We show that interpolated signals and their derivatives contain specific detectable periodic properties. Based on this, we propose a blind, efficient, and automatic method capable of finding traces of resampling and interpolation. The proposed method can be very useful in many areas, especially in image security and authentication. For instance, when two or more images are spliced together, to create high quality and consistent image forgeries, almost always geometric transformations, such as scaling, rotation, or skewing are needed. These procedures are typically based on a resampling and interpolation step. By having a method capable of detecting the traces of resampling, we can significantly reduce the successful usage of such forgeries. Among other points, the presented method is also very useful in estimation of the geometric transformations factors.
---
paper_title: Detection of linear and cubic interpolation in JPEG compressed images
paper_content:
A novel algorithm is introduced that can detect the presence of interpolation in images prior to compression as well as estimate the interpolation factor. The interpolation detection algorithm exploits a periodicity in the second derivative signal of interpolated images. The algorithm performs well for a wide variety of interpolation factors, both integer factors and non-integer factors. The algorithm performance is noted with respect to a digital camera's "digital zoom" feature. Overall the algorithm has demonstrated robust results and might prove to be useful for situations where an original resolution of the image determines the action of an image processing chain.
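The detector can be sketched directly from the description above: take the second difference along image rows, average its magnitude over all rows, and inspect the DFT of that 1-D signal for a dominant non-DC peak whose frequency is tied to the interpolation factor. The normalization used below is an assumption.

```python
# Second-difference periodicity detector for interpolated images.
import numpy as np

def second_difference_spectrum(img):
    img = img.astype(float)
    d2 = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]   # second difference along rows
    signal = np.mean(np.abs(d2), axis=0)                # average magnitude over rows
    signal -= signal.mean()
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size)                # cycles per sample
    return freqs, spec / (spec.max() + 1e-12)

# A dominant peak at a non-zero frequency f suggests interpolation with a period of
# roughly 1/f pixels, from which the scaling factor can be estimated.
```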
---
paper_title: On Resampling Detection and its Application to Detect Image Tampering
paper_content:
Usually digital image forgeries are created by copy-pasting a portion of an image onto some other image. While doing so, it is often necessary to resize the pasted portion of the image to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which when detected serves as a clue of tampering. In this paper, we present deterministic techniques to detect resampling, and localize the portion of the image that has been tampered with. Two of the techniques are in pixel domain and two others in frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.
---
paper_title: Detection of linear and cubic interpolation in JPEG compressed images
paper_content:
A novel algorithm is introduced that can detect the presence of interpolation in images prior to compression as well as estimate the interpolation factor. The interpolation detection algorithm exploits a periodicity in the second derivative signal of interpolated images. The algorithm performs well for a wide variety of interpolation factors, both integer factors and non-integer factors. The algorithm performance is noted with respect to a digital camera's "digital zoom" feature. Overall the algorithm has demonstrated robust results and might prove to be useful for situations where an original resolution of the image determines the action of an image processing chain.
---
paper_title: On Resampling Detection and its Application to Detect Image Tampering
paper_content:
Usually digital image forgeries are created by copy-pasting a portion of an image onto some other image. While doing so, it is often necessary to resize the pasted portion of the image to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which when detected serves as a clue of tampering. In this paper, we present deterministic techniques to detect resampling, and localize the portion of the image that has been tampered with. Two of the techniques are in pixel domain and two others in frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.
---
paper_title: Blind Authentication Using Periodic Properties of Interpolation
paper_content:
In this paper, we analyze and analytically describe the specific statistical changes brought into the covariance structure of signal by the interpolation process. We show that interpolated signals and their derivatives contain specific detectable periodic properties. Based on this, we propose a blind, efficient, and automatic method capable of finding traces of resampling and interpolation. The proposed method can be very useful in many areas, especially in image security and authentication. For instance, when two or more images are spliced together, to create high quality and consistent image forgeries, almost always geometric transformations, such as scaling, rotation, or skewing are needed. These procedures are typically based on a resampling and interpolation step. By having a method capable of detecting the traces of resampling, we can significantly reduce the successful usage of such forgeries. Among other points, the presented method is also very useful in estimation of the geometric transformations factors.
---
paper_title: Detection of linear and cubic interpolation in JPEG compressed images
paper_content:
A novel algorithm is introduced that can detect the presence of interpolation in images prior to compression as well as estimate the interpolation factor. The interpolation detection algorithm exploits a periodicity in the second derivative signal of interpolated images. The algorithm performs well for a wide variety of interpolation factors, both integer factors and non-integer factors. The algorithm performance is noted with respect to a digital camera's "digital zoom" feature. Overall the algorithm has demonstrated robust results and might prove to be useful for situations where an original resolution of the image determines the action of an image processing chain.
---
paper_title: On Resampling Detection and its Application to Detect Image Tampering
paper_content:
Usually digital image forgeries are created by copy-pasting a portion of an image onto some other image. While doing so, it is often necessary to resize the pasted portion of the image to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which when detected serves as a clue of tampering. In this paper, we present deterministic techniques to detect resampling, and localize the portion of the image that has been tampered with. Two of the techniques are in pixel domain and two others in frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.
---
paper_title: On Resampling Detection and its Application to Detect Image Tampering
paper_content:
Usually digital image forgeries are created by copy-pasting a portion of an image onto some other image. While doing so, it is often necessary to resize the pasted portion of the image to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which when detected serves as a clue of tampering. In this paper, we present deterministic techniques to detect resampling, and localize the portion of the image that has been tampered with. Two of the techniques are in pixel domain and two others in frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.
---
paper_title: On Resampling Detection and its Application to Detect Image Tampering
paper_content:
Usually digital image forgeries are created by copy-pasting a portion of an image onto some other image. While doing so, it is often necessary to resize the pasted portion of the image to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which when detected serves as a clue of tampering. In this paper, we present deterministic techniques to detect resampling, and localize the portion of the image that has been tampered with. Two of the techniques are in pixel domain and two others in frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.
---
paper_title: A bibliography on blind methods for identifying image forgery
paper_content:
Verifying the integrity of digital images and detecting the traces of tampering without using any protecting pre-extracted or pre-embedded information have become an important and hot research field. The popularity of this field and the rapid growth in papers published during the last years have put considerable need on creating a complete bibliography addressing published papers in this area. In this paper, an extensive list of blind methods for detecting image forgery is presented. By the word blind we refer to those methods that use only the image function. An attempt has been made to make this paper complete by listing most of the existing references and by providing a detailed classification group.
---
paper_title: Detection of linear and cubic interpolation in JPEG compressed images
paper_content:
A novel algorithm is introduced that can detect the presence of interpolation in images prior to compression as well as estimate the interpolation factor. The interpolation detection algorithm exploits a periodicity in the second derivative signal of interpolated images. The algorithm performs well for a wide variety of interpolation factors, both integer factors and non-integer factors. The algorithm performance is noted with respect to a digital camera's "digital zoom" feature. Overall the algorithm has demonstrated robust results and might prove to be useful for situations where an original resolution of the image determines the action of an image processing chain.
---
paper_title: On Resampling Detection and its Application to Detect Image Tampering
paper_content:
Usually digital image forgeries are created by copy-pasting a portion of an image onto some other image. While doing so, it is often necessary to resize the pasted portion of the image to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which when detected serves as a clue of tampering. In this paper, we present deterministic techniques to detect resampling, and localize the portion of the image that has been tampered with. Two of the techniques are in pixel domain and two others in frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.
---
paper_title: On Resampling Detection and its Application to Detect Image Tampering
paper_content:
Usually digital image forgeries are created by copy-pasting a portion of an image onto some other image. While doing so, it is often necessary to resize the pasted portion of the image to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which when detected serves as a clue of tampering. In this paper, we present deterministic techniques to detect resampling, and localize the portion of the image that has been tampered with. Two of the techniques are in pixel domain and two others in frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.
---
paper_title: Improving re-sampling detection by adding noise
paper_content:
Current image re-sampling detectors can reliably detect re-sampling in JPEG images only up to a Quality Factor (QF) of 95 or higher. At lower QFs, periodic JPEG blocking artifacts interfere with periodic patterns of re-sampling. We add a controlled amount of noise to the image before the re-sampling detection step. Adding noise suppresses the JPEG artifacts while the periodic patterns due to re-sampling are partially retained. JPEG images of QF range 75-90 are considered. Gaussian/Uniform noise in the range of 28-24 dB is added to the image and the images thus formed are passed to the re-sampling detector. The detector outputs are averaged to get a final output from which re-sampling can be detected even at lower QFs. We consider two re-sampling detectors - one proposed by Popescu and Farid [1], which works well on uncompressed and mildly compressed JPEG images, and the other by Gallagher [2], which is robust on JPEG images but can detect only scaled images. For multiple re-sampling operations (rotation, scaling, etc.) we show that the order of re-sampling matters. If the final operation is up-scaling, it can still be detected even at very low QFs.
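The wrapper below captures the idea at a high level: generate several noise-dithered copies of the JPEG image at a chosen SNR, run any resampling detector on each copy, and average the responses. `detector` is a hypothetical callable standing in for either of the two detectors mentioned above; the SNR value and trial count are illustrative only.

```python
# Noise-averaged resampling detection: average detector outputs over noisy copies.
import numpy as np

def noise_averaged_detection(img, detector, snr_db=26.0, n_trials=8, seed=0):
    rng = np.random.default_rng(seed)
    img = img.astype(float)
    signal_power = np.mean(img ** 2)
    noise_std = np.sqrt(signal_power / (10 ** (snr_db / 10.0)))   # target SNR in dB
    outputs = []
    for _ in range(n_trials):
        noisy = img + rng.normal(0.0, noise_std, size=img.shape)  # dither the JPEG artifacts
        outputs.append(detector(noisy))                            # any spectrum/score map
    return np.mean(outputs, axis=0)                                # averaged detector response
```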
---
paper_title: Fast and reliable resampling detection by spectral analysis of fixed linear predictor residue
paper_content:
This paper revisits the state-of-the-art resampling detector, which is based on periodic artifacts in the residue of a local linear predictor. Inspired by recent findings from the literature, we take a closer look at the complex detection procedure and model the detected artifacts in the spatial and frequency domain by means of the variance of the prediction residue. We give an exact formulation on how transformation parameters influence the appearance of periodic artifacts and analytically derive the expected position of characteristic resampling peaks. We present an equivalent accelerated and simplified detector, which is orders of magnitudes faster than the conventional scheme and experimentally shown to be comparably reliable.
---
paper_title: Detecting Resized JPEG Images by Analyzing High Frequency Elements in DCT Coefficients
paper_content:
In this paper, we propose a method for detecting resized JPEG images. We defined 8 × 8 periodic blocks as JPEG blocks. JPEG block boundaries are detected by applying 8 × 8 block discrete cosine transform (DCT) to all the pixels of the input image and analyzing the high frequency coefficients in them. In order to quantitatively analyze the degree of forgery, we have developed two approaches such as truth-score and correlation-score methods. Experimental results using 375 original (untouched) JPEG images and 2,250 resized images recompressed with a variety of quality factors demonstrated that our proposed method can classify them with over 90% of accuracy. Our proposed method can detect not only conventional resizing but also state-of-the-art non-linear resizing such as seam carving.
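The building block of such a method is a per-block 8x8 DCT followed by a high-frequency energy measurement; resizing disturbs the original 8x8 JPEG grid and redistributes this energy. The sketch below computes only the energy map and does not reproduce the paper's truth-score or correlation-score statistics.

```python
# Blockwise 8x8 DCT high-frequency energy map (a cue for detecting resized JPEGs).
import numpy as np
from scipy.fftpack import dctn

def highfreq_energy_map(img, block=8):
    img = img.astype(float)
    h = (img.shape[0] // block) * block
    w = (img.shape[1] // block) * block
    energy = np.zeros((h // block, w // block))
    # High-frequency region of an 8x8 DCT: coefficients with row+col index >= block.
    mask = np.add.outer(np.arange(block), np.arange(block)) >= block
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(img[i:i + block, j:j + block], norm="ortho")
            energy[i // block, j // block] = np.sum(np.abs(coeffs[mask]))
    return energy
```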
---
paper_title: Fast and reliable resampling detection by spectral analysis of fixed linear predictor residue
paper_content:
This paper revisits the state-of-the-art resampling detector, which is based on periodic artifacts in the residue of a local linear predictor. Inspired by recent findings from the literature, we take a closer look at the complex detection procedure and model the detected artifacts in the spatial and frequency domain by means of the variance of the prediction residue. We give an exact formulation on how transformation parameters influence the appearance of periodic artifacts and analytically derive the expected position of characteristic resampling peaks. We present an equivalent accelerated and simplified detector, which is orders of magnitudes faster than the conventional scheme and experimentally shown to be comparably reliable.
---
paper_title: Noise-Enhanced Detection of Micro-Calcifications in Digital Mammograms
paper_content:
The appearance of micro-calcifications in mammograms is a crucial early sign of breast cancer. Automatic micro-calcification detection techniques play an important role in cancer diagnosis and treatment. This, however, still remains a challenging task. This paper presents novel algorithms for the detection of micro-calcifications using stochastic resonance (SR) noise. In these algorithms, a suitable dose of noise is added to the abnormal mammograms such that the performance of a suboptimal lesion detector is improved without altering the detector's parameters. First, a SR noise-based detection approach is presented to improve some suboptimal detectors which suffer from model mismatch due to the Gaussian assumption. Furthermore, a SR noise-based detection enhancement framework is presented to deal with more general model mismatch cases. Our algorithms and the framework are tested on a set of 75 representative abnormal mammograms. They yield superior performance when compared with several classification and detection approaches developed in our work as well as those available in the literature.
---
paper_title: Detection of linear and cubic interpolation in JPEG compressed images
paper_content:
A novel algorithm is introduced that can detect the presence of interpolation in images prior to compression as well as estimate the interpolation factor. The interpolation detection algorithm exploits a periodicity in the second derivative signal of interpolated images. The algorithm performs well for a wide variety of interpolation factors, both integer factors and non-integer factors. The algorithm performance is noted with respect to a digital camera's "digital zoom" feature. Overall the algorithm has demonstrated robust results and might prove to be useful for situations where an original resolution of the image determines the action of an image processing chain.
---
paper_title: A bibliography on blind methods for identifying image forgery
paper_content:
Verifying the integrity of digital images and detecting the traces of tampering without using any protecting pre-extracted or pre-embedded information have become an important and hot research field. The popularity of this field and the rapid growth in papers published during the last years have put considerable need on creating a complete bibliography addressing published papers in this area. In this paper, an extensive list of blind methods for detecting image forgery is presented. By the word blind we refer to those methods that use only the image function. An attempt has been made to make this paper complete by listing most of the existing references and by providing a detailed classification group.
---
paper_title: Improving re-sampling detection by adding noise
paper_content:
Current image re-sampling detectors can reliably detect re-sampling in JPEG images only up to a Quality Factor (QF) of 95 or higher. At lower QFs, periodic JPEG blocking artifacts interfere with periodic patterns of re-sampling. We add a controlled amount of noise to the image before the re-sampling detection step. Adding noise suppresses the JPEG artifacts while the periodic patterns due to re-sampling are partially retained. JPEG images of QF range 75-90 are considered. Gaussian/Uniform noise in the range of 28-24 dB is added to the image and the images thus formed are passed to the re-sampling detector. The detector outputs are averaged to get a final output from which re-sampling can be detected even at lower QFs. We consider two re-sampling detectors - one proposed by Popescu and Farid [1], which works well on uncompressed and mildly compressed JPEG images, and the other by Gallagher [2], which is robust on JPEG images but can detect only scaled images. For multiple re-sampling operations (rotation, scaling, etc.) we show that the order of re-sampling matters. If the final operation is up-scaling, it can still be detected even at very low QFs.
---
paper_title: Theory of the Stochastic Resonance Effect in Signal Detection: Part II—Variable Detectors
paper_content:
In Part I of this paper [ldquoTheory of the Stochastic Resonance Effect in Signal Detection: Part I-Fixed Detectors,rdquo IEEE Transactions on Signal Processing, vol. 55, no. 7, pt. 1, pp. 3172-3184], the mechanism of the stochastic resonance (SR) effect for a fixed detector has been examined. This paper analyzes the stochastic resonance (SR) effect under the condition that the detector structure or its parameters can also be changed. The detector optimization problem with SR noise under both Neyman-Pearson and Bayesian criteria is examined. In the Bayesian approach when the prior probabilities are unknown, the minimax approach is adopted. The form of the optimal noise pdf along with the corresponding detector as well as the maximum achievable performance are determined. The developed theory is then applied to a general class of weak signal detection problems. Under the assumptions that the sample size N is large enough and the test statistics satisfies the conditions of central limit theorem, the optimal SR noise is shown to be a constant vector and independent of the signal strength for both Neyman-Pearson and Bayesian criteria. Illustrative examples are presented where performance comparisons are made between the original detector and the optimal SR noise modified detector for different types of SR noise.
---
paper_title: On Resampling Detection and its Application to Detect Image Tampering
paper_content:
Usually digital image forgeries are created by copy-pasting a portion of an image onto some other image. While doing so, it is often necessary to resize the pasted portion of the image to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which when detected serves as a clue of tampering. In this paper, we present deterministic techniques to detect resampling, and localize the portion of the image that has been tampered with. Two of the techniques are in pixel domain and two others in frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.
---
paper_title: Image Tampering Detection Using Bayer Interpolation and JPEG Compression
paper_content:
In this paper, we describe a technique to detect image tampering using two different methods. The first is based on the Bayer interpolation process and its consequences in the Fourier domain. The second uses artifacts of the JPEG compression and more particularly in the JPEG frame observable in the Fourier domain.
---
| Title: Resampling Detection in Digital Images: A Survey
Section 1: Introduction
Description 1: This section introduces the concept of resampling detection in digital images and its significance in the field of passive forgery detection.
Section 2: Resampling Detection Techniques
Description 2: This section discusses various techniques used for detecting resampling in digital images by analyzing the periodicity and other properties introduced during the resampling process.
Section 3: Properties of Second Difference
Description 3: This section explains the role of second order derivatives in identifying resampled images and how periodicity in second differences can be exploited for detection.
Section 4: Tamper Detection Using DCT High Pass Filtering
Description 4: This section covers the use of Discrete Cosine Transform (DCT) high pass filtering to identify inconsistencies in high frequency components of an image that has undergone resampling.
Section 5: Tamper Detection Using Wavelets
Description 5: This section describes how wavelet transforms can be used to detect resampled regions in an image by analyzing changes in high frequency content.
Section 6: Popescu & Farid's Method
Description 6: This section outlines the algorithm proposed by Popescu and Farid, which uses expectation-maximization to identify periodic correlations in resampled signals.
Section 7: Rotation Tolerant Resampling Detection
Description 7: This section discusses methods developed to detect resampling even when the image has undergone rotation, focusing on the use of energy spectra.
Section 8: Resampling Detection on JPEG Images
Description 8: This section addresses the challenges and methods of detecting resampling in JPEG compressed images.
Section 9: Adding Noise to Suppress JPEG Blockiness
Description 9: This section elaborates on techniques to suppress JPEG artifacts by adding noise, making resampling patterns more detectable.
Section 10: Recompressed Resampling Detection
Description 10: This section explores the effects of recompression on resampling detection and methods to distinguish between original and recompressed images.
Section 11: Performance Analysis
Description 11: This section evaluates the performance of different resampling detection methods and discusses their strengths and weaknesses.
Section 12: Conclusions
Description 12: This section summarizes the findings of the survey, noting the limitations of current methods and suggesting areas for future research. |
Fixing Geometric Errors on Polygonal Models: A Survey | 9 | ---
paper_title: Interpolation and Approximation of Surfaces from Three-Dimensional Scattered Data Points
paper_content:
There is a wide range of applications for which surface interpolation or approximation from scattered data points in space is important. Dependent on the field of application and the related properties of the data, many algorithms were developed in the past. This contribution gives a survey of existing algorithms, and identifies basic methods common to independently developed solutions. We distinguish surface construction based on spatial subdivision, distance functions, warping, and incremental surface growing. The systematic analysis of existing approaches leads to several interesting open questions for further research.
---
paper_title: Amodal volume completion: 3D visual completion
paper_content:
This work considers the common problem of completing partially visible artifacts within a 3D scene. Human vision abilities to complete such artifacts are well studied within the realms of perceptual psychology. However, the psychological explanations for completion have received only limited application in the domain of 3D computer vision. Here, we examine prior work in this area of computer vision with reference to psychological accounts of completion and identify remaining challenges for future work.
---
paper_title: Stitching and Filling: Creating Conformal Faceted Geometry
paper_content:
Consistent and accurate representation of geometry is required by a number of applications such as mesh generation, rapid prototyping, manufacturing, and computer graphics. Unfortunately, faceted Computer Aided Design (CAD) models received by downstream applications have many issues that pose problems for their successful usability. Automatic or semi-automatic tools are needed to process the geometry to make it suitable for these downstream applications. An algorithm is presented to detect commonly found geometrical and topological issues in the faceted geometry and process them with minimum user interaction. The present algorithm is based on the iterative vertex pair contraction and expansion operations called stitching and filling respectively. The combination of generality, accuracy, and efficiency of this algorithm seems to be a significant improvement over existing techniques. Results are presented showing the effectiveness of the algorithm to process two- and three-dimensional configurations.
---
paper_title: Context-based surface completion
paper_content:
Sampling complex, real-world geometry with range scanning devices almost always yields imperfect surface samplings. These "holes" in the surface are commonly filled with a smooth patch that conforms with the boundary. We introduce a context-based method: the characteristics of the given surface are analyzed, and the hole is iteratively filled by copying patches from valid regions of the given surface. In particular, the method needs to determine best matching patches, and then, fit imported patches by aligning them with the surrounding surface. The completion process works top down, where details refine intermediate coarser approximations. To align an imported patch with the existing surface, we apply a rigid transformation followed by an iterative closest point procedure with non-rigid transformations. The surface is essentially treated as a point set, and local implicit approximations aid in measuring the similarity between two point set patches. We demonstrate the method at several point-sampled surfaces, where the holes either result from imperfect sampling during range scanning or manual removal.
---
paper_title: Filling holes in meshes using a mechanical model to simulate the curvature variation minimization
paper_content:
The presence of holes in a triangle mesh is classically ascribed to the deficiencies of the point cloud acquired from a physical object to be reverse engineered. This lack of information results from both the scanning process and the object complexity. The consequences are simply not acceptable in many application domains (e.g. visualization, finite element analysis or STL prototyping). This paper addresses the way these holes can be filled in while minimizing the curvature variation between the surrounding and inserted meshes. The curvature variation is simulated by the variation between external forces applied to the nodes of a linear mechanical model coupled to the meshes. The functional to be minimized is quadratic and a set of geometric constraints can be added to further shape the inserted mesh. In addition, a complete cleaning toolbox is proposed to remove degenerated and badly oriented triangles resulting from the scanning process.
---
paper_title: Dual domain extrapolation
paper_content:
Shape optimization and surface fairing for polygon meshes have been active research areas for the last few years. Existing approaches either require the border of the surface to be fixed, or are only applicable to closed surfaces. In this paper, we propose a new approach, that computes natural boundaries. This makes it possible not only to smooth an existing geometry, but also to extrapolate its shape beyond the existing border. Our approach is based on a global parameterization of the surface and on a minimization of the squared curvatures, discretized on the edges of the surface. The so-constructed surface is an approximation of a minimal energy surface (MES). Using a global parameterization makes it possible to completely decouple the outer fairness (surface smoothness) from the inner fairness (mesh quality). In addition, the parameter space provides the user with a new means of controlling the shape of the surface. When used as a geometry filter, our approach computes a smoothed mesh that is discrete conformal to the original one. This allows smoothing textured meshes without introducing distortions.
---
paper_title: Filling Gaps in the Boundary of a Polyhedron
paper_content:
In this paper we present an algorithm for detecting and repairing defects in the boundary of a polyhedron. These defects, usually caused by problems in CAD software, consist of small gaps bounded by edges that are incident to only one polyhedron face. The algorithm uses a partial curve matching technique for matching parts of the defects, and an optimal triangulation of 3-D polygons for resolving the unmatched parts. It is also shown that finding a consistent set of partial curve matches with maximum score, a subproblem which is related to our repairing process, is NP-hard. Experimental results on several polyhedra are presented.
---
paper_title: Progressive Gap Closing for Mesh Repairing
paper_content:
Modern 3D acquisition and modeling tools generate high-quality, detailed geometric models. However, in order to cope with the associated complexity, several mesh decimation methods have been developed in recent years. On the other hand, a common problem of geometric modeling tools is the generation of consistent three-dimensional meshes. Most of these programs output meshes containing degenerate faces, T-vertices, narrow gaps and cracks. Applying well-established decimation methods to such meshes results in severe artifacts due to the lack of consistent connectivity information. The industrial relevance of this problem is emphasized by the fact that, as output of most commercial CAD/CAM and other modeling tools, the user usually gets consistent meshes only for separate polygonal patches rather than for the whole mesh.
---
paper_title: Surface simplification using quadric error metrics
paper_content:
Many applications in computer graphics require complex, highly detailed models. However, the level of detail actually necessary may vary considerably. To control processing time, it is often desirable to use approximations in place of excessively detailed models. We have developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models. The algorithm uses iterative contractions of vertex pairs to simplify models and maintains surface error approximations using quadric matrices. By contracting arbitrary vertex pairs (not just edges), our algorithm is able to join unconnected regions of models. This can facilitate much better approximations, both visually and with respect to geometric error. In order to allow topological joining, our system also supports non-manifold surface models. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representations
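For orientation, the following simplified sketch shows the core of the quadric error metric: each vertex accumulates plane quadrics from its incident triangles, and the contraction cost of a vertex pair is v^T(Q_i + Q_j)v. It evaluates the cost at the edge midpoint rather than solving for the optimal position as the paper does; all names are illustrative.

```python
# A simplified sketch of the quadric error metric: each vertex accumulates the
# sum of outer products of the planes of its incident triangles, and the cost
# of contracting a vertex pair is v^T Q v at a candidate position (here the
# midpoint, rather than the optimal position solved for in the paper).
import numpy as np

def vertex_quadrics(vertices, triangles):
    Q = np.zeros((len(vertices), 4, 4))
    for tri in triangles:
        a, b, c = vertices[tri]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # skip degenerate faces
        n = n / norm
        d = -np.dot(n, a)
        p = np.append(n, d)              # plane [nx, ny, nz, d]
        K = np.outer(p, p)               # fundamental error quadric of the plane
        for vi in tri:
            Q[vi] += K
    return Q

def contraction_cost(Q, vertices, i, j):
    v = np.append((vertices[i] + vertices[j]) / 2.0, 1.0)
    return v @ (Q[i] + Q[j]) @ v
```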
---
paper_title: Template-Based Mesh Completion
paper_content:
Meshes generated by range scanners and other acquisition tools are often incomplete and typically contain multiple connected components with irregular boundaries and complex holes. This paper introduces a robust algorithm for completion of such meshes using a mapping between the incomplete mesh and a template model. The mapping is computed using a novel framework for bijective parameterization of meshes with gaps and holes. We employ this mapping to correctly glue together the components of the input mesh and to close the holes. The template is used to fill in the topological and geometric information missing in the input. The completed models are guaranteed to have the same topology as the template. Furthermore, if no appropriate template exists or if only topologically correct completion is required a standard canonical shape can be used as a template. As part of our completion method we propose a boundary-mapping technique useful for mesh editing operations such as merging, blending, and detail transfer. We demonstrate that by using this technique we can automatically perform complex editing operations that previously required a large amount of user interaction.
---
paper_title: A multistep approach to restoration of locally undersampled meshes
paper_content:
The paper deals with the problem of remeshing and fairing of undersampled areas (called "holes") in triangular meshes. In this work, we are particularly interested in meshes constructed from geological data, but the method can be applied to any kind of data. With such input data, the point density is often drastically lower in some regions than in others: this leads to what we call "holes". Once these holes are identified, they are filled using a multistep approach. We iteratively insert vertices in the hole in order to progressively converge towards the density of its neighbourhood, then deform this patch mesh (by minimizing a discrete thin-plate energy) in order to restore the local curvature and guarantee the smoothness of the hole boundary. The main goal of our method is to control both time and space complexity in order to handle huge models while producing quality meshes.
---
paper_title: An Approach to Blend Surfaces
paper_content:
In this paper, we present an application of a space mapping technique for surface reconstruction, more precisely the reconstruction of missing parts of a real geometric object represented by volume data. Using the space mapping technique, the surface of a given model, in particular a tooth shape, is fitted by a shape transformation to extrapolate the remaining surface of a patient’s damaged tooth, for example one with a drill hole. A genetic algorithm minimizes the approximation error by optimizing a set of control points that determine the coefficients of spline functions, which in turn define a space transformation. The fitness function to be minimized consists of two components: the first is the error between the blended surface and the surface to be blended, measured at a set of predefined points; the second accounts for minimizing the bending energy.
---
paper_title: Repairing incomplete measured data with a deformable model under constraints of feature shapes
paper_content:
Measured data of a product can be incomplete because some portions of the product surface are inaccessible or invisible to measurement tools in reverse engineering. Usually, the missing surface areas include some key form features of the product, which represent important design components. However, flat surfaces are often used to mend the incomplete form features in current reverse engineering systems. In this paper, an incomplete data repair method is presented using a deformable model under the constraints of given feature forms. The method ensures the accuracy of identifying the missing local areas by choosing a set of proper rules of mesh construction according to the density distribution of the measured data, and fills them with an energy-based, deformable local mesh that is adaptively formed by iterating mesh subdivision, constraint satisfaction, and finite element equation solving.
---
paper_title: Discovering structural regularity in 3D geometry
paper_content:
We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or meshbased models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis.
---
paper_title: A Hole-Filling Algorithm Using Non-Uniform Rational B-Splines
paper_content:
A three-dimensional (3D) geometric model obtained from a 3D device or other approaches is not necessarily watertight due to the presence of geometric deficiencies. These inadequacies must be repaired to create a valid surface mesh on the model as a pre-process of computational engineering analyses. This procedure has been a tedious and labor-intensive step, as there are many kinds of deficiencies that can make the geometry nonwatertight, such as gaps and holes. It is still challenging to repair discrete surface models based on available geometric information. The focus of this paper is to develop a new automated method for patching holes in surface models in order to achieve watertightness. It describes a numerical algorithm utilizing Non-Uniform Rational B-Splines (NURBS) surfaces to generate smooth triangulated surface patches for topologically simple holes on discrete surface models. The Delaunay criterion for point insertion and edge swapping is used in this algorithm to improve the outcome. Surface patches are generated based on existing points surrounding the holes without altering them. The watertight geometry produced can be used in a wide range of engineering applications in the field of computational engineering simulation studies.
---
paper_title: Inference of segmented color and texture description by tensor voting
paper_content:
A robust synthesis method is proposed to automatically infer missing color and texture information from a damaged 2D image by ND tensor voting (N > 3). The same approach is generalized to range and 3D data in the presence of occlusion, missing data and noise. Our method translates texture information into an adaptive ND tensor, followed by a voting process that infers noniteratively the optimal color values in the ND texture space. A two-step method is proposed. First, we perform segmentation based on insufficient geometry, color, and texture information in the input, and extrapolate partitioning boundaries by either 2D or 3D tensor voting to generate a complete segmentation for the input. Missing colors are synthesized using ND tensor voting in each segment. Different feature scales in the input are automatically adapted by our tensor scale analysis. Results on a variety of difficult inputs demonstrate the effectiveness of our tensor voting approach.
---
paper_title: A statistical method for robust 3D surface reconstruction from sparse data
paper_content:
General information about a class of objects, such as human faces or teeth, can help to solve the otherwise ill-posed problem of reconstructing a complete surface from sparse 3D feature points or 2D projections of points. We present a technique that uses a vector space representation of shape (3D morphable model) to infer missing vertex coordinates. Regularization derived from a statistical approach makes the system stable and robust with respect to noise by computing the optimal tradeoff between fitting quality and plausibility. We present a direct, noniterative algorithm to calculate this optimum efficiently, and a method for simultaneously compensating unknown rigid transformations. The system is applied and evaluated in two different fields: (1) reconstruction of 3D faces at unknown orientations from 2D feature points at interactive rates, and (2) restoration of missing surface regions of teeth for CAD-CAM production of dental inlays and other medical applications.
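A linear-algebra sketch of the regularized fitting described here, under the assumption that the shape model is given as a mean vector and a matrix of principal components; the Tikhonov term stands in for the statistical regularization, and all variable names (`mu`, `B`, `obs_idx`) are illustrative.

```python
# Regularized completion from a linear shape model: with mean shape mu and
# principal components B (columns), and only a sparse subset of coordinates
# measured, the coefficients trade off data fit against plausibility via a
# Tikhonov (ridge) term, and the full shape is reconstructed from them.
import numpy as np

def complete_shape(mu, B, obs_idx, obs_values, lam=1.0):
    """mu: (3n,) mean; B: (3n, k) components; obs_idx: indices of measured coordinates."""
    A = B[obs_idx]                       # rows of the basis at the measured coordinates
    rhs = obs_values - mu[obs_idx]
    k = B.shape[1]
    coeff = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ rhs)
    return mu + B @ coeff
```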
---
paper_title: A Sharpness-Dependent Filter for Recovering Sharp Features in Repaired 3D Mesh Models
paper_content:
This paper presents a sharpness-based method for hole-filling that can repair a 3D model such that its shape conforms to that of the original model. The method involves two processes: interpolation-based hole-filling, which produces an initial repaired model, and postprocessing, which adjusts the shape of the initial repaired model to conform to that of the original model. In the interpolation-based hole-filling process, a surface interpolation algorithm based on the radial basis function creates a smooth implicit surface that fills the hole. Then, a regularized marching tetrahedral algorithm is used to triangulate the implicit surface. Finally, a stitching and regulating strategy is applied to the surface patch and its neighboring boundary polygon meshes to produce an initial repaired mesh model, which is a regular mesh model suitable for postprocessing. During postprocessing, a sharpness-dependent filtering algorithm is applied to the initial repaired model. This is an iterative procedure whereby each iteration step adjusts the face normal associated with each meshed polygon to recover the sharp features hidden in the repaired model. The experiment results demonstrate that the method is effective in repairing incomplete 3D mesh models.
---
paper_title: Filling holes in meshes
paper_content:
We describe a method for filling holes in unstructured triangular meshes. The resulting patching meshes interpolate the shape and density of the surrounding mesh. Our methods work with arbitrary holes in oriented connected manifold meshes. The steps in filling a hole include boundary identification, hole triangulation, refinement, and fairing.
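The boundary-identification step mentioned here can be sketched as follows: edges used by exactly one triangle are boundary edges, and chaining them yields the closed loops bounding each hole. This sketch assumes an oriented manifold mesh whose holes do not share vertices; function names are illustrative.

```python
# A minimal sketch of hole-boundary identification: edges incident to exactly
# one triangle are boundary edges, and following them end to end yields the
# closed loops that bound each hole.
from collections import defaultdict

def hole_loops(triangles):
    count = defaultdict(int)
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            count[(min(u, v), max(u, v))] += 1
    # Directed boundary edges; reversed so loops run consistently around holes.
    nxt = {}
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            if count[(min(u, v), max(u, v))] == 1:
                nxt[v] = u
    loops, visited = [], set()
    for start in list(nxt):
        if start in visited:
            continue
        loop, v = [], start
        while v not in visited:
            visited.add(v)
            loop.append(v)
            v = nxt[v]
        loops.append(loop)
    return loops
```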
---
paper_title: Automatic Hole-Filling of Triangular Meshes Using Local Radial Basis Function
paper_content:
Creating models of real objects is a complex task for which the use of traditional modeling techniques has proven to be difficult. To solve some of these problems, laser rangefinders are frequently used to sample an object's surface from several viewpoints, resulting in a set of range images that are registered and integrated into a final triangulated model. In practice, due to surface reflectance properties, occlusions and accessibility limitations, certain areas of the object's surface are usually not sampled, leaving holes which create undesirable artifacts in the integrated model. In this paper, we present a novel algorithm for the automatic hole-filling of triangulated models. The algorithm starts by locating hole boundary regions. A hole consists of a closed path of edges of boundary triangles that have at least one edge that is not shared with any other triangle. The edge of the hole is then fitted with a b-spline, and the average variation of the torsion of the b-spline approximation is calculated. Using a simple threshold on the average variation of the torsion along the edge, one can automatically distinguish real holes from man-made holes. Following this classification process, we then use an automated version of a radial basis function interpolator to fill the inside of the hole using neighboring edges. Excellent experimental results are presented.
---
paper_title: Image Guided Geometry Inference
paper_content:
We introduce a new method for filling holes in geometry obtained from 3D range scanners. Our method makes use of 2D images of the areas where geometric data is missing. The 2D images guide the filling using the relationship between the images and geometry learned from the existing 3D scanned data. Our method builds on existing techniques for using scanned geometry and for estimating shape from shaded images. Rather than creating plausibly filled holes, we attempt to approximate the missing geometry. We present results for scanned data from both triangulation and time-of-flight scanners for various types of materials. To quantitatively validate our proposed method, we also compare the filled areas with ground-truth data.
---
paper_title: A finite element method for surface restoration with smooth boundary conditions
paper_content:
In surface restoration usually a damaged region of a surface has to be replaced by a surface patch which restores the region in a suitable way. In particular one aims for C1-continuity at the patch boundary. The Willmore energy is considered to measure fairness and to allow appropriate boundary conditions to ensure continuity of the normal field. The corresponding L2-gradient flow as the actual restoration process leads to a system of fourth order partial differential equations, which can also be written as a system of two coupled second order equations. As it is well known, fourth order problems require an implicit time discretization. Here a semi-implicit approach is presented which allows large time steps. For the discretization of the boundary condition, two different numerical methods are introduced. Finally, we show applications to different surface restoration problems.
---
paper_title: A robust hole-filling algorithm for triangular mesh
paper_content:
This paper presents a novel hole-filling algorithm that can fill arbitrary holes in triangular mesh models. First, the advancing front mesh technique is used to cover the hole with newly created triangles. Next, the desirable normals of the new triangles are approximated using our desirable normal computing schemes. Finally, the three coordinates of every new vertex are re-positioned by solving the Poisson equation based on the desirable normals and the boundary vertices of the hole. Many experimental results and error evaluations are given to show the robustness and efficiency of the algorithm.
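As a toy analogue of the final re-positioning step (a uniform Laplace system rather than the paper's Poisson system driven by desirable normals), the sketch below solves for the newly inserted vertices so that each becomes the average of its neighbors while the hole boundary stays fixed; inputs and names are assumptions.

```python
# Harmonic fill of a hole patch: every interior (newly inserted) vertex is
# placed at the average of its mesh neighbors while boundary vertices stay
# fixed, by solving one small linear system per coordinate.
import numpy as np

def harmonic_fill(vertices, triangles, interior):
    """vertices: (n,3) floats; triangles: (m,3) ints; interior: vertex ids to solve for."""
    neighbors = {i: set() for i in range(len(vertices))}
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            neighbors[u].add(v)
            neighbors[v].add(u)
    idx = {v: k for k, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    rhs = np.zeros((len(interior), 3))
    for v in interior:
        row = idx[v]
        A[row, row] = len(neighbors[v])
        for nb in neighbors[v]:
            if nb in idx:
                A[row, idx[nb]] -= 1.0
            else:
                rhs[row] += vertices[nb]   # fixed boundary contribution
    sol = np.linalg.solve(A, rhs)
    out = vertices.copy()
    out[interior] = sol
    return out
```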
---
paper_title: Cutting and Stitching: Converting Sets of Polygons to Manifold Surfaces
paper_content:
Many real-world polygonal surfaces contain topological singularities that represent a challenge for processes such as simplification, compression, and smoothing. We present an algorithm that removes singularities from nonmanifold sets of polygons to create manifold (optionally oriented) polygonal surfaces. We identify singular vertices and edges, multiply singular vertices, and cut through singular edges. In an optional stitching operation, we maintain the surface as a manifold while joining boundary edges. We present two different edge stitching strategies, called pinching and snapping. Our algorithm manipulates the surface topology and ignores physical coordinates. Except for the optional stitching, the algorithm has a linear complexity and requires no floating point operations. In addition to introducing new algorithms, we expose the complexity (and pitfalls) associated with stitching. Finally, several real-world examples are studied.
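The first step, identifying singular (non-manifold) edges, can be sketched with a simple incidence count: edges shared by more than two faces are singular and must be cut, while edges with exactly one incident face lie on the boundary. The sketch below is illustrative and ignores singular vertices.

```python
# Classify mesh edges by the number of incident faces: more than two means
# non-manifold ("singular"), exactly one means boundary.
from collections import defaultdict

def classify_edges(triangles):
    incident = defaultdict(list)
    for f, (a, b, c) in enumerate(triangles):
        for u, v in ((a, b), (b, c), (c, a)):
            incident[(min(u, v), max(u, v))].append(f)
    singular = [e for e, faces in incident.items() if len(faces) > 2]
    boundary = [e for e, faces in incident.items() if len(faces) == 1]
    return singular, boundary
```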
---
paper_title: Making radiosity usable: automatic preprocessing and meshing techniques for the generation of accurate radiosity solutions
paper_content:
Generating accurate radiosity solutions of real world environments is user-intensive and requires significant knowledge of the method. As a result, few end-users such as architects and designers use it. The output of most commercial modeling packages must be substantially "cleaned up" to satisfy the geometrical and topological criteria imposed by radiosity solution algorithms. Furthermore, the mesh used as the basis of the radiosity computation must meet several additional requirements for the solution to be accurate.A set of geometrical and topological requirements is formalized that when satisfied yields an accurate radiosity solution. A series of algorithms is introduced that automatically processes raw model databases to meet these requirements. Thus, the end-user can concentrate on the design rather than on the details of the radiosity solution process. These algorithms are generally independent of the radiosity solution technique used, and thus apply to all mesh based radiosity methods.
---
paper_title: RSVP: A Geometric Toolkit for Controlled Repair of Solid Models
paper_content:
The paper presents a system and the associated algorithms for repairing the boundary representation of CAD models. Two types of errors are considered: topological errors, i.e., aggregate errors, like zero volume parts, duplicate or missing parts, inconsistent surface orientation, etc., and geometric errors, i.e., numerical imprecision errors, like cracks or overlaps of geometry. The output of our system describes a set of clean and consistent two-manifolds (possibly with boundaries) with derived adjacencies. Such solid representation enables the application of a variety of rendering and analysis algorithms, e.g., finite element analysis, radiosity computation, model simplification, and solid free form fabrication. The algorithms described were originally designed to correct errors in polygonal B-Reps. We also present an extension for spline surfaces. Central to our system is a procedure for inferring local adjacencies of edges. The geometric representation of topologically adjacent edges are merged to evolve a set of two-manifolds. Aggregate errors are discovered during the merging step. Unfortunately, there are many ambiguous situations where errors admit more than one valid solution. Our system proposes an object repairing process based on a set of user tunable heuristics. The system also allows the user to override the algorithm's decisions in a repair visualization step. In essence, this visualization step presents an organized and intuitive way for the user to explore the space of valid solutions and to select the correct one.
---
paper_title: Removing zero-volume parts from CAD models for layered manufacturing
paper_content:
CAD models for automated manufacturing should describe solids as mathematically regularized sets. The technique presented clears STL files of zero-volume parts that violate the definition of a regularized set.
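The sketch below is a much simpler check in the same spirit, not the paper's method: it flags zero-area facets and coincident duplicate facets (e.g., internal walls listed twice), both of which enclose no volume. It assumes the STL facets have already been indexed against a shared vertex array; names are illustrative.

```python
# Flag facets that contribute no volume: degenerate (zero-area) facets and
# coincident duplicate facets over the same vertex triple.
import numpy as np
from collections import defaultdict

def flag_zero_volume_facets(vertices, triangles, area_eps=1e-12):
    flagged = set()
    seen = defaultdict(list)
    for f, tri in enumerate(triangles):
        a, b, c = vertices[tri]
        if 0.5 * np.linalg.norm(np.cross(b - a, c - a)) < area_eps:
            flagged.add(f)                       # degenerate, zero-area facet
        key = tuple(sorted(map(int, tri)))
        seen[key].append(f)
    for key, faces in seen.items():
        if len(faces) > 1:
            flagged.update(faces)                # coincident duplicate facets
    return sorted(flagged)
```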
---
paper_title: The Marching Intersections algorithm for merging range images
paper_content:
A new algorithm for the integration of partially overlapping range images into a triangular mesh is presented. The algorithm consists of three main steps: it locates the intersections between the range surfaces and a reference grid chosen by the user, then merges all nearly coincident and redundant intersections according to a proximity criterion, and, finally, reconstructs the merged surface(s) from the filtered intersection set. Compared with previous methods, which adopt a volumetric approach, our algorithm shows lower computational costs and improves the accuracy of the surfaces produced. It takes into account the quality of the input measurements and is able to patch small holes corresponding to the parts of the 3D scanned object that were not observed by the acquisition device. The algorithm has been tested on several datasets of range maps; graphical and numeric results are reported.
---
paper_title: Zippered polygon meshes from range images
paper_content:
Range imaging offers an inexpensive and accurate means for digitizing the shape of three-dimensional objects. Because most objects self occlude, no single range image suffices to describe the entire object. We present a method for combining a collection of range images into a single polygonal mesh that completely describes an object to the extent that it is visible from the outside. The steps in our method are: 1) align the meshes with each other using a modified iterated closest-point algorithm, 2) zipper together adjacent meshes to form a continuous surface that correctly captures the topology of the object, and 3) compute local weighted averages of surface positions on all meshes to form a consensus surface geometry. Our system differs from previous approaches in that it is incremental; scans are acquired and combined one at a time. This approach allows us to acquire and combine large numbers of scans with minimal storage overhead. Our largest models contain up to 360,000 triangles. All the steps needed to digitize an object that requires up to 10 range scans can be performed using our system with five minutes of user interaction and a few hours of compute time. We show two models created using our method with range data from a commercial rangefinder that employs laser stripe technology.
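The alignment step is based on iterated closest points; the bare-bones sketch below implements standard point-to-point ICP with a closed-form SVD solution for the rigid transform, using SciPy's k-d tree for nearest-neighbor queries. The paper's modifications (e.g., rejecting matches on mesh boundaries) are omitted, and all names are illustrative.

```python
# Point-to-point ICP: alternate nearest-neighbor matching with the closed-form
# (Kabsch/SVD) rigid transform, and accumulate the composite transform.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```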
---
paper_title: Dual contouring of hermite data
paper_content:
This paper describes a new method for contouring a signed grid whose edges are tagged by Hermite data (i.e; exact intersection points and normals). This method avoids the need to explicitly identify and process "features" as required in previous Hermite contouring methods. Using a new, numerically stable representation for quadratic error functions, we develop an octree-based method for simplifying contours produced by this method. We next extend our contouring method to these simpli£ed octrees. This new method imposes no constraints on the octree (such as being a restricted octree) and requires no "crack patching". We conclude with a simple test for preserving the topology of the contour during simplification.
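The per-cell step can be sketched as a small least-squares problem: given the Hermite samples of a cell (edge intersection points with unit normals), the dual vertex minimizes the quadratic error function sum_i (n_i . (x - p_i))^2. The mild regularization toward the mass point is a common practical variant, not something taken from the paper; names are illustrative.

```python
# Least-squares minimization of the per-cell QEF built from Hermite data,
# lightly regularized toward the centroid of the intersection points.
import numpy as np

def qef_vertex(points, normals, reg=1e-3):
    points = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float)
    centroid = points.mean(axis=0)
    A = np.vstack([normals, reg * np.eye(3)])
    b = np.concatenate([np.einsum('ij,ij->i', normals, points),
                        reg * centroid])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```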
---
paper_title: Efficient representation and extraction of 2-manifold isosurfaces using kd-trees
paper_content:
In this paper, we propose the utilization of a kd-tree based hierarchy as an implicit object representation. Compared to an octree, the kd-tree based hierarchy is superior in terms of adaptation to the object surface. In consequence, we obtain considerably more compact implicit representations especially in case of thin object structures. We describe a new isosurface extraction algorithm for this kind of implicit representation. In contrast to related algorithms for octrees, it generates 2-manifold meshes even for kd-trees with cells containing multiple surface components. The algorithm retains all the good properties of the dual contouring approach [10] like feature preservation, computational efficiency, etc. In addition, we present a simplification framework for the surfaces represented by the kd-tree based on quadric error metrics. We adapt this framework to quantify the influence of topological changes, thereby allowing controlled topological simplification of the object. The advantages of the new algorithm are demonstrated by several examples.
---
paper_title: 3D distance fields: a survey of techniques and applications
paper_content:
A distance field is a representation where, at each point within the field, we know the distance from that point to the closest point on any object within the domain. In addition to distance, other properties may be derived from the distance field, such as the direction to the surface, and when the distance field is signed, we may also determine if the point is internal or external to objects within the domain. The distance field has been found to be a useful construction within the areas of computer vision, physics, and computer graphics. This paper serves as an exposition of methods for the production of distance fields, and a review of alternative representations and applications of distance fields. In the course of this paper, we present various methods from all three of the above areas, and we answer pertinent questions such as: How accurate are these methods compared to each other? How simple are they to implement? What are the complexity and runtime of such methods?
---
paper_title: Using distance maps for accurate surface representation in sampled volumes
paper_content:
High quality rendering and physics based modeling in volume graphics have been limited because intensity based volumetric data do not represent surfaces well. High spatial frequencies due to abrupt intensity changes at object surfaces result in jagged or terraced surfaces in rendered images. The use of a distance-to-closest-surface function to encode object surfaces is proposed. This function varies smoothly across surfaces and hence can be accurately reconstructed from sampled data. The zero value iso surface of the distance map yields the object surface and the derivative of the distance map yields the surface normal. Examples of rendered images are presented along with a new method for calculating distance maps from sampled binary data.
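For illustration only, the brute-force sketch below samples an unsigned distance-to-closest-surface function on a regular grid from a set of surface points; real systems use distance transforms or spatial search structures, and the sign (inside/outside) still has to come from additional information such as the binary segmentation used in the paper.

```python
# Brute-force unsigned distance field: distance from every grid sample to its
# nearest surface point. O(grid * points) time and memory, so only suitable
# for small illustrative examples.
import numpy as np

def distance_field(points, grid_min, grid_max, resolution):
    axes = [np.linspace(grid_min[d], grid_max[d], resolution) for d in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing='ij')
    grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    d = np.min(np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=-1), axis=1)
    return d.reshape(resolution, resolution, resolution)
```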
---
paper_title: Topology-reducing surface simplification using a discrete solid representation
paper_content:
This paper presents a new approach for generating coarse-level approximations of topologically complex models. Dramatic topology reduction is achieved by converting a 3D model to and from a volumetric representation. Our approach produces valid, error-bounded models and supports the creation of approximations that do not interpenetrate the original model, either being completely contained in the input solid or bounding it. Several simple to implement versions of our approach are presented and discussed. We show that these methods perform significantly better than other surface-based approaches when simplifying topologically-rich models such as scene parts and complex mechanical assemblies.
---
paper_title: Filling holes in complex surfaces using volumetric diffusion
paper_content:
We address the problem of building watertight 3D models from surfaces that contain holes - for example, sets of range scans that observe most but not all of a surface. We specifically address situations in which the holes are too geometrically and topologically complex to fill using triangulation algorithms. Our solution begins by constructing a signed distance function, the zero set of which defines the surface. Initially, this function is defined only in the vicinity of observed surfaces. We then apply a diffusion process to extend this function through the volume until its zero set bridges whatever holes may be present. If additional information is available, such as known-empty regions of space inferred from the lines of sight to a 3D scanner, it can be incorporated into the diffusion process. Our algorithm is simple to implement, is guaranteed to produce manifold non-interpenetrating surfaces, and is efficient to run on large datasets because computation is limited to areas near holes.
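A toy, uniform-grid version of the diffusion step might look like the following: voxels near observed surfaces keep their signed distances, and the remaining voxels are repeatedly replaced by the average of their six neighbors so that the zero level set extends across the holes. Boundary handling (np.roll wraps around the volume) and all names are simplifications.

```python
# Masked diffusion of a signed distance volume: known voxels stay fixed,
# unknown voxels relax toward the average of their six neighbors.
import numpy as np

def diffuse_sdf(sdf, known_mask, iterations=200):
    d = sdf.copy()
    for _ in range(iterations):
        avg = np.zeros_like(d)
        for axis in range(3):
            avg += np.roll(d, 1, axis) + np.roll(d, -1, axis)
        avg /= 6.0
        d = np.where(known_mask, sdf, avg)   # only unknown voxels are updated
    return d
```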
---
paper_title: Filling the signed distance field by fitting local quadrics
paper_content:
We propose a method of filling unmeasured regions of shape models integrated from multiple measurements of surface shapes. We use the signed distance field (SDF) as a shape representation that contains information about the surface normal along with the signed distance at the closest point on the surface from the sampling point. We solve this problem by iteratively fitting quadratic functions to generate a smoothly connected SDF. We analyzed the relationship between the quadratic coefficients and the surface curvature, and by using the coefficients, we evenly propagated the SDF so that it satisfies the constraints of the field. The proposed method was tested on synthetic data and real data that was generated by integrating multiple range images.
---
paper_title: Robust repair of polygonal models
paper_content:
We present a robust method for repairing arbitrary polygon models. The method is guaranteed to produce a closed surface that partitions the space into disjoint internal and external volumes. Given any model represented as a polygon soup, we construct an inside/outside volume using an octree grid, and reconstruct the surface by contouring. Our novel algorithm can efficiently process large models containing millions of polygons and is capable of reproducing sharp features in the original geometry.
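At the heart of this kind of repair is classifying grid points as inside or outside the polygon soup. The compact sketch below does this with a single ray-parity test (Moeller-Trumbore intersection along +x); robust implementations, including the paper's, are considerably more careful about rays that graze edges or vertices. All names are illustrative.

```python
# Inside/outside classification by ray parity: cast a +x ray from the query
# point, count triangle crossings, and report odd parity as "inside".
import numpy as np

def ray_hits_triangle(origin, direction, a, b, c, eps=1e-9):
    # Moeller-Trumbore ray/triangle intersection, returning True for t > 0.
    e1, e2 = b - a, c - a
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False
    inv = 1.0 / det
    t_vec = origin - a
    u = np.dot(t_vec, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps

def is_inside(point, vertices, triangles):
    direction = np.array([1.0, 0.0, 0.0])
    hits = sum(ray_hits_triangle(point, direction, *vertices[t]) for t in triangles)
    return hits % 2 == 1
```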
---
paper_title: Automatic restoration of polygon models
paper_content:
We present a fully automatic technique which converts an inconsistent input mesh into an output mesh that is guaranteed to be a clean and consistent mesh representing the closed manifold surface of a solid object. The algorithm removes all typical mesh artifacts such as degenerate triangles, incompatible face orientation, non-manifold vertices and edges, overlapping and penetrating polygons, internal redundant geometry, as well as gaps and holes up to a user-defined maximum size ρ. Moreover, the output mesh always stays within a prescribed tolerance ε to the input mesh. Due to the effective use of a hierarchical octree data structure, the algorithm achieves high voxel resolution (up to 4096³ on a 2GB PC) and processing times of just a few minutes for moderately complex objects. We demonstrate our technique on various architectural CAD models to show its robustness and reliability.
---
paper_title: Robust Reconstruction of Watertight 3D Models from Non-uniformly Sampled Point Clouds Without Normal Information
paper_content:
We present a new volumetric method for reconstructing watertight triangle meshes from arbitrary, unoriented point clouds. While previous techniques usually reconstruct surfaces as the zero level-set of a signed distance function, our method uses an unsigned distance function and hence does not require any information about the local surface orientation. Our algorithm estimates local surface confidence values within a dilated crust around the input samples. The surface which maximizes the global confidence is then extracted by computing the minimum cut of a weighted spatial graph structure. We present an algorithm that efficiently converts this cut into a closed, manifold triangle mesh with a minimal number of vertices. The use of an unsigned distance function avoids the topological noise artifacts caused by misalignment of 3D scans, which are common to most volumetric reconstruction techniques. Due to a hierarchical approach, our method efficiently produces solid models of low genus even for noisy and highly irregular data containing large holes, without losing fine details in densely sampled regions. We show several examples for different application settings such as model generation from raw laser-scanned data, image-based 3D reconstruction, and mesh repair.
---
paper_title: Robust tetrahedral meshing of triangle soups
paper_content:
We propose a novel approach to generate coarse tetrahedral meshes which can be used in interactive simulation frameworks. The proposed algorithm processes unconstrained, i. e. unorientable and non-manifold triangle soups. Since the volume bounded by an unconstrained surface is not defined, we tetrahedralize the pseudo volume of the surface, namely the space that is intuitively occupied by the surface. Using our approach, we can generate coarse tetrahedral meshes from damaged surfaces and even triangle soups without any connectivity. Various examples underline the robustness of our approach. The usability of the resulting meshes is illustrated in the context of interactive deformable modeling.
---
paper_title: Simplification and Repair of Polygonal Models Using Volumetric Techniques
paper_content:
Two important tools for manipulating polygonal models are simplification and repair and we present voxel-based methods for performing both of these tasks. We describe a method for converting polygonal models to a volumetric representation in a way that handles models with holes, double walls, and intersecting parts. This allows us to perform polygon model repair simply by converting a model to and from the volumetric domain. We also describe a new topology-altering simplification method that is based on 3D morphological operators. Visually unimportant features such as tubes and holes may be eliminated from a model by the open and close morphological operators. Our simplification approach accepts polygonal models as input, scan converts these to create a volumetric description, performs topology modification, and then converts the results back to polygons. We then apply a topology-preserving polygon simplification technique to produce a final model. Our simplification method produces results that are everywhere manifold.
---
paper_title: Improved space carving method for merging and interpolating multiple range images using information of light sources of active stereo
paper_content:
For merging multiple range images obtained by range scanners while filling holes caused by unmeasured regions, space carving is a simple and effective method. However, it often fails if the number of input range images is small, because unseen voxels that are not carved out remain in the volume. In this paper, we propose an improved space carving algorithm that produces stable results. In the proposed method, a discriminant function defined on the volume is used to estimate whether each voxel is inside or outside the objects. Moreover, in the particular case that the range images are obtained by an active stereo method, information about the positions of the light sources can be used to improve the accuracy of the results.
---
paper_title: Taking consensus of signed distance field for complementing unobservable surface
paper_content:
When we use range finders to observe the shape of an object, many occluded areas may occur. These become holes and gaps in the model and make the model undesirable to use for various applications. We propose a novel method to fill holes and gaps and complement such an incomplete model. We use a signed distance field (SDF) as an intermediate representation, which stores Euclidean signed distances from a voxel to the nearest point of the mesh model. Since the signs of an SDF become unstable around holes or gaps, we take a consensus of the signed distances of neighboring voxels by estimating the consistency of the SDF. Once we make the SDF consistent, we can efficiently fill holes and gaps.
---
paper_title: Hole Filling of a 3D Model by Flipping Signs of a Signed Distance Field in Adaptive Resolution
paper_content:
When we use range finders to observe the shape of an object, many occluded areas may occur. These become holes and gaps in the model and make it undesirable for various applications. We propose a novel method to fill holes and gaps to complete this incomplete model. As an intermediate representation, we use a signed distance field (SDF), which stores euclidean signed distances from a voxel to the nearest point of the mesh model. By using an SDF, we can obtain interpolating surfaces for holes and gaps. The proposed method generates an interpolating surface that becomes smoothly continuous with real surfaces by minimizing the area of the interpolating surface. Since the isosurface of an SDF can be identified as being a real or interpolating surface from the magnitude of signed distances, our method computes the area of an interpolating surface in the neighborhood of a voxel both before and after flipping the sign of the signed distance of the voxel. If the area is reduced by flipping the sign, then our method changes the sign for the voxel. Therefore, we minimize the area of the interpolating surface by iterating this computation until convergence. Unlike methods based on partial differential equations (PDEs), our method does not require any boundary condition and the initial state that we use is automatically obtained by computing the distance to the closest point of the real surface. Moreover, because our method can be applied to an SDF of adaptive resolution, our method efficiently interpolates large holes and gaps of high curvature. We tested the proposed method with both synthesized and real objects and evaluated the interpolating surfaces.
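As a crude stand-in for the area-minimizing sign flipping described here, the sketch below greedily flips the sign of an interpolated voxel whenever that reduces the number of sign disagreements with its six neighbors, sweeping until no flip helps; this discrete count only approximates the interpolating-surface area, and all names are illustrative.

```python
# Greedy sign flipping on an SDF: flip interpolated voxels whose flipped sign
# agrees better with the six-neighborhood, as a discrete proxy for minimizing
# the area of the interpolating surface.
import numpy as np

def flip_signs(sdf, interpolated_mask, max_sweeps=50):
    s = np.sign(sdf).astype(int)
    s[s == 0] = 1
    shape = s.shape

    def disagreements(sign, x, y, z):
        total = 0
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nx, ny, nz = x + dx, y + dy, z + dz
            if 0 <= nx < shape[0] and 0 <= ny < shape[1] and 0 <= nz < shape[2]:
                total += sign != s[nx, ny, nz]
        return total

    for _ in range(max_sweeps):
        changed = False
        for x, y, z in zip(*np.nonzero(interpolated_mask)):
            here = s[x, y, z]
            if disagreements(-here, x, y, z) < disagreements(here, x, y, z):
                s[x, y, z] = -here
                changed = True
        if not changed:
            break
    return s * np.abs(sdf)
```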
---
paper_title: A volumetric method for building complex models from range images
paper_content:
A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles.
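The cumulative update at the core of the method reduces to a running weighted average per voxel. A minimal sketch, assuming the truncated signed distance d and weight w of the new range image have already been resampled onto the voxel grid:

```python
# Weighted fusion of one range image into the cumulative volume: the stored
# distance D and weight W are updated as a running weighted average.
import numpy as np

def fuse(D, W, d, w):
    """D, W: current volume and weights; d, w: contribution of one range image."""
    new_W = W + w
    new_D = np.where(new_W > 0, (W * D + w * d) / np.maximum(new_W, 1e-12), D)
    return new_D, new_W
```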
---
paper_title: Poisson Surface Reconstruction
paper_content:
We show that surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, our Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. We describe a spatially adaptive multiscale algorithm whose time and space complexities are proportional to the size of the reconstructed model. Experimenting with publicly available scan data, we demonstrate reconstruction of surfaces with greater detail than previously achievable.
---
paper_title: Interpolating and approximating implicit surfaces from polygon soup
paper_content:
This paper describes a method for building interpolating or approximating implicit surfaces from polygonal data. The user can choose to generate a surface that exactly interpolates the polygons, or a surface that approximates the input by smoothing away features smaller than some user-specified size. The implicit functions are represented using a moving least-squares formulation with constraints integrated over the polygons. The paper also presents an improved method for enforcing normal constraints and an iterative procedure for ensuring that the implicit surface tightly encloses the input vertices.
---
paper_title: Reconstruction and representation of 3D objects with radial basis functions
paper_content:
We use polyharmonic Radial Basis Functions (RBFs) to reconstruct smooth, manifold surfaces from point-cloud data and to repair incomplete meshes. An object's surface is defined implicitly as the zero set of an RBF fitted to the given surface data. Fast methods for fitting and evaluating RBFs allow us to model large data sets, consisting of millions of surface points, by a single RBF, which was previously an impossible task. A greedy algorithm in the fitting process reduces the number of RBF centers required to represent a surface and results in significant compression and further computational advantages. The energy-minimisation characterisation of polyharmonic splines results in a "smoothest" interpolant. This scale-independent characterisation is well-suited to reconstructing surfaces from non-uniformly sampled data. Holes are smoothly filled and surfaces smoothly extrapolated. We use a non-interpolating approximation when the data is noisy. The functional representation is in effect a solid model, which means that gradients and surface normals can be determined analytically. This helps generate uniform meshes, and we show that the RBF representation has advantages for mesh simplification and remeshing applications. Results are presented for real-world rangefinder data.
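A dense, small-scale sketch of the implicit fit (without the fast evaluation and greedy center reduction that make the paper's approach scale) might look as follows: on-surface points are constrained to zero and points offset along the normals to their signed offset, and a polyharmonic kernel plus a low-degree polynomial is fitted by solving one linear system. The sign convention and all names are assumptions.

```python
# Small-scale implicit RBF fit with off-surface constraints and the
# polyharmonic kernel phi(r) = r^3 plus an affine polynomial part.
import numpy as np

def fit_rbf(points, normals, offset=0.01):
    centers = np.vstack([points, points + offset * normals, points - offset * normals])
    values = np.concatenate([np.zeros(len(points)),
                             offset * np.ones(len(points)),
                             -offset * np.ones(len(points))])
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    K = r ** 3                                            # polyharmonic kernel
    P = np.hstack([np.ones((len(centers), 1)), centers])  # affine polynomial part
    n, m = len(centers), P.shape[1]
    A = np.block([[K, P], [P.T, np.zeros((m, m))]])
    b = np.concatenate([values, np.zeros(m)])
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    w, c = sol[:n], sol[n:]

    def evaluate(x):
        rr = np.linalg.norm(x[None, :] - centers, axis=-1)
        return rr ** 3 @ w + c[0] + x @ c[1:]

    return evaluate
```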
---
paper_title: Filling Holes in Complex Surfaces using Oriented Voxel Diffusion
paper_content:
Range scanning devices often yield imperfect surface sampling for real-world models with complex features. The resulting holes in the surface are commonly filled with smooth patches conforming to the boundaries. We introduce an oriented voxel diffusion method to fill holes in complex surfaces. First, an initial field of oriented distance is measured according to the existing surface. The implicit surface of the oriented distance field coincides with the existing surface. Second, the oriented distance field is diffused into the hole until the implicit surface converges. In particular, the orientation information in the distance field is used to control the diffusion direction accurately. This allows the method to restore sharp features.
---
paper_title: Inpainting surface holes
paper_content:
An algorithm for filling in surface holes is introduced in this paper. The basic idea is to represent the surface of interest in implicit form, and fill in the holes with a system of geometric partial differential equations derived from image inpainting algorithms. The framework and examples with synthetic and real data are presented.
---
paper_title: Automatic restoration of polygon models
paper_content:
We present a fully automatic technique which converts an inconsistent input mesh into an output mesh that is guaranteed to be a clean and consistent mesh representing the closed manifold surface of a solid object. The algorithm removes all typical mesh artifacts such as degenerate triangles, incompatible face orientation, non-manifold vertices and edges, overlapping and penetrating polygons, internal redundant geometry, as well as gaps and holes up to a user-defined maximum size ρ. Moreover, the output mesh always stays within a prescribed tolerance ε to the input mesh. Due to the effective use of a hierarchical octree data structure, the algorithm achieves high voxel resolution (up to 4096³ on a 2GB PC) and processing times of just a few minutes for moderately complex objects. We demonstrate our technique on various architectural CAD models to show its robustness and reliability.
---
paper_title: Dual contouring of hermite data
paper_content:
This paper describes a new method for contouring a signed grid whose edges are tagged by Hermite data (i.e., exact intersection points and normals). This method avoids the need to explicitly identify and process "features" as required in previous Hermite contouring methods. Using a new, numerically stable representation for quadratic error functions, we develop an octree-based method for simplifying contours produced by this method. We next extend our contouring method to these simplified octrees. This new method imposes no constraints on the octree (such as being a restricted octree) and requires no "crack patching". We conclude with a simple test for preserving the topology of the contour during simplification.
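For illustration, the vertex-placement step of dual contouring can be sketched as a least-squares minimisation of the quadratic error function built from a cell's Hermite samples. The regularised solve below is a common practical variant and toy data, not the paper's own numerically stable QEF representation.

```python
import numpy as np

def qef_vertex(points, normals, reg=1e-3):
    """Place a dual-contouring vertex by minimising
    E(x) = sum_i (n_i . (x - p_i))^2 over the Hermite samples of one cell.
    A small Tikhonov term pulls the solution toward the centroid of the
    intersection points to keep the system well-conditioned."""
    n = np.asarray(normals, dtype=float)
    p = np.asarray(points, dtype=float)
    centroid = p.mean(axis=0)
    A = np.vstack([n, reg * np.eye(3)])
    b = np.concatenate([np.einsum('ij,ij->i', n, p), reg * centroid])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hermite data of a sharp corner: three planes meeting at (0.25, 0.25, 0.25).
pts = [(0.25, 0.0, 0.1), (0.0, 0.25, 0.2), (0.1, 0.3, 0.25)]
nrm = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(qef_vertex(pts, nrm))   # ~ [0.25, 0.25, 0.25]
```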
---
paper_title: Efficient representation and extraction of 2-manifold isosurfaces using kd-trees
paper_content:
In this paper, we propose the utilization of a kd-tree based hierarchy as an implicit object representation. Compared to an octree, the kd-tree based hierarchy is superior in terms of adaptation to the object surface. In consequence, we obtain considerably more compact implicit representations especially in case of thin object structures. We describe a new isosurface extraction algorithm for this kind of implicit representation. In contrast to related algorithms for octrees, it generates 2-manifold meshes even for kd-trees with cells containing multiple surface components. The algorithm retains all the good properties of the dual contouring approach [10] like feature preservation, computational efficiency, etc. In addition, we present a simplification framework for the surfaces represented by the kd-tree based on quadric error metrics. We adapt this framework to quantify the influence of topological changes, thereby allowing controlled topological simplification of the object. The advantages of the new algorithm are demonstrated by several examples.
---
paper_title: Feature sensitive surface extraction from volume data
paper_content:
The representation of geometric objects based on volumetric data structures has advantages in many geometry processing applications that require, e.g., fast surface interrogation or boolean operations such as intersection and union. However, surface based algorithms like shape optimization (fairing) or freeform modeling often need a topological manifold representation where neighborhood information within the surface is explicitly available. Consequently, it is necessary to find effective conversion algorithms to generate explicit surface descriptions for the geometry which is implicitly defined by a volumetric data set. Since volume data is usually sampled on a regular grid with a given step width, we often observe severe alias artifacts at sharp features on the extracted surfaces. In this paper we present a new technique for surface extraction that performs feature sensitive sampling and thus reduces these alias effects while keeping the simple algorithmic structure of the standard Marching Cubes algorithm. We demonstrate the effectiveness of the new technique with a number of application examples ranging from CSG modeling and simulation to surface reconstruction and remeshing of polygonal models.
---
paper_title: Solid representation and operation using extended octrees
paper_content:
Solid modelers must be based on reliable and fast algorithms for Boolean operations. The octree model, as well as several generalizations (polytrees, integrated polytrees, extended octrees), is specially well suited for these algorithms and can be used either as a primary or as a secondary model in solid modeling systems. This paper is concerned with a precise definition of the extended octree model that allows the representation of nonmanifold objects with planar faces and, consequently, is closed under Boolean operations on polyhedrons. Boolean nodes and nearly vertex nodes are introduced, and the model is discussed in comparison with related representations. A fast algorithm for the direct generation of the extended octree from the geometry of the base polygon in extrusion solids is presented, and its complexity is studied. Boolean operation algorithms are introduced.
---
paper_title: Robust repair of polygonal models
paper_content:
We present a robust method for repairing arbitrary polygon models. The method is guaranteed to produce a closed surface that partitions the space into disjoint internal and external volumes. Given any model represented as a polygon soup, we construct an inside/outside volume using an octree grid, and reconstruct the surface by contouring. Our novel algorithm can efficiently process large models containing millions of polygons and is capable of reproducing sharp features in the original geometry.
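The inside/outside decision that underlies such volumetric repair can be illustrated with single-ray parity counting over a triangle soup; robust methods instead vote across many rays and octree cells. The sketch below is a textbook toy with invented data, not the paper's algorithm.

```python
import numpy as np

def ray_hits_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection test (counts only t > 0)."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps

def point_is_inside(point, triangles, direction=(0.731, 0.431, 0.529)):
    """Parity ray casting: an odd number of crossings means 'inside'."""
    d = np.asarray(direction, float)
    hits = sum(ray_hits_triangle(np.asarray(point, float), d, *map(np.asarray, tri))
               for tri in triangles)
    return hits % 2 == 1

# Closed tetrahedron with vertices a, b, c, d.
a, b, c, d = map(np.array, [(0., 0, 0), (1., 0, 0), (0., 1, 0), (0., 0, 1)])
faces = [(a, b, c), (a, b, d), (a, c, d), (b, c, d)]
print(point_is_inside((0.1, 0.1, 0.1), faces), point_is_inside((1.0, 1.0, 1.0), faces))
```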
---
paper_title: Editing the topology of 3D models by sketching
paper_content:
We present a method for modifying the topology of a 3D model with user control. The heart of our method is a guided topology editing algorithm. Given a source model and a user-provided target shape, the algorithm modifies the source so that the resulting model is topologically consistent with the target. Our algorithm permits removing or adding various topological features (e.g., handles, cavities and islands) in a common framework and ensures that each topological change is made by minimal modification to the source model. To create the target shape, we have also designed a convenient 2D sketching interface for drawing 3D line skeletons. As demonstrated in a suite of examples, the use of sketching allows more accurate removal of topological artifacts than previous methods, and enables creative designs with specific topological goals.
---
paper_title: Consistent solid and boundary representations from arbitrary polygonal data
paper_content:
Consistent representations of the boundary and interior of three-dimensional solid objects are required by applications ranging from interactive visualization to finite element analysis. However, most commonly available models of solid objects contain errors and inconsistencies. We describe an algorithm that automatically constructs consistent representations of the solid objects modeled by an arbitrary set of polygons. The key feature of our algorithm is that it first partitions space into a set of polyhedral regions and then determines which regions are solid based on region adjacency relationships. From the solid polyhedral regions, we are able to output consistent boundary and solid representations in a variety of file formats. Unlike previous approaches, our solid-based approach is effective even when the input polygons intersect, overlap, are wrongly-oriented, have T-junctions, or are unconnected.
---
paper_title: Interactive topology-aware surface reconstruction
paper_content:
The reconstruction of a complete watertight model from scan data is still a difficult process. In particular, since scanned data is often incomplete, the reconstruction of the expected shape is an ill-posed problem. Techniques that reconstruct poorly-sampled areas without any user intervention fail in many cases to faithfully reconstruct the topology of the model. The method that we introduce in this paper is topology-aware: it uses minimal user input to make correct decisions at regions where the topology of the model cannot be automatically induced with a reasonable degree of confidence. We first construct a continuous function over a three-dimensional domain. This function is constructed by minimizing a penalty function combining the data points, user constraints, and a regularization term. The optimization problem is formulated in a mesh-independent manner, and mapped onto a specific mesh using the finite-element method. The zero level-set of this function is a first approximation of the reconstructed surface. At complex under-sampled regions, the constraints might be insufficient. Hence, we analyze the local topological stability of the zero level-set to detect weak regions of the surface. These regions are suggested to the user for adding local inside/outside constraints by merely scribbling over a 2D tablet. Each new user constraint modifies the minimization problem, which is solved incrementally. The process is repeated, converging to a topology-stable reconstruction. Reconstructions of models acquired by a structured-light scanner with a small number of scribbles demonstrate the effectiveness of the method.
---
paper_title: Context-based surface completion
paper_content:
Sampling complex, real-world geometry with range scanning devices almost always yields imperfect surface samplings. These "holes" in the surface are commonly filled with a smooth patch that conforms with the boundary. We introduce a context-based method: the characteristics of the given surface are analyzed, and the hole is iteratively filled by copying patches from valid regions of the given surface. In particular, the method needs to determine best matching patches, and then, fit imported patches by aligning them with the surrounding surface. The completion process works top down, where details refine intermediate coarser approximations. To align an imported patch with the existing surface, we apply a rigid transformation followed by an iterative closest point procedure with non-rigid transformations. The surface is essentially treated as a point set, and local implicit approximations aid in measuring the similarity between two point set patches. We demonstrate the method at several point-sampled surfaces, where the holes either result from imperfect sampling during range scanning or manual removal.
---
paper_title: Removing excess topology from isosurfaces
paper_content:
Many high-resolution surfaces are created through isosurface extraction from volumetric representations, obtained by 3D photography, CT, or MRI. Noise inherent in the acquisition process can lead to geometrical and topological errors. Reducing geometrical errors during reconstruction is well studied. However, isosurfaces often contain many topological errors in the form of tiny handles. These nearly invisible artifacts hinder subsequent operations like mesh simplification, remeshing, and parametrization. In this article we present a practical method for removing handles in an isosurface. Our algorithm makes an axis-aligned sweep through the volume to locate handles, compute their sizes, and selectively remove them. The algorithm is designed to facilitate out-of-core execution. It finds the handles by incrementally constructing and analyzing a Reeb graph. The size of a handle is measured by a short nonseparating cycle. Handles are removed robustly by modifying the volume rather than attempting "mesh surgery." Finally, the volumetric modifications are spatially localized to preserve geometrical detail. We demonstrate topology simplification on several complex models, and show its benefits for subsequent surface processing.
---
| Title: Fixing Geometric Errors on Polygonal Models: A Survey
Section 1: Introduction
Description 1: Introduce the prevalence of polygonal models in various domains, their creation methods, and the importance of geometric and topological correctness.
Section 2: Geometric Errors
Description 2: Describe the types of geometric errors found in polygonal models, such as gaps, holes, non-manifold elements, and self-intersections.
Section 3: Overview
Description 3: Provide an overview of the paper, introducing the categorization of methods into mesh-based and volume-based approaches, and referencing related literature.
Section 4: Mesh-Based Approaches
Description 4: Discuss methods that directly identify and fix errors on polygonal surfaces by performing modifications such as adding or removing vertices and altering polygon connectivity.
Section 5: Gaps and Holes
Description 5: Explain the techniques for filling gaps and holes, including boundary detection, stitching, triangulation, and example-based methods.
Section 6: Non-Manifold Edges
Description 6: Describe approaches for handling non-manifold edges that often represent redundant geometry or internal membranes.
Section 7: Geometric Intersections
Description 7: Outline the challenges and strategies in detecting and resolving geometric intersections.
Section 8: Volume-Based Approaches
Description 8: Detail methods that convert the model into a volumetric grid to determine inside and outside regions and then reconstruct a polygonal boundary.
Section 9: Comparison and Discussion
Description 9: Summarize and compare mesh-based and volume-based methods, and discuss potential research directions for improving model repair techniques. |
Antenna Performance Improvement Techniques for Energy Harvesting: A Review Study | 13 | ---
paper_title: A Study on a Gain-Enhanced Antenna for Energy Harvesting using Adaptive Particle Swarm Optimization
paper_content:
In this paper, the adaptive particle swarm optimization (APSO) algorithm is employed to design a gain-enhanced antenna with a reflector for energy harvesting. We placed the reflector below the main radiating element. Its back-radiated field is reflected and added to the forward radiated field, which could increase the antenna gain. We adopt the adaptive particle swarm optimization (APSO) algorithm, which improves the speed of convergence with a high-frequency solver. The results show that the performance of the optimized design successfully satisfies the design goals for the frequency band, gain and axial ratio.
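To show the shape of such an optimisation loop, here is a plain (non-adaptive) particle swarm sketch driving a stand-in cost function. A real design loop would call a full-wave EM solver, and all parameter names below are invented for the example.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimisation over box bounds.  The adaptive variant
    (APSO) additionally tunes w/c1/c2 during the run."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = np.random.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Hypothetical cost: penalise deviation from a target gain computed by a
# surrogate of the EM solver (stand-in function, not a real solver).
def antenna_cost(params):
    patch_w, patch_l, reflector_gap = params
    gain = 6.0 + 2.5 * np.exp(-((reflector_gap - 0.25) / 0.1) ** 2) - abs(patch_w - patch_l)
    return -gain

best, cost = pso(antenna_cost, bounds=[(0.8, 1.2), (0.8, 1.2), (0.05, 0.5)])
print(best, -cost)
```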
---
paper_title: Energy harvesting in wireless sensor networks: A comprehensive review
paper_content:
Recently, Wireless Sensor Networks (WSNs) have attracted lot of attention due to their pervasive nature and their wide deployment in Internet of Things, Cyber Physical Systems, and other emerging areas. The limited energy associated with WSNs is a major bottleneck of WSN technologies. To overcome this major limitation, the design and development of efficient and high performance energy harvesting systems for WSN environments are being explored. We present a comprehensive taxonomy of the various energy harvesting sources that can be used by WSNs. We also discuss various recently proposed energy prediction models that have the potential to maximize the energy harvested in WSNs. Finally, we identify some of the challenges that still need to be addressed to develop cost-effective, efficient, and reliable energy harvesting systems for the WSN environment.
---
paper_title: A Survey on RF Energy Harvesting: Circuits and Protocols
paper_content:
Abstract Recent advancement in semiconductor technology and fabrication process enable realization of the concept of Radio Frequency (RF) energy harvesting. RF energy harvesting, a process in which energy contained in electromagnetic waves is converted into useful electrical energy, will help realize perennially operating sensors. With energy replenishment capability and protocol design, RF energy harvesting sensors can attain the desirable characteristics of sensor design, lifetime and network performance. This paper investigates detailed aspects of recent research on RF energy harvesting circuits and protocols. We also discuss the impact of energy replenishment capability and protocol design on RF energy harvesting sensor networks.
---
paper_title: State-of-the-art research study for green cloud computing
paper_content:
Although cloud computing has rapidly emerged as a widely accepted computing paradigm, the research on cloud computing is still at an early stage. Cloud computing suffers from different challenging issues related to security, software frameworks, quality of service, standardization, and power consumption. Efficient energy management is one of the most challenging research issues. The core services in cloud computing system are the SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service). In this paper, we study state-of-the-art techniques and research related to power saving in the IaaS of a cloud computing system, which consumes a huge part of total energy in a cloud computing system. At the end, some feasible solutions for building green cloud computing are proposed. Our aim is to provide a better understanding of the design challenges of energy management in the IaaS of a cloud computing system.
---
paper_title: Beamforming power emitter design with 2×2 antenna array and phase control for microwave/RF-based energy harvesting
paper_content:
In this paper, a 915-MHz RF-power emitter with an antenna array controlled by phase shifters is proposed to make the directional energy beam point to the energy harvesting (EH) receiver. For the beamforming requirement, a 2×2 patch antenna array and tunable 360° reflection-type phase shifters are designed. Measurement results show the energy transmission efficiency can be improved to 3.9 times that of a single patch antenna at one meter away under the same input power level. Furthermore, by applying ±15° beamforming control, transmission efficiency is increased by 2.7 times when the receiving antenna is placed at an angle of +24°.
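The phase-control idea can be illustrated with the array factor of a 2×2 array: choosing per-element phases for a steering direction concentrates the radiated power toward the receiver. The sketch below assumes half-wavelength spacing and isotropic elements, so it only approximates a real patch array.

```python
import numpy as np

def array_factor(theta, phi, steer_theta, steer_phi, nx=2, ny=2, d=0.5):
    """Normalised array factor of an nx-by-ny planar array (spacing d in
    wavelengths).  Element phases are chosen so the beam points at
    (steer_theta, steer_phi); hardware phase shifters realise these phases."""
    k = 2 * np.pi                       # wavenumber times wavelength
    ux = np.sin(theta) * np.cos(phi) - np.sin(steer_theta) * np.cos(steer_phi)
    uy = np.sin(theta) * np.sin(phi) - np.sin(steer_theta) * np.sin(steer_phi)
    m, n = np.meshgrid(np.arange(nx), np.arange(ny))
    af = np.sum(np.exp(1j * k * d * (m * ux + n * uy)))
    return abs(af) / (nx * ny)

# Steer the beam 24 degrees off broadside and compare with an unsteered array.
t = np.radians(24)
print(array_factor(t, 0.0, t, 0.0))     # ~1.0: full array gain toward the receiver
print(array_factor(t, 0.0, 0.0, 0.0))   # <1.0: unsteered array loses power at 24 deg
```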
---
paper_title: Parasitic stacked slot patch antenna for DTT energy harvesting
paper_content:
In this paper a rectangular patch antenna with two slots, integrated with a stacked parasitic antenna, is presented. These techniques enhance bandwidth (BW) and gain, addressing two of the major limitations of traditional patch antennas. The antenna is designed to operate at 754 MHz for energy harvesting applications from the Portuguese Digital Terrestrial Television (DTT) signal. The measured antenna BW is 22.5 MHz, representing an enhancement of around 17 MHz with respect to the simulated BW of the corresponding single patch configuration.
---
paper_title: Design of circular patch microstrip ultra wideband antenna with two notch filters
paper_content:
This paper presents a small-size UWB patch antenna with two notch filters. U-shaped and J-shaped slots are loaded in the patch of the antenna for WiMAX and WLAN frequency band rejection. The antenna is simulated using the commercially available CST Microwave Studio software. The slot dimensions are systematically adjusted and optimized to achieve the desired band rejection responses. The achieved results demonstrate that the antenna has good performance over the entire working frequency band (3.1 GHz to 10.6 GHz) except the WiMAX (3.15–3.7 GHz) and WLAN (5.15–5.85 GHz) notched frequency bands. Moreover, the antenna was fabricated and the simulation results are experimentally validated. The measured results demonstrate a good agreement with the simulations.
---
paper_title: Multi-state UWB circular patch antenna based on WiMAX and WLAN notch filters operation
paper_content:
This paper presents a multi-state reconfigurable UWB circular patch antenna with two notch filters. The two notch filters can be implemented using U-shaped and J-shaped slots embedded on the patch for WiMAX and WLAN frequency band rejection. In order to add reconfigurable characteristics to the patch antenna, two copper strips are placed on the slots to represent the ON or OFF switching state of an ideal PIN diode. By using this simple switching technique, the current distribution of the patch changes and enables the antenna to have four modes of operation. The achieved results demonstrate that the antenna can function over the entire UWB working frequency range (3.1 GHz to 10.6 GHz) in one of the switching configurations. On the other hand, it rejects one or both of the WiMAX (3.13–3.7 GHz) and WLAN (5.15–5.85 GHz) frequency bands in the other three switching configurations. The antenna is simulated using the electromagnetic simulation software CST Studio Suite. The obtained results were experimentally validated and good agreement was observed.
---
paper_title: An inverted-F antenna integrated with solar cells for energy harvesting
paper_content:
This paper discusses the integration of an inverted-F antenna with solar cells for power harvesting. The objective is to achieve a multi-source energy collecting system in order to improve the RF to DC conversion efficiency. The inverted-F antenna can be used either for communication or to scavenge the ambient RF energy when connected to the appropriate rectifying circuit. A prototype of the antenna is fabricated and tested as a proof of concept.
---
paper_title: Meshed Patch Antennas Integrated on Solar Cells
paper_content:
This letter presents the study of integrating meshed patch antennas directly onto the solar cells of a small satellite to save valuable surface real estate. The cover glass of the solar cell is used as the substrate for the antennas. The integrated patch antennas are designed to have sufficient optical transparency to ensure the proper functionality of the solar cells. A prototype meshed patch antenna is designed and integrated on after-market solar cells. The antenna has an optical transparency of 93%, and the measurements agree well with the design.
---
paper_title: A wideband coupled E-shaped patch antenna for RF energy harvesting
paper_content:
Wireless energy harvesting of incident waves for electronic devices through rectennas, investigated in recent years, has proved to be efficient. In work published to date, the antennas used in rectennas have limited bandwidth or are bulky. This paper presents an E-patch antenna with improved bandwidth and efficiency. Techniques to make the trade-offs required for widening the frequency band without variation in efficiency are introduced. The E-patch is electromagnetically coupled with similar patches and the feeding point location is chosen for a further increase in bandwidth. An increase of 33% in bandwidth is obtained, with the advantage of multiband operation. The E-patch is investigated for WLAN frequency bands and the coupled patch is investigated for ISM as well as WLAN frequencies.
---
paper_title: Designing dual-port pixel antenna for ambient RF energy harvesting using genetic algorithm
paper_content:
In this paper we provide a design for a dual-port pixel antenna for energy harvesting in ambient RF electromagnetic fields. We use Z-parameters to analyze the equivalent circuit network of the antenna and find the relationship between the connections of the pixels in the antenna and the received power. Finding the optimal connection configuration for the pixels which maximizes the power over a frequency band is then achieved by using the genetic algorithm. The simulation result shows that the proposed dual-port antenna can achieve a better power performance than a single dipole antenna of similar size. Measurement results are also provided.
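A minimal bit-string genetic algorithm over pixel-connection configurations looks roughly as follows. The fitness function here is a stand-in (a real run would evaluate received power from the antenna's Z-parameter network), and all names are illustrative.

```python
import random

def genetic_search(fitness, n_bits, pop_size=30, generations=60, p_mut=0.05):
    """Standard bit-string GA (tournament selection, one-point crossover,
    bit-flip mutation).  Each bit stands for one inter-pixel connection
    being open or closed."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                                   # elitism
        while len(nxt) < pop_size:
            a, b = (max(random.sample(scored, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, n_bits)
            child = [bit ^ (random.random() < p_mut) for bit in a[:cut] + b[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness: reward configurations close to a hand-picked pattern
# (a real evaluation would compute harvested power over the frequency band).
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
fitness = lambda bits: sum(b == t for b, t in zip(bits, target))
print(genetic_search(fitness, n_bits=len(target)))
```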
---
paper_title: Optimizing RF energy harvester design for low power applications by integrating multi stage voltage doubler on patch antenna
paper_content:
This paper presents the analysis and design of a patch antenna with the minimum achievable dimensions and permittivity that produces a maximum power output. RF (Radio Frequency) energy harvesting is a recurring theme that is highly effective and applicable in situations where remote charging and wireless power transmission take place. The potential use of RF energy was investigated experimentally. The aim of this work is to investigate the power levels that can be harvested from the air and processed to achieve energy levels that are sufficient to charge low-power electronic circuits. An RF collection system has been specifically designed, constructed, and shown to successfully collect enough energy to power circuits. For an equivalent incident signal, the circuit can produce the required voltage across the load. This voltage can be used to power low-power sensors in sensor networks, ultimately replacing batteries.
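A back-of-the-envelope estimate of what a multi-stage voltage doubler can deliver helps put such designs in context: each stage ideally adds about 2·(Vpeak − Vdiode). The sketch below assumes a 50 Ω antenna port and a 0.3 V Schottky drop and ignores loading and matching losses, so it is an upper bound only.

```python
import math

def multiplier_output(p_in_dbm, n_stages, v_diode=0.3, r_antenna=50.0):
    """Ideal, unloaded output of an N-stage voltage doubler chain:
    roughly 2*N*(Vpeak - Vdiode).  Real rectennas deliver less because of
    diode losses under load and imperfect matching."""
    p_in = 10 ** (p_in_dbm / 10) / 1000          # dBm -> watts
    v_peak = math.sqrt(2 * p_in * r_antenna)     # peak voltage at a matched 50-ohm port
    return max(0.0, 2 * n_stages * (v_peak - v_diode))

# 10 dBm at the antenna (dedicated transmitter rather than ambient RF),
# Schottky diodes with an assumed 0.3 V drop:
for stages in (1, 3, 5):
    print(stages, "stage(s):", round(multiplier_output(10, stages), 2), "V unloaded")
```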
---
| Title: Antenna Performance Improvement Techniques for Energy Harvesting: A Review Study
Section 1: INTRODUCTION
Description 1: Write about the significance of energy harvesting in communication systems, especially in contexts with limited battery capacity. Introduce the focus of the paper, which is to review different techniques to improve antenna performance for energy harvesting.
Section 2: MODELING OF ENERGY HARVESTING
Description 2: Discuss various works on antenna energy harvesting techniques and summarize their advantages for smart applications with high efficiency and low cost.
Section 3: Phase control for microstrip patch array antenna
Description 3: Review the method of controlling phase shift to enhance energy harvesting in patch array antennas and discuss the effectiveness and efficiency improvements.
Section 4: Slots enhancement of antenna bandwidth
Description 4: Explore the approach of using slots on patch antennas to improve gain and bandwidth, including specific case studies and simulation results.
Section 5: Gain-enhanced antenna with reflector
Description 5: Examine the use of reflectors with patch antennas for gain enhancement, detailing procedures, algorithms used, and achieved performance improvements.
Section 6: Solar and RF energy harvesting of patch antenna
Description 6: Describe the integration of solar cells with patch antennas for dual-source energy harvesting, focusing on design, optimization, and performance.
Section 7: Coupled E-patch for bandwidth improvement
Description 7: Analyze the introduction of coupled E-patch antennas to boost bandwidth and efficiency, including comparative performance analysis with single patch antennas.
Section 8: Dual-port pixel antenna
Description 8: Discuss the concept of dual-port pixel antennas and how they optimize power collection using genetic algorithms.
Section 9: Substrate Integrated Waveguide (SIW)
Description 9: Explain the design and benefits of using SIW with fractal patches for dual-band energy harvesting, noting the specific performance metrics and challenges.
Section 10: Convert RF to DC power by rectenna
Description 10: Detail the design and functionality of rectennas for converting RF energy to DC power, including circuit design considerations and efficiency outcomes.
Section 11: Rectifier circuit
Description 11: Illustrate the rectifier circuits used in energy harvesting systems, focusing on voltage multiplication and diode characteristics to enhance output voltage.
Section 12: ANALYSIS AND DISCUSSION
Description 12: Provide a comprehensive analysis of the different techniques discussed, comparing their efficiency, feasibility, and application scope. Highlight the key findings and implications for future research and development.
Section 13: CONCLUSION
Description 13: Summarize the findings of the paper, emphasizing the most effective techniques for antenna performance improvement in energy harvesting. Suggest directions for future work in combining multiple methods to achieve optimal energy harvesting efficiency. |
Metamodel Instance Generation: A systematic literature review | 10 | ---
paper_title: Test Sequence Generation from UML Sequence Diagrams
paper_content:
In this paper, we present an approach to generate test sequences from UML 2.0 sequence diagrams. Sequence diagrams are one of the most widely used UML models in the software industry. Although sequence diagrams are used for modeling the dynamic aspects of the system, they can also be used for model based testing. Existing work does not encompass certain important features of UML 2.0 sequence diagrams. Our work considers many of the novel features of UML 2.0 sequence diagrams like alt, loop, opt and break to generate test sequences. These are important features as far as testing is concerned. Our work begins with defining the important types of relationship that can exist between the messages. Based on the relationship between the messages, the message sequences are generated. Our work considers an important feature of UML 2.0 sequence diagrams called the 'Execution Occurrence' to generate message sequences. Next, an intermediate representation of the sequence diagram is built. This intermediate representation is called the Sequence Dependency Graph (SDG). The message sequences are incorporated into the SDG. Finally, we discuss a traversal algorithm to generate test sequences from the SDG. Our method is fully automated and the test sequences generated can be used to check the correctness of the implementation under test.
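The traversal step can be sketched as a depth-first enumeration of paths through the dependency graph, one test sequence per branch. The graph and message names below are invented, and loop fragments are assumed to be unrolled beforehand, so this is only a conceptual illustration.

```python
from collections import defaultdict

def test_sequences(edges, start):
    """Enumerate message paths through a sequence-dependency graph by DFS.
    Branch nodes (from 'alt'/'opt'/'break' fragments) fan out into one test
    sequence per path."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if not graph[node]:                       # leaf: one complete sequence
            paths.append(path)
        for nxt in graph[node]:
            if nxt not in path:                   # guard against cycles
                stack.append((nxt, path + [nxt]))
    return paths

# Toy SDG for a login scenario: an alt fragment splits into success/failure.
edges = [("request", "checkPwd"), ("checkPwd", "grantAccess"),
         ("checkPwd", "showError"), ("grantAccess", "openSession")]
for seq in test_sequences(edges, "request"):
    print(" -> ".join(seq))
```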
---
paper_title: Model-based test cases synthesis using UML interaction diagrams
paper_content:
UML 2.0 interaction diagrams model interactions in complex systems by means of operation fragments and a systematic testing approach is required for the identification and selection of test cases. The major problem for test cases synthesis from such an interaction diagram is to arrive at a comprehensive system behavior in the presence of multiple, nested fragments. In this regard, our approach is towards systematic interpretation of flow of controls as well as their subsequent usage in the test case synthesis. We also simplify the proposed flow of controls on the basis of control primitives resulting from UML 2.0 fragments and bring it to a testable form known as intermediate testable model (ITM), which is suitable for deriving system level test cases.
---
paper_title: Regression testing with UML software designs: A survey
paper_content:
The unified modeling language (UML) designs come in a variety of different notations. UML designs can be quite large and interactions between various notations and the models they define can be difficult to assess. During the design phase, and between successive releases of systems, designs change. The impact of such changes and the resulting effect on behavior can be non-obvious and difficult to assess. This survey article explores techniques for such re-evaluation that can be classified as regression testing and suggests regression testing criteria for designs. These might vary depending on testing objectives and include both selective and regenerative regression testing approaches. The article provides a concise overview of regression testing approaches related to various UML design notations including use cases, class diagrams, sequence diagrams, activity diagrams, and statecharts, as well as combinations of these models. It discusses UML-related techniques involving cost and prioritization during selective regression testing. Finally, it evaluates these techniques with respect to inclusiveness, precision, efficiency, generality, accountability, and safety. An example is used throughout to illustrate how the techniques work.
---
paper_title: Closing the gap between modelling and java
paper_content:
Model-Driven Software Development is based on standardised models that are refined, transformed and eventually translated into executable code using code generators. However, creating plain text from well-structured models creates a gap that implies several drawbacks: Developers cannot continue to use their model-based tool machinery, relations between model elements and code fragments are hard to track and there is no easy way to rebuild models from their respective code. This paper presents an approach to bridge this gap for the Java programming language. It defines a full metamodel and text syntax specification for Java, from which a parser and a printer are generated. Through this, Java code can be handled like any other model. The implementation is validated with large test sets, example applications are shown, and future directions of research are discussed.
---
paper_title: Compiler test case generation methods: a survey and assessment
paper_content:
Software testing is an important and critical phase of the application software development life cycle. Testing is a time consuming and costly stage that requires a high degree of ingenuity. In the development stages of safety-critical and dependable computer software such as language compilers and real-time embedded software, testing activities consume about 50% of the project time. In this work we address the area of compiler testing. The aim of compiler testing is to verify that the compiler implementation conforms to its specifications, which is to generate object code that faithfully corresponds to the language semantics and syntax as specified in the language documentation. A compiler should be carefully verified before its release, since it has to be used by many users. Finding an optimal and complete test suite that can be used in the testing process is often an exhaustive task. Various methods have been proposed for the generation of compiler test cases. Many papers have been published on testing compilers, most of which address classical programming languages. In this paper, we assess and compare various compiler testing techniques with respect to some selected criteria and also propose some new research directions in compiler testing of modern programming languages.
---
paper_title: The object constraint language: precise modeling with UML
paper_content:
(All chapters conclude with "Summary".) Foreword. Preface. Acknowledgments. Introduction. Who Should Read This Book. How This Book Should Be Used. Typeface Conventions. Information on Related Subjects. 1. Why Write Constraints? Definition of Constraint. Use of Constraints in Other Techniques. Design by Contract. Definition of Contract. Contents of a Contract. Advantages of Contracts. Preconditions and Postconditions. Invariants. Advantages of Constraints. Better Documentation. Improved Precision. Communication without Misunderstanding. Declarative or Operational Constraints. Advantages of a Declarative Language. Notation: Natural Language or Mathematical Expressions. Summary: Requirements for OCL. 2. OCL Basics. The "Royal and Loyal" System Example. Putting Invariants on Attributes. Putting Invariants on Associated Classes. Dealing with Collections of Objects. Sets, Bags, and Sequences. Inheritance. Working with Enumerations. Writing Preconditions and Postconditions. Where to Start Writing Invariants. Broken Constraints. Summary. 3. The Complete Overview of OCL Constructs. Types and Instances. Value Types and Object Types. OCL Expressions and OCL Constraints. The Context of an OCL Expression. The Context of an Invariant. The Context of a Pre- or Postcondition. The self Keyword. Basic Types and Operators. The Boolean Type. The Integer and Real Types. The String Type. Model Types. Attributes from the UML Model. Operations from the UML Model. Class Operations and Attributes from the UML Model. Associations and Aggregations from the UML Model. Association Classes from the UML Model. Qualified Associations from the UML Model. Using Package Names in Navigations. Using Pathnames in Inheritance Relations. Enumeration Types. The Set, Bag, and Sequence Types. Treating Instances as Collections. Flattening Collections. Operations on All Collection Types. Operations with Variant Meaning. Operations for the Set Type. Operations for the Sequence Type. Operations That Iterate over Collection Elements. The select Operation. The reject Operation. The collect Operation. Shorthand Notation for collect. The forAll Operation. The exists Operation. The iterate Operation. Constructs for Postconditions. Operations Defined on Every OCL Type. Types as Objects. Type Conformance Rules. Precedence Rules. Comments. Undefined. Summary. 4. Modeling with Constraints. Constraints in a UML Model. Invariants. Invariants for Derived Attributes or Associations. Preconditions and Postconditions. Guards in State Transition Diagrams. Using Guards and Events in Pre- and Postconditions. Change Events in State Transition Diagrams. Type Invariants for Stereotypes. Where OCL Expressions Can Be Used. Constraints and Inheritance. Styles for Specifying Constraints. Avoiding Complex Navigation Expressions. Choice of Context Object. Use of allInstances. Splitting and Constraints. Adding Extra Operations or Attributes. Using the collect Shorthand. Solving Modeling Issues with Constraints. Abstract Classes. Specifying Uniqueness Constraints. Adding Details to the Model versus Adding Constraints. Cycles in Class Models. Constraints on Associations. Multiplicity Constraints. The Subset Constraint. The Or Constraint. Optional Multiplicity in Associations. Summary. 5. Extending OCL. A Word of Caution. Extending the Standard OCL Types. Adding New OCL Types. Operational Use of Constraints. Generating Code for Constraints. When to Check Constraints. What to Do When the Constraint Fails. Summary. Appendix A. OCL Basic Types and Collection Types. 
Basic Types. OclType. OclAny. OclExpression. Real. Integer. String. Boolean. Enumeration. Collection-Related Types. Collection. Set. Bag. Sequence. Appendix B. Formal Grammar. Bibliography. Index.
---
paper_title: The category-partition method for specifying and generating fuctional tests
paper_content:
An apparatus for recovering pulse-code modulated digital data from a readback signal has equipment for partially equalizing the readback signal, and equipment for completing the partial equalization of the readback signal and for detecting the digital data. The latter equipment includes a delay device for delaying the partially equalized readback signal, equipment for providing a relatively undelayed version of the readback signal, and a differential amplifier for providing a digital signal corresponding to the difference between the delayed readback signal and the relatively undelayed version of the readback signal. The delay device includes a filter for completing the above mentioned partial equalization with the aid of the equipment for providing the relatively undelayed version of the readback signal and the mentioned differential amplifier.
---
paper_title: Validation of Model Transformations - First Experiences Using a White Box Approach
paper_content:
Validation of model transformations is important for ensuring their quality. Successful validation must take into account the characteristics of model transformations and develop a suitable fault model on which test case generation can be based. In this paper, we report our experiences in validating a number of model transformations and propose three techniques that can be used for constructing test cases.
---
paper_title: Metamodel-based Test Generation for Model Transformations: an Algorithm and a Tool
paper_content:
In a Model-Driven Development context (MDE), model transformations allow memorizing and reusing design know-how, and thus automate parts of the design and refinement steps of a software development process. A model transformation program is a specific program, in the sense it manipulates models as main parameters. Each model must be an instance of a "metamodel", a metamodel being the specification of a set of models. Programming a model transformation is a difficult and error-prone task, since the manipulated data are clearly complex. In this paper, we focus on generating input test data (called test models) for model transformations. We present an algorithm to automatically build test models from a metamodel.
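A drastically simplified sketch of metamodel-driven instance generation: pick a root class and recursively instantiate references within their multiplicity bounds. The toy metamodel and the depth cut-off below are illustrative only; the paper's algorithm is more careful about satisfying lower bounds and avoiding infinite recursion.

```python
import random

# Toy metamodel: {class: [(reference, target class, lower, upper), ...]}.
# Real generators read this from an Ecore/MOF metamodel; names here are invented.
METAMODEL = {
    "StateMachine": [("states", "State", 1, 4)],
    "State":        [("outgoing", "Transition", 0, 2)],
    "Transition":   [("target", "State", 1, 1)],
}

def generate_instance(cls, mm, depth=0, max_depth=3):
    """Create one object of 'cls' and recursively populate every reference with
    a random number of children inside its multiplicity bounds.  The depth cap
    keeps recursive metamodels finite; a real generator would backtrack or link
    to existing objects so lower bounds still hold at the cut-off."""
    obj = {"type": cls, "id": f"{cls.lower()}{random.randint(0, 999)}"}
    for ref, target, lo, hi in mm[cls]:
        count = 0 if depth >= max_depth else random.randint(lo, hi)
        obj[ref] = [generate_instance(target, mm, depth + 1, max_depth)
                    for _ in range(count)]
    return obj

random.seed(7)
print(generate_instance("StateMachine", METAMODEL))
```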
---
paper_title: Automatic test case optimization using a bacteriological adaptation model: application to .NET components
paper_content:
In this paper, we present several complementary computational intelligence techniques that we explored in the field of .Net component testing. Mutation testing serves as the common backbone for applying classical and new artificial intelligence (AI) algorithms. With mutation tools, we know how to estimate the revealing power of test cases. With AI, we aim at automatically improving test case efficiency. We therefore looked first at genetic algorithms (GA) to solve the test optimization problem. The aim of the selection process is to generate test cases able to kill as many mutants as possible. We then propose a new AI algorithm that fits the test optimization problem better, called the bacteriological algorithm (BA): BAs behave better than GAs for this problem. However, between GAs and BAs, a family of intermediate algorithms exists: we explore the whole spectrum of these intermediate algorithms to determine whether an algorithm exists that would be more efficient than BAs. The approaches are compared on a .Net system.
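The memorisation idea behind the bacteriological approach can be sketched as a greedy loop over mutation "kill sets". This toy omits the mutation of test cases themselves and the GA/BA spectrum studied in the paper; data and names are invented.

```python
def bacteriological_select(kill_sets):
    """Greedy, memorising selection in the spirit of a bacteriological
    algorithm: repeatedly keep the single test case (bacterium) that kills the
    most not-yet-killed mutants, instead of recombining whole test suites as a
    GA would.  'kill_sets' maps test-case id -> set of mutant ids it kills."""
    remaining = set().union(*kill_sets.values())
    memorised = []
    while remaining:
        best = max(kill_sets, key=lambda t: len(kill_sets[t] & remaining))
        gain = kill_sets[best] & remaining
        if not gain:                       # no test kills the leftover mutants
            break
        memorised.append(best)
        remaining -= gain
    return memorised, remaining

# Toy mutation-analysis results for five test cases and eight mutants.
kills = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5, 6, 7}, "t4": {2, 7}, "t5": {8}}
suite, alive = bacteriological_select(kills)
print(suite, "still alive:", alive)
```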
---
paper_title: Verifying UML/OCL Operation Contracts
paper_content:
In current model-driven development approaches, software models are the primary artifacts of the development process. Therefore, assessment of their correctness is a key issue to ensure the quality of the final application. Research on model consistency has focused mostly on the models' static aspects. Instead, this paper addresses the verification of their dynamic aspects, expressed as a set of operations defined by means of pre/postcondition contracts. This paper presents an automatic method based on Constraint Programming to verify UML models extended with OCL constraints and operation contracts. In our approach, both static and dynamic aspects are translated into a Constraint Satisfaction Problem. Then, compliance of the operations with respect to several correctness properties such as operation executability or determinism is formally verified.
---
paper_title: The Design of a Relational Engine
paper_content:
The key design challenges in the construction of a SAT-based relational engine are described, and novel techniques are proposed to address them. An efficient engine must have a mechanism for specifying partial solutions, an effective symmetry detection and breaking scheme, and an economical translation from relational to boolean logic. These desiderata are addressed with three new techniques: a symmetry detection algorithm that works in the presence of partial solutions, a sparse-matrix representation of relations, and a compact representation of boolean formulas inspired by boolean expression diagrams and reduced boolean circuits. The presented techniques have been implemented and evaluated, with promising results.
---
paper_title: Z3: An Efficient SMT Solver
paper_content:
Satisfiability Modulo Theories (SMT) problem is a decision problem for logical first order formulas with respect to combinations of background theories such as: arithmetic, bit-vectors, arrays, and uninterpreted functions. Z3 is a new and efficient SMT Solver freely available from Microsoft Research. It is used in various software verification and analysis applications.
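A minimal usage example, assuming the z3-solver Python bindings are installed; the constraint itself is arbitrary and only shows the solver API pattern.

```python
# Requires the z3-solver package (Python bindings shipped with Z3).
from z3 import Ints, Solver, sat

x, y = Ints('x y')
s = Solver()
s.add(x > 0, y > 0)            # background theory: linear integer arithmetic
s.add(x + 2 * y == 7, x < y)   # the verification condition under test
if s.check() == sat:
    print(s.model())           # e.g. [x = 1, y = 3]
else:
    print("unsatisfiable")
```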
---
paper_title: Generating Instance Models from Meta Models
paper_content:
Meta modeling is a widespread technique to define visual languages, with the UML being the most prominent one. Despite several advantages of meta modeling such as ease of use, the meta modeling approach has one disadvantage: it is not constructive, i.e., it does not offer a direct means of generating instances of the language. This disadvantage poses a severe limitation for certain applications. For example, when developing model transformations, it is desirable to have enough valid instance models available for large-scale testing. Producing such a large set by hand is tedious. In the related problem of compiler testing, a string grammar together with a simple generation algorithm is typically used to produce words of the language automatically. In this paper, we introduce instance-generating graph grammars for creating instances of meta models, thereby overcoming the main deficit of the meta modeling approach for defining languages.
---
paper_title: Integrating meta-modelling aspects with graph transformation for efficient visual language definition and model manipulation
paper_content:
Visual languages (VLs) play a central role in modelling various system aspects. Besides standard languages like UML, a variety of domain-specific languages exist which are used the more widely the more tool support is available for them. Different kinds of generators have been developed which produce visual modelling environments based on VL specifications. To define a VL, declarative as well as constructive approaches are used. The meta modelling approach is a declarative one where classes of symbols and relations are defined and associated with each other. Constraints describe additional language properties. Defining a VL by a graph grammar, the constructive way is followed, where graphs describe the syntax of models and graph rules formulate the language grammar. In this paper, we extend algebraic graph grammars by a node type inheritance concept which opens up the possibility to integrate both approaches by identifying symbol classes with node types and associations with edge types of some graph class. In this way, declarative as well as constructive elements may be used for language definition and model manipulation. Two concrete approaches, the GENGED and the AToM3 approach, illustrate how VLs can be defined and models can be manipulated by the techniques described above.
---
paper_title: On the Complexity of Derivation in Propositional Calculus
paper_content:
The question of the minimum complexity of derivation of a given formula in classical propositional calculus is considered in this article and it is proved that estimates of complexity may vary considerably among the various forms of propositional calculus. The forms of propositional calculus used in the present article are somewhat unusual, but the results obtained for them can, in principle, be extended to the usual forms of propositional calculus.
---
paper_title: A Metamodel for the Measurement of Object-Oriented Systems: An Analysis using Alloy
paper_content:
This paper presents a MOF-compliant metamodel for calculating software metrics and demonstrates how it is used to generate a metrics tool that calculates coupling and cohesion metrics. We also describe a systematic approach to the analysis of MOF-compliant metamodels and illustrate the approach using the presented metamodel. In this approach, we express the metamodel using UML and OCL and harness existing automated tools in a framework that generates a Java implementation and an Alloy specification of the metamodel, and use this both to examine the metamodel constraints, and to generate instantiations of the metamodel. Moreover, we describe how the approach can be used to generate test data for any software based on a MOF-compliant metamodel. We extend our framework to support this approach and use it to generate a test suite for the metrics calculation tool that is based on our metamodel.
---
paper_title: Alloy: a lightweight object modelling notation
paper_content:
Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
---
paper_title: Verification of UML/OCL Class Diagrams using Constraint Programming
paper_content:
In the MDD and MDA approaches, models become the primary artifacts of the development process. Therefore, assessment of the correctness of such models is a key issue to ensure the quality of the final application. In that sense, this paper presents an automatic method that uses the Constraint Programming paradigm to verify UML class diagrams extended with OCL constraints. In our approach, both class diagrams and OCL constraints are translated into a Constraint Satisfaction Problem. Then, compliance of the diagram with respect to several correctness properties such as weak and strong satisfiability or absence of constraint redundancies can be formally verified.
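The bounded search at the heart of such verification can be illustrated by brute force on a toy class diagram with one association and its multiplicities (class and role names are invented); CSP-based tools explore the same finite space far more cleverly, using constraint propagation instead of enumeration.

```python
from itertools import product

def weakly_satisfiable(max_objects=5):
    """Brute-force search for a finite instantiation of a tiny class diagram:
    every Department employs 3..* Employees and every Employee works in
    exactly one Department."""
    for n_dep, n_emp in product(range(1, max_objects + 1), repeat=2):
        # try to distribute employees over departments
        for assignment in product(range(n_dep), repeat=n_emp):
            sizes = [assignment.count(d) for d in range(n_dep)]
            if all(s >= 3 for s in sizes):          # Department.employees: 3..*
                return {"departments": n_dep, "employees": n_emp,
                        "worksIn": assignment}
    return None

print(weakly_satisfiable())   # smallest model: 1 department with 3 employees
```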
---
paper_title: EMFtoCSP: A tool for the lightweight verification of EMF models
paper_content:
The increasing popularity of MDE results in the creation of larger models and model transformations, hence turning the specification of MDE artefacts into an error-prone task. Therefore, mechanisms to ensure quality and absence of errors in models are needed to assure the reliability of the MDE-based development process. Formal methods have proven their worth in the verification of software and hardware systems. However, the adoption of formal methods as a valid alternative to ensure model correctness is compromised by the inherent complexity of the problem. To circumvent this complexity, it is common to impose limitations such as reducing the type of constructs that can appear in the model, or turning the verification process from automatic into user-assisted. Since we consider these limitations to be counterproductive for the adoption of formal methods, in this paper we present EMFtoCSP, a new tool for the fully automatic, decidable and expressive verification of EMF models that uses constraint logic programming as the underlying formalism.
---
paper_title: Automatic Model Generation Strategies for Model Transformation Testing
paper_content:
Testing model transformations requires input models which are graphs of inter-connected objects that must conform to a meta-model and meta-constraints from heterogeneous sources such as well-formedness rules, transformation pre-conditions, and test strategies. Manually specifying such models is tedious since models must simultaneously conform to several meta-constraints. We propose automatic model generation via constraint satisfaction using our tool Cartier for model transformation testing. Due to the virtually infinite number of models in the input domain, we compare strategies based on input domain partitioning to guide model generation. We qualify the effectiveness of these strategies by performing mutation analysis on the transformation using generated sets of models. The test sets obtained using partitioning strategies give mutation scores of up to 87% vs. 72% in the case of unguided/random generation. These scores are based on analysis of 360 automatically generated test models for the representative transformation of UML class diagram models to RDBMS models.
---
paper_title: Translation of Restricted OCL Constraints into Graph Constraints for Generating Meta Model Instances by Graph Grammars
paper_content:
The meta modeling approach to syntax definition of visual modeling techniques has gained wide acceptance, especially by using it for the definition of UML. Since meta-modeling is non-constructive, it does not provide a systematic way to generate all possible meta model instances. In our approach, an instance-generating graph grammar is automatically created from a given meta model. This graph grammar ensures correct typing and cardinality constraints, but OCL constraints for the meta model are not supported yet. To satisfy also the given OCL constraints, well-formedness checks have to be done in addition. We present a restricted form of OCL constraints that can be translated to graph constraints which can be checked during the instance generation process.
---
paper_title: Defining Models - Meta Models versus Graph Grammars
paper_content:
The precise specification of software models is a major concern in model-driven design of object-oriented software. Metamodelling and graph grammars are apparent choices for such specifications. Metamodelling has several advantages: it is easy to use, and provides procedures that check automatically whether a model is valid or not. However, it is less suited for proving properties of models, or for generating large sets of example models. Graph grammars, in contrast, offer a natural procedure - the derivation process - for generating example models, and they support proofs because they define a graph language inductively. However, not all graph grammars that allow to specify practically relevant models are easily parseable. In this paper, we propose contextual star grammars as a graph grammar approach that allows for simple parsing and that is powerful enough for specifying non-trivial software models. This is demonstrated by defining program graphs, a language-independent model of object-oriented programs, with a focus on shape (static structure) rather than behavior.
---
paper_title: Qualifying input test data for model transformations
paper_content:
Model transformation is a core mechanism for model-driven engineering (MDE). Writing complex model transformations is error-prone, and efficient testing techniques are required as for any complex program development. Testing a model transformation is typically performed by checking the results of the transformation applied to a set of input models. While it is fairly easy to provide some input models, it is difficult to qualify the relevance of these models for testing. In this paper, we propose a set of rules and a framework to assess the quality of given input models for testing a given transformation. Furthermore, the framework identifies missing model elements in input models and assists the user in improving these models.
---
paper_title: Encoding OCL data types for SAT-based verification of UML/OCL models
paper_content:
Checking the correctness of UML/OCL models is a crucial task in the design of complex software and hardware systems. As a consequence, several approaches have been presented which address this problem. Methods based on satisfiability (SAT) solvers have been shown to be very promising in this domain. Here, the actual verification task is encoded as an equivalent bit-vector instance to be solved by an appropriate solving engine. However, while a bit-vector encoding for basic UML/OCL constructs has already been introduced, no encoding for nontrivial OCL data types and operations is available so far. In this paper, we close this gap and present a bit-vector encoding for more complex OCL data types, i.e. sets, bags, and their ordered counterparts. As a result, SAT-based UML/OCL verification becomes applicable for models containing these collections types. A case study illustrates the application of this encoding.
---
paper_title: Automating first-order relational logic
paper_content:
An automatic analysis method for first-order logic with sets and relations is described. A first-order formula is translated to a quantifier-free boolean formula, which has a model when the original formula has a model within a given scope (that is, involving no more than some finite number of atoms). Because the satisfiable formulas that occur in practice tend to have small models, a small scope usually suffices and the analysis is efficient. The paper presents a simple logic and gives a compositional translation scheme. It also reports briefly on experience using the Alloy Analyzer, a tool that implements the scheme.
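The small-scope idea described above can be illustrated with a brute-force Python sketch (not the Alloy Analyzer's actual translation to boolean formulas): every binary relation over a tiny universe is enumerated and tested against a first-order property. The property used here, a total injective but non-surjective function, is an illustrative choice that has no finite model, so the search reports no instance within the scope.
```python
# Minimal sketch of small-scope checking in the spirit of the paper above:
# enumerate every binary relation R on a tiny universe and test a
# first-order property. Example property (illustrative): "R is a total,
# injective, but non-surjective function" -- which has no finite model.
from itertools import product

def models_in_scope(scope):
    U = range(scope)
    pairs = [(a, b) for a in U for b in U]
    found = []
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, keep in zip(pairs, bits) if keep}
        total = all(any((a, b) in R for b in U) for a in U)
        func  = all(sum((a, b) in R for b in U) <= 1 for a in U)
        inj   = all(sum((a, b) in R for a in U) <= 1 for b in U)
        surj  = all(any((a, b) in R for a in U) for b in U)
        if total and func and inj and not surj:
            found.append(R)
    return found

print(models_in_scope(3))   # [] -- no instance within this scope
```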
---
paper_title: New Techniques that Improve MACE-style Finite Model Finding
paper_content:
We describe a new method for finding finite models of unsorted first-order logic clause sets. The method is a MACE-style method, i.e. it ”flattens” the first-order clauses, and for increasing model sizes, instantiates the resulting clauses into propositional clauses which are consecutively solved by a SAT-solver. We enhance the standard method by using 4 novel techniques: term definitions, which reduce the number of variables in flattened clauses, incremental SAT, which enables reuse of search information between consecutive model sizes, static symmetry reduction, which reduces the number of isomorphic models by adding extra constraints to the SAT problem, and sort inference, which allows the symmetry reduction to be applied at a finer grain. All techniques have been implemented in a new model finder, called Paradox, with very promising results.
---
paper_title: Grammar Testing
paper_content:
Grammar testing is discussed in the context of grammar engineering (i.e., software engineering for grammars). We propose a generalisation of the known rule coverage for grammars, that is, context-dependent branch coverage. We investigate grammar testing, especially coverage analysis, test set generation, and integration of testing and grammar transformations. Grammar recovery is chosen as a subfield of grammar engineering to illustrate the developed concepts. Grammar recovery is concerned with the derivation of a language's grammar from some available resource such as a semi-formal language reference.
---
paper_title: Compiler test case generation methods: a survey and assessment
paper_content:
Software testing is an important and critical phase of the application software development life cycle. Testing is a time-consuming and costly stage that requires a high degree of ingenuity. In the development stages of safety-critical and dependable computer software such as language compilers and real-time embedded software, testing activities consume about 50% of the project time. In this work we address the area of compiler testing. The aim of compiler testing is to verify that the compiler implementation conforms to its specifications, which is to generate an object code that faithfully corresponds to the language semantics and syntax as specified in the language documentation. A compiler should be carefully verified before its release, since it has to be used by many users. Finding an optimal and complete test suite that can be used in the testing process is often an exhaustive task. Various methods have been proposed for the generation of compiler test cases. Many papers have been published on testing compilers, most of which address classical programming languages. In this paper, we assess and compare various compiler testing techniques with respect to some selected criteria and also propose some new research directions in compiler testing of modern programming languages.
---
paper_title: A sentence generator for testing parsers
paper_content:
A fast algorithm is given to produce a small set of short sentences from a context-free grammar such that each production of the grammar is used at least once. The sentences are useful for testing parsing programs and for debugging grammars (finding errors in a grammar that cause it to specify some language other than the one intended). Some experimental results from using the sentences to test some automatically generated simple LR(1) parsers are also given.
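A greedy approximation of this idea (not the paper's algorithm) is easy to sketch in Python: derive sentences from a toy expression grammar while preferring productions that have not been used yet, stopping once every production has appeared in at least one sentence. The grammar and the depth bound below are illustrative assumptions.
```python
# A greedy sketch (not the paper's exact algorithm) of the idea above:
# derive sentences from a toy CFG, preferring productions that have not
# been used yet, until every production has appeared in some sentence.
GRAMMAR = {                       # illustrative grammar for arithmetic
    "E": [["E", "+", "T"], ["T"]],
    "T": [["T", "*", "F"], ["F"]],
    "F": [["(", "E", ")"], ["id"]],
}
START = "E"

def expand(symbol, used, depth=0):
    if symbol not in GRAMMAR:                 # terminal
        return [symbol]
    prods = list(enumerate(GRAMMAR[symbol]))
    # prefer an unused production, but stop recursing too deeply
    unused = [(i, p) for i, p in prods if (symbol, i) not in used]
    if unused and depth < 4:
        i, rhs = unused[0]
    else:
        i, rhs = prods[-1]                    # last alternative is
    used.add((symbol, i))                     # non-recursive in this grammar
    out = []
    for s in rhs:
        out.extend(expand(s, used, depth + 1))
    return out

used, sentences = set(), []
total = sum(len(ps) for ps in GRAMMAR.values())
while len(used) < total and len(sentences) < 10:   # safety bound
    sentences.append(" ".join(expand(START, used)))
print(sentences)
```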
---
paper_title: Uniform random generation of huge metamodel instances
paper_content:
The size and the number of models are drastically increasing, preventing organizations from fully exploiting Model Driven Engineering benefits. Regarding this problem of scalability, some approaches claim to provide mechanisms that are adapted to numerous and huge models. The problem is that those approaches cannot be validated as it is not possible to obtain numerous and huge models and then to stress test them. In this paper, we face this problem by proposing a uniform generator of huge models. Our approach is based on the Boltzmann method, whose two main advantages are its linear complexity, which makes it possible to generate huge models, and its uniformity, which guarantees that the generation has no bias.
---
paper_title: USE: A UML-based specification environment for validating UML and OCL
paper_content:
The Unified Modeling Language (UML) is accepted today as an important standard for developing software. UML tools however provide little support for validating and checking models in early development phases. There is also no substantial support for the Object Constraint Language (OCL). We present an approach for the validation of UML models and OCL constraints based on animation and certification. The USE tool (UML-based Specification Environment) supports analysts, designers and developers in executing UML models and checking OCL constraints and thus enables them to employ model-driven techniques for software production.
---
paper_title: Boltzmann Samplers for the Random Generation of Combinatorial Structures
paper_content:
This article proposes a surprisingly simple framework for the random generation of combinatorial configurations based on what we call Boltzmann models. The idea is to perform random generation of possibly complex structured objects by placing an appropriate measure spread over the whole of a combinatorial class – an object receives a probability essentially proportional to an exponential of its size. As demonstrated here, the resulting algorithms based on real-arithmetic operations often operate in linear time. They can be implemented easily, be analysed mathematically with great precision, and, when suitably tuned, tend to be very efficient in practice.
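For intuition, here is a minimal Python sketch of a Boltzmann sampler for plane binary trees (size counted in leaves), following the paper's recipe for a class defined by C = Z + C x C: draw the leaf case with probability x/C(x), otherwise recurse on two subtrees. The parameter value below is an illustrative, untuned choice, not one taken from the paper.
```python
# Minimal sketch of a Boltzmann sampler for plane binary trees
# (size = number of leaves): each object of size n is drawn with
# probability proportional to x**n. The parameter x is illustrative.
import math, random

def boltzmann_binary_tree(x=0.24):
    B = (1 - math.sqrt(1 - 4 * x)) / 2       # solves B(x) = x + B(x)^2
    p_leaf = x / B                            # probability of the atom case
    def gen():
        if random.random() < p_leaf:
            return "leaf"
        return (gen(), gen())                 # internal node: two subtrees
    return gen()

def size(t):                                  # number of leaves
    return 1 if t == "leaf" else size(t[0]) + size(t[1])

random.seed(0)
print([size(boltzmann_binary_tree()) for _ in range(5)])
```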
---
paper_title: The category-partition method for specifying and generating functional tests
paper_content:
An apparatus for recovering pulse-code modulated digital data from a readback signal has equipment for partially equalizing the readback signal, and equipment for completing the partial equalization of the readback signal and for detecting the digital data. The latter equipment includes a delay device for delaying the partially equalized readback signal, equipment for providing a relatively undelayed version of the readback signal, and a differential amplifier for providing a digital signal corresponding to the difference between the delayed readback signal and the relatively undelayed version of the readback signal. The delay device includes a filter for completing the above mentioned partial equalization with the aid of the equipment for providing the relatively undelayed version of the readback signal and the mentioned differential amplifier.
---
paper_title: Validating UML and OCL models in USE by automatic snapshot generation
paper_content:
We study the testing and certification of UML and OCL models as supported by the validation tool USE. We extend the available USE features by introducing a language for defining properties of desired snapshots and by showing how such snapshots are generated. Within the approach, it is possible to treat test cases and validation cases. Test cases show that snapshots having desired properties can be constructed. Validation cases show that given properties are consequences of the original UML and OCL model.
---
paper_title: Metamodel-based Test Generation for Model Transformations: an Algorithm and a Tool
paper_content:
In a Model-Driven Development context (MDE), model transformations allow memorizing and reusing design know-how, and thus automate parts of the design and refinement steps of a software development process. A model transformation program is a specific program, in the sense it manipulates models as main parameters. Each model must be an instance of a "metamodel", a metamodel being the specification of a set of models. Programming a model transformation is a difficult and error-prone task, since the manipulated data are clearly complex. In this paper, we focus on generating input test data (called test models) for model transformations. We present an algorithm to automatically build test models from a metamodel.
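A toy illustration of metamodel-driven test model generation (a sketch under assumed, made-up metamodel names, not the algorithm or tool from the paper) can be written in a few lines of Python: given finite value partitions for each property, build a small set of objects so that every class is instantiated and every partition value is used at least once.
```python
# Illustrative sketch (not the paper's algorithm): build small test models
# from a toy metamodel description so that every class is instantiated and
# every attribute partition value is used at least once. Names are made up.
from itertools import zip_longest

METAMODEL = {
    "ClassNode":   {"isAbstract": [True, False], "nAttrs": [0, 1, 5]},
    "Association": {"lowerBound": [0, 1], "upperBound": [1, -1]},  # -1 = *
}

def generate_test_models(metamodel):
    models = []
    for cls, attrs in metamodel.items():
        names = list(attrs)
        columns = [attrs[n] for n in names]
        # pair up partition values column-wise so each value appears once
        for values in zip_longest(*columns):
            slots = {n: (v if v is not None else attrs[n][0])
                     for n, v in zip(names, values)}
            models.append({"class": cls, "slots": slots})
    return models

for m in generate_test_models(METAMODEL):
    print(m)
```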
---
paper_title: Qualifying input test data for model transformations
paper_content:
Model transformation is a core mechanism for model-driven engineering (MDE). Writing complex model transformations is error-prone, and efficient testing techniques are required as for any complex program development. Testing a model transformation is typically performed by checking the results of the transformation applied to a set of input models. While it is fairly easy to provide some input models, it is difficult to qualify the relevance of these models for testing. In this paper, we propose a set of rules and a framework to assess the quality of given input models for testing a given transformation. Furthermore, the framework identifies missing model elements in input models and assists the user in improving these models.
---
paper_title: Uniform random generation of huge metamodel instances
paper_content:
The size and the number of models are drastically increasing, preventing organizations from fully exploiting Model Driven Engineering benefits. Regarding this problem of scalability, some approaches claim to provide mechanisms that are adapted to numerous and huge models. The problem is that those approaches cannot be validated as it is not possible to obtain numerous and huge models and then to stress test them. In this paper, we face this problem by proposing a uniform generator of huge models. Our approach is based on the Boltzmann method, whose two main advantages are its linear complexity, which makes it possible to generate huge models, and its uniformity, which guarantees that the generation has no bias.
---
paper_title: USE: A UML-based specification environment for validating UML and OCL
paper_content:
The Unified Modeling Language (UML) is accepted today as an important standard for developing software. UML tools however provide little support for validating and checking models in early development phases. There is also no substantial support for the Object Constraint Language (OCL). We present an approach for the validation of UML models and OCL constraints based on animation and certification. The USE tool (UML-based Specification Environment) supports analysts, designers and developers in executing UML models and checking OCL constraints and thus enables them to employ model-driven techniques for software production.
---
paper_title: UML2ALLOY: A tool for lightweight modelling of discrete event systems
paper_content:
Alloy is a textual language developed by Daniel Jackson and his team at MIT. It is a formal language, which has a succinct syntax and allows specification and automatic analysis of a wide variety of systems. On the other hand, the Unified Modelling Language (UML) is a semi-formal language, which is accepted by the software engineering community as the de facto standard for modelling, specification and implementation of object-based systems. This paper studies the integration of the UML and Alloy into a single CASE tool, which aims to take advantage of the positive aspects of both the UML and Alloy. The Alloy and UML specifications provide two views of the system. In order to synchronise the two views, we make use of an MDA-style transformation. In particular, we shall present a Meta Object Facility (MOF) compliant metamodel for Alloy and define a model transformation from the UML metamodel to the Alloy metamodel. Based on the approach presented in the paper, we have implemented a tool called UML2Alloy for the modelling and analysis of Discrete Event Systems. To evaluate the tool, the paper presents a case study involving the modelling and analysis of a prototype manufacturing system.
---
paper_title: An Extensible SAT-solver
paper_content:
In this article, we present a small, complete, and efficient SAT-solver in the style of conflict-driven learning, as exemplified by Chaff. We aim to give sufficient details about implementation to enable the reader to construct his or her own solver in a very short time.This will allow users of SAT-solvers to make domain specific extensions or adaptions of current state-of-the-art SAT-techniques, to meet the needs of a particular application area. The presented solver is designed with this in mind, and includes among other things a mechanism for adding arbitrary boolean constraints. It also supports solving a series of related SAT-problems efficiently by an incremental SAT-interface.
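To convey the core loop such a solver is built around, here is a compact DPLL-style sketch in Python; it keeps only unit propagation and chronological branching, and omits the conflict-driven clause learning, watched literals and incremental interface that the actual solver described above provides.
```python
# A compact DPLL-style sketch (unit propagation plus branching).
# Clauses are lists of non-zero ints; a positive literal n means
# variable n is true, a negative literal -n means it is false.
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:                               # unit propagation
        changed = False
        for clause in clauses:
            satisfied = any(assignment.get(abs(l)) == (l > 0) for l in clause)
            if satisfied:
                continue
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                      # conflict
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0   # forced assignment
                changed = True
    for clause in clauses:                       # pick a branching variable
        for lit in clause:
            if abs(lit) not in assignment:
                for value in (True, False):
                    result = dpll(clauses, {**assignment, abs(lit): value})
                    if result is not None:
                        return result
                return None
    return assignment                            # all clauses satisfied

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))
```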
---
paper_title: A Metamodel for the Measurement of Object-Oriented Systems: An Analysis using Alloy
paper_content:
This paper presents a MOF-compliant metamodel for calculating software metrics and demonstrates how it is used to generate a metrics tool that calculates coupling and cohesion metrics. We also describe a systematic approach to the analysis of MOF-compliant metamodels and illustrate the approach using the presented metamodel. In this approach, we express the metamodel using UML and OCL and harness existing automated tools in a framework that generates a Java implementation and an Alloy specification of the metamodel, and use this both to examine the metamodel constraints, and to generate instantiations of the metamodel. Moreover, we describe how the approach can be used to generate test data for any software based on a MOF-compliant metamodel. We extend our framework to support this approach and use it to generate a test suite for the metrics calculation tool that is based on our metamodel.
---
paper_title: Alloy: a lightweight object modelling notation
paper_content:
Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.
---
paper_title: EMFtoCSP: A tool for the lightweight verification of EMF models
paper_content:
The increasing popularity of MDE results in the creation of larger models and model transformations, hence turning the specification of MDE artefacts into an error-prone task. Therefore, mechanisms to ensure quality and absence of errors in models are needed to assure the reliability of the MDE-based development process. Formal methods have proven their worth in the verification of software and hardware systems. However, the adoption of formal methods as a valid alternative to ensure model correctness is compromised by the inherent complexity of the problem. To circumvent this complexity, it is common to impose limitations such as reducing the types of constructs that can appear in the model, or turning the verification process from automatic into user-assisted. Since we consider these limitations to be counterproductive for the adoption of formal methods, in this paper we present EMFtoCSP, a new tool for the fully automatic, decidable and expressive verification of EMF models that uses constraint logic programming as the underlying formalism.
---
paper_title: Automatic Model Generation Strategies for Model Transformation Testing
paper_content:
Testing model transformations requires input models which are graphs of inter-connected objects that must conform to a meta-model and meta-constraints from heterogeneous sources such as well-formedness rules, transformation pre-conditions, and test strategies. Manually specifying such models is tedious since models must simultaneously conform to several meta-constraints. We propose automatic model generation via constraint satisfaction using our tool Cartier for model transformation testing. Due to the virtually infinite number of models in the input domain we compare strategies based on input domain partitioning to guide model generation. We qualify the effectiveness of these strategies by performing mutation analysis on the transformation using generated sets of models. The test sets obtained using partitioning strategies gives mutation scores of up to 87% vs. 72% in the case of unguided/random generation. These scores are based on analysis of 360 automatically generated test models for the representative transformation of UML class diagram models to RDBMS models.
---
paper_title: The Sat4j library, release 2.2
paper_content:
Sat4j is a mature, open source library of SAT-based solvers in Java. It provides a modular SAT solver architecture designed to work with generic constraints. Such an architecture is used to provide SAT, MaxSat and pseudo-boolean solvers for lightweight constraint programming. Those solvers have been evaluated regularly in the corresponding international competitive events. The library has been adopted by several academic software tools and the widely used Eclipse platform, which has relied on a pseudo-boolean solver from Sat4j for its plugin dependency management since June 2008.
---
paper_title: Validating UML and OCL models in USE by automatic snapshot generation
paper_content:
We study the testing and certification of UML and OCL models as supported by the validation tool USE. We extend the available USE features by introducing a language for defining properties of desired snapshots and by showing how such snapshots are generated. Within the approach, it is possible to treat test cases and validation cases. Test cases show that snapshots having desired properties can be constructed. Validation cases show that given properties are consequences of the original UML and OCL model.
---
paper_title: UML2Alloy: A Challenging Model Transformation
paper_content:
Alloy is a formal language, which has been applied to modelling of systems in a wide range of application domains. It is supported by Alloy Analyzer, a tool, which allows fully automated analysis. As a result, creating Alloy code from a UML model provides the opportunity to exploit analysis capabilities of the Alloy Analyzer to discover possible design flaws at early stages of the software development. Our research makes use of model based techniques for the automated transformation of UML class diagrams with OCL constraints to Alloy code. The paper demonstrates challenging aspects of the model transformation, which originate in fundamental differences between UML and Alloy. We shall discuss some of the differences and illustrate their implications on the model transformation process. The presented approach is explained via an example of a secure e-business system.
---
paper_title: Alcoa: the alloy constraint analyzer
paper_content:
Alcoa is a tool for analyzing object models. It has a range of uses. At one end, it can act as a support tool for object model diagrams, checking for consistency of multiplicities and generating sample snapshots. At the other end, it embodies a lightweight formal method in which subtle properties of behaviour can be investigated. Alcoa's input language, Alloy, is a new notation based on Z. Its development was motivated by the need for a notation that is more closely tailored to object models (in the style of UML), and more amenable to automatic analysis. Like Z, Alloy supports the description of systems whose state involves complex relational structure. State and behavioural properties are described declaratively, by conjoining constraints. This makes it possible to develop and analyze a model incrementally, with Alcoa investigating the consequences of whatever constraints are given. Alcoa works by translating constraints to boolean formulas, and then applying state-of-the-art SAT solvers. It can analyze billions of states in seconds.
---
paper_title: Metamodel-based Test Generation for Model Transformations: an Algorithm and a Tool
paper_content:
In a Model-Driven Development context (MDE), model transformations allow memorizing and reusing design know-how, and thus automate parts of the design and refinement steps of a software development process. A model transformation program is a specific program, in the sense it manipulates models as main parameters. Each model must be an instance of a "metamodel", a metamodel being the specification of a set of models. Programming a model transformation is a difficult and error-prone task, since the manipulated data are clearly complex. In this paper, we focus on generating input test data (called test models) for model transformations. We present an algorithm to automatically build test models from a metamodel.
---
paper_title: Extensive validation of OCL models by integrating SAT solving into USE
paper_content:
The Object Constraint Language (OCL) substantially enriches modeling languages like UML, MOF or EMF with respect to formulating meaningful model properties. In model-centric approaches, an accurately defined model is a requisite for further use. During development of a model, continuous validation of properties and feedback to developers is required, since many design flaws can then be directly discovered and corrected. For this purpose, lightweight validation approaches which allow developers to perform automatic model analysis are particularly helpful. We provide a new method for efficiently searching for model instances. The existence or non-existence of model instances with certain properties allows significant conclusions about model properties. Our approach is based on the translation of UML and OCL concepts into relational logic and its realization with SAT solvers. We explain various use cases of our proposal, for example, completion of partly defined model instances so that particular properties hold in the completed model instances. Our proposal is realized by integrating a model validator as a plugin into the UML and OCL tool USE
---
paper_title: Qualifying input test data for model transformations
paper_content:
Model transformation is a core mechanism for model-driven engineering (MDE). Writing complex model transformations is error-prone, and efficient testing techniques are required as for any complex program development. Testing a model transformation is typically performed by checking the results of the transformation applied to a set of input models. While it is fairly easy to provide some input models, it is difficult to qualify the relevance of these models for testing. In this paper, we propose a set of rules and a framework to assess the quality of given input models for testing a given transformation. Furthermore, the framework identifies missing model elements in input models and assists the user in improving these models.
---
paper_title: UML2ALLOY: A tool for lightweight modelling of discrete event systems
paper_content:
Alloy is a textual language developed by Daniel Jackson and his team at MIT. It is a formal language, which has a succinct syntax and allows specification and automatic analysis of a wide variety of systems. On the other hand, the Unified Modelling Language (UML) is a semi-formal language, which is accepted by the software engineering community as the de facto standard for modelling, specification and implementation of object-based systems. This paper studies the integration of the UML and Alloy into a single CASE tool, which aims to take advantage of the positive aspects of both the UML and Alloy. The Alloy and UML specifications provide two views of the system. In order to synchronise the two views, we make use of an MDA-style transformation. In particular, we shall present a Meta Object Facility (MOF) compliant metamodel for Alloy and define a model transformation from the UML metamodel to the Alloy metamodel. Based on the approach presented in the paper, we have implemented a tool called UML2Alloy for the modelling and analysis of Discrete Event Systems. To evaluate the tool, the paper presents a case study involving the modelling and analysis of a prototype manufacturing system.
---
paper_title: Fundamentals of Algebraic Graph Transformation
paper_content:
This is the first textbook treatment of the algebraic approach to graph transformation, based on algebraic structures and category theory. It contains an introduction to classical graphs. Basic and advanced results are first shown for an abstract form of replacement systems and are then instantiated to several forms of graph and Petri net transformation systems. The book develops typed attributed graph transformation and contains a practical case study.
---
paper_title: A Metamodel for the Measurement of Object-Oriented Systems: An Analysis using Alloy
paper_content:
This paper presents a MOF-compliant metamodel for calculating software metrics and demonstrates how it is used to generate a metrics tool that calculates coupling and cohesion metrics. We also describe a systematic approach to the analysis of MOF-compliant metamodels and illustrate the approach using the presented metamodel. In this approach, we express the metamodel using UML and OCL and harness existing automated tools in a framework that generates a Java implementation and an Alloy specification of the metamodel, and use this both to examine the metamodel constraints, and to generate instantiations of the metamodel. Moreover, we describe how the approach can be used to generate test data for any software based on a MOF-compliant metamodel. We extend our framework to support this approach and use it to generate a test suite for the metrics calculation tool that is based on our metamodel.
---
paper_title: Automatic Model Generation Strategies for Model Transformation Testing
paper_content:
Testing model transformations requires input models which are graphs of inter-connected objects that must conform to a meta-model and meta-constraints from heterogeneous sources such as well-formedness rules, transformation pre-conditions, and test strategies. Manually specifying such models is tedious since models must simultaneously conform to several meta-constraints. We propose automatic model generation via constraint satisfaction using our tool Cartier for model transformation testing. Due to the virtually infinite number of models in the input domain we compare strategies based on input domain partitioning to guide model generation. We qualify the effectiveness of these strategies by performing mutation analysis on the transformation using generated sets of models. The test sets obtained using partitioning strategies gives mutation scores of up to 87% vs. 72% in the case of unguided/random generation. These scores are based on analysis of 360 automatically generated test models for the representative transformation of UML class diagram models to RDBMS models.
---
paper_title: Towards an automated test generation for the verification of model transformations
paper_content:
It is widely accepted that model transformations play an important role in the MDA approach. As for any software, validation and verification are essential in the life cycle of a model transformation. The proposition of an automatic approach that is based on functional testing techniques for the verification of model transformations reveals three main issues: the automatic generation of test data, the verification criteria, and the definition of the test oracle. The scope of this paper is restricted to the automatic test data generation issue of the verification process. We first present a background on essential methods for test case generation and we argue for their adaptation to the verification of model transformations. For an automated generation of test data, we propose a formal language to be used for the specification of model transformations. We also propose a data partitioning technique that focuses on the structure of models in order to take the structural aspect of models into account when generating input test models. Our partitioning technique is to be combined with existing techniques to cover the whole set of characteristics of a model.
---
paper_title: Translation of Restricted OCL Constraints into Graph Constraints for Generating Meta Model Instances by Graph Grammars
paper_content:
The meta modeling approach to syntax definition of visual modeling techniques has gained wide acceptance, especially by using it for the definition of UML. Since meta-modeling is non-constructive, it does not provide a systematic way to generate all possible meta model instances. In our approach, an instance-generating graph grammar is automatically created from a given meta model. This graph grammar ensures correct typing and cardinality constraints, but OCL constraints for the meta model are not supported yet. To satisfy also the given OCL constraints, well-formedness checks have to be done in addition. We present a restricted form of OCL constraints that can be translated to graph constraints which can be checked during the instance generation process.
---
paper_title: UML2Alloy: A Challenging Model Transformation
paper_content:
Alloy is a formal language, which has been applied to modelling of systems in a wide range of application domains. It is supported by Alloy Analyzer, a tool, which allows fully automated analysis. As a result, creating Alloy code from a UML model provides the opportunity to exploit analysis capabilities of the Alloy Analyzer to discover possible design flaws at early stages of the software development. Our research makes use of model based techniques for the automated transformation of UML class diagrams with OCL constraints to Alloy code. The paper demonstrates challenging aspects of the model transformation, which originate in fundamental differences between UML and Alloy. We shall discuss some of the differences and illustrate their implications on the model transformation process. The presented approach is explained via an example of a secure e-business system.
---
paper_title: Metamodel-based Test Generation for Model Transformations: an Algorithm and a Tool
paper_content:
In a Model-Driven Development context (MDE), model transformations allow memorizing and reusing design know-how, and thus automate parts of the design and refinement steps of a software development process. A model transformation program is a specific program, in the sense it manipulates models as main parameters. Each model must be an instance of a "metamodel", a metamodel being the specification of a set of models. Programming a model transformation is a difficult and error-prone task, since the manipulated data are clearly complex. In this paper, we focus on generating input test data (called test models) for model transformations. We present an algorithm to automatically build test models from a metamodel.
---
paper_title: Qualifying input test data for model transformations
paper_content:
Model transformation is a core mechanism for model-driven engineering (MDE). Writing complex model transformations is error-prone, and efficient testing techniques are required as for any complex program development. Testing a model transformation is typically performed by checking the results of the transformation applied to a set of input models. While it is fairly easy to provide some input models, it is difficult to qualify the relevance of these models for testing. In this paper, we propose a set of rules and a framework to assess the quality of given input models for testing a given transformation. Furthermore, the framework identifies missing model elements in input models and assists the user in improving these models.
---
| Title: Metamodel Instance Generation: A Systematic Literature Review
Section 1: Introduction
Description 1: Introduce the context of the study and explain the importance of metamodels in model-driven engineering. Highlight the need for metamodel instance generation and provide an overview of the paper's structure.
Section 2: Background
Description 2: Discuss the concept of metamodeling, its advantages, and its limitations, particularly in the context of test case generation. Outline the research questions addressed in the paper.
Section 3: Review Research Methods
Description 3: Describe the methodologies used for searching, selecting, and assessing the studies included in the literature review. Explain the process of forming search terms and the search strategy.
Section 4: Search Terms
Description 4: Detail the search terms derived from research questions, various representations of metamodels, and constraint management techniques.
Section 5: Paper Selection Study
Description 5: Explain the paper selection process, including the criteria for judging papers based on their titles, abstracts, and content. Summarize the number of papers chosen for detailed study.
Section 6: Selected Papers
Description 6: Present the selected papers in a categorized manner, detailing the main ideas and contributions of each paper.
Section 7: Discussion
Description 7: Discuss the findings from the selected papers by answering the research questions. Analyze the main areas related to metamodel instance generation, the algorithms and theoretical frameworks used, the selection criteria for instances, and the tools available for instance generation.
Section 8: Summary
Description 8: Summarize the key insights gained from the literature review. Highlight the main techniques and tools discussed and their advantages and limitations.
Section 9: Conclusion
Description 9: Conclude the paper by summarizing the main contributions of the systematic literature review and identifying potential future research directions.
Section 10: References
Description 10: List all the references cited throughout the paper. |
The Ethics of Computing: A Survey of the Computing-Oriented Literature | 14 | ---
paper_title: Philosophy of computing and information technology
paper_content:
Philosophy has been described as having taken a "computational turn," referring to the ways in which computers and information technology throw new light upon traditional philosophical issues, provide new tools and concepts for philosophical reasoning, and pose theoretical and practical questions that cannot readily be approached within traditional philosophical frameworks. As such, computer technology is arguably the technology that has had the most profound impact on philosophy. Philosophers have studied computer technology and its philosophical implications extensively. Philosophers have discovered computers and information technology (IT) as research topics, and a wealth of research is taking place on philosophical issues in relation to these technologies. The research agenda is broad and diverse. Issues that are studied include the nature of computational systems, the ontological status of virtual worlds, the limitations of artificial intelligence, philosophical aspects of data modeling, the political regulation of cyberspace, the epistemology of Internet information, ethical aspects of information privacy and security, and many more. This chapter gives an overview of the field. We start with definitions and historical overviews of the field and its various subfields. We then consider studies of the fundamental nature and basic principles of computing and computational systems, before moving on to philosophy of computer science, which investigates the nature, scope and methods of computer science. Under this heading, we will also address such topics as data modeling, ontology in computer science, programming languages, software engineering as an engineering discipline, management of information systems, the use of computers for simulation, and human-computer interaction. Subsequently, we will address the issue in computing that has received the most attention from philosophers, artificial intelligence (AI). The purpose of this section is to give an overview of the philosophical issues raised by the notion of creating intelligent machines. We consider philosophical critiques of different approaches within AI and pay special attention to philosophical studies of applications of AI. We then turn to a section on philosophical issues pertaining to new media and the Internet, including the convergence between media and digital computers. The theoretical and ethical issues raised by this relatively recent phenomenon are diverse. We will focus on philosophical theories of the 'information society', epistemological and ontological issues in relation to Internet information and virtuality, the philosophical study of social life online and cyberpolitics, and issues raised by the disappearing borders between body and artifact in cyborgs and virtual selves. The final section in this chapter is devoted to the many ethical questions raised by computers and information technology, as studied in computer ethics.
---
paper_title: The Human Use of Human Beings
paper_content:
By Norbert Wiener. (London: Eyre and Spottiswoode Ltd.) Pp. 242. Price 18s. This book is largely about the resemblances and differences between human beings and machines; and the dangers arising out of making machines too like human beings, or of treating human beings too like machines. The author ranges round the subject with a freedom reminiscent of a talk by James Stephens—successive index entries are Monkeys Paw (Jacobs), Monsters, Moral Values.
---
paper_title: Morality, Ethics, and Reflection: A Categorization of Normative IS Research
paper_content:
Moral views and perceptions, their ethical evaluation and justification, and practical concerns about how to incorporate them all play important roles in research and practice in the information systems discipline. This paper develops a model of normative issues ranging from moral intuition and explicit morality to ethical theory and meta-ethical reflection. After showing that this normative model is relevant to IS and that it allows an improved understanding of normative issues, the paper discusses these levels of normativity in the context of two of the most prominent normative topics in IS: Privacy and intellectual property. The paper then suggests that a more explicit understanding of the different aspects of normativity would benefit IS research. This would leverage the traditional empirical strengths of IS research and use them to develop research that is relevant beyond the boundaries of the discipline. Such broader relevance could be aimed at the reference disciplines. In particular, moral philosophy could benefit from understanding information technology and its role in organizations in more detail. It could, furthermore, inform policy makers who are increasingly called on to regulate new information technologies.
---
paper_title: The pledge of the computing professional: recognizing and promoting ethics in the computing professions
paper_content:
All of us in the computing community understand the importance of recognizing and promoting ethical behavior in our profession. Instruction in ethics is rapidly becoming a part of most computing-related curricula, whether as a stand-alone course or infused into existing courses. Both Computing Curricula 2005 and the current discussions on Computing Curricula 2013 recognize the significance of ethics, generally considering it a core topic across the various computing disciplines. Additionally, in their criteria for the accreditation of computing programs, ABET specifies that a student must attain by the time of graduation an understanding of ethical issues and responsibilities. What has been missing is a formal rite-of-passage ceremony to prompt student recognition and self-reflection on the transition from being a student to a computing professional. In 2009, seventeen faculty members and industry representatives from a wide range of institutions began to address this open problem by forming The Pledge of the Computing Professional [1], [2]. The Pledge exists to promote and recognize the ethical and moral behavior and responsibilities in graduates of computing-related degree programs as they transition to careers of service to society. The Pledge does not seek to define or enforce ethics --- this is the role of other organizations. Specifically, The Pledge is modeled after the Order of the Engineer [3] and provides a rite-of-passage ceremony at the time of graduation.
---
paper_title: A uniform code of ethics: business and IT professional ethics
paper_content:
Business and IT professionals have enough in common that they can share a universal code of ethics.
---
paper_title: Enabling personal privacy for pervasive computing environments
paper_content:
Protection of personal data on the Internet is already a challenge today. Users have to actively look up privacy policies of websites and decide whether they can live with the terms of use. Once discovered, they are forced to make a "take or leave" decision. In future living and working environments, where sensors and context-aware services are pervasive, this becomes an even greater challenge and annoyance. The environment is much more personalized and users cannot just "leave". They require measures to prevent, avoid and detect misuse of sensitive data, as well as to be able to negotiate the purpose of use of data. We present a novel model of privacy protection, complementing the notion of enterprise privacy with the incorporation of personal privacy towards a holistic privacy management system. Our approach allows non-expert users not only to negotiate the desired level of privacy in a rather automated and simple way, but also to track and monitor the whole life-cycle of data.
---
paper_title: In A Different Voice: Psychological Theory And Women's Development
paper_content:
Introduction 1. Woman's Place in Man's Life Cycle 2. Images of Relationship 3. Concepts of Self and Morality 4. Crisis and Transition 5. Women's Rights and Women's Judgment 6. Visions of Maturity References Index of Study Participants General Index
---
paper_title: Computer ethics in a different voice
paper_content:
Abstract This paper argues that the potential of writing on computer ethics to contribute to a deeper understanding of inequalities surrounding the use of information and communications technologies is threatened by forms of technological determinism and liberalism. Such views are prevalent in professional and more popular literature, and even in policy documents, albeit expressed tacitly. Adopting this standpoint substantially reduces explanatory power in relation to certain computer ethics topics, especially equality and participation, particularly in relation to gender. Research on gender and information and communications technologies has analyzed inequalities between men and women both inside and outside the workplace, drawing heavily from feminist theory. The paper argues that feminist ethics, coupled with aspects of feminist legal and political theory, may offer a fruitful, novel direction for analyzing computer ethics problems, and certainly those that contain substantial differences, and therefore inequalities, in men's and women's experiences on-line. Furthermore, feminist ethics can offer a more collectivist approach toward computer ethics problems. Emerging themes in existing research on gender and computer ethics are discussed before exploring some of the outcomes of applying feminist theory to a problem of privacy in the extreme form of Internet-based harassment known as “cyberstalking”, where traditional liberal and determinist views have proved problematic.
---
paper_title: The role of metaethics and the future of computer ethics
paper_content:
In the following essay, I will discuss D. Johnson's argument in her ETHICOMP99 Keynote Speech (Johnson 1999) regarding the possible future disappearance of computer ethics as an autonomous discipline, and I will analyze some likely objections to Johnson's view. In the future, there are two ways in which computer ethics might disappear: (1) the rejection of computer ethics as an aspect of applied ethics, or (2) the rejection of computer ethics as an autonomous discipline. The first path, it seems to me, would lead to the death of the entire field of applied ethics, while the second path would lead only to the death of computer ethics as a separate subject. Computer technology is becoming very pervasive, and each scientific field includes some discipline-specific computing. For the likely foreseeable future, disciplines such as bioethics and engineering ethics will have to deal with ethical issues involving the role of computers. I will argue that computer ethics in this sense is unlikely to disappear, even if computer ethics ceases to be considered as a separate discipline. In order to understand which path will be followed by computer ethics, I will compare Johnson's argument with ideas of earlier thinkers like N. Wiener (1950) and B. Russell (1932). Although Russell did not specifically consider computer technology, he had some good intuitions about the development of societies by means of technology. My conclusion will be two-fold: (1) that applied ethics will not die, but it may make no sense in the future to talk about computer ethics as a separate field; and (2) that computer ethics will not simply become "ordinary ethics", contrary to Johnson's view.
---
paper_title: The need for a new graduation rite of passage
paper_content:
The use of computers is pervasive throughout our society. Given the ever-increasing reliance placed upon software, graduates from computing-related degree programs need to be more aware than ever of their responsibilities toward ensuring that society is well served through their creative works. To assist with this effort, a new organization is being proposed for the establishment of a rite-of-passage ceremony for students graduating in the computing sciences that is similar in nature and scope to the Ring Ceremony employed by the Order of the Engineer for students graduating from engineering programs. This new organization is solely intended to promote and recognize the ethical and moral behavior in graduates of computing-related degree programs as they transition to careers of service to society. Two institutions, Ohio Northern University and the University of South Florida, have already experimented with this concept. We seek to start a larger conversation on this concept by soliciting input from the community on what we believe is a significant need for a new organization, one that can benefit both our graduates and the computing profession.
---
paper_title: Turning students into ethical professionals
paper_content:
At the engineering school of the University of Virginia, a primary responsibility is turning students into ethical professionals. Three Accreditation Board for Engineering and Technology (ABET) Engineering Criteria (EC 2000) outcomes are a primary focus of this effort. The three relevant ABET EC 2000 outcomes are: "f", which requires students to demonstrate an understanding of professional and ethical responsibility; "h", which requires an understanding of the impact of engineering solutions in a global and societal context; and "j", which asks for knowledge of contemporary issues. These outcomes are broad. In order to create more specific outcomes, it is important to talk about the types of knowledge and skills students need to acquire in order to become ethical practitioners. The work presented establishes a framework, dividing the knowledge that engineering students need into four categories, based loosely on M. Adler's Paideia principles (1982).
---
paper_title: Professionalism in the Digital Age
paper_content:
The increased use of social media by physicians, combined with the ease of finding information online, can blur personal and work identities, posing new considerations for physician professionalism in the information age. A professional approach is imperative in this digital age in order to maintain confidentiality, honesty, and trust in the medical profession. Although the ability of physicians to use online social networks, blogs, and media sites for personal and professional reasons should be preserved, a proactive approach is recommended that includes actively managing one's online presence and making informed choices about disclosure. The development of a "dual-citizenship" approach to online social media that separates public and private personae would allow physicians to both leverage networks for professional connections and maintain privacy in other aspects. Although social media posts by physicians enable direct communication with readers, all posts should be considered public and special consideration for patient privacy is necessary.
---
paper_title: A question of ethics: Developing information system ethics
paper_content:
This study develops a pedagogy for the teaching of ethical principles in information systems (IS) classes, and reports on an empirical study that supports the efficacy of the approach. The proposed pedagogy involves having management information systems professors lead questioning and discussion on a list of ethical issues as part of their existing IS courses. The rationale for this pedagogy involves (1) the maturational aspects of ethics, and (2) the importance of repetition, challenge, and practice in developing a personal set of ethics. A study of IS ethics using a pre-post test design found that classes receiving such treatment significantly improved their performance on an IS ethics questionnaire.
---
paper_title: Modeling IT ethics: a study in situational ethics
paper_content:
Misuse of computer information systems has caused significant losses to business and society, even though computing has benefited both businesses and professionals. To this end, several measures have been suggested that both prevent and deter losses. One deterrent measure is to identify individual and situational characteristics of people who act ethically/unethically. This study identifies specific characteristics that are associated with and may influence the ethical behavior intention of information systems employees when faced with ethical dilemmas. The results of the study show that individual and situational characteristics do influence ethical behavior intention.
---
paper_title: The Human Use of Human Beings
paper_content:
By Norbert Wiener. (London: Eyre and Spottiswoode Ltd.) Pp. 242. Price 18s. This book is largely about the resemblances and differences between human beings and machines; and the dangers arising out of making machines too like human beings, or of treating human beings too like machines. The author ranges round the subject with a freedom reminiscent of a talk by James Stephens—successive index entries are Monkeys Paw (Jacobs), Monsters, Moral Values.
---
paper_title: Perspectives of ambient intelligence in the home environment
paper_content:
Ambient Intelligence is a vision of the future information society stemming from the convergence of ubiquitous computing, ubiquitous communication and intelligent user-friendly interfaces. It offers an opportunity to realise an old dream, i.e. the smart or intelligent home. Will it fulfil the promises or is it just an illusion, offering apparently easy living while actually increasing the complexity of life? This article touches upon this question by discussing the technologies, applications and social implications of ambient intelligence in the home environment. It explores how Ambient Intelligence may change our way of life. It concludes that there are great opportunities for Ambient Intelligence to support social developments and modern lifestyles. However, in order to gain wide acceptance, a delicate balance is needed: the technology should enhance the quality of life but not be seeking domination. It should be reliable and controllable but nevertheless adaptive to human habits and changing contexts.
---
paper_title: Toward ethical information systems: the contribution of discourse ethics
paper_content:
Ethics is important in the Information Systems field as illustrated by the direct effect of the Sarbanes-Oxley Act on the work of IS professionals. There is a substantial literature on ethical issues surrounding computing and information technology in the contemporary world, but much of this work is not published nor widely cited in the mainstream IS literature. The purpose of this paper is to offer one contribution to an increased emphasis on ethics in the IS field. The distinctive contribution is a focus on Habermas's discourse ethics. After outlining some traditional theories of ethics and morality, the literature on IS and ethics is reviewed, and then the paper details the development of discourse ethics. Discourse ethics is different from other approaches to ethics as it is grounded in actual debates between those affected by decisions and proposals. Recognizing that the theory could be considered rather abstract, the paper discusses the need to pragmatize discourse ethics for the IS field through, for example, the use of existing techniques such as soft systems methodology. In addition, the practical potential of the theory is illustrated through a discussion of its application to specific IS topic areas including Web 2.0, open source software, the digital divide, and the UK biometric identity card scheme. The final section summarizes ways in which the paper could be used in IS research, teaching, and practice.
---
paper_title: Ubiquitous Computing: Any Ethical Implications?
paper_content:
In this article, the authors investigate, from an interdisciplinary perspective, possible ethical implications of the presence of ubiquitous computing systems in human perception/action. The term ubiquitous computing is used to characterize information-processing capacity from computers that are available everywhere and all the time, integrated into everyday objects and activities. The contrast in approach to aspects of ubiquitous computing between traditional considerations of ethical issues and the Ecological Philosophy view concerning its possible consequences in the context of perception/action is the underlying theme of this paper. The focus is on an analysis of how the generalized dissemination of microprocessors in embedded systems, commanded by a ubiquitous computing system, can affect the behaviour of people considered as embodied embedded agents.
---
paper_title: Safeguards in a world of ambient intelligence
paper_content:
This book is a warning. It aims to warn policy-makers, industry, academia, civil society organisations, the media and the public about the threats and vulnerabilities facing our privacy, identity, trust, security and inclusion in the rapidly approaching world of ambient intelligence (AmI). In the near future, every manufactured product (our clothes, money, appliances, the paint on our walls, the carpets on our floors, our cars) will be embedded with intelligence, networks of tiny sensors and actuators, which some have termed smart dust. The AmI world is not far off. We already have surveillance systems, biometrics, personal communicators, machine learning and more. AmI will provide personalised services and know more about us on a scale dwarfing anything hitherto available. In the AmI vision, ubiquitous computing, communications and interfaces converge and adapt to the user. AmI promises greater user-friendliness in an environment capable of recognising and responding to the presence of different individuals in a seamless, unobtrusive and often invisible way. While most stakeholders paint the promise of AmI in sunny colours, there is a dark side to AmI. This book aims to illustrate the threats and vulnerabilities by means of four dark scenarios. The authors set out a structured methodology for analysing the four scenarios, and then identify safeguards to counter the foreseen threats and vulnerabilities. They make recommendations to policy-makers and other stakeholders about what they can do to maximise the benefits from ambient intelligence and minimise the negative consequences.
---
paper_title: Readings in Cyberethics
paper_content:
This book of readings is a flexible resource for undergraduate and graduate courses in the evolving fields of computer and Internet ethics. Each selection has been carefully chosen for its timeliness and analytical depth and is written by a well-known expert in the field. The readings are organized to take students from a discussion on ethical frameworks and regulatory issues to a substantial treatment of the four fundamental, interrelated issues of cyberethics: speech, property, privacy, and security. A chapter on professionalism rounds out the selection. This book makes an excellent companion to CyberEthics: Morality and Law in Cyberspace, Third Edition by providing articles that present both sides of key issues in cyberethics.
---
paper_title: Researching the ethical dimensions of mobile, ubiquitous and immersive technology enhanced learning (MUITEL): a thematic review and dialogue
paper_content:
In this article, we examine the ethical dimensions of researching the mobile, ubiquitous and immersive technology enhanced learning (MUITEL), with a particular focus on learning in informal settings. We begin with an analysis of the interactions between mobile, ubiquitous and immersive technologies and the wider context of the digital economy. In this analysis, we identify social, economic and educational developments that blur boundaries: between the individual and the consumer, between the formal and the informal, between education and other forms of learning. This leads to a complex array of possibilities for learning designs, and an equally complex array of ethical dimensions and challenges. We then examine the recent literature on the ethical dimensions of TEL research, and identify key trends, ethical dilemmas and issues for researchers investigating MUITEL in informal educational settings. We then present a summary of research dialogue between the authors (as TEL researchers) to illuminate these MUITEL research challenges, indicating new trends in ethical procedure that may offer inspiration for other researchers. We conclude with an outline, derived from the foregoing analysis, of ways in which ethical guidelines and processes can be developed by researchers – through interacting with participants and other professionals. We conclude that ethical issues need to remain as open questions and be revisited as part of research practices. Because technologies and relationships develop, reassessments will always be required in the light of new understandings. We hope this analysis will motivate and support continued reflection and discussion about how to conduct ethically committed MUITEL research.
---
paper_title: Turnover of Information Technology Professionals: A Narrative Review, Meta-Analytic Structural Equation Modeling, and Model Development
paper_content:
This study combines a narrative review with meta-analytic techniques to yield important insights about the existing research on turnover of information technology professionals. Our narrative review of 33 studies shows that the 43 antecedents to turnover intentions of IT professionals could be mapped onto March and Simon's (1958) distal-proximal turnover framework. Our meta-analytic structural equation modeling shows that proximal constructs of job satisfaction (reflecting the lack of desire to move) and perceived job alternatives (reflecting ease of movement) partially mediate the relationships between the more distal individual attributes, job-related and perceived organizational factors, and IT turnover intentions. Building on the findings from our review, we propose a new theoretical model of IT turnover that presents propositions for future research to address existing gaps in the IT literature.
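The mediation structure summarized above, in which proximal constructs such as job satisfaction partially transmit the effect of distal antecedents on turnover intention, can be illustrated with a small partial-mediation check. The sketch below is a hypothetical Baron and Kenny style illustration on synthetic data in Python; the variable names (autonomy, job_sat, turnover_int) and effect sizes are assumptions, and it is not the authors' meta-analytic SEM.

```python
# Hypothetical sketch: partial mediation of a distal antecedent (autonomy)
# through a proximal construct (job satisfaction) on turnover intention.
# Synthetic data only; this is not the authors' meta-analytic SEM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
autonomy = rng.normal(size=n)                     # distal antecedent
job_sat = 0.6 * autonomy + rng.normal(size=n)     # proximal mediator
turnover_int = -0.5 * job_sat - 0.2 * autonomy + rng.normal(size=n)

# Total effect: turnover intention regressed on the distal antecedent alone
total = sm.OLS(turnover_int, sm.add_constant(autonomy)).fit()

# Direct effect once the proximal mediator is controlled for
X = sm.add_constant(np.column_stack([autonomy, job_sat]))
direct = sm.OLS(turnover_int, X).fit()

print("total effect of autonomy:", round(total.params[1], 3))
print("direct effect of autonomy:", round(direct.params[1], 3))
print("effect of job satisfaction:", round(direct.params[2], 3))
# A direct effect that shrinks (but stays non-zero) once job satisfaction is
# included is consistent with partial mediation.
```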
---
paper_title: Combining IS Research Methods: Towards a Pluralist Methodology
paper_content:
This paper puts forward arguments in favor of a pluralist approach to IS research. Rather than advocating a single paradigm, be it interpretive or positivist, or even a plurality of paradigms within the discipline as a whole, it suggests that research results will be richer and more reliable if different research methods, preferably from different (existing) paradigms, are routinely combined together. The paper is organized into three sections after the Introduction. In §2, the main arguments for the desirability of multimethod research are put forward, while §3 discusses its feasibility in theory and practice. §4 outlines two frameworks that are helpful in designing mixed-method research studies. These are illustrated with a critical evaluation of three examples of empirical research.
---
paper_title: Integrating ethics in design through the value-sensitive design approach
paper_content:
The Accreditation Board of Engineering and Technology (ABET) has declared that to achieve accredited status, 'engineering programs must demonstrate that their graduates have an understanding of professional and ethical responsibility.' Many engineering professors struggle to integrate this required ethics instruction in technical classes and projects because of the lack of a formalized ethics-in-design approach. However, one methodology developed in human-computer interaction research, the Value-Sensitive Design approach, can serve as an engineering education tool which bridges the gap between design and ethics for many engineering disciplines. The three major components of Value-Sensitive Design (conceptual, technical, and empirical) are exemplified through a case study that focuses on the development of a command and control supervisory interface for a military cruise missile.
---
paper_title: Real character-friends: Aristotelian friendship, living together, and technology
paper_content:
Aristotle’s account of friendship has largely withstood the test of time. Yet there are overlooked elements of his account that, when challenged by apparent threats of current and emerging communication technologies, reveal his account to be remarkably prescient. I evaluate the danger that technological advances in communication pose to the future of friendship by examining and defending Aristotle’s claim that perfect or character-friends must live together. I concede that technologically-mediated communication can aid existing character-friendships, but I argue that character-friendships cannot be created and sustained entirely through technological meditation. I examine text-based technologies, such as Facebook and email, and engage a non-text based technology that poses the greatest threat to my thesis—Skype. I then address philosophical literature on friendship and technology that has emerged in the last decade in Ethics and Information Technology to elucidate and defend my account by contrast. I engage Cocking and Matthews (2000), who argue that friendship cannot be created and sustained entirely through text-based contact, Briggle (2008), who argues that friendship can be created and sustained entirely through text-based contact, and Munn (2012), who argues that friendship cannot be created and entirely sustained through text-based contact but can be created and sustained entirely in immersive virtual worlds. My account discusses a certain kind of friendship, character-friendship, and a certain kind of technology, Skype, that these accounts do not. Examination of these essays helps to demonstrate that character friendship cannot be sustained entirely by technologically-aided communication and that character-friends must live together.
---
paper_title: The Cambridge Handbook Of Information And Computer Ethics
paper_content:
---
paper_title: Designing Robots for Care: Care Centered Value-Sensitive Design
paper_content:
The prospective robots in healthcare intended to be included within the conclave of the nurse-patient relationship--what I refer to as care robots--require rigorous ethical reflection to ensure their design and introduction do not impede the promotion of values and the dignity of patients at such a vulnerable and sensitive time in their lives. The ethical evaluation of care robots requires insight into the values at stake in the healthcare tradition. What's more, given the stage of their development and lack of standards provided by the International Organization for Standardization to guide their development, ethics ought to be included into the design process of such robots. The manner in which this may be accomplished, as presented here, uses the blueprint of the Value-sensitive design approach as a means for creating a framework tailored to care contexts. Using care values as the foundational values to be integrated into a technology and using the elements in care, from the care ethics perspective, as the normative criteria, the resulting approach may be referred to as care centered value-sensitive design. The framework proposed here allows for the ethical evaluation of care robots both retrospectively and prospectively. By evaluating care robots in this way, we may ultimately ask what kind of care we, as a society, want to provide in the future.
---
paper_title: Professionalism in the Digital Age
paper_content:
The increased use of social media by physicians, combined with the ease of finding information online, can blur personal and work identities, posing new considerations for physician professionalism in the information age. A professional approach is imperative in this digital age in order to maintain confidentiality, honesty, and trust in the medical profession. Although the ability of physicians to use online social networks, blogs, and media sites for personal and professional reasons should be preserved, a proactive approach is recommended that includes actively managing one's online presence and making informed choices about disclosure. The development of a "dual-citizenship" approach to online social media that separates public and private personae would allow physicians to both leverage networks for professional connections and maintain privacy in other aspects. Although social media posts by physicians enable direct communication with readers, all posts should be considered public and special consideration for patient privacy is necessary.
---
paper_title: Security and Privacy Issues in Wireless Sensor Networks for Healthcare Applications
paper_content:
The use of wireless sensor networks (WSN) in healthcare applications is growing at a fast pace. Numerous applications such as heart rate monitors, blood pressure monitors and endoscopic capsules are already in use. To address the growing use of sensor technology in this area, a new field known as wireless body area networks (WBAN or simply BAN) has emerged. As most devices and their applications are wireless in nature, security and privacy are among the major areas of concern, and the direct involvement of humans further increases the sensitivity. Whether data are gathered from patients or individuals with their consent or without it because the system requires them, concerns about misuse or loss of privacy may keep people from taking full advantage of the system. People may not consider these devices safe for daily use. There is also the possibility of serious social unrest due to the fear that such devices may be used by government agencies or other private organizations for monitoring and tracking individuals. In this paper we discuss these issues and analyze in detail the problems and their possible countermeasures.
---
paper_title: Enabling personal privacy for pervasive computing environments
paper_content:
Protection of personal data on the Internet is already a challenge today. Users have to actively look up privacy policies of websites and decide whether they can live with the terms of use. Once discovered, they are forced to make a "take or leave" decision. In future living and working environments, where sensors and context-aware services are pervasive, this becomes an even greater challenge and annoyance. The environment is much more personalized and users cannot just "leave". They require measures to prevent, avoid and detect misuse of sensitive data, as well as to be able to negotiate the purpose of use of data. We present a novel model of privacy protection, complementing the notion of enterprise privacy with the incorporation of personal privacy towards a holistic privacy management system. Our approach allows non-expert users not only to negotiate the desired level of privacy in a rather automated and simple way, but also to track and monitor the whole life-cycle of data.
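To make the user-side notion of personal privacy management concrete, the following minimal sketch checks each data request from a context-aware service against a negotiated preference and records it so the data's life cycle can be tracked. All names (PrivacyManager, request_access) and the policy structure are illustrative assumptions, not the system the paper actually presents.

```python
# Illustrative sketch (not the paper's system): a user-side privacy manager
# that checks each data request against negotiated preferences and keeps an
# audit trail so the life cycle of personal data can be tracked.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PrivacyPreference:
    data_type: str          # e.g. "location", "heart_rate"
    allowed_purposes: set   # purposes the user has agreed to

@dataclass
class PrivacyManager:
    preferences: dict = field(default_factory=dict)  # data_type -> PrivacyPreference
    audit_log: list = field(default_factory=list)

    def request_access(self, service: str, data_type: str, purpose: str) -> bool:
        pref = self.preferences.get(data_type)
        granted = pref is not None and purpose in pref.allowed_purposes
        self.audit_log.append((datetime.now(), service, data_type, purpose, granted))
        return granted

manager = PrivacyManager()
manager.preferences["location"] = PrivacyPreference("location", {"navigation"})
print(manager.request_access("ad_service", "location", "marketing"))    # False: denied
print(manager.request_access("map_service", "location", "navigation"))  # True: allowed
```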
---
paper_title: Data mining: Consumer privacy, ethical policy, and systems development practices
paper_content:
The growing application of data mining to boost corporate profits is raising many ethical concerns, especially with regard to privacy. The volume and type of personal information that is accessible to corporations these days is far greater than in the past. This causes many consumers to be greatly concerned about potential violations of their privacy by current data collection and data mining techniques and practices. The purpose of this study is to identify the ethical issues associated with data mining and the potential risks to a corporation that is believed to be operating in an unethical manner. The paper reviews the relevant ethical policies and proposes ten data mining systems development practices that can be incorporated into a software development lifecycle to prevent these risks from materializing.
---
paper_title: Face recognition technology: security versus privacy
paper_content:
Video surveillance and face recognition systems have become the subject of increased interest and controversy after the September 11 terrorist attacks on the United States. In favor of face recognition technology, there is the lure of a powerful tool to aid national security. On the negative side, there are fears of an Orwellian invasion of privacy. Given the ongoing nature of the controversy, and the fact that face recognition systems represent leading edge and rapidly changing technology, face recognition technology is currently a major issue in the area of social impact of technology. We analyze the interplay of technical and social issues involved in the widespread application of video surveillance for person identification.
---
paper_title: On the Morality of Artificial Agents
paper_content:
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’ or set of features or observables at a given ‘LoA’. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given LoA. Morality may be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.
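Because the abstract notes that the threshold reading is especially informative when the observables are numerical, the toy sketch below evaluates an action against per-observable thresholds defined at a chosen level of abstraction. The observable names and threshold values are invented for illustration and are not taken from Floridi and Sanders.

```python
# Toy illustration of a 'threshold on observables' reading of moral evaluation.
# Observables and threshold values are invented; they are not from the paper.
from typing import Dict

def morally_acceptable(action_observables: Dict[str, float],
                       thresholds: Dict[str, float]) -> bool:
    """An action respects the threshold at this level of abstraction only if
    every observable stays within its limit; a single violation fails it."""
    return all(action_observables.get(name, 0.0) <= limit
               for name, limit in thresholds.items())

# A level of abstraction with two numerical observables for a software agent.
loa_thresholds = {"harm_score": 0.2, "privacy_loss": 0.1}

print(morally_acceptable({"harm_score": 0.05, "privacy_loss": 0.0}, loa_thresholds))  # True
print(morally_acceptable({"harm_score": 0.05, "privacy_loss": 0.4}, loa_thresholds))  # False
```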
---
paper_title: “But the data is already public”: on the ethics of research in Facebook
paper_content:
In 2008, a group of researchers publicly released profile data collected from the Facebook accounts of an entire cohort of college students from a US university. While good-faith attempts were made to hide the identity of the institution and protect the privacy of the data subjects, the source of the data was quickly identified, placing the privacy of the students at risk. Using this incident as a case study, this paper articulates a set of ethical concerns that must be addressed before embarking on future research in social networking sites, including the nature of consent, properly identifying and respecting expectations of privacy on social network sites, strategies for data anonymization prior to public release, and the relative expertise of institutional review boards when confronted with research projects based on data gleaned from social media.
---
paper_title: An enquiry into the ethical efficacy of the use of radio frequency identification technology
paper_content:
This paper provides an in-depth analysis of the privacy rights dilemma surrounding radio frequency identification (RFID) technology. As one example of a ubiquitous information system, RFID has multitudinous applications in various industries and businesses across society. The use of this technology leads to a policy-setting dilemma in that a balance must be drawn between individuals' privacy concerns and the benefits that they derive from it. After describing the basic RFID technology and some of its most prevalent uses, a definition of privacy is derived in the context of information systems. To illustrate current attempts at controlling the undesirable side effects of RFID, Lessig's cyberspace framework is presented. It is found that each of Lessig's components is inadequate at preventing individual privacy violations in RFID. The main focus within this framework is on the norms of society. The social norm mechanism that addresses privacy issues in cyberspace is the Fair Information Practices Principles (FIPP). After an analysis of these principles, it is posited that the FIPP only deal with procedural justice issues related to data collection and omit distributive and interactional justice reasoning related to the actual beneficial and detrimental outcomes to the individuals whose data is being obtained. Thus, RFID is presented in the context of the tension between the many benefits that are provided by the technology in social exchanges, and the risk it carries of the loss of privacy. The new, expanded framework recognizes both sides of the issue with the ultimate goal of providing a greater understanding of how privacy issues can be addressed with RFID technology.
---
paper_title: A software platform to analyse the ethical issues of electronic patient privacy policy: the S3P example
paper_content:
Paper-based privacy policies fail to address the new challenges posed by electronic healthcare. Protecting patient privacy through electronic systems has become a serious concern and is the subject of several recent studies. The shift towards an electronic privacy policy introduces new ethical challenges that cannot be solved merely by technical measures. Structured Patient Privacy Policy (S3P) is a software tool assuming an automated electronic privacy policy in an electronic healthcare setting. It is designed to simulate different access levels and rights of various professionals involved in healthcare in order to assess the emerging ethical problems. The authors discuss ethical issues concerning electronic patient privacy policies that have become apparent during the development and application of S3P.
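To picture what simulating access levels might involve, here is a minimal role-based access check over sections of an electronic patient record. The roles, permissions and function names are assumptions made for illustration; they are not the actual design of the S3P tool.

```python
# Minimal illustration (not the actual S3P design): simulating different
# access levels of healthcare professionals to parts of a patient record.
ROLE_PERMISSIONS = {
    "physician": {"diagnosis", "medication", "lab_results"},
    "nurse": {"medication", "lab_results"},
    "billing_clerk": {"billing_codes"},
}

def can_access(role: str, record_section: str) -> bool:
    """Return True if the role's simulated access level covers the section."""
    return record_section in ROLE_PERMISSIONS.get(role, set())

for role in ROLE_PERMISSIONS:
    print(role, "-> diagnosis:", can_access(role, "diagnosis"))
# Simulating such policies makes it visible which combinations raise ethical
# questions (for example, should a billing clerk ever see a diagnosis?).
```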
---
paper_title: Freedom and Privacy in Ambient Intelligence
paper_content:
This paper analyzes ethical aspects of the new paradigm of Ambient Intelligence, which is a combination of Ubiquitous Computing and Intelligent User Interfaces (IUI's). After an introduction to the approach, two key ethical dimensions will be analyzed: freedom and privacy. It is argued that Ambient Intelligence, though often designed to enhance freedom and control, has the potential to limit freedom and autonomy as well. Ambient Intelligence also harbors great privacy risks, and these are explored as well.
---
paper_title: The pledge of the computing professional: recognizing and promoting ethics in the computing professions
paper_content:
All of us in the computing community understand the importance of recognizing and promoting ethical behavior in our profession. Instruction in ethics is rapidly becoming a part of most computing-related curricula, whether as a stand-alone course or infused into existing courses. Both Computing Curricula 2005 and the current discussions on Computing Curricula 2013 recognize the significance of ethics, generally considering it a core topic across the various computing disciplines. Additionally, in their criteria for the accreditation of computing programs, ABET specifies that a student must attain by the time of graduation an understanding of ethical issues and responsibilities. What has been missing is a formal rite-of-passage ceremony to prompt student recognition and self-reflection on the transition from being a student to a computing professional. In 2009, seventeen faculty members and industry representatives from a wide range of institutions began to address this open problem by forming The Pledge of the Computing Professional [1], [2]. The Pledge exists to promote and recognize the ethical and moral behavior and responsibilities in graduates of computing-related degree programs as they transition to careers of service to society. The Pledge does not seek to define or enforce ethics --- this is the role of other organizations. Specifically, The Pledge is modeled after the Order of the Engineer [3] and provides a rite-of-passage ceremony at the time of graduation.
---
paper_title: Implementing moral decision making faculties in computers and robots
paper_content:
The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or another of what is a very rich topic. This paper will offer a brief overview of the many dimensions of this new field of inquiry.
---
paper_title: Ethical challenges of telemedicine and telehealth.
paper_content:
As healthcare institutions expand and vertically integrate, healthcare delivery is less constrained by geography, nationality, or even by institutional boundaries. As part of this trend, some aspects of the healthcare process are shifted from medical centers back into the home and communities. Telehealth applications intended for health promotion, social services, and other activities—for the healthy as well as for the ill—provide services outside clinical settings in homes, schools, libraries, and other governmental and community sites. Such developments include health information web sites, on-line support groups, automated telephone counseling, interactive health promotion programs, and electronic mail exchanges. Concomitant with these developments is the growth of consumer health informatics, in which individuals seeking medical care or information are able to find various health information resources that take advantage of new information technologies.
---
paper_title: Brain-Computer Interaction and Medical Access to the Brain: Individual, Social and Ethical Implications
paper_content:
This paper discusses current clinical applications and possible future uses of brain-computer interfaces (BCIs) as a means for communication, motor control and entertainment. After giving a brief account of the various approaches to direct brain-computer interaction, the paper will address individual, social and ethical implications of BCI technology to extract signals from the brain. These include reflections on medical and psychosocial benefits and risks, user control, informed consent, autonomy and privacy as well as ethical and social issues implicated in putative future developments with focus on human self-understanding and the idea of man. BCI use which involves direct interrelation and mutual interdependence between human brains and technical devices raises anthropological questions concerning self-perception and the technicalization of the
---
paper_title: Breaching the Contract? Privacy and the UK Census
paper_content:
Along with informed consent, anonymization is an accepted method of protecting the interests of research participants, while allowing data collected for official statistical purposes to be reused by other agencies within and outside government. The Decennial Census, carried out in a number of countries, including the United Kingdom, is a major event in the production of research data and provides an important resource for a variety of organizations. This article combines ethical evaluation, a review of relevant law and guidance, and analysis of 30 qualitative interviews carried out during the period of the 2001 UK Census, in order to explore the adequacy of the current framework for the protection of informational privacy in relation to census data. Taking account of Nissenbaum's concept of “contextual integrity,” Vedder's concept of “categorical privacy,” and Sen's call to heed the importance of “actual behavior,” it will be argued that the current “contractarian” view of the relationship between an individual participant and the organization carrying out the Census does not engage sufficiently with actual uses of data. As a result, people have expectations of privacy that are not matched by practice and that the current normative—including the governance—framework cannot capture.
---
paper_title: Ethics of Human Enhancement: 25 Questions & Answers
paper_content:
This paper presents the principal findings from a three-year research project funded by the US National Science Foundation (NSF) on the ethics of human enhancement technologies. To help untangle this ongoing debate, we have organized the discussion as a list of questions and answers, starting with background issues and moving to specific concerns, including: freedom & autonomy, health & safety, fairness & equity, societal disruption, and human dignity. Each question-and-answer pair is largely self-contained, allowing the reader to skip to those issues of interest without affecting continuity.
---
paper_title: Pervasive healthcare: the elderly perspective
paper_content:
The pervasive vision of future technologies raises important questions on how people, especially the elderly, will be able to use, trust and maintain privacy. To begin to address such issues, we conducted focus group sessions with elderly participants aged from 65 to 89 years. The groups were shown three Videotaped Activity Scenarios [5] depicting pervasive or ubiquitous computing applications in three contexts: health, commerce and e-voting. The resultant data was coded in terms of stakeholder, user and system issues. The data is discussed here from the user perspective -- specifically in terms of concerns about trust and privacy.
---
paper_title: The Nuremberg Code: its history and implications.
paper_content:
The Nuremberg Code is a foundational document in the ethics of medical research and human experimentation; the principles its authors espoused in 1946 have provided the framework for modern codes that address the same issues, and have received little challenge and only slight modification in the decades since. By analyzing the Code's tragic genesis and its normative implications, it is possible to understand some of the essence of modern experimental ethics, as well as certain outstanding controversies that still plague medical science.
---
paper_title: Scientific models and ethical issues in hybrid bionic systems research
paper_content:
Research on hybrid bionic systems (HBSs) is still in its infancy but promising results have already been achieved in laboratories. Experiments on humans and animals show that artificial devices can be controlled by neural signals. These results suggest that HBS technologies can be employed to restore sensorimotor functionalities in disabled and elderly people. At the same time, HBS research raises ethical concerns related to possible exogenous and endogenous limitations to human autonomy and freedom. The analysis of these concerns requires reflecting on the availability of scientific models accounting for key aspects of sensorimotor coordination and plastic adaptation mechanisms in the brain.
---
paper_title: Beyond Robot Ethics: On a Legislative Consortium for Social Robotics
paper_content:
As robots are increasingly integrated into human society, associated problems will resemble or merge with those in other fields — we can refer to this phenomenon as the 'robot sociability problem'. In this paper, the author first analyzes the dynamic relationship between robot ethics, robotics and robot law, and then proposes a 'practical robots' approach for solving the robot sociability problem. As this approach is based on legal regulations, the author posits that a functional platform such as a 'legislative consortium for social robotics' is crucial at the initial stage for social robotics development. In conclusion, the author discusses how a legislative consortium for social robotics will be a useful approach for solving the robot sociability problem, especially emerging structural legislative problems that are related to autonomous robots.
---
paper_title: The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts
paper_content:
The capacity to collect and analyse data is growing exponentially. Referred to as ‘Big Data’, this scientific, social and technological trend has helped create destabilising amounts of information, which can challenge accepted social and ethical norms. Big Data remains a fuzzy idea, emerging across social, scientific, and business contexts sometimes seemingly related only by the gigantic size of the datasets being considered. As is often the case with the cutting edge of scientific and technological progress, understanding of the ethical implications of Big Data lags behind. In order to bridge such a gap, this article systematically and comprehensively analyses academic literature concerning the ethical implications of Big Data, providing a watershed for future ethical investigations and regulations. Particular attention is paid to biomedical Big Data due to the inherent sensitivity of medical information. By means of a meta-analysis of the literature, a thematic narrative is provided to guide ethicists, data scientists, regulators and other stakeholders through what is already known or hypothesised about the ethical risks of this emerging and innovative phenomenon. Five key areas of concern are identified: (1) informed consent, (2) privacy (including anonymisation and data protection), (3) ownership, (4) epistemology and objectivity, and (5) ‘Big Data Divides’ created between those who have or lack the necessary resources to analyse increasingly large datasets. Critical gaps in the treatment of these themes are identified with suggestions for future research. Six additional areas of concern are then suggested which, although related have not yet attracted extensive debate in the existing literature. It is argued that they will require much closer scrutiny in the immediate future: (6) the dangers of ignoring group-level ethical harms; (7) the importance of epistemology in assessing the ethics of Big Data; (8) the changing nature of fiduciary relationships that become increasingly data saturated; (9) the need to distinguish between ‘academic’ and ‘commercial’ Big Data practices in terms of potential harm to data subjects; (10) future problems with ownership of intellectual property generated from analysis of aggregated datasets; and (11) the difficulty of providing meaningful access rights to individual data subjects that lack necessary resources. Considered together, these eleven themes provide a thorough critical framework to guide ethical assessment and governance of emerging Big Data practices.
---
paper_title: Big data, open science and the brain: lessons learned from genomics
paper_content:
The BRAIN Initiative aims to break new ground in the scale and speed of data collection in neuroscience, requiring tools to handle data in the magnitude of yottabytes (10^24). The scale, investment and organization of it are being compared to the Human Genome Project (HGP), which has exemplified ‘big science’ for biology. In line with the trend towards Big Data in genomic research, the promise of the BRAIN Initiative, as well as the European Human Brain Project, rests on the possibility to amass vast quantities of data to model the complex interactions between the brain and behaviour and inform the diagnosis and prevention of neurological disorders and psychiatric disease. Advocates of this ‘data driven’ paradigm in neuroscience argue that harnessing the large quantities of data generated across laboratories worldwide has numerous methodological, ethical and economic advantages, but it requires the neuroscience community to adopt a culture of data sharing and open access to benefit from them. In this article, we examine the rationale for data sharing among advocates and briefly exemplify these in terms of new ‘open neuroscience’ projects. Then, drawing on the frequently invoked model of data sharing in genomics, we go on to demonstrate the complexities of data sharing, shedding light on the sociological and ethical challenges within the realms of institutions, researchers and participants, namely dilemmas around public/private interests in data, (lack of) motivation to share in the academic community, and potential loss of participant anonymity. Our paper serves to highlight some foreseeable tensions around data sharing relevant to the emergent ‘open neuroscience’ movement.
---
paper_title: The limits of privacy in automated profiling and data mining
paper_content:
Automated profiling of groups and individuals is a common practice in our information society. The increasing possibilities of data mining significantly enhance the abilities to carry out such profiling. Depending on its application, profiling and data mining may cause particular risks such as discrimination, de-individualisation and information asymmetries. In this article we provide an overview of the risks associated with data mining and the strategies that have been proposed over the years to mitigate these risks. From there we shall examine whether current safeguards that are mainly based on privacy and data protection law (such as data minimisation and data exclusion) are sufficient. Based on these findings we shall suggest alternative policy options and regulatory instruments for dealing with the risks of data mining, integrating ideas from the field of computer science and that of law and ethics.
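Safeguards such as data minimisation and data exclusion can be pictured as a filtering step applied before records reach any profiling or mining model. The snippet below is a schematic sketch with invented field names; it is not a procedure prescribed by the article.

```python
# Schematic sketch of data minimisation / data exclusion before profiling.
# Field names are invented for illustration.
EXCLUDED_FIELDS = {"ethnicity", "religion", "health_status"}      # never used for profiling
MINIMAL_FIELDS = {"age_band", "postcode_area", "purchase_count"}  # only what the purpose needs

def minimise(record: dict) -> dict:
    """Drop excluded attributes and keep only the fields needed for the stated purpose."""
    return {k: v for k, v in record.items()
            if k in MINIMAL_FIELDS and k not in EXCLUDED_FIELDS}

raw = {"name": "Alice", "age_band": "60-69", "ethnicity": "X",
       "postcode_area": "AB1", "purchase_count": 12}
print(minimise(raw))  # {'age_band': '60-69', 'postcode_area': 'AB1', 'purchase_count': 12}
```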
---
paper_title: Eyes wide open: the personal genome project, citizen science and veracity in informed consent
paper_content:
I am a close observer of the Personal Genome Project (PGP) and one of the original ten participants. The PGP was originally conceived as a way to test novel DNA sequencing technologies on human samples and to begin to build a database of human genomes and traits. However, its founder, Harvard geneticist George Church, was concerned about the fact that DNA is the ultimate digital identifier – individuals and many of their traits can be identified. Therefore, he believed that promising participants privacy and confidentiality would be impractical and disingenuous. Moreover, deidentification of samples would impoverish both genotypic and phenotypic data. As a result, the PGP has arguably become best known for its unprecedented approach to informed consent. All participants must pass an exam testing their knowledge of genomic science and privacy issues and agree to forgo the privacy and confidentiality of their genomic data and personal health records. Church aims to scale up to 100,000 participants. This spe...
---
paper_title: Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?
paper_content:
In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I argue that each of the various elements of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience like that of pain or, as Nagel puts it, the possession of an internal something-it-is-like-to-be. I ultimately conclude that the issue of whether artificial moral agency is possible depends on the issue of whether it is possible for ICTs to be conscious.
---
paper_title: Ethics and Artificial life: From Modeling to Moral Agents
paper_content:
Artificial Life (ALife) has two goals. One attempts to describe fundamental qualities of living systems through agent based computer models. And the second studies whether or not we can artificially create living things in computational mediums that can be realized either, virtually in software, or through biotechnology. The study of ALife has recently branched into two further subdivisions, one is "dry" ALife, which is the study of living systems "in silico" through the use of computer simulations, and the other is "wet" ALife that uses biological material to realize what has only been simulated on computers, effectively wet ALife uses biological material as a kind of computer. This is challenging to the field of computer ethics as it points towards a future in which computer and bioethics might have shared concerns. The emerging studies into wet ALife are likely to provide strong empirical evidence for ALife's most challenging hypothesis: that life is a certain set of computable functions that can be duplicated in any medium. I believe this will propel ALife into the midst of the mother of all cultural battles that has been gathering around the emergence of biotechnology. Philosophers need to pay close attention to this debate and can serve a vital role in clarifying and resolving the dispute. But even if ALife is merely a computer modeling technique that sheds light on living systems, it still has a number of significant ethical implications such as its use in the modeling of moral and ethical systems, as well as in the creation of artificial moral agents.
---
paper_title: Advisory services in the virtual world: an empowerment perspective
paper_content:
The virtual world is growing in popularity and incorporates many state-of-the-art technologies. In particular, many new applications are currently being explored by public and private organisations in these virtual environments. This study examines the interesting phenomenon of providing virtual advisors to assist users in accomplishing their tasks in the virtual world. The advisory areas under discussion include commerce, health (physical and mental), academia, ethics and travel. Then a systematic framework is developed to reveal the best practices in the provision of virtual advisors, eliciting thought-provoking discussions of the current status and future trends in the use of virtual advisors in the virtual world.
---
paper_title: The Ethics of Outsourcing Online Survey Research
paper_content:
The increasing level of Internet penetration over the last decade has made web surveying a viable option for data collection in academic research. Software tools and services have been developed to facilitate the development and deployment of web surveys. Many academics and research students are outsourcing the design and/or hosting of their web surveys to external service providers, yet ethical issues associated with this use have received limited attention in academic literature. In this article, the authors focus on specific ethical concerns associated with the outsourcing of web surveys with particular reference to external commercial web survey service providers. These include threats to confidentiality and anonymity, the potential for loss of control over decisions about research data, and the reduced credibility of research. Suggested guidelines for academic institutions and researchers in relation to outsourcing aspects of web-based survey research are provided.
---
paper_title: Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches
paper_content:
A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.
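A top-down strategy of the kind contrasted in the abstract can be sketched as an explicit rule filter wrapped around an agent's action selection, whereas bottom-up approaches would learn such constraints instead. The rules and action names below are assumptions for illustration only.

```python
# Sketch of a top-down approach to artificial morality: candidate actions are
# vetoed by explicit rules before one is chosen. Rules and actions are invented.
FORBIDDEN = {"deceive_user", "share_private_data"}

def moral_filter(candidate_actions):
    """Top-down constraint: remove actions that violate an explicit rule."""
    return [a for a in candidate_actions if a not in FORBIDDEN]

def choose_action(candidate_actions):
    permitted = moral_filter(candidate_actions)
    return permitted[0] if permitted else "ask_human_operator"

print(choose_action(["share_private_data", "summarise_anonymously"]))  # summarise_anonymously
print(choose_action(["deceive_user"]))                                  # ask_human_operator
```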
---
paper_title: Convivial software: an end-user perspective on free and open source software
paper_content:
The free and open source software (Foss) movement deserves to be placed in an historico-ethical perspective that emphasizes the end user. Such an emphasis is able to enhance and support the Foss movement by arguing the ways it is heir to a tradition of professional ethical idealism and potentially related to important issues in the history of science, technology, and society relations. The focus on software from an end-user's perspective also leads to the concept of program conviviality. From a non-technical perspective, however, software is simply a new example of technology, and the effort to assure that technology is developed in a socially responsible manner has a significant history. The argument thus begins with observations about the history of technology. This leads to critical reflections on the development of professional engineering ethics, and to a discussion of the alternative technology movement. Finally, it concludes by indicating some criteria to consider when imagining the design of convivial software.
---
paper_title: How Informed Is Online Informed Consent
paper_content:
We examined participants' reading and recall of informed consent documents presented via paper or computer. Within each presentation medium, we presented the document as a continuous or paginated document to simulate common computer and paper presentation formats. Participants took slightly longer to read paginated and computer informed consent documents and recalled slightly more information from the paginated documents. We concluded that obtaining informed consent online is not substantially different than obtaining it via paper presentation. We also provide suggestions for improving informed consent in both face-to-face and online experiments.
---
paper_title: Data-driven research: open data opportunities for growing knowledge, and ethical issues that arise
paper_content:
The Open Data Initiative in the UK offers incredible opportunities for researchers who seek to gain insight from the wealth of public and institutional data that is increasingly available from government sources – like NHS prescription and GP referral information – or the information we freely offer online. Coupled with digital technologies that can help teams generate connections and collaborations, these data sets can support large-scale innovation and insight. However, by looking at a comparable explosion in data-driven journalism, this article hopes to highlight some of the ethical questions that may arise from big data. The popularity of the social networking service Twitter to share information during the riots in London in August 2011 produced a real-time record of sense-making of enormous interest to academics, reporters and to Twitter users themselves; however, when analysed and published, academic and journalistic interpretations of aggregate content was transformed and individualized, with potential implications for a user-base that was unaware it was being observed. Similar issues arise in academic research with human subjects. Here, the questions of reflexivity in data design and research ethics are considered through a popular media frame.
---
paper_title: Ethical considerations for educational research in a virtual world
paper_content:
The combination of features in virtual worlds provides an opportunity to implement and research unique learning experiences. With increasing interest and activity from the educational research community, exploring virtual worlds for teaching and learning, there is a need to identify and understand the ethical implications of conducting research in these new environments. This paper examines the traditional ethical concerns within the context of educational research in virtual worlds and identifies how the features of the technology give rise to new ethical dilemmas in the areas of informed consent, privacy protection and identity.
---
paper_title: Researching Personal Information on the Public Web: Methods and Ethics
paper_content:
There are many personal and social issues that are rarely discussed in public and hence are difficult to study. Recently, however, the huge uptake of blogs, forums, and social network sites has created spaces in which previously private topics are publicly discussed, giving a new opportunity for researchers investigating such topics. This article describes a range of simple techniques to access personal information relevant to social research questions and illustrates them with small case studies. It also discusses ethical considerations, concluding that the default position is almost the reverse of that for traditional social science research: the text authors should not be asked for consent nor informed of the participation of their texts. Normally, however, steps should be taken to ensure that text authors are anonymous in academic publications even when their texts and identities are already public.
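One practical step implied by the article's conclusion, keeping text authors anonymous in publications even when their texts are public, can be illustrated with a small pseudonymisation pass over collected posts. The hashing scheme and field names are simplifications assumed here, not the article's own method.

```python
# Illustrative pseudonymisation of collected public posts before analysis or
# quotation: usernames are replaced by stable pseudonyms via hashing.
# This is a simplification, not the article's own procedure.
import hashlib

def pseudonymise(username: str, salt: str = "project-salt") -> str:
    digest = hashlib.sha256((salt + username).encode("utf-8")).hexdigest()
    return "author_" + digest[:8]

posts = [{"user": "jane_doe", "text": "I never talk about this offline..."},
         {"user": "bob99", "text": "Same here, only online."}]

anonymised = [{"user": pseudonymise(p["user"]), "text": p["text"]} for p in posts]
print(anonymised)
```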
---
paper_title: Internet adoption by the elderly: employing IS technology acceptance theories for understanding the age-related digital divide
paper_content:
Information technology (IT) allows members of the growing elderly population to remain independent longer. However, while technology becomes more and more pervasive, an age-related underutilisation of IT remains observable. For instance, elderly people (65 years of age and older) are significantly less likely to use the Internet than the average population (see, for instance, European Commission, 2011). This age-related digital divide prevents many elderly people from using IT to enhance their quality of life through tools, such as Internet-based service delivery. Despite the significance of this phenomenon, the information systems (IS) literature lacks a comprehensive consideration and explanation of technology acceptance in general and more specifically, Internet adoption by the elderly. This paper thus studies the intentions of the elderly with regard to Internet use and identifies important influencing factors. Four alternative models based on technology acceptance theory are tested in the context of comprehensive survey data. As a result, a model that explains as much as 84% of the variance in technology adoption among the elderly is developed. We discuss the contribution of our analyses to the research on Internet adoption (and IT adoption in general) by the elderly, on the digital divide, and on technology acceptance and identify potentially effective paths for future research and theoretical development.
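The model-comparison logic described above can be pictured as regressing usage intention on candidate acceptance constructs. The sketch below fits a logistic model on synthetic data with assumed construct names (perceived usefulness, effort expectancy, social influence); it does not reproduce the paper's survey or its 84% variance result.

```python
# Toy illustration of testing a technology-acceptance-style model on synthetic
# data. Construct names and effect sizes are assumptions, not the paper's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
usefulness = rng.normal(size=n)   # perceived usefulness
effort = rng.normal(size=n)       # effort expectancy (higher = harder)
social = rng.normal(size=n)       # social influence
logit = 1.2 * usefulness - 0.8 * effort + 0.5 * social
intention = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([usefulness, effort, social])
model = LogisticRegression().fit(X, intention)
for name, coef in zip(["usefulness", "effort", "social"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
# Comparing such models (with and without particular constructs) mirrors the
# model-comparison logic used to study Internet adoption among the elderly.
```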
---
paper_title: Online survey tools: ethical and methodological concerns of human research ethics committees.
paper_content:
A survey of 750 university human research ethics boards (HRECs) in the United States revealed that Internet research protocols involving online or Web surveys are the type most often reviewed (94% of respondents), indicating the growing prevalence of this methodology for academic research. Respondents indicated that the electronic and online nature of these survey data challenges traditional research ethics principles such as consent, risk, privacy, anonymity, confidentiality, and autonomy, and adds new methodological complexities surrounding data storage, security, sampling, and survey design. Interesting discrepancies surfaced among respondents regarding strengths and weaknesses within extant guidelines, which are highlighted throughout the paper. The paper concludes with considerations and suggestions towards consistent protocol review of online surveys to ensure appropriate human subjects protections in the face of emergent electronic tools and methodologies.
---
paper_title: The danger of big data: Social media as computational social science
paper_content:
Social networking Web sites are amassing vast quantities of data and computational social science is providing tools to process this data. The combination of these two factors has significant implications for individuals and society. With announcements of growing data aggregation by both Google and Facebook, the need for consideration of these issues is becoming urgent. Just as Web 2.0 platforms put publishing in the hands of the masses, without adequate safeguards, computational social science may make surveillance, profiling, and targeting overly accessible. The academic study of computational social science explains the field as an interdisciplinary investigation of the social dynamics of society with the aid of advanced computational systems. Such investigation can operate at the macro level of global attitudes and trends, down to the personal level of an individual’s psychology. This paper uses the lens of computational social science to consider the uses and dangers that may result from the data aggregation social media companies are pursuing. We also consider the role ethics and regulation may play in protecting the public.
---
paper_title: Practical versus moral identities in identity management
paper_content:
Over the past decade Identity Management has become a central theme in information technology, policy, and administration in the public and private sectors. In these contexts the term 'Identity Management' is used primarily to refer to ways and methods of dealing with registration and authorization issues regarding persons in organizational and service-oriented domains. Especially due to the growing range of choices and options for, and the enhanced autonomy and rights of, employees, citizens, and customers, there is a growing demand for systems that enable the regulation of rights, duties, responsibilities, entitlements and access of innumerable people simultaneously. 'Identity Management' or 'Identity Management Systems' have become important headings under which such systems are designed and implemented. But there is another meaning of the term 'identity management', which is clearly related and which has gained currency. This second construal refers to the need to manage our moral identities and our identity-related information. This paper explores the relation between the management of our (moral) identities and 'Identity Management' as conceptualized in IT discourse.
---
paper_title: “But the data is already public”: on the ethics of research in Facebook
paper_content:
In 2008, a group of researchers publicly released profile data collected from the Facebook accounts of an entire cohort of college students from a US university. While good-faith attempts were made to hide the identity of the institution and protect the privacy of the data subjects, the source of the data was quickly identified, placing the privacy of the students at risk. Using this incident as a case study, this paper articulates a set of ethical concerns that must be addressed before embarking on future research in social networking sites, including the nature of consent, properly identifying and respecting expectations of privacy on social network sites, strategies for data anonymization prior to public release, and the relative expertise of institutional review boards when confronted with research projects based on data gleaned from social media.
---
paper_title: Digital Disempowerment in a Network Society
paper_content:
The objective of this article is to examine how the inequalities of participation in network society governmental systems affect the extent that individuals are empowered or disempowered within those systems. By using published data in conjunction with theories of communication, a critical secondary data analysis was conducted. This critical analysis argues that the Digital Divide involves issues concerning how democracy and democratization are related to computer-mediated communication (CMC) and its role in political communication. As the roles of CMC/ICT systems expand in political communication, existing Digital Divide gaps are likely to contribute to structural inequalities in political participation. These inequalities work against democracy and political empowerment for some people, while at the same time producing expanded opportunities of political participation for others. This raises concerns about who benefits the most from electronic government in emerging network societies.
---
paper_title: A new look at software piracy: Soft lifting primes an inauthentic sense of self, prompting further unethical behavior
paper_content:
Soft lifting refers to the process whereby a legally licensed software program is installed or copied in violation of its licensing agreement. Previous research on this pervasive kind of unethical computer use has mainly focused on the determinants of this unethical act, which are rooted in personal, economic, technological, cultural, socio-political, or legal domains. However, little is known about the symbolic power that soft lifting has on the sense of self. Based on recent advances in behavioral priming, we hypothesized that soft lifting can influence the signals one sends to oneself; more specifically, soft lifting may prime individuals to experience an inauthentic sense of self, which, in turn, prompts further unethical behavior. In Study 1, we showed that participants, primed with the memory of a recent soft lifting experience, cheated more than participants recalling a recent experience of purchasing authentic software or than control participants. Moreover, feelings of inauthenticity mediated the priming effect of soft lifting on dishonest behavior. In Study 2, participants primed with soft lifting showed a greater willingness to purchase a wide range of counterfeit products over authentic products. Besides those antecedents or correlates of soft lifting already identified in the literature, educators should pay more attention to the negative impact of soft lifting on the self-images of users, which may go beyond computer-related behaviors. Priming may provide a new direction for HCI researchers to examine the impact of computer-use-related factors on users' perceptions, motivations, and behaviors.
---
paper_title: Using APIs for Data Collection on Social Media
paper_content:
This article discusses how social media research may benefit from social media companies making data available to researchers through their application programming interfaces (APIs). An API is a back-end interface through which third-party developers may connect new add-ons to an existing service. The API is also an interface for researchers to collect data off a given social media service for empirical analysis. Presenting a critical methodological discussion of the opportunities and challenges associated with quantitative and qualitative social media research based on APIs, this article highlights a number of general methodological issues to be dealt with when collecting and assessing data through APIs. The article further discusses the legal and ethical implications of empirical research using APIs for data collection.
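As a concrete illustration of the kind of API-based collection discussed above, the sketch below pages through a hypothetical REST endpoint with the Python requests library; the URL, parameter names and response fields are placeholders rather than any real platform's API, and real services additionally require authentication and impose rate limits and terms of use that bear directly on the legal and ethical questions raised in the abstract.

    import time
    import requests

    BASE_URL = "https://api.example-social.net/v1/posts/search"  # hypothetical endpoint
    API_TOKEN = "YOUR_TOKEN"  # most platforms require OAuth or token authentication

    def collect_posts(query, max_pages=5, delay=1.0):
        """Collect matching posts page by page, with a simple self-imposed rate limit."""
        posts, cursor = [], None
        for _ in range(max_pages):
            params = {"q": query, "cursor": cursor, "token": API_TOKEN}
            resp = requests.get(BASE_URL, params=params, timeout=30)
            resp.raise_for_status()
            payload = resp.json()
            posts.extend(payload.get("items", []))
            cursor = payload.get("next_cursor")
            if not cursor:  # no further pages
                break
            time.sleep(delay)  # back off between requests
        return posts

Keeping only the fields needed for analysis and logging the query, date range and API version used makes the resulting dataset easier to document for the kind of ethical and legal review the article calls for.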
---
paper_title: The Digital Divide and Increasing Returns: Contradictions of Informational Capitalism
paper_content:
The far-reaching advances in information and communications technologies (ICTs) in tandem with the globalization of trade, investment, business regulation, production, and consumption have signaled the rise of “informational capitalism.” This article reflects on the social and economic inequalities of informational capitalism by examining two contradictions of ICTs-led economic development—increasing returns and the digital divide. Two main and interrelated strands of evidence are presented: First, contrary to expectations that rising income per capita will tend to reduce wealth and wage disparities, the distribution of income and wealth both between countries and individuals has sharply skewed in the information age; second, knowledge production is a self-reinforcing cycle that tends to disproportionately reward some and exclude others. The so-called digital divide is as much a symptom and a cause of these broader techno-economic phenomena, and regarding it as a simple issue of connectivity is simplistic...
---
paper_title: Studying cyborgs: re-examining internet studies as human subjects research
paper_content:
Virtual communities and social networks assume and consume more aspects of people's lives. In these evolving social spaces, the boundaries between actual and virtual reality, between living individuals and their virtual bodies, and between private and public domains are becoming ever more blurred. As a result, users and their presentations of self, as expressed through virtual bodies, are increasingly entangled. Consequently, more and more Internet users are cyborgs. For this reason, the ethical guidelines necessary for Internet research need to be revisited. We contend that the IS community has paid insufficient attention to the ethics of Internet research. To this end, we develop an understanding of issues related to online human subjects research by distinguishing between a disembodied and an entangled view of the Internet. We outline a framework to guide investigators and research ethics committees in answering a key question in the age of cyborgism: When does a proposed Internet study deal with human subjects as opposed to digital material?
---
paper_title: Big Data, Big Problems: Emerging Issues in the Ethics of Data Science and Journalism
paper_content:
As big data techniques become widespread in journalism, both as the subject of reporting and as newsgathering tools, the ethics of data science must inform and be informed by media ethics. This article explores emerging problems in ethical research using big data techniques. It does so using the duty-based framework advanced by W.D. Ross, who has significantly influenced both research science and media ethics. A successful framework must provide stability and flexibility. Without stability, ethical precommitments will vanish as technology rapidly shifts costs. Without flexibility, traditional approaches will rapidly become obsolete in the face of technological change. The article concludes that Ross's duty-based approach both provides stability in the face of rapid technological change and flexibility to innovate to achieve the original purpose of basic ethical principles.
---
paper_title: A software platform to analyse the ethical issues of electronic patient privacy policy: the S3P example
paper_content:
Paper-based privacy policies fail to resolve the new changes posed by electronic healthcare. Protecting patient privacy through electronic systems has become a serious concern and is the subject of several recent studies. The shift towards an electronic privacy policy introduces new ethical challenges that cannot be solved merely by technical measures. Structured Patient Privacy Policy (S3P) is a software tool assuming an automated electronic privacy policy in an electronic healthcare setting. It is designed to simulate different access levels and rights of various professionals involved in healthcare in order to assess the emerging ethical problems. The authors discuss ethical issues concerning electronic patient privacy policies that have become apparent during the development and application of S3P.
---
paper_title: Psycho-Informatics: Big Data shaping modern psychometrics
paper_content:
For the first time in history, it is possible to study human behavior on great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researches and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own.
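To make the proposed pipeline slightly more concrete, here is a minimal sketch of the first two ingredients, a stream of usage events and one simple aggregated feature; the event schema and the per-day screen-time feature are illustrative assumptions, not the projects' actual data model.

    from collections import defaultdict
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class UsageEvent:          # hypothetical minimal schema for a logged app session
        user_id: str
        app: str
        start: datetime
        end: datetime

    def daily_screen_minutes(events):
        """Aggregate raw usage events into per-user, per-day screen-time totals (minutes)."""
        totals = defaultdict(float)
        for e in events:
            totals[(e.user_id, e.start.date())] += (e.end - e.start).total_seconds() / 60.0
        return dict(totals)

Features of this kind would then feed the analytical platform the authors describe, which is also where the anonymisation and consent issues raised at the end of the abstract have to be enforced.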
---
paper_title: From Speculative Nanoethics to Explorative Philosophy of Nanotechnology
paper_content:
In the wake of the emergence and rapid development of nanoethics there swiftly followed fundamental criticism: nanoethics was said to have become much too involved with speculative developments and was concerning itself too little with actually pending questions of nanotechnology design and applications. If this diagnosis is true, then large parts of nanoethics are misguided. Such fundamental criticism must surely either result in a radical reorientation of nanoethics or be refuted for good reasons. In this paper, I will examine the critics’ central arguments and, building on this scrutiny, formulate an answer to these alternatives. The results lead to conclusions which allow explaining and unfolding the thesis of this paper that instead of speculative nanoethics we should better speak of and develop explorative philosophy of nanotechnology.
---
paper_title: The Ethics of Synthetic Biology: Guiding Principles for Emerging Technologies
paper_content:
Some call synthetic biology an epochal development—the start of a new industrial revolution, the moment humans learned to be gods. Others think it is an incremental advance with an iffy payoff. In these essays, the chair of the Presidential Commission for the Study of Bioethical Issues and participants from a recent Hastings project examine the social challenge it presents.
---
paper_title: Developing a framework for responsible innovation
paper_content:
The governance of emerging science and innovation is a major challenge for contemporary democracies. In this paper we present a framework for understanding and supporting efforts aimed at ‘responsible innovation’. The framework was developed in part through work with one of the first major research projects in the controversial area of geoengineering, funded by the UK Research Councils. We describe this case study, and how this became a location to articulate and explore four integrated dimensions of responsible innovation: anticipation, reflexivity, inclusion and responsiveness. Although the framework for responsible innovation was designed for use by the UK Research Councils and the scientific communities they support, we argue that it has more general application and relevance.
---
paper_title: The Transnational Governance of Synthetic Biology: Scientific uncertainty, cross-borderness and the ’art’ of governance
paper_content:
This working paper summarises and appraises current thinking and proposals for the governance of synthetic biology. Considering that contemporary synthetic biology was only born around 2004, when the first international conference (SB1.0) was held, the literature already produced about the governance of this field is very extensive. Annex I lists 39 reports produced since 2004 by scientific, governmental and non-governmental organisations, and shows how activity in this field has increased rapidly in the last few years, with 28 reports published in just the last 3 years. Alongside this grey literature, there are also numerous articles published in academic journals by synthetic biologists, sociologists, legal scholars and philosophers. We have utilised this literature as a resource and particular attention has been paid to identifying the range of views expressed by different actors. This literature review is complemented by participant observation in synthetic biology laboratories, scientific meetings and policy forums in the UK, as members of the joint LSE-Imperial College Centre for Synthetic Biology and Innovation (CSynBI). Additional fieldwork consisting of laboratory visits and interviews with scientists and policy makers was conducted in the UK (by Claire Marris), in China (by Joy Zhang) and in Japan (by Caitlin Cockerton and Susanna Finlay). The report is organised as follows:
· Section 2 summarises current accounts of synthetic biology and explains how conflicting narratives occur in parallel.
· Sections 3 and 4 then elucidate the main sources of governance challenges exhibited by synthetic biology. Section 3 demonstrates that current concerns over synthetic biology mostly originate from two key features of synthetic biology: scientific uncertainty and cross-borderness. By examining the cases of the US, the UK and China, Section 4 further illustrates both the international divergences and the transnational interconnectedness that any governing attempts for synthetic biology need to take into consideration.
· Section 5 discusses three key governance challenges that arise from these two features of synthetic biology: the salience of both knowing and non-knowing; the need for external accountability; and the fragmentation of social authorities.
· Section 6 outlines our proposal for the ‘art of governance’ to address these challenges.
---
| Title: The Ethics of Computing: A Survey of the Computing-Oriented Literature
Section 1: INTRODUCTION
Description 1: Provide context on the increasing relevance of ethical aspects in computing technologies, the lack of common understanding among technical communities, and the aim of the article to systematically review the literature on ethics in computing.
Section 2: THE ETHICS OF COMPUTING
Description 2: Introduce the term "ethics" and its importance for computing professionals, discussing various ethical dilemmas and the relevance of understanding philosophical ethics.
Section 3: Computer and Information Ethics as Applied Ethics
Description 3: Discuss the development and significance of computer ethics as a specialized field within applied ethics, emphasizing the role of professional organizations and the broader discourse within the academic community.
Section 4: The Ethics of Computing
Description 4: Explain the approach taken in the survey to explore the literature on ethics and computing available to technical research communities, focusing on the structure and relevance of "ethical issues."
Section 5: METHODOLOGY
Description 5: Describe the methodology employed for selecting appropriate literature, identifying and coding themes, and validating the quality of the review.
Section 6: Selection of the Literature
Description 6: Detail the selection process for the literature review, including databases used, inclusion and exclusion criteria, and the final sample size.
Section 7: Identification and Coding of Themes
Description 7: Outline the process of identifying relevant themes in the literature and the coding scheme used to categorize ethical issues, technologies, ethical theories, methodologies, contributions, and recommendations.
Section 8: Categorization of Codes
Description 8: Explain how similar and overlapping codes were grouped into useful categories to better understand past and current questions in the ethics of computing.
Section 9: FINDINGS
Description 9: Present the findings of the survey, discussing the most frequently identified ethical issues, technologies, ethical theories, methodologies, contributions, and recommendations.
Section 10: Relationship Between Key Ethical Issues and Other Categories
Description 10: Explore the relationships between different categories of ethical issues and their connections to other aspects of the discourse, providing a deeper understanding of key topics.
Section 11: DISCUSSION
Description 11: Discuss general trends and gaps in the overall discourse on ethics and computing, highlighting the static nature of ethical issues and computing technologies over time, and the limited use of ethical theories.
Section 12: CONCLUSION: ALIGNING ETHICS OF COMPUTING AND RESPONSIBLE RESEARCH AND INNOVATION
Description 12: Summarize the importance of ethical considerations in computing, the need for a more practical and actionable approach, and recommend aligning ethics in computing with the principles of responsible research and innovation.
Section 13: Limitations
Description 13: Discuss the limitations of the research, including potential biases in coding and the challenges of summarizing a large body of work within a short space.
Section 14: Implications and Recommendations: The Transition to Responsible Research and Innovation in ICT
Description 14: Provide recommendations for future research and practice, emphasizing the importance of practical contributions, explicit methodologies, and broader societal engagement to align ethics in computing with responsible research and innovation principles. |
DIMACS Series in Discrete Mathematics and Theoretical Computer Science Survey: Information flow on trees | 8 | ---
paper_title: Five surprising properties of parsimoniously colored trees
paper_content:
Trees with a coloration of their leaves have an induced “length” which forms the basis of the widely used maximum parsimony method for reconstructing evolutionary trees in biology. Here we describe five unexpected properties of this length function, including refinements of earlier results.
---
paper_title: Taxonomy with confidence
paper_content:
Abstract There are essentially three ways in which four species may be related in a phylogenetic tree graph. It is usual to compute for each of these three possibilities the smallest number of mutations that could have brought about the observed distribution of characteristics among the four species. The graph that minimizes this number is then preferred. In fact, the hypothesis that the graph chosen in this way is correct may be accepted with confidence if the minimum is strong in a sense described here. In principle, the theory could be extended to treat sets of more than four species.
---
paper_title: On the purity of the limiting gibbs state for the Ising model on the Bethe lattice
paper_content:
We give a proof that for the Ising model on the Bethe lattice, the limiting Gibbs state with zero effective field (disordered state) persists to be pure for temperature below the ferromagnetic critical temperatureT c F until the critical temperatureT c SG of the corresponding spin-glass model. This new proof revises the one proposed earlier.
---
paper_title: Elements of Information Theory
paper_content:
Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cram-er-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Compression. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.
---
paper_title: Reconstruction on trees: Beating the second eigenvalue
paper_content:
We consider a process in which information is transmitted from a given root node on a noisy d-ary tree network T. We start with a uniform symbol taken from an alphabet A. Each edge of the tree is an independent copy of some channel (Markov chain) M, where M is irreducible and aperiodic on A. The goal is to reconstruct the symbol at the root from the symbols at the nth level of the tree. This model has been studied in information theory, genetics, and statistical physics. The basic question is: Is it possible to reconstruct (some information on) the root? In other words, does the probability of correct reconstruction tend to 1/|A| as n → ∞? It is known that reconstruction is possible if dλ2(M)² > 1, where λ2(M) is the second eigenvalue of M. Moreover, in this case it is possible to reconstruct using a majority algorithm which ignores the location of the data at the boundary of the tree. When M is a symmetric binary channel, this threshold is sharp. In this paper we show that both for the binary asymmetric channel and for the symmetric channel on many symbols it is sometimes possible to reconstruct even when dλ2(M)² < 1. This result indicates that for many (maybe most) tree indexed Markov chains the location of the data on the boundary plays a crucial role in reconstruction problems.
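The threshold is easy to probe numerically. The following toy Monte Carlo sketch (an editorial illustration, not the paper's method) broadcasts a root bit down a d-ary tree through a binary symmetric channel with flip probability eps, for which λ2 = 1 − 2·eps, and reconstructs the root by a majority vote over the deepest level.

    import random

    def broadcast_and_majority(d=3, depth=8, eps=0.10, trials=1000):
        """Estimate the probability that majority vote over level `depth`
        recovers the root bit; compare with the Kesten-Stigum quantity
        d * (1 - 2*eps)**2."""
        correct = 0
        for _ in range(trials):
            root = random.randint(0, 1)
            level = [root]
            for _ in range(depth):
                # each node passes its bit to d children, flipping each copy w.p. eps
                level = [s ^ (random.random() < eps) for s in level for _ in range(d)]
            guess = int(sum(level) * 2 > len(level))
            correct += int(guess == root)
        return correct / trials

With d = 3 and eps = 0.10 the quantity d(1 − 2·eps)² ≈ 1.92 exceeds 1 and the empirical success probability stays well above 1/2 as the depth grows, while pushing eps toward the threshold value drives it down toward 1/2.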
---
paper_title: Glauber dynamics on trees and hyperbolic graphs
paper_content:
Abstract.We study continuous time Glauber dynamics for random configurations with local constraints (e.g. proper coloring, Ising and Potts models) on finite graphs with n vertices and of bounded degree. We show that the relaxation time (defined as the reciprocal of the spectral gap |λ1−λ2|) for the dynamics on trees and on planar hyperbolic graphs, is polynomial in n. For these hyperbolic graphs, this yields a general polynomial sampling algorithm for random configurations. We then show that for general graphs, if the relaxation time τ2 satisfies τ2=O(1), then the correlation coefficient, and the mutual information, between any local function (which depends only on the configuration in a fixed window) and the boundary conditions, decays exponentially in the distance between the window and the boundary. For the Ising model on a regular tree, this condition is sharp.
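For reference, a minimal sketch of the single-site heat-bath (Glauber) update analysed in this line of work, written for ±1 Ising spins with inverse temperature beta and no external field; the graph encoding is an arbitrary choice.

    import math
    import random

    def glauber_step(spins, neighbors, beta):
        """One heat-bath update: pick a uniform vertex and resample its spin from
        the Ising conditional distribution given its neighbors.
        `spins` maps vertex -> +1/-1, `neighbors` maps vertex -> list of vertices."""
        v = random.choice(list(spins))
        field = sum(spins[u] for u in neighbors[v])
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        spins[v] = 1 if random.random() < p_plus else -1

The relaxation and mixing results quoted above concern how many such updates are needed before the law of the configuration is close to the Gibbs measure.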
---
paper_title: On the maximum tolerable noise for reliable computation by formulas
paper_content:
It is shown that if formulas constructed from error-prone three-input gates are used to compute Boolean functions, then a per-gate failure probability of 1/6 or more cannot be tolerated. The result is shown to be tight if the per-gate failure probability is constant and precisely known.
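As an editorial aside (a standard heuristic calculation, not reproduced from the paper), the constant 1/6 can be located by iterating noisy 3-input majority gates: if each input is independently correct with probability p and the gate output is flipped with probability ε, then

\[
m(p) = p^{3} + 3p^{2}(1-p), \qquad
f_{\varepsilon}(p) = (1-\varepsilon)\,m(p) + \varepsilon\,\bigl(1 - m(p)\bigr),
\]
\[
m\!\left(\tfrac12\right) = \tfrac12, \qquad
f_{\varepsilon}'\!\left(\tfrac12\right) = (1-2\varepsilon)\,m'\!\left(\tfrac12\right) = \tfrac32\,(1-2\varepsilon),
\]

so the uninformative fixed point p = 1/2 repels, allowing a useful stable fixed point p* > 1/2, exactly when (3/2)(1 − 2ε) > 1, i.e. ε < 1/6; for ε ≥ 1/6 repeated voting cannot keep the correctness probability bounded away from 1/2.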
---
paper_title: On the extremality of the disordered state for the Ising model on the Bethe lattice
paper_content:
We give a simple proof that the limit Ising Gibbs measure with free boundary conditions on the Bethe lattice with the forward branching ratio k ≥ 2 is extremal if and only if β is less than or equal to the spin glass transition value, given by tanh(β_c^SG) = 1/√k.
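In the broadcasting language used elsewhere in this reference list (an editorial restatement, assuming the standard identification of the Ising model with a binary symmetric channel of flip probability ε, whose second eigenvalue is θ = 1 − 2ε = tanh β), the statement reads

\[
k\,\theta^{2} \le 1
\;\Longleftrightarrow\;
\tanh\beta \;\le\; \frac{1}{\sqrt{k}} \;=\; \tanh\beta_{c}^{SG},
\]

so extremality of the free-boundary measure coincides with the Kesten-Stigum non-reconstruction condition on the k-ary tree.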
---
paper_title: Large Deviations Techniques and Applications
paper_content:
LDP for Finite Dimensional Spaces.- Applications-The Finite Dimensional Case.- General Principles.- Sample Path Large Deviations.- The LDP for Abstract Empirical Measures.- Applications of Empirical Measures LDP.
---
paper_title: Gibbs Measures and Dismantlable Graphs
paper_content:
We model physical systems with “hard constraints” by the space Hom(G, H) of homomorphisms from a locally finite graph G to a fixed finite constraint graph H. Two homomorphisms are deemed to be adjacent if they differ on a single site of G. We investigate what appears to be a fundamental dichotomy of constraint graphs, by giving various characterizations of a class of graphs that we call dismantlable. For instance, H is dismantlable if and only if, for every G, any two homomorphisms from G to H which differ at only finitely many sites are joined by a path in Hom(G, H). If H is dismantlable, then, for any G of bounded degree, there is some assignment of activities to the nodes of H for which there is a unique Gibbs measure on Hom(G, H). On the other hand, if H is not dismantlable (and not too trivial), then there is some r such that, whatever the assignment of activities on H, there are uncountably many Gibbs measures on Hom(Tr, H), where Tr is the (r+1)-regular tree.
---
paper_title: The random-cluster model on a homogeneous tree
paper_content:
The random-cluster model on a homogeneous tree is defined and studied. It is shown that for 1 ≤ q ≤ 2, the percolation probability in the maximal random-cluster measure is continuous in p, while for q > 2 it has a discontinuity at the critical value p = p_c(q). It is also shown that for q > 2, there is nonuniqueness of random-cluster measures for an entire interval of values of p. The latter result is in sharp contrast to what happens on the integer lattice Z^d.
---
paper_title: Reliable computation by formulas in the presence of noise
paper_content:
It is shown that if formulas are used to compute Boolean functions in the presence of randomly occurring failures then: (1) there is a limit strictly less than 1/2 to the failure probability per gate that can be tolerated, and (2) formulas that tolerate failures must be deeper (and, therefore, compute more slowly) than those that do not. The heart of the proof is an information-theoretic argument that deals with computation and errors in very general terms. The strength of this argument is that it applies with equal ease no matter what types of gate are available. Its weakness is that it does not seem to predict quantitatively the limiting value of the failure probability or the ratio by which computation proceeds more slowly in the presence of failures.
---
paper_title: Phase transitions in phylogeny
paper_content:
We apply the theory of Markov random fields on trees to derive a phase transition in the number of samples needed in order to reconstruct phylogenies. We consider the Cavender-Farris-Neyman model of evolution on trees, where all the inner nodes have degree at least 3, and the net transition on each edge is bounded by ε. Motivated by a conjecture by M. Steel, we show that if 2(1 − 2ε)² > 1, then for balanced trees, the topology of the underlying tree, having n leaves, can be reconstructed from O(log n) samples (characters) at the leaves. On the other hand, we show that if 2(1 − 2ε)² < 1, then there exist topologies which require at least poly(n) samples for reconstruction. Our results are the first rigorous results to bring the role of phase transitions for Markov random fields on trees, as studied in probability, statistical physics and information theory, to the study of phylogenies in mathematical biology.
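For orientation (an editorial note, not part of the abstract), the critical mutation rate implied by this condition is

\[
2\,(1-2\varepsilon)^{2} > 1
\;\Longleftrightarrow\;
1-2\varepsilon > \tfrac{1}{\sqrt{2}}
\;\Longleftrightarrow\;
\varepsilon < \tfrac12\Bigl(1 - \tfrac{1}{\sqrt{2}}\Bigr) \approx 0.146,
\]

which is the Kesten-Stigum condition dθ² > 1 for the binary symmetric channel with d = 2 children per node and θ = 1 − 2ε.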
---
paper_title: Toward Defining the Course of Evolution: Minimum Change for a Specific Tree Topology
paper_content:
Fitch, W. M. (Dept. of Physiological Chemistry, Univ. of Wisconsin, Madison, Wisconsin, 53706), 1971. Toward defining the course of evolution: minimum change for a specific tree topology. Syst. Zool., 20:406-416. A method is presented that is asserted to provide all hypothetical ancestral character states that are consistent with describing the descent of the present-day character states in a minimum number of changes of state using a predetermined phylogenetic relationship among the taxa represented. The character states used as examples are the four messenger RNA nucleotides encoding the amino acid sequences of proteins, but the method is general. [Evolution; parsimonious trees.] It has been a goal of those attempting to deduce phylogenetic relationships from information on biological characteristics to find the ancestral relationship(s) that would permit one to account for the descent of those characteristics in a manner requiring a minimum number of evolutionary steps or changes. The result could be called the most parsimonious evolutionary tree and might be expected to have a high degree of correspondence to the true phylogeny (Camin and Sokal, 1965). Its justification lies in the most efficient use of the information available and does not presuppose that evolution follows a most parsimonious course. There are no known algorithms for finding the most parsimonious tree(s) apart from the brute force method of examining nearly every possible tree. This is impractical for trees involving a dozen or more taxonomic units. Most numerical taxonomic procedures (Sokal and Sneath, 1963; Farris, 1969, 1970; Fitch and Margoliash, 1967) provide dendrograms that would be among the more parsimonious solutions; one just cannot be sure that a more parsimonious tree structure does not exist. Farris (1970) has explicitly considered the parsimony principle as a part of his method which, like the present method, has its roots in the Wagner tree (Wagner, [Footnote: An elegant beginning to an attack on the problem has recently been published by Farris (1969), who developed a method which estimates the reliability of various characters and then weights the characters on the basis of that reliability.]
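To make the procedure concrete, here is a compact sketch (an editorial illustration, not Fitch's original presentation) of the bottom-up pass for a single character on a rooted bifurcating tree; the tree encoding and the example states are arbitrary choices.

    def fitch_length(tree, leaf_states):
        """Return (candidate state set, minimum number of changes) for the subtree
        `tree`, which is either a leaf name or a pair of child subtrees;
        `leaf_states` maps leaf name -> observed character state."""
        if isinstance(tree, str):                      # leaf
            return {leaf_states[tree]}, 0
        sets, changes = [], 0
        for child in tree:
            s, c = fitch_length(child, leaf_states)
            sets.append(s)
            changes += c
        inter = set.intersection(*sets)
        if inter:                                      # children agree on some state
            return inter, changes
        # disagreement: take the union and count one change
        # (exact for bifurcating trees, which is the setting of the paper)
        return set.union(*sets), changes + 1

    states = {"A": 0, "B": 1, "C": 1, "D": 1}
    print(fitch_length((("A", "B"), ("C", "D")), states))   # ({1}, 1)

Running the example gives a parsimony length of 1, corresponding to a single change on the branch leading to A.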
---
paper_title: Recursions on trees and the ising model at critical temperatures
paper_content:
---
paper_title: Signal propagation, with application to a lower bound on the depth of noisy formulas
paper_content:
We study the decay of an information signal propagating through a series of noisy channels. We obtain exact bounds on such decay, and as a result provide a new lower bound on the depth of formulas with noisy components. This improves upon previous work of N. Pippenger (1988) and significantly decreases the gap between his lower bound and the classical upper bound of von Neumann. We also discuss connections between our work and the study of mixing rates of Markov chains.
---
paper_title: On the impossibility of reconstructing ancestral data and phylogenies
paper_content:
We prove that it is impossible to reconstruct ancestral data at the root of "deep" phylogenetic trees with high mutation rates. Moreover, we prove that it is impossible to reconstruct the topology of "deep" trees with high mutation rates from a number of characters smaller than a low-degree polynomial in the number of leaves. Our impossibility results hold for all reconstruction methods. The proofs apply tools from information theory and percolation theory.
---
paper_title: The Ising model on trees: boundary conditions and mixing time
paper_content:
We give the first comprehensive analysis of the effect of boundary conditions on the mixing time of the Glauber dynamics for the Ising model. Specifically, we show that the mixing time on an n-vertex regular tree with (+)-boundary remains O(n log n) at all temperatures (in contrast to the free boundary case, where the mixing time is not bounded by any fixed polynomial at low temperatures). We also show that this bound continues to hold in the presence of an arbitrary external field. Our results are actually stronger, and provide tight bounds on the log-Sobolev constant and the spectral gap of the dynamics. In addition, our methods yield simpler proofs and stronger results for the mixing time in the regime where it is insensitive to the boundary condition. Our techniques also apply to a much wider class of models, including those with hard constraints like the antiferromagnetic Potts model at zero temperature (colorings) and the hard-core model (independent sets).
---
paper_title: A Phase Transition for a Random Cluster Model on Phylogenetic Trees
paper_content:
We investigate a simple model that generates random partitions of the leaf set of a tree. Of particular interest is the reconstruction question: what number k of independent samples (partitions) are required to correctly reconstruct the underlying tree (with high probability)? We demonstrate a phase transition for k as a function of the mutation rate, from logarithmic to polynomial dependence on the size of the tree. We also describe a simple polynomial-time tree reconstruction algorithm that applies in the logarithmic region. This model and the associated reconstruction questions are motivated by a Markov model for genomic evolution in molecular biology.
---
paper_title: Reconstruction thresholds on regular trees
paper_content:
We consider a branching random walk with binary state space and index set $T^k$, the infinite rooted tree in which each node has k children (also known as the model of "broadcasting on a tree"). The root of the tree takes a random value 0 or 1, and then each node passes a value independently to each of its children according to a 2x2 transition matrix P. We say that "reconstruction is possible" if the values at the d'th level of the tree contain non-vanishing information about the value at the root as $d\to\infty$. Adapting a method of Brightwell and Winkler, we obtain new conditions under which reconstruction is impossible, both in the general case and in the special case $p_{11}=0$. The latter case is closely related to the "hard-core model" from statistical physics; a corollary of our results is that, for the hard-core model on the (k+1)-regular tree with activity $\lambda=1$, the unique simple invariant Gibbs measure is extremal in the set of Gibbs measures, for any k.
---
paper_title: Robust reconstruction on trees is determined by the second eigenvalue
paper_content:
Consider a Markov chain on an infinite tree T = (V,E) rooted at ρ. In such a chain, once the initial root state σ(ρ) is chosen, each vertex iteratively chooses its state from the one of its parent by an application of a Markov transition rule (and all such applications are independent). Let µ_j denote the resulting measure for σ(ρ) = j. µ_j is defined on configurations σ = (σ(x))_{x∈V} ∈ A^V, where A is some finite set. Let µ_j^n denote the restriction of µ_j to the sigma-algebra generated by the variables σ(x) where x is at distance exactly n from ρ. Letting α_n = max_{i,j∈A} d_TV(µ_i^n, µ_j^n), where d_TV denotes total variation distance, we say that the reconstruction problem is solvable if liminf_{n→∞} α_n > 0. Reconstruction solvability roughly means that the n'th level of the tree contains a non-vanishing amount of information on the root of the tree, as n → ∞. In this paper we study the problem of robust reconstruction. Let ν be a non-degenerate distribution on A and ε > 0. Let σ be chosen according to µ_j^n and σ′ be obtained from σ by letting for each node independently σ′(v) = σ(v) with probability 1 − ε and σ′(v) be an independent sample from ν otherwise. We denote by µ_j^n[ν, ε] the resulting measure on σ′. The measure µ_j^n[ν, ε] is a perturbation of the measure µ_j^n. Letting α_n(ν, ε) = max_{i,j∈A} d_TV(µ_i^n[ν, ε], µ_j^n[ν, ε]), we say that the reconstruction problem is ν-robust-solvable if liminf_{n→∞} α_n(ν, ε) > 0 for all 0 < ε < 1. We prove that if B|λ2(M)|² > 1, where λ2(M) is the second largest eigenvalue of M (in absolute value), then for all non-degenerate ν, the reconstruction problem is ν-robust-solvable. We prove a converse and show that the reconstruction problem is not ν-robust-solvable if B|λ2(M)|² < 1. This proves a conjecture by the second author and Peres. We also consider other models of noise and general trees.
---
| Title: DIMACS Series in Discrete Mathematics and Theoretical Computer Science Survey: Information flow on trees
Section 1: Introduction
Description 1: Introduce the concept of information flow on trees and discuss its significance in genetic propagation and communication networks.
Section 2: The reconstruction problem
Description 2: Define the process and explore the solvability of the reconstruction problem, including census solvability and equivalent definitions.
Section 3: The Ising model
Description 3: Discuss the Ising model on trees, including known results for symmetric binary channels and various reconstruction algorithms.
Section 4: Census solvability
Description 4: Investigate the threshold for census solvability and demonstrate the conditions under which the root can be reconstructed using the census of the tree level.
Section 5: Potts models
Description 5: Explore the reconstruction problem for Potts models, including asymmetric binary channels and ferromagnetic models, with associated threshold conditions.
Section 6: General channels
Description 6: Discuss techniques and criteria applicable to the reconstruction problem for general channels, including recursive methods and spectral gaps in Glauber dynamics.
Section 7: Terminology and related problems
Description 7: Review related problems and terminology from various fields, such as statistical physics, information theory, and biology, and discuss their connections to the reconstruction problem.
Section 8: Very recent results
Description 8: Highlight new developments and recent findings related to the reconstruction problem, including better bounds and robust solvability. |
A Comprehensive Survey on Bengali Phoneme Recognition | 9 | ---
paper_title: A contrastive analysis of English and Bangla phonemics
paper_content:
Contrastive phonemics is the field of study in which different phonemic systems are laid side by side to find out similarities and dissimilarities between the phonemes of the languages concerned. Every language has its own phonemic system, which holds unique as well as common features. A language shares some phonemes with other languages, but no two languages have the same phonemic inventory. This article makes a contrastive analysis of the phonemic systems of English and Bangla. The aspects of similarities as well as dissimilarities between the two have been explored in detail. It brings into focus the inventory of phonemes of the two languages along with relevant phonetic and phonological characteristics. The vowel and consonant phonemes of the two languages have been compared with sufficient examples, making it clear where and how they are identical and different.
---
paper_title: The Indo-Aryan Languages
paper_content:
1. Introduction 2. The modern Indo-Aryan languages and dialects 3. The historical context and development of Indo-Aryan 4. The nature of the New Indo-Aryan lexicon 5. NIA descriptive phonology 6. Writing systems 7. Historical phonology 8. Nominal forms and categories 9. Verbal forms and categories 10. Syntax Appendix I Inventory of NIA languages and dialects Appendix II Schemes of NIA subclassification.
---
paper_title: Bangla phoneme recognition for ASR using multilayer neural network
paper_content:
This paper presents a Bangla phoneme recognition method for Automatic Speech Recognition (ASR). The method consists of two stages: i) a multilayer neural network (MLN) converts acoustic features, mel frequency cepstral coefficients (MFCCs), into phoneme probabilities, and ii) the phoneme probabilities obtained from the first stage, together with the corresponding Δ and ΔΔ parameters calculated by linear regression (LR), are inserted into a hidden Markov model (HMM) based classifier to obtain more accurate phoneme strings. From experiments on a Bangla speech corpus prepared by us, it is observed that the proposed method provides higher phoneme recognition performance than the existing method. Moreover, it requires fewer mixture components in the HMMs.
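A minimal sketch of the first stage described above, assuming nothing about the authors' actual network: a small multilayer network mapping one MFCC feature frame to a vector of phoneme posteriors. The layer sizes, phoneme count, and random weights are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_hidden, n_phonemes = 39, 128, 50   # illustrative sizes only (MFCC + deltas -> phoneme set)

# Randomly initialised weights stand in for a trained network in this sketch.
W1 = rng.standard_normal((n_feat, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, n_phonemes)) * 0.1

def phoneme_posteriors(frame):
    """Map one acoustic feature frame to a probability distribution over phonemes."""
    h = np.tanh(frame @ W1)                 # hidden layer
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # softmax over phoneme classes

frame = rng.standard_normal(n_feat)         # a fake MFCC + Δ + ΔΔ frame
p = phoneme_posteriors(frame)
print(p.shape, round(float(p.sum()), 6))    # (50,) 1.0
```

In the full pipeline these per-frame posteriors (plus their Δ and ΔΔ) would feed the HMM-based decoder.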
---
paper_title: Phonetic Features enhancement for Bangla automatic speech recognition
paper_content:
This paper discusses a phonetic feature (PF) based automatic speech recognition (ASR) system for Bangla (widely known as Bengali), where the PF features are enhanced. The method has three stages: the first maps acoustic features (AFs), or local features (LFs), into phonetic features (PFs); the second applies an inhibition/enhancement (In/En) algorithm that modifies the PF dynamic patterns, enhancing convex patterns and inhibiting concave ones; the final stage normalizes the extended PF vector using the Gram-Schmidt algorithm and passes it through a hidden Markov model (HMM) based classifier. In our experiments on a speech corpus for Bangla, the proposed feature extraction method provides a higher sentence correct rate (SCR), word correct rate (WCR) and word accuracy (WA) than methods that do not incorporate the In/En network.
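Since the abstract leans on Gram-Schmidt normalization as its final feature-conditioning step, here is a hedged, generic sketch of classical Gram-Schmidt orthonormalization (not the authors' code; the vectors below are made up):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise the rows of `vectors` (assumed linearly independent)."""
    basis = []
    for v in np.asarray(vectors, dtype=float):
        for b in basis:
            v = v - np.dot(v, b) * b        # remove the component along each earlier basis vector
        basis.append(v / np.linalg.norm(v))
    return np.array(basis)

V = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)
print(np.round(Q @ Q.T, 6))   # identity matrix: the rows are orthonormal
```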
---
| Title: A Comprehensive Survey on Bengali Phoneme Recognition
Section 1: INTRODUCTION
Description 1: This section provides an overview of phonemes, the importance of Bengali language, and the aim of the survey.
Section 2: Bangla phoneme
Description 2: This section describes the phonetic inventory of the Bengali language, including the number of vowels and consonants.
Section 3: Scope of Automatic Speech Recognition or Phoneme Recognition
Description 3: This section discusses the relevance of phoneme recognition in ASR systems and the challenges associated with Bengali language.
Section 4: PHONEME RECOGNITION METHODS
Description 4: This section details various methods used for phoneme recognition, including hidden Markov models and their limitations in real acoustic conditions.
Section 5: Multilayer neural network
Description 5: This section explains the use of artificial neural networks, specifically multilayer neural networks, for Bengali phoneme recognition.
Section 6: Phonetic Feature Table
Description 6: This section describes the phonetic features used to identify Bengali phonemes and the specifics of their representation.
Section 7: Phonetic Features Enhancement
Description 7: This section covers advanced methods for phonetic feature extraction and enhancement, incorporating multilayer neural networks and other techniques.
Section 8: RESULT AND ANALYSIS
Description 8: This section presents the results from various phoneme recognition methods discussed in the paper, comparing their performance.
Section 9: CONCLUSION
Description 9: This section summarizes the survey, highlighting the advantages of different phoneme recognition methods and provides a brief conclusion. |
Overview of spectrum sensing for cognitive radio | 7 | ---
paper_title: Software radios-survey, critical evaluation and future directions
paper_content:
This paper relates the performance of enabling hardware technologies to software radio requirements, portending a decade of shift from hardware radios toward software intensive approaches. Such approaches require efficient use of computational resources through topological consistency of radio functions and host architectures. This leads to a layered topology oriented design approach encapsulated in a canonical open architecture software radio model. This model underscores challenges in simulation and computer-aided design (CAD) tools for radio engineering. It also provides a unified mathematical framework for quantitative analysis of algorithm structures, host architectures, and system performance for radio engineering CAD environments of the 1990s.
---
paper_title: Power scaling for cognitive radio
paper_content:
In this paper we explore the idea of using cognitive radios to reuse locally unused spectrum for their own transmissions. We impose the constraint that they cannot generate unacceptable levels of interference to licensed systems on the same frequency. Using received SNR as a proxy for distance, we prove that a cognitive radio can vary its transmit power while maintaining a guarantee of service to primary users. We consider the aggregate interference caused by multiple cognitive radios and show that aggregation causes a change in the effective decay rate of the interference. We examine the effects of heterogeneous propagation path loss functions and justify the feasibility of multiple secondary users with dynamic transmit powers. Finally, we prove that the fundamental constraint on a cognitive radio's transmit power is the minimum SNR it can detect and explore the effect of this power cap.
---
paper_title: Cognitive radio: brain-empowered wireless communications
paper_content:
Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: highly reliable communication whenever and wherever needed, and efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
---
paper_title: Cognitive radio in a frequency-planned environment: some basic limits
paper_content:
The objective of this work is to assess some fundamental limits for opportunistic spectrum reuse via cognitive radio in a frequency-planned environment. We present a first order analysis of the signal-to-noise-and-interference situation in a wireless cellular network, and analyze the impact of cognitive users starting to transmit. Two main conclusions emerge from our study. First, obtaining any substantial benefits from opportunistic spatial spectrum reuse in a frequency-planned network without causing substantial interference is going to be very challenging. Second, the cognitive users need to be more sensitive, by orders of magnitude, than the receivers in the primary system, especially if there is significant shadow fading. This latter problem can be alleviated by having cognitive users cooperate, but only if they are separated far apart so that they experience independent shadowing.
---
paper_title: Energy detection of unknown deterministic signals
paper_content:
By using Shannon's sampling formula, the problem of the detection of a deterministic signal in white Gaussian noise, by means of an energy-measuring device, reduces to the consideration of the sum of the squares of statistically independent Gaussian variates. When the signal is absent, the decision statistic has a central chi-square distribution with the number of degrees of freedom equal to twice the time-bandwidth product of the input. When the signal is present, the decision statistic has a noncentral chi-square distribution with the same number of degrees of freedom and a noncentrality parameter λ equal to the ratio of signal energy to two-sided noise spectral density. Since the noncentral chi-square distribution has not been tabulated extensively enough for our purpose, an approximate form was used. This form replaces the noncentral chi-square with a modified chi-square whose degrees of freedom and threshold are determined by the noncentrality parameter and the previous degrees of freedom. Sets of receiver operating characteristic (ROC) curves are drawn for several time-bandwidth products, as well as an extended nomogram of the chi-square cumulative probability which can be used for rapid calculation of false alarm and detection probabilities. Related work in energy detection by J. I. Marcum and E. L Kaplan is discussed.
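A minimal sketch of the energy detector this abstract analyzes, assuming a known noise variance and using the central chi-square null distribution to set the threshold (requires NumPy and SciPy; the block length and false-alarm target are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def energy_detector(samples, noise_var, pfa):
    """Return (decision, statistic, threshold) for a block of complex baseband samples."""
    n = len(samples)
    # Normalised energy: chi-square with 2n degrees of freedom under the noise-only hypothesis.
    statistic = 2.0 * np.sum(np.abs(samples) ** 2) / noise_var
    threshold = chi2.ppf(1.0 - pfa, df=2 * n)     # fixes the false-alarm probability
    return statistic > threshold, statistic, threshold

rng = np.random.default_rng(0)
noise = (rng.standard_normal(100) + 1j * rng.standard_normal(100)) * np.sqrt(0.5)
print(energy_detector(noise, noise_var=1.0, pfa=0.01))   # noise-only: rarely exceeds the threshold
```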
---
paper_title: Cognitive radio in a frequency-planned environment: some basic limits
paper_content:
The objective of this work is to assess some fundamental limits for opportunistic spectrum reuse via cognitive radio in a frequency-planned environment. We present a first order analysis of the signal-to-noise-and-interference situation in a wireless cellular network, and analyze the impact of cognitive users starting to transmit. Two main conclusions emerge from our study. First, obtaining any substantial benefits from opportunistic spatial spectrum reuse in a frequency-planned network without causing substantial interference is going to be very challenging. Second, the cognitive users need to be more sensitive, by orders of magnitude, than the receivers in the primary system, especially if there is significant shadow fading. This latter problem can be alleviated by having cognitive users cooperate, but only if they are separated far apart so that they experience independent shadowing.
---
paper_title: SNR Walls for Signal Detection
paper_content:
This paper considers the detection of the presence/absence of signals in uncertain low SNR environments. Small modeling uncertainties are unavoidable in any practical system and so robustness to them is a fundamental performance metric. The impact of these modeling uncertainties can be quantified by the position of the "SNR wall" below which a detector will fail to be robust, no matter how long it can observe the channel. We propose simple mathematical models for the uncertainty in the noise and fading processes. These are used to show what aspects of the model lead to SNR walls for differing levels of knowledge of the signal to be detected. These results have implications for wireless spectrum regulators. The context is opportunistically sharing spectrum with primary users that must be detected in order to avoid causing harmful interference on a channel. Ideally, a secondary system would be able to detect primaries robustly without having to know much about their signaling strategies. We argue that the tension between primary and secondary users is captured by the technical question of computing the optimal tradeoff between the primary user's capacity and the secondary user's sensing robustness as quantified by the SNR wall. This is an open problem, but we compute this tradeoff for some simple detectors.
---
paper_title: Fundamental limits on detection in low SNR under noise uncertainty
paper_content:
In this paper we consider the problem of detecting whether a frequency band is being used by a known primary user. We derive fundamental bounds on detection performance in low SNR in the presence of noise uncertainty - the noise is assumed to be white, but we know its distribution only to within a particular set. For clarity of analysis, we focus on primary transmissions that are BPSK-modulated random data without any pilot tones or training sequences. The results should all generalize to more general primary transmissions as long as no deterministic component is present. Specifically, we show that for every 'moment detector' there exists an SNR below which detection becomes impossible in the presence of noise uncertainty. In the neighborhood of that SNR wall, we show how the sample complexity of detection approaches infinity. We also show that if our radio has a finite dynamic range (upper and lower limits to the voltages we can quantize), then at low enough SNR, any detector can be rendered useless even under moderate noise uncertainty.
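As a hedged numerical illustration of the abstract's point, the widely quoted radiometer SNR-wall expression from this line of work, SNR_wall = (rho^2 - 1)/rho for a noise power known only to within the factor rho, can be evaluated directly. The uncertainty values below are examples, not figures taken from the paper.

```python
import math

def snr_wall_db(uncertainty_db):
    rho = 10 ** (uncertainty_db / 10.0)             # linear noise-uncertainty factor
    return 10 * math.log10((rho ** 2 - 1) / rho)    # energy detector's robustness limit

for x_db in (0.5, 1.0, 2.0):
    print(f"{x_db} dB noise uncertainty -> SNR wall ~ {snr_wall_db(x_db):.1f} dB")
```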
---
paper_title: Cyclostationarity: Half a century of research
paper_content:
In this paper, a concise survey of the literature on cyclostationarity is presented and includes an extensive bibliography. The literature in all languages, in which a substantial amount of research has been published, is included. Seminal contributions are identified as such. Citations are classified into 22 categories and listed in chronological order. Both stochastic and nonstochastic approaches for signal analysis are treated. In the former, which is the classical one, signals are modelled as realizations of stochastic processes. In the latter, signals are modelled as single functions of time and statistical functions are defined through infinite-time averages instead of ensemble averages. Applications of cyclostationarity in communications, signal processing, and many other research areas are considered.
---
paper_title: Signal interception: performance advantages of cyclic-feature detectors
paper_content:
The problem of detecting the presence of spread-spectrum phase-shift-keyed signals in variable noise and interference backgrounds is considered, and the performances of four detectors are evaluated and compared. The detectors include the optimum radiometer, the optimum modified radiometer that jointly estimates the noise level and detects the signal, and the maximum-SNR spectral-line regenerator for spectral-line frequencies equal to the chip rate and the doubled carrier frequency. It is concluded that the spectral-line regenerators can outperform both types of radiometers by a wide margin. The performance advantages are quantified in terms of receiver operating characteristics for several noise and interference environments and receiver collection times.
---
paper_title: Spectral correlation based signal detection method for spectrum sensing in IEEE 802.22 WRAN systems
paper_content:
In this paper, signal detection methods for spectrum sensing in IEEE 802.22 wireless RAN systems are discussed. As most man-made signals can be treated as cyclostationary random processes, the spectral correlation function is effective for the detection of these signals. In WRAN systems, due to the specific detection environment, the computational complexity of the spectral correlation based method is significantly reduced. Peak detection in the high-SNR environment, together with a contour-figure-based unique-pattern search method in the low-SNR environment, is proposed for primary user signal detection.
---
paper_title: Exploitation of spectral redundancy in cyclostationary signals
paper_content:
It is shown that the cyclostationarity attribute, as it is reflected in the periodicities of (second-order) moments of the signal, can be interpreted in terms of the property that allows generation of spectral lines from the signal by putting it through a (quadratic) nonlinear transformation. The fundamental link between the spectral-line generation property and the statistical property called spectral correlation, which corresponds to the correlation that exists between the random fluctuations of components of the signal residing in distinct spectral bands, is explained. The effects on the spectral-correlation characteristics of some basic signal processing operations, such as filtering, product modulation, and time sampling, are examined. It is shown how to use these results to derive the spectral-correlation characteristics for various types of man-made signals. Some ways of exploiting the inherent spectral redundancy associated with spectral correlation to perform various signal processing tasks involving detection and estimation of highly corrupted man-made signals are described.
---
paper_title: A cyclostationary feature detector
paper_content:
Cyclostationary models for communications signals have been shown in recent years to offer many advantages over stationary models. Stationary models are adequate in many situations, but they cause important features of the signal to be overlooked. One such important feature is the correlation between spectral components that many signals exhibit. Cyclostationary models allow this spectral correlation to be exploited. This paper presents a signal detector that exploits spectral correlation to determine the presence or absence of a cyclostationary signal in noise. The detector's probability of false alarm is analytically derived. Computer simulations verify that the analytical derivation is correct. The detector's receiver operating characteristic curves are determined from the simulation data and the analytical expression for the probability of false alarm.
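A small hedged sketch of the quantity such detectors build on: an estimate of the cyclic autocorrelation at a candidate cycle frequency, which is non-negligible for a cyclostationary signal and near zero for stationary noise or an off-cycle frequency. The BPSK-like test signal and its parameters are illustrative, not taken from the paper.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, tau):
    """Estimate R_x(alpha, tau) = <x[n+tau] x*[n] exp(-j 2 pi alpha n)>, alpha in cycles/sample."""
    n = np.arange(len(x) - tau)
    prod = x[n + tau] * np.conj(x[n])
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=512)
x = np.repeat(symbols, 8) + 0.5 * rng.standard_normal(512 * 8)   # 8 samples per symbol, plus noise
print(abs(cyclic_autocorrelation(x, alpha=1 / 8, tau=2)))        # at the symbol-rate cycle frequency
print(abs(cyclic_autocorrelation(x, alpha=0.137, tau=2)))        # off-cycle frequency: much smaller
```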
---
paper_title: Collaborative Cyclostationary Spectrum Sensing for Cognitive Radio Systems
paper_content:
This paper proposes an energy efficient collaborative cyclostationary spectrum sensing approach for cognitive radio systems. An existing statistical hypothesis test for the presence of cyclostationarity is extended to multiple cyclic frequencies and its asymptotic distributions are established. Collaborative test statistics are proposed for the fusion of local test statistics of the secondary users, and a censoring technique in which only informative test statistics are transmitted to the fusion center (FC) during the collaborative detection is further proposed for improving energy efficiency in mobile applications. Moreover, a technique for numerical approximation of the asymptotic distribution of the censored FC test statistic is proposed. The proposed tests are nonparametric in the sense that no assumptions on data or noise distributions are required. In addition, the tests allow dichotomizing between the desired signal and interference. Simulation experiments are provided that show the benefits of the proposed cyclostationary approach compared to energy detection, the importance of collaboration among spatially displaced secondary users for overcoming shadowing and fading effects, as well as the reliable performance of the proposed algorithms even in very low signal-to-noise ratio (SNR) regimes and under strict communication rate constraints for collaboration overhead.
---
paper_title: Statistical tests for presence of cyclostationarity
paper_content:
The presence of kth-order cyclostationarity is defined in terms of nonvanishing cyclic-cumulants or polyspectra. Relying upon the asymptotic normality and consistency of kth-order cyclic statistics, asymptotically optimal χ² tests are developed to detect the presence of cycles in the kth-order cyclic cumulants or polyspectra, without assuming any specific distribution on the data. Constant false alarm rate tests are derived in both time- and frequency-domain and yield consistent estimates of possible cycles present in the kth-order cyclic statistics. Explicit algorithms for k ≤ 4 are discussed. Existing approaches are rather empirical and deal only with the k ≤ 2 case. Simulation results are presented to confirm the performance of the given tests.
---
paper_title: Cyclostationarity based air interface recognition for software radio systems
paper_content:
Reconfigurable software radio equipment is seen as the next evolutionary step in mobile communications. One of the most important properties of a software radio terminal is that it is capable of using a wide range of air interface standards, providing a seamless interoperability between different air interface standards and an enhanced roaming capability. This multimode operation has to be supported by a number of key functionalities, one of which is the air interface recognition. A software radio terminal has to be able to detect, recognize and monitor the air interfaces available in the frequency environment. In our work, we propose exploiting the distinct cyclostationary properties of signals from different air interfaces as features for air interface recognition.
---
paper_title: Spectrum Sensing Using Cyclostationary Properties and Application to IEEE 802.22 WRAN
paper_content:
Spectrum sensing in a very low SNR environment (less than -20 dB) is considered in this paper. We make use of the noise rejection property of the cyclostationary spectrum. The sensing algorithms are based on measurement of the cyclic spectrum of the received signals. The statistics of the cyclic spectrum of the stationary white Gaussian process are fully analyzed for three measurement methods of the cyclic spectrum. The application to IEEE 802.22 WRAN is presented and the probability of false alarm is analytically derived. The operating characteristic curves for the sensing algorithms are determined from computer simulations using ATSC A/74 DTV signal captures as a test database.
---
paper_title: Signal interception: a unifying theoretical framework for feature detection
paper_content:
The unifying framework of the spectral-correlation theory of cyclostationary signals is used to present a broad treatment of weak, random signal detection for interception purposes. The relationships among a variety of previously proposed ad hoc detectors, optimum detectors, and newly proposed detectors are established. The spectral-correlation-plane approach to the interception problem is put forth as especially promising for detection, classification, and estimation in particularly difficult environments involving unknown and changing noise levels and interference activity. A fundamental drawback of the popular radiometric methods in such environments is explained.
---
paper_title: Autocorrelation-Based Decentralized Sequential Detection of OFDM Signals in Cognitive Radios
paper_content:
This paper introduces a simple and computationally efficient spectrum sensing scheme for an Orthogonal Frequency Division Multiplexing (OFDM) based primary user signal, using its autocorrelation coefficient. Further, it is shown that the log likelihood ratio test (LLRT) statistic is the maximum likelihood estimate of the autocorrelation coefficient in the low signal-to-noise ratio (SNR) regime. Performance of the local detector is studied for the additive white Gaussian noise (AWGN) and multipath channels using theoretical analysis. The obtained results are verified in simulation. The performance of the local detector in the face of shadowing is studied by simulations. A sequential detection (SD) scheme where many secondary users cooperate to detect the same primary user is proposed. User cooperation provides diversity gains and facilitates the use of simpler local detectors. The sequential detection reduces the delay and the amount of data needed in identification of the underutilized spectrum. The decision statistics from individual detectors are combined at the fusion center (FC). The statistical properties of the decision statistics are established. The performance of the scheme is studied through theory and validated by simulations. A comparison of the SD scheme with the Neyman-Pearson fixed sample size (FSS) test for the same false alarm and missed detection probabilities is also carried out.
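A hedged sketch of the underlying idea: the cyclic prefix makes samples one FFT-length apart identical, so a normalized autocorrelation at that lag separates an OFDM signal from white noise. The OFDM parameters below are illustrative, not the paper's.

```python
import numpy as np

def cp_autocorrelation(x, n_fft):
    """Normalised magnitude of the autocorrelation of x at lag n_fft (cyclic-prefix lag)."""
    lag = n_fft
    num = np.sum(x[lag:] * np.conj(x[:-lag]))
    den = np.sum(np.abs(x) ** 2)
    return float(np.abs(num) / den)

rng = np.random.default_rng(3)
n_fft, n_cp, n_sym = 64, 16, 40
symbols = []
for _ in range(n_sym):
    freq = (rng.standard_normal(n_fft) + 1j * rng.standard_normal(n_fft)) / np.sqrt(2)
    time = np.fft.ifft(freq) * np.sqrt(n_fft)
    symbols.append(np.concatenate([time[-n_cp:], time]))    # prepend the cyclic prefix
ofdm = np.concatenate(symbols)
noise = (rng.standard_normal(len(ofdm)) + 1j * rng.standard_normal(len(ofdm))) / np.sqrt(2)

print(cp_autocorrelation(ofdm + noise, n_fft))   # noticeably above the noise-only value at 0 dB SNR
print(cp_autocorrelation(noise, n_fft))          # near zero for noise alone
```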
---
paper_title: Significance Test for Sphericity of a Normal $n$-Variate Distribution
paper_content:
This classical paper by Mauchly develops a likelihood-ratio significance test of the hypothesis that a normal n-variate distribution is spherical, i.e., that its covariance matrix is proportional to the identity matrix. The resulting statistic, now widely known as Mauchly's test of sphericity, compares the determinant of the sample covariance matrix with a power of its normalized trace, and it underlies many later covariance-structure tests.
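A hedged sketch of the classical sphericity statistic W = det(S) / (tr(S)/p)^p for a p-variate sample covariance S: values near 1 are consistent with sphericity (white, noise-only data) and values well below 1 indicate a structured, signal-bearing covariance. The data below are synthetic examples, and the exact null distribution and threshold are not implemented here.

```python
import numpy as np

def sphericity_statistic(X):
    """X: n x p data matrix (rows are observations); returns a Mauchly-type W in (0, 1]."""
    p = X.shape[1]
    S = np.cov(X, rowvar=False)
    return float(np.linalg.det(S) / (np.trace(S) / p) ** p)

rng = np.random.default_rng(4)
white = rng.standard_normal((500, 4))
coloured = white @ np.diag([2.0, 1.0, 0.5, 0.25])   # unequal variances break sphericity
print(sphericity_statistic(white))                  # close to 1
print(sphericity_statistic(coloured))               # well below 1
```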
---
paper_title: GLRT-Based Spectrum Sensing for Cognitive Radio
paper_content:
In this paper, we propose several spectrum sensing methods designed using the generalized likelihood ratio test (GLRT) paradigm, for application in a cognitive radio network. The proposed techniques utilize the eigenvalues of the sample covariance matrix of the received signal vector, taking advantage of the fact that in practice, the primary signal in a cognitive radio environment will either occupy a subspace of dimension strictly smaller than the dimension of the observation space, or have a spectrum that is non-white. We show that by making various assumptions on the availability of side information such as noise variance and signal space dimension, several feasible algorithms result which all outperform the standard energy detector.
---
paper_title: Exploitation of spectral redundancy in cyclostationary signals
paper_content:
It is shown that the cyclostationarity attribute, as it is reflected in the periodicities of (second-order) moments of the signal, can be interpreted in terms of the property that allows generation of spectral lines from the signal by putting it through a (quadratic) nonlinear transformation. The fundamental link between the spectral-line generation property and the statistical property called spectral correlation, which corresponds to the correlation that exists between the random fluctuations of components of the signal residing in distinct spectral bands, is explained. The effects on the spectral-correlation characteristics of some basic signal processing operations, such as filtering, product modulation, and time sampling, are examined. It is shown how to use these results to derive the spectral-correlation characteristics for various types of man-made signals. Some ways of exploiting the inherent spectral redundancy associated with spectral correlation to perform various signal processing tasks involving detection and estimation of highly corrupted man-made signals are described.
---
paper_title: Multiple antenna spectrum sensing in cognitive radios
paper_content:
In this paper, we consider the problem of spectrum sensing using multiple antennas in cognitive radios, when the noise and the primary user signal are assumed to be independent complex zero-mean Gaussian random signals. The optimal multiple antenna spectrum sensing detector needs to know the channel gains, noise variance, and primary user signal variance. In practice some or all of these parameters may be unknown, so we derive the generalized likelihood ratio (GLR) detectors under these circumstances. The proposed GLR detector, in which all the parameters are unknown, is a blind and invariant detector with a low computational complexity. We also analytically compute the missed detection and false alarm probabilities for the proposed GLR detectors. The simulation results illustrate the available trade-off in using multiple antenna techniques for spectrum sensing and the robustness of the proposed GLR detectors compared to the traditional energy detector when there is some uncertainty in the given noise variance.
---
paper_title: SNR Walls for Signal Detection
paper_content:
This paper considers the detection of the presence/absence of signals in uncertain low SNR environments. Small modeling uncertainties are unavoidable in any practical system and so robustness to them is a fundamental performance metric. The impact of these modeling uncertainties can be quantified by the position of the "SNR wall" below which a detector will fail to be robust, no matter how long it can observe the channel. We propose simple mathematical models for the uncertainty in the noise and fading processes. These are used to show what aspects of the model lead to SNR walls for differing levels of knowledge of the signal to be detected. These results have implications for wireless spectrum regulators. The context is opportunistically sharing spectrum with primary users that must be detected in order to avoid causing harmful interference on a channel. Ideally, a secondary system would be able to detect primaries robustly without having to know much about their signaling strategies. We argue that the tension between primary and secondary users is captured by the technical question of computing the optimal tradeoff between the primary user's capacity and the secondary user's sensing robustness as quantified by the SNR wall. This is an open problem, but we compute this tradeoff for some simple detectors.
---
paper_title: Performance of Statistical Tests for Single-Source Detection Using Random Matrix Theory
paper_content:
This paper introduces a unified framework for the detection of a source with a sensor array in the context where the noise variance and the channel between the source and the sensors are unknown at the receiver. The Generalized Maximum Likelihood Test is studied and yields the analysis of the ratio between the maximum eigenvalue of the sampled covariance matrix and its normalized trace. Using recent results of random matrix theory, a practical way to evaluate the threshold and the $p$-value of the test is provided in the asymptotic regime where the number $K$ of sensors and the number $N$ of observations per sensor are large but have the same order of magnitude. The theoretical performance of the test is then analyzed in terms of Receiver Operating Characteristic (ROC) curve. It is in particular proved that both Type I and Type II error probabilities converge to zero exponentially as the dimensions increase at the same rate, and closed-form expressions are provided for the error exponents. These theoretical results rely on a precise description of the large deviations of the largest eigenvalue of spiked random matrix models, and establish that the presented test asymptotically outperforms the popular test based on the condition number of the sampled covariance matrix.
---
paper_title: Non-Parametric Detection of the Number of Signals: Hypothesis Testing and Random Matrix Theory
paper_content:
Detection of the number of signals embedded in noise is a fundamental problem in signal and array processing. This paper focuses on the non-parametric setting where no knowledge of the array manifold is assumed. First, we present a detailed statistical analysis of this problem, including an analysis of the signal strength required for detection with high probability, and the form of the optimal detection test under certain conditions where such a test exists. Second, combining this analysis with recent results from random matrix theory, we present a new algorithm for detection of the number of sources via a sequence of hypothesis tests. We theoretically analyze the consistency and detection performance of the proposed algorithm, showing its superiority compared to the standard minimum description length (MDL)-based estimator. A series of simulations confirm our theoretical analysis.
---
paper_title: Eigenvalue-based spectrum sensing algorithms for cognitive radio
paper_content:
Spectrum sensing is a fundamental component in a cognitive radio. In this paper, we propose new sensing methods based on the eigenvalues of the covariance matrix of signals received at the secondary users. In particular, two sensing algorithms are suggested: one is based on the ratio of the maximum eigenvalue to the minimum eigenvalue; the other is based on the ratio of the average eigenvalue to the minimum eigenvalue. Using recent results from random matrix theory (RMT), we quantify the distributions of these ratios and derive the probabilities of false alarm and probabilities of detection for the proposed algorithms. We also find the thresholds of the methods for a given probability of false alarm. The proposed methods overcome the noise uncertainty problem, and can even perform better than the ideal energy detection when the signals to be detected are highly correlated. The methods can be used for various signal detection applications without requiring the knowledge of signal, channel and noise power. Simulations based on randomly generated signals, wireless microphone signals and captured ATSC DTV signals are presented to verify the effectiveness of the proposed methods.
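A hedged sketch of the first statistic (maximum-to-minimum eigenvalue ratio) computed from a windowed sample covariance. The smoothing factor and test signals are illustrative, and the decision threshold that the paper derives from random matrix theory is not reproduced here.

```python
import numpy as np

def mme_statistic(x, smoothing_factor=8):
    """Ratio of largest to smallest eigenvalue of an L x L sample covariance matrix."""
    L = smoothing_factor
    n = len(x) - L + 1
    X = np.array([x[i:i + L] for i in range(n)]).T    # L x n matrix of sliding windows
    R = (X @ X.conj().T) / n                          # sample covariance
    eigvals = np.linalg.eigvalsh(R)                   # ascending order
    return float(eigvals[-1] / eigvals[0])

rng = np.random.default_rng(2)
noise = rng.standard_normal(4000)
signal = np.convolve(rng.standard_normal(4000), np.ones(4) / 2.0, mode="same")  # correlated "signal"
print(mme_statistic(noise))                                      # near 1 for white noise
print(mme_statistic(signal + 0.5 * rng.standard_normal(4000)))   # much larger when a signal is present
```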
---
paper_title: Signal interception: performance advantages of cyclic-feature detectors
paper_content:
The problem of detecting the presence of spread-spectrum phase-shift-keyed signals in variable noise and interference backgrounds is considered, and the performances of four detectors are evaluated and compared. The detectors include the optimum radiometer, the optimum modified radiometer that jointly estimates the noise level and detects the signal, and the maximum-SNR spectral-line regenerator for spectral-line frequencies equal to the chip rate and the doubled carrier frequency. It is concluded that the spectral-line regenerators can outperform both types of radiometers by a wide margin. The performance advantages are quantified in terms of receiver operating characteristics for several noise and interference environments and receiver collection times.
---
paper_title: Cooperative Sensing among Cognitive Radios
paper_content:
Cognitive Radios have been advanced as a technology for the opportunistic use of under-utilized spectrum since they are able to sense the spectrum and use frequency bands if no Primary user is detected. However, the required sensitivity is very demanding since any individual radio might face a deep fade. We propose light-weight cooperation in sensing based on hard decisions to mitigate the sensitivity requirements on individual radios. We show that the "link budget" that system designers have to reserve for fading is a significant function of the required probability of detection. Even a few cooperating users (~10-20) facing independent fades are enough to achieve practical threshold levels by drastically reducing individual detection requirements. Hard decisions perform almost as well as soft decisions in achieving these gains. Cooperative gains in an environment where shadowing is correlated are limited by the cooperation footprint (area in which users cooperate). In essence, a few independent users are more robust than many correlated users. Unfortunately, cooperative gain is very sensitive to adversarial/failing Cognitive Radios. Radios that fail in a known way (always report the presence/absence of a Primary user) can be compensated for by censoring them. On the other hand, radios that fail in unmodeled ways, or that may be malicious, introduce a bound on achievable sensitivity reductions. As a rule of thumb, if we believe that 1/N users can fail in an unknown way, then the cooperation gains are limited to what is possible with N trusted users.
---
paper_title: Soft Combination and Detection for Cooperative Spectrum Sensing in Cognitive Radio Networks
paper_content:
In this paper, we consider cooperative spectrum sensing based on energy detection in cognitive radio networks. Soft combination of the observed energy values from different cognitive radio users is investigated. Maximal ratio combination (MRC) is theoretically proved to be nearly optimal in the low signal-to-noise ratio (SNR) region, a usual scenario in the context of cognitive radio. Both MRC and equal gain combination (EGC) exhibit significant performance improvement over conventional hard combination. Encouraged by the performance gain of soft combination, we propose a new softened hard combination scheme with two-bit overhead for each user and achieve a good tradeoff between detection performance and complexity. While traditionally energy detection suffers from an SNR wall caused by noise power uncertainty, it is shown in this paper that an SNR wall reduction can be achieved by employing cooperation among independent cognitive radio users.
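A minimal hedged sketch of soft combining at the fusion centre: equal-gain weighting versus SNR-proportional weighting, a common low-SNR approximation to MRC. The weighting rule and the example numbers below are illustrative and are not reproduced from the paper.

```python
import numpy as np

def combine(energies, snrs=None):
    """Weighted sum of per-user energy statistics: EGC if snrs is None, MRC-style otherwise."""
    energies = np.asarray(energies, dtype=float)
    weights = np.ones_like(energies) if snrs is None else np.asarray(snrs, dtype=float)
    weights = weights / np.linalg.norm(weights)
    return float(weights @ energies)

user_energies = [105.2, 98.7, 140.9]       # illustrative energy values from three users
user_snrs = [0.05, 0.02, 0.30]             # illustrative linear SNRs; the strong user dominates MRC
print(combine(user_energies))              # equal gain combining
print(combine(user_energies, user_snrs))   # MRC-style weighting
```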
---
paper_title: Decentralized Detection With Censoring Sensors
paper_content:
In the censoring approach to decentralized detection, sensors transmit real-valued functions of their observations when "informative" and save energy by not transmitting otherwise. We address several practical issues in the design of censoring sensor networks including the joint dependence of sensor decision rules, randomization of decision strategies, and partially known distributions. In canonical decentralized detection problems involving quantization of sensor observations, joint optimization of the sensor quantizers is necessary. We show that under a send/no-send constraint on each sensor and when the fusion center has its own observations, the sensor decision rules can be determined independently. In terms of design, and particularly for adaptive systems, the independence of sensor decision rules implies that minimal communication is required. We address the uncertainty in the distribution of the observations typically encountered in practice by determining the optimal sensor decision rules and fusion rule for three formulations: a robust formulation, generalized likelihood ratio tests, and a locally optimum formulation. Examples are provided to illustrate the independence of sensor decision rules, and to evaluate the partially known formulations.
---
paper_title: Censoring sensors: a low-communication-rate scheme for distributed detection
paper_content:
We consider a new scheme for distributed detection based on a "censoring" or "send/no-send" idea. The sensors are assumed to "censor" their observations so that each sensor sends to the fusion center only "informative" observations, and leaves those deemed "uninformative" untransmitted. The main result of this work is that with conditionally independent sensor data and under a communication rate constraint, in order to minimize the probability of error, transmission should occur if and only if the local likelihood ratio value observed by the sensor does not fall in a certain single interval. Similar results are derived from Neyman-Pearson and distance-measure viewpoints. We also discuss simplifications for the most interesting case, in which the fusion center threshold is high and the communication constraint is severe. We compare censoring with the more common binary-transmission framework and observe its considerable decrease in communication needs. Finally, we explore the use of feedback to achieve optimal performance with very little communication.
---
paper_title: Energy-efficient detection in sensor networks
paper_content:
There is significant interest in battery- or solar-powered sensor networks to be used for detection in a wide variety of applications, from surveillance and security to biological applications. Severe energy and bandwidth constraints at each sensor node demand system-level approaches to design that consider detection performance jointly with system-resource constraints. Our approach is to formulate detection problems with constraints on the expected cost arising from transmission (sensor nodes to a fusion center) and measurement (at each sensor node) to address some of the system-level costs in a sensor network. Under a send/no-send scenario for transmission, we find that randomization over the choice of measurement and choice of send rate achieves the best performance (in a Bayesian, Neyman-Pearson, and Ali-Silvey sense) for a given resource constraint. To facilitate design, we describe some special cases where the joint optimization over the sensor nodes is eliminated.
---
paper_title: Energy-efficient spectrum sensing for cognitive sensor networks
paper_content:
We consider a combined sleeping and censoring scheme for energy-efficient spectrum sensing in cognitive sensor networks. We analyze the detection performance of this scheme by theoretically deriving the global probabilities of detection and false-alarm. Our goal is to minimize the energy consumption incurred in distributed sensing, given constraints on the global probabilities of detection and false-alarm, by optimally designing the sleeping rate and the censoring thresholds. Using specific transceiver models for sensors based on IEEE 802.15.4/ZigBee, we show the energy savings achieved under an optimum choice of the design parameters.
---
paper_title: Detection with distributed sensors
paper_content:
The extension of classical detection theory, based on the theory of statistical hypothesis testing, to the case of distributed sensors is discussed. The development is based on the formulation of a decentralized or team hypothesis testing problem. Theoretical results concerning the form of the optimal decision rule, examples, application to data fusion, and open problems are presented.
---
paper_title: Energy-efficient distributed spectrum sensing with convex optimization
paper_content:
We consider the problem of distributed spectrum sensing in cognitive radio networks with a central fusion center, from an energy efficiency viewpoint. In our scheme, each cognitive radio adopts a combination of sleeping and censoring to obtain a sensing result based on energy detection, while the fusion center combines all the sensing results using an OR decision rule. Our goal is to minimize the network energy consumption, given constraints on the global probabilities of detection and false-alarm. We show that the underlying optimization problem can be solved as a convex optimization problem. We then show the energy efficiency of our scheme via simulations using a ZigBee transceiver model.
---
paper_title: Cooperative Spectrum Sensing for Cognitive Radios under Bandwidth Constraints
paper_content:
In cognitive radio systems, cooperative spectrum sensing is conducted among the cognitive users so as to detect the primary user accurately. However, when the number of cognitive users tends to be very large, the bandwidth for reporting their sensing results to the common receiver will be very huge. In this paper, the authors employ a censoring method with quantization to decrease the average number of sensing bits to the common receiver. By censoring the collected local observations, only the users with enough information will send their local one bit decisions (0 or 1) to the common receiver. The performance of spectrum sensing is investigated for both perfect and imperfect reporting channels. Numerical results show that the average number of sensing bits decreases greatly at the expense of a little sensing performance loss.
---
paper_title: Cognitive radio: brain-empowered wireless communications
paper_content:
Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: highly reliable communication whenever and wherever needed, and efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
---
paper_title: Filter Bank Spectrum Sensing for Cognitive Radios
paper_content:
The primary task in any cognitive radio (CR) network is to dynamically explore the radio spectrum and reliably determine portion(s) of the frequency band that may be used for the communication link(s). Accordingly, each CR node in the network has to be equipped with a spectrum analyzer. In this paper, we propose filter banks as a tool for spectrum sensing in CR systems. Various choices of filter banks are suggested and their performance is evaluated theoretically and through numerical examples. Moreover, the proposed spectrum analyzer is contrasted with Thomson's multitaper (MT) method - a method that in the recent literature has been recognized as the best choice for spectrum sensing in CR systems. A novel derivation of the MT method that facilitates our comparisons, as well as reveals an important aspect of the MT method that has been less emphasized in the recent literature, is also presented.
---
paper_title: Cognitive Radio Sensing Information-Theoretic Criteria Based
paper_content:
In this paper, we explore information-theoretic criteria, namely Akaike's Information Criterion (AIC) and the Minimum Description Length (MDL), as a tool to sense vacant sub-bands over the spectrum bandwidth. The proposed technique is motivated by the fact that an idle sub-band (normal process) presents a number of independent eigenvectors appreciably larger than an occupied sub-band (non-normal process). It turns out that, based on the number of independent eigenvectors of a given covariance matrix of the observed signal, one can infer the nature of the sensed sub-band. Our theoretical result as well as the empirical results are first applied to an experimental measurement campaign conducted at the Eurecom PLATON platform. We then apply our method to an IEEE 802.11b Wireless Fidelity (Wi-Fi) signal in order to analyze the robustness of the proposed approach in the presence of increased levels of noise. We argue that the proposed subspace-based techniques give interesting results in terms of sensing the white space in the spectrum.
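A hedged sketch of a model-order rule in the spirit of this abstract: the standard Wax-Kailath form of MDL applied to sample-covariance eigenvalues, which returns 0 for an idle (white) sub-band and a positive source count for an occupied one. The paper's exact AIC/MDL formulation and its Wi-Fi measurement setup may differ; the eigenvalues below are made-up examples.

```python
import numpy as np

def mdl_order(eigvals, n_snapshots):
    """Estimate the number of dominant eigenvalues via the Wax-Kailath MDL criterion."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]   # descending
    p = len(lam)
    costs = []
    for k in range(p):
        tail = lam[k:]
        geo, arith = np.exp(np.mean(np.log(tail))), np.mean(tail)
        costs.append(-n_snapshots * (p - k) * np.log(geo / arith)
                     + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(costs))

print(mdl_order([9.0, 6.5, 1.1, 1.05, 1.0, 0.95], n_snapshots=200))    # occupied sub-band -> 2
print(mdl_order([1.05, 1.02, 1.01, 1.0, 0.98, 0.97], n_snapshots=200)) # idle sub-band -> 0
```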
---
paper_title: Spectrum Sensing for Cognitive Radio
paper_content:
Spectrum sensing is the very task upon which the entire operation of cognitive radio rests. For cognitive radio to fulfill the potential it offers to solve the spectrum underutilization problem and do so in a reliable and computationally feasible manner, we require a spectrum sensor that detects spectrum holes (i.e., underutilized subbands of the radio spectrum), provides high spectral-resolution capability, estimates the average power in each subband of the spectrum, and identifies the unknown directions of interfering signals. Cyclostationarity is another desirable property that could be used for signal detection and classification. The multitaper method (MTM) for nonparametric spectral estimation accomplishes these tasks accurately, effectively, robustly, and in a computationally feasible manner. The objectives of this paper are to present: 1) tutorial exposition of the MTM, which is expandable to perform space-time processing and time-frequency analysis; 2) cyclostationarity, viewed from the Loeve and Fourier perspectives; and 3) experimental results, using Advanced Television Systems Committee digital television and generic land mobile radio signals, followed by a discussion of the effects of Rayleigh fading.
---
| Title: Overview of Spectrum Sensing for Cognitive Radio
Section 1: INTRODUCTION
Description 1: Discuss the concept of spectrum sensing in cognitive radio, its importance, and the motivation behind it.
Section 2: MODEL
Description 2: Describe the preliminary model for signal detection, outlining the assumptions and hypotheses.
Section 3: ENERGY DETECTION
Description 3: Explain the energy detection technique, its performance, and limitations in cognitive radio.
Section 4: FUNDAMENTAL LIMITS ON DETECTION
Description 4: Discuss the fundamental limits of detection, including the impact of low SNR and imperfect noise variance knowledge.
Section 5: FEATURE DETECTION
Description 5: Present various detectors that utilize known features of the signal for improved performance.
Section 6: COOPERATIVE DETECTION
Description 6: Describe cooperative sensing techniques, including both soft and hard combining methods, and their impact on detection performance.
Section 7: CONCLUDING REMARKS
Description 7: Provide concluding remarks summarizing key points and mentioning areas not covered in the survey. |
Graphene-Based Materials for Biosensors: A Review | 13 | ---
paper_title: Prospects and Challenges of Graphene in Biomedical Applications
paper_content:
Graphene materials have entered a phase of maturity in their development that is characterized by their explorative utilization in various types of applications and fields from electronics to biomedicine. Herein, we describe the recent advances made with graphene-related materials in the biomedical field and the challenges facing these exciting new tools both in terms of biological activity and toxicological profiling in vitro and in vivo. Graphene materials today have mainly been explored as components of biosensors and for construction of matrices in tissue engineering. Their antimicrobial activity and their capacity to act as drug delivery platforms have also been reported, however, not as coherently. This report will attempt to offer some perspective as to which areas of biomedical applications can expect graphene-related materials to constitute a tool offering improved functionality and previously unavailable options.
---
paper_title: Biosensors: sense and sensibility.
paper_content:
This review is based on the Theophilus Redwood Medal and Award lectures, delivered to Royal Society of Chemistry meetings in the UK and Ireland in 2012, and presents a personal overview of the field of biosensors. The biosensors industry is now worth billions of United States dollars, the topic attracts the attention of national initiatives across the world and tens of thousands of papers have been published in the area. This plethora of information is condensed into a concise account of the key achievements to date. The reasons for success are examined, some of the more exciting emerging technologies are highlighted and the author speculates on the importance of biosensors as a ubiquitous technology of the future for health and the maintenance of wellbeing.
---
paper_title: A fluorescence turn-on biosensor based on graphene quantum dots (GQDs) and molybdenum disulfide (MoS2) nanosheets for epithelial cell adhesion molecule (EpCAM) detection
paper_content:
This paper presents a “turn-on” fluorescence biosensor based on graphene quantum dots (GQDs) and molybdenum disulfide (MoS2) nanosheets for rapid and sensitive detection of epithelial cell adhesion molecule (EpCAM). PEGylated GQDs were used as donor molecules, which could not only largely increase emission intensity but also prevent non-specific adsorption of PEGylated GQD on the MoS2 surface. The sensing platform was realized by adsorption of PEGylated GQD labelled EpCAM aptamer onto the MoS2 surface via van der Waals force. The fluorescence signal of GQD was then quenched by MoS2 nanosheets via the fluorescence resonance energy transfer (FRET) mechanism. In the presence of EpCAM protein, the stronger specific affinity interaction between aptamer and EpCAM protein could detach GQD labelled EpCAM aptamer from MoS2 nanosheets, leading to the restoration of fluorescence intensity. By monitoring the change of fluorescence signal, the target EpCAM protein could be detected sensitively and selectively with a linear detection range from 3 nM to 54 nM and limit of detection (LOD) around 450 pM. In addition, this nanobiosensor has been successfully used for EpCAM-expressed breast cancer MCF-7 cell detection.
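To make the reported figures of merit concrete, here is a hedged sketch of how a linear calibration curve yields a limit of detection via the common LOD ≈ 3·sigma_blank/slope rule. The calibration points and blank noise below are invented for illustration and are not the paper's data (the paper itself reports a 3-54 nM linear range and an LOD around 450 pM).

```python
import numpy as np

conc_nM = np.array([3.0, 10.0, 20.0, 30.0, 40.0, 54.0])    # hypothetical calibration concentrations
signal = np.array([12.0, 33.0, 64.0, 95.0, 124.0, 168.0])  # hypothetical recovered fluorescence
slope, intercept = np.polyfit(conc_nM, signal, 1)           # linear calibration fit
sigma_blank = 0.45                                          # hypothetical blank standard deviation
lod_nM = 3 * sigma_blank / slope                            # common 3-sigma detection-limit rule
print(round(slope, 2), round(lod_nM, 3))
```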
---
paper_title: A graphene-based electrochemical device with thermoresponsive microneedles for diabetes monitoring and therapy
paper_content:
Gold-doped graphene, combined with a serpentine-shaped bilayer of gold mesh and polymeric microneedles, forms a wearable patch for sweat-based diabetes monitoring and feedback therapy.
---
paper_title: 3D hydrogel scaffold doped with 2D graphene materials for biosensors and bioelectronics.
paper_content:
Hydrogels consisting of three-dimensional (3D) polymeric networks have found a wide range of applications in biotechnology due to their large water capacity, high biocompatibility, and facile functional versatility. The hydrogels with stimulus-responsive swelling properties have been particularly instrumental to realizing signal transduction in biosensors and bioelectronics. Graphenes are two-dimensional (2D) nanomaterials with unprecedented physical, optical, and electronic properties and have also found many applications in biosensors and bioelectronics. These two classes of materials present complementary strengths and limitations which, when effectively coupled, can result in significant synergism in their electrical, mechanical, and biocompatible properties. This report reviews recent advances made with hydrogel and graphene materials for the development of high-performance bioelectronics devices. The report focuses on the interesting intersection of these materials wherein 2D graphenes are hybridized with 3D hydrogels to develop the next generation biosensors and bioelectronics.
---
paper_title: When biomolecules meet graphene: from molecular level interactions to material design and applications
paper_content:
Graphene-based materials have attracted increasing attention due to their atomically-thick two-dimensional structures, high conductivity, excellent mechanical properties, and large specific surface areas. The combination of biomolecules with graphene-based materials offers a promising method to fabricate novel graphene–biomolecule hybrid nanomaterials with unique functions in biology, medicine, nanotechnology, and materials science. In this review, we focus on a summarization of the recent studies in functionalizing graphene-based materials using different biomolecules, such as DNA, peptides, proteins, enzymes, carbohydrates, and viruses. The different interactions between graphene and biomolecules at the molecular level are demonstrated and discussed in detail. In addition, the potential applications of the created graphene–biomolecule nanohybrids in drug delivery, cancer treatment, tissue engineering, biosensors, bioimaging, energy materials, and other nanotechnological applications are presented. This review will be helpful to know the modification of graphene with biomolecules, understand the interactions between graphene and biomolecules at the molecular level, and design functional graphene-based nanomaterials with unique properties for various applications.
---
paper_title: Wearable smart sensor systems integrated on soft contact lenses for wireless ocular diagnostics
paper_content:
Wearable electronics have been utilized for a number of applications including for ocular use, although their use has been limited to a single function. Here, the authors developed a multifunctional contact lens with wireless electronics for measurement of glucose and intraocular pressure.
---
paper_title: An Overview of the Latest Graphene-Based Sensors for Glucose Detection: the Effects of Graphene Defects
paper_content:
In this review, we analyze several types of graphene-based sensors for glucose detection with respect to their preparation, properties and efficiency in electro-chemical processes. Graphene may display different types of defects, which play a role in the electron transfer processes. Oxygenated groups on the edges of graphene planes reduce the graphene in-plane conductivity, but may enhance the heterogeneous electron/proton transfer constant. Other positive effects of defects are related to the shortening of the distance between active centers and electrodes upon enzyme or protein immobilization. However, though by different mechanisms, all types of graphene enhance the electrochemical response at the electrode.
---
paper_title: Recent Progress in Nanomaterial-Based Electrochemical Biosensors for Cancer Biomarkers: A Review
paper_content:
This article reviews recent progress in the development of nanomaterial-based electrochemical biosensors for cancer biomarkers. Because of their high electrical conductivity, high affinity to biomolecules, and high surface area-to-weight ratios, nanomaterials, including metal nanoparticles, carbon nanotubes, and graphene, have been used for fabricating electrochemical biosensors. Electrodes are often coated with nanomaterials to increase the effective surface area of the electrodes and immobilize a large number of biomolecules such as enzymes and antibodies. Alternatively, nanomaterials are used as signaling labels for increasing the output signals of cancer biomarker sensors, in which nanomaterials are conjugated with secondary antibodies and redox compounds. According to this strategy, a variety of biosensors have been developed for detecting cancer biomarkers. Recent studies show that using nanomaterials is highly advantageous in preparing high-performance biosensors for detecting lower levels of cancer biomarkers. This review focuses mainly on the protocols for using nanomaterials to construct cancer biomarker sensors and the performance characteristics of the sensors. Recent trends in the development of cancer biomarker sensors are discussed according to the nanomaterials used.
---
paper_title: Chemical reduction of graphene oxide: a synthetic chemistry viewpoint
paper_content:
The chemical reduction of graphene oxide is a promising route towards the large scale production of graphene for commercial applications. The current state-of-the-art in graphene oxide reduction, consisting of more than 50 types of reducing agent, will be reviewed from a synthetic chemistry point of view. Emphasis is placed on the techniques, reaction mechanisms and the quality of the produced graphene. The reducing agents are reviewed under two major categories: (i) those which function according to well-supported mechanisms and (ii) those which function according to proposed mechanisms based on knowledge of organic chemistry. This review will serve as a valuable platform to understand the efficiency of these reducing agents for the reduction of graphene oxide.
---
paper_title: Market Analysis of Biosensors for Food Safety
paper_content:
This paper is presented as an overview of the pathogen detection industry. The review includes pathogen detection markets and their prospects for the future. Potential markets include the medical, military, food, and environmental industries. Those industries combined have a market size of $563 million for pathogen detecting biosensors and are expected to grow at a compounded annual growth rate (CAGR) of 4.5%. The food market is further segmented into different food product industries. The overall food pathogen testing market is expected to grow to $192 million and 34 million tests by 2005. The trend in pathogen testing emphasizes the need to commercialize biosensors for the food safety industry as legislation creates new standards for microbial monitoring. With quicker detection time and reusable features, biosensors will be important to those interested in real time diagnostics of disease causing pathogens. As the world becomes more concerned with a safe food and water supply, the demand for rapid detecting biosensors only increases.
---
paper_title: Direct Electrochemistry of Glucose Oxidase and Biosensing for Glucose Based on Graphene
paper_content:
We first reported that polyvinylpyrrolidone-protected graphene was dispersed well in water and had good electrochemical reduction toward O2 and H2O2. With glucose oxidase (GOD) as an enzyme model, we constructed a novel polyvinylpyrrolidone-protected graphene/polyethylenimine-functionalized ionic liquid/GOD electrochemical biosensor, which achieved the direct electron transfer of GOD, maintained its bioactivity and showed potential application for the fabrication of novel glucose biosensors with linear glucose response up to 14 mM.
---
paper_title: Real-time fluorescence assay of alkaline phosphatase in living cells using boron-doped graphene quantum dots as fluorophores.
paper_content:
This work reports a convenient and real-time assay of alkaline phosphatase (ALP) in living cells based on a fluorescence quench-recovery process at physiological pH, using boron-doped graphene quantum dots (BGQDs) as the fluorophore. The fluorescence of BGQDs is found to be effectively quenched by Ce3+ ions because of the coordination of Ce3+ ions with the carboxyl groups of the BGQDs. Upon addition of adenosine triphosphate (ATP) into the system, the quenched fluorescence can be recovered by ALP-positive cells (such as MCF-7 cells) owing to the removal of Ce3+ ions from the BGQD surface by phosphate ions, which are generated from ATP through catalytic hydrolysis by the ALP expressed in the cells. The extent of fluorescence signal recovery depends on the level of ALP in the cells, which establishes the basis of the ALP assay in living cells. This approach can also be used for specific discrimination of ALP expression levels in different types of cells and thus for sensitive detection of ALP-positive cells (for example, MCF-7 cells) at very low abundance (10 ± 5 cells mL^-1). The advantages of this approach are its high sensitivity, owing to the significant suppression of the background by Ce3+-ion quenching of the BGQD fluorescence, and its ability to avoid false signals arising from the nonspecific adsorption of non-target proteins, because it operates via a fluorescence quench-recovery process. In addition, it can be extended to other enzyme systems, such as ATP-related kinases.
---
paper_title: Graphene based sensors and biosensors
paper_content:
Abstract Graphene has contributed to the fabrication of sensitive sensors and biosensors due to its physical and electrochemical properties. This review discusses the role of graphene and graphene related materials for the improvement of the analytical performance of sensors and biosensors. This paper also provides an overview of recent graphene based sensors and biosensors (2012–2016), comparing their analytical performance for application in clinical, environmental, and food sciences research, and comments on future and interesting research trends in this field.
---
paper_title: Low-temperature synthesis of large-area graphene-based transparent conductive films using surface wave plasma chemical vapor deposition
paper_content:
We present a low-temperature (300–400 °C), large-area (23 cm×20 cm) and efficient synthesis method for graphene-based transparent conductive films using surface wave plasma chemical vapor deposition. The films consist of few-layer graphene sheets. Their transparency and conductivity characteristics make them suitable for practical electrical and optoelectronic applications, which have been demonstrated by the proper operation of a touch panel fabricated using the films. The results confirm that our method could be suitable for the industrial mass production of macroscopic-scale graphene-based films.
---
paper_title: Graphene-Based Ultracapacitors
paper_content:
The surface area of a single graphene sheet is 2630 m^2/g, substantially higher than values derived from BET surface area measurements of activated carbons used in current electrochemical double layer capacitors. Our group has pioneered a new carbon material that we call chemically modified graphene (CMG). CMG materials are made from 1-atom thick sheets of carbon, functionalized as needed, and here we demonstrate their performance in an ultracapacitor cell. Specific capacitances of 135 and 99 F/g in aqueous and organic electrolytes, respectively, have been measured. In addition, high electrical conductivity gives these materials consistently good performance over a wide range of voltage scan rates. These encouraging results illustrate the exciting potential for high-performance electrical energy storage devices based on this new class of carbon material.
---
paper_title: Label and Label-Free Detection Techniques for Protein Microarrays
paper_content:
Protein microarray technology has gone through numerous innovative developments in recent decades. In this review, we focus on the development of protein detection methods embedded in the technology. Early microarrays utilized useful chromophores and versatile biochemical techniques dominated by high-throughput illumination. Recently, the realization of label-free techniques has been greatly advanced by the combination of knowledge in material sciences, computational design and nanofabrication. These rapidly advancing techniques aim to provide data without the intervention of label molecules. Here, we present a brief overview of this remarkable innovation from the perspectives of label and label-free techniques in transducing nano‑biological events.
---
paper_title: Lighting up left-handed Z-DNA: photoluminescent carbon dots induce DNA B to Z transition and perform DNA logic operations
paper_content:
Left-handed Z-DNA has been identified as a transient structure occurred during transcription. DNA B-Z transition has attracted much attention because of not only Z-DNA biological importance but also their relation to disease and DNA nanotechnology. Recently, photoluminescent carbon dots, especially highly luminescent nitrogen-doped carbon dots, have attracted much attention on their applications to bioimaging and gene/drug delivery because of carbon dots with low toxicity, highly stable photoluminescence and controllable surface function. However, it is still unknown whether carbon dots can influence DNA conformation or structural transition, such as B-Z transition. Herein, based on our previous series work on DNA interactions with carbon nanotubes, we report the first example that photoluminescent carbon dots can induce right-handed B-DNA to left-handed Z-DNA under physiological salt conditions with sequence and conformation selectivity. Further studies indicate that carbon dots would bind to DNA major groove with GC preference. Inspired by carbon dots lighting up Z-DNA and DNA nanotechnology, several types of DNA logic gates have been designed and constructed based on fluorescence resonance energy transfer between photoluminescent carbon dots and DNA intercalators.
---
paper_title: Fluorescent "on-off-on" switching sensor based on CdTe quantum dots coupled with multiwalled carbon nanotubes@graphene oxide nanoribbons for simultaneous monitoring of dual foreign DNAs in transgenic soybean.
paper_content:
With increasing concern about potential health and environmental risks, it is essential to develop reliable methods for transgenic soybean detection. Herein, a simple, sensitive and selective assay was constructed based on homogeneous fluorescence resonance energy transfer (FRET) between CdTe quantum dots (QDs) and multiwalled carbon nanotubes@graphene oxide nanoribbons (MWCNTs@GONRs), forming a fluorescent "on-off-on" switch for simultaneous monitoring of two target DNAs from transgenic soybean: the cauliflower mosaic virus 35s promoter (P35s) and the nopaline synthase terminator (TNOS). The capture DNAs were immobilized with the corresponding QDs to obtain strong fluorescent signals (turning on). The strong π-π stacking interaction between the single-stranded DNA (ssDNA) probes and the MWCNTs@GONRs led to minimal background fluorescence due to the FRET process (turning off). The P35s and TNOS targets were recognized by the dual fluorescent probes to form double-stranded DNA (dsDNA) through specific hybridization between the target DNAs and the ssDNA probes. The dsDNA were then released from the surface of the MWCNTs@GONRs, which led the dual fluorescent probes to regain their strong fluorescent emissions (turning on). Therefore, the proposed homogeneous assay can detect P35s and TNOS simultaneously by monitoring the relevant fluorescent emissions. Moreover, this assay can distinguish complementary and mismatched nucleic acid sequences with high sensitivity. The constructed approach has the potential to be a tool for routine detection of genetically modified organisms, with the merits of feasibility and reliability.
---
paper_title: All in the graphene family – A recommended nomenclature for two-dimensional carbon materials
paper_content:
Abstract Interest in two-dimensional, sheet-like or flake-like carbon forms has expanded beyond monolayer graphene to include related materials with significant variations in layer number, lateral dimension, rotational faulting, and chemical modification. Describing this family of “graphene materials” has been causing confusion in the Carbon journal and in the scientific literature as a whole. The international editorial team for Carbon believes that the time has come for a discussion on a rational naming system for two-dimensional carbon forms. We propose here a first nomenclature for two-dimensional carbons that could guide authors toward a more precise description of their subject materials, and could allow the field to move forward with a higher degree of common understanding.
---
paper_title: Graphene Oxide-Upconversion Nanoparticle Based Optical Sensors for Targeted Detection of mRNA Biomarkers Present in Alzheimer’s Disease and Prostate Cancer
paper_content:
The development of new sensors for the accurate detection of biomarkers in biological fluids is of utmost importance for the early diagnosis of diseases. Next to advanced laboratory techniques, there is a need for relatively simple methods which can significantly broaden the availability of diagnostic capability. Here, we demonstrate the successful application of a sensor platform based on graphene oxide and upconversion nanoparticles (NPs) for the specific detection of mRNA-related oligonucleotide markers in complex biological fluids. The combination of near-infrared light upconversion with low-background photon counting readout enables reliable detection of low quantities of small oligonucleotide sequences in the femtomolar range. We demonstrate the successful detection of analytes relevant to mRNAs present in Alzheimer’s disease as well as prostate cancer in human blood serum. The high performance and relative simplicity of the upconversion NP-graphene sensor platform enables new opportunities in early...
---
paper_title: Integration of Biosensors and Drug Delivery Technologies for Early Detection and Chronic Management of Illness
paper_content:
Recent advances in biosensor design and sensing efficacy need to be amalgamated with research in responsive drug delivery systems for building superior health or illness regimes and ensuring good patient compliance. A variety of illnesses require continuous monitoring in order to have efficient illness intervention. Physicochemical changes in the body can signify the occurrence of an illness before it manifests. Even with the usage of sensors that allow diagnosis and prognosis of the illness, medical intervention still has its downfalls. Late detection of illness can reduce the efficacy of therapeutics. Furthermore, the conventional modes of treatment can cause side-effects such as tissue damage (chemotherapy and rhabdomyolysis) and induce other forms of illness (hepatotoxicity). The use of drug delivery systems enables the lowering of side-effects with subsequent improvement in patient compliance. Chronic illnesses require continuous monitoring and medical intervention for efficient treatment to be achieved. Therefore, designing a responsive system that will reciprocate to the physicochemical changes may offer superior therapeutic activity. In this respect, integration of biosensors and drug delivery is a proficient approach and requires designing an implantable system that has a closed loop system. This offers regulation of the changes by means of releasing a therapeutic agent whenever illness biomarkers prevail. Proper selection of biomarkers is vital as this is key for diagnosis and a stimulation factor for responsive drug delivery. By detecting an illness before it manifests by means of biomarkers levels, therapeutic dosing would relate to the severity of such changes. In this review various biosensors and drug delivery systems are discussed in order to assess the challenges and future perspectives of integrating biosensors and drug delivery systems for detection and management of chronic illness.
---
paper_title: Blue photoluminescent carbon nanodots from limeade.
paper_content:
Abstract Carbon-based photoluminescent nanodots are currently among the most promising materials for various applications. The remaining challenges are finding carbon sources and simple synthetic processes that enhance the quantum yield, photostability and biocompatibility of the nanodots. In this work, the synthesis of blue photoluminescent carbon nanodots from limeade via a single-step hydrothermal carbonization process is presented. The lime carbon nanodots (L-CnDs), whose quantum yield exceeds 50% for the 490 nm emission in gram-scale amounts, consist of a graphene core functionalized with oxygen-containing functional groups. Micron-sized flakes of the as-prepared L-CnD powder exhibit multicolor emission depending on the excitation wavelength. The L-CnDs are demonstrated for rapid ferric-ion (Fe3+) detection in water, compared with Fe2+, Cu2+, Co2+, Zn2+, Mn2+ and Ni2+ ions. The photoluminescence quenching of the L-CnD solution under UV light is used to distinguish Fe3+ ions from the others by the naked eye at concentrations as low as 100 μM. Additionally, L-CnDs provide exceptional photostability and biocompatibility for imaging yeast cell morphology. Changes in the morphology of living yeast cells, i.e. cell shape variation and budding, can be observed over periods from a minute to more than an hour without loss of photoluminescence intensity.
---
paper_title: Optical Fibre Sensors Using Graphene-Based Materials: A Review
paper_content:
Graphene and its derivatives have become the most explored materials since Novoselov and Geim (Nobel Prize winners for Physics in 2010) achieved its isolation in 2004. The exceptional properties of graphene have attracted the attention of the scientific community from different research fields, generating high impact not only in scientific journals, but also in general-interest newspapers. Optical fibre sensing is one of the many fields that can benefit from the use of these new materials, combining the amazing morphological, chemical, optical and electrical features of graphene with the advantages that optical fibre offers over other sensing strategies. In this document, a review of the current state of the art for optical fibre sensors based on graphene materials is presented.
---
paper_title: Biological applications of carbon dots
paper_content:
Carbon dots (C-dots), since their first discovery in 2004 by Scrivens et al. during purification of single-walled carbon nanotubes, have gradually become a rising star in the fluorescent nanoparticles family, due to their strong fluorescence, resistance to photobleaching, low toxicity, along with their abundant and inexpensive nature. In the past decade, the procedures for preparing C-dots have become increasingly versatile and facile, and their applications are being extended to a growing number of fields. In this review, we focused on introducing the biological applications of C-dots, hoping to expedite their translation to the clinic.
---
paper_title: A protein-based electrochemical biosensor for detection of tau protein, a neurodegenerative disease biomarker
paper_content:
A protein-based electrochemical biosensor was developed for detection of tau protein aimed towards electrochemically sensing misfolding proteins. The electrochemical assay monitors tau–tau binding and misfolding during the early stage of tau oligomerization. Electrochemical impedance spectroscopy was used to detect the binding event between solution tau protein and immobilized tau protein (tau–Au), acting as a recognition element. The charge transfer resistance (Rct) of tau–Au was 2.9 ± 0.6 kΩ. Subsequent tau binding to tau–Au decreased the Rct to 0.3 ± 0.1 kΩ (90 ± 3% decrease) upon formation of a tau–tau–Au interface. A linear relationship between the Rct and the solution tau concentration was observed from 0.2 to 1.0 μM. The Rct decrease was attributed to an enhanced charge permeability of the tau–tau–Au surface to a redox probe [Fe(CN)6]3−/4−. The electrochemical and surface characterization data suggested conformational and electrostatic changes induced by tau–tau binding. The protein-based electrochemical platform was highly selective for tau protein over bovine serum albumin and allowed for a rapid sample analysis. The protein-based interface was selective for a non-phosphorylated tau441 isoform over the paired-helical filaments of tau, which were composed of phosphorylated and truncated tau isoforms. The electrochemical approach may find application in screening of the early onset of neurodegeneration and aggregation inhibitors.
---
paper_title: Advances and challenges in biosensor-based diagnosis of infectious diseases
paper_content:
Rapid diagnosis of infectious diseases and timely initiation of appropriate treatment are critical determinants that promote optimal clinical outcomes and general public health. Conventional in vitro diagnostics for infectious diseases are time-consuming and require centralized laboratories, experienced personnel and bulky equipment. Recent advances in biosensor technologies have potential to deliver point-of-care diagnostics that match or surpass conventional standards in regards to time, accuracy and cost. Broadly classified as either label-free or labeled, modern biosensors exploit micro- and nanofabrication technologies and diverse sensing strategies including optical, electrical and mechanical transducers. Despite clinical need, translation of biosensors from research laboratories to clinical applications has remained limited to a few notable examples, such as the glucose sensor. Challenges to be overcome include sample preparation, matrix effects and system integration. We review the advances of biosensors for infectious disease diagnostics and discuss the critical challenges that need to be overcome in order to implement integrated diagnostic biosensors in real world settings.
---
paper_title: Prospects and Challenges of Graphene in Biomedical Applications
paper_content:
Graphene materials have entered a phase of maturity in their development that is characterized by their explorative utilization in various types of applications and fields from electronics to biomedicine. Herein, we describe the recent advances made with graphene-related materials in the biomedical field and the challenges facing these exciting new tools both in terms of biological activity and toxicological profiling in vitro and in vivo. Graphene materials today have mainly been explored as components of biosensors and for construction of matrices in tissue engineering. Their antimicrobial activity and their capacity to act as drug delivery platforms have also been reported, however, not as coherently. This report will attempt to offer some perspective as to which areas of biomedical applications can expect graphene-related materials to constitute a tool offering improved functionality and previously unavailable options.
---
paper_title: Biosensors: sense and sensibility.
paper_content:
This review is based on the Theophilus Redwood Medal and Award lectures, delivered to Royal Society of Chemistry meetings in the UK and Ireland in 2012, and presents a personal overview of the field of biosensors. The biosensors industry is now worth billions of United States dollars, the topic attracts the attention of national initiatives across the world and tens of thousands of papers have been published in the area. This plethora of information is condensed into a concise account of the key achievements to date. The reasons for success are examined, some of the more exciting emerging technologies are highlighted and the author speculates on the importance of biosensors as a ubiquitous technology of the future for health and the maintenance of wellbeing.
---
paper_title: Chemically Derived, Ultrasmooth Graphene Nanoribbon Semiconductors
paper_content:
We developed a chemical route to produce graphene nanoribbons (GNR) with width below 10 nanometers, as well as single ribbons with varying widths along their lengths or containing lattice-defined graphene junctions for potential molecular electronics. The GNRs were solution-phase–derived, stably suspended in solvents with noncovalent polymer functionalization, and exhibited ultrasmooth edges with possibly well-defined zigzag or armchair-edge structures. Electrical transport experiments showed that, unlike single-walled carbon nanotubes, all of the sub–10-nanometer GNRs produced were semiconductors and afforded graphene field effect transistors with on-off ratios of about 10^7 at room temperature.
---
paper_title: Ultrahigh electron mobility in suspended graphene
paper_content:
We have achieved mobilities in excess of 200,000 cm^2/Vs at electron densities of ~2*10^11 cm^-2 by suspending single layer graphene. Suspension ~150 nm above a Si/SiO_2 gate electrode and electrical contacts to the graphene was achieved by a combination of electron beam lithography and etching. The specimens were cleaned in situ by employing current-induced heating, directly resulting in a significant improvement of electrical transport. Concomitant with large mobility enhancement, the widths of the characteristic Dirac peaks are reduced by a factor of 10 compared to traditional, non-suspended devices. This advance should allow for accessing the intrinsic transport properties of graphene.
---
paper_title: Two-dimensional gas of massless Dirac fermions in graphene
paper_content:
Quantum electrodynamics (resulting from the merger of quantum mechanics and relativity theory) has provided a clear understanding of phenomena ranging from particle physics to cosmology and from astrophysics to quantum chemistry. The ideas underlying quantum electrodynamics also influence the theory of condensed matter, but quantum relativistic effects are usually minute in the known experimental systems that can be described accurately by the non-relativistic Schrödinger equation. Here we report an experimental study of a condensed-matter system (graphene, a single atomic layer of carbon) in which electron transport is essentially governed by Dirac's (relativistic) equation. The charge carriers in graphene mimic relativistic particles with zero rest mass and have an effective 'speed of light' c* ≈ 10^6 m s^-1. Our study reveals a variety of unusual phenomena that are characteristic of two-dimensional Dirac fermions. In particular we have observed the following: first, graphene's conductivity never falls below a minimum value corresponding to the quantum unit of conductance, even when concentrations of charge carriers tend to zero; second, the integer quantum Hall effect in graphene is anomalous in that it occurs at half-integer filling factors; and third, the cyclotron mass m_c of massless carriers in graphene is described by E = m_c c*^2. This two-dimensional system is not only interesting in itself but also allows access to the subtle and rich physics of quantum electrodynamics in a bench-top experiment.
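As a quick numerical illustration of the cyclotron-mass relation quoted above, the sketch below assumes the standard massless-Dirac relations E_F = ħ k_F c* and n = k_F²/π (spin and valley degeneracy included), so that m_c = ħ√(πn)/c*. The carrier density used is an arbitrary example value, not a figure taken from the paper.

```python
# Minimal numerical check of the cyclotron mass of massless Dirac carriers (illustrative only).
import math

hbar = 1.054571817e-34        # J s
m_e = 9.1093837015e-31        # kg, free-electron mass
c_star = 1.0e6                # m/s, effective "speed of light" quoted in the abstract

n = 1.0e12 * 1e4              # example carrier density: 1e12 cm^-2 converted to m^-2
k_F = math.sqrt(math.pi * n)  # Fermi wavevector from n = k_F^2 / pi (spin + valley degeneracy)
m_c = hbar * k_F / c_star     # cyclotron mass defined through E = m_c c*^2

print(f"m_c = {m_c:.2e} kg  (~{m_c / m_e:.3f} m_e)")  # about 0.02 m_e for this example density
```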
---
paper_title: Electric Field Effect in Atomically Thin Carbon Films
paper_content:
We describe monocrystalline graphitic films, which are a few atoms thick but are nonetheless stable under ambient conditions, metallic, and of remarkably high quality. The films are found to be a two-dimensional semimetal with a tiny overlap between valence and conductance bands, and they exhibit a strong ambipolar electric field effect such that electrons and holes in concentrations up to 10^13 per square centimeter and with room-temperature mobilities of approximately 10,000 square centimeters per volt-second can be induced by applying gate voltage.
---
paper_title: Nanoelectronic biosensors based on CVD grown graphene
paper_content:
Graphene, a single-atom-thick and two-dimensional carbon material, has attracted great attention recently. Because of its unique electrical, physical, and optical properties, graphene has great potential to be a novel alternative to carbon nanotubes in biosensing. We demonstrate the use of large-sized CVD grown graphene films configured as field-effect transistors for real-time biomolecular sensing. Glucose or glutamate molecules were detected by the conductance change of the graphene transistor as the molecules are oxidized by the specific redox enzyme (glucose oxidase or glutamic dehydrogenase) functionalized onto the graphene film. This study indicates that graphene is a promising candidate for the development of real-time nanoelectronic biosensors.
---
paper_title: Single-layer CVD-grown graphene decorated with metal nanoparticles as a promising biosensing platform
paper_content:
Abstract A new approach to the development of a single-layer graphene sensor decorated with metal nanoparticles is presented. Chemical vapor deposition is used to grow single layer graphene on copper. Decoration of the single-layer graphene is achieved by electroless deposition of Au nanoparticles using the copper substrate as a source of electrons. Transfer of the decorated single-layer graphene onto glassy carbon electrodes offers a sensitive platform for biosensor development. As a proof of concept, 10 units of glucose oxidase were deposited on the surface in a Nafion matrix to stabilize the enzyme as well as to prevent interference from ascorbic acid and uric acid. An amperometric linear response calibration in the μmol l^-1 range is obtained. The presented methodology enables highly sensitive platforms for biosensor development, providing scalable roll-to-roll production with a much more reproducible scheme compared to previously reported graphene biosensors based on drop-casting of multi-layer graphene suspensions.
---
paper_title: Real-time reliable determination of binding kinetics of DNA hybridization using a multi-channel graphene biosensor
paper_content:
Reliable determination of binding kinetics and affinity of DNA hybridization and single-base mismatches plays an essential role in systems biology, personalized and precision medicine. The standard tools are optical-based sensors that are difficult to operate in low cost and to miniaturize for high-throughput measurement. Biosensors based on nanowire field-effect transistors have been developed, but reliable and cost-effective fabrication remains a challenge. Here, we demonstrate that a graphene single-crystal domain patterned into multiple channels can measure time- and concentration-dependent DNA hybridization kinetics and affinity reliably and sensitively, with a detection limit of 10 pM for DNA. It can distinguish single-base mutations quantitatively in real time. An analytical model is developed to estimate probe density, efficiency of hybridization and the maximum sensor response. The results suggest a promising future for cost-effective, high-throughput screening of drug candidates, genetic variations and disease biomarkers by using an integrated, miniaturized, all-electrical multiplexed, graphene-based DNA array.
---
paper_title: Large-scale pattern growth of graphene films for stretchable transparent electrodes
paper_content:
High-performance, transparent and stretchable electrodes are in demand for the development of flexible electronic and optoelectronic applications. Graphene is a candidate as the basis material, because of its excellent optical, electrical and mechanical properties. This paper describes a technique to grow centimetre-scale films using chemical vapour deposition on nickel films and a method to pattern and transfer the films to arbitrary substrates. The electrical conductance and optical transparency are as high as those for microscale graphene films.
---
paper_title: 200 GHz Maximum Oscillation Frequency in CVD Graphene Radio Frequency Transistors.
paper_content:
Graphene is a promising candidate in analog electronics with projected operation frequency well into the terahertz range. In contrast to the intrinsic cutoff frequency (fT) of 427 GHz, the maximum oscillation frequency (fmax) of graphene device still remains at low level, which severely limits its application in radio frequency amplifiers. Here, we develop a novel transfer method for chemical vapor deposition graphene, which can prevent graphene from organic contamination during the fabrication process of the devices. Using a self-aligned gate deposition process, the graphene transistor with 60 nm gate length exhibits a record high fmax of 106 and 200 GHz before and after de-embedding, respectively. This work defines a unique pathway to large-scale fabrication of high-performance graphene transistors, and holds significant potential for future application of graphene-based devices in ultra high frequency circuits.
---
paper_title: Roll-to-roll production of 30-inch graphene films for transparent electrodes
paper_content:
Graphene films with electrical and optical characteristics superior to indium tin oxide are produced in a roll-to-roll process and used to construct devices with flexible touch-screen panels.
---
paper_title: Large-area ultrathin films of reduced graphene oxide as a transparent and flexible electronic material
paper_content:
Large-area ultrathin films of reduced graphene oxide as a transparent and flexible electronic material
---
paper_title: A label-free and portable graphene FET aptasensor for children blood lead detection
paper_content:
Lead is a cumulative toxicant that can induce severe health issues, especially in children because of their immature nervous system. Because large-scale monitoring of children's blood lead remains challenging with traditional methods, it is highly desirable to search for alternative techniques or novel sensing materials. Here we report a label-free and portable aptasensor based on a graphene field effect transistor (FET) for effective detection of blood lead in children. With standard solutions of different Pb2+ concentrations, we obtained a dose-response curve and a detection limit below 37.5 ng/L, which is three orders of magnitude lower than the safe blood lead level (100 μg/L). The devices also showed excellent selectivity over other metal cations such as Na+, K+, Mg2+, and Ca2+, suggesting the capability of working in a complex sample matrix. We further successfully demonstrated the detection of Pb2+ ions in real blood samples from children using our aptasensors, and explored their potential applications for quantification. Our results underscore the promise of such graphene FET aptasensors for fast detection of heavy metal ions in health monitoring and disease diagnostics.
---
paper_title: Graphene as a Long-Term Metal Oxidation Barrier: Worse Than Nothing
paper_content:
Anticorrosion and antioxidation surface treatments such as paint or anodization are a foundational component in nearly all industries. Graphene, a single-atom-thick sheet of carbon with impressive impermeability to gases, seems to hold promise as an effective anticorrosion barrier, and recent work supports this hope. We perform a complete study of the short- and long-term performance of graphene coatings for Cu and Si substrates. Our work reveals that although graphene indeed offers effective short-term oxidation protection, over long time scales it promotes more extensive wet corrosion than that seen for an initially bare, unprotected Cu surface. This surprising result has important implications for future scientific studies and industrial applications. In addition to informing any future work on graphene as a protective coating, the results presented here have implications for graphene’s performance in a wide range of applications.
---
paper_title: High-yield production of graphene by liquid-phase exfoliation of graphite
paper_content:
Fully exploiting the properties of graphene will require a method for the mass production of this remarkable material. Two main routes are possible: large-scale growth or large-scale exfoliation. Here, we demonstrate graphene dispersions with concentrations up to approximately 0.01 mg ml^-1, produced by dispersion and exfoliation of graphite in organic solvents such as N-methyl-pyrrolidone. This is possible because the energy required to exfoliate graphene is balanced by the solvent-graphene interaction for solvents whose surface energies match that of graphene. We confirm the presence of individual graphene sheets by Raman spectroscopy, transmission electron microscopy and electron diffraction. Our method results in a monolayer yield of approximately 1 wt%, which could potentially be improved to 7-12 wt% with further processing. The absence of defects or oxides is confirmed by X-ray photoelectron, infrared and Raman spectroscopies. We are able to produce semi-transparent conducting films and conducting composites. Solution processing of graphene opens up a range of potential large-area applications, from device and sensor fabrication to liquid-phase chemistry.
---
paper_title: Electrochemical ascorbic acid sensor based on DMF-exfoliated graphene
paper_content:
This paper describes the electron transfer properties of graphene nano-sheets (GNSs) immobilised on pyrolysed photoresist film (PPF) electrodes. The former are produced by the dispersion and exfoliation of graphite in dimethylformamide, and they are characterised using transmission electron microscopy, scanning electron microscopy and Raman spectroscopy. Cyclic voltammetry and electrochemical impedance spectroscopy are used to quantify the effect of the GNSs on electrochemical surface area and on electron transfer kinetics. Compelling evidence is reported in relation to the importance of edge-plane sites and defects in the promotion of electron transfer at carbon nanostructures. A novel ascorbic acid (vitamin C) sensor is presented based on the PPF/GNS system, which is effective in the range 0.4 to 6.0 mM, with a 0.12 mM detection limit. The selectivity of the sensor is demonstrated using a commercially available vitamin C supplement. This is the first report of the electrochemical properties of graphene nano-sheets produced using liquid-phase exfoliation, and it will serve as an important benchmark in the development of inexpensive graphene-based electrodes with high surface area and electro-catalytic activity.
---
paper_title: Liquid Phase Production of Graphene by Exfoliation of Graphite in Surfactant/Water Solutions
paper_content:
We have demonstrated a method to disperse and exfoliate graphite to give graphene suspended in water-surfactant solutions. Optical characterization of these suspensions allowed the partial optimization of the dispersion process. Transmission electron microscopy showed the dispersed phase to consist of small graphitic flakes. More than 40% of these flakes had <5 layers with approximately 3% of flakes consisting of monolayers. Atomic resolution transmission electron microscopy shows the monolayers to be generally free of defects. The dispersed graphitic flakes are stabilized against reaggregation by Coulomb repulsion due to the adsorbed surfactant. We use DLVO and Hamaker theory to describe this stabilization. However, the larger flakes tend to sediment out over approximately 6 weeks, leaving only small flakes dispersed. It is possible to form thin films by vacuum filtration of these dispersions. Raman and IR spectroscopic analysis of these films suggests the flakes to be largely free of defects and oxides, although X-ray photoelectron spectroscopy shows evidence of a small oxide population. Individual graphene flakes can be deposited onto mica by spray coating, allowing statistical analysis of flake size and thickness. Vacuum filtered films are reasonably conductive and are semitransparent. Further improvements may result in the development of cheap transparent conductors.
---
paper_title: Liquid-Phase Exfoliation of Graphite Towards Solubilized Graphenes
paper_content:
Following the astonishing discoveries of fullerenes and carbon nanotubes in earlier decades, the rise of graphene has recently triggered an exciting new area in the field of carbon nanoscience with continuously growing academic and technological impetus. Currently, several methods have been proposed to prepare graphenes, such as micromechanical cleavage, thermal annealing of SiC, chemical reduction of graphite oxide, intercalative expansion of graphite, bottom-up growth, chemical vapor deposition, and liquid-phase exfoliation. This latter top-down approach in particular is very appealing from a chemist’s point of view for the following reasons: i) it is direct, simple, and benign, producing graphenes simply by solvent treatment of graphite powders, and ii) the as-obtained sheets form colloidal dispersions in the solvents used for the exfoliation, thereby enabling their manipulation in various processes, like mixing, blending, casting, impregnation, spin-coating, or functionalization. The key parameter for suitable solvents is that the solvent–graphene interactions must be at least comparable to those existing between the stacked graphenes in graphite. To that end, Coleman and coworkers have successfully demonstrated this concept using N-methylpyrrolidone, N,N-dimethylacetamide, γ-butyrolactone, 1,3-dimethyl-2-imidazolidinone, and benzyl benzoate as solvents.
---
paper_title: Liquid Exfoliation of Defect-Free Graphene
paper_content:
Due to its unprecedented physical properties, graphene has generated huge interest over the last 7 years. Graphene is generally fabricated in one of two ways: as very high quality sheets produced in limited quantities by micromechanical cleavage or vapor growth or as a rather defective, graphene-like material, graphene oxide, produced in large quantities. However, a growing number of applications would profit from the availability of a method to produce high-quality graphene in large quantities. This Account describes recent work to develop such a processing route inspired by previous theoretical and experimental studies on the solvent dispersion of carbon nanotubes. That work had shown that nanotubes could be effectively dispersed in solvents whose surface energy matched that of the nanotubes. We describe the application of the same approach to the exfoliation of graphite to give graphene in a range of solvents. When graphite powder is exposed to ultrasonication in the presence of a suitable solvent, the powder fragments into nanosheets, which are stabilized against aggregation by the solvent. The enthalpy of mixing is minimized for solvents with surface energies close to that of graphene (∼68 mJ/m^2). The exfoliated nanosheets are free of defects and oxides and can be produced in large quantities. Once solvent exfoliation is possible, the process can be optimized and the nanosheets can be separated by size. The use of surfactants can also stabilize exfoliated graphene in water, where the ζ potential of the surfactant-coated graphene nanosheets controls the dispersed concentration. Liquid exfoliated graphene can be used for a range of applications: graphene dispersions as optical limiters, films of graphene flakes as transparent conductors or sensors, and exfoliated graphene as a mechanical reinforcement for polymer-based composites. Finally, we have extended this process to exfoliate other layered compounds such as BN and MoS2. Such materials will be important in a range of applications from thermoelectrics to battery electrodes. This liquid exfoliation technique can be applied to a wide range of materials and has the potential to be scaled up into an industrial process. We believe the coming decade will see an explosion in the applications involving liquid exfoliated two-dimensional materials.
---
paper_title: Existence and topological stability of Fermi points in multilayered graphene
paper_content:
We study the existence and topological stability of Fermi points in a graphene layer and stacks with many layers. We show that the discrete symmetries (space-time inversion) stabilize the Fermi points in monolayer, bilayer, and multilayer graphenes with orthorhombic stacking. The bands near $k=0$ and $ϵ=0$ in multilayers with the Bernal stacking depend on the parity of the number of layers, and Fermi points are unstable when the number of layers is odd. The low-energy changes in the electronic structure induced by commensurate perturbations which mix the two Dirac points are also investigated.
---
paper_title: Substrate-induced bandgap opening in epitaxial graphene.
paper_content:
Graphene has shown great application potential as the host material for next-generation electronic devices. However, despite its intriguing properties, one of the biggest hurdles for graphene to be useful as an electronic material is the lack of an energy gap in its electronic spectra. This, for example, prevents the use of graphene in making transistors. Although several proposals have been made to open a gap in graphene's electronic spectra, they all require complex engineering of the graphene layer. Here, we show that when graphene is epitaxially grown on SiC substrate, a gap of ~0.26 eV is produced. This gap decreases as the sample thickness increases and eventually approaches zero when the number of layers exceeds four. We propose that the origin of this gap is the breaking of sublattice symmetry owing to the graphene-substrate interaction. We believe that our results highlight a promising direction for band gap engineering of graphene.
---
paper_title: EDGE STATE IN GRAPHENE RIBBONS : NANOMETER SIZE EFFECT AND EDGE SHAPE DEPENDENCE
paper_content:
Finite graphite systems having a zigzag edge exhibit a special edge state. The corresponding energy bands are almost flat at the Fermi level and thereby give a sharp peak in the density of states. The charge density in the edge state is strongly localized on the zigzag edge sites. No such localized state appears in graphite systems having an armchair edge. By utilizing the graphene ribbon model, we discuss the effect of the system size and edge shape on the special edge state. By varying the width of the graphene ribbons, we find that the nanometer size effect is crucial for determining the relative importance of the edge state. We also have extended the graphene ribbon to have edges of a general shape, which is defined as a mixture of zigzag and armchair sites. Examining the relative importance of the edge state for graphene ribbons with general edges, we find that a non-negligible edge state survives even in graphene ribbons with less developed zigzag edges. We demonstrate that such an edge shape with three or four zigzag sites per sequence is sufficient to show an edge state, when the system size is on a nanometer scale. The special characteristics of the edge state play a large role in determining the density of states near the Fermi level for graphite networks on a nanometer scale.
---
paper_title: Origins of anomalous electronic structures of epitaxial graphene on silicon carbide
paper_content:
On the basis of first-principles calculations, we report that a novel interfacial atomic structure occurs between graphene and the surface of silicon carbide, destroying the Dirac point of graphene and opening a substantial energy gap there. In the calculated atomic structures, a quasiperiodic 6×6 domain pattern emerges out of a larger commensurate 6√3 × 6√3 R30° periodic interfacial reconstruction, resolving a long-standing experimental controversy on the periodicity of the interfacial superstructures. Our theoretical energy spectrum shows a gap and midgap states at the Dirac point of graphene, which are in excellent agreement with the recently observed anomalous angle-resolved photoemission spectra. Beyond solving unexplained issues in epitaxial graphene, our atomistic study may provide a way to engineer the energy gaps of graphene on substrates.
---
paper_title: Origin of the energy bandgap in epitaxial graphene
paper_content:
We studied the effect of quantum confinement on the size of the band gap in single layer epitaxial graphene. Samples with different graphene terrace sizes are studied by using low energy electron microscopy (LEEM) and angle-resolved photoemission spectroscopy (ARPES). The direct correlation between the terrace size extracted from LEEM and the gap size extracted from ARPES shows that quantum confinement alone cannot account for the large gap observed in epitaxial graphene samples.
---
paper_title: Direct voltammetric detection of DNA and pH sensing on epitaxial graphene: an insight into the role of oxygenated defects.
paper_content:
In this paper, we carried out detailed electrochemical studies of epitaxial graphene (EG) using inner-sphere and outer-sphere redox mediators. The EG sample was anodized systematically to investigate the effect of edge plane defects on the heterogeneous charge transfer kinetics and capacitive noise. We found that anodized EG, consisting of oxygen-related defects, is a superior biosensing platform for the detection of nucleic acids, uric acid (UA), dopamine (DA), and ascorbic acid (AA). Mixtures of nucleic acids (A, T, C, G) or biomolecules (AA, UA, DA) can be resolved as individual peaks using differential pulse voltammetry. In fact, an anodized EG voltammetric sensor can realize the simultaneous detection of all four DNA bases in double stranded DNA (dsDNA) without a prehydrolysis step, and it can also differentiate single stranded DNA from dsDNA. Our results show that graphene with high edge plane defects, as opposed to pristine graphene, is the platform of choice for high-resolution electrochemical sensing.
---
paper_title: The growth and morphology of epitaxial multilayer graphene
paper_content:
The electronic properties of epitaxial graphene grown on SiC have shown its potential as a viable candidate for post-CMOS electronics. However, progress in this field requires a detailed understanding of both the structure and growth of epitaxial graphene. To that end, this review will focus on the current state of epitaxial graphene research as it relates to the structure of graphene grown on SiC. We pay particular attention to the similarities and differences between graphene growth on the two polar faces, (0001) and (000$\bar{1}$), of hexagonal SiC. Growth techniques, subsequent morphology and the structure of the graphene/SiC interface and graphene stacking order are reviewed and discussed. Where possible the relationship between film morphology and electronic properties will also be reviewed.
---
paper_title: Transfer of Graphene Layers Grown on SiC Wafers to Other Substrates and Their Integration into Field Effect Transistors
paper_content:
This letter presents a simple method for transferring epitaxial sheets of graphene on silicon carbide to other substrates. The graphene was grown on the (0001) face of 6H-SiC by thermal annealing at 1550 °C in a hydrogen atmosphere. Transfer was accomplished using a peeling process with a bilayer film of gold/polyimide, to yield graphene with square millimeters of coverage on the target substrate. Raman spectroscopy provided evidence that the transferred material is single layer. Back gated field-effect transistors fabricated on oxidized silicon substrates with Cr/Au as source-drain electrodes exhibited ambipolar characteristics with hole mobilities of ∼100 cm2/V-s, and negligible influence of resistance at the contacts.
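The ~100 cm2/V·s hole mobility quoted above is the kind of figure obtained from the linear-regime transfer curve of a back-gated FET. As an illustrative aside (not the authors' analysis), the sketch below applies the standard extraction μ = (L / (W·C_ox·V_DS))·dI_D/dV_G to a synthetic transfer curve; the device dimensions, oxide parameters and current values are assumed placeholders.

```python
import numpy as np

# Hypothetical transfer-curve data (V_G in volts, I_D in amperes); placeholder values,
# not measurements from the cited work.
V_G = np.linspace(-40, 40, 81)
V_DS = 0.1                      # drain-source bias (V), assumed
L, W = 10e-6, 5e-6              # channel length and width (m), assumed
t_ox, eps_r = 300e-9, 3.9       # SiO2 thickness and relative permittivity, assumed
eps0 = 8.854e-12
C_ox = eps_r * eps0 / t_ox      # gate capacitance per unit area (F/m^2)

# Synthetic V-shaped (ambipolar-like) transfer curve, only to exercise the extraction.
I_D = 1e-6 * (0.5 + 0.02 * np.abs(V_G - 5.0))

# Linear-regime field-effect mobility: mu = (L / (W * C_ox * V_DS)) * dI_D/dV_G
gm = np.gradient(I_D, V_G)                  # transconductance (S)
mu = (L / (W * C_ox * V_DS)) * np.abs(gm)   # mobility (m^2/V·s)
print(f"peak field-effect mobility ≈ {mu.max() * 1e4:.1f} cm^2/V·s")
```

The peak of the extracted μ(V_G) curve is what is typically quoted as the field-effect mobility of such devices.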
---
paper_title: Electronic states of graphene nanoribbons studied with the Dirac equation
paper_content:
We study the electronic states of narrow graphene ribbons (``nanoribbons'') with zigzag and armchair edges. The finite width of these systems breaks the spectrum into an infinite set of bands, which we demonstrate can be quantitatively understood using the Dirac equation with appropriate boundary conditions. For the zigzag nanoribbon we demonstrate that the boundary condition allows a particlelike and a holelike band with evanescent wave functions confined to the surfaces, which continuously turn into the well-known zero energy surface states as the width gets large. For armchair edges, we show that the boundary condition leads to admixing of valley states, and the band structure is metallic when the width of the sample in lattice constant units has the form $3M+1$, with $M$ an integer, and insulating otherwise. A comparison of the wave functions and energies from tight-binding calculations and solutions of the Dirac equations yields quantitative agreement for all but the narrowest ribbons.
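The width rule stated in this abstract (armchair ribbons are metallic when the width in lattice-constant units has the form 3M+1) lends itself to a compact sketch. The snippet below simply encodes that rule and adds the generic confinement scale ħv_F/W as a rough indication of the sub-band splitting; the exact gap prefactor depends on the boundary condition and is not taken from the paper.

```python
# Width rule from the abstract plus an order-of-magnitude confinement scale.
HBAR = 1.054571817e-34      # J*s
V_F = 1.0e6                 # Fermi velocity of graphene (m/s), commonly quoted value
A_LATTICE = 0.246e-9        # graphene lattice constant (m)
EV = 1.602176634e-19        # J per eV

def is_metallic_armchair(n_lattice_units: int) -> bool:
    """Rule quoted in the abstract: metallic iff width = 3M + 1 lattice constants."""
    return n_lattice_units % 3 == 1

def confinement_scale_eV(n_lattice_units: int) -> float:
    """Generic sub-band energy scale hbar*v_F/W for a ribbon of the given width."""
    width = n_lattice_units * A_LATTICE
    return HBAR * V_F / width / EV

for n in range(4, 13):
    tag = "metallic" if is_metallic_armchair(n) else "semiconducting"
    print(f"width = {n:2d} a  ({n * A_LATTICE * 1e9:4.2f} nm): {tag}, "
          f"confinement scale ~ {confinement_scale_eV(n):.2f} eV")
```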
---
paper_title: High-throughput solution processing of large-scale graphene
paper_content:
Graphene is a promising material for the next-generation of nanoelectronic devices, but it has been difficult to produce single-layer samples in bulk quantities. A solution-based process for the large-scale production of single-layer, chemically converted graphene has now been demonstrated and used to make field-effect devices with currents that are three orders of magnitude higher than previously reported for chemically produced graphene.
---
paper_title: Improved Synthesis of Graphene Oxide
paper_content:
An improved method for the preparation of graphene oxide (GO) is described. Currently, Hummers’ method (KMnO4, NaNO3, H2SO4) is the most common method used for preparing graphene oxide. We have found that excluding the NaNO3, increasing the amount of KMnO4, and performing the reaction in a 9:1 mixture of H2SO4/H3PO4 improves the efficiency of the oxidation process. This improved method provides a greater amount of hydrophilic oxidized graphene material as compared to Hummers’ method or Hummers’ method with additional KMnO4. Moreover, even though the GO produced by our method is more oxidized than that prepared by Hummers’ method, when both are reduced in the same chamber with hydrazine, chemically converted graphene (CCG) produced from this new method is equivalent in its electrical conductivity. In contrast to Hummers’ method, the new method does not generate toxic gas and the temperature is easily controlled. This improved synthesis of GO may be important for large-scale production of GO as well as the ...
---
paper_title: A new structure model of graphite oxide
paper_content:
Fluorination of graphite oxide under mild conditions gives the same X-ray diffraction pattern as that for stage 2-type graphite fluoride, (C2F)n. Well-hydrated graphite oxide shows the presence of a superlattice along the c axis. A new structure model of graphite oxide has been proposed based on these facts. The new model consists of double carbon layers linked with each other by sp3 bonds of carbon perpendicular to the carbon network. The carbonyl and hydroxyl groups are combined with the double carbon layers from above and below. This structure model is an intermediate form between two ideal structures, C8(OH)4 (c0 = 8.22 × 2 Å) and C8O2 (c0 = 5.52 × 2 Å). It has a structure with sixfold symmetry (hexagonal system 6m2) with the stacking sequence of carbon layers A-A'/B-B'/… The repeat distance along the c axis decreases with increasing dehydration. The composition is expressed by C8O2−x(OH)2x (0 ≤ x ≤ 2).
---
paper_title: Chemical reduction of graphene oxide: a synthetic chemistry viewpoint
paper_content:
The chemical reduction of graphene oxide is a promising route towards the large scale production of graphene for commercial applications. The current state-of-the-art in graphene oxide reduction, consisting of more than 50 types of reducing agent, will be reviewed from a synthetic chemistry point of view. Emphasis is placed on the techniques, reaction mechanisms and the quality of the produced graphene. The reducing agents are reviewed under two major categories: (i) those which function according to well-supported mechanisms and (ii) those which function according to proposed mechanisms based on knowledge of organic chemistry. This review will serve as a valuable platform to understand the efficiency of these reducing agents for the reduction of graphene oxide.
---
paper_title: Solid-State NMR Studies of the Structure of Graphite Oxide
paper_content:
Graphite oxide (GO) and its derivatives have been studied using 13C and 1H NMR. The 13C NMR lines at 60, 70, and 130 ppm are assigned to C−OH, C−O−C, and >C=C< carbons, respectively. The >C=C< double bonds are relatively stable, while C−OH groups may condense to form C−O−C (ether) linkages. There are at least two magnetically inequivalent C−OH sites, and the structure does not necessarily possess long-range order. Water molecules interact very strongly with the structure. The results reveal a number of new structural features.
---
paper_title: Synthesis and solid-state NMR structural characterization of 13C-labeled graphite oxide.
paper_content:
The detailed chemical structure of graphite oxide (GO), a layered material prepared from graphite almost 150 years ago and a precursor to chemically modified graphenes, has not been previously resolved because of the pseudo-random chemical functionalization of each layer, as well as variations in exact composition. Carbon-13 (13C) solid-state nuclear magnetic resonance (SSNMR) spectra of GO for natural abundance 13C have poor signal-to-noise ratios. Approximately 100% 13C-labeled graphite was made and converted to 13C-labeled GO, and 13C SSNMR was used to reveal details of the chemical bonding network, including the chemical groups and their connections. Carbon-13-labeled graphite can be used to prepare chemically modified graphenes for 13C SSNMR analysis with enhanced sensitivity and for fundamental studies of 13C-labeled graphite and graphene.
---
paper_title: New insights into the structure and reduction of graphite oxide
paper_content:
Graphite oxide is one of the main precursors of graphene-based materials, which are highly promising for various technological applications because of their unusual electronic properties. Although epoxy and hydroxyl groups are widely accepted as its main functionalities, the complete structure of graphite oxide has remained elusive. By interpreting spectroscopic data in the context of the major functional groups believed to be present in graphite oxide, we now show evidence for the presence of five- and six-membered-ring lactols. On the basis of this chemical composition, we devised a complete reduction process through chemical conversion by sodium borohydride and sulfuric acid treatment, followed by thermal annealing. Only small amounts of impurities are present in the final product (less than 0.5 wt% of sulfur and nitrogen, compared with about 3 wt% with other chemical reductions). This method is particularly effective in the restoration of the π-conjugated structure, and leads to highly soluble and conductive graphene materials.
---
paper_title: A new structural model for graphite oxide
paper_content:
Solid-state 13C NMR spectra of graphite oxide (GO) and its derivatives prompt us to propose a new structural model. The spectra of GO treated with KI and the course of the thermal decomposition of GO reveal the presence of epoxide groups, responsible for the oxidizing nature of the material. GO is built of aromatic “islands” of variable size which have not been oxidized, and are separated from each other by aliphatic 6-membered rings containing C–OH, epoxide groups and double bonds. The carbon grid is nearly flat; a small degree of warping is caused by the carbons attached to OH groups, which are in a slightly distorted tetrahedral configuration.
---
paper_title: Layer-by-Layer Assembly of Ultrathin Composite Films from Micron-Sized Graphite Oxide Sheets and Polycations
paper_content:
Unilamellar colloids of graphite oxide (GO) were prepared from natural graphite and were grown as monolayer and multilayer thin films on cationic surfaces by electrostatic self-assembly. The multilayer films were grown by alternate adsorption of anionic GO sheets and cationic poly(allylamine hydrochloride) (PAH). The monolayer films consisted of 11−14 Å thick GO sheets, with lateral dimensions between 150 nm and 9 μm. Silicon substrates primed with amine monolayers gave partial GO monolayers, but surfaces primed with Al13O4(OH)24(H2O)12^7+ ions gave densely tiled films that covered approximately 90% of the surface. When alkaline GO colloids were used, the monolayer assembly process selected the largest sheets (from 900 nm to 9 μm) from the suspension. In this case, many of the flexible sheets appeared folded in AFM images. Multilayer (GO/PAH)n films were invariably thicker than expected from the individual thicknesses of the sheets and the polymer monolayers, and this behavior is also attributed to folding...
---
paper_title: Evidence of Graphitic AB Stacking Order of Graphite Oxides
paper_content:
Graphite oxide (GO) samples were prepared by a simplified Brodie method. Hydroxyl, epoxide, carboxyl, and some alkyl functional groups are present in the GO, as identified by solid-state 13C NMR, Fourier-transform infrared spectroscopy, and X-ray photoemission spectroscopy. Starting with pyrolytic graphite (interlayer separation 3.36 Å), the average interlayer distance after 1 h of reaction, as determined by X-ray diffraction, increased to 5.62 Å and then increased with further oxidation to 7.37 Å after 24 h. A smaller signal in 13C CPMAS NMR compared to that in 13C NMR suggests that carboxyl and alkyl groups are at the edges of the flakes of graphite oxide. Other aspects of the chemical bonding were assessed from the NMR and XPS data and are discussed. AB stacking of the layers in the GO was inferred from an electron diffraction study. The elemental composition of GO prepared using this simplified Brodie method is further discussed.
---
paper_title: Evolution of Surface Functional Groups in a Series of Progressively Oxidized Graphite Oxides
paper_content:
This study contributes to the sustained effort to unravel the chemical structure of graphite oxide (GO) by proposing a model based on elemental analysis, transmission electron microscopy, X-ray diffraction, 13C magic-angle spinning NMR, diffuse reflectance infrared Fourier transform spectroscopy, X-ray photoelectron spectroscopy, and electron spin resonance investigations. The model exhibits a carbon network consisting of two kinds of regions (of trans-linked cyclohexane chairs and ribbons of flat hexagons with C=C double bonds) and functional groups such as tertiary OH, 1,3-ether, ketone, quinone, and phenol (aromatic diol). The latter species give a clear explanation for the observed planar acidity of GO, which could not be interpreted by the previous models. The above methods also confirmed the evolution of the surface functional groups upon successive oxidation steps.
---
paper_title: A protein-based electrochemical biosensor for detection of tau protein, a neurodegenerative disease biomarker
paper_content:
A protein-based electrochemical biosensor was developed for detection of tau protein aimed towards electrochemically sensing misfolding proteins. The electrochemical assay monitors tau–tau binding and misfolding during the early stage of tau oligomerization. Electrochemical impedance spectroscopy was used to detect the binding event between solution tau protein and immobilized tau protein (tau–Au), acting as a recognition element. The charge transfer resistance (Rct) of tau–Au was 2.9 ± 0.6 kΩ. Subsequent tau binding to tau–Au decreased the Rct to 0.3 ± 0.1 kΩ (90 ± 3% decrease) upon formation of a tau–tau–Au interface. A linear relationship between the Rct and the solution tau concentration was observed from 0.2 to 1.0 μM. The Rct decrease was attributed to an enhanced charge permeability of the tau–tau–Au surface to a redox probe [Fe(CN)6]3−/4−. The electrochemical and surface characterization data suggested conformational and electrostatic changes induced by tau–tau binding. The protein-based electrochemical platform was highly selective for tau protein over bovine serum albumin and allowed for a rapid sample analysis. The protein-based interface was selective for a non-phosphorylated tau441 isoform over the paired-helical filaments of tau, which were composed of phosphorylated and truncated tau isoforms. The electrochemical approach may find application in screening of the early onset of neurodegeneration and aggregation inhibitors.
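The linear Rct–concentration relationship reported above (0.2–1.0 μM) is used in practice as a calibration curve. A minimal sketch, with invented data points rather than the paper's measurements, shows the usual fit-and-invert workflow:

```python
import numpy as np

# Hypothetical calibration data (tau concentration in uM vs. charge-transfer resistance in kOhm);
# the numbers are illustrative, not values reported in the cited work.
conc = np.array([0.2, 0.4, 0.6, 0.8, 1.0])      # uM
rct  = np.array([2.4, 1.9, 1.4, 0.9, 0.4])      # kOhm (Rct decreases as tau binds)

slope, intercept = np.polyfit(conc, rct, 1)      # linear calibration: Rct = slope*c + intercept
print(f"calibration: Rct = {slope:.2f}*c + {intercept:.2f} (kOhm, c in uM)")

# Invert the calibration for an unknown sample.
rct_unknown = 1.15                               # measured Rct (kOhm), hypothetical
c_est = (rct_unknown - intercept) / slope
print(f"estimated tau concentration ≈ {c_est:.2f} uM")
```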
---
paper_title: The chemistry of graphene oxide.
paper_content:
The chemistry of graphene oxide is discussed in this critical review. Particular emphasis is directed toward the synthesis of graphene oxide, as well as its structure. Graphene oxide as a substrate for a variety of chemical transformations, including its reduction to graphene-like materials, is also discussed. This review will be of value to synthetic chemists interested in this emerging field of materials science, as well as those investigating applications of graphene who would find a more thorough treatment of the chemistry of graphene oxide useful in understanding the scope and limitations of current approaches which utilize this material (91 references).
---
paper_title: Chemical reduction of graphene oxide: a synthetic chemistry viewpoint
paper_content:
The chemical reduction of graphene oxide is a promising route towards the large scale production of graphene for commercial applications. The current state-of-the-art in graphene oxide reduction, consisting of more than 50 types of reducing agent, will be reviewed from a synthetic chemistry point of view. Emphasis is placed on the techniques, reaction mechanisms and the quality of the produced graphene. The reducing agents are reviewed under two major categories: (i) those which function according to well-supported mechanisms and (ii) those which function according to proposed mechanisms based on knowledge of organic chemistry. This review will serve as a valuable platform to understand the efficiency of these reducing agents for the reduction of graphene oxide.
---
paper_title: The photoluminescence mechanism in carbon dots (graphene quantum dots, carbon nanodots, and polymer dots): current state and future perspective
paper_content:
At present, the actual mechanism of the photoluminescence (PL) of fluorescent carbon dots (CDs) is still an open debate among researchers. Because of the variety of CDs, it is highly important to summarize the PL mechanism for these kinds of carbon materials; doing so can guide the development of effective synthesis routes and novel applications. This review will focus on the PL mechanism of CDs. Three types of fluorescent CDs were involved: graphene quantum dots (GQDs), carbon nanodots (CNDs), and polymer dots (PDs). Four reasonable PL mechanisms have been confirmed: the quantum confinement effect or conjugated π-domains, which are determined by the carbon core; the surface state, which is determined by hybridization of the carbon backbone and the connected chemical groups; the molecule state, which is determined solely by the fluorescent molecules connected on the surface or interior of the CDs; and the crosslink-enhanced emission (CEE) effect. To give a thorough summary, the category and synthesis routes, as well as the chemical/physical properties for the CDs, are briefly introduced in advance.
---
paper_title: Size and pH dependent photoluminescence of graphene quantum dots with low oxygen content
paper_content:
The photoluminescence of graphene quantum dots (GQDs) is rigorously investigated due to their potential applications. However, GQDs from graphene oxide are inherently embedded with non-negligible defects and oxygen grafted onto the edge and basal plane, which induce a change of innate electronic structure in the GQDs. Thus, graphene oxide based GQDs can misrepresent the characteristic properties of primitive GQDs. Here we report the size and pH dependent photophysical properties of GQDs that minimize the content of oxygen and defects. From auger electron spectroscopy, the oxygen content of the GQDs with two different lateral sizes (∼2 nm and ∼18 nm) was probed and found to be ∼5% and ∼8%, respectively. Two common photoluminescence (PL) peaks were observed at 436 nm (the intrinsic bandgap) and 487 nm (the extrinsic bandgap) for both GQDs. The characteristic PL properties in extrinsic and intrinsic bandgaps examined by optical spectroscopic methods show that the emission peak was red-shifted and that the peak width was widened as the size increased. Moreover, the PL lifetime and intensity were not only reversibly changed by pH but also depended on the excitation wavelength. This is in line with our previous report and is ascribed to the size variation of sp2 subdomains and edge functionalization.
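The two emission peaks quoted above (436 nm intrinsic, 487 nm extrinsic) translate directly into transition energies via E = hc/λ; the short conversion below makes the red shift explicit and is purely illustrative arithmetic.

```python
H = 6.62607015e-34   # Planck constant (J*s)
C = 2.99792458e8     # speed of light (m/s)
EV = 1.602176634e-19 # J per eV

def photon_energy_eV(wavelength_nm: float) -> float:
    """Convert an emission wavelength to photon energy: E = h*c/lambda."""
    return H * C / (wavelength_nm * 1e-9) / EV

for label, lam in [("intrinsic band gap (436 nm)", 436.0),
                   ("extrinsic band gap (487 nm)", 487.0)]:
    print(f"{label}: {photon_energy_eV(lam):.2f} eV")
```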
---
paper_title: Synthesis of Luminescent Graphene Quantum Dots with High Quantum Yield and Their Toxicity Study
paper_content:
Graphene quantum dots (GQDs) with high fluorescence quantum yield have emerged as a new generation of probes for bioimaging. In this work, luminescent GQDs were prepared by an ameliorative photo-Fenton reaction and a subsequent hydrothermal process using graphene oxide sheets as the precursor. The as-prepared GQDs were nanomaterials with size ranging from 2.3 to 6.4 nm and emitted intense green luminescence in water. The fluorescence quantum yield was as high as 24.6% (excited at 340 nm) and the fluorescence was strongest at pH 7. Moreover, the influences of low-concentration (12.5, 25 μg/mL) GQDs on the morphology, viability, membrane integrity, internal cellular reactive oxygen species level and mortality of HeLa cells were relatively weak, and the in vitro imaging demonstrated that the GQDs were mainly located in the cytoplasm region. More strikingly, zebrafish embryos were co-cultured with GQDs for in vivo imaging, and the results of a heart-rate test showed that the intake of small amounts of GQDs brought little harm to the cardiovascular system of zebrafish. GQDs with high quantum yield and strong photoluminescence show good biocompatibility, making them promising for cell imaging, biolabeling and other biomedical applications.
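Quantum yields such as the 24.6% quoted above are commonly obtained by the relative (reference-dye) method. The sketch below implements that textbook relation with hypothetical intensities and absorbances (quinine sulfate assumed as the reference); it is not the authors' actual data reduction.

```python
def relative_quantum_yield(i_sample, a_sample, n_sample,
                           i_ref, a_ref, n_ref, qy_ref):
    """Relative quantum-yield method:
    QY = QY_ref * (I/I_ref) * (A_ref/A) * (n^2 / n_ref^2),
    where I is the integrated emission, A the absorbance at the excitation
    wavelength, and n the solvent refractive index."""
    return qy_ref * (i_sample / i_ref) * (a_ref / a_sample) * (n_sample**2 / n_ref**2)

# Hypothetical numbers: quinine sulfate in 0.1 M H2SO4 (QY ~ 0.54) as the reference,
# GQD sample in water; all intensities and absorbances are illustrative.
qy = relative_quantum_yield(i_sample=4.6e5, a_sample=0.048, n_sample=1.33,
                            i_ref=1.0e6, a_ref=0.050, n_ref=1.33, qy_ref=0.54)
print(f"estimated quantum yield ≈ {qy * 100:.1f} %")
```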
---
paper_title: Lighting up left-handed Z-DNA: photoluminescent carbon dots induce DNA B to Z transition and perform DNA logic operations
paper_content:
Left-handed Z-DNA has been identified as a transient structure occurred during transcription. DNA B-Z transition has attracted much attention because of not only Z-DNA biological importance but also their relation to disease and DNA nanotechnology. Recently, photoluminescent carbon dots, especially highly luminescent nitrogen-doped carbon dots, have attracted much attention on their applications to bioimaging and gene/drug delivery because of carbon dots with low toxicity, highly stable photoluminescence and controllable surface function. However, it is still unknown whether carbon dots can influence DNA conformation or structural transition, such as B-Z transition. Herein, based on our previous series work on DNA interactions with carbon nanotubes, we report the first example that photoluminescent carbon dots can induce right-handed B-DNA to left-handed Z-DNA under physiological salt conditions with sequence and conformation selectivity. Further studies indicate that carbon dots would bind to DNA major groove with GC preference. Inspired by carbon dots lighting up Z-DNA and DNA nanotechnology, several types of DNA logic gates have been designed and constructed based on fluorescence resonance energy transfer between photoluminescent carbon dots and DNA intercalators.
---
paper_title: Electrophoretic Analysis and Purification of Fluorescent Single-Walled Carbon Nanotube Fragments
paper_content:
Arc-synthesized single-walled carbon nanotubes have been purified through preparative electrophoresis in agarose gel and glass bead matrixes. Two major impurities were isolated: fluorescent carbon and short tubular carbon. Analysis of these two classes of impurities was done. The methods described may be readily extended to the separation of other water-soluble nanoparticles. The separated fluorescent carbon and short tubule carbon species promise to be interesting nanomaterials in their own right.
---
paper_title: Semiconductor and carbon-based fluorescent nanodots: the need for consistency
paper_content:
Fluorescent nanodots have become increasingly prevalent in a wide variety of applications with special interest in analytical and biomedical fields. The present overview focuses on three main aspects: (i) a systematic description and reasonable classification of the most relevant types of fluorescent nanodots according to their nature, quantum confinement and crystalline structure is provided, starting with a clear distinction between semiconductor and carbon-based dots (graphene quantum dots, carbon quantum dots and carbon nanodots). A new set of abbreviations and definitions for them to avoid contradictions found in literature is also proposed; (ii) a rational classification allows the establishment of clear-cut differences and similarities among them. From a basic point of view, the origins of the photoluminescence of the different nanodots are also established, which is a relevant contribution of this overview. Additionally, the most outstanding similarities and differences in a great variety of criteria (i.e. year of discovery, synthesis, the physico-chemical characteristics like structure, nature, shape, size, quantum confinement, toxicity and solubility, the optical characteristics including the quantum yield and lifetime, limitations, applications as well as the evolution of publications) are thoroughly outlined; and (iii) finally, the promising future of fluorescent nanodots in both analytical and biomedical fields is discussed using selected examples of relevant applications.
---
paper_title: In Vivo NIR Fluorescence Imaging, Biodistribution, and Toxicology of Photoluminescent Carbon Dots Produced from Carbon Nanotubes and Graphite
paper_content:
Oxidization of carbon nanotubes by a mixed acid has been utilized as a standard method to functionalize carbon nanomaterials for years. Here, the products obtained from carbon nanotubes and graphite after a mixed-acid treatment are carefully studied. Nearly identical carbon dot (Cdot) products with diameters of 3–4 nm are produced using this approach from a variety of carbon starting materials, including single-walled carbon nanotubes, multiwalled carbon nanotubes, and graphite. These Cdots exhibit strong yellow fluorescence under UV irradiation and shifted emission peaks as the excitation wavelength is changed. In vivo fluorescence imaging with Cdots is then demonstrated in mouse experiments, by using varied excitation wavelengths including some in the near-infrared (NIR) region. Furthermore, in vivo biodistribution and toxicology of those Cdots in mice over different periods of time are studied; no noticeable signs of toxicity for Cdots to the treated animals are discovered. This work provides a facile method to synthesize Cdots as safe non-heavy-metal-containing fluorescent nanoprobes, promising for applications in biomedical imaging.
---
paper_title: Blue photoluminescent carbon nanodots from limeade.
paper_content:
Carbon-based photoluminescent nanodots are currently among the most promising materials for various applications. The remaining challenges are the carbon sources and the simple synthetic processes that enhance the quantum yield, photostability and biocompatibility of the nanodots. In this work, the synthesis of blue photoluminescent carbon nanodots from limeade via a single-step hydrothermal carbonization process is presented. The lime carbon nanodot (L-CnD), whose quantum yield exceeds 50% for the 490 nm emission in gram-scale amounts, has the structure of a graphene core functionalized with oxygen functional groups. The micron-sized flakes of the as-prepared L-CnD powder exhibit multicolor emission depending on the excitation wavelength. The L-CnDs are demonstrated for rapid ferric-ion (Fe3+) detection in water, compared to Fe2+, Cu2+, Co2+, Zn2+, Mn2+ and Ni2+ ions. The photoluminescence quenching of the L-CnD solution under UV light is used to distinguish Fe3+ ions from the other ions by the naked eye at concentrations as low as 100 μM. Additionally, L-CnDs provide exceptional photostability and biocompatibility for imaging yeast cell morphology. Changes in the morphology of living yeast cells, i.e. cell-shape variation and budding, can be observed over periods from minutes to more than an hour without loss of photoluminescence intensity.
---
paper_title: Simple one-step synthesis of highly luminescent carbon dots from orange juice: application as excellent bio-imaging agents
paper_content:
Highly photoluminescent carbon dots with a PL quantum yield of 26% have been prepared in one step by hydrothermal treatment of orange juice. Due to high photostability and low toxicity these carbon dots are demonstrated as excellent probes in cellular imaging.
---
paper_title: Tuning photoluminescence and surface properties of carbon nanodots for chemical sensing
paper_content:
Obtaining tunable photoluminescence (PL) with improved emission properties is crucial for successfully implementing fluorescent carbon nanodots (fCDs) in all practical applications such as multicolour imaging and multiplexed detection by a single excitation wavelength. In this study, we report a facile hydrothermal approach to adjust the PL peaks of fCDs from blue, green to orange by controlling the surface passivation reaction during the synthesis. This is achieved by tuning the passivating reagents in a step-by-step manner. The as-prepared fCDs with narrow size distribution show improved PL properties with different emission wavelengths. Detailed characterization of fCDs using elemental analysis, Fourier transform infrared spectroscopy, and X-ray photoelectron spectroscopy suggested that the surface chemical composition results in this tunable PL emission. Surface passivation significantly alters the surface status, resulting in fCDs with either stronger surface oxidation or N element doping that ultimately determine their PL properties. Further experiments suggested that the as-prepared orange luminescent fCDs (O-fCDs) were sensitive and specific nanosensing platforms towards Fe3+ determination in a complex biological environment, emphasizing their potential practical applications in clinical and biological fields.
---
paper_title: Carbon Dots with Continuously Tunable Full-Color Emission and Their Application in Ratiometric pH Sensing
paper_content:
Two types of carbon dots (C dots) exhibiting respective excitation-independent blue emission and excitation-dependent full-color emissions have been synthesized via a mild one-pot process from chloroform and diethylamine. This new bottom-up synthetic strategy leads to highly stable crystalline C dots with tunable surface functionalities in high reproducibility. By detailed characterization and comparison of the two types of C dots, it is proved concretely that the surface functional groups, such as C═O and C═N, can efficiently introduce new energy levels for electron transitions and result in the continuously adjustable full-color emissions. A simplified energy level and electron transition diagram has been proposed to help understand how surface functional groups affect the emission properties. By taking advantage of the unique excitation-dependent full-color emissions, various new applications can be anticipated. Here, as an example, a ratiometric pH sensor using two emission wavelengths of the C dots a...
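The ratiometric readout described above works because the ratio of intensities at two emission wavelengths cancels fluctuations in excitation power and probe concentration. A minimal sketch, assuming an approximately linear ratio–pH calibration with invented numbers:

```python
import numpy as np

# Hypothetical ratiometric calibration: intensity ratio R = I(lambda1)/I(lambda2)
# measured at known pH values. The numbers are illustrative only.
pH_cal = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
ratio  = np.array([0.42, 0.58, 0.75, 0.93, 1.10, 1.27])

# In a well-behaved range the ratio is close to linear in pH, so a first-order fit suffices.
slope, intercept = np.polyfit(pH_cal, ratio, 1)

def estimate_pH(i_lambda1: float, i_lambda2: float) -> float:
    """Read out pH from the two-wavelength intensity ratio via the linear calibration."""
    return (i_lambda1 / i_lambda2 - intercept) / slope

print(f"sample with I1 = 820, I2 = 930 -> pH ≈ {estimate_pH(820, 930):.2f}")
```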
---
paper_title: Graphene Quantum Dots from Polycyclic Aromatic Hydrocarbon for Bioimaging and Sensing of Fe3+ and Hydrogen Peroxide
paper_content:
An easy approach for large-scale and low-cost synthesis of photoluminescent (PL) graphene quantum dots (GQDs) based on the carbonization of commercially available polycyclic aromatic hydrocarbon (PAH) precursors with strong acid, followed by hydrothermal reduction with hydrazine hydrate, is reported. Transmission electron microscopy (TEM) and atomic force microscopy (AFM) characterizations indicate that the size and height of GQDs are in the range of 5–10 nm and 0.5–2.5 nm, respectively. PAHs with more benzene rings generally form GQDs of relatively larger size. The GQDs show high water solubility, tunable photoluminescence, low cytotoxicity, and good optical stability, which makes them promising fluorescent probes for cellular imaging. In addition, the fluorescence of GQDs shows a sensitive and selective quenching effect to Fe3+ with a detection limit of 5 × 10−9 M. By combination with the Fe2+/Fe3+ redox couple, the PL GQDs are able to detect oxidants, using H2O2 as an example. This study opens up new opportunities to make full use of GQDs because of their facile availability, cost-effective productivity, and robust functionality.
---
paper_title: Low-cost synthesis of carbon nanodots from natural products used as a fluorescent probe for the detection of ferrum(III) ions in lake water
paper_content:
A simple, low cost, and green method was developed for the synthesis of water-soluble and well-dispersed fluorescent carbon nanodots (CDs) via a one-step hydrothermal treatment of potatoes. The as-prepared CDs exhibit a strong blue fluorescence, with a quantum yield of up to 15%. We further explored the use of these CDs as a novel sensing probe for the label-free, sensitive and selective detection of Fe3+. This is based on strong fluorescence quenching due to the complex formed between the CDs and Fe3+. The detection limit was as low as 0.025 μmol L−1, and different concentrations corresponded to different sensitivities. The linear ranges were 1.0–5.0 μmol L−1 and 5.0–50.0 μmol L−1 at lower and higher concentration ranges respectively, and the recoveries of the spiked water samples were 93.7–101.5%. Therefore, the as-prepared CDs could meet the requirements for the monitoring of Fe3+ in environmental samples.
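Detection limits like the 0.025 μmol/L quoted above are conventionally stated as 3σ_blank/slope of the quenching calibration. The sketch below runs that arithmetic on an invented Stern–Volmer-style data set, not the paper's measurements:

```python
import numpy as np

# Hypothetical quenching calibration for Fe3+: F0/F versus quencher concentration (umol/L).
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # umol/L
f0_f = np.array([1.09, 1.21, 1.30, 1.42, 1.51])   # measured F0/F (illustrative)

slope, intercept = np.polyfit(conc, f0_f - 1.0, 1)  # quenching response per umol/L

# Limit of detection from blank noise: LOD = 3*sigma_blank / slope,
# with sigma_blank the standard deviation of the blank response (hypothetical value).
sigma_blank = 0.0009
lod = 3.0 * sigma_blank / slope
print(f"calibration slope ≈ {slope:.3f} per umol/L, LOD ≈ {lod * 1000:.1f} nmol/L")
```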
---
paper_title: Graphene Quantum Dots Derived from Carbon Fibers
paper_content:
Graphene quantum dots (GQDs), which are edge-bound nanometer-size graphene pieces, have fascinating optical and electronic properties. These have been synthesized either by nanolithography or from starting materials such as graphene oxide (GO) by the chemical breakdown of their extended planar structure, both of which are multistep tedious processes. Here, we report that during the acid treatment and chemical exfoliation of traditional pitch-based carbon fibers, that are both cheap and commercially available, the stacked graphitic submicrometer domains of the fibers are easily broken down, leading to the creation of GQDs with different size distribution in scalable amounts. The as-produced GQDs, in the size range of 1–4 nm, show two-dimensional morphology, most of which present zigzag edge structure, and are 1–3 atomic layers thick. The photoluminescence of the GQDs can be tailored through varying the size of the GQDs by changing process parameters. Due to the luminescence stability, nanosecond lifetime, ...
---
paper_title: The photoluminescence mechanism in carbon dots (graphene quantum dots, carbon nanodots, and polymer dots): current state and future perspective
paper_content:
At present, the actual mechanism of the photoluminescence (PL) of fluorescent carbon dots (CDs) is still an open debate among researchers. Because of the variety of CDs, it is highly important to summarize the PL mechanism for these kinds of carbon materials; doing so can guide the development of effective synthesis routes and novel applications. This review will focus on the PL mechanism of CDs. Three types of fluorescent CDs were involved: graphene quantum dots (GQDs), carbon nanodots (CNDs), and polymer dots (PDs). Four reasonable PL mechanisms have been confirmed: the quantum confinement effect or conjugated π-domains, which are determined by the carbon core; the surface state, which is determined by hybridization of the carbon backbone and the connected chemical groups; the molecule state, which is determined solely by the fluorescent molecules connected on the surface or interior of the CDs; and the crosslink-enhanced emission (CEE) effect. To give a thorough summary, the category and synthesis routes, as well as the chemical/physical properties for the CDs, are briefly introduced in advance.
---
paper_title: A graphene quantum dot@Fe3O4@SiO2 based nanoprobe for drug delivery sensing and dual-modal fluorescence and MRI imaging in cancer cells.
paper_content:
A novel graphene quantum dot (GQD)@Fe3O4@SiO2 based nanoprobe was reported for targeted drug delivery, sensing, dual-modal imaging and therapy. Carboxyl-terminated GQD (C-GQD) was firstly conjugated with Fe3O4@SiO2 and then functionalized with cancer targeting molecule folic acid (FA). DOX drug molecules were then loaded on GQD surface of Fe3O4@SiO2@GQD-FA nanoprobe via pi-pi stacking, which resulted in Fe3O4@SiO2@GQD-FA/DOX conjugates based on a FRET mechanism with GQD as donor molecules and DOX as acceptor molecules. Meanwhile, we successfully performed in vitro MRI and fluorescence imaging of living Hela cells and monitored intracellular drug release process using this Fe3O4@SiO2@GQD-FA/DOX nanoprobe. Cell viability study demonstrated the low cytotoxicity of Fe3O4@SiO2@GQD-FA nanocarrier and the enhanced therapeutic efficacy of Fe3O4@SiO2@GQD-FA/DOX nanoprobe for cancer cells. This luminomagnetic nanoprobe will be a potential platform for cancer accurate diagnosis and therapy.
---
paper_title: Real-time fluorescence assay of alkaline phosphatase in living cells using boron-doped graphene quantum dots as fluorophores.
paper_content:
This work reports a convenient and real-time assay of alkaline phosphatase (ALP) in living cells based on a fluorescence quench-recovery process at a physiological pH using the boron-doped graphene quantum dots (BGQDs) as fluorophore. The fluorescence of BGQDs is found to be effectively quenched by Ce3+ ions because of the coordination of Ce3+ ions with the carboxyl group of BGQDs. Upon addition of adenosine triphosphate (ATP) into the system, the quenched fluorescence can be recovered by the ALP-positive expressed cells (such as MCF-7 cells) due to the removal of Ce3+ ions from BGQDs surface by phosphate ions, which are generated from ATP under catalytic hydrolysis of ALP that expressed in cells. The extent of fluorescence signal recovery depends on the level of ALP in cells, which establishes the basis of ALP assay in living cells. This approach can also be used for specific discrimination of the ALP expression levels in different type of cells and thus sensitive detection of those ALP-positive expressed cells (for example MCF-7 cells) at a very low abundance (10±5 cells mL-1). The advantages of this approach are that it has high sensitivity because of the significant suppression of the background due to the Ce3+ ion quenching the fluorescence of BGQDs, and has the ability of avoiding false signals arising from the nonspecific adsorption of non-target proteins because it operates via a fluorescence quench-recovery process. In addition, it can be extended to other enzyme systems, such as ATP-related kinases.
---
paper_title: Hydrothermal Route for Cutting Graphene Sheets into Blue-Luminescent Graphene Quantum Dots
paper_content:
Graphene-based materials are promising building blocks for future nanodevices owing to their superior electronic, thermal, and mechanical properties as well as their chemical stability. However, currently available graphene-based materials produced by typical physical and chemical routes, including micromechanical cleavage, reduction of exfoliated graphene oxide (GO), and solvothermal synthesis, are generally micrometer-sized graphene sheets (GSs), which limits their direct application in nanodevices. In this context, it has become urgent to develop effective routes for cutting large GSs into nanometer-sized pieces with a well-confined shape, such as graphene nanoribbons (GNRs) and graphene quantum dots (GQDs). Theoretical and experimental studies have shown that narrow GNRs (width less than ca. 10 nm) exhibit substantial quantum confinement and edge effects that render GNRs semiconducting. By comparison, GQDs possess strong quantum confinement and edge effects when their sizes are down to 100 nm. If their sizes are reduced to ca. 10 nm, comparable with the widths of semiconducting GNRs, the two effects will become more pronounced and, hence, induce new physical properties. Up to now, nearly all experimental work on GNRs and GQDs has focused on their electron transportation properties. Little work has been done on the optical properties that are directly associated with the quantum confinement and/or edge effects. Most GNR- and GQD-based electronic devices have been fabricated by lithography techniques, which can realize widths and diameters down to ca. 20 nm. This physical approach, however, is limited by the need for expensive equipment and especially by difficulties in obtaining smooth edges. Alternative chemical routes can overcome these drawbacks. Moreover, surface functionalization can be realized easily. Li et al. first reported a chemical route to functionalized and ultrasmooth GNRs with widths ranging from 50 nm to sub-10 nm. Very recently, Kosynkin et al. reported a simple solution-based oxidative process for producing GNRs by lengthwise cutting and unraveling of multiwalled carbon nanotube (CNT) side walls. Yet, no chemical routes have been reported so far for preparing functionalized GQDs with sub-10 nm sizes. Here, we report on a novel and simple hydrothermal approach for the cutting of GSs into surface-functionalized GQDs (ca. 9.6-nm average diameter). The functionalized GQDs were found to exhibit bright blue photoluminescence (PL), which has never been observed in GSs and GNRs owing to their large lateral sizes. The blue luminescence and new UV–vis absorption bands are directly induced by the large edge effect shown in the ultrafine GQDs. The starting material was micrometer-sized rippled GSs obtained by thermal reduction of GO sheets. Figure 1a shows a typical transmission electron microscopy (TEM) image of the pristine GSs. Their (002) interlayer spacing is 3.64 Å (Fig. 1c), larger than that of bulk graphite (3.34 Å). Before the hydrothermal treatment, the GSs were oxidized in concentrated H2SO4 and HNO3. After the oxidization treatment the GSs became slightly smaller (50 nm–2 μm) and the (002) spacing slightly increased to 3.85 Å (Fig. 1c). During the oxidation, oxygen-containing functional groups, including C=O/COOH, OH, and C–O–C, were introduced at the edge and on the basal plane, as shown in the Fourier transform infrared (FTIR) spectrum (Fig. 1d). The presence of these groups makes the GSs soluble in water.
A series of more marked changes took place after the hydrothermal treatment of the oxidized GSs at 200 °C. First, the (002) spacing was reduced to 3.43 Å (Fig. 1c), very close to that of bulk graphite, indicating that deoxidization occurs during the hydrothermal process. The deoxidization is further confirmed by the changes in the FTIR and C 1s X-ray photoelectron spectroscopy (XPS) spectra. After the hydrothermal treatment, the strongest vibrational absorption band of C=O/COOH at 1720 cm−1 became very weak and the vibration band of epoxy groups at 1052 cm−1 disappeared (Fig. 1d). In the XPS C 1s spectra of the oxidized and hydrothermally reduced GSs (Fig. 2a), the signal at 289 eV assigned to carboxyl groups became weak after the hydrothermal treatment, whereas the sp2 carbon peak at 284.4 eV was almost unchanged. Figure 2b shows the Raman spectrum of the reduced GSs. A G band at 1590 cm−1 and a D band at 1325 cm−1 were observed with a large intensity ratio ID/IG of 1.26. Second, the size of the GSs decreased dramatically and ultrafine GQDs were isolated by a dialysis process. Figure 3 shows typical TEM and atomic force microscopy (AFM) images of the GQDs. Their diameters are mainly distributed in the range of 5–13 nm (9.6 nm average diameter). Their topographic heights are mostly between 1 and 2 nm, similar to those observed in functionalized GNRs with 1–3 layers. More than 85% of the GQDs consist of 1–3 layers.
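The (002) spacings discussed in this abstract (3.64, 3.85 and 3.43 Å) are obtained from the XRD reflection angle through Bragg's law, d = λ/(2 sin θ). A minimal sketch, assuming a Cu Kα source (the usual choice, though not stated here), with illustrative 2θ values chosen to give spacings of that magnitude:

```python
import math

CU_KALPHA_NM = 0.15406   # Cu K-alpha wavelength in nm (assumed X-ray source)

def d_spacing_angstrom(two_theta_deg: float, wavelength_nm: float = CU_KALPHA_NM) -> float:
    """Bragg's law for a first-order reflection: d = lambda / (2*sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm * 10.0 / (2.0 * math.sin(theta))

# Illustrative 2-theta positions of the (002) reflection (degrees); not taken from the paper.
for two_theta in (24.4, 23.1, 26.0):
    print(f"2theta = {two_theta:5.1f} deg -> d(002) ≈ {d_spacing_angstrom(two_theta):.2f} Å")
```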
---
paper_title: Beyond a Carrier: Graphene Quantum Dots as a Probe for Programmatically Monitoring Anti-Cancer Drug Delivery, Release, and Response
paper_content:
On the basis of the unique physicochemical properties of graphene quantum dots (GQDs), we developed a novel type of theranostic agent by loading the anticancer drug doxorubicin (DOX) onto the GQD's surface and conjugating Cy5.5 (Cy) dye to the GQD through a cathepsin D-responsive (P) peptide. Such agents demonstrated superior therapeutic performance both in vitro and in vivo because of improved tissue penetration and cellular uptake. More importantly, they are capable of functioning as probes for programmed tracking of the delivery and release of the anticancer drug, as well as drug-induced cancer cell apoptosis, through the characteristic fluorescence of the GQD, DOX, and Cy, respectively.
---
paper_title: Natural carbon-based dots from humic substances
paper_content:
For the first time, abundant natural carbon-based dots were found and studied in humic substances (HS). Four soluble HS, including three humic acids (HA) from different sources and one fulvic acid (FA), were systematically studied. The results indicate that all four HS contain large quantities of carbon-based dots. These carbon-based dots are mainly small-sized graphene oxide nano-sheets or oxygen-containing functional group-modified graphene nano-sheets with heights less than 1 nm and lateral sizes less than 100 nm. The carbon-based nanomaterials not only contain abundant sp2 clusters but also a large quantity of surface states, exhibiting unique optical and electric properties, such as excitation-dependent fluorescence, surface-state-originated electrochemiluminescence, and strong electron paramagnetic resonance. The optical and electric properties of these natural carbon-based dots have no obvious relationship to their morphologies, but are affected greatly by their surface states. Carbon-based dots in the three HA have relatively high densities of surface states whereas the FA has the lowest density of surface states, resulting in their different fluorescence properties. The finding of carbon-based dots in HS provides new insight into HS, and the unique optical properties of these natural carbon-based dots may give HS potential applications in areas such as bio-imaging, bio-medicine, sensing and optoelectronics.
---
paper_title: In Vivo NIR Fluorescence Imaging, Biodistribution, and Toxicology of Photoluminescent Carbon Dots Produced from Carbon Nanotubes and Graphite
paper_content:
Oxidization of carbon nanotubes by a mixed acid has been utilized as a standard method to functionalize carbon nanomaterials for years. Here, the products obtained from carbon nanotubes and graphite after a mixed-acid treatment are carefully studied. Nearly identical carbon dot (Cdot) products with diameters of 3–4 nm are produced using this approach from a variety of carbon starting materials, including single-walled carbon nanotubes, multiwalled carbon nanotubes, and graphite. These Cdots exhibit strong yellow fluorescence under UV irradiation and shifted emission peaks as the excitation wavelength is changed. In vivo fluorescence imaging with Cdots is then demonstrated in mouse experiments, by using varied excitation wavelengths including some in the near-infrared (NIR) region. Furthermore, in vivo biodistribution and toxicology of those Cdots in mice over different periods of time are studied; no noticeable signs of toxicity for Cdots to the treated animals are discovered. This work provides a facile method to synthesize Cdots as safe non-heavy-metal-containing fluorescent nanoprobes, promising for applications in biomedical imaging.
---
paper_title: Blue photoluminescent carbon nanodots from limeade.
paper_content:
Carbon-based photoluminescent nanodots are currently among the most promising materials for various applications. The remaining challenges are the carbon sources and the simple synthetic processes that enhance the quantum yield, photostability and biocompatibility of the nanodots. In this work, the synthesis of blue photoluminescent carbon nanodots from limeade via a single-step hydrothermal carbonization process is presented. The lime carbon nanodot (L-CnD), whose quantum yield exceeds 50% for the 490 nm emission in gram-scale amounts, has the structure of a graphene core functionalized with oxygen functional groups. The micron-sized flakes of the as-prepared L-CnD powder exhibit multicolor emission depending on the excitation wavelength. The L-CnDs are demonstrated for rapid ferric-ion (Fe3+) detection in water, compared to Fe2+, Cu2+, Co2+, Zn2+, Mn2+ and Ni2+ ions. The photoluminescence quenching of the L-CnD solution under UV light is used to distinguish Fe3+ ions from the other ions by the naked eye at concentrations as low as 100 μM. Additionally, L-CnDs provide exceptional photostability and biocompatibility for imaging yeast cell morphology. Changes in the morphology of living yeast cells, i.e. cell-shape variation and budding, can be observed over periods from minutes to more than an hour without loss of photoluminescence intensity.
---
paper_title: Graphene quantum dots for cancer targeted drug delivery.
paper_content:
A biocompatible and cell-traceable drug delivery system based on graphene quantum dots (GQD), for the targeted delivery of the DNA-intercalating drug doxorubicin (DOX) to cancer cells, is reported here. Highly dispersible and water-soluble GQD, synthesized by acidic oxidation and exfoliation of multi-walled carbon nanotubes (MWCNT), were covalently linked to the tumor-targeting module biotin (BTN), which efficiently recognizes biotin receptors over-expressed on cancer cells, and were loaded with DOX. Biological tests performed on A549 cells showed a very low toxicity of the synthesized carrier (GQD and GQD-BTN). In GQD-BTN-DOX-treated cancer cells, the cytotoxicity was strongly dependent on cell uptake, which was greater and delayed after treatment with the GQD-BTN-DOX system with respect to what was observed for cells treated with the same system lacking the targeting module BTN (GQD-DOX) or with the free drug alone. A delayed nuclear internalization of the drug is reported, due to drug detachment from the nanosystem triggered by the acidic environment of cancer cells.
---
paper_title: Graphene Quantum Dots Derived from Carbon Fibers
paper_content:
Graphene quantum dots (GQDs), which are edge-bound nanometer-size graphene pieces, have fascinating optical and electronic properties. These have been synthesized either by nanolithography or from starting materials such as graphene oxide (GO) by the chemical breakdown of their extended planar structure, both of which are multistep tedious processes. Here, we report that during the acid treatment and chemical exfoliation of traditional pitch-based carbon fibers, that are both cheap and commercially available, the stacked graphitic submicrometer domains of the fibers are easily broken down, leading to the creation of GQDs with different size distribution in scalable amounts. The as-produced GQDs, in the size range of 1–4 nm, show two-dimensional morphology, most of which present zigzag edge structure, and are 1–3 atomic layers thick. The photoluminescence of the GQDs can be tailored through varying the size of the GQDs by changing process parameters. Due to the luminescence stability, nanosecond lifetime, ...
---
paper_title: A fluorescence turn-on biosensor based on graphene quantum dots (GQDs) and molybdenum disulfide (MoS2) nanosheets for epithelial cell adhesion molecule (EpCAM) detection
paper_content:
This paper presents a “turn-on” fluorescence biosensor based on graphene quantum dots (GQDs) and molybdenum disulfide (MoS2) nanosheets for rapid and sensitive detection of epithelial cell adhesion molecule (EpCAM). PEGylated GQDs were used as donor molecules, which could not only largely increase the emission intensity but also prevent non-specific adsorption of the PEGylated GQDs on the MoS2 surface. The sensing platform was realized by adsorption of the PEGylated GQD-labelled EpCAM aptamer onto the MoS2 surface via van der Waals forces. The fluorescence signal of the GQDs was then quenched by the MoS2 nanosheets via a fluorescence resonance energy transfer (FRET) mechanism. In the presence of EpCAM protein, the stronger specific affinity interaction between the aptamer and EpCAM protein could detach the GQD-labelled EpCAM aptamer from the MoS2 nanosheets, leading to the restoration of fluorescence intensity. By monitoring the change of the fluorescence signal, the target EpCAM protein could be detected sensitively and selectively with a linear detection range from 3 nM to 54 nM and a limit of detection (LOD) around 450 pM. In addition, this nanobiosensor has been successfully used for the detection of EpCAM-expressing breast cancer MCF-7 cells.
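The quench-and-release scheme above relies on the strong distance dependence of resonance energy transfer. As a qualitative aside, the sketch below evaluates the textbook point-dipole FRET efficiency E = 1/(1 + (r/R0)^6) for a hypothetical Förster radius; transfer to an extended 2D acceptor such as MoS2 actually follows a different power law, so this is only meant to illustrate the on/off contrast between adsorbed and released states.

```python
def fret_efficiency(r_nm: float, r0_nm: float) -> float:
    """Point-dipole FRET efficiency: E = 1 / (1 + (r/R0)^6).
    Only a qualitative picture here; energy transfer to a 2D sheet obeys a different power law."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

R0 = 5.0  # hypothetical Foerster radius in nm (illustrative, not a value from the cited work)
for r in (1.0, 2.5, 5.0, 7.5, 10.0):
    print(f"donor-acceptor distance {r:4.1f} nm -> transfer efficiency {fret_efficiency(r, R0):.3f}")
```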
---
paper_title: When biomolecules meet graphene: from molecular level interactions to material design and applications
paper_content:
Graphene-based materials have attracted increasing attention due to their atomically-thick two-dimensional structures, high conductivity, excellent mechanical properties, and large specific surface areas. The combination of biomolecules with graphene-based materials offers a promising method to fabricate novel graphene–biomolecule hybrid nanomaterials with unique functions in biology, medicine, nanotechnology, and materials science. In this review, we focus on a summarization of the recent studies in functionalizing graphene-based materials using different biomolecules, such as DNA, peptides, proteins, enzymes, carbohydrates, and viruses. The different interactions between graphene and biomolecules at the molecular level are demonstrated and discussed in detail. In addition, the potential applications of the created graphene–biomolecule nanohybrids in drug delivery, cancer treatment, tissue engineering, biosensors, bioimaging, energy materials, and other nanotechnological applications are presented. This review will be helpful to know the modification of graphene with biomolecules, understand the interactions between graphene and biomolecules at the molecular level, and design functional graphene-based nanomaterials with unique properties for various applications.
---
paper_title: Recent Progress in Nanomaterial-Based Electrochemical Biosensors for Cancer Biomarkers: A Review
paper_content:
This article reviews recent progress in the development of nanomaterial-based electrochemical biosensors for cancer biomarkers. Because of their high electrical conductivity, high affinity to biomolecules, and high surface area-to-weight ratios, nanomaterials, including metal nanoparticles, carbon nanotubes, and graphene, have been used for fabricating electrochemical biosensors. Electrodes are often coated with nanomaterials to increase the effective surface area of the electrodes and immobilize a large number of biomolecules such as enzymes and antibodies. Alternatively, nanomaterials are used as signaling labels for increasing the output signals of cancer biomarker sensors, in which nanomaterials are conjugated with secondary antibodies and redox compounds. According to this strategy, a variety of biosensors have been developed for detecting cancer biomarkers. Recent studies show that using nanomaterials is highly advantageous in preparing high-performance biosensors for detecting lower levels of cancer biomarkers. This review focuses mainly on the protocols for using nanomaterials to construct cancer biomarker sensors and the performance characteristics of the sensors. Recent trends in the development of cancer biomarker sensors are discussed according to the nanomaterials used.
---
paper_title: The photoluminescence mechanism in carbon dots (graphene quantum dots, carbon nanodots, and polymer dots): current state and future perspective
paper_content:
At present, the actual mechanism of the photoluminescence (PL) of fluorescent carbon dots (CDs) is still an open debate among researchers. Because of the variety of CDs, it is highly important to summarize the PL mechanism for these kinds of carbon materials; doing so can guide the development of effective synthesis routes and novel applications. This review will focus on the PL mechanism of CDs. Three types of fluorescent CDs were involved: graphene quantum dots (GQDs), carbon nanodots (CNDs), and polymer dots (PDs). Four reasonable PL mechanisms have been confirmed: the quantum confinement effect or conjugated π-domains, which are determined by the carbon core; the surface state, which is determined by hybridization of the carbon backbone and the connected chemical groups; the molecule state, which is determined solely by the fluorescent molecules connected on the surface or interior of the CDs; and the crosslink-enhanced emission (CEE) effect. To give a thorough summary, the category and synthesis routes, as well as the chemical/physical properties for the CDs, are briefly introduced in advance.
---
paper_title: A graphene quantum dot@Fe3O4@SiO2 based nanoprobe for drug delivery sensing and dual-modal fluorescence and MRI imaging in cancer cells.
paper_content:
A novel graphene quantum dot (GQD)@Fe3O4@SiO2 based nanoprobe was reported for targeted drug delivery, sensing, dual-modal imaging and therapy. Carboxyl-terminated GQD (C-GQD) was first conjugated with Fe3O4@SiO2 and then functionalized with the cancer-targeting molecule folic acid (FA). DOX drug molecules were then loaded onto the GQD surface of the Fe3O4@SiO2@GQD-FA nanoprobe via π–π stacking, which resulted in Fe3O4@SiO2@GQD-FA/DOX conjugates based on a FRET mechanism with GQD as the donor and DOX as the acceptor. Meanwhile, we successfully performed in vitro MRI and fluorescence imaging of living HeLa cells and monitored the intracellular drug release process using this Fe3O4@SiO2@GQD-FA/DOX nanoprobe. A cell viability study demonstrated the low cytotoxicity of the Fe3O4@SiO2@GQD-FA nanocarrier and the enhanced therapeutic efficacy of the Fe3O4@SiO2@GQD-FA/DOX nanoprobe for cancer cells. This luminomagnetic nanoprobe will be a potential platform for accurate cancer diagnosis and therapy.
---
paper_title: Market Analysis of Biosensors for Food Safety
paper_content:
This paper is presented as an overview of the pathogen detection industry. The review includes pathogen detection markets and their prospects for the future. Potential markets include the medical, military, food, and environmental industries. Those industries combined have a market size of $563 million for pathogen detecting biosensors and are expected to grow at a compounded annual growth rate (CAGR) of 4.5%. The food market is further segmented into different food product industries. The overall food pathogen testing market is expected to grow to $192 million and 34 million tests by 2005. The trend in pathogen testing emphasizes the need to commercialize biosensors for the food safety industry as legislation creates new standards for microbial monitoring. With quicker detection time and reusable features, biosensors will be important to those interested in real time diagnostics of disease causing pathogens. As the world becomes more concerned with a safe food and water supply, the demand for rapid detecting biosensors only increases.
---
paper_title: Real-time fluorescence assay of alkaline phosphatase in living cells using boron-doped graphene quantum dots as fluorophores.
paper_content:
This work reports a convenient and real-time assay of alkaline phosphatase (ALP) in living cells based on a fluorescence quench-recovery process at a physiological pH using the boron-doped graphene quantum dots (BGQDs) as fluorophore. The fluorescence of BGQDs is found to be effectively quenched by Ce3+ ions because of the coordination of Ce3+ ions with the carboxyl group of BGQDs. Upon addition of adenosine triphosphate (ATP) into the system, the quenched fluorescence can be recovered by the ALP-positive expressed cells (such as MCF-7 cells) due to the removal of Ce3+ ions from BGQDs surface by phosphate ions, which are generated from ATP under catalytic hydrolysis of ALP that expressed in cells. The extent of fluorescence signal recovery depends on the level of ALP in cells, which establishes the basis of ALP assay in living cells. This approach can also be used for specific discrimination of the ALP expression levels in different type of cells and thus sensitive detection of those ALP-positive expressed cells (for example MCF-7 cells) at a very low abundance (10±5 cells mL-1). The advantages of this approach are that it has high sensitivity because of the significant suppression of the background due to the Ce3+ ion quenching the fluorescence of BGQDs, and has the ability of avoiding false signals arising from the nonspecific adsorption of non-target proteins because it operates via a fluorescence quench-recovery process. In addition, it can be extended to other enzyme systems, such as ATP-related kinases.
---
paper_title: Lighting up left-handed Z-DNA: photoluminescent carbon dots induce DNA B to Z transition and perform DNA logic operations
paper_content:
Left-handed Z-DNA has been identified as a transient structure occurred during transcription. DNA B-Z transition has attracted much attention because of not only Z-DNA biological importance but also their relation to disease and DNA nanotechnology. Recently, photoluminescent carbon dots, especially highly luminescent nitrogen-doped carbon dots, have attracted much attention on their applications to bioimaging and gene/drug delivery because of carbon dots with low toxicity, highly stable photoluminescence and controllable surface function. However, it is still unknown whether carbon dots can influence DNA conformation or structural transition, such as B-Z transition. Herein, based on our previous series work on DNA interactions with carbon nanotubes, we report the first example that photoluminescent carbon dots can induce right-handed B-DNA to left-handed Z-DNA under physiological salt conditions with sequence and conformation selectivity. Further studies indicate that carbon dots would bind to DNA major groove with GC preference. Inspired by carbon dots lighting up Z-DNA and DNA nanotechnology, several types of DNA logic gates have been designed and constructed based on fluorescence resonance energy transfer between photoluminescent carbon dots and DNA intercalators.
---
paper_title: Fluorescent "on-off-on" switching sensor based on CdTe quantum dots coupled with multiwalled carbon nanotubes@graphene oxide nanoribbons for simultaneous monitoring of dual foreign DNAs in transgenic soybean.
paper_content:
With increasing concern over potential health and environmental risks, it is essential to develop reliable methods for transgenic soybean detection. Herein, a simple, sensitive and selective assay was constructed based on homogeneous fluorescence resonance energy transfer (FRET) between CdTe quantum dots (QDs) and multiwalled carbon nanotubes@graphene oxide nanoribbons (MWCNTs@GONRs) to form a fluorescent "on-off-on" switch for simultaneous monitoring of the dual target DNAs of the promoter cauliflower mosaic virus 35s (P35s) and the terminator nopaline synthase (TNOS) from transgenic soybean. The capture DNAs were immobilized with the corresponding QDs to obtain strong fluorescent signals (turning on). The strong π-π stacking interaction between single-stranded DNA (ssDNA) probes and MWCNTs@GONRs led to minimal background fluorescence due to the FRET process (turning off). The targets P35s and TNOS were recognized by the dual fluorescent probes to form double-stranded DNA (dsDNA) through specific hybridization between the target DNAs and the ssDNA probes. The dsDNA was then released from the surface of the MWCNTs@GONRs, which led the dual fluorescent probes to regenerate strong fluorescent emissions (turning on). Therefore, this homogeneous assay can detect P35s and TNOS simultaneously by monitoring the relevant fluorescent emissions. Moreover, this assay can distinguish complementary and mismatched nucleic acid sequences with high sensitivity. The constructed approach has the potential to be a tool for daily detection of genetically modified organisms with the merits of feasibility and reliability.
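For context on the FRET mechanism invoked here (and in several later entries), the transfer efficiency between a donor (the QD) and an acceptor separated by a distance r follows the textbook Förster relation; this is general background rather than a result of this paper:

```latex
E_{\mathrm{FRET}} \;=\; \frac{1}{1 + \left(r / R_{0}\right)^{6}} ,
```

where R_0 is the Förster radius (typically a few nanometres). Adsorption of the labelled probe on the nanoribbon surface brings r well below R_0 (efficient quenching, "off"), while hybridization-driven release increases r and restores the emission ("on").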
---
paper_title: Beyond a Carrier: Graphene Quantum Dots as a Probe for Programmatically Monitoring Anti-Cancer Drug Delivery, Release, and Response
paper_content:
On the basis of the unique physicochemical properties of graphene quantum dots (GQDs), we developed a novel type of theranostic agent by loading the anticancer drug doxorubicin (DOX) onto the GQD surface and conjugating Cy5.5 (Cy) dye to the GQD through a cathepsin D-responsive (P) peptide. This type of agent demonstrated superior therapeutic performance both in vitro and in vivo because of improved tissue penetration and cellular uptake. More importantly, such agents are capable of functioning as probes for programmed tracking of the delivery and release of the anticancer drug as well as drug-induced cancer cell apoptosis through the characteristic fluorescence of the GQD, DOX, and Cy, respectively.
---
paper_title: Graphene Oxide-Upconversion Nanoparticle Based Optical Sensors for Targeted Detection of mRNA Biomarkers Present in Alzheimer’s Disease and Prostate Cancer
paper_content:
The development of new sensors for the accurate detection of biomarkers in biological fluids is of utmost importance for the early diagnosis of diseases. Next to advanced laboratory techniques, there is a need for relatively simple methods which can significantly broaden the availability of diagnostic capability. Here, we demonstrate the successful application of a sensor platform based on graphene oxide and upconversion nanoparticles (NPs) for the specific detection of mRNA-related oligonucleotide markers in complex biological fluids. The combination of near-infrared light upconversion with low-background photon counting readout enables reliable detection of low quantities of small oligonucleotide sequences in the femtomolar range. We demonstrate the successful detection of analytes relevant to mRNAs present in Alzheimer’s disease as well as prostate cancer in human blood serum. The high performance and relative simplicity of the upconversion NP-graphene sensor platform enables new opportunities in early...
---
paper_title: In Vivo NIR Fluorescence Imaging, Biodistribution, and Toxicology of Photoluminescent Carbon Dots Produced from Carbon Nanotubes and Graphite
paper_content:
Oxidization of carbon nanotubes by a mixed acid has been utilized as a standard method to functionalize carbon nanomaterials for years. Here, the products obtained from carbon nanotubes and graphite after a mixed-acid treatment are carefully studied. Nearly identical carbon dot (Cdot) products with diameters of 3–4 nm are produced using this approach from a variety of carbon starting materials, including single-walled carbon nanotubes, multiwalled carbon nanotubes, and graphite. These Cdots exhibit strong yellow fluorescence under UV irradiation and shifted emission peaks as the excitation wavelength is changed. In vivo fluorescence imaging with Cdots is then demonstrated in mouse experiments, by using varied excitation wavelengths including some in the near-infrared (NIR) region. Furthermore, in vivo biodistribution and toxicology of those Cdots in mice over different periods of time are studied; no noticeable signs of toxicity for Cdots to the treated animals are discovered. This work provides a facile method to synthesize Cdots as safe non-heavy-metal-containing fluorescent nanoprobes, promising for applications in biomedical imaging.
---
paper_title: Blue photoluminescent carbon nanodots from limeade.
paper_content:
Carbon-based photoluminescent nanodots are currently among the promising materials for various applications. The remaining challenges are the carbon sources and the simple synthetic processes that enhance the quantum yield, photostability and biocompatibility of the nanodots. In this work, the synthesis of blue photoluminescent carbon nanodots from limeade via a single-step hydrothermal carbonization process is presented. The lime carbon nanodot (L-CnD), whose quantum yield exceeds 50% for the 490 nm emission in gram-scale amounts, has the structure of a graphene core functionalized with oxygen functional groups. Micron-sized flakes of the as-prepared L-CnD powder exhibit multicolor emission depending on the excitation wavelength. The L-CnDs are demonstrated for rapid ferric-ion (Fe3+) detection in water, as compared with Fe2+, Cu2+, Co2+, Zn2+, Mn2+ and Ni2+ ions. The photoluminescence quenching of the L-CnD solution under UV light is used to distinguish Fe3+ ions from the others by the naked eye at concentrations as low as 100 μM. Additionally, L-CnDs provide exceptional photostability and biocompatibility for imaging yeast cell morphology. Changes in the morphology of living yeast cells, i.e. cell shape variation and budding, can be observed from a minute up to more than an hour without loss of photoluminescence intensity.
---
paper_title: Optical Fibre Sensors Using Graphene-Based Materials: A Review
paper_content:
Graphene and its derivatives have become the most explored materials since Novoselov and Geim (Nobel Prize winners for Physics in 2010) achieved its isolation in 2004. The exceptional properties of graphene have attracted the attention of the scientific community from different research fields, generating high impact not only in scientific journals, but also in general-interest newspapers. Optical fibre sensing is one of the many fields that can benefit from the use of these new materials, combining the amazing morphological, chemical, optical and electrical features of graphene with the advantages that optical fibre offers over other sensing strategies. In this document, a review of the current state of the art for optical fibre sensors based on graphene materials is presented.
---
paper_title: Biological applications of carbon dots
paper_content:
Carbon dots (C-dots), since their first discovery in 2004 by Scrivens et al. during purification of single-walled carbon nanotubes, have gradually become a rising star in the fluorescent nanoparticles family, due to their strong fluorescence, resistance to photobleaching, low toxicity, along with their abundant and inexpensive nature. In the past decade, the procedures for preparing C-dots have become increasingly versatile and facile, and their applications are being extended to a growing number of fields. In this review, we focused on introducing the biological applications of C-dots, hoping to expedite their translation to the clinic.
---
paper_title: Graphene quantum dots for cancer targeted drug delivery.
paper_content:
A biocompatible and cell-traceable graphene quantum dot (GQD)-based drug delivery system for the targeted delivery of the DNA-intercalating drug doxorubicin (DOX) to cancer cells is reported here. Highly dispersible and water-soluble GQDs, synthesized by acidic oxidation and exfoliation of multi-walled carbon nanotubes (MWCNT), were covalently linked to the tumor-targeting module biotin (BTN), which efficiently recognizes biotin receptors over-expressed on cancer cells, and loaded with DOX. Biological tests performed on A549 cells showed very low toxicity of the synthesized carriers (GQD and GQD-BTN). In GQD-BTN-DOX treated cancer cells, the cytotoxicity was strongly dependent on cell uptake, which was greater and delayed after treatment with the GQD-BTN-DOX system with respect to what was observed for cells treated with the same system lacking the targeting module BTN (GQD-DOX) or with the free drug alone. A delayed nuclear internalization of the drug is reported, due to drug detachment from the nanosystem triggered by the acidic environment of cancer cells.
---
paper_title: Development of a Potentiometric Chemical Sensor for the Rapid Detection of Carbofuran Based on Air Stable Lipid Films with Incorporated Calix[4]arene Phosphoryl Receptor Using Graphene Electrodes
paper_content:
The present article describes a miniaturized potentiometric carbofuran chemical sensor on graphene nanosheets with incorporated lipid films. The graphene electrode was used for the development of a very selective and sensitive chemical sensor for the detection of carbofuran by immobilizing an artificial selective receptor on stabilized lipid films. The artificial receptor was synthesized by transformation of the hydroxyl groups of resorcin[4]arene receptor into phosphoryl groups. This chemical sensor responded for the wide range of carbofuran concentrations with fast response time of ca. 20 s. The presented potentiometric carbofuran chemical sensor is easy to construct and exhibits good reproducibility, reusability, selectivity, rapid response times, long shelf life and high sensitivity of ca. 59 mV/decade over the carbofuran logarithmic concentration range from 10−6 to 10−3 M.
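For readers outside electrochemistry: the sensitivity of ca. 59 mV per decade quoted here (and in several related potentiometric entries below) corresponds to the ideal Nernstian slope for a singly charged species at room temperature. This is standard background, not a result of the paper:

```latex
E \;=\; E^{0} \,\pm\, \frac{2.303\,R\,T}{z\,F}\,\log_{10} a ,
\qquad
\left.\frac{2.303\,R\,T}{z\,F}\right|_{T = 298\ \mathrm{K}} \;\approx\; \frac{59.2\ \mathrm{mV}}{z}\ \text{per decade},
```

so a measured slope near 59 mV/decade indicates close-to-ideal Nernstian behaviour for z = 1.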
---
paper_title: Real-time reliable determination of binding kinetics of DNA hybridization using a multi-channel graphene biosensor
paper_content:
Reliable determination of the binding kinetics and affinity of DNA hybridization and single-base mismatches plays an essential role in systems biology and personalized and precision medicine. The standard tools are optical-based sensors that are difficult to operate at low cost and to miniaturize for high-throughput measurement. Biosensors based on nanowire field-effect transistors have been developed, but reliable and cost-effective fabrication remains a challenge. Here, we demonstrate that a graphene single-crystal domain patterned into multiple channels can measure time- and concentration-dependent DNA hybridization kinetics and affinity reliably and sensitively, with a detection limit of 10 pM for DNA. It can distinguish single-base mutations quantitatively in real time. An analytical model is developed to estimate probe density, efficiency of hybridization and the maximum sensor response. The results suggest a promising future for cost-effective, high-throughput screening of drug candidates, genetic variations and disease biomarkers by using an integrated, miniaturized, all-electrical multiplexed, graphene-based DNA array.
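The analytical model itself is not reproduced in this abstract; for orientation, time- and concentration-dependent hybridization responses of this kind are commonly fitted with a first-order Langmuir-type model of the following form (an assumption about the generic approach, not necessarily the authors' exact expressions):

```latex
\frac{d\theta}{dt} \;=\; k_{\mathrm{on}}\,C\,(1-\theta) \;-\; k_{\mathrm{off}}\,\theta
\;\;\Longrightarrow\;\;
\theta(t) \;=\; \frac{C}{C + K_d}\Big(1 - e^{-(k_{\mathrm{on}}C + k_{\mathrm{off}})\,t}\Big),
\qquad K_d = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}},
```

with the sensor signal taken as proportional to the fraction θ of hybridized probes, so that the equilibrium response versus target concentration C yields the affinity K_d.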
---
paper_title: Development of an Electrochemical Biosensor for the Rapid Detection of Saxitoxin Based on Air Stable Lipid Films with Incorporated Anti-STX Using Graphene Electrodes
paper_content:
A miniaturized potentiometric saxitoxin sensor on graphene nanosheets with incorporated lipid films and Anti-STX, the natural saxitoxin receptor, immobilized on the stabilized lipid films is described in the present paper. An adequate selectivity for detection over a wide range of toxin concentrations, fast response time of ca. 5–20 min, and detection limit of 1 nM have been achieved. The proposed sensor is easy to construct and exhibits good reproducibility, reusability, selectivity, long shelf life and high sensitivity of ca. 60 mV/decade of toxin concentration. The method was implemented and evaluated in lake water and shellfish samples. This novel ultrathin film technology is currently adapted to the rapid detection of other toxins that could be used in bioterrorism.
---
paper_title: Fully integrated graphene electronic biosensor for label-free detection of lead (II) ion based on G-quadruplex structure-switching.
paper_content:
This work presents a fully integrated graphene field-effect transistor (GFET) biosensor for the label-free detection of lead ions (Pb2+) in aqueous-media, which first implements the G-quadruplex structure-switching biosensing principle in graphene nanoelectronics. We experimentally illustrate the biomolecular interplay that G-rich DNA single-strands with one-end confined on graphene surface can specifically interact with Pb2+ ions and switch into G-quadruplex structures. Since the structure-switching of electrically charged DNA strands can disrupt the charge distribution in the vicinity of graphene surface, the carrier equilibrium in graphene sheet might be altered, and manifested by the conductivity variation of GFET. The experimental data and theoretical analysis show that our devices are capable of the label-free and specific quantification of Pb2+ with a detection limit down to 163.7ng/L. These results first verify the signaling principle competency of G-quadruplex structure-switching in graphene electronic biosensors. Combining with the advantages of the compact device structure and convenient electrical signal, a label-free GFET biosensor for Pb2+ monitoring is enabled with promising application potential.
---
paper_title: A Selective Immunosensor for D-dimer Based on Antibody Immobilized on a Graphene Electrode with Incorporated Lipid Films
paper_content:
The present article describes a miniaturized potentiometric D-dimer biosensor on graphene nanosheets with incorporated lipid films. The graphene electrode was used for the development of a very selective and sensitive immunosensor for the detection of D-dimer by immobilizing the mouse anti human D-dimer antibody on stabilized lipid films. The immunosensor responded for the wide range of D-dimer concentrations with fast response time of ca. 15 s. The presented potentiometric D-dimer biosensor is easy to construct and exhibits good reproducibility, reusability, selectivity, rapid response times, long shelf life and high sensitivity of ca. 59 mV/decade over the D-dimer logarithmic concentration range from 10−6 μg/L to 10−3 μg/L.
---
paper_title: Structural Characterization of Graphene Nanosheets for Miniaturization of Potentiometric Urea Lipid Film Based Biosensors
paper_content:
The present article describes a miniaturized potentiometric urea lipid film based biosensor on graphene nanosheets. Structural characterization of graphene nanosheets for miniaturization of potentiometric urea lipid film based biosensors has been carried out through atomic force microscopy (AFM) and transmission electron microscopy (TEM) measurements. UV-Vis and Fourier transform IR (FTIR) spectroscopy have been utilized to study the pre- and postconjugated surfaces of graphene nanosheets. The presented potentiometric urea biosensor exhibits good reproducibility, reusability, selectivity, rapid response times (~4 s), long shelf life and high sensitivity of ca. 70 mV/decade over the urea logarithmic concentration range from 1 x 10^-6 M to 1 x 10^-3 M.
---
paper_title: Label-free graphene biosensor targeting cancer molecules based on non-covalent modification.
paper_content:
Abstract A label-free immunosensor based on antibody-modified graphene field effect transistor (GFET) was presented. Antibodies targeting carcinoembryonic antigen (Anti-CEA) were immobilized to the graphene surface via non-covalent modification. The bifunctional molecule, 1-pyrenebutanoic acid succinimidyl ester, which is composed of a pyrene and a reactive succinimide ester group, interacts with graphene non-covalently via π-stacking. The succinimide ester group reacts with the amine group to initiate antibody surface immobilization, which was confirmed by X-ray Photoelectron Spectroscopy, Atomic Force Microscopy and Electrochemical Impedance Spectroscopy. The resulting anti-CEA modified GFET sufficiently monitored the reaction between CEA protein and anti-CEA in real-time with high specificity, which revealed selective electrical detection of CEA with a limit of detection (LOD) of less than 100 pg/ml. The dissociation constant between CEA protein and anti-CEA was estimated to be 6.35×10 −11 M, indicating the high affinity and sensitivity of anti-CEA-GFET. Taken together, the graphene biosensors provide an effective tool for clinical application and point-of-care medical diagnostics.
---
paper_title: Development of an Electrochemical Biosensor for the Rapid Detection of Cholera Toxin Using Air Stable Lipid Films with incorporated Ganglioside GM1
paper_content:
The present work describes the investigation of the electrochemical interactions of cholera toxin with stabilized lipid films, supported on a methacrylate polymer on a glass fiber filter with incorporated ganglioside GM1, for the development of a biosensor for cholera toxin. The analyte was injected into the flowing stream of a carrier electrolyte solution, the flow of the solution was stopped for 5 min, and an ion current transient was obtained, the magnitude of which is correlated with the toxin concentration, with a detection limit of 0.06 µM. The method was applied to blood serum samples. Further work is directed to the rapid detection of other toxins used in bioterrorism using this novel ultrathin film technology.
---
paper_title: A label-free and portable graphene FET aptasensor for children blood lead detection
paper_content:
Lead is a cumulative toxicant which can induce severe health issues, especially in children due to their immature nervous systems. Since large-scale monitoring of children's blood lead remains challenging with traditional methods, it is highly desirable to search for alternative techniques or novel sensing materials. Here we report a label-free and portable aptasensor based on a graphene field effect transistor (FET) for effective detection of lead in children's blood. With standard solutions of different Pb2+ concentrations, we obtained a dose-response curve and a detection limit below 37.5 ng/L, which is three orders of magnitude lower than the safe blood lead level (100 μg/L). The devices also showed excellent selectivity over other metal cations such as Na+, K+, Mg2+, and Ca2+, suggesting the capability of working in a complex sample matrix. We further successfully demonstrated the detection of Pb2+ ions in real blood samples from children by using our aptasensors, and explored their potential applications for quantification. Our results underscore the promise of such graphene FET aptasensors for future applications in the fast detection of heavy metal ions for health monitoring and disease diagnostics.
---
paper_title: Biosensors Based on Lipid Modified Graphene Microelectrodes
paper_content:
Graphene is one of the new materials which has shown a large impact on the electronic industry due to its versatile properties, such as high specific surface area, high electrical conductivity, chemical stability, and large spectrum of electrochemical properties. The graphene material-based electronic industry has provided flexible devices which are inexpensive, simple and low power-consuming sensor tools, therefore opening an outstanding new door in the field of portable electronic devices. All these attractive advantages of graphene give a platform for the development of a new generation of devices in both food and environmental applications. Lipid-based sensors have proven to be a good route to the construction of novel devices with improved characteristics, such as fast response times, increased sensitivity and selectivity, and the possibility of miniaturization for the construction of portable biosensors. Therefore, the incorporation of a lipid substrate on graphene electrodes has provided a route to the construction of a highly sensitive and selective class of biosensors with fast response times and portability of field applications for the rapid detection of toxicants in the environment and food products.
---
paper_title: Electrochemical paper-based peptide nucleic acid biosensor for detecting human papillomavirus.
paper_content:
A novel paper-based electrochemical biosensor was developed using an anthraquinone-labeled pyrrolidinyl peptide nucleic acid (acpcPNA) probe (AQ-PNA) and a graphene-polyaniline (G-PANI) modified electrode to detect human papillomavirus (HPV). An inkjet printing technique was employed to prepare the paper-based G-PANI-modified working electrode. The AQ-PNA probe bearing a negatively charged amino acid at the N-terminus was immobilized onto the electrode surface through electrostatic attraction. Electrochemical impedance spectroscopy (EIS) was used to verify the AQ-PNA immobilization. The paper-based electrochemical DNA biosensor was used to detect a synthetic 14-base oligonucleotide target with a sequence corresponding to human papillomavirus (HPV) type 16 DNA by measuring the electrochemical signal response of the AQ label using square-wave voltammetry before and after hybridization. It was determined that the current signal significantly decreased after the addition of target DNA. This phenomenon is explained by the rigidity of PNA-DNA duplexes, which obstructs the accessibility of electron transfer from the AQ label to the electrode surface. Under optimal conditions, the detection limit of HPV type 16 DNA was found to be 2.3 nM with a linear range of 10-200 nM. The performance of this biosensor on real DNA samples was tested with the detection of PCR-amplified DNA samples from the SiHa cell line. The new method employs an inexpensive and disposable device, which is easily incinerated after use and is promising for the screening and monitoring of the amount of HPV-DNA type 16 to identify the primary stages of cervical cancer.
---
paper_title: A fluorescence turn-on biosensor based on graphene quantum dots (GQDs) and molybdenum disulfide (MoS 2 ) nanosheets for epithelial cell adhesion molecule (EpCAM) detection
paper_content:
This paper presents a “turn-on” fluorescence biosensor based on graphene quantum dots (GQDs) and molybdenum disulfide (MoS2) nanosheets for rapid and sensitive detection of epithelial cell adhesion molecule (EpCAM). PEGylated GQDs were used as donor molecules, which could not only largely increase emission intensity but also prevent non-specific adsorption of PEGylated GQD on the MoS2 surface. The sensing platform was realized by adsorption of the PEGylated GQD labelled EpCAM aptamer onto the MoS2 surface via van der Waals force. The fluorescence signal of the GQD was then quenched by the MoS2 nanosheets via a fluorescence resonance energy transfer (FRET) mechanism. In the presence of EpCAM protein, the stronger specific affinity interaction between the aptamer and EpCAM protein could detach the GQD labelled EpCAM aptamer from the MoS2 nanosheets, leading to the restoration of fluorescence intensity. By monitoring the change of fluorescence signal, the target EpCAM protein could be detected sensitively and selectively with a linear detection range from 3 nM to 54 nM and a limit of detection (LOD) around 450 pM. In addition, this nanobiosensor has been successfully used for the detection of EpCAM-expressing MCF-7 breast cancer cells.
---
paper_title: Reduced graphene oxide-based optical sensor for detecting specific protein
paper_content:
A sensitive and selective optical biosensor based on reduced graphene oxide (RGO), which uses the polarization-dependent absorption effect of graphene under total internal reflection, is reported for determination of rabbit IgG. RGO sheets with a thickness of about 8.1 nm are fabricated by high temperature reduction and used as a sensing film because of their strong polarization-dependent absorption. This RGO-based optical sensor shows a satisfactory response to rabbit IgG with a minimum concentration of 0.0625 μg/ml. For comparison, a commercial SPR apparatus was used to investigate rabbit IgG with a minimum concentration of 0.3125 μg/ml. Moreover, with antigen–antibody binding, this sensor can also achieve label-free and real-time detection. Taking into account these factors, the RGO-based optical sensor may be a potential candidate for biosensing.
---
paper_title: Enzyme-polyelectrolyte multilayer assemblies on reduced graphene oxide field-effect transistors for biosensing applications.
paper_content:
We present the construction of layer-by-layer (LbL) assemblies of polyethylenimine and urease onto reduced-graphene-oxide based field-effect transistors (rGO FETs) for the detection of urea. This versatile biosensor platform simultaneously exploits the pH dependency of liquid-gated graphene-based transistors and the change in the local pH produced by the catalyzed hydrolysis of urea. The use of an interdigitated microchannel resulted in transistors displaying low noise, high pH sensitivity (20.3 µA/pH) and transconductance values up to 800 µS. The modification of rGO FETs with a weak polyelectrolyte improved the pH response because of its transducing properties by electrostatic gating effects. In the presence of urea, the urease-modified rGO FETs showed a shift in the Dirac point due to the change in the local pH close to the graphene surface. Markedly, these devices operated at very low voltages (less than 500 mV) and were able to monitor urea in the range of 1–1000 µM, with a limit of detection (LOD) down to 1 µM, fast response and good long-term stability. The urea response of the transistors was enhanced by increasing the number of bilayers due to the increased enzyme surface coverage on the channel. Moreover, quantification of the heavy metal Cu2+ (with an LOD down to 10 nM) was performed in aqueous solution by taking advantage of the specific inhibition of urease.
---
paper_title: Highly Sensitive and Selective Sensor Chips with Graphene-Oxide Linking Layer
paper_content:
The development of sensing interfaces can significantly improve the performance of biological sensors. Graphene oxide provides a remarkable immobilization platform for surface plasmon resonance (SPR) biosensors due to its excellent optical and biochemical properties. Here, we describe a novel sensor chip for SPR biosensors based on graphene-oxide linking layers. The biosensing assay model was based on a graphene oxide film containing streptavidin. The proposed sensor chip has three times higher sensitivity than the carboxymethylated dextran surface of a commercial sensor chip. Moreover, the demonstrated sensor chips are bioselective with more than 25 times reduced binding for nonspecific interaction and can be used multiple times. We consider the results presented here of importance for any future applications of highly sensitive SPR biosensing.
---
paper_title: Detection of heart failure-related biomarker in whole blood with graphene field effect transistor biosensor.
paper_content:
Since brain natriuretic peptide (BNP) has become an internationally recognized biomarker in the diagnosis and prognosis of heart failure (HF), it is highly desirable to search for a novel sensing tool for detecting a patient's BNP level at an early stage. Here we report a platinum nanoparticles (PtNPs)-decorated reduced graphene oxide (rGO) field effect transistor (FET) biosensor coupled with a microfilter system for label-free and highly sensitive detection of BNP in whole blood. The PtNPs-decorated rGO FET sensor was obtained by drop-casting rGO onto the pre-fabricated FET chip and subsequently assembling PtNPs on the graphene surface. After anti-BNP was bound to the PtNPs surface, BNP was successfully detected by the anti-BNP immobilized FET biosensor. It was found that the developed FET biosensor was able to achieve a low detection limit of 100 fM. Moreover, BNP was successfully detected in a human whole blood sample treated by a custom-made microfilter, suggesting the sensor's capability of working in a complex sample matrix. The developed FET biosensor provides a new sensing platform for protein detection, showing its potential applications in clinical samples.
---
paper_title: Carboxyl-functionalized graphene oxide composites as SPR biosensors with enhanced sensitivity for immunoaffinity detection
paper_content:
This work demonstrates the excellent potential of carboxyl-functionalized graphene oxide (GO–COOH) composites to form biocompatible surfaces on sensing films for use in surface plasmon resonance (SPR)-based immunoaffinity biosensors. Carboxyl-functionalization of graphene carbon can modulate its visible spectrum, and can therefore be used to improve and control the plasmonic coupling mechanism. The binding properties of the molecules between a sensing film and a protein were elucidated at various flow rates of those molecules. The bio-specific binding interaction among the molecules was investigated by performing an antigen and antibody affinity immunoassay. The results thus obtained revealed that the overall affinity binding value, K_A, of the Au/GO–COOH chip can be significantly enhanced by up to ∼5.15 times that of the Au/GO chip. In terms of SPR angle shifts, the affinity immunoassay responses at a BSA concentration of 1 μg/ml for the Au/GO-COOH chip, the Au/GO chip and a traditional SPR chip are 35.5 m°, 9.128 m° and 8.816 m°, respectively. The enhancement of the antigen-antibody interaction of the Au/GO–COOH chip causes this chip to become four times as sensitive to the SPR angle shift and to have the lowest antibody detection limit of 0.01 pg/ml. These results indicate the potential of the chip in detecting specific proteins, and the development of real-time in vivo blood analysis and diagnosis based on cancer tumor markers.
---
paper_title: Highly selective organic transistor biosensor with inkjet printed graphene oxide support system.
paper_content:
Most of the reported field effect transistors (FETs) fall short of a general method to uniquely specify and detect a target analyte. For this reason, we propose a pentacene-based FET with a graphene oxide support system (GOSS), composed of functionalized graphene oxide (GO) ink. The GOSS with a specific moiety group to capture the biomaterial of interest was inkjet printed on the pentacene FET. It provided modular receptor sites on the surface of pentacene, without alteration of the device. To evaluate the performance of a GOSS-pentacene FET biosensor, we detected the artificial DNA and circulating tumor cells as a proof-of-concept. The mobility of the FET dramatically changed upon capturing the target biomolecule on the GOSS. The FET exhibited high selectivity with 0.1 pmoles of the target DNA and a few cancer cells per detection volume. This study suggests a valuable sensor for medical diagnosis that can be mass produced effortlessly at low-cost.
---
| Title: Graphene-Based Materials for Biosensors: A Review
Section 1: Introduction
Description 1: This section introduces biosensors, their components, and the importance of selectivity, sensitivity, and reproducibility in biosensing. It also discusses the emergence and properties of graphene as a significant material for biosensor applications.
Section 2: Graphene-Based Materials: Fabrication Process and Properties
Description 2: This section covers the synthesis methods and properties of various graphene-based materials, including pristine graphene, graphene oxide (GO), reduced graphene oxide (RGO), and graphene quantum dots (GQDs).
Section 3: Pristine Graphene
Description 3: This section describes the properties of monocrystalline graphene films and methods like mechanical exfoliation used to prepare them. It also discusses the electronic properties and potential applications of pristine graphene.
Section 4: Chemical Vapor Deposition
Description 4: This section discusses the chemical vapor deposition (CVD) method for industrial-scale graphene fabrication, its applications, and challenges associated with this method.
Section 5: Liquid Exfoliation
Description 5: This section explains the liquid exfoliation process for fabricating graphene using ultrasonic energy, its advantages, and applications in various fields.
Section 6: Epitaxial Growth on SiC
Description 6: This section describes the method of growing graphene on silicon carbide (SiC) substrates, discussing the process, characteristics, and potential applications.
Section 7: Functionalized Graphene
Description 7: This section covers the chemical synthesis of graphene oxide (GO) and reduced graphene oxide (RGO), including various methods and the structural models of graphite oxide.
Section 8: Reduction of Graphene Oxide
Description 8: This section details the procedures and various reducing agents used for producing reduced graphene oxide (RGO) from graphene oxide (GO).
Section 9: Graphene-Based Quantum Dots
Description 9: This section discusses the properties, synthesis methods, and applications of graphene quantum dots (GQDs) in biosensing and bioimaging.
Section 10: Engineering of Biosensor Devices Using Graphene-Based Materials and Current Progress
Description 10: This section describes the integration of graphene-based materials into biosensor devices, highlights recent progress, and discusses the biomedical applications of graphene-based biosensors.
Section 11: Engineering of Pristine Graphene–Biomolecule-Based Biosensors
Description 11: This section explains the use of pristine graphene in enhancing biosensor sensitivity and specific applications in detecting various biomolecules.
Section 12: Engineering of Biomolecules-Functionalized Graphene Based Biosensors
Description 12: This section focuses on the applications of functionalized graphene materials like GO, RGO, and GQDs in biosensors due to their large surface area and interaction capabilities with biomolecules.
Section 13: Conclusions
Description 13: This section provides a summary of the discussion on graphene-based materials, their properties, synthesis methods, and the latest developments in the field of biosensors. It also outlines future directions and potential applications in biosensing and healthcare. |
A REVIEW OF TECHNIQUES IN THE VERIFIED SOLUTION OF CONSTRAINED GLOBAL OPTIMIZATION PROBLEMS | 11 |
paper_title: Computation of rational interval functions
paper_content:
This paper presents a general algorithm for computing interval expressions. The strategy is characterized by a subdivision of the argument intervals of the expression and a recomputation of the expression with these new intervals. The precision of the result is limited only by the actual computer.
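As a rough illustration of the subdivide-and-recompute strategy described above (a Python sketch, not the paper's algorithm, and ignoring the directed rounding that a rigorous implementation needs), splitting the argument interval and taking the hull of the sub-results tightens the naive enclosure of a simple expression:

```python
# Minimal interval helpers; intervals are (lo, hi) tuples of floats.
def imul(x, y):
    p = [a * b for a in x for b in y]
    return (min(p), max(p))

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def f_range(x):
    """Natural interval extension of f(t) = t * (1 - t)."""
    return imul(x, isub((1.0, 1.0), x))

def refined_range(x, pieces):
    """Split x into equal pieces, evaluate each piece, and return the hull of the results."""
    lo, hi = x
    w = (hi - lo) / pieces
    parts = [f_range((lo + i * w, lo + (i + 1) * w)) for i in range(pieces)]
    return (min(p[0] for p in parts), max(p[1] for p in parts))

print(f_range((0.0, 1.0)))            # (0.0, 1.0): naive enclosure, true range is [0, 0.25]
print(refined_range((0.0, 1.0), 2))   # (0.0, 0.5)
print(refined_range((0.0, 1.0), 16))  # (0.0, 0.28125): tightens toward the true range
```

A genuine implementation replaces the plain floating-point arithmetic with outward-rounded interval operations so that the computed enclosures remain rigorous on a finite-precision machine.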
---
paper_title: Global optimization using interval analysis – the multi-dimensional case
paper_content:
We show how interval analysis can be used to compute the global minimum of a twice continuously differentiable function of n variables over an n-dimensional parallelopiped with sides parallel to the coordinate axes. Our method provides infallible bounds on both the globally minimum value of the function and the point(s) at which the minimum occurs.
---
paper_title: An interval arithmetic method for global optimization
paper_content:
An interval arithmetic method is described for finding the global maxima or minima of multivariable functions. The original domain of variables is divided successively, and the lower and the upper bounds of the interval expression of the function are estimated on each subregion. By discarding subregions where the global solution can not exist, one can always find the solution with rigorous error bounds. The convergence can be made fast by Newton's method after subregions are grouped. Further, constrained optimization can be treated using a special transformation or the Lagrange-multiplier technique.
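The divide, bound and discard loop described here is the essence of interval branch-and-bound. As a hedged illustration (not this paper's code), the following deliberately simplified one-dimensional Python sketch uses a natural interval extension, the midpoint test and bisection, and ignores outward rounding and all acceleration devices; the test function is chosen arbitrarily:

```python
import heapq

# Intervals are (lo, hi) tuples; outward rounding is ignored in this sketch.
def iadd(x, y): return (x[0] + y[0], x[1] + y[1])
def isub(x, y): return (x[0] - y[1], x[1] - y[0])
def isqr(x):
    lo, hi = x
    hi2 = max(lo * lo, hi * hi)
    return (0.0, hi2) if lo <= 0.0 <= hi else (min(lo * lo, hi * hi), hi2)

def f(t):                        # point evaluation of f(t) = (t^2 - 2)^2 + t
    return (t * t - 2.0) ** 2 + t

def F(box):                      # natural interval extension of the same expression
    return iadd(isqr(isub(isqr(box), (2.0, 2.0))), box)

def minimize(box, tol=1e-6):
    best_upper = f(0.5 * (box[0] + box[1]))      # upper bound from one point evaluation
    work = [(F(box)[0], box)]                    # (lower bound, box), ordered by lower bound
    final_lower = None
    while work:
        lower, b = heapq.heappop(work)
        if lower > best_upper:                   # cannot contain the global minimum: discard
            continue
        if b[1] - b[0] < tol:                    # small enough: keep for the final enclosure
            final_lower = lower if final_lower is None else min(final_lower, lower)
            continue
        mid = 0.5 * (b[0] + b[1])
        best_upper = min(best_upper, f(mid))     # the "midpoint test" value
        for half in ((b[0], mid), (mid, b[1])):
            lb = F(half)[0]
            if lb <= best_upper:
                heapq.heappush(work, (lb, half))
    return final_lower, best_upper               # [final_lower, best_upper] encloses the minimum

print(minimize((-2.0, 2.0)))
```

Returning both bounds mirrors the "rigorous error bounds" mentioned above: up to rounding, the global minimum value lies between the smallest retained lower bound and the best sampled function value.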
---
paper_title: Use of a Real-Valued Local Minimum in Parallel Interval Global Optimization
paper_content:
We consider a parallel method for finding the global minimum (and all of the global minimizers) of a continuous non-linear function f : D → R, where D is an n-dimensional interval. The method combines one of the well known branch-and-bound interval search methods of Skelboe, Moore and Hansen with a real-valued optimization method. Initially we use a standard real-valued optimization method to find a local minimizer xp (or rather: a prediction of a local minimizer). Then the interval Newton method is applied to an interval Ip containing xp as its midpoint. Ip is chosen as large as possible under the restriction that the interval Newton method must converge when Ip is used as the starting interval. In this way the original problem has been reduced to the problem of searching the domain D \ Ip, which does not contain the local (and perhaps global) minimizer. The remaining domain is searched by the branch-and-bound interval method, starting by splitting the remaining domain into 2n intervals and hence avoiding Ip. This branch-and-bound search then either verifies that the point xp is the global minimizer, or the opposite is detected and it finds the global minimum (and the global minimizers) in the usual way. The combined method parallelizes well. On one test case the combined method is faster than the branch-and-bound method itself. However, for another test case we get the opposite result. This is explained.
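One ingredient of this strategy that is easy to illustrate in isolation is the use of an ordinary (non-verified) local optimizer to supply a good upper bound before the interval search starts. The sketch below is an assumption about how such seeding is typically wired up, using SciPy's generic minimizer rather than the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum((x**2 - 2.0)**2 + x))   # an arbitrary smooth test function

# Step 1: cheap floating-point local search from some starting point.
result = minimize(f, x0=np.zeros(2), method="BFGS")
x_p, f_upper = result.x, float(result.fun)

# Step 2: f_upper seeds the cut-off ("midpoint") test of the interval search:
# any box whose interval lower bound exceeds f_upper can be discarded immediately,
# and an interval Newton step around x_p can try to verify or exclude that region.
print("predicted local minimizer:", x_p, "upper bound for the cut-off test:", f_upper)
```

Seeding the upper bound this way does not compromise rigor, because f_upper is simply a function value; the verified enclosure still comes from the interval computations.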
---
paper_title: Global Optimization Using Interval Analysis
paper_content:
Employing a closed set-theoretic foundation for interval computations, Global Optimization Using Interval Analysis simplifies algorithm construction and increases generality of interval arithmetic. This Second Edition contains an up-to-date discussion of interval methods for solving systems of nonlinear equations and global optimization problems. It expands and improves various aspects of its forerunner and features significant new discussions, such as those on the use of consistency methods to enhance algorithm performance. Provided algorithms are guaranteed to find and bound all solutions to these problems despite bounded errors in data, in approximations, and from use of rounded arithmetic.
---
paper_title: Optimization Software Guide
paper_content:
Preface Part I. Overview of Algorithms. 1. Optimization Problems and Software 2. Unconstrained Optimization 3.Nonlinear Least Squares 4. Nonlinear Equations 5. Linear Programming 6. Quadratic Programming 7. Bound-Constrained Optimization 8. Constrained Optimization 9. Network Optimization 10. Integer Programming Chapter 11. Miscellaneous Optimization Problems Part II: Software Packages. AMPL BQPD BT BTN CNM CONOPT CONSOL-OPTCAD CPLEX C-WHIZ DFNLP DOC DOT FortLP FSQP GAMS GAUSS GENESIS GENOS GINO GRG2 HOMPACK IMSL Fortran and C Library LAMPS LANCELOT LBFGS LINDO LINGO LNOS LPsolver LSGRG2 LSNNO LSSOL M1QN2 and M1QN3 MATLAB MINOS MINPACK-1 MIPIII MODULOPT NAG C library NAG Fortran Library NETFLOW NETSOLVE NITSOL NLPE NLPQL NLPQLB NLSFIT NLSSOL NLPSPR NPSOL OB1 ODRPACK OPSYC OptiA OPTIMA Library OPTPACK OSL PC- PROG PITCON PORT 3 PROC NLP Q01SUBS QAPP QPOPT SPEAKEASY SQP TENMIN TENSOLVE TN/TNBC TNPACK UNCMIN VE08 VE10 VIG and VIMDA What's Best! Appendix: Internet Software References.
---
paper_title: Second-Order Sufficient Optimality Conditions for Local and Global Nonlinear Programming
paper_content:
This paper presents a new approach to the sufficient conditions of nonlinear programming. The main result is a sufficient condition for the global optimality of a Kuhn-Tucker point. This condition can be verified constructively, using a novel convexity test based on interval analysis, and is guaranteed to prove global optimality of strong local minimizers for sufficiently narrow bounds. Hence it is expected to be a useful tool within branch and bound algorithms for global optimization.
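The test itself is not reproduced in the abstract. As hedged background, interval convexity tests of this kind usually rest on the fact that if every symmetric matrix in the interval Hessian of a function (for constrained problems, typically the Lagrangian) over a box is positive semidefinite, then that function is convex there, which upgrades the first-order Kuhn-Tucker conditions to a global optimality certificate on the box. One standard sufficient check on an interval Hessian [H] = ([h]_ij) is Gershgorin-type diagonal dominance:

```latex
\underline{h}_{ii} \;\ge\; \sum_{j \neq i} \max\!\big(|\underline{h}_{ij}|,\,|\overline{h}_{ij}|\big)
\quad \text{for all } i
\;\;\Longrightarrow\;\;
\text{every symmetric } H \in [H] \text{ is positive semidefinite.}
```

For sufficiently narrow boxes around a strong (nondegenerate) local minimizer, such checks succeed, which is what makes the condition usable inside branch and bound.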
---
paper_title: Constrained Global Optimization: Algorithms and Applications
paper_content:
Contents: Convex sets and functions; Optimality conditions in nonlinear programming; Combinatorial optimization problems that can be formulated as nonconvex quadratic problems; Enumerative methods in nonconvex programming; Cutting plane methods; Branch and bound methods; Bilinear programming methods for nonconvex quadratic problems; Large scale problems; Global minimization of indefinite quadratic problems; Test problems for global nonconvex quadratic programming algorithms.
---
paper_title: Inclusion functions and global optimization II
paper_content:
Inclusion functions combined with special subdivision strategies are an effective means of solving the global unconstrained optimization problem. Although these techniques were determined and numerically tested about ten years ago, they are nearly unknown and scarcely used. In order to make the role of inclusion functions and subdivision strategies more widespread and transparent we will discuss a related simplified basic algorithm. It computes approximations of the global minimum and, at the same time, bounds the absolute approximation error. We will show that the algorithm works and converges under more general assumptions than it has been known hitherto, that is, only appropriate inclusion functions are expected to exist. The number of minimal points (finite or infinite) is not of importance. Lipschitz conditions or continuity are not assumed. As shown in the Appendix the required inclusion functions can be constructed and programmed without difficulty in a natural way using interval analysis.
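For reference, the defining property assumed of an inclusion function (standard in this literature) is range enclosure on every subbox, which is what makes the discarding steps of such algorithms rigorous:

```latex
\{\, f(x) \;:\; x \in [x] \,\} \;\subseteq\; F([x])
\qquad \text{for every box } [x] \subseteq [x]^{(0)} ,
```

where [x]^(0) is the original domain box. Natural interval extensions, obtained by replacing each real operation with its interval counterpart, satisfy this property, as the Appendix referred to above points out.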
---
paper_title: A Review of Preconditioners for the Interval Gauss–Seidel Method
paper_content:
Interval Newton methods in conjunction with generalized bisection can form the basis of algorithms that find all real roots within a specified box X ⊂ R^n of a system of nonlinear equations F(X) = 0 with mathematical certainty, even in finite-precision arithmetic. In such methods, the system F(X) = 0 is transformed into a linear interval system 0 = F(M) + F′(X)(X − M); if interval arithmetic is then used to bound the solutions of this system, the resulting box contains all roots of the nonlinear system that lie in X. We may use the interval Gauss–Seidel method to find these solution bounds. In order to increase the overall efficiency of the interval Newton / generalized bisection algorithm, the linear interval system is multiplied by a preconditioner matrix Y before the interval Gauss–Seidel method is applied. Here, we review results we have obtained over the past few years concerning the computation of such preconditioners. We emphasize importance and connecting relationships, and we cite references for the underlying elementary theory and other details.
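In commonly used notation, the preconditioned interval Gauss–Seidel step summarized above updates the k-th coordinate of the box X (midpoint M, interval Jacobian A = F′(X), k-th preconditioner row Y_k) roughly as follows; the exact bookkeeping varies between implementations:

```latex
\tilde{x}_k \;=\; m_k \;-\;
\frac{\big(Y_k F(M)\big) \;+\; \sum_{j \neq k} \big(Y_k A\big)_j \,\big([x]_j - m_j\big)}
     {\big(Y_k A\big)_k},
\qquad
[x]_k \;\leftarrow\; [x]_k \,\cap\, \tilde{x}_k .
```

An empty intersection proves that the box contains no root, while a denominator interval containing zero leads to extended interval division and possibly a split of [x]_k; the preconditioner Y is chosen to make these updates as informative as possible.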
---
paper_title: The Cluster Problem in Global Optimization -- The Univariate Case
paper_content:
We consider a branch and bound method for enclosing all global minimizers of a nonlinear C^2 or C^1 objective function. In particular, we consider bounds obtained with interval arithmetic, along with the “midpoint test,” but no acceleration procedures. Unless the lower bound is exact, the algorithm without acceleration procedures in general gives an undesirable cluster of intervals around each minimizer. In this article, we analyze this problem in the one-dimensional case. Theoretical results are given which show that the problem is highly related to the behavior of the objective function near the global minimizers and to the order of the corresponding interval extension.
---
paper_title: Nonlinear parameter estimation by global optimization—efficiency and reliability
paper_content:
In this paper we first show that the objective function of a least squares type nonlinear parameter estimation problem can be any non-negative real function, and therefore this class of problems corresponds to global optimization. Two non-derivative implementations of a global optimization method are presented, with nine standard test functions applied to measure their efficiency. A new nonlinear test problem is then presented for testing the reliability of global optimization algorithms. This test function has a countable infinity of local minima and only one global minimizer. The region of attraction of the global minimum is of zero measure. The results of efficiency and reliability tests are given.
---
paper_title: Algorithm 681: INTBIS, a portable interval Newton/bisection package
paper_content:
We present a portable software package for finding all real roots of a system of nonlinear equations within a region defined by bounds on the variables. Where practical, the package should find all roots with mathematical certainty. Though based on interval Newton methods, it is self-contained. It allows various control and output options and does not require programming if the equations are polynomials; it is structured for further algorithmic research. Its practicality does not depend in a simple way on the dimension of the system or on the degree of nonlinearity.
---
paper_title: Box-Splitting strategies for the interval Gauss-Seidel step in a global optimization method
paper_content:
We consider an algorithm for computing verified enclosures for all global minimizers x* and for the global minimum value f* = f(x*) of a twice continuously differentiable function f: ℝ^n → ℝ within a box [x] ∈ Iℝ^n. Our algorithm incorporates the interval Gauss-Seidel step applied to the problem of finding the zeros of the gradient of f. Here, we have to deal with the gaps produced by the extended interval division. It is possible to use different box-splitting strategies for handling these gaps, producing different numbers of subboxes. We present results concerning the impact of these strategies on the interval Gauss-Seidel step and therefore on our global optimization method. First, we give an overview of some of the techniques used in our algorithm, and we describe the modifications improving the efficiency of the interval Gauss-Seidel step by applying a special box-splitting strategy. Then, we look at special preconditioners for the Gauss-Seidel step, and we investigate the corresponding results for different splitting strategies. Test results for standard global optimization problems are discussed for different variants of our method in its portable PASCAL-XSC implementation. These results demonstrate that there are many cases in which the splitting strategy is more important for the efficiency of the algorithm than the use of preconditioners.
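The gaps mentioned above come from extended interval division; one representative case of the standard (Kahan-style) definition, quoted here as background, is

```latex
\frac{[a,b]}{[c,d]} \;=\; \Big(-\infty,\ \tfrac{a}{c}\Big] \,\cup\, \Big[\tfrac{a}{d},\ +\infty\Big)
\qquad \text{for } a > 0,\; c < 0 < d ,
```

so intersecting such a result with the current coordinate interval can leave two disjoint subintervals, and how the resulting subboxes are formed and ordered is precisely the box-splitting question studied in this paper.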
---
paper_title: The Cluster Problem in Multivariate Global Optimization
paper_content:
We consider branch and bound methods for enclosing all unconstrained global minimizers of a nonconvex nonlinear twice-continuously differentiable objective function. In particular, we consider bounds obtained with interval arithmetic, with the “midpoint test,” but no acceleration procedures. Unless the lower bound is exact, the algorithm without acceleration procedures in general gives an undesirable cluster of boxes around each minimizer. In a previous paper, we analyzed this problem for univariate objective functions. In this paper, we generalize that analysis to multi-dimensional objective functions. As in the univariate case, the results show that the problem is highly related to the behavior of the objective function near the global minimizers and to the order of the corresponding interval extension.
---
paper_title: An Interval Branch and Bound Algorithm for Bound Constrained Optimization Problems
paper_content:
In this paper, we propose modifications to a prototypical branch and bound algorithm for nonlinear optimization so that the algorithm efficiently handles constrained problems with constant bound constraints. The modifications involve treating subregions of the boundary identically to interior regions during the branch and bound process, but using reduced gradients for the interval Newton method. The modifications also involve preconditioners for the interval Gauss-Seidel method which are optimal in the sense that their application selectively gives a coordinate bound of minimum width, a coordinate bound whose left endpoint is as large as possible, or a coordinate bound whose right endpoint is as small as possible. We give experimental results on a selection of problems with different properties.
---
paper_title: Subdivision Direction Selection In Interval Methods For Global Optimization
paper_content:
The role of the interval subdivision-selection rule is investigated in branch-and-bound algorithms for global optimization. The class of rules that allows convergence for the model algorithm is characterized, and it is shown that the four rules investigated satisfy the conditions of convergence. A numerical study with a wide spectrum of test problems indicates that there are substantial differences between the rules in terms of the required CPU time, the number of function and derivative evaluations, and space complexity, and two rules can provide substantial improvements in efficiency.
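The four rules themselves are not restated in the abstract. As a hedged illustration, the snippet below contrasts two selection heuristics that recur in this literature: the widest-component rule and a maximum-smear-style rule that weights each coordinate width by the magnitude of its interval gradient component (generic formulations, not necessarily the paper's exact rules):

```python
def widest_component(box):
    """box: list of (lo, hi) tuples. Split the coordinate with the largest width."""
    return max(range(len(box)), key=lambda i: box[i][1] - box[i][0])

def max_smear(box, grad_box):
    """grad_box: interval enclosures (lo, hi) of the gradient components over box.
    Weight each coordinate width by the magnitude of its gradient enclosure."""
    def smear(i):
        g_lo, g_hi = grad_box[i]
        return max(abs(g_lo), abs(g_hi)) * (box[i][1] - box[i][0])
    return max(range(len(box)), key=smear)

# Example: a long, flat coordinate loses to a shorter, steep one under the smear rule.
box = [(-10.0, 10.0), (0.0, 1.0)]
grad = [(-0.01, 0.01), (-5.0, 5.0)]
print(widest_component(box), max_smear(box, grad))   # prints: 0 1
```

Which heuristic wins in practice depends on the problem, which is exactly the kind of trade-off the numerical study summarized above quantifies.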
---
paper_title: Parallel global optimization using interval analysis
paper_content:
Finding the global minimum for an arbitrary differentiable function over an n-dimensional rectangle is an important problem in computational science, with applications in many disciplines. Ongoing research in this area has led to many exhaustive search methods that require significant computational effort. Our current research is aimed at identifying opportunities for parallelizing the search wherever possible so that the computing time is reduced, and the global minimum is obtained reliably for a wide variety of objective functions commonly explored in the literature. Our approach is to use a branch of applied mathematics known as Interval Analysis, which is a commonly used tool in obtaining lower and upper bounds for machine computations for the purpose of automatic error bounding. This ensures the reliability of the solutions obtained. We present a depth-first parallel branch and bound algorithm with acceleration methods that provides a significant reduction in run time compared to a popular breadth-first search algorithm. Our algorithm reliably obtains global minima for numerous test functions reported in the literature, with the highest speedup achieved for highly multimodal functions. Our algorithm possesses desirable properties of reliability, robustness, scalability, and load balancing, all of which will be demonstrated.
---
paper_title: Use of a Real-Valued Local Minimum in Parallel Interval Global Optimization
paper_content:
We consider a parallel method for finding the global minimum (and all of the global minimizers) of a continuous non-linear function f : D → R, where D is an n-dimensional interval. The method combines one of the well known branch-and-bound interval search methods of Skelboe, Moore and Hansen with a real-valued optimization method. Initially we use a standard real-valued optimization method to find a local minimizer xp (or rather: a prediction of a local minimizer). Then the interval Newton method is applied to an interval Ip containing xp as its midpoint. Ip is chosen as large as possible under the restriction that the Newton interval method must converge when Ip is used as starting interval. In this way the original problem has been reduced to the problem of searching a domain D \ Ip which does not contain the local (and perhaps global) minimizer. The remaining domain is searched by the branch-and-bound interval method, starting by splitting the remaining domain into 2n intervals and hence avoiding Ip. This branch-and-bound search then either verifies that the point xp is the global minimizer, or the opposite is detected and it finds the global minimum (and the global minimizers) in the usual way. The combined method parallelizes well. On one test case the combined method is faster than the branch-and-bound method itself. However, for another test case we get the opposite result. This is explained.
---
paper_title: Global Optimization Using Interval Analysis
paper_content:
Employing a closed set-theoretic foundation for interval computations, Global Optimization Using Interval Analysis simplifies algorithm construction and increases generality of interval arithmetic. This Second Edition contains an up-to-date discussion of interval methods for solving systems of nonlinear equations and global optimization problems. It expands and improves various aspects of its forerunner and features significant new discussions, such as those on the use of consistency methods to enhance algorithm performance. Provided algorithms are guaranteed to find and bound all solutions to these problems despite bounded errors in data, in approximations, and from use of rounded arithmetic.
---
paper_title: Epsilon-inflation in verification algorithms
paper_content:
Epsilon-inflation is often used in verification numerics to find an interval vector [x]_ε such that some interval function [f] maps [x]_ε into itself. We recall algorithms which use epsilon-inflation to verify this subset property [f]([x]_ε) ⊆ [x]_ε, and we derive criteria which guarantee that already finitely many inflation steps are sufficient to prove it. These criteria require [f] to be a P-contraction. We recall results on P-contractions, and we derive rules to verify them. We also introduce local P-contractions, and we use this concept to guarantee again the subset property above when using epsilon-inflation.
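A minimal sketch of the epsilon-inflation idea may be useful. It assumes a simple 1-D contraction g(x) = 0.5x + 1 with an exact interval extension and an inflation of the form eps·width + eta; these constants follow common practice rather than the criteria derived in the paper, the P-contraction analysis is not reproduced, and all function names are ours.

```python
# Sketch of epsilon-inflation for a 1-D contraction g(x) = 0.5*x + 1 (fixed point 2).
# The interval is inflated until [g]([x]_eps) is contained in [x]_eps, which verifies
# a fixed point inside [x]_eps.  The inflation constants follow common practice and
# are not the criteria derived in the paper; rounding control is omitted.

def g_interval(x):
    lo, hi = x
    return (0.5 * lo + 1.0, 0.5 * hi + 1.0)      # exact interval extension of g

def inflate(x, eps=0.1, eta=1e-2):
    lo, hi = x
    w = hi - lo
    return (lo - eps * w - eta, hi + eps * w + eta)

def subset(a, b):
    return b[0] <= a[0] and a[1] <= b[1]

def verify_fixed_point(x_approx, max_steps=20):
    x = (x_approx, x_approx)                     # degenerate interval at the approximation
    for _ in range(max_steps):
        x_eps = inflate(x)
        y = g_interval(x_eps)
        if subset(y, x_eps):
            return x_eps                         # verified enclosure of a fixed point
        x = y                                    # otherwise iterate and inflate again
    return None

print(verify_fixed_point(1.9))
```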
---
paper_title: Optimal centered forms
paper_content:
A simple expression for an “optimal” center of a centered form is presented. Among all possible centers within a given interval this center yields the greatest lower bound or the lowest upper bound of a centered form, respectively. It is also shown that one-sided isotonicity holds for such centered forms.
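The effect of the center choice can be seen numerically with a small sketch. It evaluates the lower bound delivered by the mean value (centered) form f(c) + [d]([x] − c) for a few candidate centers on a toy example; the closed-form optimal center derived in the paper is deliberately not reproduced here, and the function name mv_lower_bound is our own.

```python
# Numerical sketch: the lower bound from the mean value (centered) form
#   f(c) + [d] * ([x] - c),  with [d] an enclosure of f' over [x],
# depends on the chosen center c.  Toy example f(x) = x^2 on [x] = [-1, 2] with
# [d] = [-2, 4]; the paper's closed-form optimal center is not reproduced here.

def mv_lower_bound(f, d, x, c):
    """Lower bound of f over [x] from the mean value form with center c."""
    dlo, dhi = d
    xlo, xhi = x
    # the minimum of d*(x - c) over the rectangle [d] x [x] is attained at a corner
    corners = [dlo * (xlo - c), dlo * (xhi - c), dhi * (xlo - c), dhi * (xhi - c)]
    return f(c) + min(corners)

f = lambda t: t * t
x = (-1.0, 2.0)
d = (-2.0, 4.0)                       # encloses f'(x) = 2x for x in [-1, 2]

for c in (x[0], 0.5 * (x[0] + x[1]), x[1], 0.0):
    print(f"center c = {c:+.2f}   lower bound = {mv_lower_bound(f, d, x, c):+.2f}")
# The true minimum of x^2 on [-1, 2] is 0; among these candidates c = 0 gives the
# sharpest (largest) lower bound, illustrating why the center choice matters.
```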
---
paper_title: Use of a Real-Valued Local Minimum in Parallel Interval Global Optimization
paper_content:
We consider a parallel method for finding the global minimum (and all of the global minimizers) of a continuous non-linear function f : D → R, where D is an n-dimensional interval. The method combines one of the well known branch-and-bound interval search methods of Skelboe, Moore and Hansen with a real-valued optimization method. Initially we use a standard real-valued optimization method to find a local minimizer xp (or rather: a prediction of a local minimizer). Then the interval Newton method is applied to an interval Ip containing xp as its midpoint. Ip is chosen as large as possible under the restriction that the Newton interval method must converge when Ip is used as starting interval. In this way the original problem has been reduced to the problem of searching a domain D \ Ip which does not contain the local (and perhaps global) minimizer. The remaining domain is searched by the branch-and-bound interval method, starting by splitting the remaining domain into 2n intervals and hence avoiding Ip. This branch-and-bound search then either verifies that the point xp is the global minimizer, or the opposite is detected and it finds the global minimum (and the global minimizers) in the usual way. The combined method parallelizes well. On one test case the combined method is faster than the branch-and-bound method itself. However, for another test case we get the opposite result. This is explained.
---
paper_title: Quadratic programming with one negative eigenvalue is NP-hard
paper_content:
We show that the problem of minimizing a concave quadratic function with one concave direction is NP-hard. This result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard. Sahni in 1974 [8] showed that quadratic programming with a negative definite quadratic term (n negative eigenvalues) is NP-hard, whereas Kozlov, Tarasov and Hačijan [2] showed in 1979 that the ellipsoid algorithm solves the convex quadratic problem (no negative eigenvalues) in polynomial time. This report shows that even one negative eigenvalue makes the problem NP-hard.
---
paper_title: An Interval Branch and Bound Algorithm for Bound Constrained Optimization Problems
paper_content:
In this paper, we propose modifications to a prototypical branch and bound algorithm for nonlinear optimization so that the algorithm efficiently handles constrained problems with constant bound constraints. The modifications involve treating subregions of the boundary identically to interior regions during the branch and bound process, but using reduced gradients for the interval Newton method. The modifications also involve preconditioners for the interval Gauss-Seidel method which are optimal in the sense that their application selectively gives a coordinate bound of minimum width, a coordinate bound whose left endpoint is as large as possible, or a coordinate bound whose right endpoint is as small as possible. We give experimental results on a selection of problems with different properties.
---
paper_title: Global Optimization Using Interval Analysis
paper_content:
Employing a closed set-theoretic foundation for interval computations, Global Optimization Using Interval Analysis simplifies algorithm construction and increases generality of interval arithmetic. This Second Edition contains an up-to-date discussion of interval methods for solving systems of nonlinear equations and global optimization problems. It expands and improves various aspects of its forerunner and features significant new discussions, such as those on the use of consistency methods to enhance algorithm performance. Provided algorithms are guaranteed to find and bound all solutions to these problems despite bounded errors in data, in approximations, and from use of rounded arithmetic.
---
paper_title: A Review of Preconditioners for the Interval Gauss–Seidel Method
paper_content:
Interval Newton methods in conjunction with generalized bisection can form the basis of algorithms that find all real roots within a specified box X ⊂ R^n of a system of nonlinear equations F(X) = 0 with mathematical certainty, even in finite-precision arithmetic. In such methods, the system F(X) = 0 is transformed into a linear interval system 0 = F(M) + F′(X)(X − M); if interval arithmetic is then used to bound the solutions of this system, the resulting box X contains all roots of the nonlinear system. We may use the interval Gauss–Seidel method to find these solution bounds. In order to increase the overall efficiency of the interval Newton / generalized bisection algorithm, the linear interval system is multiplied by a preconditioner matrix Y before the interval Gauss–Seidel method is applied. Here, we review results we have obtained over the past few years concerning computation of such preconditioners. We emphasize importance and connecting relationships, and we cite references for the underlying elementary theory and other details.
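For contrast with the optimal preconditioners reviewed here, the sketch below shows the widely used inverse-midpoint preconditioner Y = (mid A)^-1 applied to a small interval linear system. The LP-based width- and endpoint-optimal preconditioners discussed in the paper are not implemented, intervals are naive (lo, hi) tuples without directed rounding, and all identifiers are ours.

```python
# Sketch of the inverse-midpoint preconditioner Y = inv(mid(A)) for an interval
# linear system A*x = b; intervals are (lo, hi) tuples, directed rounding is omitted,
# and the optimal (LP-based) preconditioners reviewed in the paper are not implemented.
import numpy as np

def imul(scalar, iv):
    """Product of a point scalar and an interval."""
    a, b = scalar * iv[0], scalar * iv[1]
    return (min(a, b), max(a, b))

def iadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def precondition(A, b):
    """Return (Y*A, Y*b) with Y the inverse of the midpoint matrix of A."""
    Y = np.linalg.inv(np.array([[0.5 * (lo + hi) for lo, hi in row] for row in A]))
    n = len(A)
    YA = [[(0.0, 0.0) for _ in range(n)] for _ in range(n)]
    Yb = [(0.0, 0.0) for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = (0.0, 0.0)
            for k in range(n):
                acc = iadd(acc, imul(Y[i, k], A[k][j]))
            YA[i][j] = acc
        for k in range(n):
            Yb[i] = iadd(Yb[i], imul(Y[i, k], b[k]))
    return YA, Yb

# After preconditioning, the diagonal entries of Y*A cluster around 1, which is what
# makes the subsequent interval Gauss-Seidel sweep contract the solution enclosure.
A = [[(2.0, 3.0), (0.0, 1.0)], [(1.0, 2.0), (3.0, 4.0)]]
b = [(0.5, 1.5), (-1.0, 1.0)]
YA, Yb = precondition(A, b)
for row in YA:
    print([(round(lo, 3), round(hi, 3)) for lo, hi in row])
print([(round(lo, 3), round(hi, 3)) for lo, hi in Yb])
```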
---
paper_title: An interval algorithm for constrained global optimization
paper_content:
Abstract An interval algorithm for bounding the solutions of a constrained global optimization problem is described. The problem functions are assumed only to be continuous. It is shown how the computational cost of bounding a set which satisfies equality constraints can often be reduced if the equality constraint functions are assumed to be continuously differentiable. Numerical results are presented.
---
paper_title: On Verifying Feasibility in Equality Constrained Optimization Problems, preprint
paper_content:
Techniques for verifying feasibility of equality constraints are presented. The underlying verification procedures are similar to a proposed algorithm of Hansen, but various possibilities, as well as additional procedures for handling bound constraints, are investigated. The overall scheme differs from some algorithms in that it rigorously verifies exact (rather than approximate) feasibility. The scheme starts with an approximate feasible point, then constructs a box (i.e. a set of tolerances) about this point within which it is rigorously verified that a feasible point exists. Alternate ways of proceeding are compared, and numerical results on a set of test problems appear.
---
paper_title: Global Optimization Using Interval Analysis
paper_content:
Employing a closed set-theoretic foundation for interval computations, Global Optimization Using Interval Analysis simplifies algorithm construction and increases generality of interval arithmetic. This Second Edition contains an up-to-date discussion of interval methods for solving systems of nonlinear equations and global optimization problems. It expands and improves various aspects of its forerunner and features significant new discussions, such as those on the use of consistency methods to enhance algorithm performance. Provided algorithms are guaranteed to find and bound all solutions to these problems despite bounded errors in data, in approximations, and from use of rounded arithmetic.
---
paper_title: On Verifying Feasibility in Equality Constrained Optimization Problems, preprint
paper_content:
Techniques for verifying feasibility of equality constraints are presented. The underlying verification procedures are similar to a proposed algorithm of Hansen, but various possibilities, as well as additional procedures for handling bound constraints, are investigated. The overall scheme differs from some algorithms in that it rigorously verifies exact (rather than approximate) feasibility. The scheme starts with an approximate feasible point, then constructs a box (i.e. a set of tolerances) about this point within which it is rigorously verified that a feasible point exists. Alternate ways of proceeding are compared, and numerical results on a set of test problems appear.
---
paper_title: Quadratic programming with one negative eigenvalue is NP-hard
paper_content:
We show that the problem of minimizing a concave quadratic function with one concave direction is NP-hard. This result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard. Sahni in 1974 [8] showed that quadratic programming with a negative definite quadratic term (n negative eigenvalues) is NP-hard, whereas Kozlov, Tarasov and Hačijan [2] showed in 1979 that the ellipsoid algorithm solves the convex quadratic problem (no negative eigenvalues) in polynomial time. This report shows that even one negative eigenvalue makes the problem NP-hard.
---
paper_title: An Interval Branch and Bound Algorithm for Bound Constrained Optimization Problems
paper_content:
In this paper, we propose modifications to a prototypical branch and bound algorithm for nonlinear optimization so that the algorithm efficiently handles constrained problems with constant bound constraints. The modifications involve treating subregions of the boundary identically to interior regions during the branch and bound process, but using reduced gradients for the interval Newton method. The modifications also involve preconditioners for the interval Gauss-Seidel method which are optimal in the sense that their application selectively gives a coordinate bound of minimum width, a coordinate bound whose left endpoint is as large as possible, or a coordinate bound whose right endpoint is as small as possible. We give experimental results on a selection of problems with different properties.
---
paper_title: Global Optimization Using Interval Analysis
paper_content:
Employing a closed set-theoretic foundation for interval computations, Global Optimization Using Interval Analysis simplifies algorithm construction and increases generality of interval arithmetic. This Second Edition contains an up-to-date discussion of interval methods for solving systems of nonlinear equations and global optimization problems. It expands and improves various aspects of its forerunner and features significant new discussions, such as those on the use of consistency methods to enhance algorithm performance. Provided algorithms are guaranteed to find and bound all solutions to these problems despite bounded errors in data, in approximations, and from use of rounded arithmetic.
---
paper_title: An Interval Branch and Bound Algorithm for Bound Constrained Optimization Problems
paper_content:
In this paper, we propose modifications to a prototypical branch and bound algorithm for nonlinear optimization so that the algorithm efficiently handles constrained problems with constant bound constraints. The modifications involve treating subregions of the boundary identically to interior regions during the branch and bound process, but using reduced gradients for the interval Newton method. The modifications also involve preconditioners for the interval Gauss-Seidel method which are optimal in the sense that their application selectively gives a coordinate bound of minimum width, a coordinate bound whose left endpoint is as large as possible, or a coordinate bound whose right endpoint is as small as possible. We give experimental results on a selection of problems with different properties.
---
paper_title: Empirical Evaluation of Innovations in Interval Branch and Bound Algorithms for Nonlinear Algebraic Systems
paper_content:
Interval branch and bound algorithms for finding all roots use a combination of a computational existence/uniqueness procedure and a tessellation process (generalized bisection). Such algorithms identify, with mathematical rigor, a set of boxes that contains unique roots and a second set within which all remaining roots must lie. Though each root is contained in a box in one of the sets, the second set may have several boxes in clusters near a single root. Thus, the output is of higher quality if there are relatively more boxes in the first set. In contrast to previously implemented similar techniques, a box expansion technique in this paper, based on using an approximate root finder, $\epsilon$-inflation, and exact set complementation, decreases the size of the second set, increases the size of the first set, and never loses roots. ::: In addition to the expansion technique, use of second-order extensions to eliminate small boxes that do not contain roots, and interval slopes versus interval derivative matrices are studied. These items are evaluated empirically on a significant test problem set, within a Fortran-90 environment designed for such purposes. The results are compared with previous results and show that careful incorporation of the techniques yields both quantitatively and qualitatively superior computer codes.
---
paper_title: Bounds for lagrange multipliers and optimal points
paper_content:
Abstract We describe two methods for use in constrained optimization problems. One method computes guaranteed bounds on both the Lagrange multipliers and on the location of the optimal points. The other method bounds the Lagrange multipliers only. Both methods provide bounds for perturbed problems. The methods can prove (in the presence of rounding) the existence or nonexistence of a solution in a given region.
---
paper_title: A Review of Preconditioners for the Interval Gauss–Seidel Method
paper_content:
Interval Newton methods in conjunction with generalized bisection can form the basis of algorithms that find all real roots within a specified box X ⊂ R^n of a system of nonlinear equations F(X) = 0 with mathematical certainty, even in finite-precision arithmetic. In such methods, the system F(X) = 0 is transformed into a linear interval system 0 = F(M) + F′(X)(X − M); if interval arithmetic is then used to bound the solutions of this system, the resulting box X contains all roots of the nonlinear system. We may use the interval Gauss–Seidel method to find these solution bounds. In order to increase the overall efficiency of the interval Newton / generalized bisection algorithm, the linear interval system is multiplied by a preconditioner matrix Y before the interval Gauss–Seidel method is applied. Here, we review results we have obtained over the past few years concerning computation of such preconditioners. We emphasize importance and connecting relationships, and we cite references for the underlying elementary theory and other details.
---
paper_title: Use of a Real-Valued Local Minimum in Parallel Interval Global Optimization
paper_content:
We consider a parallel method for finding the global minimum (and all of the global minimizers) of a continuous non-linear function f : D → R, where D is an n-dimensional interval. The method combines one of the well known branch-and-bound interval search methods of Skelboe, Moore and Hansen with a real-valued optimization method. Initially we use a standard real-valued optimization method to find a local minimizer xp (or rather: a prediction of a local minimizer). Then the interval Newton method is applied to an interval Ip containing xp as its midpoint. Ip is chosen as large as possible under the restriction that the Newton interval method must converge when Ip is used as starting interval. In this way the original problem has been reduced to the problem of searching a domain D \ Ip which does not contain the local (and perhaps global) minimizer. The remaining domain is searched by the branch-and-bound interval method, starting by splitting the remaining domain into 2n intervals and hence avoiding Ip. This branch-and-bound search then either verifies that the point xp is the global minimizer, or the opposite is detected and it finds the global minimum (and the global minimizers) in the usual way. The combined method parallelizes well. On one test case the combined method is faster than the branch-and-bound method itself. However, for another test case we get the opposite result. This is explained.
---
paper_title: Optimal centered forms
paper_content:
A simple expression for an “optimal” center of a centered form is presented. Among all possible centers within a given interval this center yields the greatest lower bound or the lowest upper bound of a centered form, respectively. It is also shown that one-sided isotonicity holds for such centered forms.
---
paper_title: On Verifying Feasibility in Equality Constrained Optimization Problems, preprint
paper_content:
Techniques for verifying feasibility of equality constraints are presented. The underlying verification procedures are similar to a proposed algorithm of Hansen, but various possibilities, as well as additional procedures for handling bound constraints, are investigated. The overall scheme differs from some algorithms in that it rigorously verifies exact (rather than approximate) feasibility. The scheme starts with an approximate feasible point, then constructs a box (i.e. a set of tolerances) about this point within which it is rigorously verified that a feasible point exists. Alternate ways of proceeding are compared, and numerical results on a set of test problems appear.
---
paper_title: Bounds for lagrange multipliers and optimal points
paper_content:
Abstract We describe two methods for use in constrained optimization problems. One method computes guaranteed bounds on both the Lagrange multipliers and on the location of the optimal points. The other method bounds the Lagrange multipliers only. Both methods provide bounds for perturbed problems. The methods can prove (in the presence of rounding) the existence or nonexistence of a solution in a given region.
---
paper_title: On Verifying Feasibility in Equality Constrained Optimization Problems, preprint
paper_content:
Techniques for verifying feasibility of equality constraints are presented. The underlying verification procedures are similar to a proposed algorithm of Hansen, but various possibilities, as well as additional procedures for handling bound constraints, are investigated. The overall scheme differs from some algorithms in that it rigorously verifies exact (rather than approximate) feasibility. The scheme starts with an approximate feasible point, then constructs a box (i.e. a set of tolerances) about this point within which it is rigorously verified that a feasible point exists. Alternate ways of proceeding are compared, and numerical results on a set of test problems appear.
---
paper_title: Global Optimization Using Interval Analysis
paper_content:
Employing a closed set-theoretic foundation for interval computations, Global Optimization Using Interval Analysis simplifies algorithm construction and increases generality of interval arithmetic. This Second Edition contains an up-to-date discussion of interval methods for solving systems of nonlinear equations and global optimization problems. It expands and improves various aspects of its forerunner and features significant new discussions, such as those on the use of consistency methods to enhance algorithm performance. Provided algorithms are guaranteed to find and bound all solutions to these problems despite bounded errors in data, in approximations, and from use of rounded arithmetic.
---
paper_title: A Collection of Test Problems for Constrained Global Optimization Algorithms
paper_content:
Quadratic programming test problems.- Quadratically constrained test problems.- Nonlinear programming test problems.- Distillation column sequencing test problems.- Pooling/blending test problems.- Heat exchanger network synthesis test problems.- Phase and chemical reaction equilibrium test problems.- Complex chemical reactor network test problems.- Reactor-separator-recycle system test problems.- Mechanical design test problems.- VLSI design test problems.
---
paper_title: Computation of the solutions of nonlinear polynomial systems
paper_content:
Abstract: A fundamental problem in computer aided design is the efficient computation of all roots of a system of nonlinear polynomial equations in n variables which lie within an n-dimensional box. We present two techniques designed to solve such problems, which rely on representation of polynomials in the multivariate Bernstein basis and subdivision. In order to isolate all of the roots within the given domain, each method uses a different scheme for constructing a series of bounding boxes; the first method projects control polyhedra onto a set of coordinate planes, and the second employs linear optimization. We also examine in detail the local convergence properties of the two methods, proving that the former is quadratically convergent for n = 1 and linearly convergent for n > 1, while the latter is quadratically convergent for all n. Worst-case complexity analysis, as well as analysis of actual running times, are performed.
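The Bernstein-basis bounding that both methods build on can be sketched in the univariate case. The snippet converts a power-basis polynomial to Bernstein coefficients on [0, 1], uses the coefficient extrema as range bounds, and refines them by de Casteljau subdivision; the paper's methods are multivariate and add control-polyhedron projection or linear optimization on top of this idea, and the function names here are ours.

```python
# Sketch of Bernstein-basis bounding on [0, 1]: the Bernstein coefficients of a
# polynomial bound its range, and de Casteljau subdivision tightens the bounds on
# each half.  Univariate only; the paper treats multivariate systems.
from math import comb

def power_to_bernstein(a):
    """Bernstein coefficients on [0,1] of p(x) = sum a[k] * x**k."""
    n = len(a) - 1
    return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1)) for i in range(n + 1)]

def de_casteljau_split(b, t=0.5):
    """Split Bernstein coefficients at t into the two subinterval coefficient sets."""
    left, right, row = [], [], list(b)
    while row:
        left.append(row[0])
        right.append(row[-1])
        row = [(1 - t) * row[i] + t * row[i + 1] for i in range(len(row) - 1)]
    return left, right[::-1]

# p(x) = 2x^2 - x has range [-1/8, 1] on [0, 1]; the coefficient bounds enclose it and
# tighten after one subdivision, while sign changes in the coefficients flag
# subintervals that may still contain a root.
a = [0.0, -1.0, 2.0]
b = power_to_bernstein(a)
print("coefficients:", b, "-> bounds", (min(b), max(b)))
for half, coeffs in zip(("[0,1/2]", "[1/2,1]"), de_casteljau_split(b)):
    print(half, "bounds", (min(coeffs), max(coeffs)))
```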
---
paper_title: Algorithm 681: INTBIS, a portable interval Newton/bisection package
paper_content:
We present a portable software package for finding all real roots of a system of nonlinear equations within a region defined by bounds on the variables. Where practical, the package should find all roots with mathematical certainty. Though based on interval Newton methods, it is self-contained. It allows various control and output options and does not require programming if the equations are polynomials; it is structured for further algorithmic research. Its practicality does not depend in a simple way on the dimension of the system or on the degree of nonlinearity.
---
paper_title: Interval Methods for Processing Geometric Objects
paper_content:
In this approach, the parametric form is applied without the usual computational nightmare. The key is to view the parametric range as an interval, relying on subdivision algorithms.
---
paper_title: Comparison of three curve intersection algorithms
paper_content:
This paper treats the problem of how one can best compute the points of intersection of two planar rational curves. Three different algorithms are compared: the well known Bezier subdivision algorithm, a subdivision algorithm based on interval arithmetic, and the implicitization approach. Implementation considerations are discussed, with particular focus on how to make the implicitization method robust and fast. Report is made on a test in which the algorithms solved hundreds of randomly generated problems to eight digits of accuracy. The implicitization algorithm was faster than the others by a factor of five for degree two curves; by a factor of four for cubic curves; by a factor of three for quartic curves; and the interval method was faster for quintic curves by a factor of two.
---
paper_title: Global Optimization Using Interval Analysis
paper_content:
Employing a closed set-theoretic foundation for interval computations, Global Optimization Using Interval Analysis simplifies algorithm construction and increases generality of interval arithmetic. This Second Edition contains an up-to-date discussion of interval methods for solving systems of nonlinear equations and global optimization problems. It expands and improves various aspects of its forerunner and features significant new discussions, such as those on the use of consistency methods to enhance algorithm performance. Provided algorithms are guaranteed to find and bound all solutions to these problems despite bounded errors in data, in approximations, and from use of rounded arithmetic.
---
| Title: A Review of Techniques in the Verified Solution of Constrained Global Optimization Problems
Section 1: Introduction, Basic Ideas and Literature
Description 1: This section introduces the main concepts and reviews the literature on constrained global optimization problems.
Section 2: Early and Simplified Algorithms
Description 2: This section discusses the early and simplified interval global optimization algorithms.
Section 3: Recent Practical Algorithms
Description 3: This section reviews more recent algorithms and their practical implementations.
Section 4: Notation
Description 4: This section explains the notation used throughout the paper.
Section 5: On Constrained Optimization Problems
Description 5: This section elaborates on fundamental methods and techniques for handling constrained optimization problems.
Section 6: Checking Feasibility / Infeasibility
Description 6: This section discusses methods for verifying feasibility or infeasibility with respect to constraints.
Section 7: Handling Bound Constraints
Description 7: This section focuses on techniques to handle bound constraints in optimization problems.
Section 8: On Use of Interval Newton Methods
Description 8: This section explains how interval Newton methods are utilized in the context of optimization.
Section 9: Fritz-John Conditions
Description 9: This section introduces the Fritz-John conditions for verifying feasibility and local optimality.
Section 10: Applications
Description 10: This section reviews successful applications of verified global optimization codes.
Section 11: Summary and Present Work
Description 11: This section summarizes the findings and outlines ongoing and future work related to the topic. |
A review of non-linear structural control techniques | 7 | ---
paper_title: STRUCTURAL CONTROL: PAST, PRESENT, AND FUTURE
paper_content:
This tutorial/survey paper: (1) provides a concise point of departure for researchers and practitioners alike wishing to assess the current state of the art in the control and monitoring of civil engineering structures; and (2) provides a link between structural control and other fields of control theory, pointing out both differences and similarities, and points out where future research and application efforts are likely to prove fruitful. The paper consists of the following sections: section 1 is an introduction; section 2 deals with passive energy dissipation; section 3 deals with active control; section 4 deals with hybrid and semiactive control systems; section 5 discusses sensors for structural control; section 6 deals with smart material systems; section 7 deals with health monitoring and damage detection; and section 8 deals with research needs. An extensive list of references is provided in the references section.
---
paper_title: Dynamical interaction of an elastic system and a vibro-impact absorber
paper_content:
The nonlinear two-degree-of-freedom system under consideration consists of a linear oscillator with a relatively large mass, which approximates a continuous elastic system, and a vibro-impact oscillator with a relatively small mass, which acts as an absorber of the linear system's vibrations. Analysis of nonlinear normal vibration modes shows that a stable localized vibration mode, which provides the vibration regime appropriate for elastic vibration absorption, exists in a large region of the system parameters. In this regime, the vibration amplitudes of the linear system are small, while the vibrations of the absorber are significant.
---
paper_title: Seismic Vibration Control of Nonlinear Structures Using the Liquid Column Damper
paper_content:
A design methodology for a variation of the conventional liquid column damper (LCD) (by attaching a spring connection to the LCD) as a seismic vibration control device for structures with nonlinear behavior has been proposed in this paper. The proposed method with the nonconventional damper has the advantage of being applicable to nonlinear structures having high initial stiffness (short natural period) in the linear range below yield with subsequent period lengthening in the postyield phase, where the conventional LCD would be ineffective. The reason for the lack of effectiveness is not only due to the damper parameters designed for the linear structure becoming inapplicable when the structures move to the inelastic regime but also because of the restriction imposed by the high natural period of the conventional damper on its applicability to comparatively stiff structures (for tuning). The methodology in this paper, thereby incorporating the use of a modified model of the LCD system, namely a spring-connected one, removes the requirement on the natural period of the liquid in the LCD and further, the design is based on the parameters of an equivalent linear system for the nonlinear structure. The latter has been represented by a single-degree-of-freedom system with bilinear hysteresis. A procedure for obtaining the equivalent linear system for the nonlinear structure by adopting a temporally averaged linearization technique has been outlined. A few different response parameters known to have damaging effects on structures, such as number/extent of nonlinear excursions of yield level, permanent set, decay of vibration (duration of response), in addition to the reduction in peak response has been the focus of this paper. The efficacy of the proposed method has been examined through simulation studies using recorded accelerograms.
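A sketch of the bilinear hysteretic restoring force used to represent the yielding structure is given below. It uses the standard decomposition into a linear spring in parallel with an elastic-perfectly-plastic element, which reproduces bilinear kinematic hysteresis; the parameter values and function name are illustrative rather than taken from the paper, and the temporally averaged linearization step itself is not shown.

```python
# Illustrative sketch of a bilinear hysteretic restoring force (kinematic hardening):
# a linear spring k2 in parallel with an elastic-perfectly-plastic element of
# stiffness (k1 - k2) and yield displacement uy gives initial stiffness k1 and
# post-yield stiffness k2.  Parameter values are placeholders, not from the paper.
import numpy as np

def bilinear_force_history(u_history, k1=100.0, k2=10.0, uy=0.01):
    q_cap = (k1 - k2) * uy                      # capacity of the elastoplastic element
    s, u_prev, forces = 0.0, 0.0, []
    for u in u_history:
        s += (k1 - k2) * (u - u_prev)           # elastic predictor
        s = max(-q_cap, min(q_cap, s))          # plastic corrector (clamp at yield)
        forces.append(k2 * u + s)               # total restoring force
        u_prev = u
    return forces

# One push past yield followed by a reversal traces the familiar hysteresis loop.
u = np.concatenate([np.linspace(0.0, 0.03, 4), np.linspace(0.03, -0.03, 7)])
for ui, fi in zip(u, bilinear_force_history(u)):
    print(f"u = {ui:+.3f}  f = {fi:+.2f}")
```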
---
paper_title: Twisting of a high authority morphing structure
paper_content:
A high authority shape morphing plate is examined. The design incorporates an active Kagome back-plane capable of changing the shape of a solid face by transmitting loads through a tetrahedral core. This design is known to perform effectively upon hinging. This article examines limitations on its authority when twisting is required to attain the desired shapes. The specific objective is to ascertain designs that provide the maximum edge twist subject to specified passive load. Non-linear effects, such as face wrinkling, have been probed by using a finite element method and the fidelity of the results assessed through comparison with measurements. The numerical results have been used to validate a dimensional analysis of trends in the actuation resistance of the structure with geometry, as well as the passive load capacity. The forces determined by such analysis have been combined with the failure mechanisms for all subsystems to establish the constraints. The important domains have been visualized using mechanism maps. An optimization has been used to generate load capacity maps that guide geometric design and provide actuator capacity requirements.
---
paper_title: Dynamics of structures with wideband autoparametric vibration absorbers: experiment
paper_content:
The dynamics of a resonantly excited thin cantilever with an active controller are investigated experimentally. The controller mimics a passive wideband absorber discussed in the accompanying theory paper. Lead-zirconate-titanate patches are bonded to both sides of the beam to actuate it, while an electromagnetic shaker drives the beam near resonance. An active controller consisting of an array of coupled controllers is developed, such that the governing equations for the controller are quadratically coupled to the resonating system. The control signal, in terms of the motion of the controllers, is quadratically nonlinear. It is shown that the slow time-scale equations of this physical system are identical in form to those for the passive wideband vibration absorber. The controller is implemented using modelling software and a controller hardware board. Two sets of experiments are performed: one with a constant excitation frequency and the other with a linearly varying excitation frequency at a slow sweep rate (non-stationary excitation). The experimental results verify the analysis presented for the passive wideband autoparametric vibration absorber. The experiments also demonstrate the effectiveness of the absorber in reducing the response amplitude of structures, and its robustness to frequency mistuning.
---
paper_title: Application of passive dampers to modern helicopters
paper_content:
Boeing Helicopters has been actively testing elastomeric and dampers for application on modern rotors. Damper bench tests have been performed on various damper configurations to understand their dynamic characteristics. A recently completed series of wind tunnel tests was performed on a 1/6-Froude-scale model of the Comanche bearingless main rotor system with both elastomeric and dampers. Elastomeric dampers have also been incorporated in the full-scale Model 360 articulated rotor design. This paper summarizes elastomeric and damper activities at Boeing, and discusses issues that must be considered in the design and analysis of such systems.
---
paper_title: Static Aeroelastic Response of Chiral-core Airfoils
paper_content:
Extensive research is being devoted to the analysis and application of cellular solids for the design of innovative structural components. The chiral geometry in particular features a unique mechanical behavior which is here exploited for the design of 2D airfoils with morphing capabilities. A coupled-physics model, comprising computational fluid dynamics and structural analyses, investigates the influence of the chiral core on the aerodynamic behavior of the airfoil. Specifically, the model predicts the static deflection of the airfoil as a result of given flow conditions. The morphing capabilities of the airfoil, here quantified as camber changes, are evaluated for various design configurations of the core.
---
paper_title: Bistable composite flap for an airfoil
paper_content:
A study was conducted to address the challenges associated with investigating a bistable composite flap for an airfoil. A full-scale rotor blade section with a span of 2.114 m and a chord of 0.68 m, fitted with a 1 m span flap was wind-tunnel tested up to a speed of 60 m/s with the flap moving between two stable states for various angles of attack. The blade was approximated as a NACA 24016 section with a 20% chord trailing-edge flap to simplify the analysis. The trailing-edge flap was designed to change between its stable geometries between hover and forward flight conditions for aerodynamic performance improvements. The flap was driven by an electromechanical actuator that was mounted inside the blade D-spar at the leading edge. All of the rotor blade structure remote from this bistable flap region was unmodified and assumed to be completely rigid during wind-tunnel testing.
---
paper_title: VIBRATION SUPPRESSION FOR HIGH-SPEED RAILWAY BRIDGES USING TUNED MASS DAMPERS
paper_content:
Abstract This paper deals with the applicability of passive tuned mass dampers (PTMDs) to suppress train-induced vibration on bridges. A railway bridge is modeled as an Euler–Bernoulli beam and a train is simulated as a series of moving forces, moving masses or moving suspension masses to investigate the influence of various vehicle models on the bridge features with or without PTMD. According to the train load frequency analysis, the resonant effects will occur as the modal frequencies of the bridges are close to the multiple of the impact frequency of the train load to the bridge. A single PTMD system is then designed to alter the bridge dynamic characteristics to avoid excessive vibrations. Numerical results from simply supported bridges of Taiwan High-Speed Railway (THSR) under German I.C.E., Japanese S.K.S. and French T.G.V. trains show that the proposed PTMD is a useful vibration control device in reducing bridge vertical displacements, absolute accelerations, end rotations and train accelerations during resonant speeds, as the train axle arrangement is regular. It is also found that the inner space of bridge box girder of THSR is wide and deep enough for the installation and movement of PTMD.
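For readers unfamiliar with passive TMD sizing, the sketch below applies the classical Den Hartog tuning rules to a single bridge mode. These formulas assume an essentially undamped primary mode under harmonic forcing; the paper's PTMD design for train-induced resonance may use different criteria, and the modal mass, frequency and mass ratio below are illustrative numbers, not values from the paper.

```python
# Classical Den Hartog tuning for a TMD attached to a lightly damped mode, assuming
# harmonic force excitation of an (ideally) undamped primary mode.  Illustrative
# numbers only; not the paper's design values.
import math

def den_hartog_tmd(m_modal, f_modal, mass_ratio=0.02):
    """Return TMD mass, spring stiffness and damping coefficient."""
    mu = mass_ratio
    m_d = mu * m_modal
    f_d = f_modal / (1.0 + mu)                                 # optimal TMD frequency
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal damping ratio
    w_d = 2.0 * math.pi * f_d
    return m_d, m_d * w_d ** 2, 2.0 * zeta_opt * m_d * w_d

# A bridge mode with 400 t modal mass at 3.2 Hz and a 2 % mass-ratio damper.
m_d, k_d, c_d = den_hartog_tmd(m_modal=400e3, f_modal=3.2, mass_ratio=0.02)
print(f"m_d = {m_d:.0f} kg, k_d = {k_d:.3e} N/m, c_d = {c_d:.3e} N·s/m")
```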
---
paper_title: Passive Energy Dissipation Systems in Structural Engineering
paper_content:
Fundamentals. Metallic Dampers. Friction Dampers. Viscoelastic Dampers. Viscous Fluid Dampers. Tuned Mass Dampers. Tuned Liquid Dampers. Future Direction. Glossary. Indexes.
---
paper_title: Truck Suspension Design to Minimize Road Damage
paper_content:
The objective of the work described in this paper is to establish guidelines for the design of passive suspensions that cause minimum road damage. An efficient procedure for calculating a realistic measure of road damage (the 95th percentile aggregate fourth power force) in the frequency domain is derived. Simple models of truck vibration are then used to examine the influence of suspension parameters on this road damage criterion and to select optimal values. It is found that to minimize road damage a suspension should have stiffness about one fifth of current air suspensions and damping up to twice that typically provided. The use of an anti-roll bar allows a high roll-over threshold without increasing road damage. It is thought that optimization in the pitch-plane should exclude correlation between the axles, to ensure that the optimized suspension parameters are robust to payload and speed changes. A three-dimensional ‘whole-vehicle’ model of an air suspended articulated vehicle is validated against mea...
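The damage measure referred to above can be sketched as follows, assuming that a vehicle simulation has already produced dynamic tyre-force histories at a set of points along the road. The synthetic forces, axle loads and dynamic load coefficient below are placeholders, and normalization details of the criterion may differ from the paper.

```python
# Simplified rendering of the fourth-power aggregate force criterion: at each point
# along the road the fourth powers of the dynamic tyre forces of all axles are summed,
# and the 95th percentile of that aggregate over the road is taken as the damage
# measure.  The synthetic forces stand in for vehicle-simulation output.
import numpy as np

def aggregate_fourth_power(tyre_forces):
    """tyre_forces[i, j]: force of axle j at road point i; returns the per-point aggregate."""
    return np.sum(tyre_forces ** 4, axis=1)

def damage_criterion(tyre_forces, percentile=95.0):
    return np.percentile(aggregate_fourth_power(tyre_forces), percentile)

rng = np.random.default_rng(0)
static = np.array([45e3, 45e3, 30e3])                 # placeholder static axle loads, N
dlc = 0.15                                            # placeholder dynamic load coefficient
forces = static * (1.0 + dlc * rng.standard_normal((2000, 3)))
print(f"95th percentile aggregate fourth power force: {damage_criterion(forces):.3e} N^4")
```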
---
paper_title: Adaptive Structures: Engineering Applications
paper_content:
List of Contributors. Preface. 1 Adaptive Structures for Structural Health Monitoring (Daniel J. Inman and Benjamin L. Grisso). 1.1 Introduction. 1.2 Structural Health Monitoring. 1.3 Impedance-Based Health Monitoring. 1.4 Local Computing. 1.5 Power Analysis. 1.6 Experimental Validation. 1.7 Harvesting, Storage and Power Management. 1.8 Autonomous Self-healing. 1.9 The Way Forward: Autonomic Structural Systems for Threat Mitigation. 1.10 Summary. Acknowledgements. References. 2 Distributed Sensing for Active Control (Suk-Min Moon, Leslie P. Fowler and Robert L. Clark). 2.1 Introduction. 2.2 Description of Experimental Test Bed. 2.3 Disturbance Estimation. 2.4 Sensor Selection. 2.5 Conclusions. Acknowledgments. References. 3 Global Vibration Control Through Local Feedback (Stephen J. Elliott). 3.1 Introduction. 3.2 Centralised Control of Vibration. 3.3 Decentralised Control of Vibration. 3.4 Control of Vibration on Structures with Distributed Excitation. 3.5 Local Control in the Inner Ear. 3.6 Conclusions. Acknowledgements. References. 4 Lightweight Shape-Adaptable Airfoils: A New Challenge for an Old Dream (L.F. Campanile). 4.1 Introduction. 4.2 Otto Lilienthal and the Flying Machine as a Shape-Adaptable Structural System. 4.3 Sir George Cayley and the Task Separation Principle. 4.4 Being Lightweight: A Crucial Requirement. 4.5 Coupling Mechanism and Structure: Compliant Systems as the Basis of Lightweight Shape-Adaptable Systems. 4.6 Extending Coupling to the Actuator System: Compliant Active Systems. 4.7 A Powerful Distributed Actuator: Aerodynamics. 4.8 The Common Denominator: Mechanical Coupling. 4.9 Concluding Remarks. Acknowledgements. References. 5 Adaptive Aeroelastic Structures (Jonathan E. Cooper). 5.1 Introduction. 5.2 Adaptive Internal Structures. 5.3 Adaptive Stiffness Attachments. 5.4 Conclusions. 5.5 The Way Forward. Acknowledgements. References. 6 Adaptive Aerospace Structures with Smart Technologies - A Retrospective and Future View (Christian Boller). 6.1 Introduction. 6.2 The Past Two Decades. 6.3 Added Value to the System. 6.4 Potential for the Future. 6.5 A Reflective Summary with Conclusions. References. 7 A Summary of Several Studies with Unsymmetric Laminates (Michael W. Hyer, Marie-Laure Dano, Marc R. Schultz, Sontipee Aimmanee and Adel B. Jilani). 7.1 Introduction and Background. 7.2 Room-Temperature Shapes of Square [02/902]T Cross-Ply Laminates. 7.3 Room-Temperature Shapes of More General Unsymmetric Laminates. 7.4 Moments Required to Change Shapes of Unsymmetric Laminates. 7.5 Use of Shape Memory Alloy for Actuation. 7.6 Use of Piezoceramic Actuation. 7.7 Consideration of Small Piezoceramic Actuators. 7.8 Conclusions. References. 8 Negative Stiffness and Negative Poisson's Ratio in Materials which Undergo a Phase Transformation (T.M. Jaglinski and R.S. Lakes). 8.1 Introduction. 8.2 Experimental Methods. 8.3 Composites. 8.4 Polycrystals. 8.5 Discussion. References. 9 Recent Advances in Self-Healing Materials Systems (M.W. Keller, B.J. Blaiszik, S.R. White and N.R. Sottos). 9.1 Introduction. 9.2 Faster Healing Systems - Fatigue Loading. 9.3 Smaller Size Scales. 9.4 Alternative Materials Systems - Elastomers. 9.5 Microvascular Autonomic Composites. 9.6 Conclusions. References. 10 Adaptive Structures - Some Biological Paradigms (Julian F.V. Vincent). 10.1 Introduction. 10.2 Deployment. 10.3 Turgor-Driven Mechanisms. 10.4 Dead Plant Tissues. 10.5 Morphing and Adapting in Animals. 10.6 Sensing in Arthropods - Campaniform and Slit Sensilla. 
10.7 Developing an Interface Between Biology and Engineering. 10.8 Envoi. Acknowledgements. References. Index.
---
paper_title: Passive vibration suppression of flexible space structures via optimal geometric redesign
paper_content:
A computational framework is presented for the design of large flexible space structures with nonperiodic geometries to achieve passive vibration suppression. The present system combines an approximation model management framework (AMMF) developed for evolutionary optimization algorithms (EAs) with reduced basis approximate dynamic reanalysis techniques. A coevolutionary genetic search strategy is employed to ensure that design changes during the optimization iterations lead to low-rank perturbations of the structural system matrices, for which the reduced basis methods give high-quality approximations. The k-means algorithm is employed for cluster analysis of the population of designs to determine design points at which exact analysis should be carried out. The fitness of the designs in an EA generation is then approximated using reduced basis models constructed around the points where exact analysis is carried out. Results are presented for the optimal design of a two-dimensional cantilevered space structure to achieve passive vibration suppression. It is shown that significant vibration isolation of the order of 50 dB over a 100-Hz bandwidth can be achieved. Further, it is demonstrated that the AMMF can potentially arrive at a better design compared to conventional approaches when a constraint is imposed on the computational budget available for optimization.
---
paper_title: Vibration With Control
paper_content:
Preface. 1. SINGLE DEGREE OF FREEDOM SYSTEMS. Introduction. Spring-Mass System. Spring-Mass-Damper System. Forced Response. Transfer Functions and Frequency Methods. Measurement and Testing. Stability. Design and Control of Vibrations. Nonlinear Vibrations. Computing and Simulation in Matlab. Chapter Notes. References. Problems. 2. LUMPED PARAMETER MODELS. Introduction. Classifications of Systems. Feedback Control Systems. Examples. Experimental Models. Influence Methods. Nonlinear Models and Equilibrium. Chapter Notes. References. Problems. 3. MATRICES AND THE FREE RESPONSE. Introduction. Eigenvalues and Eigenvectors. Natural Frequencies and Mode Shapes. Canonical Forms. Lambda Matrices. Oscillation Results. Eigenvalue Estimates. Computational Eigenvalue Problems in Matlab. Numerical Simulation of the Time Response in Matlab. Chapter Notes. References. Problems. 4. STABILITY. Introduction. Lyapunov Stability. Conservative Systems. Systems with Damping. Semidefinite Damping . Gyroscopic Systems. Damped Gyroscopic Systems. Circulatory Systems. Asymmetric Systems. Feedback Systems. Stability in the State Space. Stability Boundaries. Chapter Notes. References. Problems. 5. FORCED RESPONSE OF LUMPED PARAMETER SYSTEMS. Introduction. Response via State Space Methods. Decoupling Conditions and Modal Analysis. Response of Systems with Damping. Bounded-Input, Bounded-Output Stability. Response Bounds. Frequency Response Methods. Numerical Simulations in Matlab. Chapter Notes. References. Problems. 6. DESIGN CONSIDERATIONS. Introduction. Isolators and Absorbers. Optimization Methods. Damping Design. Design Sensitivity and Redesign. Passive and Active Control. Design Specifications. Model Reduction. Chapter Notes. References. Problems. 7. CONTROL OF VIBRATIONS. Introduction. Controllability and Observability. Eigenstructure Assignment. Optimal Control. Observers (Estimators). Realization. Reduced-Order Modeling. Modal Control in State Space. Modal Control in Physical Space. Robustness. Positive Position Feedback Control. Matlab Commands for Control Calculations. Chapter Notes. References. Problems. 8. VIBRATION MEASUREMENT. Introduction. Measurement Hardware. Digital Signal Processing. Random Signal Analysis. Modal Data Extraction (Frequency Domain). Modal Data Extraction (Time Domain). Model Identification. Model Updating. Chapter Notes. References. Problems. 9. DISTRIBUTED PARAMETER MODELS. Introduction. Vibrations of Strings. Rods and Bars. Vibration of Beams. Membranes and Plates. Layered Materials. Viscous Damping. Chapter Notes. References. Problems. 10. FORMAL METHODS OF SOLUTION. Introduction. Boundary Value Problems and Eigenfunctions. Modal Analysis of the Free Response. Modal Analysis in Damped Systems. Transform Methods. Green's Functions. Chapter Notes. References. Problems. 11. OPERATORS AND THE FREE RESPONSE. Introduction. Hilbert Spaces. Expansion Theorems. Linear Operators. Compact Operators. Theoretical Modal Analysis. Eigenvalue Estimates. Enclosure Theorems. Oscillation Theory. Chapter Notes. References. Problems. 12. FORCED RESPONSE AND CONTROL. Introduction. Response by Modal Analysis. Modal Design Criteria. Combined Dynamical Systems. Passive Control and Design. Distribution Modal Control. Nonmodal Distributed Control. State Space Control Analysis. Chapter Notes. References. Problems. 13. APPROXIMATIONS OF DISTRIBUTED PARAMETER MODELS. Introduction. Modal Truncation. Rayleigh- Ritz-Galerkin Approximations. Finite Element Method. Substructure Analysis. 
Truncation in the Presence of Control. Impedance Method of Truncation and Control. Chapter Notes. References. Problems. APPENDIX A: COMMENTS ON UNITS. APPENDIX B: SUPPLEMENTARY MATHEMATICS. Index.
---
paper_title: OPTIMUM DESIGN OF A PASSIVE SUSPENSION SYSTEM OF A VEHICLE SUBJECTED TO ACTUAL RANDOM ROAD EXCITATIONS
paper_content:
Abstract Vehicles are subjected to random excitation due to road unevenness and variable velocity. In most research work reported earlier, the response analysis for Mean Square Acceleration Response (MSAR) has been carried out by considering the power spectral density (PSD) of the road excitation as white noise, and the velocity of the vehicle as constant. However, in the present paper the PSD of the actual road excitation has been found to follow an approximately exponentially decreasing curve. Also the change in vehicle velocity has a significant effect on the values of Root Mean Square Acceleration Response (RMSAR). Therefore, in this work, the RMSAR of a vehicle dynamic system subjected to actual random road excitations is obtained so as to account for the effect of the actual PSD of road excitation and the frequent changes in vehicle velocity. The RMSAR of the vehicle is calculated for actual field excitation using the Fast Fourier Transformation (FFT) technique to obtain the PSD, by recording observations at the rear wheel. The effect of time lag due to wheelbase on the RMSAR of the vehicle is studied. For this purpose, a new ratio α(τ) has been introduced. The relationship between α(τ) and the autocorrelation has been formulated. This ratio is useful for considering the effect of time lag due to wheelbase on RMSAR. Similarly, the effect of vehicle velocity on the RMSAR is obtained. Further, from a ride comfort point of view, the values of the design variables like spring stiffness and viscous damping coefficient of the front and rear suspensions have been obtained, by minimising the RMSAR using the desired boundary values of the vertical RMSAR as specified in the chart of ISO 2631, 1985(E) [1].
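The PSD-based response evaluation can be illustrated with a short sketch that estimates the PSD of a (synthetic) vertical acceleration record with an FFT periodogram and integrates it to recover the RMS acceleration that would be checked against ride-comfort limits such as ISO 2631. The frequency weighting prescribed by the standard and the vehicle model itself are omitted, and the signal parameters are illustrative.

```python
# Sketch of a PSD-based ride evaluation: estimate the PSD of a vertical acceleration
# record with an FFT periodogram and integrate it to recover the RMS acceleration.
# ISO 2631 frequency weighting and the vehicle model are omitted; the record is synthetic.
import numpy as np

def one_sided_psd(x, fs):
    """Periodogram PSD estimate (signal units squared per Hz) and its frequency axis."""
    n = len(x)
    X = np.fft.rfft(x - np.mean(x))
    psd = (2.0 / (fs * n)) * np.abs(X) ** 2
    psd[0] /= 2.0                               # DC bin is not doubled
    if n % 2 == 0:
        psd[-1] /= 2.0                          # Nyquist bin is not doubled
    return np.fft.rfftfreq(n, d=1.0 / fs), psd

fs = 200.0                                      # sampling rate in Hz
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(1)
accel = 0.4 * np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(t.size)

freqs, psd = one_sided_psd(accel, fs)
rms_from_psd = np.sqrt(np.sum(psd) * (freqs[1] - freqs[0]))   # Parseval: integrate the PSD
print(f"RMS acceleration {rms_from_psd:.3f} m/s^2 (time-domain check: {np.std(accel):.3f})")
```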
---
paper_title: Exploring the performance of a nonlinear tuned mass damper
paper_content:
We explore the performance of a nonlinear tuned mass damper (NTMD), which is modeled as a two degree of freedom system with a cubic nonlinearity. This nonlinearity is physically derived from a geometric configuration of two pairs of springs. The springs in one pair rotate as they extend, which results in a hardening spring stiffness. The other pair provides a linear stiffness term. We perform an extensive numerical study of periodic responses of the NTMD using the numerical continuation software AUTO. In our search for optimal design parameters we mainly employ two techniques, the optimization of periodic solutions and parameter sweeps. During our investigation we discovered a family of detached resonance curves for vanishing linear spring stiffness, a feature that was missed in an earlier study. These detached resonance response curves seem to be a weakness of the NTMD when used as a passive device, because they essentially restore a main resonance peak. However, since this family is detached from the low-amplitude responses there is an opportunity for designing a semi-active device.
---
paper_title: Design and demonstration of a high authority shape morphing structure
paper_content:
Abstract A concept for a high authority shape morphing plate is described and demonstrated. The design incorporates an active back-plane comprising a Kagome truss, capable of changing the shape of a solid face, connected to the back-plane by means of a tetrahedral truss core. The two shape deformations to be demonstrated consist of hinging and twisting. The design is performed by a combination of analytic estimation and numerical simulation, guided by previous assessments of the Kagome configuration. It is shown that, while the structure is capable of sustaining large passive loads at low weight, the demonstrable authority is actuator-limited. The full potential of the system can only be realized by developing and incorporating superior actuators. An optimization has been used to ascertain the largest displacements achievable within the force capability of the actuators. These displacements have been demonstrated and shown to correspond with values predicted by numerical simulation. The consistency between measured and calculated responses has allowed objectives to be set for alternative materials, as well as structural and actuator enhancements.
---
paper_title: Static analysis of a passive vibration isolator with quasi-zero-stiffness characteristic
paper_content:
Abstract The frequency range over which a linear passive vibration isolator is effective is often limited by the mount stiffness required to support a static load. This can be improved upon by employing nonlinear mounts incorporating negative stiffness elements configured in such a way that the dynamic stiffness is much less than the static stiffness. Such nonlinear mounts are used widely in practice, but rigorous analysis, and hence a clear understanding of their behaviour, is not readily available in the literature. In this paper, a simple system comprising a vertical spring acting in parallel with two oblique springs is studied. It is shown that there is a unique relationship between the geometry and the stiffness of the springs that yields a system with zero dynamic stiffness at the static equilibrium position. The dynamic stiffness increases monotonically with displacement either side of the equilibrium position, and this is least severe when the oblique springs are inclined at an angle between approximately 48° and 57°. Finally, it is shown that the force–displacement characteristic of the system can be approximated by a cubic equation.
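One common idealization of the geometry described above is sketched below: a vertical spring of stiffness kv in parallel with two oblique springs of stiffness ko and free length L0 whose far ends sit a horizontal distance a from the load point, so that the oblique pair is horizontal (and compressed) at the static equilibrium position. The zero-stiffness condition used here follows from that idealization and may differ in detail from the paper's formulation; all numbers and identifiers are illustrative.

```python
# One common idealization of a quasi-zero-stiffness mount: vertical spring kv in
# parallel with two oblique springs (stiffness ko, free length L0) whose far ends are
# a horizontal distance a from the load point, the oblique pair being horizontal and
# compressed at the static equilibrium position x = 0.  Illustrative values only.
import numpy as np

def restoring_force(x, kv, ko, L0, a):
    """Vertical restoring force about the static equilibrium position."""
    length = np.sqrt(a ** 2 + x ** 2)               # current oblique spring length
    return kv * x + 2.0 * ko * x * (1.0 - L0 / length)

def kv_for_qzs(ko, L0, a):
    """Vertical stiffness that cancels the oblique pair's negative stiffness at x = 0."""
    return 2.0 * ko * (L0 / a - 1.0)

ko, L0, a = 1.0e4, 0.15, 0.10                        # compressed obliques: L0 > a
kv = kv_for_qzs(ko, L0, a)
for x in np.linspace(-0.03, 0.03, 7):
    print(f"x = {x:+.3f} m   F = {restoring_force(x, kv, ko, L0, a):+8.2f} N")
# The force stays small over a finite stroke around x = 0 (quasi-zero dynamic
# stiffness), while the static load is still carried by the stiff vertical spring.
```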
---
paper_title: Inverse eigenvalue problems in vibration absorption : Passive modification and active control
paper_content:
The abiding problem of vibration absorption has occupied engineering scientists for over a century and there remain abundant examples of the need for vibration suppression in many industries. For example, in the automotive industry the resolution of noise, vibration and harshness (NVH) problems is of extreme importance to customer satisfaction. In rotorcraft it is vital to avoid resonance close to the blade passing speed and its harmonics. An objective of the greatest importance, and extremely difficult to achieve, is the isolation of the pilot's seat in a helicopter. It is presently impossible to achieve the objectives of vibration absorption in these industries at the design stage because of limitations inherent in finite element models. Therefore, it is necessary to develop techniques whereby the dynamics of the system (possibly a car or a helicopter) can be adjusted after it has been built. There are two main approaches: structural modification by passive elements and active control. The state of the art of the mathematical theory of vibration absorption is presented and illustrated for the benefit of the reader with numerous simple examples.
---
paper_title: Determinate structures for wing camber control
paper_content:
An investigation of truss structures for the purpose of creating a continuously variable camber trailing edge device for an aircraft wing is presented. By creating structures that are both statically and kinematically determinate and then substituting truss elements for actuators, it is possible to impose structural deflection without inducing member stress. A limited number of actuators with limited strain capabilities are located within the structure in order to achieve a target deflected shape starting from an initially symmetric profile. Two objective functions are used to achieve this: a geometric objective for which the target displacement is fixed and a shape objective for which the target displacement is dependent on the surface shape of the targeted aerofoil. The proposed shape objective function is able to offer improvements over the geometric objective by removing some of the constraints applied to the targeted structure joint locations. Four methods for selecting the location of a set of actuators are compared, namely exhaustive search, a genetic algorithm, stepwise forward selection (SFS) and incremental forward selection (IFS). Both SFS and IFS are variations of regression methods for subset selection; in each case an approach has been created to allow the imposing of upper and lower bounds on the search space. It is shown that the genetic algorithm is well suited to addressing the problem of optimally locating a set of actuators; however, regression methods, particularly IFS, can provide a rapid tool suitable for addressing large selection problems.
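A generic sketch of the stepwise forward selection idea for actuator placement is given below. The influence matrix B, the target shape and the least-squares error measure are hypothetical stand-ins (random numbers) intended only to show the greedy add-one-actuator-at-a-time structure, not the paper's truss model or its geometric and shape objective functions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear actuation model: joint displacements d = B @ u, where each column
# of B is the displacement field produced by unit strain of one candidate actuator
# (random here, standing in for a truss influence matrix).
n_dof, n_candidates, n_select = 30, 12, 4
B = rng.normal(size=(n_dof, n_candidates))
d_target = rng.normal(size=n_dof)          # target deflected shape

def residual(cols):
    """Least-squares shape error using only the actuators in `cols`."""
    if not cols:
        return np.linalg.norm(d_target)
    u, *_ = np.linalg.lstsq(B[:, cols], d_target, rcond=None)
    return np.linalg.norm(B[:, cols] @ u - d_target)

# Stepwise forward selection: repeatedly add the actuator that most reduces the error.
selected = []
for _ in range(n_select):
    remaining = [j for j in range(n_candidates) if j not in selected]
    best = min(remaining, key=lambda j: residual(selected + [j]))
    selected.append(best)
    print("added actuator %2d -> shape error %.4f" % (best, residual(selected)))

print("selected actuator set:", selected)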
---
paper_title: Nonlinear Vibration Control of a System with Dry Friction and Viscous Damping Using the Saturation Phenomenon
paper_content:
Application of saturation to provide active nonlinear vibration control was introduced not long ago. Saturation occurs when two natural frequencies of a system with quadratic nonlinearities are in a ratio of around 2:1 and the system is excited at a frequency near its higher natural frequency. Under these conditions, there is a small upper limit for the high-frequency response and the rest of the input energy is channeled to the low-frequency mode. In this way, the vibration of one of the degrees of freedom of a coupled 2 degrees of freedom system is attenuated. In the present paper, the effect of dry friction on the response of a system that implements this vibration absorber is discussed. The system is basically a plant with a permanent magnet DC (PMDC) motor excited by a harmonic forcing term and coupled with a quadratic nonlinear controller. The absorber is built in electric circuitry and takes advantage of the saturation phenomenon. The method of multiple scales is used to find approximate solutions. Various response regimes of the closed-loop system as well as the stability of these regimes are studied and the stability boundaries are obtained. Special attention is paid to the effect of dry friction on the stability boundaries. It is shown that while dry friction tends to shrink the stable region in some parts, it enlarges other parts of the stable region. To verify the theoretical results, they have been compared with the numerical solution and good agreement between the two is observed.
---
paper_title: Active and passive material optimization in a tendon-actuated morphing aircraft structure
paper_content:
Continuously morphing aircraft wings are currently a focus of considerable research. Efforts are being made to achieve effective and optimal wing shape change under different flight conditions such as take off, cruise, dash, and loiter. The present research aims to achieve wing morphing by using an internal structure consisting of actuated tendons and passive struts. An important aspect of this approach is determining the optimal layout of tendons and struts. In this paper a genetic algorithm is developed to optimize the three-dimensional tendon-strut layout for a prescribed wing geometry and shape change. The method is applied to two morphing wing applications, the NASA HECS wing and NextGen TSCh wing.
---
paper_title: Seismic control of a nonlinear benchmark building using smart dampers
paper_content:
This paper addresses the third-generation benchmark problem on structural control, and focuses on the control of a full-scale, nonlinear, seismically excited, 20-story building. A semiactive design is developed in which magnetorheological (MR) dampers are applied to reduce the structural responses of the benchmark building. Control input determination is based on a clipped-optimal control algorithm which employs absolute acceleration feedback. A phenomenological model of an MR damper, based on a Bouc–Wen element, is employed in the analysis. The semiactive system using the MR damper is compared to the performance of an active system and an ideal semiactive system, which are based on the same nominal controller as is used in the MR damper control algorithm. The results demonstrate that the MR damper is effective, and achieves similar performance to the active and ideal semiactive system, while requiring very little power.
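The clipped-optimal secondary controller used with MR dampers is often written as a simple Heaviside rule on the desired and measured damper forces; a minimal sketch is below, with an assumed maximum voltage and example force pairs. The primary controller that generates the desired force (here, a hypothetical acceleration-feedback design) is not reproduced.

V_MAX = 9.0   # maximum MR-damper command voltage [V] (illustrative value)

def clipped_optimal_voltage(f_desired, f_measured, v_max=V_MAX):
    """Secondary (clipped-optimal) rule for an MR damper.

    Command full voltage when the measured damper force has the same sign as the desired
    force but must grow in magnitude toward it (so more field drives the damper toward the
    primary controller's command); otherwise command zero voltage:
        v = v_max * H[(f_desired - f_measured) * f_measured]
    """
    return v_max if (f_desired - f_measured) * f_measured > 0.0 else 0.0

# Example: desired force from a hypothetical primary controller versus the force
# the damper is currently producing.
for f_d, f_m in [(800.0, 300.0), (800.0, 1200.0), (-500.0, 200.0)]:
    print("f_desired=%7.1f N, f_measured=%7.1f N -> v = %.1f V"
          % (f_d, f_m, clipped_optimal_voltage(f_d, f_m)))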
---
paper_title: Stability of a Lyapunov controller for a semi-active structural control system with nonlinear actuator dynamics
paper_content:
Abstract We investigate semi-active control for a wide class of systems with scalar nonlinear semi-active actuator dynamics and consider the problem of designing control laws that guarantee stability and provide sufficient performance. Requiring the semi-active actuator to satisfy two general conditions, we present a method for designing quickest descent controllers generated from quadratic Lyapunov functions that guarantee asymptotic stability within the operating range of the semiactive device for the zero disturbance case. For the external excitation case, bounded-input, bounded-output stability is achieved and a stable attractor (ball of ultimate boundedness) of the system is computed based on the upper bound of the disturbances. We show that our wide class of systems covers, in particular, two nonlinear actuator models from the literature. Tuning the performance of the simple Lyapunov controllers is straightforward using either modal or state penalties. Simulation results are presented which indicate that the Lyapunov control laws can be selected to provide similar decay rates as a “time-optimal” controller for a semi-actively controlled single degree of freedom structure with no external excitation.
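A minimal single-degree-of-freedom version of a quickest-descent (Lyapunov) semi-active law is sketched below: the damping coefficient is switched to its high level only when doing so makes dV/dt of a quadratic Lyapunov function more negative. Mass, stiffness and the two damping levels are assumed values, and the Lyapunov matrix is taken from the low-damping nominal system rather than from any particular design in the paper.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.integrate import solve_ivp

# SDOF structure with a semi-active damper switching between c_min and c_max (illustrative numbers)
m, k = 100.0, 4.0e4
c_min, c_max = 50.0, 2000.0

# Nominal (low-damping) state matrix and a quadratic Lyapunov function V = x' P x
A = np.array([[0.0, 1.0], [-k / m, -c_min / m]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))   # A' P + P A = -I  (A is Hurwitz)
B = np.array([0.0, 1.0 / m])

def damping_command(x):
    """Quickest-descent rule: add the extra damping only when it makes dV/dt more negative."""
    # extra damper force u = -(c - c_min) * velocity; its contribution to dV/dt is 2 x'P B u
    return c_max if x @ P @ B * x[1] > 0.0 else c_min

def rhs(t, x):
    c = damping_command(x)
    return [x[1], (-k * x[0] - c * x[1]) / m]

sol = solve_ivp(rhs, (0.0, 5.0), [0.05, 0.0], max_step=1e-3)
print("peak displacement %.4f m, final displacement %.2e m"
      % (np.abs(sol.y[0]).max(), sol.y[0, -1]))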
---
paper_title: Survey of Advanced Suspension Developments and Related Optimal Control Applications
paper_content:
Abstract The paper surveys applications of optimal control techniques to the design of active suspensions, starting from simple quarter-car, 1D models, which are followed by their half-car, 2D, and full-car, 3D, counterparts. While the main emphasis is on Linear-Quadratic (LQ) optimal control and active suspensions, the paper also addresses a number of related subjects including semi-active suspensions; robust, adaptive and nonlinear control aspects and some of the important practical considerations. © 1997 Elsevier Science Ltd.
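For reference, a standard linear-quadratic design for the quarter-car active suspension (the simplest of the models surveyed above) can be set up in a few lines; the parameter values and weighting matrices below are illustrative assumptions, not taken from the survey.

import numpy as np
from scipy.linalg import solve_continuous_are

# Quarter-car parameters (typical illustrative values)
ms, mu = 300.0, 40.0          # sprung / unsprung mass [kg]
ks, kt = 16e3, 160e3          # suspension / tyre stiffness [N/m]

# States: x = [suspension stroke, sprung-mass velocity, tyre deflection, unsprung velocity]
A = np.array([[0.0,     1.0,  0.0,     -1.0],
              [-ks/ms,  0.0,  0.0,      0.0],
              [0.0,     0.0,  0.0,      1.0],
              [ks/mu,   0.0, -kt/mu,    0.0]])
B = np.array([[0.0], [1.0/ms], [0.0], [-1.0/mu]])

# LQ weights trading off ride comfort, suspension stroke and tyre deflection
Q = np.diag([1.0e3, 1.0e2, 1.0e4, 1.0])
R = np.array([[1.0e-4]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # optimal state feedback u = -K x
print("LQ state-feedback gains:", np.round(K.ravel(), 1))
print("closed-loop poles:", np.round(np.linalg.eigvals(A - B @ K), 2))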
---
paper_title: State of the Art of Structural Control
paper_content:
In recent years, considerable attention has been paid to research and development of structural control devices, with particular emphasis on alleviation of wind and seismic response of buildings and bridges. In both areas, serious efforts have been undertaken in the last two decades to develop the structural control concept into a workable technology. Full-scale implementation of active control systems have been accomplished in several structures, mainly in Japan; however, cost effectiveness and reliability considerations have limited their wide spread acceptance. Because of their mechanical simplicity, low power requirements, and large, controllable force capacity, semiactive systems provide an attractive alternative to active and hybrid control systems for structural vibration reduction. In this paper we review the recent and rapid developments in semiactive structural control and its implementation in full-scale structures.
---
paper_title: Piecewise-Smooth Dynamical Systems : Theory and Applications
paper_content:
This book deals with the analysis of bifurcation and chaos in nonsmooth (called piecewise smooth by the authors) dynamical systems. Some of the topics covered include: a very detailed study of discontinuity-induced bifurcations (DIBs) in nonsmooth maps and nonsmooth systems; limit-cycle bifurcations in vibro-impact systems; grazing bifurcations for periodic trajectories; the derivation of Poincare maps; the influence of chattering; and sliding mode systems. This book is a strong contribution to the field of bifurcation and chaos analysis and more generally to the field of nonsmooth dynamical systems analysis and will serve as a reference to specialists in the field.
---
paper_title: Generalisation and optimisation of semi-active, on–off switching controllers for single degree-of-freedom systems
paper_content:
This paper examines generalised forms of semi-active switching control in comparison to the sky-hook semi-active controller. A switching time controller is proposed and analysed, in order to determine the optimal performance, with regard to displacement transmissibility, of semi-active switching control. In addition, the model is also used to assess the optimality of the sky-hook switching conditions. An analytical solution is then derived for the optimal switching times. A generalised form of linear switching surface controller is then presented. It is demonstrated that this controller can produce near optimal performance.
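The on-off sky-hook rule discussed above switches the damper to its high setting only when the absolute velocity of the isolated mass and the relative velocity across the damper have the same sign. The sketch below simulates a base-excited single-degree-of-freedom mount under this rule and under fixed high damping; all numerical values are illustrative and the comparison is not the paper's optimal switching-time analysis.

import numpy as np
from scipy.integrate import solve_ivp

# Base-excited SDOF isolator with a two-state semi-active damper (illustrative parameters)
m, k = 250.0, 1.0e4
c_min, c_max = 200.0, 2500.0
w_base, amp = 2.0 * np.pi * 1.5, 0.01         # base excitation: 1.5 Hz, 10 mm amplitude

def base(t):                                   # base displacement and velocity
    return amp * np.sin(w_base * t), amp * w_base * np.cos(w_base * t)

def rhs(t, x, skyhook):
    z, zdot = x                                # absolute sprung-mass displacement / velocity
    zb, zbdot = base(t)
    vrel = zdot - zbdot
    if skyhook:                                # on-off skyhook: high damping only when it opposes absolute velocity
        c = c_max if zdot * vrel > 0.0 else c_min
    else:
        c = c_max                              # fixed high-damping passive reference
    return [zdot, (-k * (z - zb) - c * vrel) / m]

for name, flag in [("passive c_max", False), ("on-off skyhook", True)]:
    sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], args=(flag,), max_step=1e-3)
    x_ss = sol.y[0][sol.t > 10.0]              # discard the transient
    print("%-15s peak displacement %.4f m (base amplitude %.3f m)" % (name, np.abs(x_ss).max(), amp))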
---
paper_title: Nonlinear blackbox modeling of MR-dampers for civil structural control
paper_content:
Protecting civil engineering structures from severe impacts like strong earthquakes has demanded intensive research in the past two decades. One of the most promising devices proposed for structural protection is the magnetorheological (MR) fluid dampers. To fully explore their potentials in the real-time feedback control implementations, accurate and robust modeling of the devices is a prerequisite. This paper first proposes a general nonlinear blackbox structure to model the MR damping behavior on the displacement-velocity phase plane. Two constructive parameter estimation algorithms are subsequently developed which are based on the recent mathematical advances in wavelets and ridgelets analysis. Compared with the traditional physical modeling, this research aims at improving model numerical stability and model structure generality. The achievement of these objectives is evaluated in the modeling of an experimental MR-damper in a base-isolation structural control system.
---
paper_title: Semi-active and passive control of the phase I linear base-isolated benchmark building model
paper_content:
In this paper various semi-active control strategies and linear passive dampers are applied to the benchmark building model to evaluate their effectiveness in reducing response quantities of the building subject to prescribed earthquakes. These control strategies include: semi-active continuous pulse friction (SACPF) controller, semi-active discontinuous pulse friction (SADPF) controller, linear passive viscous (LPV) damper, smooth boundary layer semi-active friction (SBLSAF) controller, resetting semi-active stiffness damper (RSASD) and visco-elastic friction (VEF) damper. Both SACPF and SADPF controllers are designed by using the augmented structure-pulse filter model recently proposed. Extensive simulation results indicate that all these controllers are very effective in mitigating the base displacements. For different controllers, a reduction in the base displacement may result in an increase of other quantities, e.g. the base and superstructure shears, inter-storey drifts and floor accelerations, during some earthquakes. Average results of the performance indices J1–J5 indicate that various semi-active friction controllers, including SACPF, SADPF, SBLSAF and VEF, are most effective followed by the RSASD and the sample semi-active controller. Overall, SACPF and SADPF controllers have better capability of reducing all response quantities compared with other semi-active controllers. Copyright © 2005 John Wiley & Sons, Ltd.
---
paper_title: Magneto-rheological tuned liquid column dampers (MR-TLCDs) for vibration mitigation of tall buildings: modelling and analysis of open-loop control
paper_content:
A novel semi-active tuned liquid column damper using magneto-rheological fluid (MR-TLCD) is devised for wind-induced vibration mitigation of tall building structures. It combines the benefits of magneto-rheological smart materials and tuned liquid column dampers. A mathematical model of the devised MR-TLCD is first established using the parallel-plate theory, and then a method for analyzing stochastic dynamic response of building structures incorporating semi-active MR-TLCDs subjected to random wind excitation is developed by means of the equivalent linearization technique. The open-loop control effectiveness of MR-TLCDs in mitigating wind-induced structural response is studied by referring to a 50-storey high-rise residential building equipped with MR-TLCDs. The analysis results show that MR-TLCDs with optimal parameters are capable of achieving much better vibration mitigation capability than conventional TLCD system.
---
paper_title: Active Control of Structures
paper_content:
Contents (chapter level): 1. Active Damping; 2. Active Isolation; 3. A Comparison of Passive, Active and Hybrid Control; 4. Vibration Control Methods and Devices; 5. Reduced-Order Model for Structural Control; 6. Active Control of Civil Structures; References; Index.
---
paper_title: Benchmark structural control problem for a seismically excited highway bridge-Part I: Phase I Problem definition
paper_content:
This paper presents the problem definition of the benchmark structural control problem for the seismically excited highway bridge. The benchmark problem is based on the newly constructed 91/5 highway over-crossing in southern California. The goal of this benchmark effort is to develop a standardized model of a highway bridge using which competing control strategies, including devices, algorithms and sensors, can be evaluated comparatively. To achieve this goal, a 3D finite-element model is developed in MATLAB to represent the complex behavior of the full-scale highway over-crossing. The nonlinear behavior of center columns and isolation bearings is considered in formulating the bilinear force–deformation relationship. The effect of soil–structure interaction is considered by modeling the interaction by equivalent spring and dashpot. The ground motions are considered to be applied simultaneously in two directions. A MATLAB-based nonlinear structural analysis tool has been developed and made available for nonlinear dynamic analysis. Control devices are assumed to be installed between the deck and the end abutments of the bridge. Evaluation criteria and control constraints are specified for the design of controllers. Passive, semi-active and active devices and algorithms can be used to study the benchmark model. The participants in this benchmark study are required to define their control devices, sensors and control algorithms, evaluate and report the results of their proposed control strategies. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: On the Isolation Properties of Semiactive Dampers
paper_content:
The primary purpose of this study is to evaluate and better understand the isolation properties of semiactive suspensions. Specifically, this study will answer the question regarding why semiactive dampers are able to isolate at frequencies well below those for passive dampers, even though they do not add any energy to the system. A single suspension model is used to derive and analytically evaluate the transmissibility properties of passive and semiactive dampers. The results show that for semiactive dampers, the frequency range of isolation and the transmissibility amplitude are functions of ξ, the damping ratio. In contrast, the isolation frequency range for passive dampers is completely independent of ξ. Furthermore, the results show that for sufficiently large ξ, semiactive dampers are able to provide isolation at all frequencies. This feature is useful for many applications, particularly for sensitive machinery that cannot tolerate any overshoot in power-up or power-down, and yet must have good isolation.
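The frequency dependence described above can be reproduced with textbook transmissibility formulas: the passive mount crosses unity at a frequency ratio of sqrt(2) regardless of the damping ratio, whereas an ideal sky-hook reference curve can stay below unity at all frequencies once the damping ratio is large enough. The sketch below evaluates both expressions; the sky-hook form is the usual ideal reference, not necessarily the exact semi-active model analyzed in the paper.

import numpy as np

def T_passive(r, zeta):
    """Absolute displacement transmissibility of a passive SDOF isolator."""
    return np.sqrt((1 + (2 * zeta * r) ** 2) / ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

def T_skyhook(r, zeta):
    """Transmissibility with an ideal sky-hook damper (damping to an inertial reference)."""
    return 1.0 / np.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

r = np.array([0.5, 1.0, np.sqrt(2.0), 2.0, 3.0])   # frequency ratios
for zeta in (0.2, 0.7):
    print("zeta = %.1f" % zeta)
    print("  passive :", np.round(T_passive(r, zeta), 3))
    print("  sky-hook:", np.round(T_skyhook(r, zeta), 3))
# The passive curve equals 1 at r = sqrt(2) for every zeta, while the sky-hook curve
# stays below 1 at all r once zeta is large enough (here zeta = 0.7).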
---
paper_title: Vibration control of mechanical systems using semi-active MR-damper
paper_content:
The concept of structural vibration control is to absorb vibration energy of the structure by introducing auxiliary devices. Various types of structural vibration control theories and devices have been recently developed and introduced into mechanical systems. One such device is a damper employing controllable fluids such as electrorheological (ER) or magnetorheological (MR) fluids. Magnetorheological (MR) materials are suspensions of fine magnetizable ferromagnetic particles in a non-magnetic medium exhibiting controllable rheological behaviour in the presence of an applied magnetic field. This paper presents the modelling of an MR fluid damper. The damper model is developed based on Newtonian shear flow and Bingham plastic shear flow models. The geometric parameters are varied to get the optimised damper characteristics. The numerical analysis is carried out to estimate the damping coefficient and damping force. The analytical results are compared with the experimental results. The results confirm that the MR damper is one of the most promising new semi-active devices for structural vibration control.
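A Bingham-plastic idealization of the MR damper force, of the kind used in the modelling described above, is easy to state in code; the coefficients below (post-yield damping, field-to-yield-force gain, force offset) are assumed illustrative values rather than identified parameters.

import numpy as np

def bingham_force(velocity, current, c0=800.0, alpha=300.0, f0=0.0):
    """Bingham-plastic idealization of an MR damper:
       F = f_y(I) * sign(v) + c0 * v + f0, with a field-dependent yield force f_y = alpha * I.
    """
    return alpha * current * np.sign(velocity) + c0 * velocity + f0

v = np.linspace(-0.2, 0.2, 5)          # piston velocity [m/s]
for I in (0.0, 1.0, 2.0):              # coil current [A]
    print("I = %.1f A -> F [N] =" % I, np.round(bingham_force(v, I), 1))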
---
paper_title: Nonlinear vibration of shallow cables with semiactive tuned mass damper
paper_content:
The nonlinear vibration of shallow cables, equipped with a semiactive control device is considered in this paper. The control device is represented by a tuned mass damper with a variable out-of-plane inclination. A suitable control algorithm is designed in order to regulate the inclination of the device and to dampen the spatial cable vibrations. Numerical simulations are conducted under free spatial oscillations through a nonlinear finite element model, solved in two different computational environments. A harmonic analysis, in the region of the primary resonance, is also performed through a control-oriented nonlinear Galerkin model, including detuning effects due to the cable slackening.
---
paper_title: Semi-active control of seismic response of tall buildings with podium structure using ER/MR dampers
paper_content:
A tall building with a large podium structure under earthquake excitation may suffer from a whipping effect due to the sudden change of building lateral stiffness at the top of the podium structure. This paper thus explores the possibility of using electrorheological (ER) dampers or magnetorheological (MR) dampers to connect the podium structure to the tower structure to prevent this whipping effect and to reduce the seismic response of both structures. A set of governing equations of motion for the tower–damper–podium system is first derived, in which the stiffness of the member connecting the ER/MR damper to the structures is taken into consideration. Based on the principle of instantaneous sub-optimal active control, a semi-active sub-optimal displacement control algorithm is then proposed. To demonstrate the effectiveness of semi-active control of the system under consideration, a 20-storey tower structure with a 5-storey podium structure subjected to earthquake excitation is finally selected as a numerical example. The results from the numerical example imply that, as a kind of intelligent control device, ER/MR dampers can significantly mitigate the seismic whipping effect on the tower structure and reduce the seismic responses of both the tower structure and the podium structure. Copyright © 2001 John Wiley & Sons, Ltd.
---
paper_title: A semi-active stochastic optimal control strategy for nonlinear structural systems with MR dampers
paper_content:
A non-clipped semi-active stochastic optimal control strategy for nonlinear structural systems with MR dampers is developed based on the stochastic averaging method and the stochastic dynamical programming principle. A nonlinear stochastic control structure is first modeled as a semi-actively controlled, stochastically excited and dissipated Hamiltonian system. The control force of an MR damper is separated into passive and semi-active parts. The passive control force components, coupled in structural mode space, are incorporated in the drift coefficients by directly using the stochastic averaging method. Then the stochastic dynamical programming principle is applied to establish a dynamical programming equation, from which the semi-active optimal control law is determined and implementable by MR dampers without clipping in terms of the Bingham model. Under the condition on the control performance function given in section 3, the expressions of the nonlinear and linear non-clipped semi-active optimal control force components are obtained, as well as the non-clipped semi-active LQG control force, and thus the value function and the semi-active nonlinear optimal control force are shown to exist under the developed strategy. An example of the controlled stochastic hysteretic column is given to illustrate the application and effectiveness of the developed semi-active optimal control strategy.
---
paper_title: Semi‐active neuro‐control for base‐isolation system using magnetorheological (MR) dampers
paper_content:
Vibration mitigation using smart, reliable and cost-effective mechanisms that require small activation power is the primary objective of this paper. A neural-network-based semi-active controller for a base-isolation structure equipped with a magnetorheological (MR) damper is presented and evaluated. An inverse neural network model (INV-MR) is constructed to replicate the inverse dynamics of the MR damper. Next, a linear quadratic Gaussian (LQG) controller is designed to produce the optimal control force. Thereafter, the LQG controller and the INV-MR models are linked to control the structure. The coupled LQG and INV-MR system was used to train a semi-active neuro-controller, designated as SA-NC, which produces the necessary control voltage that actuates the MR damper. To evaluate the proposed method, the SA-NC is compared to passive lead–rubber bearing isolation systems (LRBs). Results revealed that the SA-NC was quite effective in seismic response reduction for a wide range of motions from moderate to severe seismic events compared to the passive systems. In addition, the semi-active MR damper enjoys many desirable features, such as its inherent stability, practicality and small power requirements. The effectiveness of the SA-NC is illustrated and verified using the simulated response of a six-degree-of-freedom model of a base-isolated building excited by several historical earthquake records. Copyright © 2006 John Wiley & Sons, Ltd.
---
paper_title: A Comparative Study and Analysis of Semi-Active Vibration-Control Systems
paper_content:
Semi-active (SA) vibration-control systems are those in which otherwise passively generated damping or spring forces are modulated according to a parameter tuning policy with only a small amount of control effort. SA units, as their name implies, fill the gap between purely passive and fully active vibration-control systems and offer the reliability of passive systems, yet maintain the versatility and adaptability of fully active devices. During recent years there has been considerable interest in the practical implementation of these systems for their low energy requirement and cost. This paper briefly reviews the basic theoretical concepts for SA vibration-control design and implementation, and surveys recent developments and control techniques for these systems. Some related practical applications in vehicle suspensions are also presented.
---
paper_title: Dynamic modeling of large-scale magnetorheological damper systems for civil engineering applications
paper_content:
Magnetorheological (MR) dampers are one of the most promising new devices for structural vibration mitigation. Because of their mechanical simplicity, high dynamic range, low power requirements, large force capacity, and robustness, these devices have been shown to mesh well with earthquake and wind engineering application demands and constraints. Quasistatic models of MR dampers have been investigated by researchers. Although useful for damper design, these models are not sufficient to describe the MR damper behavior under dynamic loading. This paper presents a new dynamic model of the overall MR damper system which is comprised of two parts: (1) a dynamic model of the power supply and (2) a dynamic model of the MR damper. Because previous studies have demonstrated that a current-driven power supply can substantially reduce the MR damper response time, this study employs a current driver to power the MR damper. The operating principles of the current driver, and an appropriate dynamic model are provided. Subsequently, MR damper force response analysis is performed, and a phenomenological model based on the Bouc-Wen model is proposed to estimate the MR damper behavior under dynamic loading. This model accommodates the MR fluid stiction phenomenon, as well as fluid inertial and shear thinning effects. Compared with other types of models based on the Bouc-Wen model, the proposed model has been shown to be more effective, especially in describing the force rolloff in the low velocity region, force overshoots when velocity changes in sign, and two clockwise hysteresis loops at the velocity extremes.
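For comparison, the simple Bouc-Wen building block on which such phenomenological models are based can be integrated directly; the sketch below drives it with a sinusoidal piston motion. Parameter values are illustrative, and the paper's additional elements (current-driver dynamics, stiction, shear thinning) are not reproduced.

import numpy as np
from scipy.integrate import solve_ivp

# Simple Bouc-Wen MR-damper model (illustrative parameters only)
c0, k0, alpha = 500.0, 1000.0, 9.0e4
beta, gamma, A, n = 3.0e6, 3.0e6, 120.0, 2

w, amp = 2.0 * np.pi * 1.0, 0.01          # 1 Hz, 10 mm sinusoidal piston motion

def zdot(t, z):
    """Evolution of the hysteretic variable z under the prescribed piston velocity."""
    xd = amp * w * np.cos(w * t)
    return (-gamma * abs(xd) * z * abs(z) ** (n - 1) - beta * xd * abs(z) ** n + A * xd)

sol = solve_ivp(zdot, (0.0, 3.0), [0.0], max_step=1e-3, dense_output=True)
t = np.linspace(2.0, 3.0, 500)             # one steady-state cycle
x, xd = amp * np.sin(w * t), amp * w * np.cos(w * t)
z = sol.sol(t)[0]
F = c0 * xd + k0 * x + alpha * z           # total damper force
print("peak damper force over one cycle: %.1f N" % np.abs(F).max())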
---
paper_title: Nonsmooth Mechanics: Models, Dynamics and Control
paper_content:
Contents (chapter level): 1. Distributional model of impacts; 2. Approximating problems; 3. Variational principles; 4. Two bodies colliding; 5. Multiconstraint nonsmooth dynamics; 6. Generalized impacts; 7. Stability of nonsmooth dynamical systems; 8. Feedback control; with appendices on Schwartz distributions, measures and integrals, functions of bounded variation in time, and elements of convex analysis.
---
paper_title: Seismic control of smart base isolated buildings with new semiactive variable damper
paper_content:
A new semiactive independently variable damper, SAIVD, is developed and shown to be effective in achieving response reductions in smart base isolated buildings in near fault earthquakes. The semiactive device consists of four linear visco-elastic elements, commonly known as Kelvin-Voigt elements, arranged in a rhombus configuration. The magnitude of force in the semiactive device can be adjusted smoothly in real-time by varying the angle of the visco-elastic elements of the device or the aspect ratio of the rhombus configuration. Such a device is essentially linear, simple to construct, and does not present the difficulties commonly associated with modelling and analysing nonlinear devices (e.g. friction devices). The smooth semiactive force variation eliminates the disadvantages associated with rapid switching devices. Experimental results are presented to verify the proposed analytical model of the device. An H∞ control algorithm is implemented in order to reduce the response of base isolated buildings with variable damping semiactive control systems in near fault earthquakes. The central idea of the control algorithm is to design an H∞ controller for the structural system that serves as an aid in the determination of the optimum control force in the semiactive device. The relative performance of the SAIVD device is compared to a variable friction device, recently developed by the authors in a separate study, and several key aspects of performance are discussed regarding the use of the two devices for reducing the responses of smart base isolated buildings in near fault earthquakes.
---
paper_title: Hierarchical semi-active control of base-isolated structures using a new inverse model of magnetorheological dampers
paper_content:
Magnetorheological (MR) dampers have received special attention as semi-active devices for mitigation of structural vibrations. Because of the inherent nonlinearity of these devices, it is difficult to obtain a reasonable mathematical inverse model. This paper is concerned with two related concepts. On one hand, it presents a new inverse model of MR dampers based on the normalized Bouc-Wen model. On the other hand, it considers a hybrid seismic control system for building structures, which combines a class of passive nonlinear base isolator with a semi-active control system. In this application, the MR damper is used as a semi-active device in which the voltage is updated by a feedback control loop. The management of MR dampers is performed in a hierarchical way according to the desired control force, the actual force of the dampers and its capacity to react. The control is applied to a numerical three-dimensional benchmark problem which is used by the structural control community as a state-of-the-art model for numerical experiments of seismic control attenuation. The performance indices show that the proposed semi-active controller behaves satisfactorily.
---
paper_title: Design Optimization of Quarter-car Models with Passive and Semi-active Suspensions under Random Road Excitation
paper_content:
A methodology is presented for optimizing the suspension damping and stiffness parameters of nonlinear quarter-car models subjected to random road excitation. The investigation starts with car models involving passive damping with constant or dual-rate characteristics. Then, we also examine car models where the damping coefficient of the suspension is selected so that the resulting system approximates the performance of an active suspension system with sky-hook damping. For the models with semi-active or passive dual-rate dampers, the value of the equivalent suspension damping coefficient is a function of the relative velocity of the sprung mass with respect to the wheel subsystem. As a consequence, the resulting equations of motion are strongly nonlinear. For these models, appropriate methodologies are first employed for obtaining the second moment characteristics of motions resulting from roads with a random profile. This information is next utilized in the definition of a vehicle performance index, which is optimized to yield representative numerical results for the most important suspension parameters. Special attention is paid to investigating the effect of road quality as well as on examining effects related to wheel hop. Finally, a critical comparison is performed between the results obtained for vehicles with passive linear or bilinear suspension dampers and those obtained for cars with semi-active shock absorbers.
---
paper_title: Vehicle Evaluation of the Performance of Magneto Rheological Dampers for Heavy Truck Suspensions
paper_content:
This study is intended to complement many existing analytical studies in the area of semiactive suspensions by providing a field evaluation of semiactive magneto rheological (MR) primary suspensions for heavy trucks. A set of four controllable MR dampers are fabricated and used experimentally to test the effectiveness of a semiactive skyhook suspension on a heavy truck. In order to evaluate the performance of the semiactive suspensions, the performance of the truck equipped with the MR dampers is primarily compared with the performance of the truck equipped with the stock passive dampers. The performance of the semiactive system and the original passive system are compared for two different driving conditions. First, the truck is driven over a speed bump at approximately 8-11 km/h (5-7 mph) in order to establish a comparison between the performance of the MR and stock dampers to transient inputs at the wheels. Second, the truck is driven along a stretch of relatively straight and level highway at a constant speed of 100 km/h (62 mph) in order to compare the performance of the two types of dampers in steady state driving conditions. Acceleration data for both driving conditions are analyzed in both time and frequency domains. The data for the speed bumps indicate that the magneto rheological dampers used (with the skyhook control policy) in this study have a small effect on the vehicle body and wheel dynamics, as compared to the passive stock dampers. The highway driving data shows that magneto rheological dampers and the skyhook control policy are effective in reducing the root mean square (RMS) of the measured acceleration at most measurement points, as compared to the stock dampers.
---
paper_title: “Smart” Base Isolation Strategies Employing Magnetorheological Dampers
paper_content:
One of the most successful means of protecting structures against severe seismic events is base isolation. However, optimal design of base isolation systems depends on the magnitude of the design level earthquake that is considered. The features of an isolation system designed for an El Centro-type earthquake typically will not be optimal for a Northridge-type earthquake and vice versa. To be effective during a wide range of seismic events, an isolation system must be adaptable. To demonstrate the efficacy of recently proposed "smart" base isolation paradigms, this paper presents the results of an experimental study of a particular adaptable, or smart, base isolation system that employs magnetorheological (MR) dampers. The experimental structure, constructed and tested at the Structural Dynamics and Control/Earthquake Engineering Laboratory at the Univ. of Notre Dame, is a base-isolated two-degree-of-freedom building model subjected to simulated ground motion. A sponge-type MR damper is installed between the base and the ground to provide controllable damping for the system. The effectiveness of the proposed smart base isolation system is demonstrated for both far-field and near-field earthquake excitations.
---
paper_title: Response Control of Full-Scale Irregular Buildings Using Magnetorheological Dampers
paper_content:
This paper considers the capabilities of semiactive control systems using magnetorheological dampers when applied to numerical models of full scale asymmetric buildings. Two full scale building models exhibiting coupled lateral and torsional motions are studied. The first case considered is a nine-story building with an asymmetric structural plan. The footprint of this building is rectangular, but the asymmetry is due to the distribution of shear walls. The second case considered is an L-shaped, eight-story building with additional vertical irregularity due to setbacks. Linear, lumped-parameter models of the buildings are employed herein to evaluate the potential of the control system to effectively reduce the responses of the buildings. In each case a device placement scheme based on genetic algorithms is used to place the control devices effectively. The proposed control systems are evaluated by simulating the responses of the models due to the El Centro 1940 and the Kobe 1995 earthquakes. In the second case, simulations are conducted using two-dimensional ground motions. The performance of the proposed semiactive control systems are compared to that of both ideal active control systems and passive control systems.
---
paper_title: Semiactive Control of the 20-Story Benchmark Building with Piezoelectric Friction Dampers
paper_content:
A new control algorithm is proposed in this paper to control the responses of a seismically excited, nonlinear 20-story building with piezoelectric friction dampers. The passive friction damping mechanism is used for low-amplitude vibration while the active counterpart takes over for high-amplitude vibration. Both the stick and sliding phases of dampers are taken into account. To effectively mitigate the peak story drift and floor acceleration of the 20-story building, multiple dampers are placed on the 20-story building based on a sequential procedure developed for optimal performance of the dampers. Extensive simulations indicate that the proposed semiactive dampers can effectively reduce the seismic responses of the uncontrolled building with substantially less external power than its associated active dampers, for instance, 67% less under the 1940 El Centro earthquake when the passive friction force is equal to 10% of the damper capacity.
---
paper_title: Vibration control of a suspension system via a magnetorheological fluid damper
paper_content:
Semi-active control systems are becoming more popular because they offer both the reliability of passive systems and the versatility of active control systems without imposing heavy power demands. It has been found that magneto-rheological (MR) fluids can be designed to be very effective vibration control actuators. The MR fluid damper is a semi-active control device that uses MR fluids to produce a controllable damping force. The objective of this paper is to study a single-degree-of-freedom suspension system with an MR fluid damper for the purpose of vibration control. A mathematical model of the MR fluid damper is adopted. The model is compared with experimental results for a prototype damper through finding suitable model parameters. In this study, a sliding mode controller is developed by considering loading uncertainty to result in a robust control system. Two kinds of excitations are inputted in order to investigate the performance of the suspension system. The vibration responses are evaluated in both time and frequency domains. Compared to the passive system, the acceleration of the sprung mass is significantly reduced for the system with a controlled MR damper. Under random excitation, the ability of the MR fluid damper to reduce both the peak response and the root-mean-square response is also shown.
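A sliding-mode force command of the general kind described above can be sketched as follows; the surface slope, reaching gain, boundary-layer width and force capacity are assumed values, and the final clipping step simply enforces the dissipative-only constraint of a semi-active damper rather than the paper's specific MR damper model.

import numpy as np

# Sliding-mode force command for a single-DOF suspension (sketch; gains are illustrative).
# s = err_vel + lam * err_pos defines the sliding surface; the commanded force is then
# clipped to what a semi-active damper can actually produce (dissipative, bounded).
lam, eta, phi = 10.0, 1500.0, 0.05      # surface slope, reaching gain, boundary-layer width
F_MAX = 1200.0                          # damper force capacity [N]

def smc_force(err_pos, err_vel, rel_vel):
    s = err_vel + lam * err_pos
    f_des = -eta * np.clip(s / phi, -1.0, 1.0)     # boundary layer instead of sign(s) to limit chatter
    # semi-active constraint: the damper force must oppose the relative velocity
    if f_des * rel_vel > 0.0:
        return 0.0
    return float(np.clip(f_des, -F_MAX, F_MAX))

print(smc_force(err_pos=0.02, err_vel=0.1, rel_vel=0.3))    # realizable -> clipped force
print(smc_force(err_pos=0.02, err_vel=0.1, rel_vel=-0.3))   # not realizable -> zero force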
---
paper_title: Modeling and identification of a shear mode magnetorheological damper
paper_content:
Magnetorheological (MR) dampers have emerged recently as potential devices for vibration mitigation and semi-active control in smart structures and vehicle applications. These devices are highly nonlinear and thus accurate models of these devices are important for effective simulation and control system design. In the current literature, the Bouc–Wen model is coupled with linear elements to describe these MR devices both in simulation and control. In this paper, we propose the friction Dahl model to characterize the dynamics of a shear mode MR damper. This leads to a reinterpretation of the MR damper behavior as a frictional device whose friction parameters change with the voltage. An identification technique for this new model is proposed and tested numerically using an experimentally obtained model. A good match has been observed between the model obtained from experiments and the Dahl based model of the MR device.
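The Dahl building block referred to above, with a field-dependent Coulomb level standing in for the voltage dependence, can be integrated as follows; all coefficients are illustrative assumptions rather than identified values from the paper.

import numpy as np
from scipy.integrate import solve_ivp

# Dahl friction model with a voltage-dependent Coulomb level, as a stand-in for the
# shear-mode MR damper behaviour described above (coefficients are illustrative).
sigma = 5.0e5          # rest stiffness [N/m]
def F_c(voltage):      # field-dependent Coulomb force level [N]
    return 50.0 + 400.0 * voltage

w, amp, volt = 2.0 * np.pi * 1.0, 0.005, 1.5   # 1 Hz, 5 mm stroke, 1.5 V command

def dF(t, F):
    """Dahl evolution of the friction force under the prescribed piston velocity."""
    xd = amp * w * np.cos(w * t)
    return sigma * xd * (1.0 - F / F_c(volt) * np.sign(xd))

sol = solve_ivp(dF, (0.0, 3.0), [0.0], max_step=1e-4)
print("steady-state friction force bounds: %+.1f / %+.1f N"
      % (sol.y[0][sol.t > 2.0].min(), sol.y[0][sol.t > 2.0].max()))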
---
paper_title: Semi-active Vibration Control Schemes for Suspension Systems Using Magnetorheological Dampers
paper_content:
Three semi-active control methods are investigated for use in a suspension system using a commercial magnetorheological damper. The three control methods are the limited relative displacement method, the modified skyhook method, and the modified Rakheja-Sankar method. The method of averaging has been adopted to provide an analytical platform for analyzing the performance of the different control methods. The analytical results are verified using numerical simulation, and further are used to assess the efficiency of different control methods. An experimental test bed has been developed to examine the three control methods under sinusoidal and random excitations. Both analytical and experimental results confirm that the Rakheja-Sankar control and modified skyhook control methods significantly reduce the root-mean-square response of both the acceleration and relative displacement of the sprung mass, while the limited relative displacement controller can only control the relative displacement of the suspension system.
---
paper_title: Preliminary Design Procedure of MR Dampers for Controlling Seismic Response of Building Structures
paper_content:
In this paper, the preliminary design procedure of magnetorheological (MR) dampers is developed for controlling the building response induced by seismic excitation. The dynamic characteristics and control effects of the modeling methods of MR dampers such as Bingham, biviscous, hysteretic biviscous, simple Bouc–Wen, Bouc–Wen with mass element, and phenomenological models are investigated. Of these models, the hysteretic biviscous model, which is simple and capable of describing the hysteretic characteristics, is used for numerical studies. The capacity of the MR damper is determined as a portion of not the building weight but the lateral restoring force. A method is proposed for the optimal placement and number of MR dampers, and its effectiveness is verified by comparing it with the simplified sequential search algorithm. Numerical results indicate that the capacity, number and placement can be reasonably determined using the proposed design procedure.
---
paper_title: Benchmark Control Problems for Seismically Excited Nonlinear Buildings
paper_content:
This paper presents the problem definition and guidelines of a set of benchmark control problems for seismically excited nonlinear buildings. Focusing on three typical steel structures, 3-, 9-, and 20-story buildings designed for the SAC project for the Los Angeles, California region, the goal of this study is to provide a clear basis to evaluate the efficacy of various structural control strategies. A nonlinear evaluation model has been developed that portrays the salient features of the structural system. Evaluation criteria and control constraints are presented for the design problems. The task of each participant in this benchmark study is to define (including sensors and control algorithms), evaluate, and report on their proposed control strategies. These strategies may be either passive, active, semiactive, or a combination thereof. The benchmark control problems will then facilitate direct comparison of the relative merits of the various control strategies. To illustrate some of the design challenges, a sample control strategy employing active control with a linear quadratic Gaussian control algorithm is applied to the 20-story building.
---
paper_title: Semiactive Control Strategies for MR Dampers-Comparative Study
paper_content:
This paper presents the results of a study to evaluate the performance of a number of recently proposed semiactive control algorithms for use with multiple magnetorheological (MR) dampers. Various control algorithms used in recent semiactive control studies are considered including the Lyapunov controller, decentralized bang-bang controller, modulated homogeneous friction algorithm, and a clipped optimal controller. Each algorithm is formulated for use with the MR damper. Additionally, each algorithm uses measurements of the absolute acceleration and device displacements for determining the control action to ensure that the algorithms could be implemented on a physical structure. The performance of the algorithms is compared through a numerical example, and the advantages of each algorithm are discussed. The numerical example considers a six-story structure controlled with MR dampers on the lower two floors. In simulation, an El Centro earthquake is used to excite the system, and the reduction in the drifts of the structure is examined.
---
paper_title: MR seat suspension for vibration control of a commercial vehicle
paper_content:
This paper presents vibration control of a commercial vehicle utilising a magnetorheological (MR) seat damper. A cylindrical type of MR seat damper is manufactured and its field-dependent damping forces are experimentally evaluated. The MR seat damper is then incorporated with a full-vehicle model in which conventional passive dampers are installed for primary and cabin suspension, respectively. After formulating the governing equations of motion of the full-vehicle model, a semi-active skyhook controller is realised by adopting a hardware-in-the-loop-simulation (HILS) methodology. Control responses such as acceleration at the driver's seat are evaluated under both bump and random road conditions.
---
paper_title: FORCE FEEDBACK VERSUS ACCELERATION FEEDBACK IN ACTIVE VIBRATION ISOLATION
paper_content:
This paper compares the force feedback and acceleration feedback implementation of the sky-hook damper when it is used to isolate a flexible structure from a disturbance source. It is shown that the use of a force sensor produces always alternating poles and zeros in the open-loop transfer function between the force actuator and the force sensor, which guarantees the stability of the closed loop. On the contrary, the acceleration feedback produces alternating poles and zeros only when the flexible structure is stiff compared to the isolation system; this property is lost when the flexible modes of the sensitive payload interfere with the isolation system.
---
paper_title: Semi-active control of sliding isolated bridges using MR dampers: an experimental and numerical study
paper_content:
Sliding base-isolation systems used in bridges reduce pier drifts, but at the expense of increased bearing displacements under near-source pulse-type earthquakes. It is common practice to incorporate supplemental passive non-linear dampers into the isolation system to counter increased bearing displacements. Non-linear passive dampers can certainly reduce bearing displacements, but only with increased isolation level forces and pier drifts. The semi-active controllable non-linear dampers, which can vary damping in real time, can reduce bearing displacements without further increase in forces and pier drifts; and hence deserve investigation. In this study performance of such a 'smart' sliding isolation System, used in a 1:20 scaled bridge model, employing semi-active controllable magneto-rheological (MR) dampers is investigated, analytically and experimentally, under several near-fault earthquakes. A non-linear analytical model, which incorporates the non-linearities of sliding bearings and the MR damper, is developed. A Lyapunov control algorithm for control of the MR damper is developed and implemented in shake table tests. Analytical and shake table test results are compared. It is shown that the smart MR damper reduces bearing displacements further than the passive low- and high-damping cases, while maintaining isolation level forces less than the passive high-damping case.
---
paper_title: Adaptive Snubber-Type Magnetorheological Fluid-Elastomeric Helicopter Lag Damper
paper_content:
A snubber-type magnetorheological fluid-elastomeric lag damper is developed to provide adaptive lead-lag damping augmentation for a hingeless helicopter rotor. The magnetorheological fluid-elastomeric lag damper consists of a flow valve, a flexible snubber body, and a flexible center wall separating the body into two fluid chambers. Magnetorheological fluid enclosed in the snubber body can flow through two magnetorheological valves and be activated by a magnetic field in the valves. Consistent with the loading conditions for a helicopter lag damper, the magnetorheological fluid-elastomeric damper is tested under single and dual frequency excitations. The complex modulus method was used to compare the magnetorheological fluid-elastomeric device damping performance with the baseline passive Fluidlastic damper. A significant controllable damping range is observed as current is applied to the magnetorheological valve in the magnetorheological fluid-elastomeric damper. Furthermore, to account for the nonlinear hysteresis behavior of the magnetorheological fluid-elastomeric damper and estimate the damping force, a time-domain hydromechanical model is formulated based on lumped parameters. Model parameters are established using damper geometry, material properties, and experimental data. The model is then applied to simulate the force vs displacement response and force time history under both single and dual frequency excitations.
---
paper_title: Hybrid model predictive control application towards optimal semi-active suspension
paper_content:
The optimal control problem of a quarter-car semi-active suspension has been studied in the past. Considering that a quarter-car semi-active suspension can either be modelled as a linear system with a state-dependent constraint on the control (actuator force) input, or a bi-linear system with a control (variable damping coefficient) saturation, the seemingly simple problem poses several interesting questions and challenges. Does the saturated version of the optimal control law derived from the corresponding unconstrained system, i.e. “clipped-optimal”, remain optimal for the constrained case as suggested in some previous publications? Or should the optimal deviate from the “clipped-optimal” as suggested in other publications? If the optimal control law of the constrained system does deviate from its unconstrained counterpart, how different are they? What is the structure of the optimal control law? Does it retain the linear state feedback form (as in the unconstrained case)? In this paper, we attempt to answer these questions.
---
paper_title: Bifurcation and chaos in nonsmooth mechanical systems
paper_content:
Introduction to Discontinuous ODEs Mathematical Background for Multivalued Formulations Properties of Numerical Schemes Stick-Slip Oscillator with Two Degrees of Freedom Piecewise-linear Approximations Chua Circuit with Discontinuities One DOF Mechanical System with Friction A Mechanical System with 7 DOF Triple Pendulum with Impacts Analytical Prediction of Stick-Slip Chaos and other topics.
---
paper_title: OPTIMAL ACTIVE SUSPENSION DESIGN USING CONSTRAINED OPTIMIZATION
paper_content:
Abstract In a quarter-car vehicle model, optimal control schemes for an active suspension are designed using a constrained optimization procedure. The control laws obtained minimize the H2-norm of vehicle acceleration subject to constraints on r.m.s. values of the suspension stroke, tire deformation and actuator force. Constraints imposed on feedback coefficients define quasi-optimal control laws that show increased robustness to system parameter variations and disturbances. To impact body acceleration at frequencies near the unsprung mass mode, tire damping is introduced in the model. The optimal and quasi-optimal control schemes were partially verified on a quarter-car simulator with a random road input and the preliminary results are encouraging. The tests showed significant improvement (3–5 dB) of body acceleration response in the frequency range up to 25 Hz and increased robustness of the quasi-optimal control laws, that use lower amounts of spring cancellation.
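As a rough companion to the constrained-optimization design described above, the sketch below sets up a standard linear quarter-car model in Python and computes an LQR state-feedback gain; all parameter values and weights are hypothetical placeholders, and the quadratic penalties merely stand in for the paper's r.m.s. constraints on stroke, tire deformation and actuator force.

import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical quarter-car parameters (not taken from the paper)
ms, mu = 300.0, 40.0                     # sprung / unsprung mass [kg]
ks, kt, cs = 16000.0, 160000.0, 1000.0   # suspension stiffness, tire stiffness [N/m], damping [N s/m]

# State x = [suspension stroke, body velocity, tire deflection, wheel velocity]
A = np.array([[0.0, 1.0, 0.0, -1.0],
              [-ks/ms, -cs/ms, 0.0, cs/ms],
              [0.0, 0.0, 0.0, 1.0],
              [ks/mu, cs/mu, -kt/mu, -cs/mu]])
B = np.array([[0.0], [1.0/ms], [0.0], [-1.0/mu]])   # active actuator force input

# Quadratic weights acting as soft versions of the r.m.s. constraints
Q = np.diag([1.0e4, 10.0, 1.0e5, 1.0])
R = np.array([[1.0e-4]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal state feedback u = -K x
print("LQR gain:", K)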
---
paper_title: Visco-hyperelastic model for filled rubbers used in vibration isolation
paper_content:
The short time and cyclic behavior of filled rubbers used in vibration isolation in the frequency range 10^-2 to 10^2 rad/s is examined. A form of the free-energy function consistent with the assumption of an additive stress decomposition is employed. A constitutive law for the inelastic part of the stress is provided, in the form of an integro-differential equation, which involves the fractional order derivative of the internal variable. It is assumed that the volumetric response of the material is elastic. The elasticity of rubber is modeled following classical models (e.g., Rivlin, Ogden), extended to include compressibility. Step-by-step integration of the constitutive law is performed. Simple shear experiments are used to assess the capability of the model to capture essential response characteristics, such as stiffness reduction under cyclic loading of increasing amplitude and the variation of dissipated energy with amplitude and frequency.
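The constitutive law above involves a fractional-order derivative of an internal variable. As a generic illustration only (a textbook Grünwald-Letnikov discretization, not the paper's integration scheme), the fractional derivative of a sampled signal can be approximated as follows.

import numpy as np
from scipy.special import binom

def gl_fractional_derivative(x, alpha, dt):
    """Grunwald-Letnikov approximation of the order-alpha derivative of the
    uniformly sampled signal x; purely illustrative of the fractional term."""
    n = len(x)
    j = np.arange(n)
    w = (-1.0) ** j * binom(alpha, j)          # GL weights
    d = np.empty(n)
    for k in range(n):
        d[k] = np.dot(w[:k + 1], x[k::-1]) / dt ** alpha
    return d

# Example: half-order derivative of a slow sine sampled at 1 kHz
t = np.linspace(0.0, 1.0, 1000)
print(gl_fractional_derivative(np.sin(2 * np.pi * t), 0.5, t[1] - t[0])[:5])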
---
paper_title: SKYHOOK AND H(INFINITY) CONTROL OF SEMI-ACTIVE SUSPENSIONS: SOME PRACTICAL ASPECTS
paper_content:
This paper deals with single-wheel suspension car model. We aim to prove the benefits of controlled semi-active suspensions compared to passive ones. The contribution relies on H(infinity) control design to improve comfort and road holding of the car under industrial specifications, and on control validation through simulation on an exact nonlinear model of the suspension. Note that we define semi-active suspensions as control systems incorporating a parallel spring and an electronically controlled damper. However, the type of damper used in automotive industry can only dissipate energy. No additional force can be generated using external energy. The control issue is then to change, in an accurate way, the damping (friction) coefficient in real-time. This is what we call semi-active suspension. For this purpose, two control methodologies, H(infinity) and Skyhook control approaches, are developed, using a linear model of the suspension, and compared in terms of performances using industrial specifications. The performance analysis is done using the control-oriented linear model first, and then using an exact nonlinear model of the suspension incorporating the nonlinear characteristics of the suspension spring and damper. (A)
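For reference, the two-state skyhook logic discussed above can be written in a few lines; the damping coefficients below are hypothetical, and the H(infinity) design has no comparably simple closed form, so it is not sketched here.

def skyhook_force(body_vel, wheel_vel, c_min=300.0, c_max=3000.0):
    """Two-state skyhook switching: command high damping only when the
    achievable (dissipative) force opposes the absolute body velocity."""
    rel_vel = body_vel - wheel_vel            # damper relative velocity
    c = c_max if body_vel * rel_vel > 0.0 else c_min
    return -c * rel_vel                       # force applied to the sprung mass

# Example call at one sample instant (hypothetical velocities in m/s)
print(skyhook_force(body_vel=0.2, wheel_vel=-0.1))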
---
paper_title: MAGNETORHEOLOGICAL FLUID AND ELASTOMERIC LAG DAMPER FOR HELICOPTER STABILITY AUGMENTATION
paper_content:
The feasibility of utilizing a composite magnetorheological fluid plus elastomeric (MRFE) damper is assessed. To emulate the loading conditions for a helicopter lag damper, the MRFE damper emulation was subjected to single frequency (lag/rev) and dual frequency (lag/rev and 1/rev) sinusoidal loading, and equivalent viscous damping was used to compare the MRFE damping characteristics with a conventional elastomeric damper. The preliminary MRFE damper showed nonlinear behavior: damping was reduced as displacement amplitude increased. Upon application of a magnetic field, the damping level was controlled according to a specific damping objective as a function of the excitation amplitude. Under dual frequency conditions, damping degradation at lag frequency due to 1/rev motion was also mitigated by magnetic field input to the MR damper.
---
paper_title: Advanced Structural Dynamics and Active Control of Structures
paper_content:
Preface List of Symbols Chapter 1 Introduction to Structures (examples, definition, and properties) 1.1 Examples 1.1.1 A Simple Structure 1.1.2 A 2D Truss 1.1.3 A 3D Truss 1.1.4 A Beam 1.1.5 The Deep Space Network Antenna 1.1.6 The International Space Station Structure 1.2 Definition 1.3 Properties Chapter 2 Standard Models (how to describe typical structures) 2.1 Models of a Linear System 2.1.1 State-Space Representation 2.1.2 Transfer Function 2.2 Second-Order Structural Models 2.2.1 Nodal Models 2.2.2 Modal Models 2.3 State-Space Structural Models 2.3.1 Nodal Models 2.3.2 Models in Modal Coordinates 2.3.3 Modal Models Chapter 3 Special Models (how to describe less-common structures) 3.1 Models with Rigid Body Modes 3.2 Models with Accelerometers 3.2.1 State-Space Representation 3.2.2 Second-Order Representation 3.2.3 Transfer Function 3.3 Models with Actuators 3.3.1 Model with Proof-Mass Actuators 3.3.2 Model with Inertial Actuators 3.4 Models with Small Non-Proportional Damping 3.5 Generalized Model 3.5.1 State-Space Representation 3.5.2 Transfer Function 3.6 Discrete-Time Models 3.6.1 State-Space Representation 3.6.2 Transfer Function Chapter 4 Controllability and Observability (how to excite and monitor a structure) 4.1 Definition and Properties 4.1.1 Continuous-Time Systems 4.1.2 Discrete-Time Systems 4.1.3 Relationship between Continuous- and Discrete-Time Grammians 4.2 Balanced Representation 4.3 Balanced Structures with Rigid Body Modes 4.4 Input and Output Gains 4.5 Controllability and Observability of a Structural Modal Model 4.5.1 Diagonally Dominant Grammians 4.5.2 Closed-Form Grammians 4.5.3 Approximately Balanced Structure in Modal Coordinates 4.6 Controllability and Observability of a Second-Order Modal Model 4.6.1 Grammians 4.6.2 Approximately Balanced Structure in Modal Coordinates 4.7 Three Ways to Compute Hankel Singular Values 4.8 ...
---
paper_title: Vibration control of civil structures using piezoceramic smart materials : A review
paper_content:
Abstract A review is presented for vibration suppression of civil structures. Special emphasis is laid upon smart structures with piezoelectric control actuation. The last decade has seen spiraling efforts going on around the world into development of the smart structures field. The success of these smart structures is orchestrated by the materials, such as piezoceramics, shape memory alloys, controllable fluids such as magneto-rheological fluids and electro-rheological fluids, fiber-optic sensors and various other materials. Piezoceramics have been known as low-cost, lightweight, and easy-to-implement materials for active control of structural vibration. Piezoceramics are available in various forms such as rigid patch, flexible patch, stack, Macro-Fiber Composite (MFC) actuator, and piezoceramic friction dampers. Piezoelectric patch actuators can be surface bonded to high strain areas of the structure with minimal modification of the original structure or they can be embedded into such as composites structures. On the other hand, stack type actuators can be incorporated into the structures, which require high control forces and micron level displacements, with slight modifications. This paper first presents basics about piezoceramic materials, various actuation methods and types of piezoceramic actuators. Then this paper reviews research into the application of piezoceramic actuators in various civil structures such as beams, trusses, steel frames and cable-stayed bridges.
---
paper_title: Inverse optimal control of nonlinear systems with structural uncertainty
paper_content:
Inverse optimal control for nonlinear systems with structural uncertainty is considered. Based on the control Lyapunov function, a theorem for the globally asymptotic stability is presented. From this a less conservative condition for the inverse optimal control is derived. The result is used to design an inverse optimal controller for a class of nonlinear systems, that improves and extends the existing results. The class of nonlinear systems considered is also enlarged. The simulation results show the effectiveness of the method.
---
paper_title: Stability of a Lyapunov controller for a semi-active structural control system with nonlinear actuator dynamics
paper_content:
Abstract We investigate semi-active control for a wide class of systems with scalar nonlinear semi-active actuator dynamics and consider the problem of designing control laws that guarantee stability and provide sufficient performance. Requiring the semi-active actuator to satisfy two general conditions, we present a method for designing quickest descent controllers generated from quadratic Lyapunov functions that guarantee asymptotic stability within the operating range of the semiactive device for the zero disturbance case. For the external excitation case, bounded-input, bounded-output stability is achieved and a stable attractor (ball of ultimate boundedness) of the system is computed based on the upper bound of the disturbances. We show that our wide class of systems covers, in particular, two nonlinear actuator models from the literature. Tuning the performance of the simple Lyapunov controllers is straightforward using either modal or state penalties. Simulation results are presented which indicate that the Lyapunov control laws can be selected to provide similar decay rates as a “time-optimal” controller for a semi-actively controlled single degree of freedom structure with no external excitation.
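A minimal sketch of the quickest-descent idea (not the authors' exact formulation): given V = x'Px, the only control-dependent part of dV/dt is 2 x'P B f, so at each sample the admissible (dissipative) damper force that makes this term most negative is selected. All matrices and coefficients below are hypothetical.

import numpy as np

def quickest_descent_force(x, P, B, rel_vel, c_levels=(300.0, 3000.0)):
    """Choose, among the dissipative forces the semi-active device can
    realize (f = -c * rel_vel for the available damping levels), the one
    minimizing the control-dependent part of dV/dt = 2 x'P(Ax + Bf)."""
    grad = 2.0 * float(x @ P @ B)                  # coefficient of f in dV/dt
    candidates = [-c * rel_vel for c in c_levels]  # realizable forces only
    return min(candidates, key=lambda f: grad * f)

# Example with hypothetical two-state data
x = np.array([0.01, -0.3])
P = np.array([[400.0, 1.0], [1.0, 2.0]])
B = np.array([0.0, 1.0])
print(quickest_descent_force(x, P, B, rel_vel=-0.3))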
---
paper_title: Bistable composite flap for an airfoil
paper_content:
A study was conducted to address the challenges associated with investigating a bistable composite flap for an airfoil. A full-scale rotor blade section with a span of 2.114 m and a chord of 0.68 m, fitted with a 1 m span flap was wind-tunnel tested up to a speed of 60 m/s with the flap moving between two stable states for various angles of attack. The blade was approximated as a NACA 24016 section with a 20% chord trailing-edge flap to simplify the analysis. The trailing-edge flap was designed to change between its stable geometries between hover and forward flight conditions for aerodynamic performance improvements. The flap was driven by an electromechanical actuator that was mounted inside the blade D-spar at the leading edge. All of the rotor blade structure remote from this bistable flap region was unmodified and assumed to be completely rigid during wind-tunnel testing.
---
paper_title: CONSTRUCTION OF AN ACTIVE SUSPENSION SYSTEM OF A QUARTER CAR MODEL USING THE CONCEPT OF SLIDING MODE CONTROL
paper_content:
This paper is concerned with the construction of an active suspension system for a quarter car model using the concept of sliding mode control. The active control is derived by the equivalent control and switching function where the sliding surface is obtained by using Linear quadratic control (LQ control) theory. The active control is generated with non-negligible time lag by using a pneumatic actuator, and the road profile is estimated by using the minimum order observer based on a linear system transformed from the exact non-linear system. The experimental result indicates that the proposed active suspension system is more effective in the vibration isolation of the car body than the linear active suspension system based on LQ control theory and the passive suspension system.
---
paper_title: Nonlinear Dynamical Control Systems
paper_content:
Contents: Introduction.- Manifolds, Vectorfields, Lie Brackets, Distributions.- Controllability and Observability, Local Decompositions.- Input-Output Representations.- State Space Transformation and Feedback.- Feedback Linearization of Nonlinear Systems.- Controlled Invariant Distribution and the Disturbance Decoupling Problem.- The Input-Output Decoupling Problem: Geometric Considerations.- Local Stability and Stabilization of Nonlinear Systems.- Controlled Invariant Submanifolds and Nonlinear Zero Dynamics.- Mechanical Nonlinear Control Systems.- Controlled Invariance and Decoupling for General Nonlinear Systems.- Discrete-Time Nonlinear Control Systems.- Subject Index.
---
paper_title: Spatial Control of Vibration
paper_content:
Modelling.- Spatial norms and model reduction.- Model correction.- Spatial control.- Optimal placement of actuators and sensors.- System identification for spatially distributed systems.
---
paper_title: ACTUATOR AND SENSOR PLACEMENT FOR STRUCTURAL TESTING AND CONTROL
paper_content:
Abstract In this paper, the actuator and sensor locations of a structural test item are selected as a replacement of the disturbance inputs and the performance outputs of a real structure. The most straightforward approach is to place sensors in the areas of performance evaluation, and actuators in the areas of disturbance action. However, this solution is rarely possible due to technical and economical reasons. Therefore, the actuators and sensors need to be placed in preselected regions, and should duplicate as close as possible the disturbance action and the performance measurements. In this paper a placement problem with non-collocated actuators and disturbances, as well as non-collocated performance and sensor outputs is solved. The solution is determined by locating sensors (actuators) such that the Hankel singular value vector of a structure from actuator inputs to sensor outputs is closely correlated with the Hankel singular value vector of the structure from the disturbance inputs to performance outputs. It is shown that this approach improves additionally the cross-coupling between actuators and performance, and between disturbances and the sensors, thus improving overall closed loop performance. The method is illustrated with the determination of sensors of a truss structure, where two selected sensors replaced an original set of 36 sensors.
---
paper_title: New Semi-active Multi-modal Vibration Control Using Piezoceramic Components
paper_content:
Active vibration control using piezoelectric elements has been extensively studied due to the requirement for increasingly high performances. Semi-active control, such as Synchronized Switch Damping, is an alternative technique. It consists in switching a piezoelectric element to a specific circuit synchronously with the motion of the structure, unlike active control. This method requires very low power supply, but performances remain poor in the case of broad bandwidth excitation. This article proposes a new method which is a combination of the SSDI semi-active control and technique developed for active control and has low power supply requirements. It extends semi-active control to any type of excitation, while optimizing modal damping on several targeted modes. Experimental measurements carried out on a clamped free beam are presented and a significant damping on targeted modes is demonstrated.
---
paper_title: Semiactive Control of the 20-Story Benchmark Building with Piezoelectric Friction Dampers
paper_content:
A new control algorithm is proposed in this paper to control the responses of a seismically excited, nonlinear 20-story building with piezoelectric friction dampers. The passive friction damping mechanism is used for low-amplitude vibration while the active counterpart takes over for high-amplitude vibration. Both the stick and sliding phases of dampers are taken into account. To effectively mitigate the peak story drift and floor acceleration of the 20-story building, multiple dampers are placed on the 20-story building based on a sequential procedure developed for optimal performance of the dampers. Extensive simulations indicate that the proposed semiactive dampers can effectively reduce the seismic responses of the uncontrolled building with substantially less external power than its associated active dampers, for instance, 67% less under the 1940 El Centro earthquake when the passive friction force is equal to 10% of the damper capacity.
---
paper_title: Vibration With Control
paper_content:
Preface. 1. SINGLE DEGREE OF FREEDOM SYSTEMS. Introduction. Spring-Mass System. Spring-Mass-Damper System. Forced Response. Transfer Functions and Frequency Methods. Measurement and Testing. Stability. Design and Control of Vibrations. Nonlinear Vibrations. Computing and Simulation in Matlab. Chapter Notes. References. Problems. 2. LUMPED PARAMETER MODELS. Introduction. Classifications of Systems. Feedback Control Systems. Examples. Experimental Models. Influence Methods. Nonlinear Models and Equilibrium. Chapter Notes. References. Problems. 3. MATRICES AND THE FREE RESPONSE. Introduction. Eigenvalues and Eigenvectors. Natural Frequencies and Mode Shapes. Canonical Forms. Lambda Matrices. Oscillation Results. Eigenvalue Estimates. Computational Eigenvalue Problems in Matlab. Numerical Simulation of the Time Response in Matlab. Chapter Notes. References. Problems. 4. STABILITY. Introduction. Lyapunov Stability. Conservative Systems. Systems with Damping. Semidefinite Damping . Gyroscopic Systems. Damped Gyroscopic Systems. Circulatory Systems. Asymmetric Systems. Feedback Systems. Stability in the State Space. Stability Boundaries. Chapter Notes. References. Problems. 5. FORCED RESPONSE OF LUMPED PARAMETER SYSTEMS. Introduction. Response via State Space Methods. Decoupling Conditions and Modal Analysis. Response of Systems with Damping. Bounded-Input, Bounded-Output Stability. Response Bounds. Frequency Response Methods. Numerical Simulations in Matlab. Chapter Notes. References. Problems. 6. DESIGN CONSIDERATIONS. Introduction. Isolators and Absorbers. Optimization Methods. Damping Design. Design Sensitivity and Redesign. Passive and Active Control. Design Specifications. Model Reduction. Chapter Notes. References. Problems. 7. CONTROL OF VIBRATIONS. Introduction. Controllability and Observability. Eigenstructure Assignment. Optimal Control. Observers (Estimators). Realization. Reduced-Order Modeling. Modal Control in State Space. Modal Control in Physical Space. Robustness. Positive Position Feedback Control. Matlab Commands for Control Calculations. Chapter Notes. References. Problems. 8. VIBRATION MEASUREMENT. Introduction. Measurement Hardware. Digital Signal Processing. Random Signal Analysis. Modal Data Extraction (Frequency Domain). Modal Data Extraction (Time Domain). Model Identification. Model Updating. Chapter Notes. References. Problems. 9. DISTRIBUTED PARAMETER MODELS. Introduction. Vibrations of Strings. Rods and Bars. Vibration of Beams. Membranes and Plates. Layered Materials. Viscous Damping. Chapter Notes. References. Problems. 10. FORMAL METHODS OF SOLUTION. Introduction. Boundary Value Problems and Eigenfunctions. Modal Analysis of the Free Response. Modal Analysis in Damped Systems. Transform Methods. Green's Functions. Chapter Notes. References. Problems. 11. OPERATORS AND THE FREE RESPONSE. Introduction. Hilbert Spaces. Expansion Theorems. Linear Operators. Compact Operators. Theoretical Modal Analysis. Eigenvalue Estimates. Enclosure Theorems. Oscillation Theory. Chapter Notes. References. Problems. 12. FORCED RESPONSE AND CONTROL. Introduction. Response by Modal Analysis. Modal Design Criteria. Combined Dynamical Systems. Passive Control and Design. Distribution Modal Control. Nonmodal Distributed Control. State Space Control Analysis. Chapter Notes. References. Problems. 13. APPROXIMATIONS OF DISTRIBUTED PARAMETER MODELS. Introduction. Modal Truncation. Rayleigh- Ritz-Galerkin Approximations. Finite Element Method. Substructure Analysis. 
Truncation in the Presence of Control. Impedance Method of Truncation and Control. Chapter Notes. References. Problems. APPENDIX A: COMMENTS ON UNITS. APPENDIX B: SUPPLEMENTARY MATHEMATICS. Index.
---
paper_title: An Optimal Nonlinear Feedback Control Strategy for Randomly Excited Structural Systems
paper_content:
A strategy for optimal nonlinear feedback control of randomly excited structural systems is proposed based on the stochastic averaging method for quasi-Hamiltonian systems and the stochastic dynamic programming principle. A randomly excited structural system is formulated as a quasi-Hamiltonian system and the control forces are divided into conservative and dissipative parts. The conservative parts are designed to change the integrability and resonance of the associated Hamiltonian system and the energy distribution among the controlled system. After the conservative parts are determined, the system response is reduced to a controlled diffusion process by using the stochastic averaging method. The dissipative parts of control forces are then obtained from solving the stochastic dynamic programming equation. Both the responses of uncontrolled and controlled structural systems can be predicted analytically. Numerical results for a controlled and stochastically excited Duffing oscillator and a two-degree-of-freedom system with linear springs and linear and nonlinear dampings show that the proposed control strategy is very effective and efficient.
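The analytical results of the paper are not reproduced here, but the kind of system it treats can be simulated directly. The sketch below integrates a randomly excited Duffing oscillator by Euler-Maruyama with a simple dissipative velocity feedback standing in for the dynamic-programming control law; all parameters are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 20000
k, a, c, sigma = 1.0, 0.5, 0.05, 0.3   # linear/cubic stiffness, damping, noise intensity
g = 0.4                                 # dissipative feedback gain (placeholder)

x, v = 1.0, 0.0
energy = []
for _ in range(n_steps):
    u = -g * v                          # stand-in for the optimal dissipative control part
    dW = rng.normal(0.0, np.sqrt(dt))
    v += (-k * x - a * x**3 - c * v + u) * dt + sigma * dW   # Euler-Maruyama step
    x += v * dt
    energy.append(0.5 * v**2 + 0.5 * k * x**2 + 0.25 * a * x**4)
print("mean energy over run:", np.mean(energy))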
---
paper_title: Stable Feedback Control of Linear Distributed Parameter Systems: Time and Frequency Domain Conditions
paper_content:
Abstract Large space structures, or any mechanically flexible structures, are inherently distributed parameter systems (DPSs) whose dynamics are modeled by partial, rather than ordinary, differential equations. Such DPSs are described by operator equations on an infinite-dimensional Hilbert (or Banach) space. However, any feedback controller for such a DPS must be a finite-dimensional (and discrete-time) system to be implemented with on-line digital computers and a finite (small) number of actuators and sensors. There are many ways to synthesize such controllers; we will emphasize the Galerkin or finite-element approach. Although the overall performance of finite-dimensional controllers is important, the first consideration is their stability in closed-loop with the actual DPS. The analysis of DPSs makes use of the theory of semigroups on the infinite-dimensional state space. We will present stability bounds in both the time and frequency domains for infinite-dimensional systems. Currently, the frequency domain approach appears to yield more easily tested stability conditions than the time domain approach; however, we will show some relationships between these two methods and emphasize the role played by the DPS semigroup and its properties. It seems to us that such stability conditions are essential for the planning and successful operation of complex systems like large aerospace structures.
---
paper_title: Constant-Gain Linear Feedback Control of Piecewise Linear Structural Systems via Nonlinear Normal Modes
paper_content:
We present a technique for using constant-gain linear position feedback control to implement eigenstructure assignment of n-degrees-of-freedom conservative structural systems with piecewise linear nonlinearities. We employ three distinct control strategies which utilize methods for approximating the nonlinear normal mode (NNM) frequencies and mode shapes. First, the piecewise modal method (PMM) for approximating NNM frequencies is used to determine n constant actuator gains for eigenvalue (pole) placement. Secondly, eigenvalue placement is accomplished by finding an approximate single-degree-of-freedom reduced model with one actuator gain for the mode to be controlled. The third strategy allows the frequencies and mode shapes (eigenstructure) to be placed by using a full n × n matrix of actuator gains and employing the local equivalent linear stiffness method (LELSM) for approximating NNM frequencies and mode shapes. The techniques are applied to a two-degrees-of-freedom system with two distinct types of nonlinearities: a bilinear clearance nonlinearity and a symmetric deadzone nonlinearity.
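The linear eigenvalue-placement step underlying the strategies above can be illustrated with SciPy's pole-placement routine on a two-degree-of-freedom model in which the nonlinear element has been replaced by an equivalent linear stiffness. The masses, stiffnesses and target poles below are hypothetical, and the NNM/LELSM approximations of the paper are not implemented.

import numpy as np
from scipy.signal import place_poles

m1, m2 = 1.0, 1.0
k1, k2 = 100.0, 80.0      # k2 plays the role of an equivalent linear stiffness
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-(k1 + k2) / m1, k2 / m1, 0.0, 0.0],
              [k2 / m2, -k2 / m2, 0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [1.0 / m1, 0.0],
              [0.0, 1.0 / m2]])          # one actuator per mass
target_poles = [-0.5 + 8.0j, -0.5 - 8.0j, -1.0 + 15.0j, -1.0 - 15.0j]
K = place_poles(A, B, target_poles).gain_matrix   # u = -K x places the closed-loop eigenvalues
print(K)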
---
paper_title: Application of Nonlinear Control Theory to Electronically Controlled Suspensions
paper_content:
This paper illustrates the use of nonlinear control theory for designing electro-hydraulic active suspensions. A nonlinear, “sliding” control law is developed and compared with the linear control of a quarter-car active suspension system acting under the effects of Coulomb friction. A comparison is also made with a passive quarter-car suspension system. Simulation and experimental results show that nonlinear control performs better than PID control and improves the ride quality compared to a passive suspension.
---
paper_title: Feedback control of flexible systems
paper_content:
Feedback control is developed for the class of flexible systems described by the generalized wave equation with damping. The control force distribution is provided by a number of point force actuators and the system displacements and/or their velocities are measured at various points. A feedback controller is developed for a finite number of modes of the flexible system and the controllability and observability conditions necessary for successful operation are displayed. The control and observation spillover due to the residual (uncontrolled) modes is examined and the combined effect of control and observation spillover is shown to lead to potential instabilities in the closed-loop system. Some remedies for spillover, including a straightforward phase-locked loop prefilter, are suggested to remove the instability mechanism. The concepts of this paper are illustrated by some numerical studies on the feedback control of a simply-supported Euler-Bernoulli beam with a single actuator and sensor.
---
paper_title: Adaptive Control of Flexible Structures Using a Nonlinear Vibration Absorber
paper_content:
A nonlinear adaptive vibration absorber to control the vibrations of flexible structures is investigated. The absorber is based on the saturation phenomenon associated with dynamical systems possessing quadratic nonlinearities and a two-to-one internal resonance. The technique is implemented by coupling a second-order controller with the structure through a sensor and an actuator. Energy is exchanged between the structure and the controller and, near resonance, the structure's response saturates to a small value. Experimental results are presented for the control of a rectangular plate and a cantilever beam using piezoelectric ceramics and magnetostrictive alloys as actuators. The control technique is implemented using a digital signal processing board and a modeling software. The control strategy is made adaptive by incorporating an efficient frequency-measurement technique. This is validated by successfully testing the control strategy for a nonconventional problem, where nonlinear effects hinder the application of the nonadaptive controller.
---
paper_title: Automotive active suspensions Part 1: Basic principles
paper_content:
Abstract Automotive suspension design is a compromise brought about by the conflicting demands of ride and handling. The past few years have seen the introduction of increasingly sophisticated, electronically controlled components into automotive suspensions which redefine the boundaries of the compromise. The paper has been written in two parts. This first part reviews the compromises which are required in the design of a conventional passive suspension. It then goes on to show how those compromises can be changed by the inclusion of active components. The second part discusses the hardware employed, which ranges from simple switched dampers, through semi-active dampers and low bandwidth/soft active suspensions, to high bandwidth/stiff active suspensions. The benefits to be derived from each of the technologies will be assessed, together with their strengths and weaknesses.
---
paper_title: Control System Design
paper_content:
From the Publisher: A key aspect of the book is the frequent use of real world design examples drawn directly from the authors' industrial experience. These are represented by over 15 substantial case studies ranging from distillation columns to satellite tracking. The book is also liberally supported by modern teaching aids available on both an accompanying CD-ROM and Companion Website. Resources to be found there include MATLAB® routines for all examples; extensive PowerPoint lecture notes based on the book; and a totally unique Java Applet-driven "virtual laboratory" that allows readers to interact with the real-world case studies.
---
paper_title: Nonlinear Systems: Analysis, Stability, and Control
paper_content:
1 Linear vs. Nonlinear.- 2 Planar Dynamical Systems.- 3 Mathematical Background.- 4 Input-Output Analysis.- 5 Lyapunov Stability Theory.- 6 Applications of Lyapunov Theory.- 7 Dynamical Systems and Bifurcations.- 8 Basics of Differential Geometry.- 9 Linearization by State Feedback.- 10 Design Examples Using Linearization.- 11 Geometric Nonlinear Control.- 12 Exterior Differential Systems in Control.- 13 New Vistas: Multi-Agent Hybrid Systems.- References.
---
paper_title: Mixing Rules for the Piezoelectric Properties of Macro Fiber Composites
paper_content:
This article focuses on the modeling of structures equipped with Macro Fiber Composite (MFC) transducers. Based on the uniform field method under the plane stress assumption, we derive analytical mixing rules in order to evaluate equivalent properties for d31 and d33 MFC transducers. In particular, mixing rules are derived for the longitudinal and transverse piezoelectric coefficients of MFCs. These mixing rules are validated using finite element computations and experimental results available from the literature.
---
paper_title: A class of proportional-integral sliding mode control with application to active suspension system
paper_content:
The purpose of this paper is to present a new robust strategy in controlling the active suspension system. The strategy utilized the proportional-integral sliding mode control scheme. A quarter-car model is used in the study and the performance of the controller is compared to the linear quadratic regulator and with the existing passive suspension system. A simulation study is performed to prove the effectiveness and robustness of the control approach.
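A bare-bones sketch of a proportional-integral sliding surface with a boundary-layer switching term is given below; the surface vector, gains and the use of a saturation (rather than pure sign) function are illustrative choices, not the paper's exact control law.

import numpy as np

def pi_sliding_control(x, z, C, ki, k_sw, phi):
    """s = C x + ki * z, with z the running integral of the regulated output;
    a saturated switching term limits chattering inside the boundary layer
    of width phi. All gains are hypothetical placeholders."""
    s = float(C @ x) + ki * z
    sat = np.clip(s / phi, -1.0, 1.0)
    return -k_sw * sat

# One control update for a quarter-car-like state [stroke, body vel, tire defl, wheel vel]
x = np.array([0.02, -0.15, 0.005, 0.1])
z = 0.001                           # integral of the suspension stroke
C = np.array([50.0, 10.0, 0.0, 0.0])
print(pi_sliding_control(x, z, C, ki=5.0, k_sw=800.0, phi=0.05))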
---
paper_title: SKYHOOK AND H(INFINITY) CONTROL OF SEMI-ACTIVE SUSPENSIONS: SOME PRACTICAL ASPECTS
paper_content:
This paper deals with single-wheel suspension car model. We aim to prove the benefits of controlled semi-active suspensions compared to passive ones. The contribution relies on H(infinity) control design to improve comfort and road holding of the car under industrial specifications, and on control validation through simulation on an exact nonlinear model of the suspension. Note that we define semi-active suspensions as control systems incorporating a parallel spring and an electronically controlled damper. However, the type of damper used in automotive industry can only dissipate energy. No additional force can be generated using external energy. The control issue is then to change, in an accurate way, the damping (friction) coefficient in real-time. This is what we call semi-active suspension. For this purpose, two control methodologies, H(infinity) and Skyhook control approaches, are developed, using a linear model of the suspension, and compared in terms of performances using industrial specifications. The performance analysis is done using the control-oriented linear model first, and then using an exact nonlinear model of the suspension incorporating the nonlinear characteristics of the suspension spring and damper. (A)
---
paper_title: Adaptive Model Inversion Control of a Helicopter with Structural Load Limiting
paper_content:
An adaptive control system capable of providing consistent handling qualities throughout the operational flight envelope is a desirable feature for rotorcraft. The adaptive model inversion controller with structural load limit protection evaluated here offers the capability to adapt to changing flight conditions along with aggressive maneuvering without envelope limit violations. The controller was evaluated using a nonlinear simulation model of the UH-60 helicopter. The controller is based on a well-documented model inversion architecture with an adaptive neural network (ANN) to compensate for inversion error. The ANN was shown to improve the tracking ability of the controller at off-design point flight conditions; although at some flight conditions the controller performed well even without adaptation. The controller was modified to include a structural load-limiting algorithm to avoid exceeding prescribed limits on the longitudinal hub moment. The limiting was achieved by relating the hub moment to pitch acceleration. The acceleration limits were converted to pitch angle command limits imposed in the pitch axis command filter. Results show that the longitudinal hub moment response in aggressive maneuvers stayed within the prescribed limits for a range of operating conditions. The system was effective in avoiding the longitudinal hub moment limit without unnecessary restrictions on the aircraft performance.
---
paper_title: HYBRID WAVE/MODE ACTIVE VIBRATION CONTROL
paper_content:
A hybrid approach to active vibration control is described in this paper. It combines elements of both wave and mode approaches to active control and is an attempt to improve on the performance of these approaches individually. In the proposed hybrid approach, wave control is first applied at one or more points in the structure. It is designed on the basis of the local behaviour of the structure and is intended to absorb vibrational energy, especially at higher frequencies. Then modal control is applied, being designed on the basis of the modified global equations of motion of the structure-plus-wave controller. These are now normally non-self-adjoint. Since the higher order modes are relatively well damped, hybrid control improves the model accuracy and the robustness of the system and gives better broadband vibration attenuation performance. Hybrid wave/mode active vibration control is described with specific reference to the control of a cantilever beam. The particular case considered is that of collocated, point force/sensor feedback wave control combined with modal control designed using pole placement. Numerical and experimental results are presented.
---
paper_title: Adaptive positive position feedback for actively absorbing energy in acoustic cavities
paper_content:
A method for adaptive energy absorption in acoustic cavities is presented. The method is based on an adaptive scheme consisting of a self-tuning regulator that has the ability to target multiple modes with a single actuator. The inner control loop of the self-tuning regulator uses positive position feedback in series with a high- and low-pass Butterworth filters for each controlled mode. The outer loop consists of an algorithm that locates the zero frequencies of the collocated signal and uses these values to update the resonance frequency of the positive position feedback filter and the cut-off and cut-on frequencies of the Butterworth filters. Experimental results are provided that show how less than a 10 percent change in the frequencies of the acoustic modes of the experimental setup will cause a non-adaptive controller (using positive position feedback and Butterworth filters) to go unstable, but the self-tuning regulator will maintain stability and continue absorbing energy through a 20 percent change in the frequencies of the acoustic modes.
---
paper_title: Positive position feedback control for large space structures
paper_content:
A new technique for vibration suppression in large space structures is investigated in laboratory experiments on a thin cantilever beam. The technique, called Positive Position Feedback (PPF), makes use of generalized displacement measurements to accomplish vibration suppression. Several features of Positive Position Feedback make it attractive for the large space structure control environment. The realization of the controller is simple and straightforward. Global stability conditions can be derived which are independent of the dynamical characteristics of the structure being controlled, i.e., all spillover is stabilizing. Furthermore, the method can be made insensitive to finite actuator dynamics, and is amenable to a strain-based sensing approach. The experiments described here control the first six bending modes of a cantilever beam, and make use of piezoelectric materials for actuators and sensors, simulating a piezoelectric active-member. Modal damping ratios as high as 20% of critical are achieved.
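The PPF compensator described above is essentially a lightly damped second-order filter fed by a position-type measurement and fed back positively to the actuator; a minimal discrete-time sketch with hypothetical filter parameters is shown below.

import math

def ppf_step(eta, eta_dot, y, w_f, z_f, dt):
    """One explicit-Euler step of a positive position feedback (PPF) filter
    driven by the measured, position-like sensor signal y."""
    eta_ddot = w_f ** 2 * (y - eta) - 2.0 * z_f * w_f * eta_dot
    eta_dot += eta_ddot * dt
    eta += eta_dot * dt
    return eta, eta_dot

# Hypothetical filter tuned near a 20 Hz structural mode
w_f, z_f, g, dt = 2 * math.pi * 20.0, 0.3, 0.1, 1e-4
eta, eta_dot, y = 0.0, 0.0, 1e-4
eta, eta_dot = ppf_step(eta, eta_dot, y, w_f, z_f, dt)
u = g * w_f ** 2 * eta   # positive feedback of the filter coordinate drives the actuator
print(u)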
---
paper_title: An experimental study on active tendon control of cable‐stayed bridges
paper_content:
Active tendon control of cable-stayed bridges subject to a vertical sinusoidal force is experimentally and analytically studied. Emphasis is placed on the effects of linear and non-linear internal resonances on the control (due to the presence of the cable vibration). A simple cable-supported cantilever beam is used as a model. It is found that active tendon control is very effective in vertical girder motion with small cable vibration (girder dominated motion), whereas it is not effective in vertical girder motion with large cable vibration (cable dominated motion). Analytical prediction is very satisfactory except for the latter case.
---
paper_title: A Mixed Robust/Optimal Active Vibration Control for Uncertain Flexible Structural Systems with Nonlinear Actuators Using Genetic Algorithm
paper_content:
In this article, a mixed robust/optimal control approach is proposed to treat the active vibration control (or active vibration suppression) problems of flexible structural systems under the effects of mode truncation, linear time-varying parameter perturbations and nonlinear actuators. A new robust stability condition is derived for the flexible structural system which is controlled by an observer-based controller and is subject to mode truncation, nonlinear actuators and linear structured time-varying parameter perturbations simultaneously. Based on the robust stability constraint and the minimization of a defined H2 performance, a hybrid Taguchi-genetic algorithm (HTGA) is employed to find the optimal state feedback gain matrix and observer gain matrix for uncertain flexible structural systems. A design example of the optimal observer-based controller for a simply supported beam is given to demonstrate the combined application of the presented sufficient condition and the HTGA.
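The paper couples a robust-stability constraint with a hybrid Taguchi-genetic search. The sketch below shows only a plain real-coded genetic algorithm (not the HTGA) tuning a state-feedback gain to minimize an H2-type cost on a small, hypothetical two-state model, with unstable candidates penalized.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
A = np.array([[0.0, 1.0], [-10.0, -0.1]])   # hypothetical lightly damped mode
B = np.array([[0.0], [1.0]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

def cost(k):
    """H2-type cost: trace of P with Acl'P + P Acl = -(Q + K'RK);
    a large penalty is returned if the candidate gain is not stabilizing."""
    K = k.reshape(1, 2)
    Acl = A - B @ K
    if np.max(np.linalg.eigvals(Acl).real) >= -1e-6:
        return 1e9
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    return float(np.trace(P))

pop = rng.normal(0.0, 5.0, size=(40, 2))          # initial population of gain vectors
for gen in range(100):
    fit = np.array([cost(k) for k in pop])
    parents = pop[np.argsort(fit)[:20]]           # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        w = rng.random()
        kids.append(w * a + (1 - w) * b + rng.normal(0.0, 0.5, 2))  # blend crossover + mutation
    pop = np.vstack([parents, kids])
best = pop[np.argmin([cost(k) for k in pop])]
print("best gain:", best, "cost:", cost(best))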
---
paper_title: ACTIVE ISOLATION OF MULTIPLE STRUCTURAL WAVES ON A HELICOPTER GEARBOX SUPPORT STRUT
paper_content:
Abstract A helicopter gearbox support strut has been set up in the laboratory under realistic loading conditions to investigate the active control of longitudinal and lateral vibration transmission to a connected receiving structure. Three magnetostrictive actuators were clamped to the strut to introduce secondary vibration in the frequency range 250–1250 Hz, the control objective being to minimize the kinetic energy of vibration of the receiving structure. Using an extensive set of frequency response measurements, it was possible to predict on a linear basis the attenuation in the kinetic energy of the receiving structure at any discrete frequency in the measurement range for a wide range of conditions. Calculations based on frequency response measurements showed that with the installed steel bearings on the strut, attenuations in the kinetic energy of the receiving structure of 30–40 dB were possible over a range of frequencies between 250 and 1250 Hz. At some frequencies in this range, notably around 500 Hz and 800 Hz, the control was less effective. This was due to torsional motion of the strut which was amplified by the secondary actuators. Good control was also predicted when the primary excitation to the strut was applied laterally rather than longitudinally. Real-time active control has been implemented at discrete frequencies on the test strut and has generally confirmed the linear predictions. Attenuations in excess of 40 dB were measured in a number of cases. The tests confirmed that the active control of vibration transmission through a helicopter strut is practical at frequencies up to at least 1250 Hz.
---
paper_title: Experimental Control of Flexible Structures Using Nonlinear Modal Coupling: Forced and Free Vibration
paper_content:
This paper is an experimental study of free and forced vibration suppression in a piezoceramic actuated flexible beam via the nonlinear Modal Coupling Control (MCC). The method is based on transferring the oscillatory energy from the plant to an auxiliary second order system (controller), coupled to the plant through nonlinear terms. The proposed controller produces an input that can be utilized by unidirectional actuators. Unidirectional actuators do not generate symmetric power during an application. Shape memory alloys, thrusters and cable based actuators are examples of this class of actuators. Existing control methods assume a symmetric actuation and therefore application of unidirectional actuators call for new control techniques. Current control techniques implement a bias in utilizing unidirectional actuators. However, the amount of the bias is variable and depends on the control effort. Moreover, the use of a bias changes the system equilibrium point and introduces a steady state error. The propo...
---
paper_title: NONLINEAR STRUCTURAL CONTROL USING NEURAL NETWORKS
paper_content:
Recently, Ghaboussi and Joghataie presented a structural control method using neural networks, in which a neurocontroller was developed and applied for linear structural control when the response of the structure remained within the linearly elastic range. One of the advantages of the neural networks is that they can learn nonlinear as well as linear control tasks. In this paper, we study the application of the previously developed neurocontrol method in nonlinear structural control problems. First, we study the capabilities of the linearly trained neurocontrollers in nonlinear structural control. Next, we train a neurocontroller on the nonlinear data and study its capabilities. These studies are done through numerical simulations, on models of a three-story steel frame structure. The control is implemented through an actuator and tendon system in the first floor. The sensor is assumed to be a single accelerometer on the first floor. The acceleration of the first floor as well as the ground acceleration are used as feedback. In the numerical simulations we have considered the actuator dynamics and used a coupled model of the actuator-structure system. A realistic sampling period and an inherent time delay in the control loop have been used.
---
paper_title: Numerical recipes in C
paper_content:
Note: Includes bibliographical references, 3 appendixes and 2 indexes. Diskette v 2.06, 3.5'' [1.44M] for IBM PC, PS/2 and compatibles [DOS].
---
paper_title: Vibration With Control
paper_content:
Preface. 1. SINGLE DEGREE OF FREEDOM SYSTEMS. Introduction. Spring-Mass System. Spring-Mass-Damper System. Forced Response. Transfer Functions and Frequency Methods. Measurement and Testing. Stability. Design and Control of Vibrations. Nonlinear Vibrations. Computing and Simulation in Matlab. Chapter Notes. References. Problems. 2. LUMPED PARAMETER MODELS. Introduction. Classifications of Systems. Feedback Control Systems. Examples. Experimental Models. Influence Methods. Nonlinear Models and Equilibrium. Chapter Notes. References. Problems. 3. MATRICES AND THE FREE RESPONSE. Introduction. Eigenvalues and Eigenvectors. Natural Frequencies and Mode Shapes. Canonical Forms. Lambda Matrices. Oscillation Results. Eigenvalue Estimates. Computational Eigenvalue Problems in Matlab. Numerical Simulation of the Time Response in Matlab. Chapter Notes. References. Problems. 4. STABILITY. Introduction. Lyapunov Stability. Conservative Systems. Systems with Damping. Semidefinite Damping . Gyroscopic Systems. Damped Gyroscopic Systems. Circulatory Systems. Asymmetric Systems. Feedback Systems. Stability in the State Space. Stability Boundaries. Chapter Notes. References. Problems. 5. FORCED RESPONSE OF LUMPED PARAMETER SYSTEMS. Introduction. Response via State Space Methods. Decoupling Conditions and Modal Analysis. Response of Systems with Damping. Bounded-Input, Bounded-Output Stability. Response Bounds. Frequency Response Methods. Numerical Simulations in Matlab. Chapter Notes. References. Problems. 6. DESIGN CONSIDERATIONS. Introduction. Isolators and Absorbers. Optimization Methods. Damping Design. Design Sensitivity and Redesign. Passive and Active Control. Design Specifications. Model Reduction. Chapter Notes. References. Problems. 7. CONTROL OF VIBRATIONS. Introduction. Controllability and Observability. Eigenstructure Assignment. Optimal Control. Observers (Estimators). Realization. Reduced-Order Modeling. Modal Control in State Space. Modal Control in Physical Space. Robustness. Positive Position Feedback Control. Matlab Commands for Control Calculations. Chapter Notes. References. Problems. 8. VIBRATION MEASUREMENT. Introduction. Measurement Hardware. Digital Signal Processing. Random Signal Analysis. Modal Data Extraction (Frequency Domain). Modal Data Extraction (Time Domain). Model Identification. Model Updating. Chapter Notes. References. Problems. 9. DISTRIBUTED PARAMETER MODELS. Introduction. Vibrations of Strings. Rods and Bars. Vibration of Beams. Membranes and Plates. Layered Materials. Viscous Damping. Chapter Notes. References. Problems. 10. FORMAL METHODS OF SOLUTION. Introduction. Boundary Value Problems and Eigenfunctions. Modal Analysis of the Free Response. Modal Analysis in Damped Systems. Transform Methods. Green's Functions. Chapter Notes. References. Problems. 11. OPERATORS AND THE FREE RESPONSE. Introduction. Hilbert Spaces. Expansion Theorems. Linear Operators. Compact Operators. Theoretical Modal Analysis. Eigenvalue Estimates. Enclosure Theorems. Oscillation Theory. Chapter Notes. References. Problems. 12. FORCED RESPONSE AND CONTROL. Introduction. Response by Modal Analysis. Modal Design Criteria. Combined Dynamical Systems. Passive Control and Design. Distribution Modal Control. Nonmodal Distributed Control. State Space Control Analysis. Chapter Notes. References. Problems. 13. APPROXIMATIONS OF DISTRIBUTED PARAMETER MODELS. Introduction. Modal Truncation. Rayleigh- Ritz-Galerkin Approximations. Finite Element Method. Substructure Analysis. 
Truncation in the Presence of Control. Impedance Method of Truncation and Control. Chapter Notes. References. Problems. APPENDIX A: COMMENTS ON UNITS. APPENDIX B: SUPPLEMENTARY MATHEMATICS. Index.
---
paper_title: Optimal structural control using neural networks
paper_content:
An optimal control algorithm using neural networks is proposed. The controller neural network is trained by a training rule developed to minimize cost function. Both the linear structure and the nonlinear structure can be controlled by the proposed neurocontroller. A bilinear hysteretic model is used to simulate nonlinear structural behavior. Three main advantages of the neurocontroller can be summarized as follows. First, it can control a structure with unknown dynamics. Second, it can easily be applied to nonlinear structural control. Third, external disturbances can be considered in the optimal control. Examples show that structural vibration can be controlled successfully.
---
paper_title: Modelling of structural response and optimization of structural control system using neural network and genetic algorithm
paper_content:
This paper proposes an integrated approach to the modelling and optimization of structural control systems in tall buildings. In this approach, an artificial neural network is applied to model the structural dynamic responses of tall buildings subjected to strong earthquakes, and a genetic algorithm is used to optimize the design problem of structural control systems, which constitutes a mixed-discrete, nonlinear and multi-modal optimization problem. The neural network model of the structural dynamic response analysis is included in the genetic algorithm and is used as a module of the structural analysis to estimate the dynamic responses of tall buildings. A numerical example is presented in which the general regression neural network is used to model the structural response analysis. The modelling method, procedure and the numerical results are discussed. Two Los Angeles earthquake records are adopted as earthquake excitations. Copyright © 2000 John Wiley & Sons, Ltd.
---
paper_title: Vibration control of piezoelectric beam-type plates with geometrically nonlinear deformation
paper_content:
Abstract This paper presents a wavelet-based approach to deformation identification and vibration control of beam-type plates with geometrically nonlinear deflection using piezoelectric sensors and actuators. The identification is performed by transforming the nonlinear deflection-identification equations into a system of solvable nonlinear algebraic equations in terms of the measurable electric charges and currents on the piezoelectric sensors. A control law of negative feedback of the identified deflection and velocity signals is then employed, and the weighted residual method is used to determine the control voltages applied to the piezoelectric actuators. Because the scaling function transform acts like a low-pass filter that automatically removes high-order vibration and disturbance signals from the measurement and from the controller employed here, the approach does not suffer the control instability generated by spillover of high-order signals. Finally, numerical simulations are carried out to show the efficiency of the proposed approach.
---
paper_title: Active tendon control of cable‐stayed bridges: a large‐scale demonstration
paper_content:
This paper presents a strategy for active damping of cable structures, using active tendons. The first part of the paper summarizes the theoretical background: the control law is briefly presented together with the main results of an approximate linear theory which allows the prediction of closed-loop poles with a root locus technique. The second part of the paper reports on experimental results obtained with two test structures: the first one is a small size mock-up representative of a cable-stayed bridge during the construction phase. The control of the parametric vibration of passive cables due to deck vibration is demonstrated. The second one is a 30 m long mock-up built on the reaction wall of the ELSA test facility at the JRC Ispra (Italy); this test structure is used to demonstrate the practical implementation of the control strategy with hydraulic actuators. Copyright © 2001 John Wiley & Sons, Ltd.
---
paper_title: Adaptive Model Inversion Control of a Helicopter with Structural Load Limiting
paper_content:
An adaptive control system capable of providing consistent handling qualities throughout the operational flight envelope is a desirable feature for rotorcraft. The adaptive model inversion controller with structural load limit protection evaluated here offers the capability to adapt to changing flight conditions along with aggressive maneuvering without envelope limit violations. The controller was evaluated using a nonlinear simulation model of the UH-60 helicopter. The controller is based on a well-documented model inversion architecture with an adaptive neural network (ANN) to compensate for inversion error. The ANN was shown to improve the tracking ability of the controller at off-design point flight conditions; although at some flight conditions the controller performed well even without adaptation. The controller was modified to include a structural load-limiting algorithm to avoid exceeding prescribed limits on the longitudinal hub moment. The limiting was achieved by relating the hub moment to pitch acceleration. The acceleration limits were converted to pitch angle command limits imposed in the pitch axis command filter. Results show that the longitudinal hub moment response in aggressive maneuvers stayed within the prescribed limits for a range of operating conditions. The system was effective in avoiding the longitudinal hub moment limit without unnecessary restrictions on the aircraft performance.
---
paper_title: Past, present and future of nonlinear system identification in structural dynamics
paper_content:
This survey paper contains a review of the past and recent developments in system identification of nonlinear dynamical structures. The objective is to present some of the popular approaches that have been proposed in the technical literature, to illustrate them using numerical and experimental applications, to highlight their assets and limitations and to identify future directions in this research area. The fundamental differences between linear and nonlinear oscillations are also detailed in a tutorial.
---
paper_title: CONSTRUCTION OF AN ACTIVE SUSPENSION SYSTEM OF A QUARTER CAR MODEL USING THE CONCEPT OF SLIDING MODE CONTROL
paper_content:
This paper is concerned with the construction of an active suspension system for a quarter car model using the concept of sliding mode control. The active control is derived by the equivalent control and switching function where the sliding surface is obtained by using Linear quadratic control (LQ control) theory. The active control is generated with non-negligible time lag by using a pneumatic actuator, and the road profile is estimated by using the minimum order observer based on a linear system transformed from the exact non-linear system. The experimental result indicates that the proposed active suspension system is more effective in the vibration isolation of the car body than the linear active suspension system based on LQ control theory and the passive suspension system.
---
paper_title: A modified model reference adaptive control approach for systems with noise or unmodelled dynamics
paper_content:
In this paper, a modified model reference adaptive control (MRAC) strategy is developed for use on plants with noise or unmodelled high-frequency dynamics. MRAC consists of two parts, an adaptive control part and a fixed gain control part. The adaptive algorithm uses a combination of low- and high-pass filters such that the frequency range for the adaptive part of the strategy is limited. The mechanism for noise-induced gain wind-up is demonstrated analytically, and it is shown how MRAC can be modified to eliminate this wind-up. Further to this, an additional filter is proposed to improve MRAC robustness to high-frequency unmodelled dynamics. Two test plants, one with added noise and the other with unmodelled high-frequency dynamics, are considered. Both plants exhibit unstable behaviour when controlled using standard MRAC, but, with the modified strategy, robustness is significantly improved.
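For orientation, the snippet below simulates a textbook first-order MRAC loop with a Lyapunov-type adaptation law and, as a crude nod to the band-limiting idea above, low-pass filters the tracking error used for adaptation. The plant, reference model and gains are hypothetical, and this is not the modified filtered scheme of the paper.

import numpy as np

dt, n = 1e-3, 20000
a_p, b_p = 1.0, 2.0        # plant y' = -a_p*y + b_p*u (unknown to the controller)
a_m, b_m = 4.0, 4.0        # reference model ym' = -a_m*ym + b_m*r
gamma, w_c = 5.0, 50.0     # adaptation gain, error-filter cut-off [rad/s]

y = ym = e_f = 0.0
th1 = th2 = 0.0            # adapted feedforward / feedback gains
for i in range(n):
    r = 1.0 if (i * dt) % 2.0 < 1.0 else -1.0   # square-wave reference
    u = th1 * r - th2 * y
    y += (-a_p * y + b_p * u) * dt
    ym += (-a_m * ym + b_m * r) * dt
    e = y - ym
    e_f += w_c * (e - e_f) * dt                 # first-order low-pass on the error
    th1 += -gamma * e_f * r * dt                # Lyapunov-type update laws
    th2 += gamma * e_f * y * dt
print("adapted gains:", th1, th2, "final tracking error:", e)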
---
paper_title: BIFURCATIONS AND LIMIT DYNAMICS IN ADAPTIVE CONTROL SYSTEMS
paper_content:
Adaptive controllers are used in systems where one or more parameters are unknown. Such controllers are designed to stabilize the system using an estimate for the unknown parameters that is adapted automatically as part of the stabilization. One drawback in adaptive control design is the possibility that the closed-loop limit system is not stable. The worst situation is the existence of a destabilized limit system attracting a large open subset of initial conditions. These situations lie behind bad behavior of the closed-loop adaptive control system. The main issue in this paper is to identify and characterize the occurrence of such bad behavior in the adaptive stabilization of first- and second-order systems with one unknown parameter. We develop normal forms for all possible cases and find the conditions that lead to bad behavior. In this context, we discuss a number of bifurcation-like phenomena.
---
paper_title: NONLINEAR STRUCTURAL CONTROL USING NEURAL NETWORKS
paper_content:
Recently, Ghaboussi and Joghataie presented a structural control method using neural networks, in which a neurocontroller was developed and applied for linear structural control when the response of the structure remained within the linearly elastic range. One of the advantages of the neural networks is that they can learn nonlinear as well as linear control tasks. In this paper, we study the application of the previously developed neurocontrol method in nonlinear structural control problems. First, we study the capabilities of the linearly trained neurocontrollers in nonlinear structural control. Next, we train a neurocontroller on the nonlinear data and study its capabilities. These studies are done through numerical simulations, on models of a three-story steel frame structure. The control is implemented through an actuator and tendon system in the first floor. The sensor is assumed to be a single accelerometer on the first floor. The acceleration of the first floor as well as the ground acceleration are used as feedback. In the numerical simulations we have considered the actuator dynamics and used a coupled model of the actuator-structure system. A realistic sampling period and an inherent time delay in the control loop have been used.
---
paper_title: Optimal structural control using neural networks
paper_content:
An optimal control algorithm using neural networks is proposed. The controller neural network is trained by a training rule developed to minimize cost function. Both the linear structure and the nonlinear structure can be controlled by the proposed neurocontroller. A bilinear hysteretic model is used to simulate nonlinear structural behavior. Three main advantages of the neurocontroller can be summarized as follows. First, it can control a structure with unknown dynamics. Second, it can easily be applied to nonlinear structural control. Third, external disturbances can be considered in the optimal control. Examples show that structural vibration can be controlled successfully.
---
paper_title: Output Feedback Variable Structure Adaptive Control of an Aeroelastic System
paper_content:
The paper presents an application of the variable structure model reference adaptive control theory to control of aeroelastic systems with structural nonlinearity. Interestingly, the design approach does not require any knowledge of the parameters of the system and the nonlinear functions, and only output feedback is used for the synthesis of the control systems. Control laws for the trajectory tracking of pitch angle and plunge displacement are derived. In the closed-loop system, the state vector asymptotically converges to the origin. Control laws are discontinuous functions of the tracking error, and modulation functions of the relays are generated on-line using bounds on uncertain functions and certain auxiliary signals. Digital simulation results are presented which show that in the closed-loop system, pitch angle and plunge displacement are smoothly regulated to zero in spite of the uncertainty and unmodeled functions in the system using only output feedback.
---
paper_title: Modelling of structural response and optimization of structural control system using neural network and genetic algorithm
paper_content:
This paper proposes an integrated approach to the modelling and optimization of structural control systems in tall buildings. In this approach, an artificial neural network is applied to model the structural dynamic responses of tall buildings subjected to strong earthquakes, and a genetic algorithm is used to optimize the design problem of structural control systems, which constitutes a mixed-discrete, nonlinear and multi-modal optimization problem. The neural network model of the structural dynamic response analysis is included in the genetic algorithm and is used as a module of the structural analysis to estimate the dynamic responses of tall buildings. A numerical example is presented in which the general regression neural network is used to model the structural response analysis. The modelling method, procedure and the numerical results are discussed. Two Los Angeles earthquake records are adopted as earthquake excitations. Copyright © 2000 John Wiley & Sons, Ltd.
---
paper_title: Adaptive Control of Flexible Structures Using a Nonlinear Vibration Absorber
paper_content:
A nonlinear adaptive vibration absorber to control the vibrations of flexible structures is investigated. The absorber is based on the saturation phenomenon associated with dynamical systems possessing quadratic nonlinearities and a two-to-one internal resonance. The technique is implemented by coupling a second-order controller with the structure through a sensor and an actuator. Energy is exchanged between the structure and the controller and, near resonance, the structure's response saturates to a small value. Experimental results are presented for the control of a rectangular plate and a cantilever beam using piezoelectric ceramics and magnetostrictive alloys as actuators. The control technique is implemented using a digital signal processing board and a modeling software. The control strategy is made adaptive by incorporating an efficient frequency-measurement technique. This is validated by successfully testing the control strategy for a nonconventional problem, where nonlinear effects hinder the application of the nonadaptive controller.
---
paper_title: Adaptive Feedback Linearization for the Control of a Typical Wing Section with Structural Nonlinearity
paper_content:
Earlier results by the authors showed constructions of Lie algebraic, partial feedback linearizing control methods for pitch and plunge primary control utilizing a single trailing edge actuator. In addition, a globally stable nonlinear adaptive control method was derived for a structurally nonlinear wing section with both a leading and trailing edge actuator. However, the global stability result described in a previous paper by the authors, while highly desirable, relied on the fact that the leading and trailing edge actuators rendered the system exactly feedback linearizable via Lie algebraic methods. In this paper, the authors derive an adaptive, nonlinear feedback control methodology for a structurally nonlinear typical wing section. The technique is advantageous in that the adaptive control is derived utilizing an explicit parameterization of the structural nonlinearity and a partial feedback linearizing control that is parametrically dependent is defined via Lie algebraic methods. The closed loop stability of the system is guaranteed via application of La Salle's invariance principle.
---
paper_title: Nonlinear and Adaptive Control of Complex Systems
paper_content:
Preface. Notations and Definitions. 1. Faces of Complexity. 2. Nonlinear Systems: Analysis and Design Tools. 3. Speed-Gradient Method and Partial Stabilization. 4. Nonlinear Control of Multivariable Systems. 5. Nonlinear Control of MIMO Systems. 6. Adaptive and Robust Control Design. 7. Decomposition of Adaptive Systems. 8. Control of Mechanical Systems. 9. Physics and Control. A. Appendix. References. Index.
---
| Title: A review of non-linear structural control techniques
Section 1: INTRODUCTION
Description 1: Describe the importance of controlling non-linear structural vibrations in various engineering applications and provide an overview of the paper.
Section 2: CONTROL DESIGN FOR NON-LINEAR STRUCTURAL VIBRATIONS
Description 2: Discuss the traditional methods and challenges of control design for non-linear structural vibrations, focusing on passive redesign techniques.
Section 3: SEMI-ACTIVE CONTROL
Description 3: Explain semi-active control methods and their applications, especially in civil/structural and automotive engineering, emphasizing the use of magnetorheological (MR) dampers.
Section 4: ACTIVE VIBRATION CONTROL
Description 4: Provide an introduction to active vibration control techniques and their application for low-dimensional systems, including a discussion on non-linear control techniques.
Section 5: MODAL CONTROL
Description 5: Discuss modal control, its applications for structures with multiple degrees of freedom, and the specific techniques used to achieve control objectives.
Section 6: ADAPTIVE CONTROL
Description 6: Briefly outline adaptive control techniques and their relevance to systems with significant parameter uncertainties, highlighting key literature.
Section 7: CONCLUSIONS
Description 7: Summarize the key points discussed in the paper, including the effectiveness, applications, and future potentials of passive, semi-active, and active control methods for non-linear structural dynamics. |
Security in p2p networks: survey and research directions | 8 | ---
paper_title: Trusted computing: providing security for peer-to-peer networks
paper_content:
In this paper, we demonstrate the application of trusted computing to securing peer-to-peer (P2P) networks. We identify a central challenge in providing many of the security services within these networks, namely the absence of stable verifiable peer identities. We employ the functionalities provided by trusted computing technology to establish a pseudonymous authentication scheme for peers and extend this scheme to build secure channels between peers for future communications. In support of our work, we illustrate how commands from the trusted computing group (TCG) specifications can be used to implement our approach in P2P networks.
---
paper_title: The Sybil Attack
paper_content:
Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.
---
paper_title: About the Value of Virtual Communities in P2P Networks
paper_content:
The recently introduced peer-to-peer (P2P) systems are currently the most popular Internet applications. This contribution is intended to show that such a decentralized architecture could serve as a suitable structure to support virtual communities.
---
paper_title: PlanetP: using gossiping to build content addressable peer-to-peer information sharing communities
paper_content:
We introduce PlanetP, a content addressable publish/subscribe service for unstructured peer-to-peer (P2P) communities. PlanetP supports content addressing by providing: (1) a gossiping layer used to globally replicate a membership directory and an extremely compact content index; and (2) a completely distributed content search and ranking algorithm that helps users find the most relevant information. PlanetP is a simple, yet powerful system for sharing information. PlanetP is simple because each peer must only perform a periodic, randomized, point-to-point message exchange with other peers. PlanetP is powerful because it maintains a globally content-ranked view of the shared data. Using simulation and a prototype implementation, we show that PlanetP achieves ranking accuracy that is comparable to a centralized solution and scales easily to several thousand peers while remaining resilient to rapid membership changes.
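As a rough illustration of the gossiping layer described above (a minimal sketch, not PlanetP's actual protocol: real PlanetP rumors new items and exchanges compact Bloom-filter summaries instead of raw sets), the following toy anti-entropy exchange shows how a directory entry spreads through periodic randomized peer-to-peer merges.

    import random

    class Peer:
        def __init__(self):
            self.directory = set()               # globally replicated membership/content summary

        def gossip_with(self, other):
            merged = self.directory | other.directory
            self.directory, other.directory = set(merged), merged

    peers = [Peer() for _ in range(10)]
    peers[0].directory.add("term:guitar -> peer0")

    for _ in range(30):                          # periodic randomized point-to-point exchanges
        a, b = random.sample(peers, 2)
        a.gossip_with(b)

    print(sum("term:guitar -> peer0" in p.directory for p in peers), "of", len(peers), "peers know the entry")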
---
paper_title: Defending against spoofed DDoS attacks with path fingerprint *
paper_content:
In this paper, we propose a new scheme, called ANTID, for detecting and filtering DDoS attacks which use spoofed packets to circumvent the conventional intrusion detection schemes. The proposed anti-DDoS scheme intends to complement, rather than replace, conventional schemes. By embedding in each IP packet a unique path fingerprint that represents the route an IP packet has traversed, ANTID is able to distinguish IP packets that traverse different Internet paths. In ANTID, a server maintains for each of its communicating clients the mapping from the client's IP address to the corresponding path fingerprint. The construction and renewal of these mappings is performed in an on-demand fashion that helps to reduce the cost of maintenance. With the presence of the mapping table, the onset of a spoofed DDoS attack can be detected by observing a surge of spoofed packets. Consequently, spoofed attack packets are filtered so as to sustain the quality of protected Internet services. ANTID is lightweight, robust, and incrementally deployable. Our experimental results showed that the proposed scheme can detect 99.95% of spoofed IP packets and can discard them with little collateral damage to legitimate clients. It also showed that the higher the aggregated attack rate is, the sooner the attack can be detected.
---
paper_title: Byzantine Fault Tolerant Public Key Authentication in Peer-to-Peer Systems
paper_content:
We describe Byzantine fault tolerant authentication, a mechanism for public key authentication in peer-to-peer systems. Authentication is done without trusted third parties, tolerates Byzantine faults and is eventually correct if more than a threshold of the peers are honest. This paper addresses the design, correctness, and fault tolerance of authentication over insecure asynchronous networks. An anti-entropy version of the protocol is developed to provide lazy authentication with logarithmic messaging cost. The cost implications of the authentication mechanism are studied by simulation.
---
paper_title: Overcoming free-riding behavior in peer-to-peer systems
paper_content:
While the fundamental premise of peer-to-peer (P2P) systems is that of voluntary resource sharing among individual peers, there is an inherent tension between individual rationality and collective welfare that threatens the viability of these systems. This paper surveys recent research at the intersection of economics and computer science that targets the design of distributed systems consisting of rational participants with diverse and selfish interests. In particular, we discuss major findings and open questions related to free-riding in P2P systems: factors affecting the degree of free-riding, incentive mechanisms to encourage user cooperation, and challenges in the design of incentive mechanisms for P2P systems.
---
paper_title: Tarzan: a peer-to-peer anonymizing network layer
paper_content:
Tarzan is a peer-to-peer anonymous IP network overlay. Because it provides IP service, Tarzan is general-purpose and transparent to applications. Organized as a decentralized peer-to-peer overlay, Tarzan is fault-tolerant, highly scalable, and easy to manage. Tarzan achieves its anonymity with layered encryption and multi-hop routing, much like a Chaumian mix. A message initiator chooses a path of peers pseudo-randomly through a restricted topology in a way that adversaries cannot easily influence. Cover traffic prevents a global observer from using traffic analysis to identify an initiator. Protocols toward unbiased peer-selection offer new directions for distributing trust among untrusted entities. Tarzan provides anonymity to either clients or servers, without requiring that both participate. In both cases, Tarzan uses a network address translator (NAT) to bridge between Tarzan hosts and oblivious Internet hosts. Measurements show that Tarzan imposes minimal overhead over a corresponding non-anonymous overlay route.
---
paper_title: Crowds: Anonymity for Web Transactions
paper_content:
In this paper we introduce a system called Crowds for protecting users' anonymity on the world-wide-web. Crowds, named for the notion of “blending into a crowd,” operates by grouping users into a large and geographically diverse group (crowd) that collectively issues requests on behalf of its members. Web servers are unable to learn the true source of a request because it is equally likely to have originated from any member of the crowd, and even collaborating crowd members cannot distinguish the originator of a request from a member who is merely forwarding the request on behalf of another. We describe the design, implementation, security, performance, and scalability of our system. Our security analysis introduces degrees of anonymity as an important tool for describing and proving anonymity properties.
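The core forwarding rule of Crowds can be sketched in a few lines; the code below is an illustrative simplification (the forwarding probability, the jondo names and the flat membership list are assumptions for the example, and a real jondo also relays the server's reply back along the same path).

    import random

    P_FORWARD = 0.75     # assumed forwarding probability for the sketch

    def route(request, crowd, server_log, rng=random.random, choose=random.choice):
        path = []
        node = choose(crowd)                     # the initiator hands the request to a random member
        while True:
            path.append(node)
            if rng() < P_FORWARD:
                node = choose(crowd)             # forward to another (possibly the same) member
            else:
                server_log.append((request, node))   # the last hop submits on the initiator's behalf
                return path

    crowd = [f"jondo{i}" for i in range(8)]
    server_log = []
    print(route("GET /page", crowd, server_log), server_log)

Because the server only ever sees the last hop, and collaborating members cannot tell an originator from a forwarder, the initiator gains the "blending into a crowd" property the abstract describes.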
---
paper_title: Pricing via processing or combatting junk mail
paper_content:
We present a computational technique for combatting junk mail in particular and controlling access to a shared resource in general. The main idea is to require a user to compute a moderately hard, but not intractable, function in order to gain access to the resource, thus preventing frivolous use. To this end we suggest several pricing functions, based on, respectively, extracting square roots modulo a prime, the Fiat-Shamir signature scheme, and the Ong-Schnorr-Shamir (cracked) signature scheme.
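A minimal sketch of the first pricing function mentioned in the abstract - extracting a square root modulo a prime - is given below; the prime and the message are toy choices for illustration, and the point is only that the sender's modular exponentiation costs noticeably more than the receiver's single squaring used for verification.

    # Toy parameters: a prime with P % 4 == 3, so a square root is x**((P+1)//4) mod P.
    P = 2**127 - 1

    def price(message_digest: int) -> int:
        x = message_digest % P
        # If x is not a quadratic residue, -x is (for P = 3 mod 4); flip so a root exists.
        x = x if pow(x, (P - 1) // 2, P) == 1 else (P - x) % P
        return pow(x, (P + 1) // 4, P)           # the "work": a full modular exponentiation

    def verify(message_digest: int, stamp: int) -> bool:
        x = message_digest % P
        return pow(stamp, 2, P) in (x, (P - x) % P)   # one cheap squaring

    d = int.from_bytes(b"hello, junk-free mail", "big")
    print(verify(d, price(d)))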
---
paper_title: Using speakeasy for ad hoc peer-to-peer collaboration
paper_content:
Peer-to-peer systems appear promising in terms of their ability to support ad hoc, spontaneous collaboration. However, current peer-to-peer systems suffer from several deficiencies that diminish their ability to support this domain, such as inflexibility in terms of discovery protocols, network usage, and data transports. We have developed the Speakeasy framework, which addresses these issues and supports these types of applications. We show how Speakeasy addresses the shortcomings of current peer-to-peer systems, and describe a demonstration application, called Casca, that supports ad hoc peer-to-peer collaboration by taking advantage of the mechanisms provided by Speakeasy.
---
paper_title: Byzantine Fault Tolerant Public Key Authentication in Peer-to-Peer Systems
paper_content:
We describe Byzantine fault tolerant authentication, a mechanism for public key authentication in peer-to-peer systems. Authentication is done without trusted third parties, tolerates Byzantine faults and is eventually correct if more than a threshold of the peers are honest. This paper addresses the design, correctness, and fault tolerance of authentication over insecure asynchronous networks. An anti-entropy version of the protocol is developed to provide lazy authentication with logarithmic messaging cost. The cost implications of the authentication mechanism are studied by simulation.
---
paper_title: Impeding attrition attacks in P2P systems
paper_content:
P2P systems are exposed to an unusually broad range of attacks. These include a spectrum of denial-of-service, or attrition, attacks from low-level packet flooding to high-level abuse of the peer communication protocol. We identify a set of defenses that systems can deploy against such attacks and potential synergies among them. We illustrate the application of these defenses in the context of the LOCKSS digital preservation system.
---
paper_title: Pricing via processing or combatting junk mail
paper_content:
We present a computational technique for combatting junk mail in particular and controlling access to a shared resource in general. The main idea is to require a user to compute a moderately hard, but not intractable, function in order to gain access to the resource, thus preventing frivolous use. To this end we suggest several pricing functions, based on, respectively, extracting square roots modulo a prime, the Fiat-Shamir signature scheme, and the Ong-Schnorr-Shamir (cracked) signature scheme.
---
paper_title: Using speakeasy for ad hoc peer-to-peer collaboration
paper_content:
Peer-to-peer systems appear promising in terms of their ability to support ad hoc, spontaneous collaboration. However, current peer-to-peer systems suffer from several deficiencies that diminish their ability to support this domain, such as inflexibility in terms of discovery protocols, network usage, and data transports. We have developed the Speakeasy framework, which addresses these issues and supports these types of applications. We show how Speakeasy addresses the shortcomings of current peer-to-peer systems, and describe a demonstration application, called Casca, that supports ad hoc peer-to-peer collaboration by taking advantage of the mechanisms provided by Speakeasy.
---
paper_title: Byzantine Fault Tolerant Public Key Authentication in Peer-to-Peer Systems
paper_content:
We describe Byzantine fault tolerant authentication, a mechanism for public key authentication in peer-to-peer systems. Authentication is done without trusted third parties, tolerates Byzantine faults and is eventually correct if more than a threshold of the peers are honest. This paper addresses the design, correctness, and fault tolerance of authentication over insecure asynchronous networks. An anti-entropy version of the protocol is developed to provide lazy authentication with logarithmic messaging cost. The cost implications of the authentication mechanism are studied by simulation.
---
paper_title: Crowds: Anonymity for Web Transactions
paper_content:
In this paper we introduce a system called Crowds for protecting users' anonymity on the world-wide-web. Crowds, named for the notion of “blending into a crowd,” operates by grouping users into a large and geographically diverse group (crowd) that collectively issues requests on behalf of its members. Web servers are unable to learn the true source of a request because it is equally likely to have originated from any member of the crowd, and even collaborating crowd members cannot distinguish the originator of a request from a member who is merely forwarding the request on behalf of another. We describe the design, implementation, security, performance, and scalability of our system. Our security analysis introduces degrees of anonymity as an important tool for describing and proving anonymity properties.
---
paper_title: Chord: A scalable peer-to-peer lookup service for internet applications
paper_content:
A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
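The key-to-node mapping that Chord provides can be sketched as plain consistent hashing; the snippet below is illustrative only (toy identifier space, linear successor scan instead of the O(log N) finger-table lookup the paper describes).

    import hashlib

    M = 16                                       # identifier circle of 2**M points (toy size)

    def chord_id(name: str) -> int:
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

    nodes = sorted(chord_id(f"node-{i}") for i in range(8))

    def successor(key_id: int) -> int:
        for n in nodes:                          # first node clockwise from the key
            if n >= key_id:
                return n
        return nodes[0]                          # wrap around the circle

    key = chord_id("my-file.mp3")
    print(key, "->", successor(key))             # the node responsible for storing the key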
---
paper_title: A scalable content-addressable network
paper_content:
Hash tables - which map "keys" onto "values" - are an essential building block in modern software systems. We believe a similar functionality would be equally valuable to large distributed systems. In this paper, we introduce the concept of a Content-Addressable Network (CAN) as a distributed infrastructure that provides hash table-like functionality on Internet-like scales. The CAN is scalable, fault-tolerant and completely self-organizing, and we demonstrate its scalability, robustness and low-latency properties through simulation.
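A toy version of CAN-style greedy routing in the coordinate space is sketched below; the 2-D grid of zones, the zone-centre representation and the stopping rule are simplifying assumptions for illustration, not the paper's construction.

    import math

    # Toy zones keyed by their centre points; each 2-D grid cell neighbours the adjacent cells.
    centres = [(x + 0.5, y + 0.5) for x in range(4) for y in range(4)]
    neighbors = {c: [d for d in centres if abs(c[0] - d[0]) + abs(c[1] - d[1]) == 1] for c in centres}

    def route(start, target):
        """Greedy forwarding: hand the message to the neighbour closest to the key's point."""
        path, node = [start], start
        while math.dist(node, target) > 0.5:     # stop once the owning zone is reached
            node = min(neighbors[node], key=lambda n: math.dist(n, target))
            path.append(node)
        return path

    print(route((0.5, 0.5), (3.5, 2.5)))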
---
paper_title: Impeding attrition attacks in P2P systems
paper_content:
P2P systems are exposed to an unusually broad range of attacks. These include a spectrum of denial-of-service, or attrition, attacks from low-level packet flooding to high-level abuse of the peer communication protocol. We identify a set of defenses that systems can deploy against such attacks and potential synergies among them. We illustrate the application of these defenses in the context of the LOCKSS digital preservation system.
---
paper_title: PlanetP: using gossiping to build content addressable peer-to-peer information sharing communities
paper_content:
We introduce PlanetP, a content addressable publish/subscribe service for unstructured peer-to-peer (P2P) communities. PlanetP supports content addressing by providing: (1) a gossiping layer used to globally replicate a membership directory and an extremely compact content index; and (2) a completely distributed content search and ranking algorithm that helps users find the most relevant information. PlanetP is a simple, yet powerful system for sharing information. PlanetP is simple because each peer must only perform a periodic, randomized, point-to-point message exchange with other peers. PlanetP is powerful because it maintains a globally content-ranked view of the shared data. Using simulation and a prototype implementation, we show that PlanetP achieves ranking accuracy that is comparable to a centralized solution and scales easily to several thousand peers while remaining resilient to rapid membership changes.
---
| Title: Security in P2P Networks: Survey and Research Directions
Section 1: Introduction
Description 1: Provide an overview of P2P networks and introduce the various security issues.
Section 2: Overlay Networks
Description 2: Discuss the classification of overlay networks, their structures, and relevant security concerns.
Section 3: User Community
Description 3: Explore the challenges related to the user community in P2P networks, particularly high node transience and lack of centralized authority.
Section 4: Trust and Social Profiles
Description 4: Explain the importance of cooperation in P2P networks and the use of reputation schemes to build trust and incentivize good behavior.
Section 5: Identification vs. Anonymity
Description 5: Analyze the balance between node identification and anonymity, along with related security risks and existing solutions.
Section 6: Node Authentication and Access Control
Description 6: Review methods of node authentication and access control, including digital signatures and ACLs.
Section 7: Content
Description 7: Address issues related to content availability, integrity, and authentication, and discuss strategies for replication and protection.
Section 8: Analysis
Description 8: Present a comparative analysis of security issues in different P2P architectures using detailed tables summarizing the protection mechanisms against various attacks.
Section 9: Conclusions and Research Directions
Description 9: Summarize the state of security in P2P networks and outline potential future research directions in areas such as node cooperation and P2P systems for mobile networks. |
Machine Translation Approaches and Survey for Indian Languages | 11 | ---
paper_title: METIS-II: Example-based machine translation using monolingual corpora - System description
paper_content:
The METIS-II project is an example-based machine translation system that makes use of minimal resources and tools for both source and target language: it uses a target-language (TL) corpus, but not any parallel corpora. In the current paper, we discuss the view of our team on the general philosophy and outline of the METIS-II system.
---
paper_title: Statistical Post-Editing on SYSTRAN's Rule-Based Translation System
paper_content:
This article describes the combination of a SYSTRAN system with a "statistical post-editing" (SPE) system. We document qualitative analysis on two experiments performed in the shared task of the ACL 2007 Workshop on Statistical Machine Translation. Comparative results and more integrated "hybrid" techniques are discussed.
---
paper_title: Hybrid Example-Based SMT: The Best Of Both Worlds?
paper_content:
(Way and Gough, 2005) provide an in-depth comparison of their Example-Based Machine Translation (EBMT) system with a Statistical Machine Translation (SMT) system constructed from freely available tools. According to a wide variety of automatic evaluation metrics, they demonstrated that their EBMT system outperformed the SMT system by a factor of two to one. Nevertheless, they did not test their EBMT system against a phrase-based SMT system. Obtaining their training and test data for English--French, we carry out a number of experiments using the Pharaoh SMT Decoder. While better results are seen when Pharaoh is seeded with Giza++ word- and phrase-based data compared to EBMT sub-sentential alignments, in general better results are obtained when combinations of this 'hybrid' data is used to construct the translation and probability models. While for the most part the EBMT system of (Gough & Way, 2004b) outperforms any flavour of the phrase-based SMT systems constructed in our experiments, combining the data sets automatically induced by both Giza++ and their EBMT system leads to a hybrid system which improves on the EBMT system per se for French--English.
---
paper_title: Rule-Based Translation with Statistical Phrase-Based Post-Editing
paper_content:
This article describes a machine translation system based on an automatic post-editing strategy: initially translate the input text into the target-language using a rule-based MT system, then automatically post-edit the output using a statistical phrase-based system. An implementation of this approach based on the SYSTRAN and PORTAGE MT systems was used in the shared task of the Second Workshop on Statistical Machine Translation. Experimental results on the test data of the previous campaign are presented.
---
paper_title: ANGLABHARTI: a multilingual machine aided translation project on translation from English to Indian languages
paper_content:
An English to Indian languages machine aided translation system, named ANGLABHARTI, has been developed. It uses a pattern directed approach using context free grammar like structures. A 'pseudo-target' is generated which is applicable to a group of Indian languages. A set of rules is acquired through corpus analysis to identify the plausible constituents with respect to which movement rules for the 'pseudo-target' are constructed. A number of semantic tags are used to resolve sense ambiguity in the source language. Alternative meanings for the unresolved ambiguities are retained in the pseudo target language code. A text generator module for each of the target languages transforms the pseudo target language to the target language. A corrector for ill-formed sentences is used for each of the target languages. Finally, a human-engineered post-editing package is used to make the final corrections. The post-editor needs to know only the target language. The strategy used in ANGLABHARTI lies in between the transfer and the interlingua approach. It is better than the transfer approach, as the translation is valid for a host of target language sentences, but falls short of genuine interlingua, in the sense that it ignores complete disambiguation/understanding of the text to be translated.
---
paper_title: ANGLABHARTI: a multilingual machine aided translation project on translation from English to Indian languages
paper_content:
An English to Indian languages machine aided translation system, named ANGLABHARTI, has been developed. It uses a pattern directed approach using context free grammar like structures. A 'pseudo-target' is generated which is applicable to a group of Indian languages. A set of rules is acquired through corpus analysis to identify the plausible constituents with respect to which movement rules for the 'pseudo-target' are constructed. A number of semantic tags are used to resolve sense ambiguity in the source language. Alternative meanings for the unresolved ambiguities are retained in the pseudo target language code. A text generator module for each of the target languages transforms the pseudo target language to the target language. A corrector for ill-formed sentences is used for each of the target languages. Finally, a human-engineered post-editing package is used to make the final corrections. The post-editor needs to know only the target language. The strategy used in ANGLABHARTI lies in between the transfer and the interlingua approach. It is better than the transfer approach, as the translation is valid for a host of target language sentences, but falls short of genuine interlingua, in the sense that it ignores complete disambiguation/understanding of the text to be translated.
---
paper_title: Anusaaraka: Machine Translation in Stages
paper_content:
Fully-automatic general-purpose high-quality machine translation systems (FGH-MT) are extremely difficult to build. In fact, there is no system in the world for any pair of languages which qualifies to be called FGH-MT. The reasons are not far to seek. Translation is a creative process which involves interpretation of the given text by the translator. Translation would also vary depending on the audience and the purpose for which it is meant. This would explain the difficulty of building a machine translation system. Since the machine is not capable of interpreting a general text with sufficient accuracy automatically at present - let alone re-expressing it for a given audience - it fails to perform as FGH-MT. (The major difficulty that the machine faces in interpreting a given text is the lack of general world knowledge or common sense knowledge.)
---
paper_title: Anusaaraka: Machine Translation in Stages
paper_content:
Fully-automatic general-purpose high-quality machine translation systems (FGH-MT) are extremely difficult to build. In fact, there is no system in the world for any pair of languages which qualifies to be called FGH-MT. The reasons are not far to seek. Translation is a creative process which involves interpretation of the given text by the translator. Translation would also vary depending on the audience and the purpose for which it is meant. This would explain the difficulty of building a machine translation system. Since the machine is not capable of interpreting a general text with sufficient accuracy automatically at present - let alone re-expressing it for a given audience - it fails to perform as FGH-MT. (The major difficulty that the machine faces in interpreting a given text is the lack of general world knowledge or common sense knowledge.)
---
paper_title: Evaluation of Hindi to Punjabi Machine Translation System
paper_content:
Machine Translation in India is relatively young. The earliest efforts date from the late 80s and early 90s. The success of every system is judged from its experimental evaluation results. A number of machine translation systems have been taken up for development, but to the best of the authors' knowledge, no high-quality system has been completed which can be used in real applications. Recently, Punjabi University, Patiala, India has developed a Punjabi to Hindi machine translation system with a high accuracy of about 92%. Both systems, i.e. the system under question and the developed system, are between the same closely related languages. Thus, this paper presents the evaluation results of a Hindi to Punjabi machine translation system. It makes sense to use the same evaluation criteria as those of the Punjabi to Hindi Machine Translation System. After evaluation, the accuracy of the system is found to be about 95%.
---
paper_title: Phrase based English – Tamil Translation System by Concept Labeling using Translation Memory
paper_content:
In this paper, we present a novel framework for a phrase based translation system using translation memory by concept labeling. The concepts are labeled on the input text, followed by the conversion of the text into phrases. Each phrase is searched throughout the translation memory, where the parallel corpus is stored. The translation memory displays all source and target phrases wherever the input phrase is present in them. The target phrase corresponding to the source phrase having the same concept as that of the input source phrase is chosen as the best translated phrase. The system is implemented for English to Tamil translation.
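A toy sketch of the concept-filtered lookup described above follows; the phrases, concept tags and target strings are invented placeholders rather than entries from the paper's English-Tamil translation memory.

    # Each entry is (source phrase, concept tag, target phrase); the target strings are placeholders.
    translation_memory = [
        ("book the ticket", "RESERVE", "<tamil-phrase-1>"),
        ("book the ticket", "READ",    "<tamil-phrase-2>"),
        ("the flight",      "TRAVEL",  "<tamil-phrase-3>"),
    ]

    def translate_phrase(src_phrase, concept):
        matches = [(c, t) for s, c, t in translation_memory if s == src_phrase]
        for c, t in matches:
            if c == concept:                     # the entry carrying the same concept wins
                return t
        return matches[0][1] if matches else None   # fall back to any source-side match

    print(translate_phrase("book the ticket", "RESERVE"))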
---
paper_title: Phrase based English – Tamil Translation System by Concept Labeling using Translation Memory
paper_content:
In this paper, we present a novel framework for a phrase based translation system using translation memory by concept labeling. The concepts are labeled on the input text, followed by the conversion of the text into phrases. Each phrase is searched throughout the translation memory, where the parallel corpus is stored. The translation memory displays all source and target phrases wherever the input phrase is present in them. The target phrase corresponding to the source phrase having the same concept as that of the input source phrase is chosen as the best translated phrase. The system is implemented for English to Tamil translation.
---
paper_title: Rule based Sentence Simplification for English to Tamil Machine Translation System
paper_content:
Machine translation is the process by which computer software is used to translate a text from one natural language to another, but handling complex sentences by any machine translation system is generally considered to be difficult. In order to boost the translation quality of the machine translation system, simplifying an input sentence becomes mandatory. Many approaches are available for simplifying complex sentences. In this paper, a rule based technique is proposed to simplify complex sentences based on connectives like relative pronouns and coordinating and subordinating conjunctions. Sentence simplification is expressed as the list of sub-sentences that are portions of the original sentence. The meaning of the simplified sentence remains unaltered. Characters such as '.' and '?' are used as delimiters. One of the important pre-requisites is the presence of a delimiter in the given sentence. Initial splitting is based on delimiters and then the simplification is based on connectives. This method is useful as a preprocessing tool for machine translation.
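The splitting idea can be sketched roughly as below (the connective list and regular expressions are illustrative assumptions; the paper's actual rules are richer and also restructure the resulting sub-sentences).

    import re

    # Cut at delimiters first, then split each piece at connective words such as relative
    # pronouns and coordinating/subordinating conjunctions.
    CONNECTIVES = r"\b(which|who|that|and|but|because|although|while)\b"

    def simplify(text):
        pieces = [p.strip() for p in re.split(r"[.?]", text) if p.strip()]
        subsentences = []
        for piece in pieces:
            subsentences.extend(s.strip() for s in re.split(CONNECTIVES, piece)
                                if s.strip() and not re.fullmatch(CONNECTIVES, s.strip()))
        return subsentences

    print(simplify("The boy who won the prize is my friend. He was happy because he worked hard."))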
---
| Title: Machine Translation Approaches and Survey for Indian Languages
Section 1: Introduction
Description 1: Introduce the concept of Machine Translation (MT), its importance, and challenges, especially in the context of Indian languages.
Section 2: History of MT
Description 2: Provide a historical overview of the development of MT systems, including key milestones and technologies.
Section 3: MT Approaches
Description 3: Discuss the various methodologies employed in MT, including rule-based, statistical-based, hybrid-based, example-based, knowledge-based, principle-based, and online interactive systems.
Section 4: Rule-based Approach
Description 4: Explain the rule-based MT approach and its different categories: Direct Translation, Interlingua Based Translation, and Transfer-Based Translation.
Section 5: Statistical-based Approach
Description 5: Dive into the statistical-based MT approaches, including Word Based Translation, Phrase Based Translation, and Hierarchical Phrase Based Model.
Section 6: Hybrid-based Translation
Description 6: Describe the hybrid approach that combines statistical and rule-based methodologies for MT.
Section 7: Example-based Translation
Description 7: Elaborate on the example-based MT approach that uses analogical reasoning and bilingual corpora for translation.
Section 8: Knowledge-Based MT
Description 8: Discuss the knowledge-based MT approach that relies on deep linguistic and world knowledge for translation.
Section 9: Principle-Based MT
Description 9: Describe the principle-based MT approach derived from Chomsky's Generative Grammar principles.
Section 10: Online Interactive Systems
Description 10: Explain how user interaction is integrated into MT systems to improve translation accuracy.
Section 11: Major MT Developments in India: A Literature Survey
Description 11: Provide a detailed survey of the major MT systems developed in India, including their methodologies, language pairs, and current status. |
Overview of Full-Dimension MIMO in LTE-Advanced Pro | 12 | ---
paper_title: Noncooperative Cellular Wireless with Unlimited Numbers of Base Station Antennas
paper_content:
A cellular base station serves a multiplicity of single-antenna terminals over the same time-frequency interval. Time-division duplex operation combined with reverse-link pilots enables the base station to estimate the reciprocal forward- and reverse-link channels. The conjugate-transpose of the channel estimates are used as a linear precoder and combiner respectively on the forward and reverse links. Propagation, unknown to both terminals and base station, comprises fast fading, log-normal shadow fading, and geometric attenuation. In the limit of an infinite number of antennas a complete multi-cellular analysis, which accounts for inter-cellular interference and the overhead and errors associated with channel-state information, yields a number of mathematically exact conclusions and points to a desirable direction towards which cellular wireless could evolve. In particular the effects of uncorrelated noise and fast fading vanish, throughput and the number of terminals are independent of the size of the cells, spectral efficiency is independent of bandwidth, and the required transmitted energy per bit vanishes. The only remaining impairment is inter-cellular interference caused by re-use of the pilot sequences in other cells (pilot contamination) which does not vanish with unlimited number of antennas.
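The conjugate-transpose precoding described above can be illustrated numerically; the snippet below is a sketch with an i.i.d. Rayleigh channel and arbitrary array sizes (assumptions for the example, not the paper's propagation model), showing how the effective channel becomes diagonally dominant as the antenna count grows.

    import numpy as np

    rng = np.random.default_rng(0)
    M, K = 128, 8                                    # M base-station antennas, K single-antenna terminals
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

    W = H.conj().T                                   # conjugate-transpose (matched-filter) precoder
    W = W / np.linalg.norm(W)                        # total power normalisation
    effective = H @ W                                # K x K effective downlink channel

    # With M >> K the off-diagonal (inter-user) terms are small relative to the diagonal.
    print(np.round(np.abs(effective), 2))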
---
paper_title: Full-dimension MIMO (FD-MIMO) for next generation cellular technology
paper_content:
This article considers a practical implementation of massive MIMO systems [1]. Although the best performance can be achieved when a large number of active antennas are placed only in the horizontal domain, BS form factor limitation often makes horizontal array placement infeasible. To cope with this limitation, this article introduces full-dimension MIMO (FD-MIMO) cellular wireless communication system, where active antennas are placed in a 2D grid at BSs. For analysis of the FD-MIMO systems, a 3D spatial channel model is introduced, on which system-level simulations are conducted. The simulation results show that the proposed FD-MIMO system with 32 antenna ports achieves 2-3.6 times cell average throughput gain and 1.5-5 times cell edge throughput gain compared to the 4G LTE system of two antenna ports at the BS.
---
paper_title: Full dimension mimo (FD-MIMO): the next evolution of MIMO in LTE systems
paper_content:
Full dimension MIMO has attracted significant attention in the wireless industry and academia in the past few years as a candidate technology for the next generation evolution toward beyond fourth generation and fifth generation cellular systems. FD-MIMO utilizes a large number of antennas placed in a 2D antenna array panel for realizing spatially separated transmission links to a large number of mobile stations. The arrangement of these antennas on a 2D panel allows the extension of spatial separation to the elevation domain as well as the traditional azimuth domain. This article discusses features and performance benefits of FD-MIMO along with the ongoing standardization efforts in 3GPP to incorporate FD-MIMO features in the next evolution of LTE. Furthermore, a design of a 2D antenna array, which plays a key role in the implementation of FD-MIMO, is also discussed. Finally, in order to demonstrate the performance benefits of FD-MIMO, system-level evaluation results are provided.
---
paper_title: Field trial and future enhancements for TDD massive MIMO networks
paper_content:
Massive MIMO is one of the promising techniques to improve the spectral efficiency and network performance in future 5G networks. Compared to FDD, it is relatively easier to realize downlink massive MIMO for TDD as downlink channel information can be obtained via uplink-downlink channel reciprocity. This paper provides our field test results of a massive MIMO system with a base station prototype equipped with 64 transmit antennas. Significant throughput gain is observed by performing 3D-beamforming to current LTE-Advanced handsets using standard-transparent Multiuser (MU) MIMO techniques. With the massive MIMO base station prototype, MU-MIMO is realized by multiplexing a maximum of eight handsets in the spatial domain, considering both azimuth and elevation directions. In addition to the field trial test results, future potential enhancements for TDD massive MIMO systems are discussed. Evaluation results for some enhancements of the uplink reference signal are provided.
---
paper_title: Noncooperative Cellular Wireless with Unlimited Numbers of Base Station Antennas
paper_content:
A cellular base station serves a multiplicity of single-antenna terminals over the same time-frequency interval. Time-division duplex operation combined with reverse-link pilots enables the base station to estimate the reciprocal forward- and reverse-link channels. The conjugate-transpose of the channel estimates are used as a linear precoder and combiner respectively on the forward and reverse links. Propagation, unknown to both terminals and base station, comprises fast fading, log-normal shadow fading, and geometric attenuation. In the limit of an infinite number of antennas a complete multi-cellular analysis, which accounts for inter-cellular interference and the overhead and errors associated with channel-state information, yields a number of mathematically exact conclusions and points to a desirable direction towards which cellular wireless could evolve. In particular the effects of uncorrelated noise and fast fading vanish, throughput and the number of terminals are independent of the size of the cells, spectral efficiency is independent of bandwidth, and the required transmitted energy per bit vanishes. The only remaining impairment is inter-cellular interference caused by re-use of the pilot sequences in other cells (pilot contamination) which does not vanish with unlimited number of antennas.
---
paper_title: A Leakage-Based Precoding Scheme for Downlink Multi-User MIMO Channels
paper_content:
In multiuser MIMO downlink communications, it is necessary to design precoding schemes that are able to suppress co-channel interference. This paper proposes designing precoders by maximizing the so-called signal-to-leakage-and-noise ratio (SLNR) for all users simultaneously. The presentation considers communications with both single- and multi-stream cases, as well as MIMO systems that employ Alamouti coding. The effect of channel estimation errors on system performance is also studied. Compared with zero-forcing solutions, the proposed method does not impose a condition on the relation between the number of transmit and receive antennas, and it also avoids noise enhancement. Simulations illustrate the performance of the scheme
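For single-antenna users, the SLNR-maximizing beamformer can be sketched with its closed-form solution: the dominant generalized eigenvector of the desired user's channel covariance against the leakage-plus-noise matrix. All channel values and the noise level below are randomly generated assumptions for illustration, not data from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    M, K, sigma2 = 4, 3, 0.1                         # antennas, users, noise power (toy values)
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

    def slnr_precoder(k):
        # Leakage-plus-noise matrix of user k, and the user's own rank-one channel covariance.
        leakage = sigma2 * np.eye(M) + sum(np.outer(H[j].conj(), H[j]) for j in range(K) if j != k)
        A = np.linalg.inv(leakage) @ np.outer(H[k].conj(), H[k])
        vals, vecs = np.linalg.eig(A)
        w = vecs[:, np.argmax(vals.real)]            # dominant (generalized) eigenvector
        return w / np.linalg.norm(w)

    W = np.column_stack([slnr_precoder(k) for k in range(K)])
    print(np.round(np.abs(H @ W), 2))                # strong diagonal, suppressed leakage terms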
---
paper_title: MIMO broadcast channels with finite rate feedback
paper_content:
Multiple transmit antennas in a downlink channel can provide tremendous capacity (i.e. multiplexing) gains, even when receivers have only single antennas. However, receiver and transmitter channel state information is generally required. In this paper, a system where the receiver has perfect channel knowledge, but the transmitter only receives quantized information regarding the channel instantiation is analyzed. Simple expressions for the capacity degradation due to finite rate feedback as well as the required increases in feedback load per mobile as a function of the number of access point antennas and the system SNR are provided.
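A minimal sketch of finite-rate feedback follows: the receiver quantizes its channel direction to the best-aligned codeword of a B-bit codebook and feeds back only the index. The random codebook and the parameter choices are assumptions for the example, not the paper's construction.

    import numpy as np

    rng = np.random.default_rng(2)
    M, B = 4, 4                                      # 4 transmit antennas, 2**4 = 16 codewords
    codebook = rng.standard_normal((2 ** B, M)) + 1j * rng.standard_normal((2 ** B, M))
    codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

    h = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    h_dir = h / np.linalg.norm(h)                    # only the channel direction is quantized

    index = int(np.argmax(np.abs(codebook.conj() @ h_dir)))     # best-aligned codeword
    print(index, np.abs(codebook[index].conj() @ h_dir) ** 2)   # quantization quality in [0, 1]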
---
paper_title: Antenna Grouping based Feedback Compression for FDD-based Massive MIMO Systems
paper_content:
Recent works on massive multiple-input multiple-output (MIMO) have shown that a potential breakthrough in capacity gains can be achieved by deploying a very large number of antennas at the base station. In order to achieve the performance that massive MIMO systems promise, accurate transmit-side channel state information (CSI) should be available at the base station. While transmit-side CSI can be obtained by employing channel reciprocity in time division duplexing (TDD) systems, explicit feedback of CSI from the user terminal to the base station is needed for frequency division duplexing (FDD) systems. In this paper, we propose an antenna grouping based feedback reduction technique for FDD-based massive MIMO systems. The proposed algorithm, dubbed antenna group beamforming (AGB), maps multiple correlated antenna elements to a single representative value using predesigned patterns. The proposed method modifies the feedback packet by introducing the concept of a header to select a suitable group pattern and a payload to quantize the reduced dimension channel vector. Simulation results show that the proposed method achieves significant feedback overhead reduction over the conventional approach performing vector quantization of the whole channel vector under the same target sum rate requirement.
---
| Title: Overview of Full-Dimension MIMO in LTE-Advanced Pro
Section 1: Key Features of FD-MIMO Systems
Description 1: Discuss the key features of FD-MIMO systems, including a large number of base-station antennas, a two-dimensional active antenna array, 3D channel propagation, and new pilot transmission with CSI feedback.
Section 2: Increase the Number of Transmit Antennas
Description 2: Explain the significance of increasing the number of transmit antennas in FD-MIMO systems and the challenges related to CSI acquisition.
Section 3: 2D Active Antenna System (AAS)
Description 3: Describe the introduction of the active antenna with a 2D planar array and the benefits of 3D beamforming.
Section 4: 3D Channel Environment
Description 4: Discuss the design considerations for maximizing performance in realistic 3D channel environments, including height-dependent pathloss and elevation angular-spread of departure (ESD).
Section 5: RS Transmission for CSI Acquisition
Description 5: Highlight the evolution of reference signal (RS) schemes from LTE to LTE-Advanced systems and the use of beamformed CSI-RS in FD-MIMO.
Section 6: System Design and Standardization of FD-MIMO Systems
Description 6: Provide an overview of the system design and standardization efforts for FD-MIMO systems, focusing on deployment scenarios, TXRU structures, and new RS strategies.
Section 7: Deployment Scenarios
Description 7: Outline typical deployment scenarios for FD-MIMO systems, such as 3D urban macro and micro scenarios.
Section 8: Antenna Configurations
Description 8: Detail the antenna configurations for FD-MIMO systems, including the number of elements in vertical and horizontal directions and polarization considerations.
Section 9: TXRU Architectures
Description 9: Describe the TXRU (transceiver unit) architectures and their role in controlling the gain and phase of individual antenna elements.
Section 10: New CSI-RS Transmission Strategy
Description 10: Discuss the strategies for transmitting CSI-RS, including conventional non-precoded and beamformed CSI-RS transmissions.
Section 11: CSI Feedback Mechanisms for FD-MIMO Systems
Description 11: Explore various CSI feedback mechanisms, such as composite codebook, beam index feedback, partial CSI-RS with dimensional feedbacks, and adaptive CSI feedback.
Section 12: Performance of FD-MIMO System
Description 12: Analyze the performance of FD-MIMO systems through system-level simulations under realistic multicell environments, using metrics like spectral efficiency for cell average and cell edge. |
A Survey of Arabic Dialogues Understanding for Spontaneous Dialogues and Instant Message | 10 | ---
paper_title: Critical Survey of the Freely Available Arabic Corpora
paper_content:
The availability of corpora is a major factor in building natural language processing applications. However, the costs of acquiring corpora can prevent some researchers from going further in their endeavours. Easy access to freely available corpora is urgently needed in the NLP research community, especially for a language such as Arabic. Currently, there is no easy way to access a comprehensive and updated list of freely available Arabic corpora. We present in this paper the results of a recent survey conducted to identify the list of freely available Arabic corpora and language resources. Our preliminary results showed an initial list of 66 sources. We present our findings in the various categories studied and provide direct links to get the data when possible.
---
paper_title: Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech
paper_content:
We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as STATEMENT, QUESTION, BACKCHANNEL, AGREEMENT, DISAGREEMENT, and APOLOGY. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
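The hidden-Markov-model view described above can be sketched with a toy Viterbi decoder; the dialogue-act set, the flat DA bigram and the per-utterance likelihoods below are invented numbers standing in for the trained dialogue grammar and the lexical/prosodic likelihood models.

    import math

    # Hidden states are dialogue acts; the DA bigram plays the role of the transition model
    # and the per-utterance likelihoods (invented here) play the role of the emission model.
    ACTS = ["STATEMENT", "QUESTION", "BACKCHANNEL"]
    trans = {a: {b: 1.0 / len(ACTS) for b in ACTS} for a in ACTS}   # flat DA bigram (toy)
    emissions = [                                                   # P(words_t | act), invented
        {"STATEMENT": 0.6, "QUESTION": 0.3, "BACKCHANNEL": 0.1},
        {"STATEMENT": 0.2, "QUESTION": 0.7, "BACKCHANNEL": 0.1},
        {"STATEMENT": 0.1, "QUESTION": 0.1, "BACKCHANNEL": 0.8},
    ]

    def viterbi(obs):
        V = [{a: math.log(obs[0][a] / len(ACTS)) for a in ACTS}]    # uniform initial DA prior
        back = []
        for e in obs[1:]:
            row, ptr = {}, {}
            for a in ACTS:
                best = max(ACTS, key=lambda p: V[-1][p] + math.log(trans[p][a]))
                row[a] = V[-1][best] + math.log(trans[best][a]) + math.log(e[a])
                ptr[a] = best
            V.append(row)
            back.append(ptr)
        path = [max(ACTS, key=lambda a: V[-1][a])]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return list(reversed(path))

    print(viterbi(emissions))   # expected: STATEMENT, QUESTION, BACKCHANNEL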
---
paper_title: Discriminative framework for spoken tunisian dialect understanding
paper_content:
In this paper, we propose to evaluate the performance of a discriminative model to semantically label spoken Tunisian dialect turns which are not segmented into utterances. We evaluate a discriminative algorithm based on Conditional Random Fields (CRF). We check the performance of the CRF model for concept labeling on raw data in Tunisian dialect which are not analyzed in advance. We compared its performance with different types of preprocessing of the data until arriving at well treated data. The CRF model showed the ability to improve the accuracy of the labeling task for spoken language understanding of unsegmented and untreated speech in Tunisian dialect.
---
paper_title: Arabic Dialect Processing Tutorial
paper_content:
Language exists in a natural continuum, both historically and geographically. The term language as opposed to dialect is only an expression of power and dominance of one group/ideology over another. In the Arab world, politics (Arab nationalism) and religion (Islam) are what shape the perception of the distinction between the Arabic language and an Arabic dialect. This power relationship is similar to others that exist between languages and their dialects. However, the high degree of difference between standard Arabic and its dialects and the fact that standard Arabic is not any Arab's native language sets the Arabic linguistic situation apart.
---
paper_title: Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech
paper_content:
We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as STATEMENT, QUESTION, BACKCHANNEL, AGREEMENT, DISAGREEMENT, and APOLOGY. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
---
paper_title: Discriminative framework for spoken tunisian dialect understanding
paper_content:
In this paper, we propose to evaluate the performance of a discriminative model for semantically labeling spoken Tunisian dialect turns which are not segmented into utterances. We evaluate a discriminative algorithm based on Conditional Random Fields (CRF). We assess the performance of the CRF model for concept labeling on raw Tunisian dialect data that has not been analyzed in advance, and compare it with the performance obtained at different levels of preprocessing, up to fully processed data. The CRF model showed the ability to improve the accuracy of the labeling task for spoken language understanding of unsegmented and unprocessed speech in Tunisian dialect.
---
paper_title: The HCRC Map Task Corpus
paper_content:
This paper describes a corpus of unscripted, task-oriented dialogues which has been designed, digitally recorded, and transcribed to support the study of spontaneous speech on many levels. The corpus uses the Map Task (Brown, Anderson, Yule, and Shillcock, 1983) in which speakers must collaborate verbally to reproduce on one participant's map a route printed on the other's. In all, the corpus includes four conversations from each of 64 young adults and manipulates the following variables: familiarity of speakers, eye contact between speakers, matching between landmarks on the participants' maps, opportunities for contrastive stress, and phonological characteristics of landmark names. The motivations for the design are set out and basic corpus statistics are presented.
---
paper_title: Critical Survey of the Freely Available Arabic Corpora
paper_content:
The availability of corpora is a major factor in building natural language processing applications. However, the costs of acquiring corpora can prevent some researchers from going further in their endeavours. Easy access to freely available corpora is urgently needed in the NLP research community, especially for languages such as Arabic. Currently, there is no easy way to access a comprehensive and updated list of freely available Arabic corpora. We present in this paper the results of a recent survey conducted to identify the list of freely available Arabic corpora and language resources. Our preliminary results showed an initial list of 66 sources. We present our findings in the various categories studied and provide direct links to the data where possible.
---
paper_title: Discriminative framework for spoken tunisian dialect understanding
paper_content:
In this paper, we propose to evaluate the performance of a discriminative model for semantically labeling spoken Tunisian dialect turns which are not segmented into utterances. We evaluate a discriminative algorithm based on Conditional Random Fields (CRF). We assess the performance of the CRF model for concept labeling on raw Tunisian dialect data that has not been analyzed in advance, and compare it with the performance obtained at different levels of preprocessing, up to fully processed data. The CRF model showed the ability to improve the accuracy of the labeling task for spoken language understanding of unsegmented and unprocessed speech in Tunisian dialect.
---
paper_title: Automatic linguistic segmentation of conversational speech
paper_content:
As speech recognition moves toward more unconstrained domains such as conversational speech, we encounter a need to be able to segment (or resegment) waveforms and recognizer output into linguistically meaningful units such as sentences. Toward this end, we present a simple automatic segmenter of transcripts based on N-gram language modeling. We also study the relevance of several word-level features for segmentation performance. Using only word-level information, we achieve 85% recall and 70% precision on linguistic boundary detection.
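The sketch below illustrates the N-gram idea in its simplest form: a boundary is hypothesised between two words when the "end sentence, start sentence" reading scores higher under a bigram model than the direct word bigram. The probabilities are invented for the example and are not taken from the paper.

```python
import math

# Made-up bigram probabilities; "</s>" ends a sentence, "<s>" starts one.
BIGRAM = {
    ("yeah", "</s>"): 0.30, ("<s>", "so"): 0.20, ("yeah", "so"): 0.05,
    ("i", "think"): 0.40, ("i", "</s>"): 0.02, ("<s>", "think"): 0.01,
}

def boundary_here(w1, w2, default=1e-4):
    # Compare the "split" reading (w1 </s>) (<s> w2) against the "join" reading (w1 w2).
    split = math.log(BIGRAM.get((w1, "</s>"), default)) + math.log(BIGRAM.get(("<s>", w2), default))
    join = math.log(BIGRAM.get((w1, w2), default))
    return split > join

print(boundary_here("yeah", "so"))   # True  -> boundary predicted
print(boundary_here("i", "think"))   # False -> no boundary
```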
---
paper_title: Data-Driven Strategies For An Automated Dialogue System
paper_content:
We present a prototype natural-language problem-solving application for a financial services call center, developed as part of the Amities multilingual human-computer dialogue project. Our automated dialogue system, based on empirical evidence from real call-center conversations, features a data-driven approach that allows for mixed system/customer initiative and spontaneous conversation. Preliminary evaluation results indicate efficient dialogues and high user satisfaction, with performance comparable to or better than that of current conversational travel information systems.
---
paper_title: Transformation-Based-Error-Driven Learning And Natural Language Processing: A Case Study In Part-Of-Speech Tagging
paper_content:
Recently, there has been a rebirth of empiricism in the field of natural language processing. Manual encoding of linguistic information is being challenged by automated corpus-based learning as a method of providing a natural language processing system with linguistic knowledge. Although corpus-based approaches have been successful in many different areas of natural language processing, it is often the case that these methods capture the linguistic information they are modelling indirectly in large opaque tables of statistics. This can make it difficult to analyze, understand and improve the ability of these approaches to model underlying linguistic behavior. In this paper, we will describe a simple rule-based approach to automated learning of linguistic knowledge. This approach has been shown for a number of tasks to capture information in a clearer and more direct fashion without a compromise in performance. We present a detailed case study of this learning method applied to part-of-speech tagging.
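A minimal sketch of the transformation-based idea follows: an initial tagging is corrected by an ordered list of learned rewrite rules. The two rules and the example sentence below are invented for illustration, not learned from data.

```python
# Rules of the form: change tag A to B when the previous tag is C.
RULES = [
    ("NN", "VB", "TO"),    # e.g. "to increase": noun -> verb after "to"
    ("VB", "NN", "DT"),    # e.g. "the walk": verb -> noun after a determiner
]

def apply_rules(tags):
    """Apply the ordered transformation rules to an initial tag sequence."""
    tags = list(tags)
    for from_tag, to_tag, prev_tag in RULES:          # rules are applied in order
        for i in range(1, len(tags)):
            if tags[i] == from_tag and tags[i - 1] == prev_tag:
                tags[i] = to_tag
    return tags

# Initial (most-frequent-tag) guess for: "he wanted to run the race"
initial = ["PRP", "VBD", "TO", "NN", "DT", "VB"]
print(apply_rules(initial))   # ['PRP', 'VBD', 'TO', 'VB', 'DT', 'NN']
```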
---
paper_title: Interactive Problem Solving And Dialogue In The ATIS Domain
paper_content:
This paper describes the present status of the discourse and dialogue models within the MIT ATIS system, extended to support the notion of booking a flight. The discourse model includes not only the resolution of explicit anaphoric references, but also indirect and direct references to information mentioned earlier in the conversation, such as a direct reference to an entry in a previously displayed table or an indirect reference to a date, as in "the following Thursday." The system keeps a history table containing objects such as flights and dates, represented as semantic frames, as well as the active ticket, previously booked tickets, and previously displayed tables. During flight reservations scenarios, the system monitors the state of the ticket (which is displayed to the user), making sure that all information is complete (by querying the user) before allowing a booking. It may even initiate calls to the database to provide additional unsolicited information as appropriate. We have collected several dialogues of subjects using the system to make reservations, and from these, we are learning how to design better dialogue models.
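The sketch below gives one simple way to picture the history table described above: each new query yields a partial semantic frame whose empty slots are inherited from the most recent frame of the same type. The slot names and values are hypothetical, not the MIT ATIS representation.

```python
# Discourse history of semantic frames (most recent last).
history = []

def resolve(frame):
    """Fill unspecified (None) slots from the most recent frame of the same type."""
    for past in reversed(history):
        if past["type"] == frame["type"]:
            merged = dict(past)
            merged.update({k: v for k, v in frame.items() if v is not None})
            frame = merged
            break
    history.append(frame)
    return frame

print(resolve({"type": "flight", "origin": "BOS", "destination": "SFO", "date": "Thursday"}))
# Follow-up turn ("what about Friday?") only updates the date slot; origin and
# destination are inherited from the previous frame in the history.
print(resolve({"type": "flight", "origin": None, "destination": None, "date": "Friday"}))
```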
---
paper_title: Discriminative framework for spoken tunisian dialect understanding
paper_content:
In this paper, we propose to evaluate the performance of a discriminative model for semantically labeling spoken Tunisian dialect turns which are not segmented into utterances. We evaluate a discriminative algorithm based on Conditional Random Fields (CRF). We assess the performance of the CRF model for concept labeling on raw Tunisian dialect data that has not been analyzed in advance, and compare it with the performance obtained at different levels of preprocessing, up to fully processed data. The CRF model showed the ability to improve the accuracy of the labeling task for spoken language understanding of unsegmented and unprocessed speech in Tunisian dialect.
---
paper_title: User's utterance classification using machine learning for Arabic Conversational Agents
paper_content:
This paper presents a novel technique for the classification of Arabic sentences as Dialogue Acts, based on structural information contained in Arabic function words. It focuses on classifying question and non-question utterances as they are used in Conversational Agents. The proposed technique extracts function-word features by replacing function words with numeric tokens and replacing each content word with a standard numeric token. The Decision Tree has been chosen for this work to extract the classification rules. Experiments provide evidence for highly effective classification. The extracted classification rules will be embedded into a Conversational Agent called ArabChat in order to classify Arabic utterances before further processing of these utterances. This paper presents complementary work for ArabChat to improve its performance by differentiating between question-based and non-question-based utterances.
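A small sketch of the function-word idea follows: tokens on a closed list of (transliterated, illustrative) function words are kept as presence features, all other tokens are collapsed into a generic content count, and a decision tree is trained to separate question from non-question utterances on a toy data set. None of the words, labels or rules below come from the ArabChat system itself.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical closed list of Arabic function words (Latin transliteration).
FUNCTION_WORDS = ["hal", "mata", "ayna", "limatha", "fi", "min"]

def featurise(utterance):
    tokens = utterance.split()
    # Presence of each function word, plus a count of generic CONTENT tokens.
    return [int(fw in tokens) for fw in FUNCTION_WORDS] + [
        sum(t not in FUNCTION_WORDS for t in tokens)]

train_utts = ["hal anta huna", "mata yasil alqitar", "ana fi albayt", "uridu hajz tathkara min tunis"]
train_labels = ["question", "question", "non-question", "non-question"]

clf = DecisionTreeClassifier(random_state=0)
clf.fit([featurise(u) for u in train_utts], train_labels)
print(clf.predict([featurise("hal aljaww harr")]))   # -> ['question']
```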
---
| Title: A Survey of Arabic Dialogues Understanding for Spontaneous Dialogues and Instant Message
Section 1: Concepts and Terminologies
Description 1: This section presents the concepts related to language understanding and used in this paper.
Section 2: Dialogue Act
Description 2: This section discusses the terminology of dialogue acts based on speech act theory and its applications in dialogue systems.
Section 3: Turn vs Utterance
Description 3: This section explains the difference between turns and utterances in natural conversation and in spoken dialogue systems.
Section 4: Dialectal Arabic
Description 4: This section describes the different categories of Arabic language, specifically Classic Arabic, Modern Standard Arabic, and dialectal Arabic, and their characteristics.
Section 5: Language Understanding Component
Description 5: This section presents recent research for the four parts of building a language understanding component for Arabic dialogue systems.
Section 6: Dialogue Acts Annotation Schema
Description 6: This section elaborates on the importance and application of dialogue acts annotation schema in building annotated dialogue corpora and dialogue management systems.
Section 7: Arabic Dialogue Acts Corpora
Description 7: This section discusses the use of corpora in NLP research, specifically focusing on Arabic dialogue acts corpora and their properties.
Section 8: Arabic Dialogue Segmentation
Description 8: This section explains the segmentation process for dividing dialogues into meaningful units and various approaches used for segmentation in Arabic dialogues.
Section 9: Arabic Dialogue Acts Classification
Description 9: This section explores different methods and approaches for classifying dialogue acts in Arabic, including both shallow and deeper linguistic analysis methods.
Section 10: Conclusion and Future Work
Description 10: This section provides the conclusions of the survey and suggests directions for future research in the field. |
Building integrated solar thermal (BIST) technologies and their applications: A review of structural design and architectural integration | 15
paper_title: Solar thermal collectors and applications
paper_content:
In this paper a survey of the various types of solar thermal collectors and applications is presented. Initially, an analysis of the environmental problems related to the use of conventional sources of energy is presented and the benefits offered by renewable energy systems are outlined. A historical introduction into the uses of solar energy is attempted, followed by a description of the various types of collectors including flat-plate, compound parabolic, evacuated tube, parabolic trough, Fresnel lens, parabolic dish and heliostat field collectors. This is followed by an optical, thermal and thermodynamic analysis of the collectors and a description of the methods used to evaluate their performance. Typical applications of the various types of collectors are presented in order to show the reader the extent of their applicability. These include solar water heating (thermosyphon, integrated collector storage, direct and indirect systems and air systems), space heating and cooling (space heating and service hot water, air and water systems and heat pumps), refrigeration, industrial process heat (air and water systems and steam generation systems), desalination, thermal power systems (parabolic trough, power tower and dish systems), solar furnaces, and chemistry applications. As can be seen, solar energy systems can be used for a wide range of applications and provide significant benefits; therefore, they should be used whenever possible.
---
paper_title: Active Solar Thermal Facades (ASTFs): From concept, application to research questions
paper_content:
The aim of the paper is to report a comprehensive review into a recently emerging building integrated solar thermal technology, namely, Active Solar Thermal Facades (ASTFs), in terms of concept, classification, standards, performance evaluation, application, as well as research questions. This involves the combined effort of literature review, analysis, extraction, integration, critique, prediction and conclusion. It is indicated that the ASTFs are a type of building envelope element incorporating solar collecting devices, thus enabling the dual functions, e.g., space shielding and solar energy collection, to be performed. Based on the function of the building envelopes, the ASTF systems can be generally classified as wall-, window-, balcony- and roof-based types; while the ASTFs could also be classified by the thermal collection typologies, transparency, application, and heat-transfer medium. Currently, existing building and solar collector standards are brought together to evaluate the performance of the ASTFs. The research questions relating to the ASTFs are numerous, but the major points lie in: (1) whole structure and individual components layout, sizing and optimisation; (2) theoretical analysis; (3) experimental measurement; and (4) energy saving, economic and environmental performance assessment. Based on the analysis of the identified research questions, achievements made on each question, and outstanding problems remaining with the ASTFs, further development opportunities on this topic are suggested: (1) development of an integrated database/software enabling both architecture design and engineering performance simulation; (2) real-time measurement of the ASTF integrated buildings on a long-term scheme; (3) economic and environmental performance assessment and social acceptance analysis; (4) dissemination, marketing and exploitation strategies study. This study helps in identifying the current status, potential problems in existence, and future directions in research, development and practical application of the ASTF technologies in buildings. It will also promote the development of renewable energy technology and thus contribute to achieving the UK and international targets in energy saving, renewable energy utilization, and carbon emission reduction in the building sector.
---
paper_title: Recent advances in the solar water heating systems: A review
paper_content:
Solar water heating (SWH) systems have widespread usage and applications in both the domestic and industrial sectors. According to Renewable Energy Policy Network data (2010), 70 million houses worldwide were reported to be using SWH systems. Solar water heating is not only environmentally friendly but also requires minimal maintenance and operating cost compared to other solar energy applications. SWH systems are cost effective, with an attractive payback period of 2–4 years depending on the type and size of the system. Extensive research has been performed to further improve the thermal efficiency of solar water heating. This paper presents a detailed review exclusively on the design aspects of SWH systems. The first part of the paper provides a consolidated summary of the development of various system components, including the collector, storage tank and heat exchanger. The latter part of this paper covers alternative refrigerant technology and technological advancements in improving the performance as well as the cost effectiveness of the SWH system.
---
paper_title: Review of R&D progress and practical application of the solar photovoltaic/thermal (PV/T) technologies.
paper_content:
In this paper, the global market potential of solar thermal, photovoltaic (PV) and combined photovoltaic/thermal (PV/T) technologies at present and in the near future was discussed. The concept of the PV/T and the theory behind PV/T operation were briefly introduced, and standards for evaluating the technical, economic and environmental performance of PV/T systems were addressed. A comprehensive literature review into R&D progress and practical application was carried out, and suggestions for further work were made, among them to: (2) optimise the structural/geometrical configurations of the existing PV/T systems; (3) study long term dynamic performance of the PV/T systems; (4) demonstrate the PV/T systems in real buildings and conduct the feasibility study; and (5) carry out advanced economic and environmental analyses. This review helps to identify the questions remaining in PV/T technology and new research topics/directions to further improve the performance of the PV/T, remove the barriers to PV/T practical application, establish the standards/regulations related to PV/T design and installation, and promote its market penetration throughout the world.
---
paper_title: Aspects and improvements of hybrid photovoltaic/thermal solar energy systems
paper_content:
Hybrid photovoltaic/thermal (PV/T or PVT) solar systems consist of PV modules coupled to water or air heat extraction devices, which convert the absorbed solar radiation into electricity and heat. At the University of Patras, an extended research on PV/T systems has been performed aiming at the study of several modifications for system performance improvement. In this paper a new type of PV/T collector with dual heat extraction operation, either with water or with air circulation is presented. This system is simple and suitable for building integration, providing hot water or air depending on the season and the thermal needs of the building. Experiments with dual type PV/T models of alternative arrangement of the water and the air heat exchanging elements were performed. The most effective design was further studied, applying to it low cost modifications for the air heat extraction improvement. These modifications include a thin metallic sheet placed in the middle of the air channel, the mounting of fins on the opposite wall to PV rear surface of the air channel and the placement of the sheet combined with small ribs on the opposite air channel wall. The modified dual PV/T collectors were combined with booster diffuse reflectors, achieving a significant increase in system thermal and electrical energy output. The improved PV/T systems have aesthetic and energy advantages and could be used instead of separate installation of plain PV modules and thermal collectors, mainly if the available building surface is limited and the thermal needs are associated with low temperature water or air heating.
---
paper_title: Experimental investigation of performance for the novel flat plate solar collector with micro-channel heat pipe array (MHPA-FPC)
paper_content:
A novel flat plate solar collector with micro-channel heat pipe array (MHPA-FPC) is presented in this paper. Firstly, a preliminary test was conducted to investigate the thermal performance of the MHPA. It was found that the surface temperature along the length of the MHPA stabilises within 2 min. The temperature difference between the evaporator and condenser sections was less than 1 °C, which indicates that the MHPA has excellent isothermal ability and a quick thermal response. Based on these advantages, the MHPA was applied to the development of a novel solar collector. The performance test was conducted following the Chinese standard GB/T4271-2007, and a linear correlation between the instantaneous efficiency η and the reduced temperature parameter (Twi − Ta)/I was established. The maximum instantaneous efficiency was found to be 80%, and the slope was −4.72. These values are 11.4% and 21.3% superior to the technically required values of the Chinese national standard. Test results were further compared with 6 groups of 15 samples taken either from the open literature or from commercial products. The comparisons indicated that the maximum instantaneous efficiency of the MHPA-FPC exceeded the average level of the selected samples by 25%, and better thermal insulation ability was presented by the MHPA-FPC. The results of this study demonstrate that the novel MHPA-FPC is one of the top level solar collectors among current products.
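The reported correlation can be read as η = η0 − a·(Twi − Ta)/I. The short calculation below plugs in the quoted values η0 = 0.80 and a = 4.72, evaluated at a hypothetical operating point.

```python
# Linear efficiency model eta = eta0 - a*(Twi - Ta)/I, with the quoted
# intercept (0.80) and slope (4.72 W/(m^2*K)); operating point is assumed.
def instantaneous_efficiency(t_wi, t_a, irradiance, eta0=0.80, slope=4.72):
    return eta0 - slope * (t_wi - t_a) / irradiance

# e.g. inlet water 40 degC, ambient 20 degC, 800 W/m^2 of solar irradiance
print(round(instantaneous_efficiency(t_wi=40.0, t_a=20.0, irradiance=800.0), 3))  # -> 0.682
```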
---
paper_title: Fabrication and testing of a non-glass vacuum-tube collector for solar energy utilization
paper_content:
An evacuated tubular solar collector was fabricated from acrylics for improved resistance to shattering. A plasmatron was employed to apply a thin gas-barrier coating to the surfaces of the plastic tube to prevent/alleviate gas infiltration. Experiments were conducted to investigate the effect of vacuum level on the performance of the non-glass vacuum-tube solar collector. Inserted in the evacuated tube was a finned heat pipe for solar energy collection and heat transfer to a water tank. Time variations of temperatures on the heat pipe surface and in the water tank were recorded and analyzed for different degrees of vacuum in the collector. The steady-state temperature of the non-glass collector was compared to that of a commercial glass vacuum-tube collector to assess the feasibility of the use of evacuated plastic tubes for solar energy collection. A simple analytical model was also developed to assist in understanding and analyzing the transient behavior and heat losses of the vacuum-tube solar collector.
---
paper_title: Operational performance of a novel heat pump assisted solar façade loop-heat-pipe water heating system
paper_content:
This paper aims to present an investigation into the operational performance of a novel heat pump assisted solar facade loop-heat-pipe (LHP) water heating system using both theoretical and experimental methods. This involved (1) development of a computer numerical model; (2) simulation of the operational performance of the system by using the model; (3) test rig construction; and (4) a dedicated experiment for verification of the model. It was found that the established model is able to predict the operational performance of the system with reasonable accuracy. Analyses of the research results indicated that under the selected testing conditions, the average thermal efficiency of the LHP module was around 71%, much higher than that of the loop heat pipe without heat pump assistance. The thermal efficiency of the LHP module grew when the heat pump was turned on and fell when the heat pump was turned off. The water temperature maintained a steady growth throughout the period when the heat pump was turned on. Neglecting the heat loss of the water tank, the highest coefficient of performance (COP) reached 6.14 and its average value was around 4.93. Overall, the system is a new facade integrated, highly efficient and aesthetically appealing solar water heating configuration; wide deployment of the system will help reduce fossil fuel consumption in the building sector and carbon emission to the environment.
---
paper_title: Review of R&D progress and practical application of the solar photovoltaic/thermal (PV/T) technologies.
paper_content:
In this paper, the global market potential of solar thermal, photovoltaic (PV) and combined photovoltaic/thermal (PV/T) technologies at present and in the near future was discussed. The concept of the PV/T and the theory behind PV/T operation were briefly introduced, and standards for evaluating the technical, economic and environmental performance of PV/T systems were addressed. A comprehensive literature review into R&D progress and practical application was carried out, and suggestions for further work were made, among them to: (2) optimise the structural/geometrical configurations of the existing PV/T systems; (3) study long term dynamic performance of the PV/T systems; (4) demonstrate the PV/T systems in real buildings and conduct the feasibility study; and (5) carry out advanced economic and environmental analyses. This review helps to identify the questions remaining in PV/T technology and new research topics/directions to further improve the performance of the PV/T, remove the barriers to PV/T practical application, establish the standards/regulations related to PV/T design and installation, and promote its market penetration throughout the world.
---
paper_title: Recent advances in the solar water heating systems: A review
paper_content:
Solar water heating (SWH) systems have widespread usage and applications in both the domestic and industrial sectors. According to Renewable Energy Policy Network data (2010), 70 million houses worldwide were reported to be using SWH systems. Solar water heating is not only environmentally friendly but also requires minimal maintenance and operating cost compared to other solar energy applications. SWH systems are cost effective, with an attractive payback period of 2–4 years depending on the type and size of the system. Extensive research has been performed to further improve the thermal efficiency of solar water heating. This paper presents a detailed review exclusively on the design aspects of SWH systems. The first part of the paper provides a consolidated summary of the development of various system components, including the collector, storage tank and heat exchanger. The latter part of this paper covers alternative refrigerant technology and technological advancements in improving the performance as well as the cost effectiveness of the SWH system.
---
paper_title: PCM thermal storage in buildings: A state of art
paper_content:
A comprehensive review of various possible methods for heating and cooling in buildings is discussed in this paper. The thermal performance of various types of systems, such as PCM Trombe walls, PCM wallboards, PCM shutters, PCM building blocks, air-based heating systems, floor heating and ceiling boards, is presented. All systems have good potential for heating and cooling in buildings through phase change materials and are also very beneficial in reducing the energy demand of buildings.
---
paper_title: Thermal performance of PCM thermal storage unit for a roof integrated solar heating system
paper_content:
The thermal performance of a phase change thermal storage unit is analysed and discussed. The storage unit is a component of a roof integrated solar heating system being developed for space heating of a home. The unit consists of several layers of phase change material (PCM) slabs with a melting temperature of 29 °C. Warm air delivered by a roof integrated collector is passed through the spaces between the PCM layers to charge the storage unit. The stored heat is utilised to heat ambient air before it is admitted to a living space. The study is based on both experimental results and a theoretical two dimensional mathematical model of the PCM employed to analyse the transient thermal behaviour of the storage unit during the charge and discharge periods. The analysis takes into account the effects of sensible heat which exists when the initial temperature of the PCM is well below or above the melting point during melting or freezing. The significance of natural convection occurring inside the PCM for the heat transfer rate during melting, which was previously suspected to be the cause of the faster melting process observed in one of the experiments, is discussed. The results are compared with a previous analysis based on a one dimensional model which neglected the effect of sensible heat. A comparison with experimental results for a specific geometry is also made.
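As a rough companion to the storage model discussed above, the sketch below evaluates a lumped charge energy balance for a PCM slab (solid sensible, latent and liquid sensible heat around a 29 °C melt point). The property values are indicative only and are not those of the actual storage unit.

```python
def pcm_energy_stored(mass, t_initial, t_final, t_melt=29.0,
                      cp_s=2.0e3, cp_l=2.2e3, latent=190e3):
    """Stored energy in J for heating from t_initial to t_final (t_final >= t_initial).

    Assumed properties: cp_s/cp_l in J/(kg*K), latent heat in J/kg.
    """
    q = 0.0
    if t_initial < t_melt:
        q += mass * cp_s * (min(t_final, t_melt) - t_initial)   # solid sensible heat
    if t_final > t_melt >= t_initial:
        q += mass * latent                                      # latent heat of melting
    if t_final > t_melt:
        q += mass * cp_l * (t_final - max(t_initial, t_melt))   # liquid sensible heat
    return q

# e.g. charging 100 kg of PCM from 20 degC to 35 degC, reported in kWh
print(pcm_energy_stored(mass=100.0, t_initial=20.0, t_final=35.0) / 3.6e6, "kWh")
```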
---
paper_title: State of the art on phase change material slurries
paper_content:
The interest in using phase change slurry (PCS) media as thermal storage and heat transfer fluids is increasing and thus leading to an enhancement in the number of articles on the subject. In air-conditioning and refrigeration applications, PCS systems represent a pure benefit resulting in the increase of thermal energy storage capacity, high heat transfer characteristics and positive phase change temperatures which can occur under low pressures. Hence, they allow the increase of energy efficiency and reduce the quantity of thermal fluids. This review describes the formation, thermo-physical, rheological, heat transfer properties and applications of four PCS systems: Clathrate hydrate slurry (CHS), Microencapsulated Phase Change Materials Slurry (MPCMS), shape-stabilized PCM slurries (SPCMSs) and Phase Change Material Emulsions (PCMEs). It regroups a bibliographic summary of important information that can be very helpful when such systems are used. It also gives interesting and valuable insights on the choice of the most suitable PCS media for laboratory and industrial applications.
---
paper_title: Development of a thermally activated ceiling panel with PCM for application in lightweight and retrofitted buildings
paper_content:
This paper describes the development of a thermally activated ceiling panel for incorporation in lightweight and retrofitted buildings. The system allows use of renewable energy sources for the heating and cooling of office and industrial buildings. The design for the new ceiling panel exploits the properties of the phase change material (PCM) paraffin. Its high thermal storage capacity during phase change, up to 300 Wh/(m² day), enables the overall panel thickness to be limited to a mere 5 cm. Active control of the thermal storage is achieved by means of an integrated water capillary tube system. The research project also included the development of a numerical model for computation of the thermal behavior of wall and ceiling systems incorporating PCMs. Simulation calculations were performed to determine the necessary thermal properties of the ceiling panels and specify requirements for the materials to be used. Laboratory tests were performed to verify the system's performance and a pilot application is soon to be tried out in practice.
---
paper_title: Applications of Phase Change Material in highly energy-efficient houses
paper_content:
Thermal mass combined with other passive strategies can play an important role in the energy efficiency of buildings, minimizing the need for space-conditioning mechanical systems. However, the use of lightweight materials with low thermal mass is becoming increasingly common. Phase Change Materials (PCMs) can add thermal energy storage benefits to lightweight constructions. There are many studies about the use of PCM in buildings, but there are still some difficulties in the effective use and practical application of these materials. The study of applications used in real buildings could help us find solutions to these difficulties. This paper analyzed different PCM applications presented in highly efficient lightweight construction houses that have participated in the American Solar Decathlon, an international competition organized by the U.S. Department of Energy. These houses have been tested and monitored in Washington DC, a place with suitable climate conditions for short term thermal storage systems. The study started with a classification of the PCM applications and included an analysis of the systems, materials, switching temperatures, containments and design strategies used to improve the houses' energy performance. Also, it included results of numerical simulations and experimentation that the participating teams had done, complemented with information available in the literature about similar materials or applications.
---
paper_title: Experiment on heat storage characteristic of microencapsulated phase change material slurry
paper_content:
A heat storage experiment with natural convection in rectangular enclosures heated from the bottom has been conducted with a fluid slurry composed of microencapsulated phase change material (PCM). The microencapsulated PCM is prepared by an in-situ polymerization method, where the core materials are composed of several kinds of n-paraffin waxes (mainly nonadecane) and the membrane is a type of melamine resin. Its slurry mixed with water is used in this study, and shows a peak in the specific heat capacity, due to latent heat, at a temperature of about T = 31 °C. The influences of the phase change material on heat storage and the heat transfer process, as well as the effects of the PCM mass concentration Cm of the microcapsule slurry, the heat storage temperature TH and the horizontal enclosure height H, are also investigated. The transient heat transfer coefficient α, the heat storage capacity Q and the completion time of heat storage tc are discussed.
---
paper_title: Heat transfer enhancement for thermal energy storage using metal foams embedded within phase change materials (PCMs)
paper_content:
In this paper an experimental investigation of the solid/liquid phase change (melting and solidification) processes has been carried out. Paraffin wax RT58 is used as the phase change material (PCM), in which metal foams are embedded to enhance the heat transfer. During the melting process, the test samples are electrically heated on the bottom surface with a constant heat flux. The PCM with metal foams has been heated from the solid state to the pure liquid phase. The temperature differences between the heated wall and the PCM have been analysed to examine the effects of heat flux and metal foam structure (pore size and relative density). Compared to the results of the pure PCM sample, the effect of metal foam on solid/liquid phase change heat transfer is very significant, particularly in the solid zone of PCMs. When the PCM starts melting, natural convection can improve the heat transfer performance, thereby reducing the temperature difference between the wall and the PCM. The addition of metal foam can increase the overall heat transfer rate by 3-10 times (depending on the metal foam structures and materials) during the melting process (two-phase zone) and in the pure liquid zone. Tests investigating the solidification process under different cooling conditions (e.g. natural convection and forced convection) have been carried out. The results show that the use of metal foams can make the sample solidify much faster than pure PCM samples, evidenced by the solidification time being reduced by more than half. In addition, a two-dimensional numerical analysis has been carried out for heat transfer enhancement in PCMs by using metal foams, and the prediction results agree reasonably well with the experimental data.
---
paper_title: Ventilated active façades with PCM
paper_content:
This article describes an evaluation of the thermal performance of a new type of ventilated active facade that includes a phase change material (PCM) in its outer layer. The research was carried out experimentally by means of a real-scale PASLINK test cell facility, located in the city of Vitoria-Gasteiz in Spain. The results of an experiment performed in March 2010 are presented and evaluated. The behavior of the facade was compared with different traditional constructive systems, using the results of computational simulations performed with the Design Builder software. The experimental results showed that the melting–solidification processes that take place in the PCM led to an increase in the heat absorption during the phase-change temperature intervals, which reduced overheating of the facade. The air circulating through the ventilated chamber was overheated up to 12°C during the daytime. Because of the PCM solidification, 2.5h after the solar radiation faded out, the air circulating through the chamber was still warmed by 2°C. The energy efficiency of the facade during the testing period is attributable to the 10–12% incident radiation gains. This efficiency was found to be a function of the circulating air flow rate. The simulations results showed that the thermal inertia of the ventilated facade with a PCM is higher than that of the four traditional solutions evaluated in the study. Further research is required to study the influence of the air flow rate through the ventilated chamber.
---
paper_title: Theoretical investigation of the energy performance of a novel MPCM (Microencapsulated Phase Change Material) slurry based PV/T module
paper_content:
The aim of the paper is to present a theoretical investigation into the energy performance of a novel PV/T module that employs the MPCM (Micro-encapsulated Phase Change Material) slurry as the working fluid. This involved (1) development of a dedicated mathematical model and computer program; (2) validation of the model by using the published data; (3) prediction of the energy performance of the MPCM (Microencapsulated Phase Change Material) slurry based PV/T module; and (4) investigation of the impacts of the slurry flow state, concentration ratio, Reynolds number and slurry serpentine size on the energy performance of the PV/T module. It was found that the established model, based on the Hottel–Whillier assumption, is able to predict the energy performance of the MPCM slurry based PV/T system with very good accuracy, with 0.3–0.4% difference compared to a validated model. Analyses of the simulation results indicated that laminar flow is not a favorable flow state in terms of the energy efficiency of the PV/T module. Instead, turbulent flow is a desired flow state that has the potential to enhance the energy performance of the PV/T module. Under the turbulent flow condition, increasing the slurry concentration ratio led to reduced PV cells' temperature and increased thermal, electrical and overall efficiency of the PV/T module, as well as increased flow resistance. As a result, the net efficiency of the PV/T module reached its peak level at a concentration ratio of 5% at a specified Reynolds number of 3,350. With all other parameters fixed, increasing the diameter of the serpentine piping led to an increased slurry mass flow rate, decreased PV cells' temperature and, consequently, increased thermal, electrical, overall and net efficiencies of the PV/T module. Overall, the MPCM slurry based PV/T module is a new, highly efficient solar thermal and power configuration, which has the potential to help reduce fossil fuel consumption and carbon emission to the environment.
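The Hottel-Whillier useful-gain relation underlying such models is Qu = A·FR·[G(τα) − UL(Ti − Ta)]. The sketch below evaluates it with illustrative parameter values that are not those of the MPCM slurry module in the paper.

```python
# Hottel-Whillier-Bliss useful gain: Qu = A * FR * [G*(tau*alpha) - UL*(Ti - Ta)].
def useful_heat_gain(area, f_r, irradiance, tau_alpha, u_l, t_in, t_amb):
    return area * f_r * (irradiance * tau_alpha - u_l * (t_in - t_amb))

# Assumed values: 2 m^2 collector, FR = 0.9, G = 800 W/m^2, (tau*alpha) = 0.8,
# UL = 6 W/(m^2*K), inlet 35 degC, ambient 25 degC.
q_u = useful_heat_gain(area=2.0, f_r=0.9, irradiance=800.0, tau_alpha=0.8,
                       u_l=6.0, t_in=35.0, t_amb=25.0)
thermal_efficiency = q_u / (2.0 * 800.0)
print(q_u, thermal_efficiency)   # 1044 W and ~0.65
```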
---
paper_title: Experimental analysis of a microencapsulated PCM slurry as thermal storage system and as heat transfer fluid in laminar flow
paper_content:
A microencapsulated PCM (Phase Change Material) slurry is a dispersion where the PCM, microencapsulated by a polymeric capsule, is dispersed in water. Compared to water, these new fluids have a higher heat capacity during the phase change and a possible enhancement, as a result of this phase change, in the heat transfer phenomenon. From the literature review, the existing experimental results are found incomplete and contradictory in many cases. For this reason the objective of this investigation is to analyze the heat transfer phenomenon in mPCM slurries, proposing a new methodology, developed by the authors. In this manner, an experimental analysis using a slurry with a 10% weight concentration of paraffin has been conducted to study it as a thermal storage material and as a heat transfer fluid. The results demonstrated an improvement of approximately 25% on the convective heat transfer coefficient when compared to water.
---
paper_title: Aluminum heat pipes applied in solar collectors
paper_content:
Previous researchers have developed a variety of liquid thermal solar collector designs for water heating. Other authors have reported that applying metal heat pipes to liquid solar collectors, especially evacuated glass tube ones, is an efficient solution for water heating plants. However, the majority of thermal solar collectors do not meet the requirements of low weight, easy assembly and installation, versatility, scalability, and adaptability of the design, which are particularly important when they are facade integrated. Very high hydraulic resistance in liquid solar collectors, from 2000 Pa to 20,000 Pa, and the low thermal efficiency of some of them, less than 0.5, are also problems to be solved by developers. The current research proposes to apply heat pipes made of extruded aluminum alloy with an original cross-sectional profile, wide fins and longitudinal grooves in order to avoid the above-mentioned drawbacks of liquid thermal collectors. The absorber plate of flat collectors could be composed of several fins. Fins at the opposite end of the heat pipe serve as a heat sink surface. Multiple tests proved that the new lightweight and inexpensive heat pipes show high thermal performance. The maximum heat transfer power of one heat pipe is up to 210 W, and its thermal resistance is very low, from 0.02 to 0.07 °C/W. The hydraulic resistance of flat plate and evacuated solar collectors utilizing aluminum profiled heat pipes could be reduced to less than 100 Pa, while their thermal efficiency is rather high, up to 0.72. As an outcome of the authors' study, the feasibility of applying the developed aluminum profiled heat pipes to thermal solar collectors was proven, and they can be successfully integrated into building facades and roofs.
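A small arithmetic check on the quoted heat-pipe figures: at the stated maximum power of 210 W, a thermal resistance of 0.02-0.07 °C/W corresponds to an evaporator-condenser temperature drop of roughly R·Q.

```python
# Temperature drop across the heat pipe, dT = R * Q, using only the figures
# quoted in the abstract (R = 0.02-0.07 degC/W, Q = 210 W).
for resistance in (0.02, 0.07):
    print(resistance, "degC/W ->", round(resistance * 210.0, 1), "degC drop at 210 W")
```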
---
paper_title: Parallel experimental study of a novel super-thin thermal absorber based photovoltaic/thermal (PV/T) system against conventional photovoltaic (PV) system
paper_content:
Photovoltaic (PV) semiconductor degrades in performance due to temperature rise. A super thin-conductive thermal absorber is therefore developed to regulate the PV working temperature by retrofitti ...
---
paper_title: Architectural Integration and Design of Solar Thermal Systems
paper_content:
Although mature technologies at competitive prices are largely available, solar thermal is not yet playing the important role it deserves in the reduction of buildings fossil energy consumption. The generally low architectural quality characterizing existing building integrations of solar thermal systems pinpoints the lack of design as one major reason for the low spread of the technology. As confirmed by the example of photovoltaics, the improvement of the architectural quality of building integrated systems can increase the use of a solar technology even more than price and technique. This thesis investigates the possible ways to enhance the architectural quality of building integrated solar thermal systems, and focuses on integration into facade, where the formal constraints are major and have most impact. The architectural integration problematic is structured into functional, constructive and formal issues, so that integration criteria are given for each architectural category. As the functional and constructive criteria are already recognized by the scientific community, the thesis concentrates on the definition of the formal ones, yet underestimated or misunderstood. The results of a large European survey over architects and engineers perception of building integration quality are presented, showing that for architects formal issues are not a matter of personal taste, but that they relate to professional competences, and consequently can be described. The solar system characteristics having an impact on the formal quality of the integration are identified (formal characteristics), the related integration criteria are assessed, and finally integration guidelines to support architect integration design work are given. The limits imposed by the collectors available in the market are pointed out, showing that the lack of appropriate products is nowadays the main barrier to BIST (Building Integrated Solar Thermal) architectural quality. A methodology for the development of new solar thermal collectors systems responding at the same time to energy production needs and building integration requirements is defined. The importance to ensure, within the design team, the due professional competences in both these fields is stressed. Three progressive levels of system "integrability" are defined in the path leading to the concept of "active envelope systems" and the main role of facade manufacturers is highlighted. The methodology is applied to unglazed and glazed flat plate systems, and new facade system designs are proposed that show the relevance of the proposed approach.
---
paper_title: Dynamic performance of a façade-based solar loop heat pipe water heating system
paper_content:
This paper reports a dedicated study of a novel facade-based solar loop heat pipe (LHP) water heating system using both theoretical and experimental methods. This system employs a modular panel incorporating a unique loop heat pipe that is able to serve as part of the building facade or a decoration layer of the facade, thus creating a facade integrated, low cost, highly efficient and aesthetically appealing solar water heating structure. Taking into account heat balances occurring in different parts of the system, e.g., solar absorber, heat pipe loop, heat exchanger and storage tank, a dedicated computer model was developed to investigate the dynamic performance of the system. An experimental rig was also established to evaluate the performance of such a prototype system through measurement of various operational parameters, e.g., solar radiation, temperatures and flow rates of the heat pipe fluid and water. Through comparison between the testing and modelling results, the model was shown to predict the performance of the LHP system with reasonable accuracy. Two types of glass covers, i.e., double glazed/evacuated tubes and single-glazing plate, were applied to the prototype. It was found that for both covers, the heat pipe fluid temperature rose dramatically at start-up and afterwards maintained a slow but steady growth, while the water temperature maintained a steadily growing trend throughout the operational day. The temperature rise of the circulated water at 1.6 l/min of flow rate was around 13.5 °C in the double-glazed/evacuated tubes based system and 10 °C in the single-glazing based system; correspondingly, their average solar conversion efficiencies were 48.8% and 36%, and the COPs were 14 and 10.5 respectively. Overall, the double-glazed/evacuated tubes based system presented a better performance than the single glazing based one.
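Two figures of merit quoted above can be sketched as follows: the useful heat rate collected by the circulating water, Q = m_dot·cp·ΔT per pass, and the COP as useful heat over parasitic electrical input. The per-pass temperature rise and pump power in the example are assumed values, not measurements from the paper.

```python
def useful_heat_rate(flow_l_per_min, delta_t, cp=4186.0, rho=1000.0):
    """Heat rate in W carried by the water for a given per-pass temperature rise."""
    m_dot = flow_l_per_min / 60.0 / 1000.0 * rho        # volumetric flow -> kg/s
    return m_dot * cp * delta_t

q = useful_heat_rate(flow_l_per_min=1.6, delta_t=2.0)    # assumed 2 K rise per pass
cop = q / 25.0                                           # assumed 25 W pump power
print(round(q, 1), "W, COP =", round(cop, 1))
```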
---
paper_title: Facade Integration of Solar Thermal Collectors: A Breakthrough
paper_content:
One main barrier to the acceptability of facade use of solar thermal collectors is their black appearance and the visibility of piping or absorber irregularities through the glazing. To facilitate facade integration, a project was set up to develop selective filters reflecting only a small part of the solar spectrum in the visible range while letting the rest of the radiation heat the absorber. These filters were successfully produced and, combined with a diffusing glass treatment, have achieved the desired masking effect with minor impact on the collector efficiency (less than 10%). Glasses of various colours combined with several diffusing finishing (acid etching, structured glass etc…) can be produced that are able to hide the absorber. Such glazings will allow the use of the same product both in front of facade areas equipped with solar absorbers (as collector external glass) and in front of the non exposed areas (as facade cladding), opening the way to a broad variety of active facade designs. The active elements can then be positioned at will on the exposed areas, and their quantity determined only by thermal needs. By freeing the dimension of the facade area that can be clad with this glazing from the thermally needed surface for collectors, a major step to help architects use solar thermal on facades has been taken.
---
paper_title: Active Solar Thermal Facades (ASTFs): From concept, application to research questions
paper_content:
The aim of the paper is to report a comprehensive review into a recently emerging building integrated solar thermal technology, namely, Active Solar Thermal Facades (ASTFs), in terms of concept, classification, standards, performance evaluation, application, as well as research questions. This involves the combined effort of literature review, analysis, extraction, integration, critique, prediction and conclusion. It is indicated that the ASTFs are a type of building envelope element incorporating solar collecting devices, thus enabling the dual functions, e.g., space shielding and solar energy collection, to be performed. Based on the function of the building envelopes, the ASTF systems can be generally classified as wall-, window-, balcony- and roof-based types; while the ASTFs could also be classified by the thermal collection typologies, transparency, application, and heat-transfer medium. Currently, existing building and solar collector standards are brought together to evaluate the performance of the ASTFs. The research questions relating to the ASTFs are numerous, but the major points lie in: (1) whole structure and individual components layout, sizing and optimisation; (2) theoretical analysis; (3) experimental measurement; and (4) energy saving, economic and environmental performance assessment. Based on the analysis of the identified research questions, achievements made on each question, and outstanding problems remaining with the ASTFs, further development opportunities on this topic are suggested: (1) development of an integrated database/software enabling both architecture design and engineering performance simulation; (2) real-time measurement of the ASTF integrated buildings on a long-term scheme; (3) economic and environmental performance assessment and social acceptance analysis; (4) dissemination, marketing and exploitation strategies study. This study helps in identifying the current status, potential problems in existence, and future directions in research, development and practical application of the ASTF technologies in buildings. It will also promote the development of renewable energy technology and thus contribute to achieving the UK and international targets in energy saving, renewable energy utilization, and carbon emission reduction in the building sector.
---
paper_title: Mathematical modeling and thermal performance analysis of unglazed transpired solar collectors
paper_content:
Unglazed transpired collectors or UTC (also known as perforated collectors) are a relatively new development in solar collector technology, introduced in the early nineties for ventilation air heating. These collectors are used in several large buildings in Canada, USA and Europe, effecting considerable savings in energy and heating costs. Transpired collectors are a potential replacement for glazed flat plate collectors. This paper presents the details of a mathematical model for UTC using heat transfer expressions for the collector components, and empirical relations for estimating the various heat transfer coefficients. It predicts the thermal performance of unglazed transpired solar collectors over a wide range of design and operating conditions. Results of the model were analysed to predict the effects of key parameters on the performance of a UTC for a delivery air temperature of 45–55 °C for drying applications. The parametric studies were carried out by varying the porosity, airflow rate, solar radiation, and solar absorptivity/thermal emissivity, and finding their influence on collector efficiency, heat exchange effectiveness, air temperature rise and useful heat delivered. Results indicate promising thermal performance of UTC in this temperature band, offering itself as an attractive alternate to glazed solar collectors for drying of food products. The results of the model have been used to develop nomograms, which can be a valuable tool for a collector designer in optimising the design and thermal performance of UTC. It also enables the prediction of the absolute thermal performance of a UTC under a given set of conditions.
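Two of the performance measures used in such UTC models can be sketched directly: the heat-exchange effectiveness ε = (Tout − Tamb)/(Tplate − Tamb) and the collector efficiency η = m_dot·cp·(Tout − Tamb)/(A·G). The operating values below are illustrative, not results from the paper.

```python
def utc_performance(t_out, t_amb, t_plate, m_dot, area, irradiance, cp=1005.0):
    """Return (heat-exchange effectiveness, collector efficiency) for a UTC."""
    effectiveness = (t_out - t_amb) / (t_plate - t_amb)
    efficiency = m_dot * cp * (t_out - t_amb) / (area * irradiance)
    return effectiveness, efficiency

# Assumed operating point: delivery air 45 degC, ambient 25 degC, plate 55 degC,
# 0.04 kg/s of air drawn through 2 m^2 of collector at 700 W/m^2.
print(utc_performance(t_out=45.0, t_amb=25.0, t_plate=55.0,
                      m_dot=0.04, area=2.0, irradiance=700.0))
```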
---
paper_title: Optimal control of flow in solar collectors for maximum exergy extraction
paper_content:
The best operation strategies for open loop flat-plate solar collector systems are considered. A direct optimal control method (the TOMP algorithm) is implemented. A detailed collector model and realistic meteorological data from both cold and warm seasons are used in applications. The maximum exergetic efficiency is low (usually less than 3%), in good agreement with experimental measurements reported in literature. The optimum mass-flow rate increases near sunrise and sunset and by increasing the fluid inlet temperature. The optimum mass-flow rate is well correlated with global solar irradiance during the warm season. Also, operation at a properly defined constant mass-flow rate may be close to the optimal operation.
---
paper_title: Effect of wind on flow distribution in unglazed transpired-plate collectors
paper_content:
Unglazed transpired-plate solar air heaters have proven to be effective devices for heating air directly from ambient on a once through basis. They have found applications in ventilation-air preheating and in crop-drying. Large collectors are now routinely built that cover the sides of sizeable buildings, and the problem of designing the system so that the flow of the air through the collector face is reasonably uniform and so that there is no ‘outflow’ over part of the collector face has been seen as a challenging one. The flow distribution was analyzed in an earlier study using a computational fluid dynamics (CFD) model, but that study was limited to the case where there is no wind present. The present paper extends the earlier study to the case where there is wind. Various building orientations are examined, at a wind speed of 5 m/s. The wind was found to reinforce those factors that tend to produce outflow, and in light of this study, the recommended minimum suction velocity required to avoid outflow has been raised from about 0.0125 m/s to about 0.03 m/s, depending on the building shape. On the other hand, there are possible strategies that can be adopted to reduce the effect of wind, and these are discussed.
---
paper_title: A field study of the wind effects on the performance of an unglazed transpired solar collector
paper_content:
An experimental study was carried out on a working unglazed transpired solar collector (UTSC) to determine what effects ambient wind has on its performance. The monitoring system included instruments to measure temperatures, collector outlet flow rates, solar radiation, wind speed, and wind direction; as well as an ultrasonic anemometer placed near the centre of the collector. Efficiency was defined as the fraction of incident solar heat flux that went to preheating the transpired air. Our observations indicate a high degree of turbulence near the wall which feeds the near wall region. This is supported by observations of efficiency which decrease monotonically with increasing turbulence intensities. It was also observed that peak efficiencies did not occur at the lowest wind speeds. Both these findings seem to contradict existing laminar boundary layer models for UTSC performance.
---
paper_title: Energetic and exergetic evaluation of flat plate solar collectors
paper_content:
Energy efficiency is generally used as one of the most important parameters to introduce and compare thermal systems, including flat plate solar collectors, despite the fact that the first law of thermodynamics is not solely capable of demonstrating the quantitative and qualitative performance of such systems. In this paper, a theoretical and comprehensive model for energy and exergy analysis of flat plate solar collectors is presented, through which the effect of all design parameters on performance can be examined. Upon verification and confirmation of the model based on the experimental data, the effect of parameters such as fluid flow rate and temperature, type of working fluid and thickness of the back insulation on the energy and exergy efficiency of the collector has been examined and, based on the analysis and comparison of results, the optimal working condition of the system has been determined. According to the results, designing the system with an inlet water temperature approximately 40 °C above the ambient temperature, as well as a lower flow rate, will enhance the overall performance.
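The exergy bookkeeping used in such analyses can be sketched as the fluid exergy gain m_dot·cp·[(Tout − Tin) − Ta·ln(Tout/Tin)] divided by the solar exergy estimated with the Petela factor. The operating values below are illustrative and temperatures are in kelvin.

```python
import math

def fluid_exergy_gain(m_dot, t_in, t_out, t_amb, cp=4186.0):
    """Exergy rate (W) gained by the working fluid; temperatures in kelvin."""
    return m_dot * cp * ((t_out - t_in) - t_amb * math.log(t_out / t_in))

def solar_exergy(irradiance, area, t_amb, t_sun=5770.0):
    """Exergy rate (W) of the incident radiation using the Petela factor."""
    petela = 1.0 - (4.0 / 3.0) * (t_amb / t_sun) + (1.0 / 3.0) * (t_amb / t_sun) ** 4
    return irradiance * area * petela

# Assumed operating point: 0.03 kg/s of water heated from 305 K to 320 K,
# ambient 300 K, 800 W/m^2 on a 2 m^2 collector.
ex_fluid = fluid_exergy_gain(m_dot=0.03, t_in=305.0, t_out=320.0, t_amb=300.0)
ex_solar = solar_exergy(irradiance=800.0, area=2.0, t_amb=300.0)
print(ex_fluid / ex_solar)   # exergetic efficiency of a few per cent
```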
---
paper_title: Approximate method for computation of glass cover temperature and top heat-loss coefficient of solar collectors with single glazing
paper_content:
An improved equation form for computing the glass cover temperature of flat-plate solar collectors with single glazing is developed. A semi-analytical correlation for the factor f—the ratio of inner to outer heat-transfer coefficients—as a function of collector parameters and atmospheric variables is obtained by regression analysis. This relation readily provides the glass cover temperature (Tg). The results are compared with those obtained by numerical solution of heat balance equations. Computational errors in Tg and hence in the top heat loss coefficient (Ut) are reduced by a factor of five or more. With such low errors in computation of Tg and Ut, a numerical solution of heat balance equations is not required. The method is applicable over an extensive range of variables: the error in the computation of Ut is within 2% with the range of air gap spacing 8 mm to 90 mm and the range of ambient temperature 0°C to 45°C. In this extended range of variables the errors due to simplified method based on empirical relations for Ut are substantially higher.
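The single-glazing heat balance behind such correlations can be sketched directly: with f = h_in/h_out, the steady-state balance h_in(Tp − Tg) = h_out(Tg − Ta) gives Tg = (f·Tp + Ta)/(1 + f), and the top-loss coefficient follows from the series resistance. The coefficients below are assumed combined convective-plus-radiative values, not the paper's correlation.

```python
def glass_cover_temperature(t_plate, t_amb, f):
    """Tg from the steady-state cover heat balance, f = h_in / h_out."""
    return (f * t_plate + t_amb) / (1.0 + f)

def top_loss_coefficient(h_in, h_out):
    """Top heat-loss coefficient from the two resistances in series."""
    return 1.0 / (1.0 / h_in + 1.0 / h_out)

h_in, h_out = 4.5, 18.0          # W/(m^2*K), assumed values
f = h_in / h_out
print(round(glass_cover_temperature(t_plate=80.0, t_amb=20.0, f=f), 1))   # -> 32.0 degC
print(round(top_loss_coefficient(h_in, h_out), 1))                         # -> 3.6 W/(m^2*K)
```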
---
paper_title: A state of art review on the performance of transpired solar collector
paper_content:
Utilisation of solar radiation to heat air for various purposes, e.g. ventilation, preheating and process air heating, and its applications have attracted more and more interest. A wide range of applications has been adapted for different climates and for building types ranging from houses to large industrial buildings. Recently, this concept has developed rapidly in many European countries, the USA and Canada. Transpired solar collectors (TSCs) have proven reliable for various applications, e.g. heating spaces, providing warm ventilation air, supplying domestic hot water in summer, etc.
---
paper_title: Solar collector overheating protection
paper_content:
Abstract Prismatic structures in a thermal solar collector are used as overheating protection. Such structures reflect incoming light efficiently back whenever less thermal power is extracted from the solar collector. Maximum thermal power is generated when the prismatic structure is surrounded by a switching fluid with an index of refraction comparable to that of the prismatic structure. Thermal heat can be harvested via extra fluid channels in the solar absorber or directly via the switching fluid near the prisms. The light reducing effect of prismatic structures is demonstrated for a typical day and a season cycle of the Earth around the Sun. The switchability and the light reducing effect are also demonstrated in a prototype solar collector.
---
paper_title: Optimization of latent heat storage in solar air heating system with vacuum tube air solar collector
paper_content:
Abstract This paper presents the design and modelling of the heat transfer of a solar air heating system, which consists of a vacuum tube air solar collector (SC) and latent heat thermal energy storage (LHTES), and a parametric analysis of the performance of this system. LHTES is a form of short-term daily storage that stores the SC heat during the day and releases it into the building during the night. Especially in low energy buildings with a high share of passive heating, this can significantly improve the utilization of solar energy for heating. The design of concentric-tube LHTES was optimized regarding the air temperature at the exit of LHTES during the day and the peak shift of heat supply. The results showed that the optimal mass of PCM in the LHTES is 150–200 kg/m2 and the optimal air flow rate is 40 m3/h per m2 of the SC aperture area. The analysis of the system performance at different levels of daily solar irradiation has shown that 54–67% of the heat produced by the solar air heating system in the daytime can be delivered during the night for building heating.
---
paper_title: Optimal control of flow in solar collectors for maximum exergy extraction
paper_content:
The best operation strategies for open loop flat-plate solar collector systems are considered. A direct optimal control method (the TOMP algorithm) is implemented. A detailed collector model and realistic meteorological data from both cold and warm seasons are used in applications. The maximum exergetic efficiency is low (usually less than 3%), in good agreement with experimental measurements reported in literature. The optimum mass-flow rate increases near sunrise and sunset and by increasing the fluid inlet temperature. The optimum mass-flow rate is well correlated with global solar irradiance during the warm season. Also, operation at a properly defined constant mass-flow rate may be close to the optimal operation.
---
paper_title: A critical review of photovoltaic―thermal solar collectors for air heating
paper_content:
Integrated photovoltaic–thermal solar collectors have become of great interest in the solar thermal and photovoltaic (PV) research communities. Solar thermal systems and solar PV systems have each advanced markedly, and combining the two technologies provides the opportunity for increased efficiency and expanded utilization of solar energy. In this article, the authors critically review photovoltaic–thermal solar collectors for air heating. Included is a review of photovoltaic thermal technology and recent advances, particularly as applied to air heaters. It is determined that the photovoltaic–thermal (PV/T) air heater is or may in the future be practicable for preheating air for many applications, including space heating and drying, and that integrated PV/T collectors deliver more useful energy per unit collector area than separate PV and thermal systems. Although PV/T collectors are promising, it is evident that further research is required to improve efficiency, reduce costs and resolve several technical design issues related to the collectors.
---
paper_title: A simple predictive controller for use on large scale arrays of parabolic trough collectors
paper_content:
Abstract Efficient operation of a distributed collector field requires effective regulation of the outlet temperature. Control schemes utilising PI based controllers, whether adaptive or fixed parameter, have been shown to be unsuitable for this application. The reason for this is that such collector fields possess resonance dynamics at a low frequency which tend to restrict the bandwidth of such controllers. In this article a predictive controller is proposed whose purpose is to effectively counter such dynamics. This control technique is based upon a simple transfer function representation of the resonance dynamics which can be tuned using easily obtained experimental data and is computationally efficient. When applied to a validated non-linear computer model of the collector field the controller is seen to exhibit a fast well damped control response superior to that achievable using PI control.
---
paper_title: Solar collector with temperature limitation using shape memory metal
paper_content:
The application of heat pipes in photo-thermal conversion, as well as the heat transfer mechanism in a heat pipe, is reported. The problem of no-load conditions combined with the high stagnation temperatures of solar collectors (in particular evacuated collectors) is addressed. The use of shape memory alloys in solar collectors is discussed. The Shape Memory Effect and superelasticity are explained, and the physical phenomena behind them are briefly reviewed. The application of a shape memory alloy in a commercially available evacuated heat pipe solar collector and its mechanism is discussed.
---
paper_title: A review of strategies for the control of high temperature stagnation in solar collectors and systems
paper_content:
Abstract High temperatures occurring during stagnation conditions can be very detrimental to the reliability, durability and safety of solar thermal systems. Various approaches to mitigate the effects of stagnation have been employed in the past; however, as collector and system efficiencies improve, and larger solar systems are installed, the need for reliable and cost-effective stagnation control schemes is increasing. In this paper, the impacts of stagnation and various approaches to stagnation control are discussed and compared with regard to their features and limitations.
---
paper_title: Enhanced heat transfer using oscillatory flows in solar collectors
paper_content:
In this work, we propose the use of oscillatory laminar flows to enhance the transfer of heat from solar collectors. The idea is to explore the possibility of transferring the heat collected from a solar device to a storage tank by means of a zero-mean oscillating fluid contained in a tube. This method takes advantage of the fact that the effective thermal diffusivity of a fluid in oscillatory motion is several orders of magnitude higher than the fluid molecular diffusivity. Therefore, the axial transport of heat along the tube is substantially higher when the fluid oscillates than when the fluid is static. Also, preliminary estimations show a dramatic heat transfer enhancement using oscillatory flows compared with the forced convection of heat by standard unidirectional flows. We explore the behavior of the effective thermal diffusivity using both Newtonian and viscoelastic fluids. For the Newtonian fluid a single maximum value of this quantity is exhibited for a given oscillation frequency. In contrast, several maxima for different resonant frequencies are observed for the viscoelastic fluid. Further, the absolute maximum of the enhanced thermal diffusivity for the viscoelastic fluid is several orders of magnitude larger than that of the Newtonian fluid.
---
paper_title: Flow distribution in a solar collector panel with horizontally inclined absorber strips
paper_content:
Abstract The objective of this work is to theoretically and experimentally investigate the flow and temperature distribution in a solar collector panel with an absorber consisting of horizontally inclined strips. Fluid flow and heat transfer in the collector panel are studied by means of computational fluid dynamics (CFD) calculations. Further, experimental investigations of a 12.5 m2 solar collector panel with 16 parallel-connected horizontal fins are carried out. The flow distribution through the absorber is evaluated by means of temperature measurements on the backside of the absorber tubes. The measured temperatures are compared to the temperatures determined by the CFD model, and there is good agreement between the measured and calculated results. Calculations with the CFD model elucidate the flow and temperature distribution in the collector. The influences of different operating conditions such as flow rate, properties of the solar collector fluid, solar collector fluid inlet temperature and collector tilt angle are shown. The flow distribution through the absorber fins is uniform if high flow rates are used. With decreased flow rate, decreased glycol content in the glycol/water mixture used as solar collector fluid, increased collector tilt and increased inlet temperature, the flow distribution worsens, resulting in an increased risk of boiling in the upper part of the collector panel.
---
paper_title: Exergetic optimization of flat plate solar collectors
paper_content:
In this paper, an exergetic optimization of flat plate solar collectors is developed to determine the optimal performance and design parameters of these solar to thermal energy conversion systems. A detailed energy and exergy analysis is carried out for evaluating the thermal and optical performance, exergy flows and losses as well as exergetic efficiency for a typical flat plate solar collector under given operating conditions. In this analysis, the following geometric and operating parameters are considered as variables: the absorber plate area, dimensions of solar collector, pipes' diameter, mass flow rate, fluid inlet, outlet temperature, the overall loss coefficient, etc. A simulation program is developed for the thermal and exergetic calculations. The results of this computational program are in good agreement with the experimental measurements noted in the previous literature. Finally, the exergetic optimization has been carried out under given design and operating conditions and the optimum values of the mass flow rate, the absorber plate area and the maximum exergy efficiency have been found. Thus, more accurate results and beneficial applications of the exergy method in the design of solar collectors have been obtained.
---
| Title: Building Integrated Solar Thermal (BIST) Technologies and Their Applications: A Review of Structural Design and Architectural Integration
Section 1: Introduction
Description 1: Provide an overview of the importance of renewable energy sources, particularly solar thermal energy, for building applications and highlight the potential of BIST systems.
Section 2: Water-based BIST technology
Description 2: Discuss the use of water as a heat transfer medium in BIST systems, including its advantages, challenges, and structural designs.
Section 3: Category of BIST Technologies
Description 3: Classify BIST systems based on the heat transfer medium used, such as air, hydraulic (water/heat pipe/refrigerant), and PCM-based systems, with descriptions of each type.
Section 4: Air-based BIST technology
Description 4: Examine the characteristics, advantages, and structural designs of air-based solar thermal systems that use air as the working fluid.
Section 5: Refrigerant-based BIST technology
Description 5: Explain the use of refrigerants in BIST systems, their advantages over water, various refrigerant types, and relevant structural designs.
Section 6: PCM-based facade BIST technology
Description 6: Describe the application of Phase Change Materials (PCM) in BIST systems, their benefits for thermal storage, and potential applications.
Section 7: General comparison of different SFT technologies
Description 7: Provide a comparison of different BIST technologies, including their efficiencies and suitability for various applications.
Section 8: BIST Structural Design in Terms of Architectural Element
Description 8: Discuss the integration of BIST systems into various architectural elements such as walls, windows, balconies, and roofs, including structural considerations.
Section 9: Typical BIST structures in terms of architectural elements
Description 9: Present examples and schematic structures of BIST systems integrated into different building elements.
Section 10: Design standards
Description 10: Outline the technical and safety standards relevant to the design and implementation of BIST systems in building envelopes.
Section 11: Architectural consideration
Description 11: Explore the architectural considerations, functional aspects, and aesthetic integration of BIST systems in building designs.
Section 12: Operation Conditions of BIST Systems
Description 12: Analyze the operational conditions, such as temperature and flow rates, that affect the performance and efficiency of BIST systems.
Section 13: Practical applications
Description 13: Summarize various pilot projects and examples of successful BIST system integrations in real-world buildings.
Section 14: Critical analysis
Description 14: Critically evaluate the current state of BIST technologies, including their benefits, limitations, and areas for improvement.
Section 15: Conclusion
Description 15: Summarize the findings of the review, emphasizing the potential of BIST systems and highlighting future research and development directions. |
A REVIEW ON THE CURRENT SEGMENTATION ALGORITHMS FOR MEDICAL IMAGES | 6 | ---
paper_title: Seeded region growing
paper_content:
We present here a new algorithm for segmentation of intensity images which is robust, rapid, and free of tuning parameters. The method, however, requires the input of a number of seeds, either individual pixels or regions, which will control the formation of regions into which the image will be segmented. In this correspondence, we present the algorithm, discuss briefly its properties, and suggest two ways in which it can be employed, namely, by using manual seed selection or by automated procedures.
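As an illustration only (not the authors' implementation), the following Python sketch grows labelled regions from seed pixels on a 2-D grayscale image: unassigned neighbours of the growing regions are repeatedly absorbed by the region whose current mean intensity they match best. For simplicity the priority of a pixel is frozen at the moment it is queued, whereas the original algorithm re-evaluates candidates against the evolving region means.

```python
import heapq
import numpy as np

def seeded_region_growing(image, seeds):
    """Grow labelled regions from seeds on a 2-D grayscale image (sketch).

    image : 2-D float array of intensities
    seeds : dict mapping an integer label (>0) to a list of (row, col) pixels
    Returns an integer label map covering the whole image.
    """
    labels = np.zeros(image.shape, dtype=int)
    sums = {lab: 0.0 for lab in seeds}      # running intensity sum per region
    counts = {lab: 0 for lab in seeds}      # pixel count per region
    heap = []                               # (priority, row, col, label)

    def push_neighbours(r, c, lab):
        mean = sums[lab] / counts[lab]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                    and labels[rr, cc] == 0):
                # priority = distance of the pixel to the region's current mean
                heapq.heappush(heap, (abs(image[rr, cc] - mean), rr, cc, lab))

    for lab, points in seeds.items():
        for r, c in points:
            labels[r, c] = lab
            sums[lab] += image[r, c]
            counts[lab] += 1
    for lab, points in seeds.items():
        for r, c in points:
            push_neighbours(r, c, lab)

    while heap:
        _, r, c, lab = heapq.heappop(heap)
        if labels[r, c] != 0:
            continue                        # already absorbed by some region
        labels[r, c] = lab
        sums[lab] += image[r, c]
        counts[lab] += 1
        push_neighbours(r, c, lab)
    return labels
```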
---
paper_title: Segmentation of medical images using adaptive region growing
paper_content:
Interaction increases the flexibility of segmentation, but it leads to undesirable behavior of an algorithm if the knowledge being requested is inappropriate. In region growing, this is the case for defining the homogeneity criterion, as its specification depends also on image formation properties that are not known to the user. We developed a region growing algorithm that learns its homogeneity criterion automatically from characteristics of the region to be segmented. The method is based on a model that describes homogeneity and simple shape properties of the region. Parameters of the homogeneity criterion are estimated from sample locations in the region. These locations are selected sequentially in a random walk starting at the seed point, and the homogeneity criterion is updated continuously. The method was tested for segmentation on test images and on structures in CT images. We found the method to work reliably if the model assumptions on homogeneity and region characteristics are true. Furthermore, the model is simple but robust, thus allowing for a certain degree of deviation from model constraints and still delivering the expected segmentation result. This approach was extended to a fully automatic and complete segmentation method by using the pixels with the smallest gradient length in the not-yet-segmented image region as seed points.
---
paper_title: Medical Image Segmentation Using K-Means Clustering and Improved Watershed Algorithm
paper_content:
We propose a methodology that incorporates k-means and improved watershed segmentation algorithm for medical image segmentation. The use of the conventional watershed algorithm for medical image analysis is widespread because of its advantages, such as always being able to produce a complete division of the image. However, its drawbacks include over-segmentation and sensitivity to false edges. We address the drawbacks of the conventional watershed algorithm when it is applied to medical images by using k-means clustering to produce a primary segmentation of the image before we apply our improved watershed segmentation algorithm to it. The k-means clustering is an unsupervised learning algorithm, while the improved watershed segmentation algorithm makes use of automated thresholding on the gradient magnitude map and post-segmentation merging on the initial partitions to reduce the number of false edges and over-segmentation. By comparing the number of partitions in the segmentation maps of 50 images, we showed that our proposed methodology produced segmentation maps which have 92% fewer partitions than the segmentation maps produced by the conventional watershed algorithm
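A hedged sketch of the pipeline described above is given below, using scikit-learn and a recent scikit-image; the automated gradient thresholding and post-segmentation merging steps of the improved algorithm are omitted, and the function and parameter names are our own.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.cluster import KMeans
from skimage.filters import sobel
from skimage.segmentation import watershed

def kmeans_watershed(image, n_clusters=4, erosion_iterations=3):
    """Coarse k-means labelling followed by marker-based watershed (sketch)."""
    # 1) primary segmentation: cluster the pixel intensities with k-means
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    primary = km.fit_predict(image.reshape(-1, 1)).reshape(image.shape)

    # 2) markers: eroded connected components of each cluster, so the
    #    watershed still has room to grow towards the true boundaries
    #    (very small components may disappear under the erosion)
    markers = np.zeros(image.shape, dtype=int)
    offset = 0
    for k in range(n_clusters):
        interior = ndi.binary_erosion(primary == k, iterations=erosion_iterations)
        comp, n = ndi.label(interior)
        markers[comp > 0] = comp[comp > 0] + offset
        offset += n

    # 3) watershed on the gradient magnitude, guided by the markers
    gradient = sobel(image)
    return watershed(gradient, markers)
```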
---
paper_title: Neural network based segmentation of magnetic resonance images of the brain
paper_content:
Presents a study investigating the potential of artificial neural networks (ANN's) for the classification and segmentation of magnetic resonance (MR) images of the human brain. In this study, the authors present the application of a learning vector quantization (LVQ) ANN for the multispectral supervised classification of MR images. The authors have modified the LVQ for better and more accurate classification. They have compared the results using LVQ ANN versus back-propagation ANN. This comparison shows that, unlike back-propagation ANN, the authors' method is insensitive to the gray-level variation of MR images between different slices. It shows that tissue segmentation using LVQ ANN also performs better and faster than that using back-propagation ANN.
---
paper_title: On Active Contour Models and Balloons
paper_content:
Abstract The use of energy-minimizing curves, known as “snakes,” to extract features of interest in images has been introduced by Kass, Witkin & Terzopoulos ( Int. J. Comput. Vision 1, 1987, 321–331). We present a model of deformation which solves some of the problems encountered with the original method. The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections.
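The modification introduced by the balloon model is usually summarized by the following external force (our transcription of the commonly cited form):

```latex
F \;=\; k_1\,\mathbf{n}(s) \;-\; k\,\frac{\nabla P}{\lVert \nabla P \rVert},
\qquad P(x,y) = -\,\lvert \nabla I(x,y) \rvert^{2}
```

where n(s) is the unit normal to the curve at v(s); the first term inflates (or, for negative k_1, deflates) the contour, and normalizing the image force gives more stable behaviour on weak edges than the raw potential gradient.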
---
paper_title: Segmentation of dynamic PET images using cluster analysis
paper_content:
Quantitative positron emission tomography (PET) studies provide in vivo measurements of dynamic physiological and biochemical processes in humans. A limitation of PET is an inability to provide precise anatomic localization due to relatively poor spatial resolution when compared to magnetic resonance (MR) imaging. Manual placement of region-of-interest (ROI) is commonly used in clinical and research settings in analysis of PET datasets. However, this approach is operator dependent and time-consuming. A semi- or fully-automated ROI delineation (or segmentation) method offers advantages by reducing operator error/subjectivity and thereby improving reproducibility. In this work, we describe an approach to automatically segment dynamic PET images using cluster analysis and we validate our approach with a simulated phantom study and assess its performance with real dynamic PET data. Our preliminary results suggest that cluster analysis can automatically segment tissues in dynamic PET studies and has the potential to replace manual ROI delineation for some applications.
---
paper_title: The application of competitive Hopfield neural network to medical image segmentation
paper_content:
In this paper, a parallel and unsupervised approach using the competitive Hopfield neural network (CHNN) is proposed for medical image segmentation. It is a kind of Hopfield network which incorporates the winner-takes-all (WTA) learning mechanism. The image segmentation is conceptually formulated as a problem of pixel clustering based upon the global information of the gray level distribution. Thus, the energy function for minimization is defined as the mean of the squared distance measures of the gray levels within each class. The proposed network avoids the onerous procedure of determining values for the weighting factors in the energy function. In addition, its training scheme enables the network to learn rapidly and effectively. For an image of n gray levels and c interesting objects, the proposed CHNN would consist of n by c neurons and be independent of the image size. In both simulation studies and practical medical image segmentation, the CHNN method shows promising results in comparison with two well-known methods: the hard and the fuzzy c-means (FCM) methods.
---
paper_title: Shape deformation: SVM regression and application to medical image segmentation
paper_content:
This paper presents a novel landmark-based shape deformation method. This method effectively solves two problems inherent in landmark-based shape deformation: (a) identification of landmark points from a given input image, and (b) regularized deformation of the shape of an object defined in a template. The second problem is solved using a new constrained support vector machine (SVM) regression technique, in which a thin-plate kernel is utilized to provide non-rigid shape deformations. This method offers several advantages over existing landmark-based methods. First, it has a unique capability to detect and use multiple candidate landmark points in an input image to improve landmark detection. Second, it can handle the case of missing landmarks, which often arises in dealing with occluded images. We have applied the proposed method to extract the scalp contours from brain cryosection images with very encouraging results.
---
paper_title: Atlas Guided Identification of Brain Structures by Combining 3D Segmentation and SVM Classification
paper_content:
This study presents a novel automatic approach for the identification of anatomical brain structures in magnetic resonance images (MRI). The method combines a fast multiscale multi-channel three dimensional (3D) segmentation algorithm providing a rich feature vocabulary together with a support vector machine (SVM) based classifier. The segmentation produces a full hierarchy of segments, expressed by an irregular pyramid with only linear time complexity. The pyramid provides a rich, adaptive representation of the image, enabling detection of various anatomical structures at different scales. A key aspect of the approach is the thorough set of multiscale measures employed throughout the segmentation process which are also provided at its end for clinical analysis. These features include in particular the prior probability knowledge of anatomic structures due to the use of an MRI probabilistic atlas. An SVM classifier is trained based on this set of features to identify the brain structures. We validated the approach using a gold standard real brain MRI data set. Comparison of the results with existing algorithms displays the promise of our approach.
---
paper_title: Modified fuzzy c-mean in medical image segmentation
paper_content:
This paper describes the application of fuzzy set theory in medical imaging, namely the segmentation of brain images. We propose a fully automatic technique to obtain image clusters. A modified fuzzy c-means (FCM) classification algorithm is used to provide a fuzzy partition. Our new method, inspired by the Markov Random Field (MRF), is less sensitive to noise as it filters the image while clustering it, and the filter parameters are enhanced in each iteration by the clustering process. We applied the new method to a noisy CT scan and to a single-channel MRI scan. We recommend using a methodology of over-segmentation for the textured MRI scan and a user-guided interface to obtain the final clusters. One of the applications of this technique is TBI recovery prediction, in which it is important to consider the partial volume. It is shown that the system stabilizes after a number of iterations with the membership value of the region contours reflecting the partial volume value. The final stage of the process is devoted to decision making or the defuzzification process.
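For context, the standard FCM objective that such modified algorithms start from is reproduced below; the modification described above adds MRF-inspired spatial filtering on top of this minimization.

```latex
J_m(U, V) \;=\; \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^{\,m}\, \lVert x_k - v_i \rVert^{2},
\qquad \sum_{i=1}^{c} u_{ik} = 1 \;\;\forall k, \quad m > 1
```

with u_ik the fuzzy membership of voxel x_k in cluster i, v_i the cluster centroids and m the fuzziness exponent.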
---
paper_title: Atlas-guided segmentation of brain images via optimizing neural networks
paper_content:
Automated segmentation of magnetic resonance (MR) brain imagery into anatomical regions is a complex task that appears to need contextual guidance in order to overcome problems associated with noise, missing data, and the overlap of features associated with different anatomical regions. In this work, the contextual information is provided in the form of an anatomical brain atlas. The atlas provides defaults that supplement the low-level MR image data and guide its segmentation. The matching of atlas to image data is represented by a set of deformable contours that seek compromise fits between expected model information and image data. The dynamics that deform the contours solves both a correspondence problem (which element of the deformable contour corresponds to which elements of the atlas and image data?) and a fitting problem (what is the optimal contour that corresponds to a compromise of atlas and image data while maintaining smoothness?). Some initial results on simple 2D contours are shown.
---
paper_title: A dynamic finite element surface model for segmentation and tracking in multidimensional medical images with application to cardiac 4D image analysis
paper_content:
This paper presents a physics-based approach to anatomical surface segmentation, reconstruction, and tracking in multidimensional medical images. The approach makes use of a dynamic "balloon" model--a spherical thin-plate under tension surface spline which deforms elastically to fit the image data. The fitting process is mediated by internal forces stemming from the elastic properties of the spline and external forces which are produced from the data. The forces interact in accordance with Lagrangian equations of motion that adjust the model's deformational degrees of freedom to fit the data. We employ the finite element method to represent the continuous surface in the form of weighted sums of local polynomial basis functions. We use a quintic triangular finite element whose nodal variables include positions as well as the first and second partial derivatives of the surface. We describe a system, implemented on a high performance graphics workstation, which applies the model fitting technique to the segmentation of the cardiac LV surface in volume (3D) CT images and LV tracking in dynamic volume (4D) CT images to estimate its nonrigid motion over the cardiac cycle. The system features a graphical user interface which minimizes error by affording specialist users interactive control over the dynamic model fitting process.
---
paper_title: Topology-independent shape modeling scheme
paper_content:
Shape modeling is an important constituent of computer vision as well as computer graphics research. Shape models aid the tasks of object representation and recognition. This dissertation presents a new approach to shape modeling which retains the most attractive features of existing methods, and overcomes their prominent limitations. Our technique can be applied to model arbitrarily complex shapes, which include shapes with significant protrusions, and to situations where no a priori assumption about the object's topology is made. A single instance of our model, when presented with an image having more than one object of interest, has the ability to split freely to represent each object. This method is based on the ideas developed by Osher and Sethian to model propagating solid/liquid interfaces with curvature-dependent speeds. The interface (front) is a closed, nonintersecting, hypersurface flowing along its gradient field with constant speed or a speed that depends on the curvature. It is moved by solving a "Hamilton-Jacobi" type equation written for a function in which the interface is a particular level set. A speed term synthesized from the image is used to stop the interface in the vicinity of the object boundaries. The resulting equation of motion is solved by employing entropy-satisfying upwind finite difference schemes. We also introduce a new algorithm for rapid advancement of the front using what we call a narrow-band updation scheme. This leads to significant improvement in the time complexity of the shape recovery procedure in 2D. An added advantage of our modeling scheme is that it can easily be extended to any number of space dimensions. The efficacy of the scheme is demonstrated with numerical experiments on low contrast medical images. We also demonstrate the recovery of 3D shapes.
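The evolution equation at the heart of this family of methods can be stated compactly (standard level-set form, quoted here for orientation rather than verbatim from the dissertation):

```latex
\frac{\partial \phi}{\partial t} + F\,\lvert \nabla \phi \rvert = 0,
\qquad F = g(x, y)\,\big(F_0 - \varepsilon\,\kappa\big),
\qquad g(x, y) = \frac{1}{1 + \lvert \nabla (G_\sigma * I)(x, y) \rvert}
```

where the front is the zero level set of φ, κ is its curvature, F_0 a constant advection speed and g an image-derived stopping term that approaches zero near strong edges (other forms of g are used in the literature).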
---
paper_title: Snakes, Shapes, and Gradient Vector Flow
paper_content:
Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to boundary concavities, however, have limited their utility. This paper presents a new external force for active contours, largely solving both problems. This external force, which we call gradient vector flow (GVF), is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. It differs fundamentally from traditional snake external forces in that it cannot be written as the negative gradient of a potential function, and the corresponding snake is formulated directly from a force balance condition rather than a variational formulation. Using several two-dimensional (2-D) examples and one three-dimensional (3-D) example, we show that GVF has a large capture range and is able to move snakes into boundary concavities.
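The GVF field w(x, y) = (u(x, y), v(x, y)) referred to above is defined as the minimizer of the following energy, where f is an edge map of the image and μ a regularization parameter:

```latex
\mathcal{E}(\mathbf{w}) \;=\; \iint \mu\left(u_x^2 + u_y^2 + v_x^2 + v_y^2\right)
\;+\; \lvert \nabla f \rvert^{2}\, \lvert \mathbf{w} - \nabla f \rvert^{2} \; dx\, dy
```

so the field stays close to ∇f where edges are strong and diffuses smoothly into homogeneous regions, which is what gives the snake its large capture range.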
---
paper_title: Fronts propagating with curvature dependent speed: algorithms based on Hamilton–Jacobi formulations
paper_content:
We devise new numerical algorithms, called PSC algorithms, for following fronts propagating with curvature-dependent speed. The speed may be an arbitrary function of curvature, and the front also can be passively advected by an underlying flow. These algorithms approximate the equations of motion, which resemble Hamilton-Jacobi equations with parabolic right-hand sides, by using techniques from hyperbolic conservation laws. Non-oscillatory schemes of various orders of accuracy are used to solve the equations, providing methods that accurately capture the formation of sharp gradients and cusps in the moving fronts. The algorithms handle topological merging and breaking naturally, work in any number of space dimensions, and do not require that the moving surface be written as a function. The methods can be also used for more general Hamilton-Jacobi-type problems. We demonstrate our algorithms by computing the solution to a variety of surface motion problems.
---
paper_title: Snakes: Active contour models
paper_content:
A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest.
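The energy functional minimized by the snake, in its usual form, is:

```latex
E_{\text{snake}} \;=\; \int_{0}^{1}
\Big[\tfrac{1}{2}\big(\alpha\,\lvert \mathbf{v}'(s)\rvert^{2} + \beta\,\lvert \mathbf{v}''(s)\rvert^{2}\big)
+ E_{\text{image}}\big(\mathbf{v}(s)\big) + E_{\text{con}}\big(\mathbf{v}(s)\big)\Big]\, ds
```

where v(s) = (x(s), y(s)) parameterizes the contour, α and β weight its elasticity and rigidity, E_image attracts it to lines, edges and terminations, and E_con carries the external (user-imposed) constraint forces.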
---
paper_title: Geodesic Active Contours
paper_content:
A novel scheme for the detection of object boundaries is presented. The technique is based on active contours deforming according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics or minimal distance curves. The minimal distance curve lies in a Riemannian space whose metric is defined by the image content. This geodesic approach for object segmentation makes it possible to connect classical "snakes" based on energy minimization and geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved, as shown by a number of examples. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well.
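In this formulation, boundary detection amounts to finding a geodesic in an image-induced metric; the functional minimized is, in its commonly cited form:

```latex
L_R(C) \;=\; \int_{0}^{1} g\big(\lvert \nabla I(C(q)) \rvert\big)\,\lvert C'(q) \rvert \, dq,
\qquad g(r) = \frac{1}{1 + r^{2}} \;\;\text{(one common choice)}
```

so that, in the weighted metric, the shortest curve is the one lying along strong image edges; the curve evolution that minimizes L_R is then implemented with level-set techniques.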
---
| Title: A REVIEW ON THE CURRENT SEGMENTATION ALGORITHMS FOR MEDICAL IMAGES
Section 1: INTRODUCTION
Description 1: Provide an overview of imaging techniques like CT and MRI, the significance of segmentation in medical imaging, challenges, and an outline of the paper structure.
Section 2: Algorithms Based on Threshold
Description 2: Discuss the main ideas and types of threshold-based segmentation algorithms, including edge-based, region-based, and hybrid algorithms, along with their strengths and weaknesses.
Section 3: Algorithms Based on Pattern Recognition Techniques
Description 3: Review supervised and unsupervised classification algorithms used in medical image segmentation, such as artificial neural networks, support vector machines, and clustering algorithms, and highlight their applications and limitations.
Section 4: Algorithms Based on Deformable Models
Description 4: Explore parametric and geometric deformable models, their underlying methodologies, advantages, and applications in complex medical image segmentations.
Section 5: DISCUSSION
Description 5: Summarize the comparative advantages and disadvantages of the different categories of segmentation algorithms, discussing their potential applications and constraints in medical imaging.
Section 6: CONCLUSION
Description 6: Conclude with a summary of the main points discussed in the paper, the implications of combining multiple segmentation techniques, and considerations for designing effective segmentation algorithms based on specific medical imaging tasks. |
Alternative communication systems for people with severe motor disabilities: a survey | 16 | ---
paper_title: Conception and Experimentation of a Communication Device with Adaptive Scanning
paper_content:
For some people with motor disabilities and speech disorders, the only way to communicate and to have some control over their environment is through the use of a controlled scanning system operated by a single switch. The main problem with these systems is that the communication process tends to be exceedingly slow, since the system must scan through the available choices one at a time until the desired message is reached. One way of raising the speed of message selection is to optimize the elementary scanning delay in real time so that it allows the user to make selections as quickly as possible without making too many errors. With this objective in mind, this article presents a method for optimizing the scanning delay, which is based on an analysis of the data recorded in “log files” while applying the EDiTH system [Digital Teleaction Environment for People with Disabilities]. This analysis makes it possible to develop a human-machine interaction model specific to the study, and then to establish an adaptive algorithm for the calculation of the scanning delay. The results obtained with imposed scenarios and then in ecological situations provide confirmation that our algorithms are effective in dynamically adapting a scan speed. The main advantage offered by the procedure proposed is that it works on timing information alone and thus does not require any knowledge of the scanning device itself. This allows it to work with any scanning device.
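The adaptive algorithm itself is built on a human-machine interaction model that is not reproduced here. Purely as an illustration of the idea of adapting the delay from timing information alone, the sketch below adjusts a scan delay from recent reaction times and the observed error rate; every parameter name and the update rule are our own assumptions, not the EDiTH algorithm.

```python
def update_scan_delay(delay, reaction_times, errors, selections,
                      margin=1.3, error_ceiling=0.1,
                      min_delay=0.4, max_delay=3.0):
    """Illustrative adaptive scan-delay update (hypothetical rule).

    delay          : current elementary scanning delay in seconds
    reaction_times : recent user reaction times (seconds) for correct selections
    errors         : number of missed or erroneous selections in the window
    selections     : total selections attempted in the window
    """
    if not reaction_times or selections == 0:
        return delay
    error_rate = errors / selections
    if error_rate > error_ceiling:
        # too many errors: slow the scan down
        new_delay = delay * 1.2
    else:
        # otherwise track the user's speed, keeping a safety margin
        new_delay = margin * (sum(reaction_times) / len(reaction_times))
    return min(max(new_delay, min_delay), max_delay)
```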
---
paper_title: A practical EMG-based human-computer interface for users with motor disabilities
paper_content:
In line with the mission of the Assistive Technology Act of 1998 (ATA), this study proposes an integrated assistive real-time system which "affirms that technology is a valuable tool that can be used to improve the lives of people with disabilities." An assistive technology device is defined by the ATA as "any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve the functional capabilities of individuals with disabilities." The purpose of this study is to design and develop an alternate input device that can be used even by individuals with severe motor disabilities. This real-time system design utilizes electromyographic (EMG) biosignals from cranial muscles and electroencephalographic (EEG) biosignals from the cerebrum's occipital lobe, which are transformed into controls for two-dimensional (2-D) cursor movement, the left-click (Enter) command, and an ON/OFF switch for the cursor-control functions. This HCI system classifies biosignals into "mouse" functions by applying amplitude thresholds and performing power spectral density (PSD) estimations on discrete windows of data. Spectral power summations are aggregated over several frequency bands between 8 and 500 Hz and then compared to produce the correct classification. The result is an affordable DSP-based system that, when combined with an on-screen keyboard, enables the user to fully operate a computer without using any extremities.
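A hedged Python sketch of the kind of band-power comparison described above is given below; the channel names, frequency bands, window length and thresholds are placeholders, not the values used in the cited system, and the mapping to cursor commands is deliberately simplified.

```python
import numpy as np
from scipy.signal import welch

# placeholder frequency bands (Hz) used to summarize each EMG channel
BANDS = [(8, 45), (55, 150), (150, 500)]

def band_powers(window, fs=1000):
    """Summed PSD estimates of one analysis window over the bands above."""
    f, psd = welch(window, fs=fs, nperseg=min(256, len(window)))
    return [psd[(f >= lo) & (f < hi)].sum() for lo, hi in BANDS]

def classify(windows, fs=1000, activity_threshold=1e-10):
    """Map per-channel EMG windows to a cursor command (illustrative only).

    windows : dict of channel name -> 1-D array for the current time window
    """
    power = {ch: sum(band_powers(w, fs)) for ch, w in windows.items()}
    active = {ch: p for ch, p in power.items() if p > activity_threshold}
    if not active:
        return "IDLE"
    winner = max(active, key=active.get)    # most active cranial muscle wins
    command_map = {"left_temple": "LEFT", "right_temple": "RIGHT",
                   "forehead": "CLICK"}     # placeholder channel names
    return command_map.get(winner, "UNKNOWN")
```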
---
paper_title: Geometric optimization of a tongue-operated switch array
paper_content:
An oral tactile interface provides an approach for silent and hands-free communication between humans or between human and machine. An efficient tongue-operated switch array (TOSA), which provides an alternate input or manipulating method for a computer or operative system, is described. A TOSA has been designed and fabricated using printed circuit board technology and a membrane-switching mechanism, and is integrated with a dental palate mold made from a silicone impression material. The TOSA has four switches laid out in cardinal directions with a fifth switch in the center. Human subject experiments have been conducted to evaluate device performance. The characteristics of tactile sensation and mobility of the tongue are used to quantify the performance and optimize the geometric design of the TOSA. Results indicate that operation on all switches are highly accurate and fast enough for use as an alternative input method.
---
paper_title: Enhanced hybrid electromyogram/Eye Gaze Tracking cursor control system for hands-free computer interaction.
paper_content:
This paper outlines the development and initial testing of a new hybrid computer cursor control system based on Eye Gaze Tracking (EGT) and electromyogram (EMG) processing for hands-free control of the computer cursor. The ultimate goal of the system is to provide an efficient computer interaction mechanism for individuals with severe motor disabilities (or specialized operators whose hands are committed to other tasks, such as surgeons, pilots, etc.) The paper emphasizes the enhancements that have been made on different areas of the architecture, with respect to a previous prototype developed by our group, and demonstrates the performance improvement verified for some of the enhancements.
---
paper_title: Human-machine interface for wheelchair control with EMG and its evaluation
paper_content:
The objective of this paper is to develop a powered wheelchair controller based on EMG for users with high-level spinal cord injury. EMG is measured very naturally when the user indicates a certain direction, and the force information, which is used to set the speed of the wheelchair, is easily extracted from the EMG. Furthermore, emergency situations can be detected from EMG with relative ease. We classified pre-defined motions such as rest, forward movement, left movement, and right movement using fuzzy min-max neural networks (FMMNN). These classification results and evaluation results with real users show the feasibility of EMG as an input interface for powered wheelchairs.
---
paper_title: EMG signal decomposition: how can it be accomplished and used?
paper_content:
Electromyographic (EMG) signals are composed of the superposition of the activity of individual motor units. Techniques exist for the decomposition of an EMG signal into its constituent components. Following is a review and explanation of the techniques that have been used to decompose EMG signals. Before describing the decomposition techniques, the fundamental composition of EMG signals is explained and after, potential sources of information from and various uses of decomposed EMG signals are described.
---
paper_title: Myoelectric Signals for Multimodal Speech Recognition
paper_content:
A Coupled Hidden Markov Model (CHMM) is proposed in this paper to perform multimodal speech recognition using myoelectric signals (MES) from the muscles of vocal articulation. MES are immune to noise, and words that are acoustically similar manifest distinctly in MES. Hence, they would effectively complement the acoustic data in a multimodal speech recognition system. Research in Audio-Visual Speech Recognition has shown that CHMMs model the asynchrony between different data streams effectively. Hence, we propose a CHMM for multimodal speech recognition using audio and MES as the two data streams. Our experiments indicate that the multimodal CHMM system significantly outperforms the audio-only system at different SNRs. We have also provided a comparison between different features for MES and have found that wavelet features provide the best results.
---
paper_title: Multimodal neuroelectric interface development
paper_content:
We are developing electromyographic and electroencephalographic methods, which draw control signals for human-computer interfaces from the human nervous system. We have made progress in four areas: 1) real-time pattern recognition algorithms for decoding sequences of forearm muscle activity associated with control gestures; 2) signal-processing strategies for computer interfaces using electroencephalogram (EEG) signals; 3) a flexible computation framework for neuroelectric interface research; and 4) noncontact sensors, which measure electromyogram or EEG signals without resistive contact to the body.
---
paper_title: Neural Control of the Computer Cursor Based on Spectral Analysis of the Electromyogram
paper_content:
A classification algorithm is developed to translate facial movements into five cursor actions: i) left, ii) right, iii) up, iv) down, and v) left-click. The algorithm utilizes the unique spectral characteristics exhibited by electromyogram (EMG) signals obtained from different muscles in the face to assist in the classification process. A previous three-electrode, EMG-based system was utilized to perform a similar translation of facial movements into cursor actions. This system also made use of spectral analysis to classify muscle activity. It was found that this system does not always discriminate between the EMG activity assigned to up and down cursor actions efficiently. To remedy this matter, a fourth electrode was added and a new classification algorithm was devised. This paper details the classification algorithm utilized with the four-electrode system. It also compares the effectiveness of the four-electrode system to that of the three-electrode system in classifying EMG activity into cursor actions. This was done through the use of Matlab simulations. It will be shown that the new four-electrode system produces significant improvements in classification performance.
---
paper_title: Can electromyography objectively detect voluntary movement in disorders of consciousness?
paper_content:
Determining conscious processing in unresponsive patients relies on subjective behavioural assessment. Using data from hand electromyography, the authors studied the occurrence of subthreshold muscle activity in response to verbal command, as an objective indicator of awareness in 10 disorders of consciousness patients. One out of eight vegetative state patients and both minimally conscious patients (n = 2) demonstrated an increased electromyography signal specifically linked to command. These findings suggest electromyography could be used to assess awareness objectively in pathologies of consciousness.
---
paper_title: Adaptive EMG-driven communication for the disabled
paper_content:
We suggest a communication method, for severely disabled persons who have lost both mobility and speech and for their families, using Morse code derived from masseter muscle EMG. We developed a portable system that comprises an EMG amplifier, A/D conversion, a text-to-speech module, a remote control module and serial communication to the host system. After training, the patient can produce speech by composing Morse code through chin movements. Calibration and remote control modes are supported, as is an adaptive encoding method that accounts for fatigue.
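To make the decoding step concrete, here is a toy Python sketch that turns detected EMG burst and pause durations into Morse symbols and then text. The duration thresholds are placeholders; the cited system additionally adapts its encoding to compensate for fatigue.

```python
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
         "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
         ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
         "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
         "--..": "Z"}

def decode_bursts(bursts, gaps, dash_threshold=0.35, letter_gap=0.7):
    """Translate EMG burst/gap durations (seconds) into text (illustrative).

    bursts : durations of detected muscle contractions
    gaps   : durations of the silences that follow each burst
    A burst longer than dash_threshold is a dash, otherwise a dot;
    a gap longer than letter_gap closes the current letter.
    """
    text, letter = [], ""
    for burst, gap in zip(bursts, gaps):
        letter += "-" if burst >= dash_threshold else "."
        if gap >= letter_gap:
            text.append(MORSE.get(letter, "?"))
            letter = ""
    if letter:
        text.append(MORSE.get(letter, "?"))
    return "".join(text)
```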
---
paper_title: Communication aid for speech disabled people using Morse codification
paper_content:
A synthetic voice aid for speech-disabled people is presented in this work. The system is based on the Morse code, given through four keys. A PIC was used to detect and decode the Morse code as well as to translate it to ASCII code. The data processing was performed by an MC68HC11 microcontroller. Voice was generated by a V8600 commercial synthesizer. The aid includes a liquid crystal display for text output, which serves as a feedback reference. The display was mounted on a structure strategically located to avoid interference with the frontal visual field.
---
paper_title: Real-time implementation of electromyogram pattern recognition as a control command of man-machine interface.
paper_content:
The purpose of this study was to develop a real-time electromyogram (EMG) discrimination system to provide control commands for man-machine interface applications. A host computer with a plug-in data acquisition and processing board containing a TMS320 C31 floating-point digital signal processor was used to attain real-time EMG classification. Two-channel EMG signals were collected by two pairs of surface electrodes located bilaterally between the sternocleidomastoid and the upper trapezius. Five motions of the neck and shoulders were discriminated for each subject. The zero-crossing rate was employed to detect the onset of muscle contraction. The cepstral coefficients, derived from autoregressive coefficients and estimated by a recursive least squares algorithm, were used as the recognition features. These features were then discriminated using a modified maximum likelihood distance classifier. The total response time of this EMG discrimination system was about 0.17 s. Four able-bodied and two C5/6 quadriplegic subjects took part in the experiment, and achieved a 95% mean recognition rate in discrimination between the five specific motions. The response time and the reliability of recognition indicate that this system has the potential to discriminate body motions for man-machine interface applications.
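The feature computation mentioned above relies on the standard recursion from autoregressive (linear prediction) coefficients to cepstral coefficients. With the AR model written as x(t) = a_1 x(t-1) + ... + a_p x(t-p) + e(t), one common form of the recursion is given below; sign conventions differ between texts, so this is quoted only as the generic relation, not necessarily the exact variant used in the cited system:

```latex
c_1 = a_1, \qquad
c_n = a_n + \sum_{k=1}^{n-1} \frac{k}{n}\, c_k\, a_{n-k}, \qquad 1 < n \le p
```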
---
paper_title: A method for evaluating head-controlled computer input devices using Fitts law
paper_content:
The discrete movement task employed in this study consisted of moving a cursor from the center of a computer display screen to circular targets located 24.4 and 110.9 mm in eight radial directions. The target diameters were 2.7, 8.1, and 24.2 mm. Performance measures included movement time, cursor path distance, and root-mean-square cursor deviation. Ten subjects with no movement disabilities were studied using a conventional mouse and a lightweight ultrasonic head-controlled computer input pointing device
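For readers unfamiliar with the measure, Fitts' law models the movement time MT to a target of width W at distance (amplitude) D as follows; the regression coefficients a and b obtained in the study are not reproduced here:

```latex
MT = a + b\,\log_2\!\left(\frac{2D}{W}\right)
```

where the logarithmic term is the index of difficulty (the Shannon variant log2(D/W + 1) is also widely used), allowing throughput to be compared across input devices.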
---
paper_title: Development of a hybrid hands-off human computer interface based on electromyogram signals and eye-gaze tracking
paper_content:
A hybrid hands-off human-computer interface that uses infrared video eye gaze tracking (EGT) and electromyogram (EMG) signals is introduced. This system combines the advantages of both sub-systems, providing quick cursor displacement in long excursions and steady, accurate movement in small position adjustments. The hybrid system also provides a reliable clicking mechanism. The evaluation protocol used to test the system is described, and the results for the hybrid system, an EMG-only interface, and the standard hand-held mouse are described and compared. These results show that the hybrid system is, on average, faster than the EMG-only system by a factor of 2 or more.
---
paper_title: Evaluation of Head Orientation and Neck Muscle EMG Signals as Command Inputs to a Human–Computer Interface for Individuals With High Tetraplegia
paper_content:
We investigated the performance of three user interfaces for restoration of cursor control in individuals with tetraplegia: head orientation, electromyography (EMG) from face and neck muscles, and a standard computer mouse (for comparison). Subjects engaged in a 2-D, center-out, Fitts' Law style task and performance was evaluated using several measures. Overall, head-orientation-commanded motion resembled mouse-commanded cursor motion (smooth, accurate movements to all targets), although with somewhat lower performance. EMG-commanded movements exhibited a higher average speed, but other performance measures were lower, particularly for diagonal targets. Compared to head orientation, EMG as a cursor command source was less accurate, was more affected by target direction and was more prone to overshoot the target. In particular, EMG commands for diagonal targets were more sequential, moving first in one direction and then the other rather than moving simultaneously in the two directions. While the relative performance of each user interface differs, each has specific advantages depending on the application.
---
paper_title: Application of facial electromyography in computer mouse access for people with disabilities.
paper_content:
Purpose. This study develops a new facial EMG human-computer interface for people with disabilities for controlling the movement of the cursor on a computer screen. Method. We access the computer cursor according to different facial muscle activity patterns. In order to exactly detect the muscle activity threshold, this study adopts continuous wavelet transformation to estimate the single motor unit action potentials dynamically. Result. The experiment indicates that the accuracy of using the facial mouse is greater than 80%, demonstrating the feasibility of the proposed system. Moreover, the subject can improve manipulation performance with repeated training. Conclusion. Compared with previous works, the proposed system achieves complete cursor function and provides an inexpensive solution. Although there are still some drawbacks in the facial EMG-based human-computer interface, the facial mouse can provide an alternative among other expensive and complicated assistive technologies.
---
paper_title: A novel position sensors-controlled computer mouse for the disabled
paper_content:
This research reports on the development of a headset-type, position-sensor-controlled computer mouse for the disabled. This system may serve to assist those who suffer from spinal cord injuries or other handicaps to operate a computer. The system is composed of three major components: (A) the position sensor module; (B) the signal-processing module; and (C) a main controller, the Intel-8951 microprocessor. The design concept is that a position sensor module fastened to the headset allows convenient control of computer mouse operation. Through this system, disabled users can perform some types of work, such as operating a computer. Such increased employment opportunities would help them live independently.
---
paper_title: New classification algorithm for electromyography-based computer cursor control system
paper_content:
At present, a three-input electromyography (EMG) system has been created to provide real-time, hands-free cursor control. The system uses the real-time spectral analysis of three EMG signals to produce the following five cursor actions: i) LEFT, ii) RIGHT, iii) UP, iv) DOWN, v) LEFT-CLICK. The three EMG signals are obtained from two surface electrodes placed on the left and right temples of the head and one electrode placed in the forehead region. The present system for translating EMG activity into cursor actions does not always discriminate between up and down EMG activity efficiently. To resolve this problem it was proposed that the three-electrode system be converted into a four-electrode system, using two electrodes in the forehead of the user, instead of one. This paper compares the effectiveness of the four-electrode system to that of the three-electrode system in classifying EMG activity into cursor actions through the use of Matlab simulations. It will be shown that the new four-electrode system produces significant improvements in classification performance.
---
paper_title: Face direction-based human-computer interface using image observation and EMG signal for the disabled
paper_content:
Individuals paralyzed by high-level spinal cord injury need a human-computer interface (HCI) to express their intentions. In this paper, we propose a novel method to estimate the face direction angle using both image observation and electromyogram (EMG) signals from the neck muscles. The EMG signal from the neck muscles is linearly related to the face direction angle, but it is undetectable at relatively small angles. Using image observation, small face angles are estimated from the geometric relationship between the pupils and the face. If both measurements, from the EMG signal and the image observation, are available, the face direction angle is estimated accurately by linear interpolation. We implemented a camera viewing-direction control system using the face direction interface, and showed that the proposed method can be used to construct an HCI system for people with severe motor disabilities.
---
paper_title: Design of the human/computer interface for human with disability using myoelectric signal control
paper_content:
The purpose of this study is to develop a human-computer interface (HCI) application based on a real-time EMG discrimination system. A personal computer with a plug-in data acquisition and processing board containing a floating-point digital signal processor is used to attain real-time EMG classification. The integrated EMG is employed to detect the onset of muscle contraction. The cepstral coefficients, derived from AR coefficients and estimated by a recursive least-squares algorithm, are used as the recognition features. These features are then discriminated using a modified maximum-likelihood distance classifier. The identified commands control the mouse cursor. The system is fully compatible with a Microsoft serial mouse; it can move the cursor in four directions and double-click icons in GUI operating systems.
---
paper_title: Neural Control of the Computer Cursor Based on Spectral Analysis of the Electromyogram
paper_content:
A classification algorithm is developed to translate facial movements into five cursor actions: i) left, ii) right, iii) up, iv) down, and v) left-click. The algorithm utilizes the unique spectral characteristics exhibited by electromyogram (EMG) signals obtained from different muscles in the face to assist in the classification process. A previous three-electrode, EMG-based system was utilized to perform a similar translation of facial movements into cursor actions. This system also made use of spectral analysis to classify muscle activity. It was found that this system does not always discriminate between the EMG activity assigned to up and down cursor actions efficiently. To remedy this matter, a fourth electrode was added and a new classification algorithm was devised. This paper details the classification algorithm utilized with the four-electrode system. It also compares the effectiveness of the four-electrode system to that of the three-electrode system in classifying EMG activity into cursor actions. This was done through the use of Matlab simulations. It will be shown that the new four-electrode system produces significant improvements in classification performance.
---
paper_title: Two-Dimensional Cursor-to-Target Control From Single Muscle Site sEMG Signals
paper_content:
In this study, human subjects achieve two-dimensional cursor-to-target control using the surface electromyogram (sEMG) from a single muscle site. The X-coordinate and the Y-coordinate of the computer cursor were simultaneously controlled by the active manipulation of power within two frequency bands of the sEMG power-spectrum. Success of the method depends on the sEMG frequency bandwidths and their midpoints. We acquired the sEMG signals at a single facial muscle site of four able-bodied subjects and trained them, by visual feedback, to control the position of the cursor. After training, all four subjects were able to simultaneously control the X and Y positions of the cursor to accurately and consistently hit three widely-separated targets on a computer screen. This technology has potential application in a wide variety of human-machine interfaces to assistive technologies.
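As an illustration of this control scheme, the sketch below maps the power in two assumed frequency bands of a single sEMG channel to normalized X and Y cursor coordinates; band limits, gains and screen size are placeholders rather than the calibrated values used with the trained subjects.

```python
# Illustrative sketch: two band powers of one sEMG channel -> X/Y cursor position.
import numpy as np
from scipy.signal import welch

FS = 2000                                # assumed sampling rate (Hz)
X_BAND, Y_BAND = (40, 90), (120, 170)    # hypothetical control bands (Hz)
SCREEN_W, SCREEN_H = 1920, 1080
X_GAIN, Y_GAIN = 5e9, 5e9                # assumed scaling of band power to [0, 1]

def band_power(window, lo, hi):
    freqs, psd = welch(window, fs=FS, nperseg=256)
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def cursor_position(window):
    """Map one analysis window to pixel coordinates (origin at top-left)."""
    x = np.clip(X_GAIN * band_power(window, *X_BAND), 0.0, 1.0)
    y = np.clip(Y_GAIN * band_power(window, *Y_BAND), 0.0, 1.0)
    return int(x * (SCREEN_W - 1)), int(y * (SCREEN_H - 1))
```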
---
paper_title: Gazing and frowning as a new human-computer interaction technique
paper_content:
The present aim was to study a new technique for human-computer interaction. It combined the use of two modalities, voluntary gaze direction and voluntary facial muscle activation, for object pointing and selection. Fourteen subjects performed a series of pointing tasks with the new technique and with a mouse. At short distances the mouse was significantly faster than the new technique. However, there were no statistically significant differences between the techniques at medium and long distances. Fitts' law analyses were performed both by using only error-free trials and by using data that included error trials (i.e., effective target width). In all cases both techniques seemed to follow Fitts' law, although for the new technique the effective target width correlation coefficient was smaller (R = 0.776) than for the mouse (R = 0.991). The regression slopes suggested that at very long distances (i.e., beyond 800 pixels) the new technique might be faster than the mouse. The new technique showed promising results after only a short practice period, and in the future it could be useful especially for physically challenged persons.
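A Fitts' law analysis of this kind takes only a few lines once movement times, distances and endpoint scatter are tabulated. The sketch below assumes the Shannon formulation ID = log2(D/We + 1) with effective width We = 4.133 x SD of the endpoints; the numbers are fabricated placeholders, not data from the study.

```python
# Hedged sketch of a Fitts' law regression with effective target width.
import numpy as np

D = np.array([200, 400, 800, 200, 400, 800], dtype=float)       # distances (px)
endpoint_sd = np.array([12, 14, 18, 11, 15, 19], dtype=float)   # endpoint SD (px)
MT = np.array([0.55, 0.72, 0.95, 0.53, 0.75, 0.98])             # movement time (s)

We = 4.133 * endpoint_sd          # effective target width
ID = np.log2(D / We + 1.0)        # index of difficulty (bits)

b, a = np.polyfit(ID, MT, 1)      # least-squares fit MT = a + b * ID
r = np.corrcoef(ID, MT)[0, 1]
print(f"a = {a:.3f} s, b = {b:.3f} s/bit, throughput ~ {1 / b:.2f} bits/s, R = {r:.3f}")
```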
---
paper_title: EMG-Based Speech Recognition Using Hidden Markov Models With Global Control Variables
paper_content:
It is well known that a strong relationship exists between human voices and the movement of articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme which uses solely surface electromyogram (EMG) signals. The sequence of EMG signals for each word is modelled by a hidden Markov model (HMM) framework. The main objective of the work involves building a model for state observation density when multichannel observation sequences are given. The proposed model reflects the dependencies between each of the EMG signals, which are described by introducing a global control variable. We also develop an efficient model training method, based on a maximum likelihood criterion. In a preliminary study, 60 isolated words were used as recognition variables. EMG signals were acquired from three articulatory facial muscles. The findings indicate that such a system may have the capacity to recognize speech signals with an accuracy of up to 87.07%, which is superior to the independent probabilistic model.
---
paper_title: Clinical ventilator adjustments that improve speech.
paper_content:
Study objectives: We sought to improve speech in tracheostomized individuals receiving positive-pressure ventilation. Such individuals often speak with short phrases and long pauses, and have problems with loudness and voice quality. Subjects: We studied 15 adults with spinal cord injuries or neuromuscular diseases receiving long-term ventilation. Interventions: The ventilator was adjusted using lengthened inspiratory time (TI), positive end-expiratory pressure (PEEP), and combinations thereof. Results: When TI was lengthened (by 8 to 35% of the ventilator cycle), speaking time increased by 19% and pause time decreased by 12%. When PEEP was added (5 to 10 cm H2O), speaking time was 25% longer and obligatory pauses were 21% shorter. When lengthened TI and PEEP were combined (with or without reduced tidal volume), their effects were additive, increasing speaking time by 55% and decreasing pause time by 36%. The combined intervention improved speech timing, loudness, voice quality, and articulation. Individual differences in subject response to the interventions were substantial in some cases. We also tested high PEEP (15 cm H2O) in three subjects and found speech to be essentially identical to that produced with a one-way valve. Conclusions: These simple interventions markedly improve ventilator-supported speech and are safe, at least when used on a short-term basis. High PEEP is a safer alternative than a one-way valve.
---
paper_title: Neck and Face Surface Electromyography for Prosthetic Voice Control After Total Laryngectomy
paper_content:
The electrolarynx (EL) is a common rehabilitative speech aid for individuals who have undergone total laryngectomy, but it typically lacks pitch control and requires the exclusive use of one hand. The viability of using neck and face surface electromyography (sEMG) to control the onset, offset, and pitch of an EMG-controlled EL (EMG-EL) was studied. Eight individuals who had undergone total laryngectomy produced serial and running speech using a typical handheld EL and the EMG-EL while attending to real-time visual sEMG biofeedback. Running speech tokens produced with the EMG-EL were examined for naturalness by 10 listeners relative to those produced with a typical EL using a visual analog scale. Serial speech performance was assessed as the percentage of words that were fully voiced and pauses that were successfully produced. Results of the visual analog scale assessment indicated that individuals were able to use the EMG-EL without training to produce running speech perceived as natural as that produced with a typical handheld EL. All participants were able to produce running and serial speech with the EMG-EL controlled by sEMG from multiple recording locations, with the superior ventral neck or submental surface locations providing at least one of the two best control locations.
---
paper_title: Silent speech interfaces
paper_content:
The possibility of speech processing in the absence of an intelligible acoustic signal has given rise to the idea of a 'silent speech' interface, to be used as an aid for the speech-handicapped, or as part of a communications system operating in silence-required or high-background-noise environments. The article first outlines the emergence of the silent speech interface from the fields of speech production, automatic speech processing, speech pathology research, and telecommunications privacy issues, and then follows with a presentation of demonstrator systems based on seven different types of technologies. A concluding section underlining some of the common challenges faced by silent speech interface researchers, and ideas for possible future directions, is also provided.
---
paper_title: Session independent non-audible speech recognition using surface electromyography
paper_content:
In this paper we introduce a speech recognition system based on myoelectric signals. The system handles audible and non-audible speech. Major challenges in surface electromyography based speech recognition ensue from repositioning electrodes between recording sessions, environmental temperature changes, and skin tissue properties of the speaker. In order to reduce the impact of these factors, we investigate a variety of signal normalization and model adaptation methods. An average word accuracy of 97.3% is achieved using seven EMG channels and the same electrode positions. The performance drops to 76.2% after repositioning the electrodes if no normalization or adaptation is performed. By applying our adaptation methods we manage to restore the recognition rates to 87.1%. Furthermore, we compare audibly to non-audibly spoken speech. The results suggest that large differences exist between the corresponding muscle movements. Still, our recognition system recognizes both speech manners accurately when trained on pooled data
---
paper_title: Myoelectric Signals for Multimodal Speech Recognition
paper_content:
A Coupled Hidden Markov Model (CHMM) is proposed in this paper to perform multimodal speech recognition using myoelectric signals (MES) from the muscles of vocal articulation. MES signals are immune to noise, and words that are acoustically similar manifest distinctly in MES. Hence, they would effectively complement the acoustic data in a multimodal speech recognition system. Research in Audio-Visual Speech Recognition has shown that CHMMs model the asynchrony between different data streams effectively. Hence, we propose a CHMM for multimodal speech recognition using audio and MES as the two data streams. Our experiments indicate that the multimodal CHMM system significantly outperforms the audio-only system at different SNRs. We have also provided a comparison between different features for MES and have found that wavelet features provide the best results.
---
paper_title: Speech recognition as a function of channel capacity in a discrete set of channels.
paper_content:
There have been many attempts to perform a frequency analysis of speech and to use the outputs of this analysis to provide cutaneous stimulation. The results of these tests have left in doubt the issue of whether or not cutaneous recognition of speech is actually possible. In addition to other difficulties, an optimum frequency analysis has never been achieved. Instead, filtering configurations have been chosen essentially arbitrarily. The systems considered may well have had insufficient channel capacity for speech recognition even in the event that the tactile stimulators were arranged in an optimum manner. In this paper, several sets of finite frequency band to discrete channels filters were considered. The frequency of each discrete channel was constrained to the fixed center frequency of the corresponding band so as to be directly translatable to the position of a tactile stimulator. Tests were conducted to measure the auditory recognition rate of speech, resynthesized as the sum of these discrete ch...
---
paper_title: Electrode position optimization for facial EMG measurements for human-computer interface
paper_content:
The aim of this work was to model facial electromyography (fEMG) to find optimal electrode positions for a wearable human-computer interface system. The system is a head cap developed in our institute, with which we can measure fEMG and the electro-oculogram (EOG). The signals can be used to control the computer interface: gaze directions move the cursor and muscle activations correspond to clicking. In this work a very accurate 3D model of the human head was developed and used in the modeling of fEMG. The optimal positions of four electrodes on the forehead measuring the activations of the frontalis and corrugator muscles were defined. The results showed that the electrode pairs used in the frontalis and corrugator measurements should be placed orthogonally to each other to obtain a signal that enables the separation of those two different activations.
---
paper_title: Multi-stream HMM for EMG-based speech recognition
paper_content:
A technique for improving the recognition accuracy of EMG-based speech recognition by applying existing speech recognition technologies is proposed. The authors have proposed an EMG-based speech recognition system that requires only mouth movements, voice need not be generated. A multi-stream HMM (hidden Markov model) and feature extraction technique are applied to EMG-based speech recognition. 3 channel facial EMG signals are collected from ten subjects when uttering 10 Japanese isolated digits. One channel corresponds to one stream. By examining various features, we found that the delta component of the static parameter leads to higher accuracy. Compared to equal stream weighting, the individual optimization of stream weights increased recognition accuracy by 4.0% which corresponds to a 12.8% reduction in error rate. This result shows that multistream HMM is effective for the classification of EMG.
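The delta (dynamic) components reported as helpful above are conventionally computed as a first-order regression of the static features over a short context window; a minimal sketch follows, with the window length K = 2 chosen as a common default rather than the paper's setting.

```python
# Sketch of delta-feature computation over a context window of +/- K frames.
import numpy as np

def delta(features, K=2):
    """features: (T, D) static feature matrix; returns (T, D) delta features."""
    T = len(features)
    padded = np.pad(features, ((K, K), (0, 0)), mode="edge")
    num = sum(k * (padded[K + k:K + k + T] - padded[K - k:K - k + T])
              for k in range(1, K + 1))
    return num / (2 * sum(k * k for k in range(1, K + 1)))

static = np.random.randn(100, 12)                 # e.g. 12 static features per frame
observation = np.hstack([static, delta(static)])  # static + delta vectors per frame
```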
---
paper_title: EMG based voice recognition
paper_content:
Besides its clinical applications, various researchers have shown that EMG can be utilised in areas such as computer human interface and in developing intelligent prosthetic devices. The paper presents results from a preliminary study. The work describes the outcome in using an artificial neural network (ANN) to recognise and classify human speech based on EMG. The EMG signals were acquired from three articulatory facial muscles. Three subjects were selected and participated in the experiments. Preliminarily, five English vowels were used as recognition variables. The root mean square (RMS) values of the EMG signals were estimated and used as a set of features to feed the ANN. The findings indicate that such a system may have the capacity to recognise and classify speech signals with an accuracy of up to 88%.
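The RMS-plus-ANN pipeline described above can be sketched as follows, with synthetic numbers standing in for the recorded EMG; the channel count matches the abstract, but the network size and data are assumptions for illustration only.

```python
# Illustrative sketch: RMS of three EMG channels -> small neural network -> vowel.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
VOWELS = ["a", "e", "i", "o", "u"]

def rms(window):
    """Root-mean-square value of one channel's samples."""
    return np.sqrt(np.mean(np.square(window)))

# Placeholder dataset: 5 vowels x 40 utterances, 3 channels x 500 samples each.
X, y = [], []
for label in range(5):
    for _ in range(40):
        channels = rng.normal(scale=1.0 + 0.5 * label, size=(3, 500))
        X.append([rms(ch) for ch in channels])
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("first prediction:", VOWELS[int(clf.predict(X_te[:1])[0])])
```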
---
paper_title: Design of an Electroocular Computing Interface
paper_content:
The human retina consists of an electrically-charged nerve membrane. This potential is a constant value for a given adaptation without stimulation; it is the retinal resting potential. The retinal resting potential causes an electric field around the eyeball, centered on the optical axis, which can be measured by placing electrodes near to the eye. As a result, the motion of the eye causes a measurable change of DC voltage between the surface electrodes. The same vector coordinate system employed in the modern computer mouse may be adapted for use with our electro-ocular interface. Such a device would provide a relative position of gaze and have application in both hands-busy and assistive research. The theory behind our device, hardware design, the experimental results, and efficacy of the system are presented.
---
paper_title: Implementation of the EOG-Based Human Computer Interface System
paper_content:
Bio-based human-computer interfaces (HCI) have attracted more and more attention from researchers all over the world in recent years. In this paper, an EOG-based HCI system is introduced. It is composed of three parts: EOG amplification and acquisition, EOG pattern recognition, and control command output. Three plane electrodes are employed to detect the EOG signals, which contain information related to eye blinking and vertical (or horizontal) eye movements referenced to a pre-designed command table. An online signal processing algorithm is designed to extract the command information contained in the EOG signals, and these commands can be used to control the computer or other instruments. Based on this HCI system, remote control experiments driven by EOG are realized.
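The command-table idea, thresholding the horizontal and vertical channels and treating large, brief vertical spikes as blinks, can be sketched as below; all threshold values are illustrative assumptions rather than the settings used in the reported experiments.

```python
# Hedged sketch of threshold-based EOG command detection on two channels.
import numpy as np

H_THRESH = 100e-6      # horizontal deflection threshold (V), assumed
V_THRESH = 100e-6      # vertical deflection threshold (V), assumed
BLINK_THRESH = 300e-6  # blinks appear as large vertical spikes, assumed

def eog_command(h_window, v_window):
    """Classify one window of the two EOG channels into a command string."""
    if np.max(np.abs(v_window)) > BLINK_THRESH:
        return "SELECT"                 # blink -> selection / click
    h, v = np.mean(h_window), np.mean(v_window)
    if h > H_THRESH:
        return "RIGHT"
    if h < -H_THRESH:
        return "LEFT"
    if v > V_THRESH:
        return "UP"
    if v < -V_THRESH:
        return "DOWN"
    return "IDLE"
```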
---
paper_title: A Practical Biosignal-Based Human Interface Applicable to the Assistive Systems for People with Motor Impairment
paper_content:
An alternative human interface enabling the handicapped with severe motor disabilities to control an assistive system is presented. Since this interface relies on the biosignals originating from the contraction of muscles on the face during particular movements, even individuals with a paralyzed limb can use it with ease. For real-world application, a dedicated hardware module employing a general-purpose DSP was implemented and its validity tested on an electrically powered wheelchair. Furthermore, an additional attempt to reduce error rates to a minimum for stable operation was also made based on the entropy information inherent in the signals during the classification phase. In the experiments in which 11 subjects participated, it was found most of them could control the target system at their own will, and thus the proposed interface could be considered a potential alternative for the interaction of the severely handicapped with electronic systems.
---
paper_title: EOG and EMG based virtual keyboard: A brain-computer interface
paper_content:
This paper discusses a brain-computer interface through electrooculogram (EOG) and electromyogram (EMG) signals. In situations of disease or trauma, there may be inability to communicate with others through means such as speech or typing. Eye movement tends to be one of the last remaining active muscle capabilities for people with neurodegenerative disorders, such as amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease. Thus, there is a need for eye movement-based systems to enable communication. To meet this need, we proposed a system to accept eye-gaze controlled navigation to a particular letter and an EMG-based click to enter the letter. Eye-gaze direction (angle) is obtained from EOG signals and the EMG signal is recorded from eyebrow muscle activity. A virtual screen keyboard may be used to examine the usability of the proposed system.
---
paper_title: Eye Movement-Based Human-Computer Interaction Techniques: Toward Non-Command Interfaces
paper_content:
User-computer dialogues are typically one-sided, with the bandwidth from computer to user far greater than that from user to computer. The movement of a user’s eyes can provide a convenient, natural, and high-bandwidth source of additional user input, to help redress this imbalance. We therefore investigate the introduction of eye movements as a computer input medium. Our emphasis is on the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way. This chapter describes research at NRL on developing such interaction techniques and the broader issues raised by non-command-based interaction styles. It discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, reports our experiences and observations on them, and considers eye movement-based interaction as an exemplar of a new, more general class of non-command-based user-computer interaction.
---
paper_title: Drifting and Blinking Compensation in Electro-oculography (EOG) Eye-gaze Interface
paper_content:
This paper describes an eye-gaze interface using a biological signal, the electro-oculogram (EOG). This interface enables a user to move a computer cursor on a graphical user interface using eye gaze movement alone. It will be useful as a communication aid for individuals with mobility handicaps. Although EOG is easily recordable, drifting and blinking problems must be solved to produce a reliable eye-gaze interface. Here we introduced a calibration method and a feedback control to overcome these problems.
---
paper_title: Development of human-mobile communication system using electrooculogram signals
paper_content:
We present the development of a human-mobile robot communication system using electrooculogram (EOG) signals. An ideal-velocity-shape signal processing algorithm is proposed to extract the position the eyes are focusing on from the noise and drift included in EOG signals. Additionally, an efficient algorithm for the detection of various eyelid movements, such as blinks and winks, is suggested. Two experiments were performed to validate the human-mobile communication system. One is point stabilization of a mobile robot using the extracted eye-focusing position data. The other is a moving-target following experiment using various eyelid movements as mobile robot commands.
---
paper_title: Wheelchair Guidance Strategies Using EOG
paper_content:
This paper describes an eye-control method, based on electrooculography (EOG), for guiding and controlling a wheelchair for disabled people; the control is actually effected by eye movements within the socket. An eye model based on an electrooculographic signal is proposed and its validity is studied. Different techniques and guidance strategies are then shown with comments on the advantages and disadvantages of each one. The system consists of a standard electric wheelchair with an on-board computer, sensors and a graphic user interface run by the computer. This control technique could be useful in multiple applications, such as mobility and communication aid for handicapped persons.
---
paper_title: System for assisted mobility using eye movements based on electrooculography
paper_content:
This paper describes an eye-control method based on electrooculography (EOG) to develop a system for assisted mobility. One of its most important features is its modularity, making it adaptable to the particular needs of each user according to the type and degree of handicap involved. An eye model based on the electrooculographic signal is proposed and its validity is studied. Several human-machine interfaces (HMI) based on EOG are discussed, focusing our study on guiding and controlling a wheelchair for disabled people, where the control is actually effected by eye movements within the socket. Different techniques and guidance strategies are then shown with comments on the advantages and disadvantages of each one. The system consists of a standard electric wheelchair with an on-board computer, sensors and a graphic user interface run by the computer. On the other hand, this eye-control method can be applied to handle graphical interfaces, where the eye is used as a computer mouse. Results obtained show that this control technique could be useful in multiple applications, such as mobility and communication aids for handicapped persons.
---
paper_title: Development of communication supporting device controlled by eye movements and voluntary eye blink
paper_content:
A communication support interface controlled by eye movements and voluntary eye blink has been developed for disabled individuals with motor paralysis who cannot speak. Horizontal and vertical electro-oculograms were measured using two surface electrodes attached above and beside the dominant eye, referenced to an earlobe electrode, and amplified with AC coupling in order to reduce unnecessary drift. Four directional cursor movements (up, down, right, and left) and one selection operation were realized by logically combining the two detected channel signals based on threshold settings specific to the individual. Letter input experiments were conducted on a virtual screen keyboard. The method's usability was enhanced by minimizing the number of electrodes and applying training to both the subject and the device. As a result, an accuracy of 90.1 ± 3.6% and a processing speed of 7.7 ± 1.9 letters/min were obtained using our method.
---
paper_title: A low-cost interface for control of computer functions by means of eye movements
paper_content:
Human-computer interactions (HCI) have become an important area of research and development in computer science and psychology. Appropriate use of computers could be of primary importance for the communication and education of subjects who cannot move, speak, see or hear properly. The aim of our study was to develop a reliable, low-cost and easy-to-use HCI based on electrooculography signal analysis, to allow physically impaired patients to control a computer for assisted communication. Twenty healthy subjects served as volunteers: eye movements were captured by means of four electrodes and a two-channel amplifier. The output signal was then transmitted to an "Analog to Digital" (AD) converter, which digitized the signal of the amplifier at a rate of 500 Hz, before being sent to a laptop. We designed and coded dedicated software, which analyzed the input signal to give an interpretation of eye movements. By means of a single ocular movement (up, down, left and right) the subjects were then able to move a cursor over a screen keyboard, passing from one letter to another; a double eye blink was then necessary to select and write the active letter. After a brief training session, all the subjects were able to confidently control the cursor and write words using only ocular movements and blinking. For each subject we presented three series of randomized words: the mean time required to enter a single character was about 8.5 s, while input errors were very limited (less than 1 per 250 characters). Our results confirm those obtained in previous studies: an eye-movement interface can be used to properly control computer functions and to assist communication of movement-impaired patients.
---
paper_title: Development of EOG-Based Communication System Controlled by Eight-Directional Eye Movements
paper_content:
A communication support interface controlled by eye movements and voluntary eye blink has been developed for disabled individuals with motor paralysis who cannot speak. Horizontal and vertical electro-oculograms were measured using two electrodes attached above and beside the dominant eye, referenced to an earlobe electrode, and amplified with AC coupling in order to reduce unnecessary drift. Eight directional cursor movements and one selection operation were realized by logically combining the two detected channel signals based on threshold settings specific to the individual. In experiments using a projected screen keyboard, the processing speed improved to 12.1 letters/min while the accuracy was 90.4%.
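The logical combination of the two thresholded channels into eight directions can be illustrated with a small lookup table, as sketched below; the signs and the single shared threshold are assumptions, not the individual-specific settings used in the study.

```python
# Sketch: thresholded horizontal/vertical EOG deflections -> 8 cursor directions.
DIRECTIONS = {
    (0, 1): "UP", (0, -1): "DOWN", (1, 0): "RIGHT", (-1, 0): "LEFT",
    (1, 1): "UP-RIGHT", (-1, 1): "UP-LEFT",
    (1, -1): "DOWN-RIGHT", (-1, -1): "DOWN-LEFT",
}

def sign_with_deadband(value, threshold):
    """-1, 0 or +1 depending on whether the deflection exceeds the threshold."""
    if value > threshold:
        return 1
    if value < -threshold:
        return -1
    return 0

def eight_direction(h, v, threshold=100e-6):
    key = (sign_with_deadband(h, threshold), sign_with_deadband(v, threshold))
    return DIRECTIONS.get(key, "IDLE")   # (0, 0) -> no movement

print(eight_direction(150e-6, -120e-6))  # "DOWN-RIGHT" under these assumptions
```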
---
paper_title: Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans
paper_content:
Brain-computer interfaces (BCIs) can provide communication and control to people who are totally paralyzed. BCIs can use noninvasive or invasive methods for recording the brain signals that convey the user's commands. Whereas noninvasive BCIs are already in use for simple applications, it has been widely assumed that only invasive BCIs, which use electrodes implanted in the brain, can provide multidimensional movement control of a robotic arm or a neuroprosthesis. We now show that a noninvasive BCI that uses scalp-recorded electroencephalographic activity and an adaptive algorithm can provide humans, including people with spinal cord injuries, with multidimensional point-to-point movement control that falls within the range of that reported with invasive methods in monkeys. In movement time, precision, and accuracy, the results are comparable to those with invasive BCIs. The adaptive algorithm used in this noninvasive BCI identifies and focuses on the electroencephalographic features that the person is best able to control and encourages further improvement in that control. The results suggest that people with severe motor disabilities could use brain signals to operate a robotic arm or a neuroprosthesis without needing to have electrodes implanted in their brains.
---
paper_title: Two-dimensional movement control using electrocorticographic signals in humans
paper_content:
We show here that a brain-computer interface (BCI) using electrocorticographic activity (ECoG) and imagined or overt motor tasks enable humans to control a computer cursor in two dimensions. Over a brief training period of 12-36 min, each of five human subjects acquired substantial control of particular ECoG features recorded from several locations over the same hemisphere, and achieved average success rates of 53-73% in a two-dimensional four-target center-out task in which chance accuracy was 25%. Our results support the expectation that ECoG-based BCIs can combine high performance with technical and clinical practicality, and also indicate promising directions for further research.
---
paper_title: Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials
paper_content:
Abstract This paper describes the development and testing of a system whereby one can communicate through a computer by using the P300 component of the event-related brain potential (ERP). Such a system may be used as a communication aid by individuals who cannot use any motor system for communication (e.g., ‘locked-in’ patients). The 26 letters of the alphabet, together with several other symbols and commands, are displayed on a computer screen which serves as the keyboard or prosthetic device. The subject focuses attention successively on the characters he wishes to communicate. The computer detects the chosen character on-line and in real time. This detection is achieved by repeatedly flashing rows and columns of the matrix. When the elements containing the chosen character are flashed, a P300 is elicited, and it is this P300 that is detected by the computer. We report an analysis of the operating characteristics of the system when used with normal volunteers, who took part in 2 experimental sessions. In the first session (the pilot study/training session) subjects attempted to spell a word and convey it to a voice synthesizer for production. In the second session (the analysis of the operating characteristics of the system) subjects were required simply to attend to individual letters of a word for a specific number of trials while data were recorded for off-line analysis. The analyses suggest that this communication channel can be operated accurately at the rate of 0.20 bits/sec. In other words, under the conditions we used, subjects can communicate 12.0 bits, or 2.3 characters, per min.
---
paper_title: Study of discriminant analysis applied to motor imagery bipolar data
paper_content:
We present a study of linear, quadratic and regularized discriminant analysis (RDA) applied to motor imagery data of three subjects. The aim of the work was to find out which classifier can better separate these two-class motor imagery data: linear, quadratic or some function in between the linear and quadratic solutions. Discriminant analysis methods were tested with two different feature extraction techniques, adaptive autoregressive parameters and logarithmic band power estimates, which are commonly used in brain–computer interface research. Differences in classification accuracy of the classifiers were found when using different amounts of data; if a small amount was available, the best classifier was linear discriminant analysis (LDA), and if enough data were available all three classifiers performed very similarly. This suggests that the effort needed to find regularizing parameters for RDA can be avoided by using LDA.
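A comparison of this kind can be reproduced in miniature with logarithmic band-power features and LDA, as sketched below on synthetic two-class trials; the band limits, channel count and data are illustrative assumptions.

```python
# Minimal sketch: log band-power features from two channels classified with LDA.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250
BANDS = [(8, 12), (18, 26)]              # assumed mu and beta bands (Hz)
rng = np.random.default_rng(1)

def log_band_power(trial):
    """trial: (channels, samples) -> log band power per channel and band."""
    feats = []
    for ch in trial:
        freqs, psd = welch(ch, fs=FS, nperseg=128)
        feats += [np.log(psd[(freqs >= lo) & (freqs < hi)].sum()) for lo, hi in BANDS]
    return feats

def fake_trial(attenuate):
    """Two-channel synthetic trial; class 1 has attenuated mu activity (ERD-like)."""
    t = np.arange(0, 2, 1 / FS)
    mu = (0.5 if attenuate else 1.0) * np.sin(2 * np.pi * 10 * t)
    return np.stack([mu + 0.5 * rng.normal(size=t.size) for _ in range(2)])

X = np.array([log_band_power(fake_trial(bool(label))) for label in [0, 1] for _ in range(40)])
y = np.array([label for label in [0, 1] for _ in range(40)])
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print("LDA cross-validated accuracy:", scores.mean())
```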
---
paper_title: EEG-based neuroprosthesis control: A step towards clinical practice
paper_content:
Abstract This case study demonstrates the coupling of an electroencephalogram (EEG)-based Brain–Computer Interface (BCI) with an implanted neuroprosthesis (Freehand ® system). Because the patient was available for only 3 days, the goal was to demonstrate the possibility of a patient gaining control over the motor imagery-based Graz BCI system within a very short training period. By applying himself to an organized and coordinated training procedure, the patient was able to generate distinctive EEG-patterns by the imagination of movements of his paralyzed left hand. These patterns consisted of power decreases in specific frequency bands that could be classified by the BCI. The output signal of the BCI emulated the shoulder joystick usually used, and by consecutive imaginations the patient was able to switch between different grasp phases of the lateral grasp that the Freehand ® system provided. By performing a part of the grasp-release test, the patient was able to move a simple object from one place to another. The results presented in this work give evidence that Brain–Computer Interfaces are an option for the control of neuroprostheses in patients with high spinal cord lesions. The fact that the user learned to control the BCI in a comparatively short time indicates that this method may also be an alternative approach for clinical purposes.
---
paper_title: Brain–computer interfaces as new brain output pathways
paper_content:
Brain–computer interfaces (BCIs) can provide non-muscular communication and control for people with severe motor disabilities. Current BCIs use a variety of invasive and non-invasive methods to record brain signals and a variety of signal processing methods. Whatever the recording and processing methods used, BCI performance (e.g. the ability of a BCI to control movement of a computer cursor) is highly variable and, by the standards applied to neuromuscular control, could be described as ataxic. In an effort to understand this imperfection, this paper discusses the relevance of two principles that underlie the brain's normal motor outputs. The first principle is that motor outputs are normally produced by the combined activity of many CNS areas, from the cortex to the spinal cord. Together, these areas produce appropriate control of the spinal motoneurons that activate muscles. The second principle is that the acquisition and life-long preservation of motor skills depends on continual adaptive plasticity throughout the CNS. This plasticity optimizes the control of spinal motoneurons. In the light of these two principles, a BCI may be viewed as a system that changes the outcome of CNS activity from control of spinal motoneurons to, instead, control of the cortical (or other) area whose signals are used by the BCI to determine the user's intent. In essence, a BCI attempts to assign to cortical neurons the role normally performed by spinal motoneurons. Thus, a BCI requires that the many CNS areas involved in producing normal motor actions change their roles so as to optimize the control of cortical neurons rather than spinal motoneurons. The disconcerting variability of BCI performance may stem in large part from the challenge presented by the need for this unnatural adaptation. This difficulty might be reduced, and BCI development might thereby benefit, by adopting a ‘goal-selection’ rather than a ‘process- control’ strategy. In ‘process control’, a BCI manages all the intricate high-speed interactions involved in movement. In ‘goal selection’, by contrast, the BCI simply communicates the user's goal to software that handles the high–speed interactions needed to achieve the goal. Not only is ‘goal selection’ less demanding, but also, by delegating lower-level aspects of motor control to another structure (rather than requiring that the cortex do everything), it more closely resembles the distributed operation characteristic of normal motor control.
---
paper_title: Nessi: An EEG-Controlled Web Browser for Severely Paralyzed Patients
paper_content:
We have previously demonstrated that an EEG-controlled web browser based on self-regulation of slow cortical potentials (SCPs) enables severely paralyzed patients to browse the internet independently of any voluntary muscle control. However, this system had several shortcomings, among them that patients could only browse within a limited number of web pages and had to select links from an alphabetical list, causing problems if the link names were identical or if they were unknown to the user (as in graphical links). Here we describe a new EEG-controlled web browser, called Nessi, which overcomes these shortcomings. In Nessi, the open source browser, Mozilla, was extended by graphical in-place markers, whereby different brain responses correspond to different frame colors placed around selectable items, enabling the user to select any link on a web page. Besides links, other interactive elements are accessible to the user, such as e-mail and virtual keyboards, opening up a wide range of hypertext-based applications.
---
paper_title: Bipolar electrode selection for a motor imagery based brain?computer interface
paper_content:
A motor imagery based brain?computer interface (BCI) provides a non-muscular communication channel that enables people with paralysis to control external devices using their motor imagination. Reducing the number of electrodes is critical to improving the portability and practicability of the BCI system. A novel method is proposed to reduce the number of electrodes to a total of four by finding the optimal positions of two bipolar electrodes. Independent component analysis (ICA) is applied to find the source components of mu and alpha rhythms, and optimal electrodes are chosen by comparing the projection weights of sources on each channel. The results of eight subjects demonstrate the better classification performance of the optimal layout compared with traditional layouts, and the stability of this optimal layout over a one week interval was further verified.
---
paper_title: Event-related EEG/MEG synchronization and desynchronization: basic principles
paper_content:
An internally or externally paced event results not only in the generation of an event-related potential (ERP) but also in a change in the ongoing EEG/MEG in the form of an event-related desynchronization (ERD) or event-related synchronization (ERS). The ERP on the one hand and the ERD/ERS on the other are different responses of neuronal structures in the brain. While the former is phase-locked, the latter is not phase-locked to the event. The most important difference between both phenomena is that the ERD/ERS is highly frequency band-specific, whereby either the same or different locations on the scalp can display ERD and ERS simultaneously. Quantification of ERD/ERS in time and space is demonstrated on data from a number of movement experiments.
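The classical quantification implied here band-pass filters each trial, squares the samples, averages across trials, and expresses power as a percentage change relative to a reference interval, ERD/ERS% = (A - R) / R * 100. The sketch below follows that convention with an assumed band, reference interval and synthetic epochs.

```python
# Sketch of ERD/ERS quantification on single-channel epochs (synthetic data).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250
b, a = butter(4, [8 / (FS / 2), 12 / (FS / 2)], btype="band")  # assumed mu/alpha band

def erd_ers(trials, ref_slice):
    """trials: (n_trials, n_samples) epochs time-locked to the event."""
    power = filtfilt(b, a, trials, axis=1) ** 2   # instantaneous band power
    avg = power.mean(axis=0)                      # average over trials
    R = avg[ref_slice].mean()                     # reference-interval power
    return (avg - R) / R * 100.0                  # % ERD (negative) / ERS (positive)

# Synthetic epochs: 1 s reference followed by 1 s with the rhythm attenuated.
rng = np.random.default_rng(2)
t = np.arange(0, 2, 1 / FS)
env = np.where(t < 1.0, 1.0, 0.5)
trials = np.stack([env * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                   + 0.2 * rng.normal(size=t.size) for _ in range(60)])
curve = erd_ers(trials, ref_slice=slice(0, FS))
print("mean ERD in the post-event second: %.1f %%" % curve[FS:].mean())
```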
---
paper_title: "Virtual keyboard" controlled by spontaneous EEG activity
paper_content:
A "virtual keyboard" (VK) is a letter spelling device operated for example by spontaneous electroencephalogram (EEG), whereby the EEC is modulated by mental hand and leg motor imagery. We report on three able-bodied subjects, operating the VK. The ability in the use of the VK varies between 0.85 and 0.5 letters/min in error-free writing.
---
paper_title: How many people are able to control a P300-based brain–computer interface (BCI)?
paper_content:
An EEG-based brain–computer system can be used to control external devices such as computers, wheelchairs or Virtual Environments. One of the most important applications is a spelling device to aid severely disabled individuals with communication, for example people disabled by amyotrophic lateral sclerosis (ALS). P300-based BCI systems are optimal for spelling characters with high speed and accuracy, as compared to other BCI paradigms such as motor imagery. In this study, 100 subjects tested a P300based BCI system to spell a 5-character word with only 5 min of training. EEG data were acquired while the subject looked at a 36-character matrix to spell the word WATER. Two different versions of the P300 speller were used: (i) the row/column speller (RC) that flashes an entire column or row of characters and (ii) a single character speller (SC) that flashes each character individually. The subjects were free to decide which version to test. Nineteen subjects opted to test both versions. The BCI system classifier was trained on the data collected for the word WATER. During the real-time phase of the experiment, the subject spelled the word LUCAS, and was provided with the classifier selection accuracy after each of the five letters. Additionally, subjects filled out a questionnaire about age, sex, education, sleep duration, working duration, cigarette consumption, coffee consumption, and level of disturbance that the flashing characters produced. 72.8% (N = 81) of the subjects were able to spell with 100% accuracy in the RC paradigm and 55.3% (N = 38) of the subjects spelled with 100% accuracy in the SC paradigm. Less than 3% of the subjects did not spell any character correctly. People who slept less than 8 h performed significantly better than other subjects. Sex, education, working duration, and cigarette and coffee consumption were not statistically related to differences in accuracy. The disturbance of the flashing characters was rated with a median score of 1 on a scale from 1 to 5 (1, not disturbing; 5, highly disturbing). This study shows that high spelling accuracy can be achieved with the P300 BCI system using approximately 5 min of training data for a large number of non-disabled subjects, and that the RC paradigm is superior to the SC paradigm. 89% of the 81 RC subjects were able to spell with accuracy 80–100%. A similar study using a motor imagery BCI with 99 subjects showed that only 19% of the subjects were able to achieve accuracy of 80–100%. These large differences in accuracy suggest that with limited amounts of training data the P300-based BCI is superior to the motor imagery BCI. Overall, these results are very encouraging and a similar study should be conducted with subjects who have ALS to determine if their accuracy levels are similar.
---
paper_title: A P300 event-related potential brain–computer interface (BCI): The effects of matrix size and inter stimulus interval on performance
paper_content:
We describe a study designed to assess properties of a P300 brain–computer interface (BCI). The BCI presents the user with a matrix containing letters and numbers. The user attends to a character to be communicated and the rows and columns of the matrix briefly intensify. Each time the attended character is intensified it serves as a rare event in an oddball sequence and it elicits a P300 response. The BCI works by detecting which character elicited a P300 response. We manipulated the size of the character matrix (either 3 × 3 or 6 × 6) and the duration of the inter stimulus interval (ISI) between intensifications (either 175 or 350 ms). Online accuracy was highest for the 3 × 3 matrix 175-ms ISI condition, while bit rate was highest for the 6 × 6 matrix 175-ms ISI condition. Average accuracy in the best condition for each subject was 88%. P300 amplitude was significantly greater for the attended stimulus and for the 6 × 6 matrix. This work demonstrates that matrix size and ISI are important variables to consider when optimizing a BCI system for individual users and that a P300-BCI can be used for effective communication.
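The row/column selection logic of such a speller can be sketched as follows: average the epochs recorded after each row and column intensification, score each by a post-stimulus amplitude feature, and output the character at the intersection of the best row and best column. A deployed system would replace the amplitude heuristic with a trained classifier and accumulate scores over several stimulation sequences; the matrix layout, sampling rate and scoring window below are assumptions.

```python
# Hedged sketch of character selection in a P300 row/column matrix speller.
import numpy as np

MATRIX = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                   list("STUVWX"), list("YZ1234"), list("56789_")])
FS = 250
P300_WINDOW = slice(int(0.25 * FS), int(0.45 * FS))  # ~250-450 ms post-stimulus

def select_character(epochs, labels):
    """epochs: (n_flashes, n_samples) single-channel EEG epochs, one per flash;
    labels: (n_flashes,) with 0-5 = flashed row index, 6-11 = flashed column index."""
    labels = np.asarray(labels)
    scores = np.array([epochs[labels == k].mean(axis=0)[P300_WINDOW].mean()
                       for k in range(12)])
    row, col = int(np.argmax(scores[:6])), int(np.argmax(scores[6:]))
    return MATRIX[row, col]
```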
---
paper_title: A BCI-based environmental controller for the motion-disabled
paper_content:
With the development of brain-computer interface (BCI) technology, researchers are now attempting to put current BCI techniques into practical application. This paper presents an environmental controller using a BCI technique based on steady-state visual evoked potential. The system is composed of a stimulator, a digital signal processor, and a trainable infrared remote-controller. The attractive features of this system include noninvasive signal recording, little training requirement, and a high information transfer rate. Our test results have shown that this system can distinguish at least 48 targets and provide a transfer rate up to 68 b/min. The system has been applied to the control of an electric apparatus successfully.
---
paper_title: A human computer interface using SSVEP-based BCI technology
paper_content:
To address the issue of system simplicity and subject applicability, a brain-controlled HCI system derived from a steady-state visual evoked potential (SSVEP) based brain-computer interface (BCI) is proposed in this paper. Aiming at an external input device for personal computers, key issues of hardware and software design for better performance and a user-friendly interface are introduced systematically. With proper parameter customization for each individual, an average information transfer rate of 46 bits/min was achieved in the operation of dialing a phone number. With encouraging online performance and the advantage of system simplicity, the proposed HCI using SSVEP-based BCI technology is a promising substitute for a standard computer input device for both healthy and disabled computer users.
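Target detection in an SSVEP system of this kind amounts to comparing the response power at the candidate flicker frequencies. The sketch below uses a simple FFT-based score with an assumed second-harmonic weighting; the frequencies and window length are illustrative, and practical systems often use canonical correlation analysis or similar methods instead.

```python
# Minimal sketch of SSVEP target detection by spectral peak comparison.
import numpy as np

FS = 250
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]   # hypothetical flicker frequencies (Hz)

def ssvep_target(eeg):
    """eeg: 1-D occipital-channel segment; returns the detected flicker frequency."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1 / FS)

    def power_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    scores = [power_at(f) + 0.5 * power_at(2 * f) for f in STIM_FREQS]
    return STIM_FREQS[int(np.argmax(scores))]

# Synthetic check: a 12 Hz response plus harmonic and noise is detected as 12 Hz.
t = np.arange(0, 4, 1 / FS)
eeg = np.sin(2 * np.pi * 12 * t) + 0.3 * np.sin(2 * np.pi * 24 * t) \
      + 0.5 * np.random.default_rng(3).normal(size=t.size)
print(ssvep_target(eeg))
```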
---
paper_title: P300-based brain computer interface: Reliability and performance in healthy and paralysed participants
paper_content:
Objective: This study aimed to describe the use of the P300 event-related potential as a control signal in a brain computer interface (BCI) for healthy and paralysed participants. Methods: The experimental device used the P300 wave to control the movement of an object on a graphical interface. Visual stimuli, consisting of four arrows (up, right, down, left), were randomly presented in peripheral positions on the screen. Participants were instructed to recognize only the arrow indicating a specific direction for an object to move. P300 epochs, synchronized with the stimulus, were analyzed on-line via Independent Component Analysis (ICA) with subsequent feature extraction and classification by using a neural network. Results: We tested the reliability and the performance of the system in real time. The system needed a short training period to allow task completion and reached good performance. Nonetheless, severely impaired patients had lower performance than healthy participants. Conclusions: The proposed system is effective for use with healthy participants, whereas further research is needed before it can be used with locked-in syndrome patients. Significance: The P300-based BCI described can reliably control, in 'real time', the motion of a cursor on a graphical interface, and no time-consuming training is needed in order to test possible applications for motor-impaired patients.
---
paper_title: BCI2000: a general-purpose brain-computer interface (BCI) system
paper_content:
Many laboratories have begun to develop brain-computer interface (BCI) systems that provide communication and control capabilities to people with severe motor disabilities. Further progress and realization of practical applications depends on systematic evaluations and comparisons of different brain signals, recording methods, processing algorithms, output formats, and operating protocols. However, the typical BCI system is designed specifically for one particular BCI method and is, therefore, not suited to the systematic studies that are essential for continued progress. In response to this problem, we have developed a documented general-purpose BCI research and development platform called BCI2000. BCI2000 can incorporate alone or in combination any brain signals, signal processing methods, output devices, and operating protocols. This report is intended to describe to investigators, biomedical engineers, and computer scientists the concepts that the BCI2000 system is based upon and gives examples of successful BCI implementations using this system. To date, we have used BCI2000 to create BCI systems for a variety of brain signals, processing methods, and applications. The data show that these systems function well in online operation and that BCI2000 satisfies the stringent real-time requirements of BCI systems. By substantially reducing labor and cost, BCI2000 facilitates the implementation of different BCI systems and other psychophysiological experiments. It is available with full documentation and free of charge for research or educational purposes and is currently being used in a variety of studies by many research groups.
---
paper_title: Friend: A Communication Aid for Persons With Disabilities
paper_content:
Computers offer valuable opportunities to people with physical disabilities. For example, a computer can allow someone with severe speech and motor impairment to engage more fully with the world. This paper describes the design of a communication aid for motor-impaired users, who literally use computers as their communication partners. Currently, a low-cost interface suitable for different types of motor-impaired users is hardly available. Additionally, the target audience of such existing systems is very limited. The present work solves these problems through its adaptation mechanism. The adaptation mechanism provides an appropriate interface from an interface bank for each user before the start of an interaction, and it continues during and after the interaction to personalize the system to the individual user.
---
paper_title: Users with Disabilities: Maximum Control with Minimum Effort
paper_content:
People with disabilities can benefit greatly from services provided by computers and robots. Access to remote communications and information, as well as to interpersonal communication and environmental control, is assisted by current ubiquitous computers, wired and wireless networks and intelligent environments. Sensory, physical and/or cognitive restrictions on interacting with computers can be avoided by means of alternative interaction devices and procedures. Nevertheless, alternative methods are usually much slower than standard communications, frequently leading users with disabilities into unbalanced or unsafe situations. Therefore, the main challenge of human-machine interaction systems that have to be used by people with disabilities is to obtain the maximum communication and control with the minimum physical and cognitive effort from the user. This lecture overviews the main techniques used to optimize the control and communication flow, resulting in higher user satisfaction and security.
---
paper_title: The Tufts-MIT Prescription Guide: Assessment of Users to Predict the Suitability of Augmentative Communication Devices
paper_content:
Under contract to the National Institute of Neurological and Communicative Disorders and Stroke, we developed a system for prescribing augmentative communication devices for motor-impaired, nonvocal people. The system is novel in that it scores devices for their suitability for a client based on assessments that do not involve trial-and-error evaluation of client performance with devices. The “benchmarks” it calculates are designed to be predictive of the overall utility of a device given the client's needs, and of the communication rate the client will achieve with it once it has become completely familiar. The scoring process is performed by a program known as the Tufts-MIT Prescription Guide, which runs on IBM XT-compatible computers. Special-purpose assessment instrumentation has been developed to perform the motor-assessment tasks required by the Guide to estimate expert rate. An exhaustive questionnaire is used to get at the device features and functions that will be useful and preferable to a client...
---
paper_title: Defining brain–machine interface applications by matching interface performance with device requirements
paper_content:
Abstract Interaction with machines is mediated by human–machine interfaces (HMIs). Brain–machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
---
paper_title: Information transfer rate in a five-classes brain-computer interface
paper_content:
The information transfer rate, given in bits per trial, is used as an evaluation measurement in a brain-computer interface (BCI). Three subjects performed four motor-imagery (left hand, right hand, foot, and tongue) and one mental-calculation task. Classification of the EEG patterns is based on band power estimates and hidden Markov models. We propose a method that combines the EEG patterns based on separability into subsets of two, three, four, and five mental tasks. The information transfer rates of the BCI systems comprised of these subsets are reported. The achieved information transfer rates vary from 0.42 to 0.81 bits per trial and reveal that the upper limit of different mental tasks for a BCI system is three. In each subject, different combinations of three tasks resulted in the best performance.
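To make the bits-per-trial figure concrete, the sketch below computes the standard Wolpaw-style information transfer per trial from the number of classes and the classification accuracy; the trial duration in the example is an assumption for illustration, not a value from the paper.

```python
import math

def bits_per_trial(n_classes: int, accuracy: float) -> float:
    """Wolpaw-style information transfer per trial, in bits.

    Assumes errors are spread uniformly over the remaining classes and
    clamps the result at 0 when accuracy is at or below chance level.
    """
    if accuracy >= 1.0:
        return math.log2(n_classes)
    if accuracy <= 1.0 / n_classes:
        return 0.0
    return (math.log2(n_classes)
            + accuracy * math.log2(accuracy)
            + (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1)))

# Example: a 3-class system classified correctly 80% of the time, with one
# trial every 4 s (the trial duration is an assumption, not a reported value).
b = bits_per_trial(3, 0.80)
print(f"{b:.2f} bits/trial, {b * 60 / 4:.1f} bits/min")
```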
---
paper_title: Brain–computer interfaces for communication and control
paper_content:
Abstract For many years people have speculated that electroencephalographic activity or other electrophysiological measures of brain function might provide a new non-muscular channel for sending messages and commands to the external world – a brain–computer interface (BCI). Over the past 15 years, productive BCI research programs have arisen. Encouraged by new understanding of brain function, by the advent of powerful low-cost computer equipment, and by growing recognition of the needs and potentials of people with disabilities, these programs concentrate on developing new augmentative communication and control technology for those with severe neuromuscular disorders, such as amyotrophic lateral sclerosis, brainstem stroke, and spinal cord injury. The immediate goal is to provide these users, who may be completely paralyzed, or ‘locked in’, with basic communication capabilities so that they can express their wishes to caregivers or even operate word processing programs or neuroprostheses. Present-day BCIs determine the intent of the user from a variety of different electrophysiological signals. These signals include slow cortical potentials, P300 potentials, and mu or beta rhythms recorded from the scalp, and cortical neuronal activity recorded by implanted electrodes. They are translated in real-time into commands that operate a computer display or other device. Successful operation requires that the user encode commands in these signals and that the BCI derive the commands from the signals. Thus, the user and the BCI system need to adapt to each other both initially and continually so as to ensure stable performance. Current BCIs have maximum information transfer rates up to 10–25 bits/min. This limited capacity can be valuable for people whose severe disabilities prevent them from using conventional augmentative communication methods. At the same time, many possible applications of BCI technology, such as neuroprosthesis control, may require higher information transfer rates. Future progress will depend on: recognition that BCI research and development is an interdisciplinary problem, involving neurobiology, psychology, engineering, mathematics, and computer science; identification of those signals, whether evoked potentials, spontaneous rhythms, or neuronal firing rates, that users are best able to control independent of activity in conventional motor output pathways; development of training methods for helping users to gain and maintain that control; delineation of the best algorithms for translating these signals into device commands; attention to the identification and elimination of artifacts such as electromyographic and electro-oculographic activity; adoption of precise and objective procedures for evaluating BCI performance; recognition of the need for long-term as well as short-term assessment of BCI performance; identification of appropriate BCI applications and appropriate matching of applications and users; and attention to factors that affect user acceptance of augmentative technology, including ease of use, cosmesis, and provision of those communication and control capacities that are most important to the user. Development of BCI technology will also benefit from greater emphasis on peer-reviewed research publications and avoidance of the hyperbolic and often misleading media attention that tends to generate unrealistic expectations in the public and skepticism in other researchers. 
With adequate recognition and effective engagement of all these issues, BCI systems could eventually provide an important new communication and control option for those with motor disabilities and might also give those without disabilities a supplementary control channel or a control channel useful in special circumstances.
---
paper_title: Pointing Device Usage Guidelines for People With Quadriplegia: A Simulation and Validation Study Utilizing an Integrated Pointing Device Apparatus
paper_content:
This study undertakes a simulation and validation experiment to provide guidelines regarding pointing device usage for quadriplegic individuals assisted by a newly developed integrated pointing device apparatus (IPDA). The simulation experiment involved 30 normal subjects whose upper limb movement was restricted by splints. Another 15 subjects with high-level cervical spinal cord injury (SCI) were recruited for the validation study. All normal subjects employed six control modes for target-acquisition and drag-and-drop tasks using an IPDA to integrate common pointing devices. Previously designed software was used to evaluate the operational efficiency (OE), expressed as "able performance" (%AP), of the subjects. The experimental results indicated that the OE of normal subjects for controlling the pointing devices was highest when using the unilateral hand (69-100 %AP), followed by the wrist/hand (65-73 %AP), and finally by either bilateral body parts or the combination of limb and chin (45-53 %AP). The OE for operating an orientation-rotated mouse using the dominant wrist/hand via IPDA in both tasks was equivalent to that for operating a trackball using the dominant hand. The experimental results obtained by subjects with SCI also demonstrated similar findings, although the OEs in each control mode were lower than in normal subjects. Results of this study provide valuable guidelines for selecting and integrating common pointing devices using IPDA for quadriplegic individuals. The priority for selecting which body part should control the pointing devices was as follows: unilateral hands, unilateral wrist/hands, and either bilateral body parts or a limb and chin/head/neck in combination.
---
paper_title: A Magneto-Inductive Sensor Based Wireless Tongue-Computer Interface
paper_content:
We have developed a noninvasive, unobtrusive magnetic wireless tongue-computer interface, called "Tongue Drive," to provide people with severe disabilities with flexible and effective computer access and environment control. A small permanent magnet, secured on the tongue by implantation, piercing, or tissue adhesives, is utilized as a tracer to track the tongue movements. The magnetic field variations inside and around the mouth due to the tongue movements are detected by a pair of three-axial linear magneto-inductive sensor modules mounted bilaterally on a headset near the user's cheeks. After being wirelessly transmitted to a portable computer, the sensor output signals are processed by a differential field cancellation algorithm to eliminate the external magnetic field interference, and translated into user control commands, which could then be used to access a desktop computer, maneuver a powered wheelchair, or control other devices in the user's environment. The system has been successfully tested on six able-bodied subjects for computer access by defining six individual commands to resemble mouse functions. Results show that the Tongue Drive system response time for 87% correctly completed commands is 0.8 s, which yields an information transfer rate of approximately 130 b/min.
---
paper_title: Application of tilt sensors in human-computer mouse interface for people with disabilities
paper_content:
This study describes the motivation and design considerations of an economical head-operated computer mouse. It focuses on the development of a head-operated mouse that employs two tilt sensors placed in a headset to determine head position. One tilt sensor detects lateral head motion to drive the left/right displacement of the mouse cursor, while the other detects vertical head motion to drive the up/down displacement. A touch-switch device was designed to make gentle contact with the operator's cheek; the operator may puff the cheek to trigger single-click, double-click, and drag commands. This system was developed to help people with disabilities live an independent professional life.
---
paper_title: EEG-based communication: improved accuracy by response verification
paper_content:
Humans can learn to control the amplitude of electroencephalographic (EEG) activity in specific frequency bands over sensorimotor cortex and use it to move a cursor to a target on a computer screen. EEG-based communication could provide a new augmentative communication channel for individuals with motor disabilities. In the present system, each dimension of cursor movement is controlled by a linear equation. While the intercept in the equation is continually updated, it does not perfectly eliminate the impact of spontaneous variations in EEG amplitude. This imperfection reduces the accuracy of cursor movement. The authors evaluated a response verification (RV) procedure in which each outcome is determined by two opposite trials (e.g., one top-target trial and one bottom-target trial). Success, or failure, on both is required for a definitive outcome. The RV procedure reduces errors due to imperfection in intercept selection. Accuracy for opposite-trial pairs exceeds that predicted from the accuracies of individual trials, and greatly exceeds that for same-trial pairs. The RV procedure should be particularly valuable when the first trial has >2 possible targets, because the second trial need only confirm or deny the outcome of the first, and it should be applicable to nonlinear as well as to linear algorithms.
---
paper_title: The Camera Mouse: visual tracking of body features to provide computer access for people with severe disabilities
paper_content:
The "Camera Mouse" system has been developed to provide computer access for people with severe disabilities. The system tracks the computer user's movements with a video camera and translates them into the movements of the mouse pointer on the screen. Body features such as the tip of the user's nose or finger can be tracked. The visual tracking algorithm is based on cropping an online template of the tracked feature from the current image frame and testing where this template correlates in the subsequent frame. The location of the highest correlation is interpreted as the new location of the feature in the subsequent frame. Various body features are examined for tracking robustness and user convenience. A group of 20 people without disabilities tested the Camera Mouse and quickly learned how to use it to spell out messages or play games. Twelve people with severe cerebral palsy or traumatic brain injury have tried the system, nine of whom have shown success. They interacted with their environment by spelling out messages and exploring the Internet.
---
paper_title: Psychology of human-computer interaction
paper_content:
Contents: Preface. An Applied Information-Processing Psychology. Part I: Science Base. The Human Information-Processor. Part II: Text-Editing. System and User Variability. An Exercise in Task Analysis. The GOMS Model of Manuscript Editing. Extensions of the GOMS Analysis. Models of Devices for Text Selection. Part III: Engineering Models. The Keystroke-Level Model. The Unit-Task Level of Analysis. Part IV: Extensions and Generalizations. An Exploration into Circuit Design. Cognitive Skill. Applying Psychology to Design Reprise.
---
paper_title: THE INFORMATION CAPACITY OF THE HUMAN MOTOR SYSTEM IN CONTROLLING THE AMPLITUDE OF MOVEMENT 1
paper_content:
Information theory has recently been employed to specify more precisely than has hitherto been possible man's capacity in certain sensory, perceptual, and perceptual-motor functions (5, 10, 13, 15, 17, 18). The experiments reported in the present paper extend the theory to the human motor system. The applicability of only the basic concepts, amount of information, noise, channel capacity, and rate of information transmission, will be examined at this time. General familiarity with these concepts as formulated by recent writers (4, 11, 20, 22) is assumed. Strictly speaking, we cannot study man's motor system at the behavioral level in isolation from its associated sensory mechanisms. We can only analyze the behavior of the entire receptor-neural-effector system. However, by asking S to make rapid and uniform responses that have been highly overlearned, and by holding all relevant stimulus conditions constant with the exception of those resulting from S's own movements, we can create an experimental situation in which it is reasonable to assume that performance is limited primarily by the capacity of the motor system. The motor system in the present case is defined as including the visual and proprioceptive feedback loops that permit S to monitor his own activity. The information capacity of the motor system is specified by its ability to produce consistently one class of movement from among several alternative movement classes. The greater the number of alternative classes, the greater is the information capacity of a particular type of response. Since measurable aspects of motor responses, such as their force, direction, and amplitude, are continuous variables, their information capacity is limited only by the amount of statistical variability, or noise, that is characteristic of repeated efforts to produce the same response. The information capacity of the motor ... [Editor's Note: This article is a reprint of an original work published in 1954 in the Journal of Experimental Psychology, 47, 381-391.]
---
paper_title: Optimizing Assisted Communication Devices for Children With Motor Impairments Using a Model of Information Rate and Channel Capacity
paper_content:
For children who depend on devices to communicate, the rate of communication is a primary determinant of success. For children with motor impairments, the rate of communication may be limited by inability to contact buttons or cells rapidly or accurately. It is, therefore, essential to know how to adjust the device interface in order to maximize each child's rate of communication. The optimal rate of communication is determined by the channel capacity, which is the maximum value of the information rate for all possible keyboard button or cell layouts for the communication device. We construct a mathematical model for the information rate based on the relationship between movement time and the number of buttons per screen, the size of the buttons, and the length of a sequence of buttons that must be pressed to communicate each word in the vocabulary. We measure the parameters of the model using a custom-programmed touchscreen interface in 10 children with disorders of arm movement due to cerebral palsy who use a DynaVox communication device. We measure the same parameters in 20 healthy control subjects. We show that the model approximates the measured information rate and that the information rate is lower in children with motor impairments compared with control subjects. The theory predicts that for each child there is a combination of button size and number that maximizes the predicted information rate and thereby achieves communication at the optimal channel capacity. Programming communication devices with each child's predicted optimal parameters improved the communication rate in five of the ten children, compared with programming by professionals. Therefore, measurement of information rate may provide an assessment of the effect of motor disorders on success in assisted communication. Optimization of the information rate may be useful for programming assisted communication devices.
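As a rough illustration of the kind of trade-off the abstract describes, the hypothetical sketch below combines a Fitts-type movement-time term with the number of selections needed per word to predict an information rate for different button counts; the grid layout, regression constants and vocabulary size are all assumptions for illustration, not the paper's model.

```python
import math

def predicted_info_rate(n_buttons: int, vocab_size: int,
                        a: float, b: float, screen_width: float) -> float:
    """Rough information-rate model for a selection-based communication aid.

    Hypothetical assumptions (not the paper's exact model):
    - buttons fill a square grid on a screen of width `screen_width`,
      so button width shrinks as n_buttons grows;
    - time per selection follows a Fitts-type law MT = a + b*log2(2A/W),
      with movement amplitude A taken as half the screen width;
    - each word needs ceil(log_n(vocab_size)) selections and conveys
      log2(vocab_size) bits.
    """
    cols = math.ceil(math.sqrt(n_buttons))
    width = screen_width / cols                       # button width
    amplitude = screen_width / 2.0                    # typical travel distance
    mt = a + b * math.log2(2.0 * amplitude / width)   # time per selection (s)
    selections_per_word = math.ceil(math.log(vocab_size, n_buttons))
    time_per_word = selections_per_word * mt
    return math.log2(vocab_size) / time_per_word      # bits per second

# "Channel capacity" here is the best predicted rate over candidate layouts.
rates = {n: predicted_info_rate(n, vocab_size=500, a=0.3, b=0.6, screen_width=25.0)
         for n in (4, 9, 16, 25, 36, 64)}
best = max(rates, key=rates.get)
print({n: round(r, 2) for n, r in rates.items()}, "-> optimal button count:", best)
```

With these made-up parameters the predicted rate peaks at an intermediate button count, which is the qualitative behaviour the abstract exploits when tuning button size and number per child.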
---
paper_title: On the rate of gain of information.
paper_content:
Abstract The analytical methods of information theory are applied to the data obtained in certain choice-reaction-time experiments. Two types of experiment were performed: (a) a conventional choice-reaction experiment, with various numbers of alternatives up to ten, and with a negligible proportion of errors, and (b) a ten-choice experiment in which the subjects deliberately reduced their reaction time by allowing themselves various proportions of errors. The principal finding is that the rate of gain of information is, on the average, constant with respect to time, within the duration of one perceptual-motor act, and has a value of the order of five “bits” per second. The distribution of reaction times among the ten stimuli in the second experiment is shown to be related to the objective uncertainty as to which response will be given to each stimulus. The distribution of reaction times among the responses is also related to the same uncertainty. This is further evidence that information is intimately con...
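The reported figure of roughly five bits per second corresponds to the reciprocal of the slope in the Hick-Hyman relation between choice reaction time and the number of alternatives; the short sketch below illustrates that relation with made-up intercept and slope values.

```python
import math

def choice_reaction_time(n_alternatives: int, a: float, b: float) -> float:
    """Hick-Hyman relation: RT = a + b * log2(n).

    b is seconds per bit, so 1/b is the 'rate of gain of information'.
    The values of a and b below are illustrative, not taken from the paper.
    """
    return a + b * math.log2(n_alternatives)

a, b = 0.2, 0.2   # 0.2 s/bit gives 5 bits/s, the order of magnitude reported
for n in (2, 4, 8, 10):
    print(n, "alternatives ->", round(choice_reaction_time(n, a, b), 2), "s")
print("rate of gain of information =", 1 / b, "bits/s")
```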
---
paper_title: Modeling the speed of text entry with a word prediction interface
paper_content:
This study analyzes user performance of text entry tasks with word prediction by applying modeling techniques developed in the field of human-computer interaction. Fourteen subjects transcribed text with and without a word prediction feature for seven test sessions. Eight subjects were able-bodied and used mouthstick typing, while six subjects had high-level spinal cord injuries and used their usual method of keyboard access. Use of word prediction decreased text generation rate for the spinal cord injured subjects and only modestly enhanced it for the able-bodied subjects. This suggests that the cognitive cost of using word prediction had a major impact on the performance of these subjects. Performance was analyzed in more detail by deriving subjects' times for keypress and list search actions during word prediction use. All subjects had slower keypress times during word prediction use as compared to letters-only typing, and spinal cord injured subjects had much slower list search times than able-bodied subjects. These parameter values were used in a two-parameter model to simulate subjects' word entry times during word prediction use, with an average model error of 16%. These simulation results are an encouraging first step toward demonstrating the ability of analytical models to represent user performance with word prediction.
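A minimal sketch of a two-parameter model of this kind is given below; the keypress and list-search times are invented for illustration and are not the values estimated in the study.

```python
def predicted_word_time(n_keypresses: int, n_list_searches: int,
                        t_keypress: float, t_search: float) -> float:
    """Two-parameter style prediction of word entry time.

    Assumes each word costs a number of keypresses plus a number of
    prediction-list searches; t_keypress and t_search are per-user
    parameters estimated from logged data (values below are made up).
    """
    return n_keypresses * t_keypress + n_list_searches * t_search

# Compare letters-only typing (7 letters + space) with word prediction for a
# word the list offers after 3 letters plus one selection keypress.
letters_only = predicted_word_time(n_keypresses=8, n_list_searches=0,
                                   t_keypress=1.2, t_search=0.0)
with_prediction = predicted_word_time(n_keypresses=4, n_list_searches=3,
                                      t_keypress=1.5, t_search=2.0)
print(f"letters only: {letters_only:.1f} s, with prediction: {with_prediction:.1f} s")
```

With these illustrative costs the prediction condition comes out slower, which mirrors the paper's finding that search and keypress overheads can outweigh the keystroke savings.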
---
paper_title: Adaptive software for head-operated computer controls
paper_content:
Head-operated computer controls provide an alternative means of computer access for people with disabilities who are unable to use a standard mouse. However, a person's disability may limit his or her neck movements as well as upper extremity movements. Software was developed which automatically adjusts the interface sensitivity to the needs of a particular user. This adaptive software was evaluated in two stages. First, 16 novice head-control users with spinal-cord injury or multiple sclerosis used head controls with and without the adaptive software. The adaptive software was associated with increased speed in standardized icon selection exercises (p<0.05). A small increase in accuracy was also observed. In addition, five current head-control users evaluated the software in a real-world setting. One of these five subjects perceived an improvement in comparison to his current head-control system.
---
paper_title: Children with congenital spastic hemiplegia obey Fitts’ Law in a visually guided tapping task
paper_content:
Fitts' Law is commonly found to apply to motor tasks involving precise aiming movements. Children with cerebral palsy (CP) have severe difficulties in such tasks and it is unknown whether they obey Fitts' Law despite their motor difficulties. If Fitts' Law still does apply to these children, this would indicate that this law is extremely robust and that even performance of children with damaged central nervous systems can adhere to it. The integrity of motor control processes in spastic CP is usually tested in complex motor tasks, making it difficult to determine whether poor performance is due to a motor output deficit or to problems related to cognitive processes, since both affect movement precision. In the present study a simple task was designed to evaluate Fitts' Law. Tapping movements were evaluated in 22 children with congenital spastic hemiplegia (CSH) and 22 typically developing children. Targets (2.5 and 5 cm in width) were placed at distances of 10 and 20 cm from each other in order to provide Indices of Difficulty (ID) of 2-4 bits. Using this Fitts' aiming task, prolonged reaction and movement time (MT) were found in the affected hand under all conditions in children with CSH as compared to controls. As in the control group, MT in children with CSH was related to ID. The intercept 'a', corresponding to the time required to realize a tapping movement, was higher in the affected hand of the children in the CSH group. However, the slope b (which reflects the sensitivity of the motor system to a change in task difficulty) and the reciprocal of the slope (which represents the cognitive information-processing capacity, expressed in bits/s) were similar in both groups. In conclusion, children with CSH obey Fitts' Law despite very obvious limitations in fine motor control.
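For reference, the sketch below computes the index of difficulty for the four amplitude/width combinations used in this task (ID = log2(2A/W), giving the 2-4 bits mentioned above) and a predicted movement time; the regression constants a and b are illustrative, not fitted values from the study.

```python
import math

def index_of_difficulty(amplitude: float, width: float) -> float:
    """Fitts' (1954) index of difficulty in bits: ID = log2(2A/W)."""
    return math.log2(2.0 * amplitude / width)

def movement_time(amplitude: float, width: float, a: float, b: float) -> float:
    """Predicted movement time MT = a + b * ID (a in s, b in s/bit)."""
    return a + b * index_of_difficulty(amplitude, width)

# Illustrative constants; 1/b (5 bits/s here) plays the role of the
# "reciprocal of slope" information-processing rate mentioned above.
a, b = 0.1, 0.2
for A, W in [(10, 5), (10, 2.5), (20, 5), (20, 2.5)]:   # distances/widths in cm
    ID = index_of_difficulty(A, W)
    print(f"A={A} cm, W={W} cm -> ID={ID:.0f} bits, MT~{movement_time(A, W, a, b):.2f} s")
```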
---
paper_title: Accuracy measures for evaluating computer pointing devices
paper_content:
In view of the difficulties in evaluating computer pointing devices across different tasks within dynamic and complex systems, new performance measures are needed. This paper proposes seven new accuracy measures to elicit (sometimes subtle) differences among devices in precision pointing tasks. The measures are target re-entry, task axis crossing, movement direction change, orthogonal direction change, movement variability, movement error, and movement offset. Unlike movement time, error rate, and throughput, which are based on a single measurement per trial, the new measures capture aspects of movement behaviour during a trial. The theoretical basis and computational techniques for the measures are described, with examples given. An evaluation with four pointing devices was conducted to validate the measures. A causal relationship to pointing device efficiency (viz. throughput) was found, as was an ability to discriminate among devices in situations where differences did not otherwise appear. Implications for pointing device research are discussed.
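A simple sketch of how a few of these path-based measures can be computed from sampled cursor coordinates is given below; the definitions follow the spirit of the paper (signed distances from the task axis, re-entries into the target), and the trial data are made up.

```python
import numpy as np

def path_accuracy_measures(points, start, target_center, target_radius):
    """Path-based accuracy measures computed from one trial's cursor samples.

    points: (n, 2) array of cursor positions; the task axis is the straight
    line from `start` to `target_center`.
    """
    p = np.asarray(points, dtype=float)
    s = np.asarray(start, dtype=float)
    t = np.asarray(target_center, dtype=float)
    axis = (t - s) / np.linalg.norm(t - s)
    normal = np.array([-axis[1], axis[0]])
    d = (p - s) @ normal                        # signed distance from task axis
    movement_offset = d.mean()                  # systematic deviation to one side
    movement_error = np.abs(d).mean()           # average deviation regardless of side
    movement_variability = d.std(ddof=1)        # spread of the path about its own mean
    # Target re-entry: how many times the cursor enters the target after leaving it.
    inside = np.linalg.norm(p - t, axis=1) <= target_radius
    entries = int(np.sum(np.diff(inside.astype(int)) == 1))
    target_re_entry = max(0, entries - 1)
    return dict(offset=movement_offset, error=movement_error,
                variability=movement_variability, re_entries=target_re_entry)

# Made-up trial: start at the origin, target centred at (100, 0), radius 10.
trial = [(0, 0), (20, 4), (45, 7), (70, 3), (92, -1), (101, 0), (96, 12), (100, 1)]
print(path_accuracy_measures(trial, (0, 0), (100, 0), 10))
```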
---
paper_title: Evaluation of Head Orientation and Neck Muscle EMG Signals as Command Inputs to a Human–Computer Interface for Individuals With High Tetraplegia
paper_content:
We investigated the performance of three user interfaces for restoration of cursor control in individuals with tetraplegia: head orientation, electromyography (EMG) from face and neck muscles, and a standard computer mouse (for comparison). Subjects engaged in a 2-D, center-out, Fitts' Law style task and performance was evaluated using several measures. Overall, head orientation commanded motion resembled mouse commanded cursor motion (smooth, accurate movements to all targets), although with somewhat lower performance. EMG commanded movements exhibited a higher average speed, but other performance measures were lower, particularly for diagonal targets. Compared to head orientation, EMG as a cursor command source was less accurate, was more affected by target direction and was more prone to overshoot the target. In particular, EMG commands for diagonal targets were more sequential, moving first in one direction and then the other rather than moving simultaneous in the two directions. While the relative performance of each user interface differs, each has specific advantages depending on the application.
---
paper_title: Imaging of computer input ability for patient with tetraplegia
paper_content:
In rehabilitation medicine, it is necessary for medical staff members to recommend the suitable computer input device for each patient with a spinal cord injury. This paper describes a measurement and evaluation system to determine the computer input ability of patients with tetraplegia. The authors will define a measurement procedure, a set of objective parameters, and a graphical representation of the parameters. They measure the position locus of the mouse cursor when a patient operates a computer using a computer input device. Using the representation, the medical staff can evaluate the patient's computer input ability and understand the characteristics of the patient's computer operation
---
paper_title: Evaluation of a modified Fitts law brain–computer interface target acquisition task in able and motor disabled individuals
paper_content:
A brain–computer interface (BCI) is a communication system that takes recorded brain signals and translates them into real-time actions, in this case movement of a cursor on a computer screen. This work applied Fitts' law to the evaluation of performance on a target acquisition task during sensorimotor rhythm-based BCI training. Fitts' law, which has been used as a predictor of movement time in studies of human movement, was used here to determine the information transfer rate, which was based on target acquisition time and target difficulty. The information transfer rate was used to make comparisons between control modalities and subject groups on the same task. Data were analyzed from eight able-bodied and five motor disabled participants who wore an electrode cap that recorded and translated their electroencephalogram (EEG) signals into computer cursor movements. Direct comparisons were made between able-bodied and disabled subjects, and between EEG and joystick cursor control in able-bodied subjects. Fitts' law aptly described the relationship between movement time and index of difficulty for each task movement direction when evaluated separately and averaged together. This study showed that Fitts' law can be successfully applied to computer cursor movement controlled by neural signals.
---
paper_title: Neck range of motion and use of computer head controls
paper_content:
Computer head controls provide an alternative means of computer access for people with disabilities. However, a person's ability to use head controls may be reduced if his or her disability involves neck movement limitations. In this study, 15 subjects without disabilities and 10 subjects with disabilities received neck range of motion evaluations and performed computer exercises using head controls. Regression analysis was used to determine the relationship between neck range of motion and performance on computer exercises. Reduced neck range of motion was found to be correlated with reduced functional range for moving the cursor across the screen, and reduced accuracy and speed in icon selection. Fitts' Law-type models were fit to the data, indicating higher Fitts' law slopes for subjects with disabilities compared to subjects without disabilities. Results also indicate that vertical cursor movements are faster than horizontal or diagonal movements.
---
paper_title: Conception and Experimentation of a Communication Device with Adaptive Scanning
paper_content:
For some people with motor disabilities and speech disorders, the only way to communicate and to have some control over their environment is through the use of a controlled scanning system operated by a single switch. The main problem with these systems is that the communication process tends to be exceedingly slow, since the system must scan through the available choices one at a time until the desired message is reached. One way of raising the speed of message selection is to optimize the elementary scanning delay in real time so that it allows the user to make selections as quickly as possible without making too many errors. With this objective in mind, this article presents a method for optimizing the scanning delay, which is based on an analysis of the data recorded in “log files” while applying the EDiTH system [Digital Teleaction Environment for People with Disabilities]. This analysis makes it possible to develop a human-machine interaction model specific to the study, and then to establish an adaptive algorithm for the calculation of the scanning delay. The results obtained with imposed scenarios and then in ecological situations provide confirmation that our algorithms are effective in dynamically adapting the scan speed. The main advantage offered by the proposed procedure is that it works on timing information alone and thus does not require any knowledge of the scanning device itself. This allows it to work with any scanning device.
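The EDiTH adaptation algorithm itself is not reproduced here; the sketch below only illustrates the general idea of deriving the next scan period from recent reaction times and the observed error rate, with all constants chosen arbitrarily.

```python
def update_scan_delay(current_delay: float, reaction_times, error_rate: float,
                      margin: float = 1.3, min_delay: float = 0.4,
                      max_delay: float = 3.0) -> float:
    """Naive adaptive scan-period rule (illustrative only, not the EDiTH algorithm).

    The next delay is set a safety margin above the user's recent median
    reaction time, nudged upward if too many selection errors occur, and
    smoothed against the current delay to avoid abrupt changes.
    """
    rts = sorted(reaction_times)
    median_rt = rts[len(rts) // 2]
    new_delay = margin * median_rt
    if error_rate > 0.2:                    # too many missed/incorrect selections
        new_delay *= 1.2
    new_delay = 0.5 * current_delay + 0.5 * new_delay
    return min(max(new_delay, min_delay), max_delay)

# Example: current period 1.5 s, recent reaction times in seconds, 10% errors.
print(update_scan_delay(1.5, [0.7, 0.9, 0.8, 1.1, 0.85], error_rate=0.1))
```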
---
paper_title: Investigating the applicability of user models for motion-impaired users
paper_content:
This paper considers the differences between users with motion-impairments and able-bodied users when they interact with computers and the implications for user models. Most interface design and usability assessment practices are based on explicit or implicit models of user behaviour. This paper studies the applicability of an existing interface design user model to motion-impaired users for the relatively straightforward task of button activation. A discussion of the empirical results is provided and the paper concludes that there are significant differences between the behaviour of motion-impaired users and the accepted modelling theory.
---
paper_title: Text composition by the physically disabled: A rate prediction model for scanning input
paper_content:
Abstract Keyboard ‘bypass’ techniques allow physically disabled people, otherwise unable to use conventional means of text composition, to do so. Ability to compose text is extremely important to the disabled, offering potential for non-speech communication, computer access, creative writing, etc. Consequently, these techniques have received a good deal of attention and many diverse systems have evolved. They all suffer, however, from the drawback of inherently slow input, quite apart from any disability on the part of the user. For this reason, text composition rate (or communication rate) is the major figure of merit. Since many diverse systems and approaches exist, quantitative methods of comparison are required to guide prescription and development of such aids. Only recently have attempts to produce models which predict communication rate been made. This paper extends the earlier model of Rosen and Goodenough-Trepagnier to encompass scanning-input systems. Scanning input is of considerable interest since it can be used by very severely disabled people. The model developed is applied to the comparison of two very different systems: row-column scanning and the ‘scanning Microwriter’. According to the model, row-scanning is very much faster than the scanning Microwriter when a letter-frequency arrangement of the character selections is used. The relation of the model to classical information theory, treating the disabled user as an information source, is also explored.
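A much-simplified version of such a rate prediction is sketched below for row-column scanning with a letter-frequency layout: it computes the expected number of scan steps per character and converts it to characters per minute for an assumed scan period, ignoring switch-activation time and selection errors.

```python
# Expected text-composition rate for row-column scanning (simplified model).
# Letters are placed in a grid in descending frequency order so that the most
# frequent characters need the fewest scan steps. The frequencies below are
# rough English letter frequencies (space included); the scan period and the
# grid width are illustrative choices.
freq = {' ': .18, 'e': .10, 't': .075, 'a': .065, 'o': .062, 'i': .057,
        'n': .056, 's': .052, 'h': .05, 'r': .048, 'd': .035, 'l': .033,
        'u': .023, 'c': .022, 'm': .02, 'w': .019, 'f': .018, 'g': .016,
        'y': .016, 'p': .015, 'b': .012, 'v': .008, 'k': .006, 'j': .002,
        'x': .002, 'q': .001, 'z': .001}
total = sum(freq.values())
probs = [f / total for _, f in sorted(freq.items(), key=lambda kv: -kv[1])]

cols = 6               # grid width
scan_period = 0.8      # seconds per scan step
expected_steps = 0.0
for rank, p in enumerate(probs):
    row, col = divmod(rank, cols)
    # The row scan reaches the target row in (row+1) steps, then the column
    # scan reaches the target cell in (col+1) further steps.
    expected_steps += p * ((row + 1) + (col + 1))

chars_per_min = 60.0 / (expected_steps * scan_period)
print(f"expected steps/char = {expected_steps:.2f}, "
      f"predicted rate ~ {chars_per_min:.1f} chars/min")
```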
---
paper_title: Performance Models for Automatic Evaluation of Virtual Scanning Keyboards
paper_content:
Virtual scanning keyboards are commonly used augmentative communication aids by persons with severe speech and motion impairments. Designers of virtual scanning keyboards face problems in evaluating alternate designs and hence in choosing the better design among alternatives. Automatic evaluation of designs will be helpful to designers in making the appropriate design choice. In this paper, we present performance models for virtual scanning keyboards that can be used for automatic evaluation. The proposed models address the limitations present in the reported work on similar models. We compared the model predictions with results from user trials and established the validity of the proposed models.
---
paper_title: Customised text entry devices for motor-impaired users.
paper_content:
The standard QWERTY keyboard is the principal text entry device for word processing and computer-based communications. For many motor-impaired individuals, and in particular those without intelligible speech, the low text entry rates they can typically achieve are a major problem. For some, the QWERTY design is completely inappropriate. Alternative designs that can appreciably increase these rates would greatly enhance their ability to communicate. This paper considers and compares several approaches to the design of text entry devices for motor-impaired users. A general method for customising (i.e., optimising) these designs is employed, and consideration is given to designs requiring significantly fewer input switches than the 26 or more keys required by QWERTY. Use is made of language statistics in the design procedure, and the increased availability of inexpensive, powerful computers is directly exploited.
---
paper_title: EEG-based neuroprosthesis control: A step towards clinical practice
paper_content:
Abstract This case study demonstrates the coupling of an electroencephalogram (EEG)-based Brain–Computer Interface (BCI) with an implanted neuroprosthesis (Freehand ® system). Because the patient was available for only 3 days, the goal was to demonstrate the possibility of a patient gaining control over the motor imagery-based Graz BCI system within a very short training period. By applying himself to an organized and coordinated training procedure, the patient was able to generate distinctive EEG-patterns by the imagination of movements of his paralyzed left hand. These patterns consisted of power decreases in specific frequency bands that could be classified by the BCI. The output signal of the BCI emulated the shoulder joystick usually used, and by consecutive imaginations the patient was able to switch between different grasp phases of the lateral grasp that the Freehand ® system provided. By performing a part of the grasp-release test, the patient was able to move a simple object from one place to another. The results presented in this work give evidence that Brain–Computer Interfaces are an option for the control of neuroprostheses in patients with high spinal cord lesions. The fact that the user learned to control the BCI in a comparatively short time indicates that this method may also be an alternative approach for clinical purposes.
---
paper_title: EMG-Based Speech Recognition Using Hidden Markov Models With Global Control Variables
paper_content:
It is well known that a strong relationship exists between human voices and the movement of articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme which uses solely surface electromyogram (EMG) signals. The sequence of EMG signals for each word is modelled by a hidden Markov model (HMM) framework. The main objective of the work involves building a model for state observation density when multichannel observation sequences are given. The proposed model reflects the dependencies between each of the EMG signals, which are described by introducing a global control variable. We also develop an efficient model training method, based on a maximum likelihood criterion. In a preliminary study, 60 isolated words were used as recognition variables. EMG signals were acquired from three articulatory facial muscles. The findings indicate that such a system may have the capacity to recognize speech signals with an accuracy of up to 87.07%, which is superior to the independent probabilistic model.
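The global-control-variable model is specific to the paper; as a plain baseline, the sketch below shows how per-word Gaussian HMMs over multichannel EMG feature sequences can be trained and used for isolated-word classification with the hmmlearn package, on toy data.

```python
# Sketch of isolated-word classification with one Gaussian HMM per word,
# using the hmmlearn package. Each training example is a (frames x features)
# matrix of multichannel EMG features (e.g., per-channel RMS or cepstra).
# This is a plain HMM baseline, not the paper's global-control-variable model.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_word_models(examples_by_word, n_states=4):
    models = {}
    for word, examples in examples_by_word.items():
        X = np.vstack(examples)                  # concatenate all sequences
        lengths = [len(e) for e in examples]     # remember sequence boundaries
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[word] = m
    return models

def classify(models, sequence):
    # Pick the word whose model assigns the highest log-likelihood.
    return max(models, key=lambda w: models[w].score(sequence))

# Toy data: two "words" with different feature statistics (3 channels).
rng = np.random.default_rng(0)
data = {"yes": [rng.normal(0.0, 1.0, (30, 3)) for _ in range(5)],
        "no":  [rng.normal(2.0, 1.0, (30, 3)) for _ in range(5)]}
models = train_word_models(data)
print(classify(models, rng.normal(2.0, 1.0, (25, 3))))   # expected: "no"
```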
---
paper_title: Real-time implementation of electromyogram pattern recognition as a control command of man-machine interface.
paper_content:
The purpose of this study was to develop a real-time electromyogram (EMG) discrimination system to provide control commands for man-machine interface applications. A host computer with a plug-in data acquisition and processing board containing a TMS320 C31 floating-point digital signal processor was used to attain real-time EMG classification. Two-channel EMG signals were collected by two pairs of surface electrodes located bilaterally between the sternocleidomastoid and the upper trapezius. Five motions of the neck and shoulders were discriminated for each subject. The zero-crossing rate was employed to detect the onset of muscle contraction. The cepstral coefficients, derived from autoregressive coefficients and estimated by a recursive least squares algorithm, were used as the recognition features. These features were then discriminated using a modified maximum likelihood distance classifier. The total response time of this EMG discrimination system was about 0.17 s. Four able-bodied and two C5/6 quadriplegic subjects took part in the experiment, and achieved a 95% mean recognition rate in discrimination between the five specific motions. The response time and the reliability of recognition indicate that this system has the potential to discriminate body motions for man-machine interface applications.
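The sketch below illustrates the feature pipeline in simplified, offline form: autoregressive coefficients estimated by Levinson-Durbin (rather than the recursive least squares estimator used for real-time operation in the paper), converted to cepstral coefficients by the standard LPC-to-cepstrum recursion, and classified with a plain nearest-mean distance rule instead of the modified maximum-likelihood classifier.

```python
import numpy as np

def ar_coefficients(x, order):
    """AR prediction coefficients via autocorrelation + Levinson-Durbin."""
    x = np.asarray(x, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        k = (r[i + 1] - np.dot(a[:i], r[i:0:-1])) / err
        a[:i] = a[:i] - k * a[:i][::-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a                      # x[n] ~ sum_i a[i] * x[n-1-i]

def lpc_to_cepstrum(a, n_ceps=None):
    """Standard LPC-to-cepstrum recursion: c_n = a_n + sum_{k<n} (k/n) c_k a_{n-k}."""
    p = len(a)
    n_ceps = n_ceps or p
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(max(1, n - p), n):
            acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

def nearest_mean_classify(feature, class_means):
    """Minimum-distance classification (Euclidean stand-in for the paper's rule)."""
    return min(class_means, key=lambda m: np.linalg.norm(feature - class_means[m]))

# Toy example: classify one 200-sample EMG window against two made-up class means.
rng = np.random.default_rng(1)
window = rng.normal(size=200)
feature = lpc_to_cepstrum(ar_coefficients(window, order=4))
class_means = {"rest": np.zeros(4), "shoulder_up": 0.3 * np.ones(4)}
print(nearest_mean_classify(feature, class_means))
```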
---
paper_title: A P300 event-related potential brain–computer interface (BCI): The effects of matrix size and inter stimulus interval on performance
paper_content:
We describe a study designed to assess properties of a P300 brain–computer interface (BCI). The BCI presents the user with a matrix containing letters and numbers. The user attends to a character to be communicated and the rows and columns of the matrix briefly intensify. Each time the attended character is intensified it serves as a rare event in an oddball sequence and it elicits a P300 response. The BCI works by detecting which character elicited a P300 response. We manipulated the size of the character matrix (either 3 × 3 or 6 × 6) and the duration of the inter stimulus interval (ISI) between intensifications (either 175 or 350 ms). Online accuracy was highest for the 3 × 3 matrix 175-ms ISI condition, while bit rate was highest for the 6 × 6 matrix 175-ms ISI condition. Average accuracy in the best condition for each subject was 88%. P300 amplitude was significantly greater for the attended stimulus and for the 6 × 6 matrix. This work demonstrates that matrix size and ISI are important variables to consider when optimizing a BCI system for individual users and that a P300-BCI can be used for effective communication.
---
paper_title: Enhanced hybrid electromyogram/Eye Gaze Tracking cursor control system for hands-free computer interaction.
paper_content:
This paper outlines the development and initial testing of a new hybrid computer cursor control system based on Eye Gaze Tracking (EGT) and electromyogram (EMG) processing for hands-free control of the computer cursor. The ultimate goal of the system is to provide an efficient computer interaction mechanism for individuals with severe motor disabilities (or specialized operators whose hands are committed to other tasks, such as surgeons, pilots, etc.) The paper emphasizes the enhancements that have been made on different areas of the architecture, with respect to a previous prototype developed by our group, and demonstrates the performance improvement verified for some of the enhancements.
---
paper_title: Human-machine interface for wheelchair control with EMG and its evaluation
paper_content:
The objective of this paper is to develop a powered wheelchair controller based on EMG for users with high-level spinal cord injury. EMG can be measured very naturally while the user indicates a certain direction, and force information, which is used to set the wheelchair speed, is easily extracted from the EMG. Furthermore, emergency situations can be detected relatively easily from the EMG. We classified pre-defined motions such as rest, forward movement, left movement, and right movement using fuzzy min-max neural networks (FMMNN). The classification results and the evaluation with real users show the feasibility of EMG as an input interface for powered wheelchairs.
---
paper_title: Development of a hybrid hands-off human computer interface based on electromyogram signals and eye-gaze tracking
paper_content:
A hybrid hands-off human-computer interface that uses infrared video eye gaze tracking (EGT) and electromyogram (EMG) signals is introduced. This system combines the advantages of both sub-systems, providing quick cursor displacement in long excursions and steady, accurate movement in small position adjustments. The hybrid system also provides a reliable clicking mechanism. The evaluation protocol used to test the system is described, and the results for the hybrid, an EMG-only interface, and the standard hand-held mouse are described and compared. These results show that the hybrid system is, on average, faster than the EMG-only system by a factor of 2 or more.
---
paper_title: Session independent non-audible speech recognition using surface electromyography
paper_content:
In this paper we introduce a speech recognition system based on myoelectric signals. The system handles audible and non-audible speech. Major challenges in surface electromyography-based speech recognition ensue from repositioning electrodes between recording sessions, environmental temperature changes, and skin tissue properties of the speaker. In order to reduce the impact of these factors, we investigate a variety of signal normalization and model adaptation methods. An average word accuracy of 97.3% is achieved using seven EMG channels and the same electrode positions. The performance drops to 76.2% after repositioning the electrodes if no normalization or adaptation is performed. By applying our adaptation methods we manage to restore the recognition rates to 87.1%. Furthermore, we compare audibly to non-audibly spoken speech. The results suggest that large differences exist between the corresponding muscle movements. Still, our recognition system recognizes both manners of speaking accurately when trained on pooled data.
---
paper_title: Myoelectric Signals for Multimodal Speech Recognition
paper_content:
A Coupled Hidden Markov Model (CHMM) is proposed in this paper to perform multimodal speech recognition using myoelectric signals (MES) from the muscles of vocal articulation. MES signals are immune to noise, and words that are acoustically similar manifest distinctly in MES. Hence, they would effectively complement the acoustic data in a multimodal speech recognition system. Research in Audio-Visual Speech Recognition has shown that CHMMs model the asynchrony between different data streams effectively. Hence, we propose the CHMM for multimodal speech recognition using audio and MES as the two data streams. Our experiments indicate that the multimodal CHMM system significantly outperforms the audio-only system at different SNRs. We have also provided a comparison between different features for MES and have found that wavelet features provide the best results.
---
paper_title: A practical EMG-based human-computer interface for users with motor disabilities
paper_content:
In line with the mission of the Assistive Technology Act of 1998 (ATA), this study proposes an integrated assistive real-time system which "affirms that technology is a valuable tool that can be used to improve the lives of people with disabilities." An assistive technology device is defined by the ATA as "any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve the functional capabilities of individuals with disabilities." The purpose of this study is to design and develop an alternate input device that can be used even by individuals with severe motor disabilities. This real-time system design utilizes electromyographic (EMG) biosignals from cranial muscles and electroencephalographic (EEG) biosignals from the cerebrum's occipital lobe, which are transformed into controls for two-dimensional (2-D) cursor movement, the left-click (Enter) command, and an ON/OFF switch for the cursor-control functions. This HCI system classifies biosignals into "mouse" functions by applying amplitude thresholds and performing power spectral density (PSD) estimations on discrete windows of data. Spectral power summations are aggregated over several frequency bands between 8 and 500 Hz and then compared to produce the correct classification. The result is an affordable DSP-based system that, when combined with an on-screen keyboard, enables the user to fully operate a computer without using any extremities.
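In the spirit of the described PSD-based classification, the sketch below sums Welch power-spectral-density estimates over a few frequency bands of a windowed biosignal and maps the dominant band to a cursor action; the band edges, rest threshold and action mapping are invented for illustration, not taken from the paper.

```python
# Window-wise band-power classification: estimate a PSD for each signal
# window, sum the power over a few bands, and map the dominant band to a
# cursor action. All constants below are assumptions for illustration.
import numpy as np
from scipy.signal import welch

FS = 1000                                     # sampling rate in Hz (assumed)
BANDS = {"low": (8, 30), "mid": (30, 100), "high": (100, 450)}
ACTIONS = {"low": "move_up", "mid": "move_down", "high": "left_click"}

def band_powers(window, fs=FS):
    f, psd = welch(window, fs=fs, nperseg=256)
    return {name: psd[(f >= lo) & (f < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def classify_window(window, rest_threshold=10.0):
    powers = band_powers(window)
    if sum(powers.values()) < rest_threshold:  # too little activity: no action
        return "rest"
    return ACTIONS[max(powers, key=powers.get)]

# Toy example: 0.5 s of synthetic signal (in microvolts) dominated by 50 Hz.
t = np.arange(0, 0.5, 1.0 / FS)
rng = np.random.default_rng(2)
window = 50.0 * np.sin(2 * np.pi * 50 * t) + rng.normal(scale=1.0, size=t.size)
print(classify_window(window))                # expected: "move_down"
```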
---
paper_title: Multimodal neuroelectric interface development
paper_content:
We are developing electromyographic and electroencephalographic methods, which draw control signals for human-computer interfaces from the human nervous system. We have made progress in four areas: 1) real-time pattern recognition algorithms for decoding sequences of forearm muscle activity associated with control gestures; 2) signal-processing strategies for computer interfaces using electroencephalogram (EEG) signals; 3) a flexible computation framework for neuroelectric interface research; and 4) noncontact sensors, which measure electromyogram or EEG signals without resistive contact to the body.
---
paper_title: Drifting and Blinking Compensation in Electro-oculography (EOG) Eye-gaze Interface
paper_content:
This paper describes an eye-gaze interface using a biological signal, the electro-oculogram (EOG). This interface enables a user to move a computer cursor on a graphical user interface using eye gaze movement alone. It will be useful as a communication aid for individuals with mobility handicaps. Although EOG is easily recordable, drifting and blinking problems must be solved to produce a reliable eye-gaze interface. Here we introduce a calibration method and a feedback control to overcome these problems.
---
paper_title: Nessi: An EEG-Controlled Web Browser for Severely Paralyzed Patients
paper_content:
We have previously demonstrated that an EEG-controlled web browser based on self-regulation of slow cortical potentials (SCPs) enables severely paralyzed patients to browse the internet independently of any voluntary muscle control. However, this system had several shortcomings, among them that patients could only browse within a limited number of web pages and had to select links from an alphabetical list, causing problems if the link names were identical or if they were unknown to the user (as in graphical links). Here we describe a new EEG-controlled web browser, called Nessi, which overcomes these shortcomings. In Nessi, the open source browser, Mozilla, was extended by graphical in-place markers, whereby different brain responses correspond to different frame colors placed around selectable items, enabling the user to select any link on a web page. Besides links, other interactive elements are accessible to the user, such as e-mail and virtual keyboards, opening up a wide range of hypertext-based applications.
---
paper_title: Application of facial electromyography in computer mouse access for people with disabilities.
paper_content:
Purpose: This study develops a new facial EMG human-computer interface for people with disabilities for controlling the movement of the cursor on a computer screen. Method: We access the computer cursor according to different facial muscle activity patterns. In order to detect the muscle activity threshold exactly, this study adopts continuous wavelet transformation to estimate the single motor unit action potentials dynamically. Result: The experiment indicates that the accuracy of using the facial mouse is greater than 80%, and this result indicates the feasibility of the proposed system. Moreover, subjects can improve their manipulation performance through repeated training. Conclusion: Compared with previous works, the proposed system achieves complete cursor function and provides an inexpensive solution. Although there are still some drawbacks in the facial EMG-based human-computer interface, the facial mouse provides an alternative to other expensive and complicated assistive technologies.
---
paper_title: A BCI-based environmental controller for the motion-disabled
paper_content:
With the development of brain-computer interface (BCI) technology, researchers are now attempting to put current BCI techniques into practical application. This paper presents an environmental controller using a BCI technique based on steady-state visual evoked potential. The system is composed of a stimulator, a digital signal processor, and a trainable infrared remote-controller. The attractive features of this system include noninvasive signal recording, little training requirement, and a high information transfer rate. Our test results have shown that this system can distinguish at least 48 targets and provide a transfer rate up to 68 b/min. The system has been applied to the control of an electric apparatus successfully.
---
paper_title: New classification algorithm for electromyography-based computer cursor control system
paper_content:
At present, a three-input electromyography (EMG) system has been created to provide real-time, hands-free cursor control. The system uses the real-time spectral analysis of three EMG signals to produce the following five cursor actions: i) LEFT, ii) RIGHT, iii) UP, iv) DOWN, v) LEFT-CLICK. The three EMG signals are obtained from two surface electrodes placed on the left and right temples of the head and one electrode placed in the forehead region. The present system for translating EMG activity into cursor actions does not always discriminate between up and down EMG activity efficiently. To resolve this problem it was proposed that the three-electrode system be converted into a four-electrode system, using two electrodes in the forehead of the user, instead of one. This paper compares the effectiveness of the four-electrode system to that of the three-electrode system in classifying EMG activity into cursor actions through the use of Matlab simulations. It will be shown that the new four-electrode system produces significant improvements in classification performance.
---
paper_title: Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials
paper_content:
Abstract This paper describes the development and testing of a system whereby one can communicate through a computer by using the P300 component of the event-related brain potential (ERP). Such a system may be used as a communication aid by individuals who cannot use any motor system for communication (e.g., ‘locked-in’ patients). The 26 letters of the alphabet, together with several other symbols and commands, are displayed on a computer screen which serves as the keyboard or prosthetic device. The subject focuses attention successively on the characters he wishes to communicate. The computer detects the chosen character on-line and in real time. This detection is achieved by repeatedly flashing rows and columns of the matrix. When the elements containing the chosen character are flashed, a P300 is elicited, and it is this P300 that is detected by the computer. We report an analysis of the operating characteristics of the system when used with normal volunteers, who took part in 2 experimental sessions. In the first session (the pilot study/training session) subjects attempted to spell a word and convey it to a voice synthesizer for production. In the second session (the analysis of the operating characteristics of the system) subjects were required simply to attend to individual letters of a word for a specific number of trials while data were recorded for off-line analysis. The analyses suggest that this communication channel can be operated accurately at the rate of 0.20 bits/sec. In other words, under the conditions we used, subjects can communicate 12.0 bits, or 2.3 characters, per min.
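The core row/column detection idea can be sketched as follows: average the post-stimulus epochs associated with each row and each column and pick the pair with the largest late positive deflection. Real P300 spellers use trained classifiers rather than a raw amplitude score; the sampling rate, analysis window and toy data below are assumptions for illustration.

```python
# Minimal sketch of row/column selection in a P300 speller: average the
# post-stimulus epochs belonging to each row and column and pick the
# row/column whose average shows the largest late positive deflection.
import numpy as np

FS = 250                      # sampling rate in Hz (assumed)
P300_WINDOW = (0.25, 0.45)    # seconds after the flash where a P300 is expected

def p300_score(epochs):
    """epochs: (n_flashes, n_samples) array, one EEG epoch per flash."""
    avg = epochs.mean(axis=0)
    lo, hi = (int(t * FS) for t in P300_WINDOW)
    return avg[lo:hi].mean()

def select_character(epochs_by_row, epochs_by_col, matrix):
    row = max(epochs_by_row, key=lambda r: p300_score(epochs_by_row[r]))
    col = max(epochs_by_col, key=lambda c: p300_score(epochs_by_col[c]))
    return matrix[row][col]

# Toy demo with a 2x2 matrix: the attended cell's row/column epochs carry a bump.
rng = np.random.default_rng(3)
n_samples = int(0.6 * FS)
def make_epochs(attended):
    e = rng.normal(0, 1, (15, n_samples))
    if attended:                                  # add a P300-like deflection
        lo, hi = int(0.25 * FS), int(0.45 * FS)
        e[:, lo:hi] += 2.0
    return e

rows = {0: make_epochs(True), 1: make_epochs(False)}
cols = {0: make_epochs(False), 1: make_epochs(True)}
print(select_character(rows, cols, [["A", "B"], ["C", "D"]]))   # expected: "B"
```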
---
paper_title: Multi-stream HMM for EMG-based speech recognition
paper_content:
A technique for improving the accuracy of EMG-based speech recognition by applying existing speech recognition technologies is proposed. The authors have proposed an EMG-based speech recognition system that requires only mouth movements; voice need not be generated. A multi-stream HMM (hidden Markov model) and a feature extraction technique are applied to EMG-based speech recognition. Three-channel facial EMG signals are collected from ten subjects uttering 10 isolated Japanese digits. One channel corresponds to one stream. By examining various features, we found that the delta component of the static parameter leads to higher accuracy. Compared to equal stream weighting, the individual optimization of stream weights increased recognition accuracy by 4.0%, which corresponds to a 12.8% reduction in error rate. This result shows that the multi-stream HMM is effective for the classification of EMG.
---
paper_title: Development of communication supporting device controlled by eye movements and voluntary eye blink
paper_content:
A communication support interface controlled by eye movements and voluntary eye blink has been developed for disabled individuals with motor paralysis who cannot speak. Horizontal and vertical electro-oculograms were measured using two surface electrodes attached above and beside the dominant eye, referenced to an earlobe electrode, and amplified with AC coupling in order to reduce unnecessary drift. Four directional cursor movements (up, down, right, and left) and one selection operation were realized by logically combining the two detected channel signals based on threshold settings specific to the individual. Letter input experiments were conducted on a virtual screen keyboard. The method's usability was enhanced by minimizing the number of electrodes and applying training to both the subject and the device. As a result, an accuracy of 90.1 ± 3.6% and a processing speed of 7.7 ± 1.9 letters/min were obtained using our method.
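To make the threshold logic above concrete, here is a minimal sketch (not the authors' implementation): two AC-coupled EOG channels are compared against per-user thresholds and mapped to four directions plus a blink-driven select. The threshold values, signal units, and the blink flag being decided upstream are all illustrative assumptions.

```python
def classify_eog(h: float, v: float, blink: bool,
                 h_thresh: float = 150.0, v_thresh: float = 120.0) -> str:
    """Map one pair of horizontal/vertical EOG samples (arbitrary units) to a command.

    Thresholds are per-user calibration values; the ones here are placeholders.
    """
    if blink:                                    # voluntary blink detected upstream
        return "SELECT"
    if abs(h) >= h_thresh and abs(h) >= abs(v):  # dominant horizontal deflection
        return "RIGHT" if h > 0 else "LEFT"
    if abs(v) >= v_thresh:                       # dominant vertical deflection
        return "UP" if v > 0 else "DOWN"
    return "REST"

# toy usage on a few (h, v, blink) triples
for sample in [(200.0, 10.0, False), (-180.0, 5.0, False), (20.0, 160.0, False), (0.0, 0.0, True)]:
    print(sample, "->", classify_eog(*sample))
```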
---
paper_title: EMG based voice recognition
paper_content:
Besides its clinical applications, EMG has been shown by various researchers to be useful in areas such as human-computer interfaces and the development of intelligent prosthetic devices. This paper presents results from a preliminary study in which an artificial neural network (ANN) was used to recognise and classify human speech based on EMG. The EMG signals were acquired from three articulatory facial muscles. Three subjects participated in the experiments, and five English vowels were used as the recognition variables. The root mean square (RMS) values of the EMG signals were estimated and used as a set of features to feed the ANN. The findings indicate that such a system may have the capacity to recognise and classify speech signals with an accuracy of up to 88%.
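Since the feature set here is the windowed RMS of each EMG channel, a small sketch of that computation may help; the window and hop sizes below are arbitrary choices, not the study's.

```python
import numpy as np

def rms_features(emg: np.ndarray, win: int = 256, hop: int = 128) -> np.ndarray:
    """Windowed RMS per channel.

    emg: array of shape (n_samples, n_channels), e.g. 3 facial-muscle channels.
    Returns an array of shape (n_windows, n_channels) suitable as ANN inputs.
    """
    feats = []
    for start in range(0, emg.shape[0] - win + 1, hop):
        seg = emg[start:start + win]
        feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))
    return np.array(feats)

# toy usage: 2 s of 3-channel "EMG" sampled at 1 kHz (white noise stand-in)
x = np.random.randn(2000, 3)
print(rms_features(x).shape)   # -> (n_windows, 3)
```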
---
paper_title: Design of the human/computer interface for human with disability using myoelectric signal control
paper_content:
The purpose of this study is to develop a human-computer interface (HCI) application based on a real-time EMG discrimination system. A personal computer with a plug-in data acquisition and processing board containing a floating-point digital signal processor is used to attain real-time EMG classification. The integrated EMG is employed to detect the onset of muscle contraction. The cepstral coefficients, derived from AR coefficients and estimated by a recursive least-squares algorithm, are used as the recognition features. These features are then discriminated using a modified maximum-likelihood distance classifier. The identified commands control the mouse cursor; the system is fully compatible with a Microsoft serial mouse. It can move the cursor in four directions and double-click icons in GUI operating systems.
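The onset-detection step mentioned above (integrated EMG compared against a threshold) can be sketched as follows; the smoothing window, baseline interval, and threshold multiple are illustrative assumptions, not the paper's parameters.

```python
from typing import Optional
import numpy as np

def emg_onset(emg: np.ndarray, fs: float, window_s: float = 0.05,
              threshold: float = 3.0) -> Optional[int]:
    """Return the first sample index where the integrated (rectified, smoothed) EMG
    exceeds a threshold expressed in multiples of the baseline level, or None.

    Baseline is estimated from the first 0.5 s, assumed to be rest.
    """
    win = max(1, int(window_s * fs))
    rectified = np.abs(emg - np.mean(emg))
    iemg = np.convolve(rectified, np.ones(win) / win, mode="same")
    baseline = np.mean(iemg[: int(0.5 * fs)]) + 1e-12
    above = np.nonzero(iemg > threshold * baseline)[0]
    return int(above[0]) if above.size else None

# toy usage: simulated contraction starting at 1.2 s
fs = 1000.0
signal = 0.05 * np.random.randn(2000)
signal[1200:] += 0.5 * np.random.randn(800)
print(emg_onset(signal, fs))
```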
---
paper_title: A low-cost interface for control of computer functions by means of eye movements
paper_content:
Human-computer interactions (HCI) have become an important area of research and development in computer science and psychology. Appropriate use of computers could be of primary importance for communication and education of those subjects who cannot move, speak, see or hear properly. The aim of our study was to develop a reliable, low-cost and easy-to-use HCI based on electrooculography signal analysis, to allow physically impaired patients to control a computer as assisted communication. Twenty healthy subjects served as volunteers: eye movements were captured by means of four electrodes and a two-channel amplifier. The output signal was then transmitted to an analog-to-digital (AD) converter, which digitized the amplifier signal at a rate of 500 Hz before sending it to a laptop. We designed and coded specific software which analyzed the input signal to give an interpretation of eye movements. By means of a single ocular movement (up, down, left and right) the subjects were then able to move a cursor over a screen keyboard, passing from one letter to another; a double eye blink was then necessary to select and write the active letter. After a brief training session, all the subjects were able to confidently control the cursor and write words using only ocular movements and blinking. For each subject we presented three series of randomized words: the mean time required to enter a single character was about 8.5 s, while input errors were very limited (less than 1 per 250 characters). Our results confirm those obtained in previous studies: an eye-movement interface can be used to properly control computer functions and to assist communication of movement-impaired patients.
---
paper_title: Neural Control of the Computer Cursor Based on Spectral Analysis of the Electromyogram
paper_content:
A classification algorithm is developed to translate facial movements into five cursor actions: i) left, ii) right, iii) up, iv) down, and v) left-click. The algorithm utilizes the unique spectral characteristics exhibited by electromyogram (EMG) signals obtained from different muscles in the face to assist in the classification process. A previous three-electrode, EMG-based system was utilized to perform a similar translation of facial movements into cursor actions. This system also made use of spectral analysis to classify muscle activity. It was found that this system does not always discriminate between the EMG activity assigned to up and down cursor actions efficiently. To remedy this matter, a fourth electrode was added and a new classification algorithm was devised. This paper details the classification algorithm utilized with the four-electrode system. It also compares the effectiveness of the four-electrode system to that of the three-electrode system in classifying EMG activity into cursor actions. This was done through the use of Matlab simulations. It will be shown that the new four-electrode system produces significant improvements in classification performance.
---
paper_title: A human computer interface using SSVEP-based BCI technology
paper_content:
To address the issues of system simplicity and subject applicability, a brain-controlled HCI system derived from a steady-state visual evoked potential (SSVEP) based brain-computer interface (BCI) is proposed in this paper. With the aim of providing an external input device for a personal computer, key issues of hardware and software design for better performance and a user-friendly interface are introduced systematically. With proper parameter customization for each individual, an average information transfer rate of 46 bits/min was achieved in the operation of dialing a phone number. With encouraging online performance and the advantage of system simplicity, the proposed HCI using SSVEP-based BCI technology is a promising substitute for a standard computer input device for both healthy and disabled computer users.
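The core detection step in an SSVEP-based system of this kind is usually a comparison of spectral power at the candidate stimulation frequencies; the sketch below shows one simple way to do that. The frequencies, harmonic count, and windowing are assumptions for illustration; the paper's actual signal chain and parameters may differ.

```python
import numpy as np

def detect_ssvep(eeg: np.ndarray, fs: float, stim_freqs: list, n_harmonics: int = 2) -> float:
    """Return the stimulation frequency whose harmonic-summed spectral power is largest."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(eeg.size))) ** 2
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)

    def band_power(f0: float) -> float:
        total = 0.0
        for h in range(1, n_harmonics + 1):
            idx = np.argmin(np.abs(freqs - h * f0))
            total += spectrum[max(idx - 1, 0): idx + 2].sum()   # small bin neighbourhood
        return total

    return max(stim_freqs, key=band_power)

# toy usage: 4 s of "EEG" at 250 Hz with a simulated 13 Hz response in noise
fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 13.0 * t) + 0.8 * np.random.randn(t.size)
print(detect_ssvep(eeg, fs, [8.0, 10.0, 13.0, 15.0]))
```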
---
paper_title: Two-Dimensional Cursor-to-Target Control From Single Muscle Site sEMG Signals
paper_content:
In this study, human subjects achieve two-dimensional cursor-to-target control using the surface electromyogram (sEMG) from a single muscle site. The X-coordinate and the Y-coordinate of the computer cursor were simultaneously controlled by the active manipulation of power within two frequency bands of the sEMG power-spectrum. Success of the method depends on the sEMG frequency bandwidths and their midpoints. We acquired the sEMG signals at a single facial muscle site of four able-bodied subjects and trained them, by visual feedback, to control the position of the cursor. After training, all four subjects were able to simultaneously control the X and Y positions of the cursor to accurately and consistently hit three widely-separated targets on a computer screen. This technology has potential application in a wide variety of human-machine interfaces to assistive technologies.
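A minimal sketch of the mapping described above: power in two frequency bands of a single sEMG channel drives the X and Y cursor coordinates. The band edges and gain are placeholders; in the study they are tuned per user through visual-feedback training.

```python
import numpy as np

def band_power(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean power of x in the [lo, hi] Hz band (simple periodogram estimate)."""
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

def cursor_position(semg_window: np.ndarray, fs: float,
                    x_band=(60.0, 100.0), y_band=(130.0, 170.0), gain: float = 50.0):
    """Map power in two frequency bands of one sEMG channel to an (x, y) pair.

    Band edges and gain are illustrative, not the paper's values.
    """
    return (gain * band_power(semg_window, fs, *x_band),
            gain * band_power(semg_window, fs, *y_band))

# toy usage: 0.5 s of stand-in sEMG at 1 kHz
window = np.random.randn(512)
print(cursor_position(window, 1000.0))
```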
---
paper_title: Development of EOG-Based Communication System Controlled by Eight-Directional Eye Movements
paper_content:
A communication support interface controlled by eye movements and voluntary eye blink has been developed for disabled individuals with motor paralysis who cannot speak. Horizontal and vertical electro-oculograms were measured using two electrodes attached above and beside the dominant eye, referenced to an earlobe electrode, and amplified with AC coupling in order to reduce unnecessary drift. Eight directional cursor movements and one selection operation were realized by logically combining the two detected channel signals based on threshold settings specific to the individual. In experiments using a projected screen keyboard, the processing speed was improved to 12.1 letters/min while the accuracy was 90.4%.
---
paper_title: P300-based brain computer interface: Reliability and performance in healthy and paralysed participants
paper_content:
Objective: This study aimed to describe the use of the P300 event-related potential as a control signal in a brain computer interface (BCI) for healthy and paralysed participants. Methods: The experimental device used the P300 wave to control the movement of an object on a graphical interface. Visual stimuli, consisting of four arrows (up, right, down, left), were randomly presented in peripheral positions on the screen. Participants were instructed to recognize only the arrow indicating a specific direction for an object to move. P300 epochs, synchronized with the stimulus, were analyzed on-line via Independent Component Analysis (ICA) with subsequent feature extraction and classification by using a neural network. Results: We tested the reliability and the performance of the system in real-time. The system needed a short training period to allow task completion and reached good performance. Nonetheless, severely impaired patients had lower performance than healthy participants. Conclusions: The proposed system is effective for use with healthy participants, whereas further research is needed before it can be used with locked-in syndrome patients. Significance: The P300-based BCI described can reliably control, in ‘real time’, the motion of a cursor on a graphical interface, and no time-consuming training is needed in order to test possible applications for motor-impaired patients.
---
paper_title: "Virtual keyboard" controlled by spontaneous EEG activity
paper_content:
A "virtual keyboard" (VK) is a letter spelling device operated for example by spontaneous electroencephalogram (EEG), whereby the EEC is modulated by mental hand and leg motor imagery. We report on three able-bodied subjects, operating the VK. The ability in the use of the VK varies between 0.85 and 0.5 letters/min in error-free writing.
---
| <format>
Title: Alternative communication systems for people with severe motor disabilities: a survey
Section 1: Introduction
Description 1: Summarize the evolution of assistive technology (AT) devices for people with motor disabilities, focusing on alternative communication systems based on bioelectricity (EMG, EOG, and EEG).
Section 2: Switch based control and Proportional Biosignals sensors
Description 2: Describe the classification of human-machine interface sensors into switch-based control (SBC) and proportional sensor (PRO) types, with examples of their application in communication aids.
Section 3: EMG signal
Description 3: Explain the generation of EMG signals, types of electrodes used, and the acquisition process, including the applications of electromyography in communication aid devices.
Section 4: EMG applications
Description 4: Discuss the various applications of EMG signals in communication devices, such as switch-based control, mouse emulation, and speech recognition.
Section 5: EOG signal
Description 5: Describe the characteristics, acquisition methods, and potential applications of EOG signals in assistive communication systems.
Section 6: EOG applications
Description 6: Detail the specific applications of EOG signals in communication aids, including cursor control and writing systems.
Section 7: Electroencephalography
Description 7: Provide an overview of EEG-based brain-computer interfaces (BCI) for communication, discussing both invasive and non-invasive methods, and their application in assistive technologies.
Section 8: Asynchronous BCI
Description 8: Discuss the principles, advantages, and challenges of asynchronous BCI systems that use spontaneous cortical potentials.
Section 9: Synchronous BCI
Description 9: Explain synchronous BCI systems that rely on evoked potentials like P300 and SSVEP, including their advantages and challenges.
Section 10: Considerations about synchronous and asynchronous BCIs
Description 10: Compare synchronous and asynchronous BCI systems, and discuss the trade-offs and considerations in choosing the type of interface for assistive communication devices.
Section 11: Performance evaluation
Description 11: Review the methods and metrics used to evaluate the performance of alternative communication systems.
Section 12: Measurements of performance without interaction model
Description 12: Discuss performance measurement based on user experiments and describe the various criteria used to evaluate communication interfaces.
Section 13: Direct communication
Description 13: Explore models of user performance for text entry tasks using direct control of keyboards and touchscreens.
Section 14: Pointing Tasks
Description 14: Examine the application of Fitts' Law and other metrics for evaluating pointing devices' performance in assistive communication.
Section 15: Scanning systems
Description 15: Analyze scanning systems for communication, focusing on optimizing communication rate and evaluating user performance.
Section 16: Conclusions
Description 16: Summarize the findings and discussions of the various communication systems and their applications, highlighting the importance of individualized assessment in deploying these technologies.
</format>
|
THE SNARE LANGUAGE OVERVIEW | 4 | ---
paper_title: Taking stock of networks and organizations. A multilevel perspective
paper_content:
The central argument of network research is that actors are embedded in networks of interconnected social relationships that offer opportunities for and constraints on behavior. We review research on the antecedents and consequences of networks at the interpersonal, interunit, and interorganizational levels of analysis, evaluate recent theoretical and empirical trends, and give directions for future research, highlighting the importance of investigating cross-level network phenomena.
---
paper_title: Data Mining Concepts and Techniques
paper_content:
Understand the need for analyses of large, complex, information-rich data sets. Identify the goals and primary tasks of the data-mining process. Describe the roots of data-mining technology. Recognize the iterative character of a data-mining process and specify its basic steps. Explain the influence of data quality on a data-mining process. Establish the relation between data warehousing and data mining. Data mining is an iterative process within which progress is defined by discovery, through either automatic or manual methods. Data mining is most useful in an exploratory analysis scenario in which there are no predetermined notions about what will constitute an "interesting" outcome. Data mining is the search for new, valuable, and nontrivial information in large volumes of data. It is a cooperative effort of humans and computers. Best results are achieved by balancing the knowledge of human experts in describing problems and goals with the search capabilities of computers. In practice, the two primary goals of data mining tend to be prediction and description. Prediction involves using some variables or fields in the data set to predict unknown or future values of other variables of interest. Description, on the other hand, focuses on finding patterns describing the data that can be interpreted by humans. Therefore, it is possible to put data-mining activities into one of two categories: Predictive data mining, which produces the model of the system described by the given data set, or Descriptive data mining, which produces new, nontrivial information based on the available data set.
---
| Title: THE SNARE LANGUAGE OVERVIEW
Section 1: INTRODUCTION
Description 1: This section introduces the foundational concepts related to social networks, including definitions, roles, and different types of social networks.
Section 2: MODELING SOCIAL NETWORKS
Description 2: This section discusses the methods and techniques used to model social networks, such as graphical representations, mathematical analysis procedures, and statistical models.
Section 3: SNARE LANGUAGE
Description 3: This section outlines the SNARE language, detailing its purpose, main concepts, and how it supports the representation of social networks.
Section 4: DISCUSSION
Description 4: This section offers insights and reflections on the research work, the flexibility of the SNARE language, its applications, and potential future directions. |
Covering radius survey and recent results | 10 | ---
| Title: Covering Radius Survey and Recent Results
Section 1: INTRODUCTION
Description 1: Introduce the concept of covering radius for block codes, its importance, and basic properties. Summarize the main applications and previous foundational results, along with the structure of the paper.
Section 2: The Translate Leader
Description 2: Discuss the Translate Leader concept for linear codes, including weight of a coset leader and syndrome representation. Highlight the simple criterion for identifying a translate leader.
Section 3: Perfect Codes and Quasi-perfect Codes
Description 3: Define perfect and quasi-perfect codes based on covering radius and covering radius excess, A(C). Present results related to A(C).
Section 4: UPPER BOUNDS ON COVERING RADIUS
Description 4: Outline various upper bounds on covering radius, including Redundancy, Delsarte bounds, Supercode bounds, and Norse bounds. Discuss multiple theorems providing these bounds and their proofs.
Section 5: Some Links Between t(C) and d(C)
Description 5: Explore the relationship between covering radius \( t(C) \) and minimum distance \( d(C) \). Present propositions and theorems related to maximal codes and specific classes of codes, particularly Reed-Muller codes.
Section 6: ON THE LEAST COVERING RADIUS OF (n, K) CODES
Description 6: Estimate functions \( t[n, k] \) and \( t(n, K) \) for various binary linear codes. Present lower and upper bounds on these values and discuss specific cases where exact values or tight bounds are known.
Section 7: Recent Work of Helleseth
Description 7: Summarize Helleseth's contributions related to binary cyclic codes, covering radius, and Waring's problem in \( GF(2^m) \).
Section 8: A Walsh-Transform Approach
Description 8: Describe the Walsh transform approach for calculating the covering radius, including the algorithmic complexity and computational aspects.
Section 9: Results of Wolfmann and Assmus-Pless
Description 9: Explore specific cases where the Delsarte bound on covering radius is attained. Provide related results from Wolfmann and Assmus-Pless.
Section 10: OPEN PROBLEMS
Description 10: Present open problems in the field regarding covering radius, including specific cases of codes where covering radius is not yet determined, and conjectures about code structures and behaviors. |
Diversity creation methods: a survey and categorisation | 14 | ---
paper_title: Generalization error of ensemble estimators
paper_content:
It has been empirically shown that a better estimate with less generalization error can be obtained by averaging outputs of multiple estimators. This paper presents an analytical result for the generalization error of ensemble estimators. First, we derive a general expression of the ensemble generalization error by using factors of interest (bias, variance, covariance, and noise variance) and show how the generalization error is affected by each of them. Some special cases are then investigated. The result of a simulation is shown to verify our analytical result. A practically important problem of the ensemble approach, ensemble dilemma, is also discussed.
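The decomposition this abstract refers to is usually quoted in the following form for a simple-average ensemble of M estimators (the notation is the commonly cited one, not necessarily the paper's; an extra irreducible noise-variance term appears when the error is measured against noisy observations):

```latex
% Simple-average ensemble: \bar{f}(x) = (1/M) \sum_{i=1}^{M} f_i(x).
% Expected squared error of the ensemble around the target d:
E\big[(\bar{f}-d)^2\big]
  = \overline{\mathrm{bias}}^{\,2}
  + \frac{1}{M}\,\overline{\mathrm{var}}
  + \Big(1-\frac{1}{M}\Big)\,\overline{\mathrm{covar}}
```

Here the barred terms are the average bias, average variance, and average pairwise covariance of the individual estimators; driving the covariance term down is what lets the ensemble error fall below that of the average member.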
---
paper_title: Bagging Predictors
paper_content:
Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.
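A minimal sketch of the procedure described above, assuming integer class labels 0..K-1 and using decision trees as the perturbation-sensitive base learner (any unstable learner would do); the estimator count and data set are arbitrary.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def bagging_fit_predict(X, y, X_test, n_estimators=25, random_state=0):
    """Train trees on bootstrap resamples of (X, y) and combine by plurality vote."""
    rng = np.random.RandomState(random_state)
    n = X.shape[0]
    votes = []
    for _ in range(n_estimators):
        idx = rng.randint(0, n, size=n)                 # bootstrap: sample n with replacement
        tree = DecisionTreeClassifier(random_state=rng.randint(1 << 30))
        tree.fit(X[idx], y[idx])
        votes.append(tree.predict(X_test))
    votes = np.array(votes)                             # shape (n_estimators, n_test)
    # plurality vote per test point (assumes integer class labels 0..K-1)
    return np.array([np.bincount(col).argmax() for col in votes.T])

# toy usage on a synthetic problem
X, y = make_classification(n_samples=300, random_state=1)
print(bagging_fit_predict(X[:200], y[:200], X[200:])[:10])
```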
---
paper_title: Improving Regression Estimation: Averaging Methods for Variance Reduction with Extensions to General Convex Measure Optimization
paper_content:
A general theoretical framework for Monte Carlo averaging methods of improving regression estimates is presented with application to neural network classification and time series prediction. Given a population of regression estimators, it is shown how to construct a hybrid estimator which is as good as or better than, in the MSE sense, any estimator in the population. It is argued that the ensemble method presented has several properties: It efficiently uses all the regressors of a population; none need be discarded. It efficiently uses all the available data for training without over-fitting. It inherently performs regularization by smoothing in functional space, which helps to avoid over-fitting. It utilizes local minima to construct improved estimates whereas other regression algorithms are hindered by local minima. It is ideally suited for parallel computation. It leads to a very useful and natural measure of the number of distinct estimators in a population. The optimal parameters of the ensemble estimator are given in closed form. It is shown that this result derives from the notion of convexity and can be applied to a wide variety of optimization algorithms including: Mean Square Error, a general class of $L_p$-norm cost functions, Maximum Likelihood Estimation, Maximum Entropy, Maximum Mutual Information, the Kullback-Leibler Information (Cross Entropy), Penalized Maximum Likelihood Estimation and Smoothing Splines. The connection to Bayesian Inference is discussed. Experimental results on the NIST OCR database, the Turk and Pentland human face database and sunspot time series prediction are presented which demonstrate that the ensemble method dramatically improves regression performance on real-world classification tasks.
---
paper_title: Optimal Linear Combinations Of Neural Networks
paper_content:
Neural network-based modeling often involves trying multiple networks with different architectures and training parameters in order to achieve acceptable model accuracy. Typically, one of the trained networks is chosen as best, while the rest are discarded. Hashem and Schmeiser (1995) proposed using optimal linear combinations of a number of trained neural networks instead of using a single best network. Combining the trained networks may help integrate the knowledge acquired by the component networks and thus improve model accuracy. In this paper, we extend the idea of optimal linear combinations (OLCs) of neural networks and discuss issues related to the generalization ability of the combined model. We then present two algorithms for selecting the component networks for the combination to improve the generalization ability of OLCs. Our experimental results demonstrate significant improvements in model accuracy, as a result of using OLCs, compared to using the apparent best network.
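In the spirit of the optimal linear combinations discussed above, one simple variant fits unconstrained MSE-optimal weights to the members' held-out predictions by least squares; Hashem's treatment also covers constrained weights and collinearity handling, which this sketch ignores. The toy estimators below are made up.

```python
import numpy as np

def olc_weights(member_outputs: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Unconstrained MSE-optimal linear combination weights.

    member_outputs: shape (n_validation, n_members), column j = outputs of member j.
    Returns weights w such that member_outputs @ w approximates the targets in the
    least-squares sense (no intercept, no non-negativity or sum-to-one constraint).
    """
    w, *_ = np.linalg.lstsq(member_outputs, targets, rcond=None)
    return w

# toy usage: three noisy estimators of the same quantity
rng = np.random.default_rng(0)
truth = rng.normal(size=200)
outputs = np.column_stack([truth + 0.3 * rng.normal(size=200) for _ in range(3)])
w = olc_weights(outputs, truth)
print(w, np.mean((outputs @ w - truth) ** 2))
```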
---
paper_title: The Combination of Forecasts
paper_content:
Two separate sets of forecasts of airline passenger data have been combined to form a composite set of forecasts. The main conclusion is that the composite set of forecasts can yield lower mean-square error than either of the original forecasts. Past errors of each of the original forecasts are used to determine the weights to attach to these two original forecasts in forming the combined forecasts, and different methods of deriving these weights are examined.
---
paper_title: Neural Networks and the Bias/Variance Dilemma
paper_content:
Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. In way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.
---
paper_title: Error Correlation and Error Reduction in Ensemble Classifiers
paper_content:
Using an ensemble of classifiers, instead of a single classifier, can lead to improved generalization. The gains obtained by combining, however, are often affected more by the selection of what is presented to the combiner than by the actual combining method that is chosen. In this paper, we focus on data selection and classifier training methods, in order to 'prepare' classifiers for combining. We review a combining framework for classification problems that quantifies the need for reducing the correlation among individual classifiers. Then, we discuss several methods that make the classifiers in an ensemble more complementary. Experimental results are provided to illustrate the benefits and pitfalls of reducing the correlation among classifiers, especially when the training data are in limited supply.
---
paper_title: Analysis of Linear and Order Statistics Combiners for Fusion of Imbalanced Classifiers
paper_content:
So far, few theoretical works have investigated the conditions under which specific fusion rules can work well, and a unifying framework for comparing rules of different complexity is clearly beyond the state of the art. A clear theoretical comparison is lacking even if one focuses on specific classes of combiners (e.g., linear combiners). In this paper, we theoretically compare simple and weighted averaging rules for fusion of imbalanced classifiers. Continuing the work reported in [10], we gain a deeper understanding of classifier imbalance effects in linear combiners. In addition, we experimentally compare the performance of linear and order statistics combiners for ensembles with different degrees of classifier imbalance.
---
paper_title: Linear Combiners for Classifier Fusion: Some Theoretical and Experimental Results
paper_content:
In this paper, we continue the theoretical and experimental analysis of two widely used combining rules, namely, the simple and weighted average of classifier outputs, that we started in previous works. We analyse and compare the conditions which affect the performance improvement achievable by weighted average over simple average, and over individual classifiers, under the assumption of unbiased and uncorrelated estimation errors. Although our theoretical results have been obtained under strict assumptions, the reported experiments show that they can be useful in real applications, for designing multiple classifier systems based on linear combiners.
---
paper_title: Neural network ensembles
paper_content:
Several means for improving the performance and training of neural networks for classification are proposed. Cross-validation is used as a tool for optimizing network parameters and architecture. It is shown that the remaining residual generalization error can be reduced by invoking ensembles of similar networks.
---
paper_title: Using Diversity in Preparing Ensembles of Classifiers Based on Different Feature Subsets to Minimize Generalization Error
paper_content:
It is well known that ensembles of predictors produce better accuracy than a single predictor provided there is diversity in the ensemble. This diversity manifests itself as disagreement or ambiguity among the ensemble members. In this paper we focus on ensembles of classifiers based on different feature subsets and we present a process for producing such ensembles that emphasizes diversity (ambiguity) in the ensemble members. This emphasis on diversity produces ensembles with low generalization errors from ensemble members with comparatively high generalization error. We compare this with ensembles produced focusing only on the error of the ensemble members (without regard to overall diversity) and find that the ensembles based on ambiguity have lower generalization error. Further, we find that the ensemble members produced focusing on ambiguity have fewer features on average than those based on error only. We suggest that this indicates that these ensemble members are local learners.
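A minimal sketch of a feature-subset ensemble in the spirit of this line of work, except that subsets are drawn at random rather than selected to maximise ambiguity as the paper does; the subset size, member count, and data set are arbitrary.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def feature_subset_ensemble(X, y, X_test, n_members=15, subset_frac=0.5, seed=0):
    """Each member is trained on a random subset of the features; plurality vote."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    k = max(1, int(subset_frac * n_features))
    votes = []
    for _ in range(n_members):
        cols = rng.choice(n_features, size=k, replace=False)   # this member's feature subset
        clf = DecisionTreeClassifier(random_state=int(rng.integers(1 << 30)))
        clf.fit(X[:, cols], y)
        votes.append(clf.predict(X_test[:, cols]))
    votes = np.array(votes)
    return np.array([np.bincount(col).argmax() for col in votes.T])

# toy usage
X, y = make_classification(n_samples=300, n_features=20, n_informative=8, random_state=2)
print(feature_subset_ensemble(X[:200], y[:200], X[200:])[:10])
```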
---
paper_title: Bagging and Boosting for the Nearest Mean Classifier: Effects of Sample Size on Diversity and Accuracy
paper_content:
In combining classifiers, it is believed that diverse ensembles perform better than non-diverse ones. In order to test this hypothesis, we study the accuracy and diversity of ensembles obtained in bagging and boosting applied to the nearest mean classifier. In our simulation study we consider two diversity measures: the Q statistic and the disagreement measure. The experiments, carried out on four data sets, have shown that both diversity and the accuracy of the ensembles depend on the training sample size. With the exception of very small training sample sizes, both bagging and boosting are more useful when ensembles consist of diverse classifiers. However, in boosting the relationship between diversity and the efficiency of ensembles is much stronger than in bagging.
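The two diversity measures named above are easy to compute from per-example correctness vectors; a small sketch follows (the toy vectors at the bottom are made up).

```python
import numpy as np

def q_statistic_and_disagreement(correct_i: np.ndarray, correct_j: np.ndarray):
    """Pairwise diversity of two classifiers from boolean 'correct on case k' vectors.

    Q = (N11*N00 - N01*N10) / (N11*N00 + N01*N10); disagreement = (N01 + N10) / N,
    where Nab counts cases on which classifier i is correct (a=1) / wrong (a=0)
    and classifier j is correct (b=1) / wrong (b=0).
    """
    a, b = correct_i.astype(bool), correct_j.astype(bool)
    n11 = np.sum(a & b)
    n00 = np.sum(~a & ~b)
    n10 = np.sum(a & ~b)
    n01 = np.sum(~a & b)
    denom = n11 * n00 + n01 * n10
    q = (n11 * n00 - n01 * n10) / denom if denom else 0.0
    dis = (n01 + n10) / a.size
    return q, dis

ci = np.array([1, 1, 0, 1, 0, 1, 1, 0])
cj = np.array([1, 0, 0, 1, 1, 1, 0, 0])
print(q_statistic_and_disagreement(ci, cj))
```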
---
paper_title: Diversity versus Quality in Classification Ensembles based on Feature Selection
paper_content:
Feature subset-selection has emerged as a useful technique for creating diversity in ensembles - particularly in classification ensembles. In this paper we argue that this diversity needs to be monitored in the creation of the ensemble. We propose an entropy measure of the outputs of the ensemble members as a useful measure of the ensemble diversity. Further, we show that using the associated conditional entropy as a loss function (error measure) works well and the entropy in the ensemble predicts well the reduction in error due to the ensemble. These measures are evaluated on a medical prediction problem and are shown to predict the performance of the ensemble well. We also show that the entropy measure of diversity has the added advantage that it seems to model the change in diversity with the size of the ensemble.
---
paper_title: Relationships between combination methods and measures of diversity in combining classifiers
paper_content:
This study looks at the relationships between different methods of classifier combination and different measures of diversity. We considered 10 combination methods and 10 measures of diversity on two benchmark data sets. The relationship was sought on ensembles of three classifiers built on all possible partitions of the respective feature sets into subsets of pre-specified sizes. The only positive finding was that the Double-Fault measure of diversity and the measure of difficulty both showed reasonable correlation with Majority Vote and Naive Bayes combinations. Since both these measures have an indirect connection to the ensemble accuracy, this result was not unexpected. However, our experiments did not detect a consistent relationship between the other measures of diversity and the 10 combination methods.
---
paper_title: Ten measures of diversity in classifier ensembles: limits for two classifiers
paper_content:
Independence and dependence of classifier outputs have been debated in the recent literature giving rise to notions such as diversity, complementarity, orthogonality, etc. There seems to be no consensus on the meaning of these notions beyond the intuitive perception. We summarize 10 measures of classifier diversity: 4 pairwise and 6 non-pairwise measures. We derive the limits of the measures for 2 classifiers of equal accuracy.
---
paper_title: Is independence good for combining classifiers?
paper_content:
Independence between individual classifiers is typically viewed as an asset in classifier fusion. We study the limits on the majority vote accuracy when combining dependent classifiers. Q statistics are used to measure the dependence between classifiers. We show that dependent classifiers could offer a dramatic improvement over the individual accuracy. However, the relationship between dependency and accuracy of the pool is ambivalent. A synthetic experiment demonstrates the intuitive result that, in general, negative dependence is preferable.
---
paper_title: Limits on the majority vote accuracy in classifier fusion
paper_content:
We derive upper and lower limits on the majority vote accuracy with respect to individual accuracy p, the number of classifiers in the pool (L), and the pairwise dependence between classifiers, measured by Yule's Q statistic. Independence between individual classifiers is typically viewed as an asset in classifier fusion. We show that the majority vote with dependent classifiers can potentially offer a dramatic improvement both over independent classifiers and over the individual accuracy p. A functional relationship between the limits and the pairwise dependence Q is derived. Two patterns of the joint distribution for classifier outputs (correct/incorrect) are identified to derive the limits: the pattern of success and the pattern of failure. The results support the intuition that negative pairwise dependence is beneficial although not straightforwardly related to the accuracy. The pattern of success showed that for the highest improvement over p, all pairs of classifiers in the pool should have the same negative dependence.
---
paper_title: Ensemble Learning using Decorrelated Neural Networks
paper_content:
We describe a decorrelation network training method for improving the quality of regression learning in 'ensemble' neural networks (NNs) that are composed of linear combinations of individual NNs. In this method, individual networks are trained by backpropagation not only to reproduce a desired output, but also to have their errors linearly decorrelated with the other networks. Outputs from the individual networks are then linearly combined to produce the output of the ensemble network. We demonstrate the performance of decorrelated network training on learning the 'three-parity' logic function, a noisy sine function and a one-dimensional non-linear function, and compare the results with ensemble networks composed of independently trained individual networks without decorrelation training. Empirical results show that when individual networks are forced to be decorrelated with one another the resulting ensemble NNs have lower mean squared errors than the ensemble networks having independently trained individual networks.
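To make the idea concrete, the sketch below computes a correlation-style penalty on each member's errors; adding such a term (scaled by a trade-off constant) to a member's squared-error loss is the essence of decorrelated and negative-correlation training, although the paper applies it inside backpropagation rather than as the post-hoc batch quantity shown here. The toy predictions are made up.

```python
import numpy as np

def decorrelation_penalty(member_preds: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Correlation-style penalty for each ensemble member's errors.

    member_preds: shape (n_members, n_samples). For member i the penalty is the
    mean over samples of e_i * sum_{j != i} e_j, where e = prediction - target.
    Adding lambda * penalty_i to member i's squared-error loss encourages the
    members' errors to become negatively correlated; a real implementation
    back-propagates this term during training rather than computing it afterwards.
    """
    errors = member_preds - targets          # broadcasts the targets over members
    total = errors.sum(axis=0)
    return (errors * (total - errors)).mean(axis=1)

preds = np.array([[0.9, 0.2, 0.4], [1.1, -0.1, 0.6], [0.8, 0.3, 0.2]])
target = np.array([1.0, 0.0, 0.5])
print(decorrelation_penalty(preds, target))
```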
---
paper_title: Popular Ensemble Methods: An Empirical Study
paper_content:
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithm. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier - especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing its performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.
---
paper_title: Fast committee learning: preliminary results
paper_content:
Fast committee learning can, to some extent, achieve the generalisation advantages of a committee of neural networks, without the need for independent learning of the committee members. This is achieved by selecting committee members from time-slices of the learning trajectory of one neural network.
---
paper_title: Combining the Predictions of Multiple Classifiers: Using Competitive Learning to Initialize Neural Networks
paper_content:
The primary goal of inductive learning is to generalize well - that is, induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that the individual networks should be independent in the errors they produce. In the standard way of performing backpropagation this assumption may be violated, because the standard procedure is to initialize network weights in the region of weight space near the origin. This means that backpropagation's gradient-descent search may only reach a small subset of the possible local minima. In this paper we present an approach to initializing neural networks that uses competitive learning to intelligently create networks that are originally located far from the origin of weight space, thereby potentially increasing the set of reachable local minima. We report experiments on two real-world datasets where combinations of networks initialized with our method generalize better than combinations of networks initialized the traditional way.
---
paper_title: Improving Committee Diagnosis with Resampling Techniques
paper_content:
Central to the performance improvement of a committee relative to individual networks is the error correlation between networks in the committee. We investigated methods of achieving error independence between the networks by training the networks with different resampling sets from the original training set. The methods were tested on the sine-wave artificial task and the real-world problems of hepatoma (liver cancer) and breast cancer diagnoses.
---
paper_title: Engineering Multiversion Neural-Net Systems
paper_content:
In this paper we address the problem of constructing reliable neural-net implementations, given the assumption that any particular implementation will not be totally correct. The approach taken in this paper is to organize the inevitable errors so as to minimize their impact in the context of a multiversion system, i.e., the system functionality is reproduced in multiple versions, which together will constitute the neural-net system. The unique characteristics of neural computing are exploited in order to engineer reliable systems in the form of diverse, multiversion systems that are used together with a "decision strategy" (such as majority vote). Theoretical notions of "methodological diversity" contributing to the improvement of system performance are implemented and tested. An important aspect of the engineering of an optimal system is to overproduce the components and then choose an optimal subset. Three general techniques for choosing final system components are implemented and evaluated. Several different approaches to the effective engineering of complex multiversion systems designs are realized and evaluated to determine overall reliability as well as reliability of the overall system in comparison to the lesser reliability of component substructures.
---
paper_title: The "test and Select" Approach to Ensemble Combination
paper_content:
The performance of neural nets can be improved through the use of ensembles of redundant nets. In this paper, some of the available methods of ensemble creation are reviewed and the "test and select" methodology for ensemble creation is considered. This approach involves testing potential ensemble combinations on a validation set, and selecting the best performing ensemble on this basis, which is then tested on a final test set. The application of this methodology, and of ensembles in general, is explored further in two case studies. The first case study is of fault diagnosis in a diesel engine, and relies on ensembles of nets trained from three different data sources. The second case study is of robot localisation, using an evidence-shifting method based on the output of trained SOMs. In both studies, improved results are obtained as a result of combining nets to form ensembles.
---
paper_title: Experiments with Classifier Combining Rules
paper_content:
A large experiment on combining classifiers is reported and discussed. It includes both the combination of different classifiers on the same feature set and the combination of classifiers on different feature sets. Various fixed and trained combining rules are used. It is shown that there is no overall winning combining rule and that bad classifiers as well as bad feature sets may contain valuable information for performance improvement by combining rules. Best performance is achieved by combining both different feature sets and different classifiers.
---
paper_title: Error-Correcting Output Codes: A General Method for Improving Multiclass Inductive Learning Programs
paper_content:
Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying large collections of training examples of the form [xi, f(xi)]. Existing approaches to this problem include (a) direct application of multiclass algorithms such as the decision-tree algorithms ID3 and CART, (b) application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and (c) application of binary concept learning algorithms with distributed output codes such as those employed by Sejnowski and Rosenberg in the NETtalk system. This paper compares these three approaches to a new technique in which BCH error-correcting codes are employed as a distributed output representation. We show that these output representations improve the performance of ID3 on the NETtalk task and of back propagation on an isolated-letter speech-recognition task. These results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
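A minimal ECOC sketch, assuming a hand-written {0,1} code matrix for four classes and logistic regression as the binary learner (the paper uses BCH codes with ID3 and backpropagation); decoding is by nearest codeword in Hamming distance. The synthetic data and code matrix below are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ecoc_fit(X, y, code_matrix):
    """Train one binary learner per column of a {0,1} code matrix (rows = classes)."""
    learners = []
    for col in range(code_matrix.shape[1]):
        binary_y = code_matrix[y, col]                  # relabel each example by its class's bit
        learners.append(LogisticRegression(max_iter=1000).fit(X, binary_y))
    return learners

def ecoc_predict(X, learners, code_matrix):
    """Decode by nearest codeword in Hamming distance."""
    bits = np.column_stack([clf.predict(X) for clf in learners])        # (n, n_bits)
    dists = np.abs(bits[:, None, :] - code_matrix[None, :, :]).sum(-1)  # (n, n_classes)
    return dists.argmin(axis=1)

# 4-class exhaustive code (7 columns, no constant column, minimum row distance 4)
code = np.array([[1, 1, 1, 1, 1, 1, 1],
                 [0, 0, 0, 0, 1, 1, 1],
                 [0, 0, 1, 1, 0, 0, 1],
                 [0, 1, 0, 1, 0, 1, 0]])
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = rng.integers(0, 4, size=400)
X += y[:, None] * 0.8                                   # make the toy classes separable-ish
models = ecoc_fit(X[:300], y[:300], code)
print((ecoc_predict(X[300:], models, code) == y[300:]).mean())
```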
---
paper_title: Boosting with averaged weight vectors
paper_content:
AdaBoost [5] is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence [7]. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution orthogonal to the mistake vectors of all the previous base models, but that this is not always possible [7]. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm [7], which also attempts to satisfy this goal.
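For reference, the standard AdaBoost reweighting step that this line of work modifies looks as follows; the averaged-weight-vector variant described above changes how the next distribution is formed and is not reproduced here. The toy mistake vector is made up.

```python
import numpy as np

def adaboost_reweight(weights: np.ndarray, mistakes: np.ndarray):
    """One AdaBoost distribution update.

    weights: current distribution over training examples (sums to 1).
    mistakes: boolean vector, True where the latest base model erred.
    Returns (new_weights, alpha) where alpha is the base model's vote weight.
    """
    eps = float(np.sum(weights[mistakes]))              # weighted error of the base model
    eps = min(max(eps, 1e-12), 1 - 1e-12)               # guard against degenerate cases
    alpha = 0.5 * np.log((1 - eps) / eps)
    new_w = weights * np.exp(alpha * np.where(mistakes, 1.0, -1.0))   # up-weight the mistakes
    return new_w / new_w.sum(), alpha

w = np.full(8, 1 / 8)
errs = np.array([True, False, False, True, False, False, False, False])
w, a = adaboost_reweight(w, errs)
print(np.round(w, 3), round(a, 3))
```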
---
paper_title: Diverse neural net solutions to a fault diagnosis problem
paper_content:
The development of a neural net system for fault diagnosis in a marine diesel engine is described. Nets were trained to classify combustion quality on the basis of simulated data. Three different types of data were used: pressure, temperature and combined pressure and temperature. Subsequent to training, three nets were selected and combined by means of a majority voter to form a system which achieved 100% generalisation to the test set. This performance is attributable to a reliance on the software engineering concept of diversity. Following experimental evaluation of methods of creating diverse neural nets solutions, it was concluded that the best results should be obtained when data is taken from two different sensors (e.g. a pressure and a temperature sensor), or where this is not possible, when new data sets are created by subjecting a set of inputs to non-linear transformations. These conclusions have far reaching implications for other neural net applications.
---
paper_title: Error-Correcting Output Coding Corrects Bias and Variance
paper_content:
Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify data points into one of k ≫ 2 classes. This paper presents an investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms. It shows that the ECOC method— like any form of voting or committee—can reduce the variance of the learning algorithm. Furthermore—unlike methods that simply combine multiple runs of the same learning algorithm—ECOC can correct for errors caused by the bias of the learning algorithm. Experiments show that this bias correction ability relies on the non-local behavior of C4.5.
---
paper_title: Randomizing Outputs to Increase Prediction Accuracy
paper_content:
Bagging and boosting reduce error by changing both the inputs and outputs to form perturbed training sets, growing predictors on these perturbed training sets and combining them. An interesting question is whether it is possible to get comparable performance by perturbing the outputs alone. Two methods of randomizing outputs are experimented with. One is called output smearing and the other output flipping. Both are shown to consistently do better than bagging.
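A minimal sketch of output flipping: each ensemble member is trained on a copy of the labels in which a random fraction has been switched to a different class, whereas output smearing instead adds continuous noise to (e.g. one-hot) targets. The flip rate below is arbitrary.

```python
import numpy as np

def flip_outputs(y: np.ndarray, n_classes: int, flip_rate: float, rng) -> np.ndarray:
    """Return a copy of integer class labels with a fraction flipped to a random other class.

    Assumes n_classes >= 2. Each ensemble member would be trained on a
    differently-flipped copy produced by a call like this.
    """
    y_new = y.copy()
    flip = rng.random(y.size) < flip_rate
    offsets = rng.integers(1, n_classes, size=flip.sum())   # never maps a label to itself
    y_new[flip] = (y_new[flip] + offsets) % n_classes
    return y_new

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=20)
print(y)
print(flip_outputs(y, 3, 0.2, rng))
```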
---
paper_title: Input decimation ensembles: Decorrelation through dimensionality reduction
paper_content:
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many machine learning problems [4, 16]. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers [1,14]. As such, reducing those correlations while keeping the base classifiers' performance levels high is a promising research topic. In this paper, we describe input decimation, a method that decouples the base classifiers by training them with different subsets of the input features. In past work [15], we showed the theoretical benefits of input decimation and presented its application to a handful of real data sets. In this paper, we provide a systematic study of input decimation on synthetic data sets and analyze how the interaction between correlation and performance in base classifiers affects ensemble performance.
---
paper_title: Combining Decision Trees and Neural Networks for Drug Discovery
paper_content:
Genetic programming (GP) offers a generic method of automatically fusing together classifiers using their receiver operating characteristics (ROC) to yield superior ensembles. We combine decision trees (C4.5) and artificial neural networks (ANN) on a difficult pharmaceutical data mining (KDD) drug discovery application. Specifically predicting inhibition of a P450 enzyme. Training data came from high throughput screening (HTS) runs. The evolved model may be used to predict behaviour of virtual (i.e. yet to be manufactured) chemicals. Measures to reduce over fitting are also described.
---
paper_title: A Constructive Algorithm for Training Cooperative Neural Network Ensembles
paper_content:
Presents a constructive algorithm for training cooperative neural-network ensembles (CNNEs). CNNE combines ensemble architecture design with cooperative training for individual neural networks (NNs) in ensembles. Unlike most previous studies on training ensembles, CNNE puts emphasis on both accuracy and diversity among individual NNs in an ensemble. In order to maintain accuracy among individual NNs, the number of hidden nodes in individual NNs is also determined by a constructive approach. Incremental training based on negative correlation is used in CNNE to train individual NNs for different numbers of training epochs. The use of negative correlation learning and different training epochs for training individual NNs reflects CNNE's emphasis on diversity among individual NNs in an ensemble. CNNE has been tested extensively on a number of benchmark problems in machine learning and neural networks, including Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, soybean, and Mackey-Glass time series prediction problems. The experimental results show that CNNE can produce NN ensembles with good generalization ability.
---
paper_title: Network generalization differences quantified
paper_content:
It has long been observed, and frequently noted, by connectionists that small changes in initial conditions, prior to training, can result in networks that generalize very differently. We have performed a systematic study of this phenomenon, using a number of different statistical measures of generalization differences. From these we derive a formal definition of Generalization Diversity. We quantify the relative impacts on generalization of the major parameters used in network initialization as well as extend the formal framework to also encompass the differences in generalization difference from one parameter to another. We reveal, for example, the relative effects of random initialization of the link weights and variation of the number of hidden units, and how similar these two resultant effects are. Finally, examples are presented of how the proposed generalization diversity measure may be exploited in order to improve the performance of neural-net systems. We show how several of these measures can be used to engineer reliability improvements in neural-net systems.
---
paper_title: Generating Accurate and Diverse Members of a Neural-Network Ensemble
paper_content:
Neural-network ensembles have been shown to be very accurate classification techniques. Previous work has shown that an effective ensemble should consist of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well. Most existing techniques, however, only indirectly address the problem of creating such a set of networks. In this paper we present a technique called ADDEMUP that uses genetic algorithms to directly search for an accurate and diverse set of trained networks. ADDEMUP works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are as accurate as possible while disagreeing with each other as much as possible. Experiments on three DNA problems show that ADDEMUP is able to generate a set of trained networks that is more accurate than several existing approaches. Experiments also show that ADDEMUP is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble.
---
paper_title: Diversity between Neural Networks and Decision Trees for Building Multiple Classifier Systems
paper_content:
A multiple classifier system can only improve the performance when the members in the system are diverse from each other. Combining some methodologically different techniques is considered a constructive way to expand the diversity. This paper investigates the diversity between the two different data mining techniques, neural networks and automatically induced decision trees. Input decimation through salient feature selection is also explored in the paper in the hope of acquiring further diversity. Among various diversities defined, the coincident failure diversity (CFD) appears to be an effective measure of useful diversity among classifiers in a multiple classifier system when the majority voting decision strategy is applied. A real-world medical classification problem is presented as an application of the techniques. The constructed multiple classifier systems are evaluated with a number of statistical measures in terms of reliability and generalisation. The results indicate that combined MCSs of the nets and trees trained with the selected features have higher diversity and produce better classification results.
---
paper_title: Engineering Multiversion Neural-Net Systems
paper_content:
In this paper we address the problem of constructing reliable neural-net implementations, given the assumption that any particular implementation will not be totally correct. The approach taken in this paper is to organize the inevitable errors so as to minimize their impact in the context of a multiversion system, i.e., the system functionality is reproduced in multiple versions, which together will constitute the neural-net system. The unique characteristics of neural computing are exploited in order to engineer reliable systems in the form of diverse, multiversion systems that are used together with a "decision strategy" (such as majority vote). Theoretical notions of "methodological diversity" contributing to the improvement of system performance are implemented and tested. An important aspect of the engineering of an optimal system is to overproduce the components and then choose an optimal subset. Three general techniques for choosing final system components are implemented and evaluated. Several different approaches to the effective engineering of complex multiversion systems designs are realized and evaluated to determine overall reliability as well as reliability of the overall system in comparison to the lesser reliability of component substructures.
---
paper_title: Combination of multiple classifiers using local accuracy estimates
paper_content:
This paper presents a method for combining classifiers that uses estimates of each individual classifier's local accuracy in small regions of feature space surrounding an unknown test sample. An empirical evaluation using five real data sets confirms the validity of our approach compared to some other combination of multiple classifiers algorithms. We also suggest a methodology for determining the best mix of individual classifiers.
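A hedged sketch of the general idea (dynamic selection by local accuracy), not the authors' exact procedure: for a query point, estimate each classifier's accuracy on the k nearest training samples and let the locally most accurate classifier label the query. The classifiers argument is assumed to be any list of fitted objects exposing a scikit-learn-style predict method; k and the Euclidean neighbourhood are arbitrary choices.

```python
import numpy as np

def predict_by_local_accuracy(classifiers, X_train, y_train, x, k=10):
    """Label query x with the classifier that is most accurate on x's k nearest
    training neighbours (ties broken by first index, a simplification)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]                                   # indices of the k nearest training points
    local_acc = [(clf.predict(X_train[nn]) == y_train[nn]).mean() for clf in classifiers]
    best = int(np.argmax(local_acc))
    return classifiers[best].predict(x.reshape(1, -1))[0]
```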
---
paper_title: Modelling conditional probabilities with network committees: how overfitting can be useful
paper_content:
Training neural networks for predicting conditional probability densities can be accelerated considerably by adopting the random vector functional link net (RVFL) approach. In this way, a whole ensemble of models can be trained at the same computational costs as otherwise required for training only one conventional network. The inherent stochasticity of the RVFL method increases the diversity in this ensemble, which leads to a significant reduction of the generalisation error. The application of this scheme to a synthetic multimodal stochastic time series and a real-world benchmark problem was found to achieve a performance better than or comparable to the best results otherwise obtained so far. Moreover, the simulations support a recent theoretical study and show that when making predictions with network committees, it can be advantageous to employ underregularised models that overfit the training data.
---
paper_title: Ensemble Learning via Negative Correlation
paper_content:
This paper presents a learning approach, i.e. negative correlation learning, for neural network ensembles. Unlike previous learning approaches for neural network ensembles, negative correlation learning attempts to train individual networks in an ensemble and combines them in the same learning process. In negative correlation learning, all the individual networks in the ensemble are trained simultaneously and interactively through the correlation penalty terms in their error functions. Rather than producing unbiased individual networks whose errors are uncorrelated, negative correlation learning can create negatively correlated networks to encourage specialisation and cooperation among the individual networks. Empirical studies have been carried out to show why and how negative correlation learning works. The experimental results show that negative correlation learning can produce neural network ensembles with good generalisation ability.
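The core of negative correlation learning is the correlation penalty added to each member's error. The sketch below computes, for a single training pattern, the per-member gradient signal under the commonly used formulation E_i = 0.5*(F_i - d)^2 + lambda*p_i with p_i = (F_i - Fbar) * sum over j != i of (F_j - Fbar), using the usual simplifying assumption that Fbar is treated as constant when differentiating; lambda and the example values are arbitrary.

```python
import numpy as np

def nc_gradients(member_outputs, target, lam=0.5):
    """Gradient of each member's NC-penalised error w.r.t. its own output F_i.
    Since sum_j (F_j - F_bar) = 0, the penalty derivative reduces to -(F_i - F_bar),
    so the signal backpropagated into member i is (F_i - d) - lam * (F_i - F_bar)."""
    F = np.asarray(member_outputs, dtype=float)
    F_bar = F.mean()                               # simple-average ensemble output
    return (F - target) - lam * (F - F_bar)

# Example: three members, one scalar target.
print(nc_gradients([0.2, 0.8, 0.5], target=1.0, lam=0.5))
```

Setting lam to zero recovers independent training of each member; increasing it pushes the members' errors to be negatively correlated, which is the specialisation and cooperation effect described above.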
---
paper_title: The Use of the Ambiguity Decomposition in Neural Network Ensemble Learning Methods
paper_content:
We analyze the formal grounding behind Negative Correlation (NC) Learning, an ensemble learning technique developed in the evolutionary computation literature. We show that by removing an assumption made in the original work, NC can be seen to be exploiting the well-known Ambiguity decomposition of the ensemble error, grounding it in a statistics framework around the bias-variance decomposition. We use this grounding to find bounds for the parameters, and provide insights into the behaviour of the optimal parameter values. These observations allow us understand how NC relates to other algorithms, identifying a group of papers spread over the last decade that have all exploited the Ambiguity decomposition for machine learning problems. When taking into account our new understanding of the algorithm, significant reductions in error rates were observed in empirical tests.
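For reference, the Ambiguity decomposition referred to above can be stated as follows for a convex-combination ensemble under squared loss (Krogh and Vedelsby's form):

```latex
% Ensemble \bar{f} = \sum_i w_i f_i with w_i \ge 0 and \sum_i w_i = 1, target d:
(\bar{f} - d)^2 \;=\; \sum_i w_i (f_i - d)^2 \;-\; \sum_i w_i (f_i - \bar{f})^2
```

The first term is the weighted average individual error and the second (the ambiguity) is non-negative, so the ensemble error never exceeds the average member error; the paper's point is that the NC penalty can be read as directly rewarding that second term.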
---
paper_title: Learning with ensembles: How overfitting can be useful
paper_content:
We study the characteristics of learning with ensembles. Solving exactly the simple model of an ensemble of linear students, we find surprisingly rich behaviour. For learning in large ensembles, it is advantageous to use under-regularized students, which actually over-fit the training data. Globally optimal performance can be obtained by choosing the training set sizes of the students appropriately. For smaller ensembles, optimization of the ensemble weights can yield significant improvements in ensemble generalization performance, in particular if the individual students are subject to noise in the training process. Choosing students with a wide range of regularization parameters makes this improvement robust against changes in the unknown level of noise in the training data.
---
paper_title: Ensemble Learning using Decorrelated Neural Networks
paper_content:
We describe a decorrelation network training method for improving the quality of regression learning in 'ensemble' neural networks NNs that are composed of linear combinations of individual NNs. In this method, individual networks are trained by backpropogation not only to reproduce a desired output, but also to have their errors linearly decorrelated with the other networks. Outputs from the individual networks are then linearly combined to produce the output of the ensemble network. We demonstrate the performances of decorrelated network training on learning the 'three-parity' logic function, a noisy sine function and a one-dimensional non-linear function, and compare the results with the ensemble networks composed of independently trained individual networks without decorrelation training . Empirical results show than when individual networks are forced to be decorrelated with one another the resulting ensemble NNs have lower mean squared errors than the ensemble networks having independently trained i...
---
paper_title: Genetic Programming: On the Programming of Computers by Means of Natural Selection
paper_content:
Background on genetic algorithms, LISP, and genetic programming hierarchical problem-solving introduction to automatically-defined functions - the two-boxes problem problems that straddle the breakeven point for computational effort Boolean parity functions determining the architecture of the program the lawnmower problem the bumblebee problem the increasing benefits of ADFs as problems are scaled up finding an impulse response function artificial ant on the San Mateo trail obstacle-avoiding robot the minesweeper problem automatic discovery of detectors for letter recognition flushes and four-of-a-kinds in a pinochle deck introduction to biochemistry and molecular biology prediction of transmembrane domains in proteins prediction of omega loops in proteins lookahead version of the transmembrane problem evolutionary selection of the architecture of the program evolution of primitives and sufficiency evolutionary selection of terminals evolution of closure simultaneous evolution of architecture, primitive functions, terminals, sufficiency, and closure the role of representation and the lens effect. Appendices: list of special symbols list of special functions list of type fonts default parameters computer implementation annotated bibliography of genetic programming electronic mailing list and public repository.
---
paper_title: Speciated neural networks evolved with fitness sharing technique
paper_content:
In order to develop effective evolutionary artificial neural networks (EANNs) we have to address the questions on how to evolve EANNs more efficiently and how to achieve the best performance from the ANNs evolved. Most of the previous works, however, do not utilize all the information obtained with several ANNs but choose the one best network in the last generation. Some recent works indicate that making use of population information by combining ANNs in the last generation can improve the performance, because they can complement each other to construct effective multiple neural networks. We propose a new method of evolving multiple speciated neural networks by fitness sharing which helps to optimize multi-objective functions with genetic algorithms. Experiments with the breast cancer data from UCI benchmark datasets show that the proposed method can produce more speciated ANNs and improve the performance by combining the only representative individuals.
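As a small illustration of the fitness-sharing mechanism the abstract relies on (classic sharing, not the paper's exact configuration): an individual's raw fitness is divided by its niche count so that crowded regions of the population are penalised and several species of networks can coexist. The distance matrix, sigma_share and alpha are supplied by the application; the defaults here are arbitrary.

```python
import numpy as np

def shared_fitness(raw_fitness, distances, sigma_share=1.0, alpha=1.0):
    """Classic fitness sharing: f'_i = f_i / sum_j sh(d_ij), with a triangular
    sharing kernel sh(d) = 1 - (d / sigma_share)**alpha inside the niche radius."""
    d = np.asarray(distances, dtype=float)
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)                   # includes sh(0) = 1 for the individual itself
    return np.asarray(raw_fitness, dtype=float) / niche_count
```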
---
paper_title: Speciation as automatic categorical modularization
paper_content:
Many natural and artificial systems use a modular approach to reduce the complexity of a set of subtasks while solving the overall problem satisfactorily. There are two distinct ways to do this. In functional modularization, the components perform very different tasks, such as subroutines of a large software project. In categorical modularization, the components perform different versions of basically the same task, such as antibodies in the immune system. This second aspect is the more natural for acquiring strategies in games of conflict. An evolutionary learning system is presented which follows this second approach to automatically create a repertoire of specialist strategies for a game-playing system. This relieves the human effort of deciding how to divide and specialize. The genetic algorithm speciation method used is one based on fitness sharing. The learning task is to play the iterated prisoner's dilemma. The learning system outperforms the tit-for-tat strategy against unseen test opponents. It learns using a "black box" simulation, with minimal prior knowledge of the learning task.
---
paper_title: Making Use of Population Information in Evolutionary Artificial Neural Networks
paper_content:
This paper is concerned with the simultaneous evolution of artificial neural network (ANN) architectures and weights. The current practice in evolving ANN's is to choose the best ANN in the last generation as the final result. This paper proposes a different approach to form the final result by combining all the individuals in the last generation in order to make best use of all the information contained in the whole population. This approach regards a population of ANN's as an ensemble and uses a combination method to integrate them. Although there has been some work on integrating ANN modules, little has been done in evolutionary learning to make best use of its population information. Four linear combination methods have been investigated in this paper to illustrate our ideas. Three real-world data sets have been used in our experimental studies, which show that the recursive least-square (RLS) algorithm always produces an integrated system that outperforms the best individual. The results confirm that a population contains more information than a single individual. Evolutionary learning should exploit such information to improve generalization of learned systems.
---
paper_title: Artificial speciation of neural network ensembles
paper_content:
Modular approach of solving a complex problem can reduce the total complexity of the system while solving a difficult problem satisfactorily. To implement this idea, an EANN system is developed here for classifying data. The system evolved is speciated in such a manner that members of a particular species solve certain parts of the problem and complement each other in solving one big problem. Fitness sharing is used in evolving the group of ANNs to achieve the required speciation. Sharing was performed at phenotypic level using modified Kullback-Leibler entropy as the distance measure. Since the group as a unit solves the classification problem, outputs of all the ANNs are used in finding the final output. For the combination of ANN outputs 3 different methods – Voting, averaging and recursive least square are used. The evolved system is tested on two data classification problems (Heart Disease Dataset and Breast Cancer Dataset) taken from UCI machine learning benchmark repository.
---
paper_title: Feature Selection for Ensembles
paper_content:
The traditional motivation behind feature selection algorithms is to find the best subset of features for a task using one particular learning algorithm. Given the recent success of ensembles, however, we investigate the notion of ensemble feature selection in this paper. This task is harder than traditional feature selection in that one not only needs to find features germane to the learning task and learning algorithm, but one also needs to find a set of feature subsets that will promote disagreement among the ensemble's classifiers. In this paper, we present an ensemble feature selection approach that is based on genetic algorithms. Our algorithm shows improved performance over the popular and powerful ensemble approaches of AdaBoost and Bagging and demonstrates the utility of ensemble feature selection.
---
paper_title: Types of Multinet System
paper_content:
A limiting factor in research on combining classifiers is a lack of awareness of the full range of available modular structures. One reason for this is that there is as yet little agreement on a means of describing and classifying types of multiple classifier system. In this paper, a categorisation scheme for the identification and description of types of multinet systems is proposed in which systems are described as (a) involving competitive or cooperative combination mechanisms; (b) combining either ensemble, modular, or hybrid components; (c) relying on either bottom-up, or top-down combination, and (d) when bottom up as using either static or fixed combination methods. It is claimed that the categorisation provides an early, but necessary, step in the process of mapping the space of multinet systems: permitting the comparison of different types of system, and facilitating their design and description. On the basis of this scheme, one ensemble and two modular multinet system designs are implemented, and applied to an engine fault diagnosis problem. The best generalisation performance was achieved from the ensemble system.
---
paper_title: Data Complexity Analysis for Classifier Combination
paper_content:
Multiple classifier methods are effective solutions to difficult pattern recognition problems. However, empirical successes and failures have not been completely explained. Amid the excitement and confusion, uncertainty persists in the optimality of method choices for specific problems due to strong data dependences of classifier performance. In response to this, I propose that further exploration of the methodology be guided by detailed descriptions of geometrical characteristics of data and classifier models.
---
paper_title: The Use of the Ambiguity Decomposition in Neural Network Ensemble Learning Methods
paper_content:
We analyze the formal grounding behind Negative Correlation (NC) Learning, an ensemble learning technique developed in the evolutionary computation literature. We show that by removing an assumption made in the original work, NC can be seen to be exploiting the well-known Ambiguity decomposition of the ensemble error, grounding it in a statistics framework around the bias-variance decomposition. We use this grounding to find bounds for the parameters, and provide insights into the behaviour of the optimal parameter values. These observations allow us understand how NC relates to other algorithms, identifying a group of papers spread over the last decade that have all exploited the Ambiguity decomposition for machine learning problems. When taking into account our new understanding of the algorithm, significant reductions in error rates were observed in empirical tests.
---
paper_title: Generating Accurate and Diverse Members of a Neural-Network Ensemble
paper_content:
Neural-network ensembles have been shown to be very accurate classification techniques. Previous work has shown that an effective ensemble should consist of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well. Most existing techniques, however, only indirectly address the problem of creating such a set of networks. In this paper we present a technique called ADDEMUP that uses genetic algorithms to directly search for an accurate and diverse set of trained networks. ADDEMUP works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are as accurate as possible while disagreeing with each other as much as possible. Experiments on three DNA problems show that ADDEMUP is able to generate a set of trained networks that is more accurate than several existing approaches. Experiments also show that ADDEMUP is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble.
---
paper_title: Error-Correcting Output Coding Corrects Bias and Variance
paper_content:
Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify data points into one of k ≫ 2 classes. This paper presents an investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms. It shows that the ECOC method— like any form of voting or committee—can reduce the variance of the learning algorithm. Furthermore—unlike methods that simply combine multiple runs of the same learning algorithm—ECOC can correct for errors caused by the bias of the learning algorithm. Experiments show that this bias correction ability relies on the non-local behavior of C4.5.
---
paper_title: Variance and Bias for General Loss Functions
paper_content:
When using squared error loss, bias and variance and their decomposition of prediction error are well understood and widely used concepts. However, there is no universally accepted definition for other loss functions. Numerous attempts have been made to extend these concepts beyond squared error loss. Most approaches have focused solely on 0-1 loss functions and have produced significantly different definitions. These differences stem from disagreement as to the essential characteristics that variance and bias should display. This paper suggests an explicit list of rules that we feel any “reasonable” set of definitions should satisfy. Using this framework, bias and variance definitions are produced which generalize to any symmetric loss function. We illustrate these statistics on several loss functions with particular emphasis on 0-1 loss. We conclude with a discussion of the various definitions that have been proposed in the past as well as a method for estimating these quantities on real data sets.
---
paper_title: Generalization error of ensemble estimators
paper_content:
It has been empirically shown that a better estimate with less generalization error can be obtained by averaging outputs of multiple estimators. This paper presents an analytical result for the generalization error of ensemble estimators. First, we derive a general expression of the ensemble generalization error by using factors of interest (bias, variance, covariance, and noise variance) and show how the generalization error is affected by each of them. Some special cases are then investigated. The result of a simulation is shown to verify our analytical result. A practically important problem of the ensemble approach, ensemble dilemma, is also discussed.
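The analytical result referred to above is commonly written as the bias-variance-covariance decomposition; for a simple average of M estimators, a deterministic target d, and expectations taken over training sets it reads:

```latex
% Simple-average ensemble \bar{f} = \frac{1}{M}\sum_{i=1}^{M} f_i:
E\big[(\bar{f} - d)^2\big] \;=\; \overline{\mathrm{bias}}^{\,2}
  \;+\; \frac{1}{M}\,\overline{\mathrm{var}}
  \;+\; \Big(1 - \frac{1}{M}\Big)\,\overline{\mathrm{covar}},
\qquad
\overline{\mathrm{bias}} = \frac{1}{M}\sum_i \big(E[f_i] - d\big),\quad
\overline{\mathrm{var}} = \frac{1}{M}\sum_i \mathrm{Var}(f_i),\quad
\overline{\mathrm{covar}} = \frac{1}{M(M-1)}\sum_i\sum_{j \neq i} \mathrm{Cov}(f_i, f_j).
```

As M grows the individual-variance term shrinks while the covariance term does not, which is why reducing error correlation among members is the lever that diversity-creation methods try to pull.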
---
paper_title: Variance and Bias for General Loss Functions
paper_content:
When using squared error loss, bias and variance and their decomposition of prediction error are well understood and widely used concepts. However, there is no universally accepted definition for other loss functions. Numerous attempts have been made to extend these concepts beyond squared error loss. Most approaches have focused solely on 0-1 loss functions and have produced significantly different definitions. These differences stem from disagreement as to the essential characteristics that variance and bias should display. This paper suggests an explicit list of rules that we feel any “reasonable” set of definitions should satisfy. Using this framework, bias and variance definitions are produced which generalize to any symmetric loss function. We illustrate these statistics on several loss functions with particular emphasis on 0-1 loss. We conclude with a discussion of the various definitions that have been proposed in the past as well as a method for estimating these quantities on real data sets.
---
| Title: Diversity Creation Methods: A Survey and Categorisation
Section 1: Introduction
Description 1: Provide an overview of the importance of creating diverse sets of classifiers, the challenges involved, and a summary of what this paper aims to achieve.
Section 2: When is an Ensemble Better than a Single Learner?
Description 2: Discuss why ensembles with diverse errors tend to perform better than single learners, covering both regression and classification contexts.
Section 3: In a Regression Context
Description 3: Present the historical context and foundational studies on combining regression estimators, and describe the mathematical frameworks used to understand diversity in regression.
Section 4: The Ambiguity Decomposition
Description 4: Explain the Ambiguity Decomposition, its significance for ensemble research, and how it provides a simple expression for error correlation effects in an ensemble.
Section 5: Bias, Variance and Covariance
Description 5: Discuss the Bias-Variance-Covariance decomposition, detailing the components of bias, variance, and covariance, and their effect on ensemble error.
Section 6: The Connection Between Ambiguity and Covariance
Description 6: Illustrate the mathematical and conceptual link between the Ambiguity Decomposition and the Bias-Variance-Covariance decomposition.
Section 7: In a Classification Context
Description 7: Explore the challenges and subtleties of quantifying diversity in classification problems, comparing it to the regression context.
Section 8: Ordinal Outputs
Description 8: Review theoretical work by Tumer and Ghosh on averaging combination rules and posterior probability estimates for classification ensembles.
Section 9: Non-Ordinal Outputs
Description 9: Examine methods and theoretical results for handling non-ordinal classifier outputs, covering topics like majority voting and classification error diversity.
Section 10: A Heuristic Metric for Classification Error Diversity?
Description 10: Summarize heuristic metrics and empirical studies aimed at measuring classification error diversity, including Sharkey's levels of diversity and Kuncheva's metrics.
Section 11: Towards A Taxonomy of Methods for Creating Diversity
Description 11: Propose a new taxonomy for categorising diversity creation methods in ensembles, split into explicit and implicit methods, and further divided into starting points, accessible hypotheses, and traversal methods in hypothesis space.
Section 12: How to Categorise Multiple Classifier Systems?
Description 12: Discuss different approaches to categorising multiple classifier systems, emphasizing the importance of a clear representation of the state of the field.
Section 13: How to Quantify Classification Diversity?
Description 13: Explore potential pathways to defining a bias-variance-covariance decomposition for zero-one loss functions to better understand classification error diversity.
Section 14: Conclusions
Description 14: Summarize the findings of the survey, highlighting the contribution of the categorisation of diversity creation methods and suggesting directions for future research. |
A Review of the Minimum Maximum Criterion for Optimal Bit Allocation among Dependent Quantizers | 6 | ---
paper_title: Object-adaptive vertex-based shape coding method
paper_content:
The paper presents a new technique for compactly representing the shape of a visual object within a scene. This method encodes the vertices of a polygonal approximation of the object's shape by adapting the representation to the dynamic range of the relative locations of the object's vertices and by exploiting an octant-based representation of each individual vertex. The object-level adaptation to the relative-location dynamic range provides the flexibility needed to efficiently encode objects of different sizes and with different allowed approximation distortion. At the vertex-level, the octant-based representation allows coding gains for vertices closely spaced relative to the object-level dynamic range. This vertex coding method may be used with techniques which code the polygonal approximation error for further gains in coding efficiency. Results are shown which demonstrate the effectiveness of the vertex encoding method. The rate-distortion comparisons presented show that the technique's adaptive nature allows it to operate efficiently over a wide range of rates and distortions and across a variety of input material, whereas other methods are efficient over more limited conditions.
---
paper_title: MPEG-4 and Rate-Distortion-Based Shape-Coding Techniques
paper_content:
We address the problem of the efficient encoding of object boundaries. This problem is becoming increasingly important in applications such as content-based storage and retrieval, studio and television postproduction, and mobile multimedia applications. The MPEG-4 visual standard will allow the transmission of arbitrarily shaped video objects. The techniques developed for shape coding within the MPEG-4 standardization effort are described and compared first. A framework for the representation of shapes using their contours is presented next. Such representations are achieved using curves of various orders, and they are optimal in the rate-distortion sense. Finally, conclusions are drawn.
---
paper_title: Bit allocation for dependent quantization with applications to multiresolution and MPEG video coders.
paper_content:
We address the problem of efficient bit allocation in a dependent coding environment. While optimal bit allocation for independently coded signal blocks has been studied in the literature, we extend these techniques to the more general temporally and spatially dependent coding scenarios. Of particular interest are the topical MPEG video coder and multiresolution coders. Our approach uses an operational rate-distortion (R-D) framework for arbitrary quantizer sets. We show how a certain monotonicity property of the dependent R-D curves can be exploited in formulating fast ways to obtain optimal and near-optimal solutions. We illustrate the application of this property in specifying intelligent pruning conditions to eliminate suboptimal operating points for the MPEG allocation problem, for which we also point out fast nearly-optimal heuristics. Additionally, we formulate an efficient allocation strategy for multiresolution coders, using the spatial pyramid coder as an example. We then extend this analysis to a spatio-temporal 3-D pyramidal coding scheme. We tackle the compatibility problem of optimizing full-resolution quality while simultaneously catering to subresolution bit rate or quality constraints. We show how to obtain fast solutions that provide nearly optimal (typically within 0.3 dB) full resolution quality while providing much better performance for the subresolution layer (typically 2-3 dB better than the full-resolution optimal solution).
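To make the Lagrangian machinery concrete, here is a sketch of operational rate-distortion bit allocation for the simpler case of independent blocks; the paper's contribution is extending this to dependent quantizers, which the sketch does not attempt. Each block's admissible quantizers are given as (rate, distortion) pairs; for a fixed lambda every block minimises d + lambda*r, and lambda is adjusted by bisection to meet the rate budget. The example data and parameter values are illustrative.

```python
def allocate_bits_lagrangian(rd_points, rate_budget, lam_lo=0.0, lam_hi=1e6, iters=50):
    """Lagrangian bit allocation for independent blocks.  rd_points[b] is a list of
    (rate, distortion) pairs for block b's admissible quantizers.  Only operating
    points on the convex hull of each block's R-D curve are reachable this way."""
    def pick(lam):
        choice, R, D = [], 0.0, 0.0
        for points in rd_points:
            r, d = min(points, key=lambda p: p[1] + lam * p[0])   # minimise d + lambda*r
            choice.append((r, d))
            R += r
            D += d
        return choice, R, D

    for _ in range(iters):                     # bisection on lambda
        lam = 0.5 * (lam_lo + lam_hi)
        choice, R, D = pick(lam)
        if R > rate_budget:
            lam_lo = lam                       # too many bits: penalise rate harder
        else:
            lam_hi = lam
    return pick(lam_hi)                        # feasible (rate <= budget) solution

# Tiny example: two blocks, three hypothetical quantizers each, given as (rate, distortion).
blocks = [[(1, 9.0), (2, 4.0), (4, 1.0)], [(1, 16.0), (3, 6.0), (5, 2.0)]]
print(allocate_bits_lagrangian(blocks, rate_budget=6))
```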
---
paper_title: A Video Compression Scheme with Optimal Bit Allocation Among Segmentation, Motion, and Residual Error
paper_content:
We present a theory for the optimal bit allocation among quadtree (QT) segmentation, displacement vector field (DVF), and displaced frame difference (DFD). The theory is applicable to variable block size motion-compensated video coders (VBSMCVC), where the variable block sizes are encoded using the QT structure, the DVF is encoded by first-order differential pulse code modulation (DPCM), the DFD is encoded by a block-based scheme, and an additive distortion measure is employed. We derive an optimal scanning path for a QT that is based on a Hilbert curve. We consider the case of a lossless VBSMCVC first, for which we develop the optimal bit allocation algorithm using dynamic programming (DP). We then consider a lossy VBSMCVC, for which we use Lagrangian relaxation, and show how an iterative scheme, which employs the DP-based solution, can be used to find the optimal solution. We finally present a VBSMCVC, which is based on the proposed theory, which employs a DCT-based DFD encoding scheme. We compare the proposed coder with H.263. The results show that it outperforms H.263 significantly in the rate distortion sense, as well as in the subjective sense.
---
| Title: A Review of the Minimum Maximum Criterion for Optimal Bit Allocation among Dependent Quantizers
Section 1: Introduction
Description 1: Provide an overview of the paper’s purpose, the problems being addressed, and the significance of the MINMAX criterion in contrast to the MINAVE criterion.
Section 2: Notation, Assumptions and Problem Formulation
Description 2: Introduce the necessary notation, underlying assumptions, and the mathematical formulation of the optimal bit allocation problem in a dependent coding framework.
Section 3: The MINAVE Criterion
Description 3: Explain how the optimal bit allocation problem can be solved for the MINAVE criterion using the Lagrange multiplier method and dynamic programming (DP).
Section 4: The MINMAX Criterion
Description 4: Describe the general algorithm for optimal bit allocation among dependent quantizers for the MINMAX criterion, including the minimum rate problem, the minimum distortion problem, and methods for breaking ties.
Section 5: Applications
Description 5: Present several examples from different areas of data compression to compare the MINAVE and MINMAX approaches, including still-frame compression, inter-mode frame compression, and shape coding.
Section 6: Conclusions
Description 6: Summarize the comparison between the MINAVE and MINMAX criteria, highlighting the advantages of the MINMAX approach and its potential applications in various coding schemes. |
Visualization of linear time-oriented data: a survey | 7 | ---
paper_title: The Visual Display Of Quantitative Information
paper_content:
A classic book on the theory and practice of statistical graphics, setting out principles for displaying quantitative data clearly and efficiently, including graphical integrity, maximising the data-ink ratio and avoiding chartjunk, and illustrating them with a wide range of historical charts, maps and time-series displays.
---
paper_title: Visualization using timelines
paper_content:
A timeline is a linear, graphical visualization of events over time. For example, in concurrent application, events would represent state changes for some system object (such as a task or variable). A timeline display generator creates the graphical visualization from some record of events. This paper reports on a model for timeline display generators based on a formal model of event history and the objectives of timeline visualization. In this model, any timeline display generator is completely described through the definition of a set of mathematical functions. The exact characteristics and flexibility of a particular implementation of a timeline display generator, depends on the way in which these functions have been implemented. The current prototype, xtg, (Timeline Display Generator for X-windows) serves as an example implementation of these ideas. Characteristics of xtg are presented, and its use in the analysis of a real-world client-server application is discussed. Xtg has been applied to several other applications to-date and is being applied by several telecommunications companies to areas ranging from software process analysis to call trace data analysis.
---
paper_title: Timelines : An Interactive System for the Collection and Visualization of Temporal Data
paper_content:
Human‐Computer Interaction (HCI) researchers collect, analyze, and interpret information about user behavior in order to make inferences about the systems they are designing. This paper describes four paradigm shifts in HCI research and discusses how each influences data collection and analysis. Based on these trends the implications for tool design are outlined. We then describe the Timelines system, which was created to address both these paradigm shifts and two years of user testing results from the VANNA system [4, 5, 6]. The Timelines system is an interactive data collection and visualization tool which supports both quantitative and qualitative analysis, and exploratory sequential data analysis. It accepts many diverse types of temporal data and provides the user with powerful data manipulation and color, graphical visualization tools. We summarize four representative case studies which reflect different methodological approaches and research goals, typical of our user community. From this the implications for the design of our system (and for data collection and analysis tools in general) are described.
---
paper_title: LifeLines: visualizing personal histories
paper_content:
LifeLines provide a general visualization environment for personal histories that can be applied to medical and court records, professional histories and other types of biographical data. A one screen overview shows multiple facets of the records. Aspects, for example medical conditions or legal cases, are displayed as individual time lines, while icons indicate discrete events, such as physician consultations or legal reviews. Line color and thickness illustrate relationships or significance, rescaling tools and filters allow users to focus on part of the information. LifeLines reduce the chances of missing information, facilitate spotting anomalies and trends, streamline access to details, while remaining tailorable and easily transferable between applications. The paper describes the use of LifeLines for youth records of the Maryland Department of Juvenile Justice and also for medical records. User's feedback was collected using a Visual Basic prototype for the youth record. Techniques to deal with complex records are reviewed and issues of a standard personal record format are discussed.
---
paper_title: TimeScape: a time machine for the desktop environment
paper_content:
This paper describes a new desktop metaphor/system called TimeScape. A user of TimeScape can spatially arrange information on the desktop. Any desktop item can be removed at any time, and the system supports time-travel to the past or the future of the desktop. The combination of spatial information arrangement and chronological navigation allows the user to organize and archive electronic information without being bothered by document folders or file classification problems.
---
paper_title: Visualization using timelines
paper_content:
A timeline is a linear, graphical visualization of events over time. For example, in concurrent application, events would represent state changes for some system object (such as a task or variable). A timeline display generator creates the graphical visualization from some record of events. This paper reports on a model for timeline display generators based on a formal model of event history and the objectives of timeline visualization. In this model, any timeline display generator is completely described through the definition of a set of mathematical functions. The exact characteristics and flexibility of a particular implementation of a timeline display generator, depends on the way in which these functions have been implemented. The current prototype, xtg, (Timeline Display Generator for X-windows) serves as an example implementation of these ideas. Characteristics of xtg are presented, and its use in the analysis of a real-world client-server application is discussed. Xtg has been applied to several other applications to-date and is being applied by several telecommunications companies to areas ranging from software process analysis to call trace data analysis.
---
paper_title: Visualizing the performance of parallel programs
paper_content:
ParaGraph, a software tool that provides a detailed, dynamic, graphical animation of the behavior of message-passing parallel programs and graphical summaries of their performance, is presented. ParaGraph animates trace information from actual runs to depict behavior and obtain the performance summaries. It provides twenty-five perspectives on the same data, lending insight that might otherwise be missed. ParaGraph's features are described, its use is explained, its software design is briefly discussed, and its displays are examined in some detail. Future work on ParaGraph is indicated.
---
paper_title: LifeLines: visualizing personal histories
paper_content:
LifeLines provide a general visualization environment for personal histories that can be applied to medical and court records, professional histories and other types of biographical data. A one screen overview shows multiple facets of the records. Aspects, for example medical conditions or legal cases, are displayed as individual time lines, while icons indicate discrete events, such as physician consultations or legal reviews. Line color and thickness illustrate relationships or significance, rescaling tools and filters allow users to focus on part of the information. LifeLines reduce the chances of missing information, facilitate spotting anomalies and trends, streamline access to details, while remaining tailorable and easily transferable between applications. The paper describes the use of LifeLines for youth records of the Maryland Department of Juvenile Justice and also for medical records. User's feedback was collected using a Visual Basic prototype for the youth record. Techniques to deal with complex records are reviewed and issues of a standard personal record format are discussed.
---
paper_title: Lifestreams: a storage model for personal data
paper_content:
Conventional software systems, such as those based on the “desktop metaphor,” are ill-equipped to manage the electronic information and events of the typical computer user. We introduce a new metaphor, Lifestreams, for dynamically organizing a user's personal workspace. Lifestreams uses a simple organizational metaphor, a time-ordered stream of documents, as an underlying storage system. Stream filters are used to organize, monitor and summarize information for the user. Combined, they provide a system that subsumes many separate desktop applications. This paper describes the Lifestreams model and our prototype system.
---
paper_title: VRML history: storing and browsing temporal 3D-worlds
paper_content:
Spatio-temporal data are presented and explored by VR-based visualization systems which offer 3D-navigation and time-navigation for better immersion and analysis. If the visualization results are disseminated on the WWW, they are mostly transformed into videos or, recently, into animated VRML-files which neither support 3D-navigation nor time navigation nor a time-referenced data representation. In this paper, the script language VRML History is proposed which supports the description of spatio-temporal worlds on the internet by conceptually extending VRML with a new time dimension. This is realized by a set of new nodes representing temporal geometries and time references, and a set of Java-classes extending standard VRML-browsers to perform time navigation.
---
paper_title: The perspective wall: detail and context smoothly integrated
paper_content:
Tasks that involve large information spaces overwhelm workspaces that do not support efficient use of space and time. For example, case studies indicate that information often contains linear components, which can result in 2D layouts with wide, inefficient aspect ratios. This paper describes a technique called the Perspective Wall for visualizing linear information by smoothly integrating detailed and contextual views. It uses hardware support for 3D interactive animation to fold wide 2D layouts into intuitive 3D visualizations that have a center panel for detail and two perspective panels for context. The resulting visualization supports efficient use of space and time.
---
paper_title: PeopleGarden: creating data portraits for users
paper_content:
Many on-line interaction environments have a large number of users. It is difficult for the participants, especially new ones, to form a clear mental image about those with whom they are interacting. How can we compactly convey information about these participants to each other? We propose the data portrait, a novel graphical representation of users based on their past interactions. Data portraits can inform users about each other and the overall social environment. We use a flower metaphor for creating individual data portraits, and a garden metaphor for combining these portraits to represent an on-line environment. We will review previous work in visualizing both individuals and groups. We will then describe our visualizations, explain how to create them, and show how they can be used to address user questions.
---
paper_title: Visualize a port in Africa
paper_content:
Techniques to visualize quantitative discrete event simulation input and output data are presented. General concepts connected with graphical excellence are discussed in a simulation context. Brief examples of graphs and visualizations are presented from a classic model of African port operations.
---
paper_title: Generalized fisheye views
paper_content:
In many contexts, humans often represent their own “neighborhood” in great detail, yet only major landmarks further away. This suggests that such views (“fisheye views”) might be useful for the computer display of large information structures like programs, data bases, online text, etc. This paper explores fisheye views presenting, in turn, naturalistic studies, a general formalism, a specific instantiation, a resulting computer program, example displays and an evaluation.
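Furnas's formalism assigns each item a degree of interest DOI(x | focus) = API(x) - D(x, focus), an a-priori importance minus a distance to the current focus, and displays only items above a threshold. The toy list, positional distance and importance values below are illustrative assumptions, not taken from the paper.

```python
def fisheye_view(items, focus_index, api, distance, threshold=0.0):
    """Generalized fisheye filter: keep item x iff API(x) - D(x, focus) >= threshold."""
    return [x for i, x in enumerate(items)
            if api(x) - distance(i, focus_index) >= threshold]

# Toy outline: (label, a-priori importance); distance is simply positional here.
items = [("Chapter 1", 3), ("1.1", 1), ("1.1.1", 0), ("1.2", 1), ("Chapter 2", 3), ("2.1.1", 0)]
print(fisheye_view(items, focus_index=2,
                   api=lambda x: x[1],
                   distance=lambda i, f: abs(i - f)))
# Detail near the focus ("1.1.1") survives, while equally detailed but remote
# items ("2.1.1") are elided; the landmark chapters remain visible throughout.
```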
---
paper_title: Visualization using timelines
paper_content:
A timeline is a linear, graphical visualization of events over time. For example, in concurrent application, events would represent state changes for some system object (such as a task or variable). A timeline display generator creates the graphical visualization from some record of events. This paper reports on a model for timeline display generators based on a formal model of event history and the objectives of timeline visualization. In this model, any timeline display generator is completely described through the definition of a set of mathematical functions. The exact characteristics and flexibility of a particular implementation of a timeline display generator, depends on the way in which these functions have been implemented. The current prototype, xtg, (Timeline Display Generator for X-windows) serves as an example implementation of these ideas. Characteristics of xtg are presented, and its use in the analysis of a real-world client-server application is discussed. Xtg has been applied to several other applications to-date and is being applied by several telecommunications companies to areas ranging from software process analysis to call trace data analysis.
---
paper_title: VRML history: storing and browsing temporal 3D-worlds
paper_content:
Spatio-temporal data are presented and explored by VR-based visualization systems which offer 3D-navigation and time-navigation for better immersion and analysis. If the visualization results are disseminated on the WWW, they are mostly transformed into videos or, recently, into animated VRML-files which neither support 3D-navigation nor time navigation nor a time-referenced data representation. In this paper, the script language VRML History is proposed which supports the description of spatio-temporal worlds on the internet by conceptually extending VRML with a new time dimension. This is realized by a set of new nodes representing temporal geometries and time references, and a set of Java-classes extending standard VRML-browsers to perform time navigation.
---
paper_title: TimeScape: a time machine for the desktop environment
paper_content:
This paper describes a new desktop metaphor/system called TimeScape. A user of TimeScape can spatially arrange information on the desktop. Any desktop item can be removed at any time, and the system supports time-travel to the past or the future of the desktop. The combination of spatial information arrangement and chronological navigation allows the user to organize and archive electronic information without being bothered by document folders or file classification problems.
---
paper_title: Time-machine computing: a time-centric approach for the information environment
paper_content:
This paper describes the concept of Time-Machine Computing (TMC) , a time-centric approach to organizing information on computers. A system based on Time-Machine Computing allows a user to visit the past and the future states of computers. When a user needs to refer to a document that he/she was working on at some other time, he/she can travel in the time dimension and the system restores the computer state at that time. Since the user's activities on the system are automatically archived, the user's daily workspace is seamlessly integrated into the information archive. The combination of spatial information management of the desktop metaphor and time traveling allows a user to organize and archive information without being bothered by folder hierarchies or the file classification problems that are common in today's desktop environments. TMC also provides a mechanism for linking multiple applications and external information sources by exchanging time information. This paper describes the key features of TMC, a time-machine desktop environment called “TimeScape,” and several time-oriented application integration examples.
---
paper_title: VRML history: storing and browsing temporal 3D-worlds
paper_content:
Spatio-temporal data are presented and explored by VR-based visualization systems which offer 3D-navigation and time-navigation for better immersion and analysis. If the visualization results are disseminated on the WWW, they are mostly transformed into videos or, recently, into animated VRML-files which neither support 3D-navigation nor time navigation nor a time-referenced data representation. In this paper, the script language VRML History is proposed which supports the description of spatio-temporal worlds on the internet by conceptually extending VRML with a new time dimension. This is realized by a set of new nodes representing temporal geometries and time references, and a set of Java-classes extending standard VRML-browsers to perform time navigation.
---
paper_title: TimeScape: a time machine for the desktop environment
paper_content:
This paper describes a new desktop metaphor/system called TimeScape. A user of TimeScape can spatially arrange information on the desktop. Any desktop item can be removed at any time, and the system supports time-travel to the past or the future of the desktop. The combination of spatial information arrangement and chronological navigation allows the user to organize and archive electronic information without being bothered by document folders or file classification problems.
---
paper_title: The eyes have it: a task by data type taxonomy for information visualizations
paper_content:
A useful starting point for designing advanced graphical user interfaces is the visual information seeking Mantra: overview first, zoom and filter, then details on demand. But this is only a starting point in trying to understand the rich and varied set of information visualizations that have been proposed in recent years. The paper offers a task by data type taxonomy with seven data types (one, two, three dimensional data, temporal and multi dimensional data, and tree and network data) and seven tasks (overview, zoom, filter, details-on-demand, relate, history, and extracts).
---
paper_title: LifeLines: visualizing personal histories
paper_content:
LifeLines provide a general visualization environment for personal histories that can be applied to medical and court records, professional histories and other types of biographical data. A one screen overview shows multiple facets of the records. Aspects, for example medical conditions or legal cases, are displayed as individual time lines, while icons indicate discrete events, such as physician consultations or legal reviews. Line color and thickness illustrate relationships or significance, rescaling tools and filters allow users to focus on part of the information. LifeLines reduce the chances of missing information, facilitate spotting anomalies and trends, streamline access to details, while remaining tailorable and easily transferable between applications. The paper describes the use of LifeLines for youth records of the Maryland Department of Juvenile Justice and also for medical records. User's feedback was collected using a Visual Basic prototype for the youth record. Techniques to deal with complex records are reviewed and issues of a standard personal record format are discussed.
---
paper_title: Temporal and Real-Time Databases: A Survey
paper_content:
A temporal database contains time-varying data. In a real-time database transactions have deadlines or timing constraints. In this paper we review the substantial research in these two previously separate areas. First we characterize the time domain; then we investigate temporal and real-time data models. We evaluate temporal and real-time query languages along several dimensions. We examine temporal and real-time DBMS implementation. Finally, we summarize major research accomplishments to date and list several unanswered research questions.
---
| Title: Visualization of Linear Time-Oriented Data: A Survey
Section 1: Introduction
Description 1: This section explains the purpose and context of visualizing linear time-oriented data and gives an overview of the paper's structure.
Section 2: A Temporal Framework
Description 2: This section introduces the temporal framework used to classify temporal data models and visual techniques.
Section 3: Slice Visualization
Description 3: This section discusses various visualizations that support slice views of time-linear data and provides examples of systems implementing them.
Section 4: Periodic Slice Visualization
Description 4: This section covers visual techniques proposed for visualizing periodic patterns in historical data, such as personal calendars and time-series data.
Section 5: Snapshot Visualization
Description 5: This section describes techniques for the snapshot visualization of data valid at a single instant or interval, highlighting instant facts.
Section 6: Temporal Visual Queries
Description 6: This section presents systems combining query and visualization phases for temporal data, detailing examples of visual query interfaces.
Section 7: Conclusions
Description 7: This section summarizes the findings and effectiveness of different visual techniques and discusses future research directions for visualizing time-dependent information. |
A Review of Intelligent Practices for Irrigation Prediction | 5 | ---
paper_title: Participatory decision support for agricultural management. A case study from Sri Lanka
paper_content:
Abstract Agricultural policy makers were helped to construct and use a decision support system (DSS) to identify problems and assess potential solutions for a river basin in Sri Lanka. Through building the DSS themselves, policy makers should reach better decisions. The main aim of the study was to test whether this could be done using a tool called a Bayesian network (BN) which is accessible to non-specialists and able to provide a generic, flexible framework for the construction of DSS. Results from a workshop indicated that the approach showed promise, providing a common framework for discussion and allowing policy makers to structure complex systems from a multi-disciplinary perspective. The need for a multi-disciplinary perspective was clearly demonstrated. The study also suggested improvements to the ways in which BNs can be used in practice. Further workshops with farmers highlighted the importance of involving them in the planning process and suggested more effective ways of doing this while using BNs.
---
paper_title: Framewise phoneme classification with bidirectional LSTM and other neural network architectures
paper_content:
In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.
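To make the architecture concrete, here is a minimal Keras sketch of a bidirectional LSTM that emits one class prediction per frame; the layer width, feature dimension and class count are illustrative assumptions, not the paper's TIMIT configuration.

import tensorflow as tf

n_frames, n_features, n_classes = 100, 39, 61   # assumed shapes for illustration

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_frames, n_features)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
    # one softmax prediction per frame (framewise classification)
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_classes, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])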
---
paper_title: Learning to forget: continual prediction with LSTM
paper_content:
Long short-term memory (LSTM) can solve many tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams without explicitly marked sequence ends. Without resets, the internal state values may grow indefinitely and eventually cause the network to break down. Our remedy is an adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review an illustrative benchmark problem on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve a continual version of that problem. LSTM with forget gates, however, easily solves it in an elegant way.
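A minimal numpy sketch of a single LSTM step with the forget gate discussed above; the stacked-weight parameterisation is a generic textbook convention rather than the authors' implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # W, U, b stack parameters for the input (i), forget (f), output (o)
    # gates and the cell candidate (g), in that order
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g        # the forget gate scales down the old cell state
    h = o * np.tanh(c)
    return h, c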
---
paper_title: Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition
paper_content:
Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) architecture that has been designed to address the vanishing and exploding gradient problems of conventional RNNs. Unlike feedforward neural networks, RNNs have cyclic connections making them powerful for modeling sequences. They have been successfully used for sequence labeling and sequence prediction tasks, such as handwriting recognition, language modeling, phonetic labeling of acoustic frames. However, in contrast to the deep neural networks, the use of RNNs in speech recognition has been limited to phone recognition in small scale tasks. In this paper, we present novel LSTM based RNN architectures which make more effective use of model parameters to train acoustic models for large vocabulary speech recognition. We train and compare LSTM, RNN and DNN models at various numbers of parameters and configurations. We show that LSTM models converge quickly and give state of the art speech recognition performance for relatively small sized models.
---
paper_title: Energy gradient line approach for direct hydraulic calculation in drip irrigation design
paper_content:
Direct calculations can be made for all emitter flows along a lateral line and in a submain unit based on an Energy Gradient Line (EGL) approach. Errors caused by the EGL approach were evaluated by a computer simulation. A Revised Energy Gradient Line (REGL) approach, developed using a mean discharge approximation, can reduce the errors and match with the results from a Step-by-Step (SBS) calculation for all emitters in a drip system. The developed equations can be used for computerized design of drip irrigation systems.
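For orientation, a hedged sketch of the Step-by-Step (SBS) lateral calculation that the EGL approach approximates: starting from an assumed head at the distal emitter, emitter flows are computed and friction losses accumulated upstream. The emitter constant and exponent, pipe data and the Hazen-Williams friction formula are generic illustrative choices, not the paper's design values.

def sbs_lateral(h_end=10.0, n_emitters=20, spacing=1.0, d=0.016, c_hw=140.0,
                kd=3.0e-7, x=0.5):
    # h_end: head at the last emitter (m); d: pipe diameter (m)
    # emitter equation: q = kd * h**x, with q in m3/s and h in m
    h, q_total = h_end, 0.0
    heads, flows = [], []
    for _ in range(n_emitters):
        q = kd * h ** x
        q_total += q
        heads.append(h)
        flows.append(q)
        # Hazen-Williams friction loss over one emitter spacing (SI units)
        hf = 10.67 * spacing * (q_total / c_hw) ** 1.852 / d ** 4.87
        h += hf                       # head rises moving upstream toward the inlet
    return heads[::-1], flows[::-1]   # reorder from inlet to distal end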
---
paper_title: Economic Liberalisation and Indian Agriculture: A District-Level Study
paper_content:
Contents: Foreword (R. Radhakrishnan); Introduction; Economic Liberalisation and Indian Agriculture: State-wise Analysis; Levels of Agricultural Output: District-wise Analysis; Spatial Pattern of Growth of Agricultural Output: District-level Analysis; Changes in Agricultural Labour Productivity: State- and District-level Analyses; Analytical Findings and Recommendations; Appendices; Annexures; Bibliography; Index.
---
paper_title: Improved irrigation water demand forecasting using a soft-computing hybrid model
paper_content:
Recently, Computational Neural Networks (CNNs) and fuzzy inference systems have been successfully applied to time series forecasting. In this study, the performance of a hybrid methodology combining a feed-forward CNN, fuzzy logic and a genetic algorithm was analysed for forecasting one-day-ahead daily water demand at irrigation districts, considering that only flows from previous days are available for model calibration. Individual forecasting models were developed using historical time series data from the Fuente Palmera irrigation district located in Andalucia, southern Spain. These models included univariate autoregressive CNNs trained with the Levenberg–Marquardt algorithm (LM). The individual model forecasts were then corrected via a fuzzy logic approach whose parameters were adjusted using a genetic algorithm in order to improve forecasting accuracy. For the purpose of comparison, this hybrid methodology was also applied with univariate autoregressive CNN models trained with the Extended-Delta-Bar-Delta algorithm (EDBD) and calibrated in a previous study in the same irrigation district. A multicriteria evaluation with several statistics and absolute error measures showed that the hybrid model performed significantly better than univariate and multivariate autoregressive CNNs.
---
paper_title: Comparison of feedforward and recurrent neural network language models
paper_content:
Research on language modeling for speech recognition has increasingly focused on the application of neural networks. Two competing concepts have been developed: On the one hand, feedforward neural networks representing an n-gram approach, on the other hand recurrent neural networks that may learn context dependencies spanning more than a fixed number of predecessor words. To the best of our knowledge, no comparison has been carried out between feedforward and state-of-the-art recurrent networks when applied to speech recognition. This paper analyzes this aspect in detail on a well-tuned French speech recognition task. In addition, we propose a simple and efficient method to normalize language model probabilities across different vocabularies, and we show how to speed up training of recurrent neural networks by parallelization.
---
paper_title: Hybrid speech recognition with Deep Bidirectional LSTM
paper_content:
Deep Bidirectional LSTM (DBLSTM) recurrent neural networks have recently been shown to give state-of-the-art performance on the TIMIT speech database. However, the results in that work relied on recurrent-neural-network-specific objective functions, which are difficult to integrate with existing large vocabulary speech recognition systems. This paper investigates the use of DBLSTM as an acoustic model in a standard neural network-HMM hybrid system. We find that a DBLSTM-HMM hybrid gives equally good results on TIMIT as the previous work. It also outperforms both GMM and deep network benchmarks on a subset of the Wall Street Journal corpus. However, the improvement in word error rate over the deep network is modest, despite a great increase in frame-level accuracy. We conclude that the hybrid approach with DBLSTM appears to be well suited for tasks where acoustic modelling predominates. Further investigation needs to be conducted to understand how to better leverage the improvements in frame-level accuracy towards better word error rates.
---
paper_title: Automation of Irrigation System based on Wi-Fi Technology and IOT
paper_content:
Background/Objectives: The main intention is to develop an automated system to supply water for home gardening and for irrigation in farm fields. Methods and Analysis: This is done with the help of a soil moisture sensor and a temperature sensor fixed at the root zone of the plants. The values detected by these sensors are transmitted to a base station. The key role of the base station is to collect data from the field station, upload those values to the internet using Wi-Fi technology, and notify the user about abnormal conditions such as low moisture or high temperature. Findings: This irrigation system has been validated under different climates with various levels of moisture content, especially in red chilli crops. Application/Improvement: Home gardening is a hobby for many people, and the same system works for irrigation in agricultural fields.
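As a rough illustration, a minimal sketch of the kind of threshold rule such a base station could apply to the sensed values; the thresholds and the alert message are hypothetical placeholders rather than the system described above.

MOISTURE_MIN = 30.0   # percent volumetric water content (assumed threshold)
TEMP_MAX = 38.0       # degrees Celsius (assumed threshold)

def decide(moisture, temperature):
    # returns (pump_on, alert) for a single sensor sample
    pump_on = moisture < MOISTURE_MIN
    alert = None
    if moisture < MOISTURE_MIN or temperature > TEMP_MAX:
        alert = "low moisture (%.1f%%) or high temperature (%.1f C)" % (moisture, temperature)
    return pump_on, alert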
---
paper_title: Number of kernels in wheat crops and the influence of solar radiation and temperature
paper_content:
The number of kernels per m² (K) in well-managed and watered wheat crops was studied using results of experiments in Mexico and Australia in which short spring wheat cultivars were subjected to independent variation in radiation, largely via artificial shading, and in temperature. Also, crops subjected to differences in weather (year), sowing date and location within Mexico revealed responses to the natural and simultaneous variation which occurs in radiation and temperature. Responses in K were interpreted in terms of spike dry weight at anthesis (g/m²) and number of kernels per unit of spike weight. K was linearly and most closely related to incident solar radiation in the 30 days or so preceding anthesis, herein termed the spike growth period; for the cultivar Yecora 70 with full ground cover the slope was 19 kernels/MJ. This response seemed largely due to a linear response of crop growth rate to intercepted solar radiation. The proportion of dry weight increase partitioned to the spike increased somewhat with reduced radiation. Also, increasing temperature in the range 14–22 °C during this period reduced K (slope approximately 4% per °C at 15 °C). The cause appeared to be lower spike dry weight due to accelerated development. The number of kernels per unit spike weight at anthesis was little affected by radiation or temperature, and averaged 78 ± 2/g for the cultivar Yecora 70. With natural variation in radiation and temperature, K was closely and linearly correlated with the ratio of mean daily incident or intercepted radiation to mean temperature above 4.5 °C in the 30 days preceding anthesis. As this ratio, termed the photothermal quotient, increased from 0.5 to 2.0 MJ/m²/day/degree, K increased from 70 × 10² to 196 × 10²/m². These responses of K to weather, sowing date and location were closely associated with variation in spike dry weight. It was concluded that the ratio of solar radiation to temperature could be very useful for estimating K in wheat crop models. Also, the analysis of K determination in terms of spike dry weight appeared promising, and suggests that wheat physiologists should place greater emphasis on the growth period immediately before anthesis.
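As an illustration, a small sketch of the photothermal quotient (mean incident radiation divided by mean temperature above the 4.5 °C base over the pre-anthesis window); the linear K(PTQ) relation below merely interpolates the range quoted in the abstract and is not a published regression.

import numpy as np

def photothermal_quotient(radiation_mj_m2, temperature_c, base_temp=4.5):
    # daily arrays covering roughly the 30 days before anthesis
    radiation = np.asarray(radiation_mj_m2, dtype=float)
    temperature = np.asarray(temperature_c, dtype=float)
    return radiation.mean() / max(temperature.mean() - base_temp, 1e-9)

def kernels_per_m2(ptq):
    # straight-line interpolation between (0.5, 70e2) and (2.0, 196e2) kernels/m2
    return 70e2 + (ptq - 0.5) * (196e2 - 70e2) / (2.0 - 0.5)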
---
paper_title: Nonlinear temperature effects indicate severe damages to U.S. crop yields under climate change
paper_content:
The United States produces 41% of the world's corn and 38% of the world's soybeans. These crops comprise two of the four largest sources of caloric energy produced and are thus critical for world food supply. We pair a panel of county-level yields for these two crops, plus cotton (a warmer-weather crop), with a new fine-scale weather dataset that incorporates the whole distribution of temperatures within each day and across all days in the growing season. We find that yields increase with temperature up to 29°C for corn, 30°C for soybeans, and 32°C for cotton but that temperatures above these thresholds are very harmful. The slope of the decline above the optimum is significantly steeper than the incline below it. The same nonlinear and asymmetric relationship is found when we isolate either time-series or cross-sectional variations in temperatures and yields. This suggests limited historical adaptation of seed varieties or management practices to warmer temperatures because the cross-section includes farmers' adaptations to warmer climates and the time-series does not. Holding current growing regions fixed, area-weighted average yields are predicted to decrease by 30–46% before the end of the century under the slowest (B1) warming scenario and decrease by 63–82% under the most rapid warming scenario (A1FI) under the Hadley III model.
---
paper_title: Net and solar radiation relations over irrigated field crops
paper_content:
Abstract Many of the meteorological methods used to estimate evapotranspiration from crop surfaces require net radiation information. Since net radiation data are not generally available and solar radiation is measured at several locations throughout the world, it would be desirable to estimate net radiation from solar radiation. Consequently, a relation was sought between solar radiation and net radiation measured over irrigated field crops of alfalfa, barley, wheat, oats, cotton, and sorghum. Data collected under field conditions were analyzed by linear regression techniques. Standard deviation from regression was 0.02 ly/min for individual days and for individual crops. The regression equations changed from day to day and were therefore of little value for estimation purposes. Seasonal data for individual crops were pooled and analyzed by linear regression. The resulting standard errors were about twice as large as those for individual days, ranging from 0.02 to 0.05 ly/min. When the data for all crops and all days were pooled and analyzed by linear regression, the resulting standard error was 0.06 ly/min. Thus, estimating net radiation for any of the crops from solar radiation would result in a standard error of 38 ly, or approximately 10%, for a 12-hour day having 649 ly incoming radiation. The regression equation may be solved using either hourly or daily solar radiation data. In the absence of net radiation data either the pooled regression or the individual crop regressions may provide sufficiently accurate estimates for some applications, for example estimating evapotranspiration to be used in the design of irrigation projects. However, the error appears to be too large where daily evapotranspiration results are required. Therefore, measurements of net radiation are still desirable. Since reflected solar radiation is one of the components of net radiation, a better relation might be expected between net radiation and net solar radiation (incoming minus reflected solar radiation). The inclusion of the reflected solar radiation data does not reduce the standard errors. Calculation of albedos of the various surfaces from daily totals of incoming and reflected solar radiation indicated a range of 0.14 over wet soil to 0.24 over dry soil, and up to 0.27 over crop surfaces. The average albedo of crop surfaces was 0.24. Row crops tended to have lower albedos until the maximum canopy was developed; then the albedos were similar to those of continuous crops such as alfalfa. Broadleaf plants tended to have larger reflections than grasses. The crops studied rank in order of increasing albedos as sorghum, wheat, barley, oats, cotton, and alfalfa. Increasing the surface albedo may result in water conservation by reducing the amount of energy absorbed which could be used in evaporation. It appears that the greatest effect could be achieved by increasing the albedo of wet bare soil.
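A minimal sketch of the pooled linear regression of net radiation on solar radiation described above, fitted with numpy on synthetic placeholder data rather than the study's measurements.

import numpy as np

solar = np.array([0.2, 0.5, 0.8, 1.1, 1.4])      # ly/min, synthetic
net = np.array([0.05, 0.25, 0.45, 0.65, 0.85])   # ly/min, synthetic

slope, intercept = np.polyfit(solar, net, 1)
predicted = slope * solar + intercept
residual_sd = np.std(net - predicted, ddof=2)    # analogue of the reported ~0.06 ly/min scatter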
---
paper_title: Effect of mulch, irrigation, and soil type on water use and yield of maize
paper_content:
Abstract Tillage practices that maintain crop residues on the soil surface help reduce evaporation of soil water, which can benefit high water use crops such as maize ( Zea mays L.). Management practices, climatic conditions, and soil type may affect how well a crop responds to surface residue. We conducted experiments with short season maize in 1994 and 1995 in Bushland, TX, USA, utilizing a rain shelter facility that has lysimeters containing monolithic cores of the Pullman (fine, mixed, thermic Torrertic Paleustolls), the Ulysses (fine-silty, mixed, mesic Aridic Haplustolls), and the Amarillo (fine-loamy, mixed, thermic Aridic Paleustalfs) soil series. In 1994, the treatments were a flat wheat ( Triticum aestivum L.) straw and coconut ( Cocus nucifera L.) fiber mulch of 4 Mg ha −1 with infrequent irrigations totaling 25% and 75% of long-term average rainfall for the growing season (200 mm). The 1995 treatments were similar, but used a heavier mulch of 6.7 Mg ha −1 and more frequent irrigations totaling 60% and 100% of long-term average rainfall. The mulch was applied at the 3-leaf growth stage. Mean potential grass reference evapotranspiration for the vegetative and reproductive growth stages in 1994 was 6.6 and 6.3 mm day −1 , respectively, and in 1995 it was 6.8 and 7 mm day −1 , respectively. The mulched and bare soil surface treatments used similar amounts of water in each year. In 1994, mulch did not affect yield, yield components, or leaf area index (LAI). No significant differences occurred in plant available water (PAW) between mulched and bare soil treatments from emergence through harvest. In 1995, mulch increased grain yield by 17%, aboveground biomass by 19%, and grain water use efficiency (WUE) by 14% compared with bare soil treatments. Mulched treatments also maintained significantly greater PAW compared with bare soil treatments until near anthesis and, after anthesis, LAI was significantly greater in the mulched treatments compared with the bare soil treatments. In 1995, mulch significantly increased grain yield and grain WUE of the maize crop in the Pullman soil, grain yield and biomass WUE of the crop in the Amarillo soil, and had no significant effect on the crop in the Ulysses soil compared with the bare soil treatments. The significant increase in water use efficiency in 1995 was the result of soil water being used for crop growth and yield rather than in evaporation of soil water. The more favorable soil water regime in 1995 compared with 1994 between the mulched and bare soil treatments was possibly due to the higher evaporative demand environment, the increase in mulch mass, and the increased irrigation frequency. This was especially important in soils where textural characteristics affected both rooting and soil water extraction by maize which limited its ability to tolerate water stress.
---
paper_title: The Influence of Progressive Increases in total Soil Moisture Stress on Transpiration, Growth, and Internal Water Relationships of Plants
paper_content:
The responses of tomato (Lycopersicon esculentum (Mill.), privet (Ligustrum lucidum Ait.), and cotton (Gossypium barbadense L.) to conditions of increasing total soil moisture stress were measured in terms of vegetative growth, stem elongation, transpiration, leaf turgor, diffusion pressure deficit, and osmotic pressure.
---
paper_title: Irrigation Scheduling Impact Assessment MODel (ISIAMOD): a Decision Tool for Irrigation Scheduling
paper_content:
This paper presents a process-based simulation model known as the Irrigation Scheduling Impact Assessment MODel (ISIAMOD). It was developed to simulate crop growth and yield, soil water balance, and water management response indices in order to define the impact of irrigation scheduling decisions. ISIAMOD was calibrated and validated using data from field experiments on irrigated maize conducted in an irrigation scheme located in south-western Tanzania. The model adequately simulates crop biomass yield, grain yield, seasonal evapotranspiration and average soil moisture content in the crop's effective rooting depth. Some unique features of this model make it a major improvement over existing crop-soil simulation models.
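To indicate the kind of bookkeeping such a scheduling model performs, here is a minimal daily root-zone water balance sketch; the variable names and the simple drainage rule are generic assumptions, not ISIAMOD's equations.

def daily_water_balance(sw_prev, rain, irrigation, et_crop, sw_max):
    # all terms in mm; sw_max is the water-holding capacity of the root zone
    sw = sw_prev + rain + irrigation - et_crop
    drainage = max(sw - sw_max, 0.0)   # excess water drains below the root zone
    sw = min(max(sw, 0.0), sw_max)
    return sw, drainage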
---
paper_title: Large-scale machine learning with stochastic gradient descent
paper_content:
During the last decade, the data sizes have grown faster than the speed of processors. In this context, the capabilities of statistical machine learning methods are limited by the computing time rather than the sample size. A more precise analysis uncovers qualitatively different tradeoffs for the case of small-scale and large-scale learning problems. The large-scale case involves the computational complexity of the underlying optimization algorithm in non-trivial ways. Unlikely optimization algorithms such as stochastic gradient descent show amazing performance for large-scale problems. In particular, second order stochastic gradient and averaged stochastic gradient are asymptotically efficient after a single pass on the training set.
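To make the per-example updates concrete, a minimal sketch of stochastic gradient descent for least-squares linear regression; the decaying learning-rate schedule is an illustrative choice, not the paper's experimental setup.

import numpy as np

def sgd_linear_regression(X, y, lr0=0.01, epochs=1):
    # w is updated after every single example, in contrast to batch methods
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            lr = lr0 / (1.0 + 0.01 * t)          # slowly decaying step size
            grad = (xi @ w - yi) * xi            # gradient of 0.5 * (x.w - y)**2
            w -= lr * grad
    return w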
---
paper_title: Linear Models for Multivariate, Time Series, and Spatial Data
paper_content:
Contents: Multivariate linear models; discrimination and allocation; frequency analysis of time series; time domain analysis; linear models for spatial data.
---
paper_title: EXPLORE: a novel decision tree classification algorithm
paper_content:
Decision tree algorithms such as See5 (or C5) are typically used in data mining for classification and prediction purposes. In this study we propose EXPLORE, a novel decision tree algorithm, which is a modification of See5. The modifications are made to improve the capability of a tree in extracting hidden patterns. Justification of the proposed modifications is also presented. We experimentally compare EXPLORE with some existing algorithms such as See5, REPTree and J48 on several issues including quality of extracted rules/patterns, simplicity, and classification accuracy of the trees. Our initial experimental results indicate advantages of EXPLORE over existing algorithms.
---
paper_title: Induction of decision trees
paper_content:
The technology for building knowledge-based systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such system, ID3, in detail. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and/or incomplete. A reported shortcoming of the basic algorithm is discussed and two means of overcoming it are compared. The paper concludes with illustrations of current research directions.
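As a concrete reference for the splitting criterion, a small sketch of entropy and information gain for categorical attributes held in plain Python lists; this illustrates the ID3-style selection rule rather than reproducing the paper's system.

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    # expected reduction in entropy from splitting on this attribute
    n = len(labels)
    remainder = 0.0
    for value in set(attribute_values):
        subset = [l for a, l in zip(attribute_values, labels) if a == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder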
---
paper_title: Knowledge discovery through SysFor: a systematically developed forest of multiple decision trees
paper_content:
Decision tree based classification algorithms like C4.5 and Explore build a single tree from a data set. The two main purposes of building a decision tree are to extract various patterns/logic-rules existing in a data set, and to predict the class attribute value of an unlabeled record. Sometimes a set of decision trees, rather than just a single tree, is also generated from a data set. A set of multiple trees, when used wisely, typically has better prediction accuracy on unlabeled records. Existing multiple tree techniques cater for high-dimensional data sets and are therefore unable to build many trees from low-dimensional data sets. In this paper we present a novel technique called SysFor that can build many trees even from a low-dimensional data set. Another strength of the technique is that instead of building multiple trees using any attribute (good or bad) it uses only those attributes that have high classification capabilities. We also present two novel voting techniques in order to predict the class value of an unlabeled record through the collective use of multiple trees. Experimental results demonstrate that SysFor is suitable for multiple pattern extraction and knowledge discovery from both low-dimensional and high-dimensional data sets by building a number of good quality decision trees. Moreover, it also has prediction accuracy higher than the accuracy of several existing techniques that have previously been shown as having high performance.
---
paper_title: A Tutorial on Support Vector Machines for Pattern Recognition
paper_content:
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
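A minimal usage sketch of a kernel SVM of the kind surveyed above, using scikit-learn's SVC with an RBF kernel on a tiny synthetic dataset; the hyperparameters shown are defaults, not tuned values.

import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)  # synthetic XOR-like data
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
print(clf.predict([[0.9, 0.1]]))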
---
paper_title: The Nature of Statistical Learning Theory
paper_content:
Contents: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?
---
paper_title: Fuzzy composite programming to combine remote sensing and crop models for decision support in precision crop management
paper_content:
Abstract Precision crop management is by definition a multi-objective decision-making process that must incorporate a diversity of data, opinion, preference and objective. This paper details an approach to decision making that allows users to express individual or corporate values and preferences; highlights the degree of imprecision associated with each input; highlights the degree of imprecision associated with each alternative; facilitates structuring of the decision process; reduces several levels of complex information into a single chart; allows examination of trade-off between alternatives and interests; and forces examination of inter-relationships between interest. The addition of using remote sensing data provides an efficient method to describe spatial variability in terms that can be related to a crop model, making the decision-making approach feasible for precision farming applications. The crop model provides information that can be used by the decision model, and the remote sensing data is used to fine tune the calibration of the crop model, maximizing the accuracy of its results.
---
paper_title: A crop water stress index for tall fescue (Festuca arundinacea Schreb.) irrigation decision-making — a traditional method
paper_content:
Abstract A high irradiance plant growth chamber was used to study crop water stress indices (CWSI) and baselines with increasing soil water deficit for Tall Fescue (Festuca arundinacea Schreb.). Canopy temperatures for turf plugs were continuously measured with infrared thermometers, along with plant water use, measured with electronic mini-lysimeters. Net radiation, canopy and air temperatures, and vapor pressure deficit (VPD) levels were recorded and analyzed statistically. The canopy–air temperature differential (Tc − Ta) increased with a decrease in soil moisture content. Tc − Ta increased as net radiation became greater, independent of soil water deficit. Canopy temperature of well-watered plants decreased at a rate of 2.4°C for each 1 kPa reduction in air vapor pressure deficit for all net radiation levels. For each 100 W/m² increase in net radiation, canopy temperature of well-watered plants increased at a rate of 0.6°C and was well correlated (well-watered baseline) with VPD. Increases in canopy temperature coupled with a decrease in transpiration rate were hallmark signs of water stress progression. However, (Tc − Ta) and VPD baseline relationships correlated poorly for moderate-stress and severe-stress conditions regardless of net radiation levels. Thus, even with the increased precision and replications of a controlled environment study, lower-limit crop water stress baselines were quite variable.
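A hedged sketch of the empirical crop water stress index built from the canopy-air temperature differential and its lower (well-watered) and upper (non-transpiring) limits; the baseline coefficients are placeholders to be fitted per crop and site, not the values reported above.

def cwsi(tc, ta, vpd, lower_intercept=2.0, lower_slope=-2.0, upper_limit=5.0):
    # tc, ta in degrees C; vpd in kPa
    dt = tc - ta
    dt_lower = lower_intercept + lower_slope * vpd   # well-watered (non-stressed) baseline
    return (dt - dt_lower) / (upper_limit - dt_lower)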
---
paper_title: ANFIS: Adaptive-Network-Based Fuzzy Inference System
paper_content:
The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) is presented, which is a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In the simulation, the ANFIS architecture is employed to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested.
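To make the inference that ANFIS tunes concrete, a minimal numpy sketch of a first-order Sugeno (TSK) system with Gaussian memberships, product firing strengths and a normalised weighted sum of linear consequents; the two rules and every parameter value are illustrative, not trained ANFIS output.

import numpy as np

def gauss(x, centre, spread):
    return np.exp(-0.5 * ((x - centre) / spread) ** 2)

def tsk_two_rules(x1, x2):
    # premise parameters (assumed for illustration)
    w1 = gauss(x1, 0.0, 1.0) * gauss(x2, 0.0, 1.0)   # firing strength of rule 1
    w2 = gauss(x1, 2.0, 1.0) * gauss(x2, 2.0, 1.0)   # firing strength of rule 2
    # linear consequents f_i = p_i*x1 + q_i*x2 + r_i (assumed coefficients)
    f1 = 0.5 * x1 + 0.2 * x2 + 1.0
    f2 = -0.3 * x1 + 0.8 * x2 + 0.5
    return (w1 * f1 + w2 * f2) / (w1 + w2)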
---
paper_title: Comparison of the ARMA, ARIMA, and the autoregressive artificial neural network models in forecasting the monthly inflow of Dez dam reservoir
paper_content:
Summary The goal of the present research is forecasting the inflow of the Dez dam reservoir by using Auto Regressive Moving Average (ARMA) and Auto Regressive Integrated Moving Average (ARIMA) models, increasing the number of model parameters to four in order to improve forecast accuracy, and comparing them with static and dynamic artificial neural networks. In this research, monthly discharges from 1960 to 2007 were used. The records of the first 42 years were used to train the models and the last 5 years were used for forecasting. In the ARMA and ARIMA models, the polynomial was derived with four and six parameters, respectively, to forecast the inflow. In the artificial neural network, radial and sigmoid activity functions were used with several different numbers of neurons in the hidden layers. By comparing the root mean square error (RMSE) and mean bias error (MBE), the dynamic artificial neural network model with a sigmoid activity function and 17 neurons in the hidden layer was chosen as the best model for forecasting the inflow of the Dez dam reservoir. The inflow of the dam reservoir over the past 12 months shows that the ARIMA model had a lower error than the ARMA model. Static and dynamic autoregressive artificial neural networks with a sigmoid activity function can forecast the inflow to the dam reservoir from the past 60 months.
---
paper_title: A comparison of performance of several artificial intelligence methods for forecasting monthly discharge time series
paper_content:
Summary Developing a hydrological forecasting model based on past records is crucial to effective hydropower reservoir management and scheduling. Traditionally, time series analysis and modeling are used for building mathematical models to generate hydrologic records in hydrology and water resources. Artificial intelligence (AI), as a branch of computer science, is capable of analyzing long-series and large-scale hydrological data. In recent years, applying AI technology to hydrological forecasting modeling has become one of the leading research issues. In this paper, autoregressive moving-average (ARMA) models, artificial neural network (ANN) approaches, adaptive neural-based fuzzy inference system (ANFIS) techniques, genetic programming (GP) models and the support vector machine (SVM) method are examined using long-term observations of monthly river flow discharges. Four quantitative standard statistical performance evaluation measures, the coefficient of correlation (R), Nash–Sutcliffe efficiency coefficient (E), root mean squared error (RMSE), and mean absolute percentage error (MAPE), are employed to evaluate the performances of the various models developed. Two case study river sites are also provided to illustrate their respective performances. The results indicate that the best performance can be obtained by ANFIS, GP and SVM, in terms of different evaluation criteria during the training and validation phases.
---
paper_title: Daily irrigation water demand prediction using Adaptive Neuro-Fuzzy Inferences Systems (ANFIS).
paper_content:
One of the main problems in the management of large water supply and distribution systems is the forecasting of daily demand in order to schedule pumping effort and minimize costs. This paper examines a methodology for consumer demand modeling and prediction in a real-time environment of an irrigation water distribution system. The approach is based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) technique. The data were taken from a Cretan water company named O.A.DY.K and concern the area of the prefecture of Chania. ANFIS was compared with traditional forecasting techniques such as the autoregressive (AR) and autoregressive moving average (ARMA) models, and provided the best prediction results for daily water demand.
---
paper_title: Distribution of Residual Autocorrelations in Autoregressive-Integrated Moving Average Time Series Models
paper_content:
Many statistical models, and in particular autoregressive-moving average time series models, can be regarded as means of transforming the data to white noise, that is, to an uncorrelated sequence of errors. If the parameters are known exactly, this random sequence can be computed directly from the observations; when this calculation is made with estimates substituted for the true parameter values, the resulting sequence is referred to as the "residuals," which can be regarded as estimates of the errors. If the appropriate model has been chosen, there will be zero autocorrelation in the errors. In checking adequacy of fit it is therefore logical to study the sample autocorrelation function of the residuals. For large samples the residuals from a correctly fitted model resemble very closely the true errors of the process; however, care is needed in interpreting the serial correlations of the residuals. It is shown here that the residual autocorrelations are to a close approximation representable as a singular linear transformation of the autocorrelations of the errors so that they possess a singular normal distribution. Failing to allow for this results in a tendency to overlook evidence of lack of fit. Tests of fit and diagnostic checks are devised which take these facts into account.
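A minimal sketch of this residual-autocorrelation (portmanteau) check applied to an ARMA fit in statsmodels; the simulated AR(1) series and the (1, 0, 1) order are illustrative choices.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.6 * y[t - 1] + rng.normal()     # simple AR(1) series

res = ARIMA(y, order=(1, 0, 1)).fit()
# small p-values would signal remaining autocorrelation, i.e. lack of fit
print(acorr_ljungbox(res.resid, lags=[10], boxpierce=True))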
---
paper_title: Forecasting Weekly Evapotranspiration with ARIMA and Artificial Neural Network Models
paper_content:
Information about the parameters defining water resources availability is a key factor in their management. Reference evapotranspiration (ET0) prediction is fundamental in planning, design, and management of water resource systems for irrigation. The application of time series analysis methodologies, which allow evapotranspiration prediction, is of great use for the latter. The objective of the present study was the comparison of weekly evapotranspiration ARIMA and artificial neural network (ANN)-based forecasts with regard to a model based on weekly averages, in the region of Alava situated in the Basque Country (northern Spain). The application of both ARIMA and ANN models improved the performance of one-week-ahead weekly evapotranspiration predictions compared to the model based on means (mean year model). The ARIMA and ANN models reduced the prediction root mean square differences with respect to the mean year model (based on historical averages) by 6–8%, and reduced the standard deviation differ...
---
paper_title: Study on Applying Fuzzy Inference to Single Factor Prediction Method for Precipitation Irrigation Requirement Forecast
paper_content:
The paper elaborates on the principle of applying fuzzy inference to the single-factor prediction method for precipitation irrigation requirement forecasting. Through a worked prediction example, the paper shows how the prediction process is conducted step by step. Finally, it is shown that the prediction accuracy of this approach can meet the needs of agricultural production.
---
paper_title: A new fuzzy-based feature selection and hybrid TLA–ANN modelling for short-term load forecasting
paper_content:
In this paper, a new hybrid method based on teacher learning algorithm (TLA) and artificial neural network (ANN) is proposed to develop an accurate model to investigate short-term load forecasting more precisely. In contrast to the other evolutionary-based training techniques, the proposed method utilises both the ability of ANNs to generate a non-linear mapping among different complex data as well as the powerful ability of TLA for global search and exploration. In addition, in an attempt to choose the most satisfying features from the set of input variables, a novel feature-selection approach based on fuzzy clustering and fuzzy set theory is proposed and utilised sufficiently. In order to improve the overall performance of TLA for optimisation applications, a new modification phase is proposed to increase the ability of the algorithm to explore the entire search space globally. The simulation results show the feasibility and the superiority of the proposed hybrid method over the other well-known methods...
---
paper_title: Intelligent Hybrid Systems: Fuzzy Logic, Neural Networks, and Genetic Algorithms
paper_content:
Foreword P.P. Wang. Editor's Preface Da Ruan. Part 1: Basic Principles and Methodologies. 1. Introduction to Fuzzy Systems, Neural Networks, and Genetic Algorithms H. Takagi. 2. A Fuzzy Neural Network for Approximate Fuzzy Reasoning L.P. Maguire, et al. 3. Novel Neural Algorithms for Solving Fuzzy Relation Equations Xiaozhong Li, Da Ruan. 4. Methods for Simplification of Fuzzy Models U. Kaymak, et al. 5. A New Approach of Neurofuzzy Learning Algorithm M. Mizumoto, Yan Shi. Part 2: Data Analysis and Information Systems. 6. Neural Networks in Intelligent Data Analysis Xiaohui Liu. 7. Data-Driven Identification of Key Variables Bo Yuan, G. Klir. 8. Applications of Intelligent Techniques in Process Analysis J. Angstenberger, R. Weber. 9. Neurofuzzy-Chaos Engineering for Building Intelligent Adaptive Information Systems N.K. Kasabov, R. Kozma. 10. A Sequential Training Strategy for Locally Recurrent Neural Networks Jie Zhang, A.J. Morris. Part 3: Nonlinear Systems and System Identification. 11. Adaptive Genetic Programming for System Identification A. Bastian. 12. Nonlinear System Identification with Neurofuzzy Methods O. Nelles. 13. A Genetic Algorithm for Mixed-Integer Optimisation in Power and Water System Design and Control Kai Chen, et al. 14. Soft Computing Based Signal Prediction, Restoration, and Filtering E. Uchino, T. Yamakawa. Subject Index.
---
paper_title: A fusion model of HMM, ANN and GA for stock market forecasting
paper_content:
In this paper we propose and implement a fusion model by combining the Hidden Markov Model (HMM), Artificial Neural Networks (ANN) and Genetic Algorithms (GA) to forecast financial market behaviour. The developed tool can be used for in depth analysis of the stock market. Using ANN, the daily stock prices are transformed to independent sets of values that become input to HMM. We draw on GA to optimize the initial parameters of HMM. The trained HMM is used to identify and locate similar patterns in the historical data. The price differences between the matched days and the respective next day are calculated. Finally, a weighted average of the price differences of similar patterns is obtained to prepare a forecast for the required next day. Forecasts are obtained for a number of securities in the IT sector and are compared with a conventional forecast method.
---
paper_title: Flood Forecasting Using ANN, Neuro-Fuzzy, and Neuro-GA Models
paper_content:
Flood forecasting at the Jamtara gauging site of the Ajay River Basin in Jharkhand, India is carried out using an artificial neural network (ANN) model, an adaptive neuro-fuzzy inference system (ANFIS) model, and an adaptive neuro-GA integrated system (ANGIS) model. Relative performances of these models are also compared. Initially the ANN model is developed and is then integrated with fuzzy logic to develop an ANFIS model. Further, the ANN weights are optimized by a genetic algorithm (GA) to develop an ANGIS model. For development of these models, 20 rainfall–runoff events are selected, of which 15 are used for model training and five are used for validation. Various performance measures are used to evaluate and compare the performances of the different models. For the same input data set, the ANGIS model predicts flood events with the highest accuracy. The ANFIS and ANN models perform similarly in some cases, but the ANFIS model predicts better than the ANN model in most cases.
---
paper_title: An inexact rough-interval fuzzy linear programming method for generating conjunctive water-allocation strategies to agricultural irrigation systems
paper_content:
Abstract An inexact rough-interval fuzzy linear programming (IRFLP) method is developed for agricultural irrigation systems to generate conjunctive water allocation strategies. The concept of “rough interval” is introduced in the modeling framework to represent dual-uncertain parameters. After the modeling formulation, an agricultural water allocation management system is provided to demonstrate the applicability of the developed method. The results show that reasonable solutions and allocation strategies are obtained. Based on the analysis of alternatives obtained from different scenarios, the significant impact of dual uncertainties existing in the system is specified. Comparisons between the results from IRFLP and interval-valued fuzzy linear programming are also conducted. The obtained rough-interval solutions correspond to the management strategies under both normal and special system conditions, and thus more conveniences would be provided for decision makers. Compared to the previous modeling efforts, the proposed IRFLP shows uniqueness in addressing the interaction between dual intervals of highly uncertain parameters, as well as their joint impact on the system.
---
paper_title: Fuzzy based Decision Support Model for Irrigation System Management
paper_content:
In this paper, an efficient irrigation system is proposed based on computing evapotranspiration (ET) and the required irrigation quantity using a fuzzy inference methodology. The aim of this system is to schedule irrigation according to the particular requirements of a crop and to changes in various climatological parameters and other factors, so as to avoid over- or under-watering, which significantly affects crop quality and yield. Moreover, the algorithm reduces power switching and hence conserves energy. The results demonstrate that the fuzzy model is a quick and accurate tool for calculating evapotranspiration as well as the required net irrigation. Besides, no water stress occurs because the model prevents depletion of soil moisture from reaching 100%, which represents the permanent wilting point, since irrigation always starts when the depletion ratio reaches 50% of the total available soil moisture. Additionally, a general algorithm is introduced as part of the proposed system to calculate the irrigation time, which suits both micro-irrigation methods: sprinkler and drip irrigation.
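A crisp (non-fuzzy) sketch of the 50% depletion trigger and net irrigation amount summarised above; the total available water and the allowed depletion fraction are illustrative inputs, and the fuzzy inference layer itself is omitted.

def irrigation_decision(depletion_mm, taw_mm, allowed_fraction=0.5):
    # depletion_mm: current root-zone depletion; taw_mm: total available soil water
    if depletion_mm >= allowed_fraction * taw_mm:
        net_irrigation_mm = depletion_mm        # refill the root zone to field capacity
        return True, net_irrigation_mm
    return False, 0.0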
---
paper_title: Demand Forecasting for Irrigation Water Distribution Systems
paper_content:
One of the main problems in the management of large water supply and distribution systems is the forecasting of daily demand in order to schedule pumping effort and minimize costs. This paper examines methodologies for consumer demand modeling and prediction in a real-time environment for an on-demand irrigation water distribution system. Approaches based on linear multiple regression, univariate time series models (exponential smoothing and ARIMA models), and computational neural networks (CNNs) are developed to predict the total daily volume demand. A set of templates is then applied to the daily demand to produce the diurnal demand profile. The models are established using actual data from an irrigation water distribution system in southern Spain. The input variables used in various CNN and multiple regression models are (1) water demands from previous days; (2) climatic data from previous days (maximum temperature, minimum temperature, average temperature, precipitation, relative humidity, wind speed, and sunshine duration); (3) crop data (surfaces and crop coefficients); and (4) water demands and climatic and crop data. In CNN models, the training method used is a standard back-propagation variation known as extended-delta-bar-delta. Different neural architectures are compared whose learning is carried out by controlling several threshold determination coefficients. The nonlinear CNN model approach is shown to provide a better prediction of daily water demand than linear multiple regression and univariate time series analysis. The best results were obtained when water demand and maximum temperature variables from the two previous days were used as input data.
---
paper_title: A simulation study of artificial neural networks for nonlinear time-series forecasting
paper_content:
Abstract This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The effects of three main factors (input nodes, hidden nodes and sample size) are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series, while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, a large sample helps to ease the overfitting problem. Scope and purpose: Interest in using artificial neural networks for forecasting has led to a tremendous surge in research activities in the past decade. Yet, mixed results are often reported in the literature and the effect of key modeling factors on performance has not been thoroughly examined. The lack of systematic approaches to neural network model building is probably the primary cause of inconsistencies in reported findings. In this paper, we present a systematic investigation of the application of neural networks for nonlinear time-series analysis and forecasting. The purpose is to have a detailed examination of the effects of certain important neural network modeling factors on nonlinear time-series modeling and forecasting.
---
paper_title: Time series forecasting using a hybrid ARIMA and neural network model
paper_content:
Abstract Autoregressive integrated moving average (ARIMA) is one of the popular linear models in time series forecasting during the past three decades. Recent research activities in forecasting with artificial neural networks (ANNs) suggest that ANNs can be a promising alternative to the traditional linear methods. ARIMA models and ANNs are often compared with mixed conclusions in terms of the superiority in forecasting performance. In this paper, a hybrid methodology that combines both ARIMA and ANN models is proposed to take advantage of the unique strength of ARIMA and ANN models in linear and nonlinear modeling. Experimental results with real data sets indicate that the combined model can be an effective way to improve forecasting accuracy achieved by either of the models used separately.
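A minimal sketch of this hybrid idea: an ARIMA model captures the linear structure and a small neural network is fitted to lagged ARIMA residuals, with the two one-step forecasts summed. The order, lag count and network size are illustrative choices using statsmodels and scikit-learn, not the paper's configuration.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_one_step_forecast(y, order=(1, 1, 1), n_lags=3):
    arima_res = ARIMA(y, order=order).fit()
    resid = np.asarray(arima_res.resid)
    # lagged-residual features: predict resid[t] from the previous n_lags residuals
    X = np.column_stack([resid[i:len(resid) - n_lags + i] for i in range(n_lags)])
    target = resid[n_lags:]
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, target)
    linear_part = arima_res.forecast(steps=1)[0]
    nonlinear_part = ann.predict(resid[-n_lags:].reshape(1, -1))[0]
    return linear_part + nonlinear_part

The residual model only adds value when the ARIMA residuals retain nonlinear structure; otherwise the hybrid collapses to the plain ARIMA forecast.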
---
paper_title: Irrigation Demand Forecasting Using Artificial Neuro-Genetic Networks
paper_content:
In recent years, a significant evolution of forecasting methods has been possible due to advances in artificial computational intelligence. Achieving the optimal architecture of an ANN is a complex process. Thus, in this work, an Evolutionary Robotics approach (the study of the evolution of an ANN using a Genetic Algorithm) has been used to obtain an Artificial Neuro-Genetic Network (ANGN) for the short-term forecasting of daily irrigation water demand that maximizes the accuracy of the predictions. The methodology is applied in the Bembezar Irrigation District (Southern Spain). An optimal ANGN architecture (ANGN (7, 29, 16, 1)) achieved a Standard Error of Prediction (SEP) value for daily water demand of 12.63% and explained 93% of the total variance observed during the validation process. The developed model proved to be a powerful tool that, without requiring long data sets or computation times, can be very useful for the development of management strategies.
---
paper_title: Backpropagation through time: what it does and how to do it
paper_content:
Basic backpropagation, which is a simple method now being widely used in areas like pattern recognition and fault diagnosis, is reviewed. The basic equations for backpropagation through time, and applications to areas like pattern recognition involving dynamic systems, systems identification, and control are discussed. Further extensions of this method, to deal with systems other than neural networks, systems involving simultaneous equations, or true recurrent networks, and other practical issues arising with the method are described. Pseudocode is provided to clarify the algorithms. The chain rule for ordered derivatives, the theorem which underlies backpropagation, is briefly discussed. The focus is on designing a simpler version of backpropagation which can be translated into computer code and applied directly by neural network users.
---
paper_title: LSTM can Solve Hard Long Time Lag Problems
paper_content:
Standard recurrent nets cannot deal with long minimal time lags between relevant signals. Several recent NIPS papers propose alternative methods. We first show: problems used to promote various previous algorithms can be solved more quickly by random weight guessing than by the proposed algorithms. We then use LSTM, our own recent algorithm, to solve a hard problem that can neither be quickly solved by random search nor by any other recurrent net algorithm we are aware of.
---
paper_title: Recurrent neural networks and robust time series prediction
paper_content:
We propose a robust learning algorithm and apply it to recurrent neural networks. This algorithm is based on filtering outliers from the data and then estimating parameters from the filtered data. The filtering removes outliers from both the target function and the inputs of the neural network. The filtering is soft in that some outliers are neither completely rejected nor accepted. To show the need for robust recurrent networks, we compare the predictive ability of least squares estimated recurrent networks on synthetic data and on the Puget Power Electric Demand time series. These investigations result in a class of recurrent neural networks, NARMA(p,q), which show advantages over feedforward neural networks for time series with a moving average component. Conventional least squares methods of fitting NARMA(p,q) neural network models are shown to suffer a lack of robustness towards outliers. This sensitivity to outliers is demonstrated on both the synthetic and real data sets. Filtering the Puget Power Electric Demand time series is shown to automatically remove the outliers due to holidays. Neural networks trained on filtered data are then shown to give better predictions than neural networks trained on unfiltered time series.
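The snippet below is a minimal illustration of "soft" outlier filtering in the spirit described above: residuals from a robust preliminary fit are smoothly downweighted (Huber-style) instead of being accepted or rejected outright. The median-filter baseline, the MAD scale estimate, and the threshold k are illustrative assumptions, not the paper's exact robust estimator.

```python
# Soft filtering of a time series: points with large residuals relative to a
# robust baseline are pulled towards the baseline instead of being dropped.
import numpy as np
from scipy.signal import medfilt

def soft_filter(y, kernel=5, k=2.5):
    y = np.asarray(y, dtype=float)
    baseline = medfilt(y, kernel_size=kernel)          # robust preliminary fit
    resid = y - baseline
    scale = 1.4826 * np.median(np.abs(resid)) + 1e-12  # robust std estimate (MAD)
    w = np.clip(k * scale / np.maximum(np.abs(resid), 1e-12), 0.0, 1.0)
    return w * y + (1.0 - w) * baseline                # soft blend, not hard reject
```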
---
paper_title: Recurrent neural network based language model
paper_content:
A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except for their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition
---
| Title: A Review of Intelligent Practices for Irrigation Prediction
Section 1: INTRODUCTION
Description 1: Introduce the motivation and background for intelligent irrigation prediction, highlighting the importance of efficient water management in agriculture.
Section 2: SOURCES INFLUENCING IRRIGATION DEMAND
Description 2: Describe the various sources and factors that affect crop irrigation demand, including meteorological factors, crop input factors, and agricultural factors.
Section 3: DESCRIPTION OF METHODS
Description 3: Review different computational and data mining techniques used for irrigation water prediction, such as evapotranspiration, logistic regression, decision tree classifiers, SysFor, support vector machines, and hybrid models.
Section 4: PROPOSED MODEL
Description 4: Present a novel RNN LSTM model proposed to improve the prediction of irrigation needs, detailing its architecture, advantages, and potential challenges.
Section 5: CONCLUSION
Description 5: Summarize the findings and discuss the effectiveness of different methods and the potential of the proposed model in improving irrigation prediction. |
A Survey of Energy-Efficient Techniques for 5G Networks and Challenges Ahead | 8 | ---
paper_title: The global footprint of mobile communications: The ecological and economic perspective
paper_content:
This article quantifies the global carbon footprint of mobile communication systems, and discusses its ecological and economic implications. Using up-to-date data and life cycle assessment models, we predict an increase of CO2 equivalent emissions by a factor of three until 2020 compared to 2007, rising from about 86 to 235 Mto CO2e, suggesting a steeper increase than predicted in the well-known SMART2020 report. We provide a breakdown of the global carbon footprint, which reveals that production of mobile devices and global radio access network operation will remain the major contributors, accompanied by an increasing share of emissions due to data transfer in the backbone resulting from rising mobile traffic volumes. The energy bill due to network operation will gain increasing importance in cellular business models. Furthermore, technologies to reduce energy consumption are considered a key enabler for the spread of mobile communications in developing countries. Taking into account several scenarios of technological advancement and rollout, we analyze the overall energy consumption of global radio access networks and illustrate the saving potential of green communication technologies. We conclude that, conditioned on quick implementation and alongside other "classical" improvements of spectral efficiency, these technologies offer the potential to serve three orders of magnitude more traffic with the same overall energy consumption as today.
---
paper_title: Power Control for Wireless Data
paper_content:
With cellular phones mass-market consumer items, the next frontier is mobile multimedia communications. This situation raises the question of how to perform power control for information sources other than voice. To explore this issue, we use the concepts and mathematics of microeconomics and game theory. In this context, the quality of service of a telephone call is referred to as the "utility" and the distributed power control problem for a CDMA telephone is a "noncooperative game." The power control algorithm corresponds to a strategy that has a locally optimum operating point referred to as a "Nash equilibrium." The telephone power control algorithm is also "Pareto efficient," in the terminology of game theory. When we apply the same approach to power control in wireless data transmissions, we find that the corresponding strategy, while locally optimum, is not Pareto efficient. Relative to the telephone algorithm, there are other algorithms that produce higher utility for at least one terminal, without decreasing the utility for any other terminal. This article presents one such algorithm. The algorithm includes a price function proportional to transmitter power. When terminals adjust their power levels to maximize the net utility (utility-price), they arrive at lower power levels and higher utility than they achieve when they individually strive to maximize utility.
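An illustrative best-response loop for the pricing idea is sketched below: each terminal repeatedly picks the transmit power maximizing its net utility, i.e., throughput per joule minus a price proportional to transmit power, given the interference created by the others. The sigmoidal efficiency function, channel gains, noise level, and price are placeholder assumptions rather than the paper's exact model.

```python
# Distributed power control with pricing: each user best-responds by maximizing
# utility(p) - price * p over a power grid, given the other users' interference.
import numpy as np

R, M, SIGMA2 = 1e4, 80, 5e-15             # rate, packet size, noise (illustrative)
gains = np.array([2e-13, 1e-13, 5e-14])   # path gains (illustrative)
p_grid = np.linspace(1e-3, 1.0, 2000)     # candidate transmit powers [W]

def net_utility(p, interference, gain, price):
    sinr = gain * p / (interference + SIGMA2)
    eff = (1.0 - np.exp(-sinr / 2.0)) ** M       # sigmoidal efficiency function
    return R * eff / p - price * p               # bits/joule minus linear price

def best_response_dynamics(price=1e4, iters=50):
    p = np.full(len(gains), 0.1)
    for _ in range(iters):
        for i in range(len(gains)):
            interf = np.sum(gains * p) - gains[i] * p[i]   # others' received power
            p[i] = p_grid[np.argmax(net_utility(p_grid, interf, gains[i], price))]
    return p

print(best_response_dynamics())
```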
---
paper_title: An energy-efficient approach to power control and receiver design in wireless data networks
paper_content:
In this paper, the cross-layer design problem of joint multiuser detection and power control is studied, using a game-theoretic approach that focuses on energy efficiency. The uplink of a direct-sequence code-division multiple-access data network is considered, and a noncooperative game is proposed in which users in the network are allowed to choose their uplink receivers as well as their transmit powers to maximize their own utilities. The utility function measures the number of reliable bits transmitted by the user per joule of energy consumed. Focusing on linear receivers, the Nash equilibrium for the proposed game is derived. It is shown that the equilibrium is one where the powers are signal-to-interference-plus-noise ratio-balanced with the minimum mean-square error (MMSE) detector as the receiver. In addition, this framework is used to study power-control games for the matched filter, the decorrelator, and the MMSE detector; and the receivers' performance is compared in terms of the utilities achieved at equilibrium (in bits/joule). The optimal cooperative solution is also discussed and compared with the noncooperative approach. Extensions of the results to the case of multiple receive antennas are also presented. In addition, an admission-control scheme based on maximizing the total utility in the network is proposed.
---
paper_title: Cloud technologies for flexible 5G radio access networks
paper_content:
The evolution toward 5G mobile networks will be characterized by an increasing number of wireless devices, increasing device and service complexity, and the requirement to access mobile services ubiquitously. Two key enablers will allow the realization of the vision of 5G: very dense deployments and centralized processing. This article discusses the challenges and requirements in the design of 5G mobile networks based on these two key enablers. It discusses how cloud technologies and flexible functionality assignment in radio access networks enable network densification and centralized operation of the radio access network over heterogeneous backhaul networks. The article describes the fundamental concepts, shows how to evolve the 3GPP LTE a
---
paper_title: Energy Efficiency in Wireless Networks via Fractional Programming Theory
paper_content:
This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory. The definition of energy efficiency is introduced, with reference to single-user and multi-user wireless networks, and it is observed how the problem of resource allocation for energy efficiency optimization is naturally cast as a fractional program. An extensive review of the state-of-the-art in energy efficiency optimization by fractional programming is provided, with reference to centralized and distributed resource allocation schemes. A solid background on fractional programming theory is provided. The key-notion of generalized concavity is presented and its strong connection with fractional functions described. A taxonomy of fractional problems is introduced, and for each class of fractional problem, general solution algorithms are described, discussing their complexity and convergence properties. The described theoretical and algorithmic framework is applied to solve energy efficiency maximization problems in practical wireless networks. A general system and signal model is developed which encompasses many relevant special cases, such as one-hop and two-hop heterogeneous networks, multi-cell networks, small-cell networks, device-to-device systems, cognitive radio systems, and hardware-impaired networks, wherein multiple-antennas and multiple subcarriers are possibly employed. Energy-efficient resource allocation algorithms are developed, considering both centralized, cooperative schemes, as well as distributed approaches for self-organizing networks. Finally, some remarks on future lines of research are given, stating some open problems that remain to be studied. It is shown how the described framework is general enough to be extended in these directions, proving useful in tackling future challenges that may arise in the design of energy-efficient future wireless networks.
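As a small numerical illustration of the fractional-programming machinery, the sketch below applies Dinkelbach's procedure to a single-link energy-efficiency problem, maximizing rate/(transmit power + circuit power): each iteration solves the parameterized subtractive problem max_p f(p) - lambda*g(p), which has a closed-form solution here, and updates lambda to the current ratio. The channel gain, noise, and power figures are placeholders.

```python
# Dinkelbach's method for max_p  log2(1 + g*p/N0) / (p + Pc),  0 <= p <= Pmax.
# Each iteration solves the concave subtractive problem max_p f(p) - lam*g(p),
# whose stationary point here is p* = 1/(lam*ln2) - N0/g.
import numpy as np

g, N0, Pc, Pmax = 1.0, 0.1, 0.5, 2.0   # illustrative channel/power parameters

def rate(p):
    return np.log2(1.0 + g * p / N0)

def dinkelbach(tol=1e-8, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        if lam > 0:
            p = np.clip(1.0 / (lam * np.log(2.0)) - N0 / g, 0.0, Pmax)
        else:
            p = Pmax                     # with lam = 0 the objective is increasing
        F = rate(p) - lam * (p + Pc)     # value of the subtractive problem
        lam = rate(p) / (p + Pc)         # update ratio
        if abs(F) < tol:                 # F -> 0 at the optimum
            break
    return p, lam                        # optimal power and energy efficiency

p_opt, ee_opt = dinkelbach()
print(f"p* = {p_opt:.4f} W, EE = {ee_opt:.4f} bit/s/Hz per W")
```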
---
paper_title: Dynamic Base Station Switching-On/Off Strategies for Green Cellular Networks
paper_content:
In this paper, we investigate dynamic base station (BS) switching to reduce energy consumption in wireless cellular networks. Specifically, we formulate a general energy minimization problem pertaining to BS switching that is known to be a difficult combinatorial problem and requires high computational complexity as well as large signaling overhead. We propose a practically implementable switching-on/off based energy saving (SWES) algorithm that can be operated in a distributed manner with low computational complexity. A key design principle of the proposed algorithm is to turn off a BS one by one that will minimally affect the network by using a newly introduced notion of network-impact, which takes into account the additional load increments brought to its neighboring BSs. In order to further reduce the signaling and implementation overhead over the air and backhaul, we propose three other heuristic versions of SWES that use the approximate values of network-impact as their decision metrics. We describe how the proposed algorithms can be implemented in practice at the protocol-level and also estimate the amount of energy savings through a first-order analysis in a simple setting. Extensive simulations demonstrate that the SWES algorithms can significantly reduce the total energy consumption, e.g., we estimate up to 50-80% potential savings based on a real traffic profile from a metropolitan urban area.
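The toy loop below captures the flavour of the switching-off idea (not the SWES algorithm itself): at each step the base station whose traffic can be absorbed by its active neighbours with the smallest load increase is switched off, and the process stops when no further switch-off keeps every remaining station within capacity. The even traffic split, the load values, and the capacity limit are illustrative assumptions.

```python
# Greedy BS switch-off: repeatedly turn off the station whose load can be
# re-absorbed by its neighbours with the smallest "network impact", as long
# as no remaining station exceeds its capacity.
def greedy_switch_off(load, neighbours, capacity=1.0):
    load = dict(enumerate(load))                 # bs_id -> current load
    active = set(load)
    while True:
        best, best_impact = None, None
        for b in active:
            nbrs = [n for n in neighbours[b] if n in active]
            if not nbrs:
                continue
            share = load[b] / len(nbrs)          # spread traffic evenly
            if all(load[n] + share <= capacity for n in nbrs):
                impact = load[b]                 # total extra load pushed to neighbours
                if best_impact is None or impact < best_impact:
                    best, best_impact = b, impact
        if best is None:                         # no feasible switch-off left
            return active, load
        nbrs = [n for n in neighbours[best] if n in active]
        for n in nbrs:
            load[n] += load[best] / len(nbrs)
        load[best] = 0.0
        active.remove(best)

# Example: 4 BSs in a line, lightly loaded at night.
active, load = greedy_switch_off([0.2, 0.1, 0.15, 0.3],
                                 {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
print(sorted(active), load)
```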
---
paper_title: What Will 5G Be?
paper_content:
What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue.
---
paper_title: Energy-Efficient Resource Allocation for MIMO-OFDM Systems Serving Random Sources With Statistical QoS Requirement
paper_content:
This paper optimizes resource allocation that maximizes the energy efficiency (EE) of wireless systems with statistical quality of service (QoS) requirement, where a delay bound and its violation probability need to be guaranteed. To avoid wasting energy when serving random sources over wireless channels, we convert the QoS exponent, a key parameter to characterize statistical QoS guarantee under the framework of effective bandwidth and effective capacity, into multi-state QoS exponents dependent on the queue length. To illustrate how to optimize resource allocation, we consider multi-input-multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems. A general method to optimize the queue length based bandwidth and power allocation (QRA) policy is proposed, which maximizes the EE under the statistical QoS constraint. A closed-form optimal QRA policy is derived for massive MIMO-OFDM system with infinite antennas serving the first order autoregressive source. The EE limit obtained from infinite delay bound and the achieved EEs of different policies under finite delay bounds are analyzed. Simulation and numerical results show that the EE achieved by the QRA policy approaches the EE limit when the delay bound is large, and is much higher than those achieved by existing policies considering statistical QoS provision when the delay bound is stringent.
---
paper_title: Optimal Energy-Efficient Power Allocation for OFDM-Based Cognitive Radio Networks
paper_content:
This letter investigates the energy-efficient power allocation for orthogonal frequency division multiplexing (OFDM) based cognitive radio networks (CRNs). The problem is to maximize the energy-efficiency measured using the "throughput per Joule" metric subject to the total transmit power and interference constraints. It is then transformed into an equivalent convex problem using parametric programming. Furthermore, an optimal iterative algorithm based on convex optimization theory and parametric programming is proposed. The numerical results show that the proposed optimal algorithm can achieve higher energy-efficiency than that obtained by solving the original problem directly because of its non-convexity. Energy-efficiency maximization can also achieve a good tradeoff between capacity and energy in CRNs.
---
paper_title: Energy Efficiency Analysis of Cooperative Jamming in Cognitive Radio Networks With Secrecy Constraints
paper_content:
We investigate energy-efficient cooperation for secrecy in cognitive radio networks. In particular, we consider a four-node cognitive scenario where the secondary receiver is treated as a potential eavesdropper with respect to the primary transmission. The cognitive transmitter should ensure that the primary message is not leaked to the secondary user by using cooperative jamming. We investigate the optimal power allocation and power splitting at the secondary transmitter for our cognitive model to maximize the secondary energy efficiency (EE) under secrecy constraints. We formulate and analyze an important EE Stackelberg game between the two transmitters aiming at maximizing their utilities. We illustrate the analytical results through our geometrical model, highlighting the EE performance of the system and the impact of the Stackelberg game.
---
paper_title: A Repeated Game Formulation of Energy-Efficient Decentralized Power Control
paper_content:
Decentralized multiple access channels where each transmitter wants to selfishly maximize his transmission energy-efficiency are considered. Transmitters are assumed to choose freely their power control policy and interact (through multiuser interference) several times. It is shown that the corresponding conflict of interest can have a predictable outcome, namely a finitely or discounted repeated game equilibrium. Remarkably, it is shown that this equilibrium is Pareto-efficient under reasonable sufficient conditions and the corresponding decentralized power control policies can be implemented under realistic information assumptions: only individual channel state information and a public signal are required to implement the equilibrium strategies. Explicit equilibrium conditions are derived in terms of minimum number of game stages or maximum discount factor. Both analytical and simulation results are provided to compare the performance of the proposed power control policies with those already existing and exploiting the same information assumptions namely, those derived for the one-shot and Stackelberg games.
---
paper_title: Achieving maximum energy-efficiency in multi-relay OFDMA cellular networks: a fractional programming approach
paper_content:
In this paper, the joint power and subcarrier allocation problem is solved in the context of maximizing the energy-efficiency (EE) of a multi-user, multi-relay orthogonal frequency division multiple access (OFDMA) cellular network, where the objective function is formulated as the ratio of the spectral-efficiency (SE) over the total power dissipation. It is proven that the fractional programming problem considered is quasi-concave so that Dinkelbach's method may be employed for finding the optimal solution at a low complexity. This method solves the above-mentioned master problem by solving a series of parameterized concave secondary problems. These secondary problems are solved using a dual decomposition approach, where each secondary problem is further decomposed into a number of similar subproblems. The impact of various system parameters on the attainable EE and SE of the system employing both EE maximization (EEM) and SE maximization (SEM) algorithms is characterized. In particular, it is observed that increasing the number of relays for a range of cell sizes, although marginally increases the attainable SE, reduces the EE significantly. It is noted that the highest SE and EE are achieved, when the relays are placed closer to the BS to take advantage of the resultant line-of-sight link. Furthermore, increasing both the number of available subcarriers and the number of active user equipment (UE) increases both the EE and the total SE of the system as a benefit of the increased frequency and multi-user diversity, respectively. Finally, it is demonstrated that as expected, increasing the available power tends to improve the SE, when using the SEM algorithm. By contrast, given a sufficiently high available power, the EEM algorithm attains the maximum achievable EE and a suboptimal SE.
---
paper_title: Energy-Efficient Precoding for Multiple-Antenna Terminals
paper_content:
The problem of energy-efficient precoding is investigated when the terminals in the system are equipped with multiple antennas. Considering static and fast-fading multiple-input multiple-output (MIMO) channels, the energy-efficiency is defined as the transmission rate to power ratio and shown to be maximized at low transmit power. The most interesting case is the one of slow fading MIMO channels. For this type of channels, the optimal precoding scheme is generally not trivial. Furthermore, using all the available transmit power is not always optimal in the sense of energy-efficiency [which, in this case, corresponds to the communication-theoretic definition of the goodput-to-power (GPR) ratio]. Finding the optimal precoding matrices is shown to be a new open problem and is solved in several special cases: 1. when there is only one receive antenna; 2. in the low or high signal-to-noise ratio regime; 3. when uniform power allocation and the regime of large numbers of antennas are assumed. A complete numerical analysis is provided to illustrate the derived results and stated conjectures. In particular, the impact of the number of antennas on the energy-efficiency is assessed and shown to be significant.
---
paper_title: Energy Efficiency Optimization of 5G Radio Frequency Chain Systems
paper_content:
With the massive multi-input multi-output (MIMO) antennas technology adopted for the fifth generation (5G) wireless communication systems, a large number of radio frequency (RF) chains have to be employed for RF circuits. However, a large number of RF chains not only increase the cost of RF circuits but also consume additional energy in 5G wireless communication systems. In this paper we investigate energy and cost efficiency optimization solutions for 5G wireless communication systems with a large number of antennas and RF chains. An energy efficiency optimization problem is formulated for 5G wireless communication systems using massive MIMO antennas and millimeter wave technology. Considering the nonconcave feature of the objective function, a suboptimal iterative algorithm, i.e., the energy efficient hybrid precoding (EEHP) algorithm is developed for maximizing the energy efficiency of 5G wireless communication systems. To reduce the cost of RF circuits, the energy efficient hybrid precoding with the minimum number of RF chains (EEHP-MRFC) algorithm is also proposed. Moreover, the critical number of antennas searching (CNAS) and user equipment number optimization (UENO) algorithms are further developed to optimize the energy efficiency of 5G wireless communication systems by the number of transmit antennas and UEs. Compared with the maximum energy efficiency of conventional zero-forcing (ZF) precoding algorithm, numerical results indicate that the maximum energy efficiency of the proposed EEHP and EEHP-MRFC algorithms are improved by 220% and 171%, respectively.
---
paper_title: Pricing and Power Control in a Multicell Wireless Data Network
paper_content:
We consider distributed power control in a multicell wireless data system and study the effect of pricing transmit power. Drawing on the earlier work of Goodman and Mandayam (see IEEE Personal Commun. Mag., vol.7, p.48-54, 2000), we formulate the QoS of a data user via a utility function measured in bits per Joule. We consider distributed power control, modeled as a noncooperative game, where users maximize their utilities in a multicell system. Base station assignment based on received signal strength as well as received signal-to-interference ratio (SIR) are considered jointly with power control. Our results indicate that for both assignment schemes, such a procedure results in an inefficient operating point (Nash equilibrium) for the entire system. We introduce pricing of transmit power as a mechanism for influencing data user behavior and our results show that the distributed power control based on maximizing the net utility (utility minus the price) results in improving the Pareto efficiency of the resulting operating point. Variations of pricing based on global and local loading in cells are considered as a means of improving the efficiency of wireless data networks. Finally, we discuss the improvement in utilities through a centralized scheme where each base station (BS) calculates the best SIR to be targeted by the terminals it is assigned.
---
paper_title: Power-Delay Tradeoff With Predictive Scheduling in Integrated Cellular and Wi-Fi Networks
paper_content:
The explosive growth of global mobile traffic has led to rapid growth in the energy consumption in communication networks. In this paper, we focus on the energy-aware design of the network selection, subchannel, and power allocation in cellular and Wi-Fi networks, while taking into account the traffic delay of mobile users. Based on the two-timescale Lyapunov optimization technique, we first design an online Energy-Aware Network Selection and Resource Allocation (ENSRA) algorithm, which yields a power consumption within $O\left({\frac{1}{V}} \right)$ bound of the optimal value, and guarantees an $O\left(V \right)$ traffic delay for any positive control parameter $V$ . Motivated by the recent advancement in the accurate estimation and prediction of user mobility, channel conditions, and traffic demands, we further develop a novel predictive Lyapunov optimization technique to utilize the predictive information, and propose a Predictive Energy-Aware Network Selection and Resource Allocation (P-ENSRA) algorithm. We characterize the performance bounds of P-ENSRA in terms of the power-delay tradeoff theoretically. To reduce the computational complexity, we finally propose a Greedy Predictive Energy-Aware Network Selection and Resource Allocation (GP-ENSRA) algorithm, where the operator solves the problem in P-ENSRA approximately and iteratively. Numerical results show that GP-ENSRA significantly improves the power-delay performance over ENSRA in the large delay regime. For a wide range of system parameters, GP-ENSRA reduces the traffic delay over ENSRA by 20–30% under the same power consumption.
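A toy drift-plus-penalty scheduler in the spirit of the Lyapunov technique used above is sketched below: in each slot the controller picks the (rate, power) option minimizing V*power - Q*service, so a larger V trades longer queues (delay) for lower average power. The Poisson arrivals, the action set, and the values of V are illustrative assumptions.

```python
# Drift-plus-penalty scheduling: larger V favours lower power at the cost of a
# longer queue (delay); the queue evolves as Q <- max(Q - served, 0) + arrivals.
import numpy as np

rng = np.random.default_rng(0)
actions = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5)]   # (service rate, power) options

def simulate(V=10.0, slots=10_000, arrival_rate=0.8):
    Q, energy = 0.0, 0.0
    for _ in range(slots):
        a = rng.poisson(arrival_rate)            # random arrivals this slot
        # Pick the action minimizing V*power - Q*service (drift-plus-penalty).
        mu, p = min(actions, key=lambda ap: V * ap[1] - Q * ap[0])
        Q = max(Q - mu, 0.0) + a
        energy += p
    return energy / slots, Q

for V in (1.0, 10.0, 100.0):
    avg_power, backlog = simulate(V)
    print(f"V={V:>5}: avg power {avg_power:.3f}, final backlog {backlog:.1f}")
```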
---
paper_title: On the Energy Efficiency-Spectral Efficiency Trade-Off of Distributed MIMO Systems
paper_content:
In this paper, the trade-off between energy efficiency (EE) and spectral efficiency (SE) is analyzed for both the uplink and downlink of the distributed multiple-input multiple-output (DMIMO) system over the Rayleigh fading channel while considering different types of power consumption models (PCMs). A novel tight closed-form approximation of the DMIMO EE-SE trade-off is presented and a detailed analysis is provided for the scenario with practical antenna configurations. Furthermore, generic and accurate low and high-SE approximations of this trade-off are derived for any number of radio access units (RAUs) in both the uplink and downlink channels. Our expressions have been utilized for assessing both the EE gain of DMIMO over co-located MIMO (CMIMO) and the incremental EE gain of DMIMO in the downlink channel. Our results reveal that DMIMO is more energy efficient than CMIMO for cell edge users in both the idealistic and realistic PCMs; whereas in terms of the incremental EE gain, connecting the user terminal to only one RAU is the most energy efficient approach when a realistic PCM is considered.
---
paper_title: Outage Probability and Energy Efficiency of Cooperative MIMO with Antenna Selection
paper_content:
This paper compares the energy efficiency of some cooperative MIMO schemes in wireless networks. By energy efficiency we denote the spectral efficiency seen at the receiver normalized by the total energy consumption, which includes the circuitry, the efficiency of the power amplifier, and the transmission rate. We focus on transmit antenna selection (TAS) and switch and stay combining (SSC) at the receiver. The performance of TAS+SSC is compared to that of TAS and maximal ratio combining (MRC), and to that of transmit/receive beamforming using a singular value decomposition (SVD) technique. We derive closed-form outage probability expressions, and analyze the effect of selecting the antenna from the source to optimize the communication with the relay or with the destination. Our results show that selecting the antennas with respect to the destination is in general a better option when the energy consumption is accounted. Moreover, some power allocation strategies are described and the comparison of the schemes in terms of energy efficiency reveals a considerable improvement with the use of TAS+SSC for low to moderate spectral efficiency, while beamforming outperforms the other schemes for high spectral efficiency when the number of antennas at each node is small.
---
paper_title: Energy Efficiency and Interference Neutralization in Two-Hop MIMO Interference Channels
paper_content:
The issue of energy-aware resource allocation in an amplify-and-forward (AF) relay-assisted multiple-antenna interference channel (IC) is considered. A novel interference neutralization (IN) scheme is proposed for relay design and, based on the IN relay matrix design, two algorithms are developed to jointly allocate the users' transmit powers, beamforming (BF) and receive filters. The first algorithm considers a competitive scenario and employs a noncooperative game-theoretic approach to maximize the individual energy efficiency (EE) of each communication link, defined as the ratio of the achievable rate over the consumed power. The resulting algorithm converges to a unique fixed point, has limited complexity, and can be implemented in a distributed fashion. The second algorithm employs fractional programming tools and sequential convex optimization to centrally allocate the users' transmit powers, BF, and receive filters for global energy efficiency (GEE) maximization. The resulting algorithm is guaranteed to converge and has limited computational complexity. Numerical results show that the competitive IN design achieves virtually the same performance as the cooperative design if IN is feasible, while the gap is small if perfect IN is not achievable.
---
paper_title: Pricing-Based Distributed Energy-Efficient Beamforming for MISO Interference Channels
paper_content:
In this paper, we consider the problem of maximizing the weighted sum energy efficiency (WS-EE) for multi-input single-output (MISO) interference channels (ICs), which are well acknowledged as general models of heterogeneous networks (HetNets), multicell networks, etc. To address this problem, we develop an efficient distributed beamforming algorithm based on a pricing mechanism. Specifically, we carefully introduce a price metric for distributed beamforming design, which fortunately allows efficient closed-form solutions to the per-user beam-vector optimization problem. The convergence of the distributed pricing-based beamforming design is theoretically proven. Furthermore, we present an implementation strategy of the proposed distributed algorithm with limited information exchange. Numerical results show that our algorithm converges much faster than existing algorithms, while yielding comparable, sometimes even better performance in terms of the WS-EE. Finally, by taking the backhaul power consumption into account, it is interesting to show that the proposed algorithm with limited information exchange achieves better WS-EE than the full information exchange-based algorithm in some special cases.
---
paper_title: Energy-Efficient Resource Allocation in OFDM Systems With Distributed Antennas
paper_content:
In this paper, we develop an energy-efficient resource-allocation scheme with proportional fairness for downlink multiuser orthogonal frequency-division multiplexing (OFDM) systems with distributed antennas. Our aim is to maximize energy efficiency (EE) under the constraints of the overall transmit power of each remote access unit (RAU), proportional fairness data rates, and bit error rates (BERs). Because of the nonconvex nature of the optimization problem, obtaining the optimal solution is extremely computationally complex. Therefore, we develop a low-complexity suboptimal algorithm, which separates subcarrier allocation and power allocation. For the low-complexity algorithm, we first allocate subcarriers by assuming equal power distribution. Then, by exploiting the properties of fractional programming, we transform the nonconvex optimization problem in fractional form into an equivalent optimization problem in subtractive form, which includes a tractable solution. Next, an optimal energy-efficient power-allocation algorithm is developed to maximize EE while maintaining proportional fairness. Through computer simulation, we demonstrate the effectiveness of the proposed low-complexity algorithm and illustrate the fundamental tradeoff between energy- and spectral-efficient transmission designs.
---
paper_title: A game-theoretic approach to energy-efficient power control in multicarrier CDMA systems
paper_content:
A game-theoretic model for studying power control in multicarrier code-division multiple-access systems is proposed. Power control is modeled as a noncooperative game in which each user decides how much power to transmit over each carrier to maximize its own utility. The utility function considered here measures the number of reliable bits transmitted over all the carriers per joule of energy consumed and is particularly suitable for networks where energy efficiency is important. The multidimensional nature of users' strategies and the nonquasi-concavity of the utility function make the multicarrier problem much more challenging than the single-carrier or throughput-based-utility case. It is shown that, for all linear receivers including the matched filter, the decorrelator, and the minimum-mean-square-error detector, a user's utility is maximized when the user transmits only on its "best" carrier. This is the carrier that requires the least amount of power to achieve a particular target signal-to-interference-plus-noise ratio at the output of the receiver. The existence and uniqueness of Nash equilibrium for the proposed power control game are studied. In particular, conditions are given that must be satisfied by the channel gains for a Nash equilibrium to exist, and the distribution of the users among the carriers at equilibrium is characterized. In addition, an iterative and distributed algorithm for reaching the equilibrium (when it exists) is presented. It is shown that the proposed approach results in significant improvements in the total utility achieved at equilibrium compared with a single-carrier system and also to a multicarrier system in which each user maximizes its utility over each carrier independently
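The "transmit only on the best carrier" result can be illustrated with the small sketch below: for each user, the carrier requiring the least transmit power to reach a target output SINR is selected, and the remaining carriers are left silent. The simple SINR model, channel gains, interference values, and target SINR are placeholder assumptions.

```python
# For each user, find the carrier on which the target SINR gamma_t is met with
# the least transmit power, and put all power there (the others stay silent).
import numpy as np

def best_carrier(h, interference, noise, gamma_t):
    """h, interference: (n_users, n_carriers) arrays of channel gains and
    received interference; returns per-user (carrier index, required power)."""
    # Required power per carrier: p = gamma_t * (interference + noise) / |h|^2
    p_req = gamma_t * (interference + noise) / np.abs(h) ** 2
    k_best = np.argmin(p_req, axis=1)                  # "best" carrier per user
    return k_best, p_req[np.arange(h.shape[0]), k_best]

h = np.array([[0.9, 0.3, 0.5],
              [0.2, 0.8, 0.4]])
interf = np.array([[1e-3, 5e-4, 2e-3],
                   [8e-4, 1e-3, 6e-4]])
k, p = best_carrier(h, interf, noise=1e-3, gamma_t=5.0)
print(k, p)
```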
---
paper_title: Energy-efficient resource allocation in multi-cell OFDMA systems with limited backhaul capacity
paper_content:
In this paper, resource allocation for energy efficient communication in multi-cell orthogonal frequency division multiple access (OFDMA) downlink networks with cooperative base stations (BSs) is studied. The considered problem is formulated as a non-convex optimization problem which takes into account the circuit power consumption, the limited backhaul capacity, and the minimum required data rate for joint BS zero-forcing beamforming (ZFBF) transmission. By exploiting the properties of fractional programming, the considered non-convex optimization problem in fractional form is transformed into an equivalent optimization problem in subtractive form, which enables the derivation of an efficient iterative resource allocation algorithm. For each iteration, the optimal power allocation solution is derived with a low complexity suboptimal subcarrier allocation policy for maximization of the energy efficiency of data transmission (bit/Joule delivered to the users). Simulation results illustrate that the proposed iterative resource allocation algorithm converges in a small number of iterations, and unveil the trade-off between energy efficiency and network capacity.
---
paper_title: Coordinated Multicell Multiuser Precoding for Maximizing Weighted Sum Energy Efficiency
paper_content:
Energy efficiency optimization of wireless systems has become urgently important due to its impact on the global carbon footprint. In this paper we investigate energy efficient multicell multiuser precoding design and consider a new criterion of weighted sum energy efficiency, which is defined as the weighted sum of the energy efficiencies of multiple cells. This objective is more general than the existing methods and can satisfy heterogeneous requirements from different kinds of cells, but it is hard to tackle due to its sum-of-ratio form. In order to address this non-convex problem, the user rate is first formulated as a polynomial optimization problem with the test conditional probabilities to be optimized. Based on that, the sum-of-ratio form of the energy efficient precoding problem is transformed into a parameterized polynomial form optimization problem, by which a solution in closed form is achieved through a two-layer optimization. We also show that the proposed iterative algorithm is guaranteed to converge. Numerical results are finally provided to confirm the effectiveness of our energy efficient beamforming algorithm. It is observed that in the low signal-to-noise ratio (SNR) region, the optimal energy efficiency and the optimal sum rate are simultaneously achieved by our algorithm; while in the middle-high SNR region, a certain performance loss in terms of the sum rate is suffered to guarantee the weighed sum energy efficiency.
---
paper_title: Power Minimization Based Resource Allocation for Interference Mitigation in OFDMA Femtocell Networks
paper_content:
With the introduction of femtocells, cellular networks are moving from the conventional centralized network architecture to a distributed one, where each network cell should make its own radio resource allocation decisions, while providing inter-cell interference mitigation. However, realizing such distributed network architecture is not a trivial task. In this paper, we first introduce a simple self-organization rule, based on minimizing cell transmit power, following which a distributed cellular network is able to converge into an efficient resource reuse pattern. Based on such self-organization rule and taking realistic resource allocation constraints into account, we also propose two novel resource allocation algorithms, being autonomous and coordinated, respectively. Performance of the proposed self-organization rule and resource allocation algorithms are evaluated using system-level simulations, and show that power efficiency is not necessarily in conflict with capacity improvements at the network level. The proposed resource allocation algorithms provide significant performance improvements in terms of user outages and network capacity over cutting-edge resource allocation algorithms proposed in the literature.
---
paper_title: Design of 3-Way Relay Channels for Throughput and Energy Efficiency.
paper_content:
Throughput and energy efficiency in 3-way relay channels are studied in this paper. Unlike previous contributions, we consider a circular message exchange. First, an outer bound and achievable sum rate expressions for different relaying protocols are derived for 3-way relay channels. The sum capacity is characterized for certain SNR regimes. Next, leveraging the derived achievable sum rate expressions, cooperative and competitive maximization of the energy efficiency are considered. For the cooperative case, both low-complexity and globally optimal algorithms for joint power allocation at the users and at the relay are designed so as to maximize the system global energy efficiency. For the competitive case, a game theoretic approach is taken, and it is shown that the best response dynamics is guaranteed to converge to a Nash equilibrium. A power consumption model for mmWave board-to-board communications is developed, and numerical results are provided to corroborate and provide insight on the theoretical findings. Index Terms: multi-way networks, relay systems, energy efficiency, green communications, resource allocation, fractional programming, monotonic optimization, game theory, 5G networks, mmWave communications, power control.
---
paper_title: Energy-Efficient Power Control: A Look at 5G Wireless Technologies
paper_content:
This paper develops power control algorithms for energy efficiency (EE) maximization (measured in bit/Joule) in wireless networks. Unlike previous related works, minimum-rate constraints are imposed and the signal-to-interference-plus-noise ratio takes a more general expression, which allows one to encompass some of the most promising 5G candidate technologies. Both network-centric and user-centric EE maximizations are considered. In the network-centric scenario, the maximization of the global EE and the minimum EE of the network is performed. Unlike previous contributions, we develop centralized algorithms that are guaranteed to converge, with affordable computational complexity, to a Karush–Kuhn–Tucker point of the considered non-convex optimization problems. Moreover, closed-form feasibility conditions are derived. In the user-centric scenario, game theory is used to study the equilibria of the network and to derive convergent power control algorithms, which can be implemented in a fully decentralized fashion. Both scenarios above are studied under the assumption that single or multiple resource blocks are employed for data transmission. Numerical results assess the performance of the proposed solutions, analyzing the impact of minimum-rate constraints, and comparing the network-centric and user-centric approaches.
---
paper_title: Energy-Efficient Configuration of Spatial and Frequency Resources in MIMO-OFDMA Systems
paper_content:
In this paper, we investigate adaptive configuration of spatial and frequency resources to maximize energy efficiency (EE) and reveal the relationship between the EE and the spectral efficiency (SE) in downlink multiple-input-multiple-output (MIMO) orthogonal frequency division multiple access (OFDMA) systems. We formulate the problem as minimizing the total power consumed at the base station under constraints on the ergodic capacities from multiple users, the total number of subcarriers, and the number of radio frequency (RF) chains. A three-step searching algorithm is developed to solve this problem. We then analyze the impact of spatial-frequency resources, overall SE requirement and user fairness on the SE-EE relationship. Analytical and simulation results show that increasing frequency resource is more efficient than increasing spatial resource to improve the SE-EE relationship as a whole. The EE increases with the SE when the frequency resource is not constrained to the maximum value, otherwise a tradeoff between the SE and the EE exists. Sacrificing the fairness among users in terms of ergodic capacities can enhance the SE-EE relationship. In general, the adaptive configuration of spatial and frequency resources outperforms the adaptive configuration of only spatial or frequency resource.
---
paper_title: Energy-Efficient Scheduling and Power Allocation in Downlink OFDMA Networks With Base Station Coordination
paper_content:
This paper addresses the problem of energy-efficient resource allocation in the downlink of a cellular OFDMA system. Three definitions of the energy efficiency are considered for system design, accounting for both the radiated and the circuit power. User scheduling and power allocation are optimized across a cluster of coordinated base stations with a constraint on the maximum transmit power (either per subcarrier or per base station). The asymptotic noise-limited regime is discussed as a special case. The performance of both an isolated and a non-isolated cluster of coordinated base stations is examined in the numerical experiments. Results show that the maximization of the energy efficiency is approximately equivalent to the maximization of the spectral efficiency for small values of the maximum transmit power, while there is a wide range of values of the maximum transmit power for which a moderate reduction of the data rate provides a large saving in terms of dissipated energy. Also, the performance gap among the considered resource allocation strategies reduces as the out-of-cluster interference increases.
---
paper_title: A Game-Theoretic Approach to Energy-Efficient Power Control and Receiver Design in Cognitive CDMA Wireless Networks
paper_content:
The uplink of a multiuser cognitive radio network, wherein secondary users communicating with a secondary access point coexist with primary users communicating with a primary access point, is considered in this paper. Primary and secondary users' signals coexist in the same frequency band, and the transmit powers of the secondary users are constrained so that the interference from the whole secondary network to each primary user does not exceed a prescribed threshold. Given this constraint, a noncooperative power control game for maximum energy efficiency with a fairness constraint on the maximum received powers for the secondary users has been considered. The considered game is shown to admit a unique Nash equilibrium, also in the case in which energy efficiency is maximized with respect to both transmit power and choice of the linear uplink receiver. Based on large system analysis, a one-shot procedure for computing the users' transmit powers at the Nash equilibrium with no need for iteration among users is also derived. Numerical simulations confirm the theoretical results on the existence and uniqueness of the Nash equilibrium, confirm the effectiveness of the results obtained through the large system analysis, and show that secondary users have a beneficial impact on the whole network throughput, at the price of a moderate degradation in the performance of the primary users.
---
paper_title: Transmitter Waveform and Widely Linear Receiver Design: Noncooperative Games for Wireless Multiple-Access Networks
paper_content:
The issue of noncooperative transceiver optimization in the uplink of a multiuser wireless code division multiple access data network with widely linear detection at the receiver is considered. While previous work in this area has focused on a simple real signal model, in this paper, a baseband complex representation of the data is used so as to properly take into account the I and Q components of the received signal. For the case in which the received signal is improper, a widely linear reception structure, processing separately the data and their complex conjugates, is considered. Several noncooperative resource allocation games are considered for this new scenario, and the performance gains granted by the use of widely linear detection are assessed through theoretical analysis. Numerical results confirm the validity of the theoretical findings and show that exploiting the improper nature of the data in noncooperative resource allocation brings remarkable performance improvements in multiuser wireless systems.
---
paper_title: A Novel Power Consumption Model for Effective Energy Efficiency in Wireless Networks
paper_content:
Designing energy-efficient, delay-aware communication networks has become an inevitable trend in 5G wireless networks. In this letter, we present an energy-efficient and delay-aware cross-layer resource allocation in SISO wireless systems. To achieve this goal, we apply the notion of effective energy efficiency (EEE), defined as the ratio of the system effective capacity (EC) over the total power consumption. Unlike previous works, we introduce a new average power consumption model which accounts for the data link layer, allowing for the probability of emptying the buffer during the transmission timeframe. This leads to a new definition of EEE, which results in better performance in terms of both EEE and EC.
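For orientation, the Monte-Carlo sketch below estimates the quantities involved: the effective capacity EC(theta) = -(1/theta) * ln E[exp(-theta*R)] is computed from sampled rates over a fading channel, and the effective energy efficiency is EC divided by a total power containing transmit and circuit terms. The Rayleigh channel model, QoS exponent, and power figures are illustrative assumptions, not the modified link-layer power model proposed in the letter.

```python
# Effective energy efficiency: effective capacity (a delay-QoS-aware throughput)
# divided by total consumed power (transmit power / PA efficiency + circuit power).
import numpy as np

rng = np.random.default_rng(1)

def effective_capacity(p_tx, theta, snr0=10.0, n=200_000):
    """EC(theta) = -(1/theta) * ln E[exp(-theta * R)], R = log2(1 + p*|h|^2*snr0)."""
    h2 = rng.exponential(scale=1.0, size=n)           # Rayleigh fading power
    rate = np.log2(1.0 + p_tx * snr0 * h2)            # bit/s/Hz per sample
    return -np.log(np.mean(np.exp(-theta * rate))) / theta

def effective_ee(p_tx, theta, p_circuit=0.1, pa_eff=0.35):
    total_power = p_tx / pa_eff + p_circuit
    return effective_capacity(p_tx, theta) / total_power

for p in (0.01, 0.1, 1.0):
    print(f"p_tx={p:>5}: EEE = {effective_ee(p, theta=1.0):.3f} (bit/s/Hz per W)")
```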
---
paper_title: Energy-Aware Competitive Power Control in Relay-Assisted Interference Wireless Networks
paper_content:
Competitive power control for energy efficiency maximization in wireless interference networks is addressed, for the scenarios in which the users' SINR can be expressed as either (a) γ=(α p)/(φ p+ω), or (b) γ=(α p+β p^{2})/(φ p+ω), with p the user's transmit power. The considered SINR expressions naturally arise in relay-assisted systems. The energy efficiency is measured in bit/Joule and is defined as the ratio of a proper function of the SINR, divided by the consumed power. Unlike most previous related works, in the definition of the consumed power, not only the transmit power, but also the circuit power needed to operate the devices is accounted for. A non-cooperative game theoretic approach is employed and distributed power control algorithms are proposed. For both SINR expressions (a) and (b), it is shown that the competitive power allocation problem always admits a Nash equilibrium. Moreover, for the SINR (a), the equilibrium is also shown to be unique and the best-response dynamic is guaranteed to converge to such unique equilibrium. For the two-user case, the efficient computation of the Pareto frontier of the considered game is addressed, and, for benchmarking purposes, a social optimum solution with fairness constraint is derived.
---
paper_title: Energy-Efficient Power Control of Cognitive Femto Users for 5G Communications
paper_content:
We study the energy efficiency issue in 5G communications scenarios, where cognitive femtocells coexist with picocells operating at the same frequency bands. Optimal energy-efficient power allocation based on the sensing-based spectrum sharing (SBSS) is proposed for the uplink cognitive femto users operating in a multiuser MIMO mode. Both hard-decision and soft-decision schemes are considered for the SBSS. Different from the existing energy-efficient designs in multiuser scenarios, which consider system-wise energy efficiency, we consider user-wise energy efficiency and optimize them in a Pareto sense. To resolve the nonconvexity of the formulated optimization problem, we include an additional power constraint to convexify the problem without losing global optimality. Simulation results show that the proposed schemes significantly enhance the energy efficiency of the cognitive femto users compared with the existing spectral-efficient designs.
---
paper_title: Energy-Aware and Rate-Aware Heuristic Beamforming in Downlink MIMO OFDMA Networks With Base-Station Coordination
paper_content:
This paper addresses the problem of coordinated beamforming across a group of base stations (BSs) and frequency slots in the downlink of a multiple-input multiple-output (MIMO) orthogonal frequency-division multiple-access (OFDMA) cellular network. Three figures of merit are considered for system design under a per-BS power constraint: 1) the weighted sum of the rates (WSR) on the frequency slots of the coordinated BSs; 2) the global energy efficiency (GEE), defined as the ratio between the network sum rate and the corresponding consumed power; and 3) the weighted sum of the energy efficiencies (WSEE) on the frequency slots of the coordinated BSs. The Karush–Kuhn–Tucker (KKT) conditions of the considered optimization problems are first derived to gain insight into the structure of the optimal beamformers. Then, we propose a suboptimal design method that can be applied to all considered figures of merit. Numerical results are provided to assess the performance of the proposed beamforming strategies.
---
paper_title: Energy-efficient MIMO underlay spectrum sharing with rate splitting
paper_content:
This work studies energy efficiency (EE) of multiple-input and multiple-output (MIMO) underlay spectrum sharing, where rate splitting at the secondary transmitter and successive decoding at the secondary receiver is deployed when feasible. EE of the secondary transmission is optimized while satisfying the primary rate requirement and the secondary power constraint. EE is defined as the ratio of the achievable secondary rate and the secondary power consumption, including both the transmit power and the circuit power. Numerical results show that higher EE can be achieved at the cost of achievable rate compared with that of MIMO underlay spectrum sharing with rate optimization, and that rate splitting is beneficial in terms of EE.
---
paper_title: Distributed Interference-Aware Energy-Efficient Power Optimization
paper_content:
Power optimization techniques are becoming increasingly important in wireless system design since battery technology has not kept up with the demand of mobile devices. They are also critical to interference management in wireless systems because interference usually results from both aggressive spectral reuse and high power transmission and severely limits system performance. In this paper, we develop an energy-efficient power optimization scheme for interference-limited wireless communications. We consider both circuit and transmission powers and focus on energy efficiency over throughput. We first investigate a non-cooperative game for energy-efficient power optimization in frequency-selective channels and reveal the conditions of the existence and uniqueness of the equilibrium for this game. Most importantly, we discover a sufficient condition for generic multi-channel power control to have a unique equilibrium in frequency-selective channels. Then we study the tradeoff between energy efficiency and spectral efficiency and show by simulation results that the proposed scheme improves both energy efficiency and spectral efficiency in an interference-limited multi-cell cellular network.
---
paper_title: Precoding for Full Duplex Multiuser MIMO Systems: Spectral and Energy Efficiency Maximization
paper_content:
We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.
---
paper_title: Learning to Be Green: Robust Energy Efficiency Maximization in Dynamic MIMO–OFDM Systems
paper_content:
In this paper, we examine the maximization of energy efficiency (EE) in next-generation multiuser MIMO–OFDM networks that vary dynamically over time—e.g., due to user mobility, fluctuations in the wireless medium, modulations in the users’ load, etc. Contrary to the static/stationary regime, the system may evolve in an arbitrary manner, so users must adjust “on the fly,” without being able to predict the state of the system in advance. To tackle these issues, we propose a simple and distributed online optimization policy that leads to no regret , i.e., it allows users to match (and typically outperform) even the best fixed transmit policy in hindsight, irrespective of how the system varies with time. Moreover, to account for the scarcity of perfect channel state information (CSI) in massive MIMO systems, we also study the algorithm’s robustness in the presence of measurement errors and observation noise. Importantly, the proposed policy retains its no-regret properties under very mild assumptions on the error statistics: on average, it enjoys the same performance guarantees as in the noiseless deterministic case. Our analysis is supplemented by extensive numerical simulations, which show that, in realistic network environments, users track their individually optimum transmit profile even under rapidly changing channel conditions, achieving gains of up to 600% in energy efficiency over uniform power allocation policies.
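The following sketch is not the paper's algorithm; it is a generic online projected-gradient power allocator over parallel channels under a fixed power budget, which illustrates the no-regret template of adjusting the transmit profile "on the fly" as the channel gains change arbitrarily from slot to slot. The channel model, step size, and power budget are illustrative assumptions, and for simplicity the tracked objective is the sum-rate rather than the bit/Joule ratio.

import numpy as np

def project_simplex(v, budget):
    # Euclidean projection of v onto {p >= 0, sum(p) = budget}
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - budget
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

rng = np.random.default_rng(1)
n_sub, T, p_tot, step = 8, 500, 1.0, 0.05
p = np.full(n_sub, p_tot / n_sub)          # start from a uniform allocation
total_rate = 0.0
for t in range(T):
    g = rng.exponential(1.0, n_sub)        # time-varying channel gains (Rayleigh power)
    total_rate += np.sum(np.log2(1.0 + g * p))
    grad = g / ((1.0 + g * p) * np.log(2)) # gradient of the slot's sum-rate at the current allocation
    p = project_simplex(p + step * grad, p_tot)   # online projected gradient ascent
print("average rate per slot:", total_rate / T)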
---
paper_title: Framework for Link-Level Energy Efficiency Optimization with Informed Transmitter
paper_content:
The dramatic increase of network infrastructure comes at the cost of rapidly increasing energy consumption, which makes optimization of energy efficiency (EE) an important topic. Since EE is often modeled as the ratio of rate to power, we present a mathematical framework called fractional programming that provides insight into this class of optimization problems, as well as algorithms for computing the solution. The main idea is that the objective function is transformed to a weighted sum of rate and power. A generic problem formulation for systems dissipating transmit-independent circuit power in addition to transmit-dependent power is presented. We show that a broad class of EE maximization problems can be solved efficiently, provided the rate is a concave function of the transmit power. We elaborate examples of various system models including time-varying parallel channels. Rate functions with an arbitrary discrete modulation scheme are also treated. The examples considered lead to water-filling solutions, but these are different from the dual problems of power minimization under rate constraints and rate maximization under power constraints, respectively, because the constraints need not be active. We also demonstrate that if the solution to a rate maximization problem is known, it can be utilized to reduce the EE problem into a one-dimensional convex problem.
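A minimal sketch of the fractional-programming approach described above: Dinkelbach-type iterations transform the ratio objective into a weighted difference of rate and power, and for parallel channels the inner problem has a water-filling-type closed form. The channel gains and circuit power below are illustrative, and the helper name ee_water_filling is our own.

import numpy as np

def ee_water_filling(gains, p_circ, tol=1e-8, max_iter=100):
    """Dinkelbach iterations for max_p sum_i log2(1+g_i p_i) / (p_circ + sum_i p_i)."""
    lam = 0.1                                   # initial guess of the optimal EE (bit/Joule)
    for _ in range(max_iter):
        # inner problem: max_p sum_i log2(1+g_i p_i) - lam*(p_circ + sum_i p_i)
        # the problem is separable, so each channel has a water-filling-type closed form
        p = np.maximum(1.0 / (lam * np.log(2)) - 1.0 / gains, 0.0)
        rate = np.sum(np.log2(1.0 + gains * p))
        power = p_circ + np.sum(p)
        if rate - lam * power < tol:            # Dinkelbach stopping criterion F(lam) ~ 0
            break
        lam = rate / power                      # update the EE estimate
    return p, lam

gains = np.array([2.0, 1.0, 0.5, 0.1])          # illustrative channel gains
p_opt, ee_opt = ee_water_filling(gains, p_circ=1.0)
print("optimal powers:", np.round(p_opt, 3), " EE:", round(ee_opt, 3), "bit/s per W")

Because the constraints of the inner problem need not be active at the optimum, the resulting allocation generally differs from the classical power-constrained water-filling solution.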
---
paper_title: Joint Receiver and Transmitter Optimization for Energy-Efficient CDMA Communications
paper_content:
This paper focuses on the cross-layer issue of joint multiuser detection and resource allocation for energy efficiency in wireless code-division multiple-access (CDMA) networks. In particular, assuming that a linear multiuser detector is adopted in the uplink receiver, the situation considered is that in which each terminal is allowed to vary its transmit power, spreading code, and uplink receiver in order to maximize its own utility, which is defined as the ratio of data throughput to transmit power. Applying a game-theoretic formulation, a non-cooperative game for utility maximization is formulated, and it is proved that a unique Nash equilibrium exists, which, under certain conditions, is also Pareto-optimal. Theoretical results concerning the relationship between the problems of signal-to-interference-plus noise ratio (SINR) maximization and mean-square error (MSE) minimization are given, and, by applying the tools of large system analysis, a new distributed power control algorithm is implemented, based on very little prior information about the user of interest. The utility profile achieved by the active users in a large CDMA system is also computed, and, moreover, the centralized socially optimal solution is analyzed. Considerations concerning the extension of the proposed framework to a multi-cell scenario are also briefly detailed. Simulation results confirm that the proposed non-cooperative game largely outperforms competing alternatives, and that it exhibits negligible performance loss with respect to the socially optimal solution, and only in the case in which the number of users exceeds the processing gain. Finally, results also show an excellent agreement between the theoretical closed-form formulas based on large system analysis and the outcome of numerical experiments.
---
paper_title: Energy Efficiency Optimization for MIMO Broadcast Channels
paper_content:
Characterizing the fundamental energy efficiency (EE) limits of MIMO broadcast channels (BC) is significant for the development of green wireless communications. We address the EE optimization problem for MIMO-BC in this paper and consider a practical power model, i.e., taking into account a transmit independent power which is related to the number of active transmit antennas. Under this setup, we propose a new optimization approach, in which the transmit covariance is optimized under fixed active transmit antenna sets, and then active transmit antenna selection (ATAS) is utilized. During the transmit covariance optimization, we propose a globally optimal energy efficient iterative water-filling scheme through solving a series of concave-convex fractional programs based on the block-coordinate ascent algorithm. After that, ATAS is employed to determine the active transmit antenna set. Since activating more transmit antennas can achieve higher sum-rate but at the cost of larger transmit independent power consumption, there exists a tradeoff between the sum-rate gain and the power consumption. Here ATAS can explore the optimal tradeoff curve and thus further improve the EE. Optimal exhaustive search and low-complexity norm based ATAS schemes are developed. Through simulations, we discuss the effect of different parameters on the EE of the MIMO-BC.
---
paper_title: A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay QoS Constraints
paper_content:
A game-theoretic framework is used to study the effect of constellation size on the energy efficiency of wireless networks for M-QAM modulation. A non-cooperative game is proposed in which each user seeks to choose its transmit power (and possibly transmit symbol rate) as well as the constellation size in order to maximize its own utility while satisfying its delay quality-of-service (QoS) constraint. The utility function used here measures the number of reliable bits transmitted per joule of energy consumed, and is particularly suitable for energy-constrained networks. The best-response strategies and Nash equilibrium solution for the proposed game are derived. It is shown that in order to maximize its utility (in bits per joule), a user must choose the lowest constellation size that can accommodate the user's delay constraint. This strategy is different from one that would maximize spectral efficiency. Using this framework, the tradeoffs among energy efficiency, delay, throughput and constellation size are also studied and quantified. In addition, the effect of trellis-coded modulation on energy efficiency is discussed.
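A small worked example of the energy-per-bit versus constellation-size tradeoff, assuming the standard approximation BER ≈ 0.2·exp(−1.5·γ/(M−1)) for square M-QAM, normalized noise and channel gain, unit symbol rate, and an illustrative circuit power; it is not the paper's utility function, but it reproduces the qualitative conclusion that a small constellation minimizes energy per bit whenever the delay constraint allows it.

import numpy as np

target_ber = 1e-3
p_circuit = 0.2          # circuit power, normalized units (illustrative)
noise_gain = 1.0         # noise power over channel gain, normalized (illustrative)

for M in [4, 16, 64, 256]:
    b = np.log2(M)
    # invert the approximate BER of square M-QAM: BER ~ 0.2*exp(-1.5*gamma/(M-1))
    gamma_req = (M - 1) / 1.5 * np.log(0.2 / target_ber)
    p_tx = gamma_req * noise_gain            # transmit power needed to reach gamma_req
    energy_per_bit = (p_tx + p_circuit) / b  # Joule per bit at unit symbol rate
    print(f"M={M:4d}  bits/symbol={b:.0f}  required SNR={10*np.log10(gamma_req):5.1f} dB  "
          f"energy/bit={energy_per_bit:7.3f}")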
---
paper_title: Energy-Efficient Power Control in Impulse Radio UWB Wireless Networks
paper_content:
In this paper, a game-theoretic model for studying power control for wireless data networks in frequency-selective multipath environments is analyzed. The uplink of an impulse-radio ultrawideband system is considered. The effects of self-interference and multiple-access interference on the performance of generic Rake receivers are investigated for synchronous systems. Focusing on energy efficiency, a noncooperative game is proposed in which users in the network are allowed to choose their transmit powers to maximize their own utilities, and the Nash equilibrium for the proposed game is derived. It is shown that, due to the frequency selective multipath, the noncooperative solution is achieved at different signal-to-interference-plus-noise ratios, depending on the channel realization and the type of Rake receiver employed. A large-system analysis is performed to derive explicit expressions for the achieved utilities. The Pareto-optimal (cooperative) solution is also discussed and compared with the noncooperative approach.
---
paper_title: Distributed Energy-Efficient Power Optimization for CoMP Systems With Max-Min Fairness
paper_content:
This letter considers the power optimization problem for downlink transmission in coordinated multi-point (CoMP) systems. We aim to maximize the minimum weighted energy efficiency (EE) under QoS constraints and limited intercell coordination. The optimization problem is first converted into a standard max-min fractional program, and the applicability of the Dual Dinkelbach-type Algorithm (DDA) is then discussed. A distributed algorithm is proposed to solve the DDA subproblem with limited intercell coordination, where only a small number of positive scalars are shared between base stations. Simulation results show that the proposed algorithm outperforms previous schemes in terms of both the minimum EE across the BSs and fairness.
---
paper_title: Resource Allocation for Power Minimization in the Downlink of THP-Based Spatial Multiplexing MIMO-OFDMA Systems
paper_content:
In this paper, we deal with resource allocation in the downlink of spatial multiplexing multiple-input–multiple-output (MIMO)-orthogonal frequency-division multiple-access (OFDMA) systems. In particular, we concentrate on the problem of jointly optimizing the transmit and receive processing matrices, the channel assignment, and the power allocation with the objective of minimizing the total power consumption while satisfying different quality-of-service (QoS) requirements. A layered architecture is used in which users are first partitioned in different groups on the basis of their channel quality, and then channel assignment and transceiver design are sequentially addressed starting from the group of users with most adverse channel conditions. The multiuser interference among users belonging to different groups is removed at the base station (BS) using a Tomlinson–Harashima precoder operating at user level. Numerical results are used to highlight the effectiveness of the proposed solution and to make comparisons with existing alternatives.
---
paper_title: Energy Efficiency Optimization in Licensed-Assisted Access
paper_content:
To improve system capacity, licensed-assisted access (LAA) has been proposed for long-term evolution (LTE) systems to use unlicensed bands. However, the energy efficiency (EE) of the LTE system may be degraded by LAA since unlicensed bands are generally less energy-efficient than licensed bands. In this paper, we investigate the EE optimization of LAA systems. We first develop a criterion to determine whether unlicensed bands can be leveraged to improve the EE of LAA systems. We prove that unlicensed bands can be used to improve the EE only when the allocated licensed resource blocks (RBs) are not enough. We then investigate joint licensed and unlicensed RB allocation to maximize the EE of each small cell base station (SBS) in a multi-SBS scenario, taking into account fair resource sharing between LTE and WiFi networks. The complete Pareto optimal EE set can be obtained by the weighted Tchebycheff method. We also develop an algorithm to provide fair EE among different SBSs based on the Nash bargaining solution. Numerical results are presented to confirm our analysis and to demonstrate the effectiveness of the proposed algorithms.
---
paper_title: Energy-Efficient Cell Activation, User Association, and Spectrum Allocation in Heterogeneous Networks
paper_content:
Next generation (5G) cellular networks are expected to be supported by an extensive infrastructure with many-fold increase in the number of cells per unit area compared to today. The total energy consumption of base transceiver stations (BTSs) is an important issue for both economic and environmental reasons. In this paper, an optimization-based framework is proposed for energy-efficient global radio resource management in heterogeneous wireless networks. Specifically, with stochastic arrivals of known rates intended for users, the smallest set of BTSs is activated with jointly optimized user association and spectrum allocation to stabilize the network. The average delay is subsequently minimized. The scheme can be carried out periodically on a relatively slow timescale to adapt to aggregate traffic variations and average channel conditions. Numerical results show that the proposed scheme significantly reduces energy consumption and increases quality of service compared to existing schemes.
---
paper_title: Massive MIMO for Next Generation Wireless Systems
paper_content:
Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic.
---
paper_title: Scaling up MIMO: Opportunities and challenges with very large arrays
paper_content:
This article surveys very large (massive) MIMO systems in which base stations are equipped with antenna arrays of unprecedented size. It reviews the potential gains in spectral and radiated energy efficiency, explains why simple linear processing becomes near-optimal as the number of antennas grows, and discusses the main challenges, including channel estimation and pilot contamination, hardware impairments of low-cost components, and the need for realistic channel measurements.
---
paper_title: Energy Group-Buying with Loading Sharing for Green Cellular Networks
paper_content:
In the emerging hybrid electricity market, mobile network operators (MNOs) of cellular networks can make day-ahead energy purchase commitments at low prices and real-time flexible energy purchase at high prices. To minimize electricity bills, it is essential for MNOs to jointly optimize the day-ahead and real-time energy purchase based on their time-varying wireless traffic load. In this paper, we consider two different MNOs coexisting in the same area, and exploit their collaboration in both energy purchase and wireless load sharing for energy cost saving. Specifically, we propose a new approach named energy group buying with load sharing, in which the two MNOs are aggregated as a single group to make the day-ahead and real-time energy purchase, and their base stations (BSs) share the wireless traffic to maximally turn lightly-loaded BSs into sleep mode. When the two MNOs belong to the same entity and aim to minimize their total energy cost, we use the two-stage stochastic programming to obtain the optimal day-ahead and real-time energy group buying jointly with wireless load sharing. When the two MNOs belong to different entities and are self-interested in minimizing their individual energy costs, we propose a novel repeated Nash bargaining scheme for them to negotiate and share their energy costs under energy group buying and load sharing. Our proposed repeated Nash bargaining scheme is shown to achieve Pareto-optimal and fair energy cost reductions for both MNOs.
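A toy two-stage example (not the paper's joint formulation with load sharing) that illustrates the day-ahead versus real-time purchase tradeoff: the operator commits to a cheap day-ahead quantity before the load is known and buys any shortfall at a higher real-time price. Prices, load scenarios, and probabilities are illustrative assumptions.

import numpy as np

# Day-ahead energy is cheap, real-time energy is expensive; the demand is uncertain.
price_da, price_rt = 1.0, 2.5                          # illustrative unit prices
demand_scenarios = np.array([80., 100., 120., 150.])   # possible aggregate loads (kWh)
probs = np.array([0.2, 0.4, 0.3, 0.1])

def expected_cost(q_da):
    # real-time purchase covers any shortfall; surplus day-ahead energy is wasted
    shortfall = np.maximum(demand_scenarios - q_da, 0.0)
    return price_da * q_da + price_rt * np.dot(probs, shortfall)

candidates = np.linspace(0, 200, 2001)
costs = np.array([expected_cost(q) for q in candidates])
q_star = candidates[np.argmin(costs)]
print(f"optimal day-ahead commitment: {q_star:.1f} kWh, expected cost: {costs.min():.1f}")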
---
paper_title: Performance of Conjugate and Zero-Forcing Beamforming in Large-Scale Antenna Systems
paper_content:
Large-Scale Antenna Systems (LSAS) is a form of multi-user MIMO technology in which unprecedented numbers of antennas serve a significantly smaller number of autonomous terminals. We compare the two most prominent linear pre-coders, conjugate beamforming and zero-forcing, with respect to net spectral-efficiency and radiated energy-efficiency in a simplified single-cell scenario where propagation is governed by independent Rayleigh fading, and where channel-state information (CSI) acquisition and data transmission are both performed during a short coherence interval. An effective-noise analysis of the pre-coded forward channel yields explicit lower bounds on net capacity which account for CSI acquisition overhead and errors as well as the sub-optimality of the pre-coders. In turn the bounds generate trade-off curves between radiated energy-efficiency and net spectral-efficiency. For high spectral-efficiency and low energy-efficiency zero-forcing outperforms conjugate beamforming, while at low spectral-efficiency and high energy-efficiency the opposite holds. Surprisingly, in an optimized system, the total LSAS-critical computational burden of conjugate beamforming may be greater than that of zero-forcing. Conjugate beamforming may still be preferable to zero-forcing because of its greater robustness, and because conjugate beamforming lends itself to a de-centralized architecture and de-centralized signal processing.
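The following numpy sketch compares conjugate (matched) beamforming and zero-forcing in a simplified single-cell setting with perfect CSI, i.i.d. Rayleigh fading, and a total transmit power normalization; it omits the CSI-acquisition overhead and the net-capacity bounds of the paper, and the array size, number of users, and SNR are illustrative.

import numpy as np

rng = np.random.default_rng(2)
M, K, snr = 64, 8, 1.0                      # BS antennas, single-antenna users, transmit SNR (illustrative)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)  # i.i.d. Rayleigh

def sum_rate(precoder):
    W = precoder / np.linalg.norm(precoder)          # normalize the total transmit power
    G = np.abs(H @ W) ** 2                           # effective channel gains, K x K
    sig = np.diag(G)
    interf = G.sum(axis=1) - sig
    return np.sum(np.log2(1.0 + snr * sig / (snr * interf + 1.0)))

W_mrt = H.conj().T                                    # conjugate (matched) beamforming
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)     # zero-forcing beamforming
print("MRT sum rate:", round(sum_rate(W_mrt), 2), "bit/s/Hz")
print("ZF  sum rate:", round(sum_rate(W_zf), 2), "bit/s/Hz")

Sweeping the snr variable reproduces the qualitative crossover discussed in the paper: zero-forcing wins at high SNR, while conjugate beamforming closes the gap once noise dominates the residual cross-talk.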
---
paper_title: Energy-Efficient Power Control: A Look at 5G Wireless Technologies
paper_content:
This paper develops power control algorithms for energy efficiency (EE) maximization (measured in bit/Joule) in wireless networks. Unlike previous related works, minimum-rate constraints are imposed and the signal-to-interference-plus-noise ratio takes a more general expression, which allows one to encompass some of the most promising 5G candidate technologies. Both network-centric and user-centric EE maximizations are considered. In the network-centric scenario, the maximization of the global EE and the minimum EE of the network is performed. Unlike previous contributions, we develop centralized algorithms that are guaranteed to converge, with affordable computational complexity, to a Karush–Kuhn–Tucker point of the considered non-convex optimization problems. Moreover, closed-form feasibility conditions are derived. In the user-centric scenario, game theory is used to study the equilibria of the network and to derive convergent power control algorithms, which can be implemented in a fully decentralized fashion. Both scenarios above are studied under the assumption that single or multiple resource blocks are employed for data transmission. Numerical results assess the performance of the proposed solutions, analyzing the impact of minimum-rate constraints, and comparing the network-centric and user-centric approaches.
---
paper_title: Massive MIMO in the UL/DL of Cellular Networks: How Many Antennas Do We Need?
paper_content:
We consider the uplink (UL) and downlink (DL) of non-cooperative multi-cellular time-division duplexing (TDD) systems, assuming that the number N of antennas per base station (BS) and the number K of user terminals (UTs) per cell are large. Our system model accounts for channel estimation, pilot contamination, and an arbitrary path loss and antenna correlation for each link. We derive approximations of achievable rates with several linear precoders and detectors which are proven to be asymptotically tight, but accurate for realistic system dimensions, as shown by simulations. It is known from previous work assuming uncorrelated channels, that as N→∞ while K is fixed, the system performance is limited by pilot contamination, the simplest precoders/detectors, i.e., eigenbeamforming (BF) and matched filter (MF), are optimal, and the transmit power can be made arbitrarily small. We analyze to which extent these conclusions hold in the more realistic setting where N is not extremely large compared to K. In particular, we derive how many antennas per UT are needed to achieve η% of the ultimate performance limit with infinitely many antennas and how many more antennas are needed with MF and BF to achieve the performance of minimum mean-square error (MMSE) detection and regularized zero-forcing (RZF), respectively.
---
paper_title: Energy and Spectral Efficiency of Very Large Multiuser MIMO Systems
paper_content:
A multiplicity of autonomous terminals simultaneously transmits data streams to a compact array of antennas. The array uses imperfect channel-state information derived from transmitted pilots to extract the individual data streams. The power radiated by the terminals can be made inversely proportional to the square-root of the number of base station antennas with no reduction in performance. In contrast if perfect channel-state information were available the power could be made inversely proportional to the number of antennas. Lower capacity bounds for maximum-ratio combining (MRC), zero-forcing (ZF) and minimum mean-square error (MMSE) detection are derived. An MRC receiver normally performs worse than ZF and MMSE. However as power levels are reduced, the cross-talk introduced by the inferior maximum-ratio receiver eventually falls below the noise level and this simple receiver becomes a viable option. The tradeoff between the energy efficiency (as measured in bits/J) and spectral efficiency (as measured in bits/channel use/terminal) is quantified for a channel model that includes small-scale fading but not large-scale fading. It is shown that the use of moderately large antenna arrays can improve the spectral and energy efficiency with orders of magnitude compared to a single-antenna system.
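The sketch below illustrates the power-scaling idea in the simpler perfect-CSI case: with maximum-ratio combining, the per-user transmit power can be cut in proportion to 1/M (the number of base-station antennas) while the uplink rate converges to a nonzero limit. The imperfect-CSI case analyzed in the paper only allows a 1/sqrt(M) reduction; the values of K, E, and the Monte Carlo setup are illustrative.

import numpy as np

rng = np.random.default_rng(3)
K, E = 4, 10.0                     # users and target "effective" SNR after scaling (illustrative)

for M in [16, 64, 256, 1024]:
    p = E / M                      # cut transmit power proportionally to 1/M (perfect-CSI scaling)
    rates = []
    for _ in range(200):           # Monte Carlo over i.i.d. Rayleigh channels
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        h0 = H[:, 0]
        desired = p * np.abs(h0.conj() @ h0) ** 2
        interf = p * sum(np.abs(h0.conj() @ H[:, j]) ** 2 for j in range(1, K))
        noise = np.linalg.norm(h0) ** 2            # unit-variance noise after MRC combining
        rates.append(np.log2(1.0 + desired / (interf + noise)))
    print(f"M={M:5d}: average MRC rate = {np.mean(rates):.2f} bit/s/Hz "
          f"(limit log2(1+E) = {np.log2(1+E):.2f})")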
---
paper_title: Energy Efficient Heterogeneous Cellular Networks
paper_content:
With the exponential increase in mobile internet traffic driven by a new generation of wireless devices, future cellular networks face a great challenge to meet this overwhelming demand of network capacity. At the same time, the demand for higher data rates and the ever-increasing number of wireless users led to rapid increases in power consumption and operating cost of cellular networks. One potential solution to address these issues is to overlay small cell networks with macrocell networks as a means to provide higher network capacity and better coverage. However, the dense and random deployment of small cells and their uncoordinated operation raise important questions about the energy efficiency implications of such multi-tier networks. Another technique to improve energy efficiency in cellular networks is to introduce active/sleep (on/off) modes in macrocell base stations. In this paper, we investigate the design and the associated tradeoffs of energy efficient cellular networks through the deployment of sleeping strategies and small cells. Using a stochastic geometry based model, we derive the success probability and energy efficiency in homogeneous macrocell (single-tier) and heterogeneous K-tier wireless networks under different sleeping policies. In addition, we formulate the power consumption minimization and energy efficiency maximization problems, and determine the optimal operating regimes for macrocell base stations. Numerical results confirm the effectiveness of switching off base stations in homogeneous macrocell networks. Nevertheless, the gains in terms of energy efficiency depend on the type of sleeping strategy used. In addition, the deployment of small cells generally leads to higher energy efficiency but this gain saturates as the density of small cells increases. In a nutshell, our proposed framework provides an essential understanding on the deployment of future green heterogeneous networks.
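As a baseline for this kind of stochastic-geometry analysis, the following Monte Carlo sketch estimates the downlink coverage (success) probability of a single-tier, interference-limited network whose base stations form a Poisson point process, with Rayleigh fading and strongest-BS association; it does not include the sleeping policies or the multi-tier structure of the paper, and the density, path-loss exponent, and threshold are illustrative.

import numpy as np

rng = np.random.default_rng(4)
lam_bs = 1e-5          # BS density per m^2 (illustrative; about 10 BSs per km^2)
alpha = 4.0            # path-loss exponent
T_db = 0.0             # SINR threshold in dB
region = 3000.0        # half-width of the square simulation window (m)
trials, covered = 4000, 0

for _ in range(trials):
    n_bs = rng.poisson(lam_bs * (2 * region) ** 2)
    xy = rng.uniform(-region, region, (n_bs, 2))          # PPP realization of BS locations
    d = np.linalg.norm(xy, axis=1)                        # distances to a typical user at the origin
    fading = rng.exponential(1.0, n_bs)                   # Rayleigh fading (unit-mean power)
    rx = fading * d ** (-alpha)                           # received powers, unit transmit power
    k = np.argmax(rx)                                     # associate with the strongest BS
    sinr = rx[k] / (rx.sum() - rx[k] + 1e-14)             # interference-limited (noise neglected)
    covered += sinr > 10 ** (T_db / 10)

print("coverage probability:", covered / trials)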
---
paper_title: Optimal Design of Energy-Efficient Multi-User MIMO Systems: Is Massive MIMO the Answer?
paper_content:
Assume that a multi-user multiple-input multiple-output (MIMO) system is designed from scratch to uniformly cover a given area with maximal energy efficiency (EE). What are the optimal number of antennas, active users, and transmit power? The aim of this paper is to answer this fundamental question. We consider jointly the uplink and downlink with different processing schemes at the base station and propose a new realistic power consumption model that reveals how the above parameters affect the EE. Closed-form expressions for the EE-optimal value of each parameter, when the other two are fixed, are provided for zero-forcing (ZF) processing in single-cell scenarios. These expressions prove how the parameters interact. For example, in sharp contrast to common belief, the transmit power is found to increase (not to decrease) with the number of antennas. This implies that energy-efficient systems can operate in high signal-to-noise ratio regimes in which interference-suppressing signal processing is mandatory. Numerical and analytical results show that the maximal EE is achieved by a massive MIMO setup wherein hundreds of antennas are deployed to serve a relatively large number of users using ZF processing. The numerical results show the same behavior under imperfect channel state information and in symmetric multi-cell scenarios.
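The sketch below mimics the question posed above with a deliberately simplified zero-forcing rate model and an assumed affine power-consumption model (power-amplifier efficiency plus per-antenna, per-user, and fixed terms), then grid-searches the number of antennas M, the number of users K, and the transmit power jointly. All coefficients are illustrative, the paper's closed-form expressions are not used, and where the optimum lands depends strongly on the assumed power-model coefficients.

import numpy as np

def energy_efficiency(M, K, p_tx, eta=0.4, p_ant=0.5, p_user=0.3, p_fix=10.0):
    # Simple ZF-based model: average per-user SNR ~ p_tx*(M-K)/K with normalized noise/channel gain.
    if K >= M:
        return 0.0
    rate_per_user = np.log2(1.0 + p_tx * (M - K) / K)
    p_total = p_tx / eta + M * p_ant + K * p_user + p_fix   # PA efficiency + per-antenna/user/fixed power
    return K * rate_per_user / p_total                      # bit/s/Hz per Watt

best = max(
    ((energy_efficiency(M, K, p), M, K, p)
     for M in range(10, 201, 10)
     for K in range(1, 101, 5)
     for p in np.linspace(0.5, 40.0, 40)),
    key=lambda t: t[0],
)
print(f"best EE={best[0]:.3f} at M={best[1]}, K={best[2]}, p_tx={best[3]:.1f} W")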
---
paper_title: Spectral and Energy Efficiency of Multipair Two-Way Full-Duplex Relay Systems With Massive MIMO
paper_content:
In this paper, we consider a multipair amplify-and-forward two-way relay channel, where multiple pairs of full-duplex users exchange information through a full-duplex relay equipped with a massive antenna array. To improve the energy efficiency, four typical power-scaling schemes are proposed based on maximum-ratio combining/maximum-ratio transmission (MRC/MRT) and zero-forcing reception/zero-forcing transmission (ZFR/ZFT) at the relay. When the number of relay antennas tends to infinity, we quantify the asymptotic spectral and energy efficiencies of the proposed power-scaling schemes. We show that the loop interference can be reduced by decreasing the transmit power when the relay has many antennas, and that the inter-pair and inter-user interference can likewise be eliminated with a large number of antennas. Moreover, we analytically compare the performance of MRC/MRT and ZFR/ZFT, and describe the impact of the number of user pairs on the spectral efficiency. We also evaluate the energy efficiency under a practical power consumption model and show the impact of the number of relay antennas on the energy efficiencies of the proposed schemes. Furthermore, we characterize the regions in which full-duplex systems can outperform half-duplex systems. Finally, we show that the proposed schemes achieve good tradeoffs between spectral efficiency and energy efficiency.
---
paper_title: Noncooperative Cellular Wireless with Unlimited Numbers of Base Station Antennas
paper_content:
A cellular base station serves a multiplicity of single-antenna terminals over the same time-frequency interval. Time-division duplex operation combined with reverse-link pilots enables the base station to estimate the reciprocal forward- and reverse-link channels. The conjugate-transpose of the channel estimates are used as a linear precoder and combiner respectively on the forward and reverse links. Propagation, unknown to both terminals and base station, comprises fast fading, log-normal shadow fading, and geometric attenuation. In the limit of an infinite number of antennas a complete multi-cellular analysis, which accounts for inter-cellular interference and the overhead and errors associated with channel-state information, yields a number of mathematically exact conclusions and points to a desirable direction towards which cellular wireless could evolve. In particular the effects of uncorrelated noise and fast fading vanish, throughput and the number of terminals are independent of the size of the cells, spectral efficiency is independent of bandwidth, and the required transmitted energy per bit vanishes. The only remaining impairment is inter-cellular interference caused by re-use of the pilot sequences in other cells (pilot contamination) which does not vanish with unlimited number of antennas.
---
paper_title: From Immune Cells to Self-Organizing Ultra-Dense Small Cell Networks
paper_content:
In order to cope with the wireless traffic demand explosion within the next decade, operators are underlying their macrocellular networks with low power base stations in a more dense manner. Such networks are typically referred to as heterogeneous or ultra-dense small cell networks, and their deployment entails a number of challenges in terms of backhauling, capacity provision, and dynamics in spatio-temporally fluctuating traffic load. Self-organizing network (SON) solutions have been defined to overcome these challenges. Since self-organization occurs in a plethora of biological systems, we identify the design principles of immune system self-regulation and draw analogies with respect to ultra-dense small cell networks. In particular, we develop a mathematical model of an artificial immune system (AIS) that autonomously activates or deactivates small cells in response to the local traffic demand. The main goal of the proposed AIS-based SON approach is the enhancement of energy efficiency and improvement of cell-edge throughput. As a proof of principle, system level simulations are carried out in which the bio-inspired algorithm is evaluated for various parameter settings, such as the speed of small cell activation and the delay of deactivation. Analysis using spatio-temporally varying traffic exhibiting uncertainty through geo-location demonstrates the robustness of the AIS-based SON approach proposed.
---
paper_title: Deploying Dense Networks for Maximal Energy Efficiency: Small Cells Meet Massive MIMO
paper_content:
What would a cellular network designed for maximal energy efficiency look like? To answer this fundamental question, tools from stochastic geometry are used in this paper to model future cellular networks and obtain a new lower bound on the average uplink spectral efficiency. This enables us to formulate a tractable uplink energy efficiency (EE) maximization problem and solve it analytically with respect to the density of base stations (BSs), the transmit power levels, the number of BS antennas and users per cell, and the pilot reuse factor. The closed-form expressions obtained from this general EE maximization framework provide valuable insights on the interplay between the optimization variables, hardware characteristics, and propagation environment. Small cells are proved to give high EE, but the EE improvement saturates quickly with the BS density. Interestingly, the maximal EE is achieved by also equipping the BSs with multiple antennas and operate in a “massive MIMO” fashion, where the array gain from coherent detection mitigates interference and the multiplexing of many users reduces the energy cost per user.
---
paper_title: Energy-Per-Bit Minimized Radio Resource Allocation in Heterogeneous Networks
paper_content:
In this paper, we present an energy-per-bit-minimizing radio resource allocation scheme for heterogeneous networks with multi-homing capability, i.e., with users that connect simultaneously to different wireless interfaces. Specifically, we formulate an energy-per-bit minimization problem, which takes the form of a nonlinear fractional program. We then derive a parametric optimization problem from this fractional program and solve the original problem using a double-loop iteration method. In each iteration, we derive the optimal resource allocation policy by applying Lagrangian duality and an efficient dual update method. In addition, we present suboptimal resource allocation algorithms that exploit the properties of the optimal resource allocation policy. Numerical results reveal that the optimal allocation algorithm improves energy efficiency significantly over existing resource allocation algorithms designed for homogeneous networks, and that its performance is superior to the suboptimal algorithms in reducing energy consumption as well as in enhancing network energy efficiency.
---
paper_title: Massive MIMO Systems with Non-Ideal Hardware: Energy Efficiency, Estimation, and Capacity Limits
paper_content:
The use of large-scale antenna arrays can bring substantial improvements in energy and/or spectral efficiency to wireless systems due to the greatly improved spatial resolution and array gain. Recent works in the field of massive multiple-input multiple-output (MIMO) show that the user channels decorrelate when the number of antennas at the base stations (BSs) increases, thus strong signal gains are achievable with little interuser interference. Since these results rely on asymptotics, it is important to investigate whether the conventional system models are reasonable in this asymptotic regime. This paper considers a new system model that incorporates general transceiver hardware impairments at both the BSs (equipped with large antenna arrays) and the single-antenna user equipments (UEs). As opposed to the conventional case of ideal hardware, we show that hardware impairments create finite ceilings on the channel estimation accuracy and on the downlink/uplink capacity of each UE. Surprisingly, the capacity is mainly limited by the hardware at the UE, while the impact of impairments in the large-scale arrays vanishes asymptotically and interuser interference (in particular, pilot contamination) becomes negligible. Furthermore, we prove that the huge degrees of freedom offered by massive MIMO can be used to reduce the transmit power and/or to tolerate larger hardware impairments, which allows for the use of inexpensive and energy-efficient antenna elements.
---
paper_title: Joint Design of Radio and Transport for Green Residential Access Networks
paper_content:
Mobile networks are the largest contributor to the carbon footprint of the telecom sector and their contribution is expected to rapidly increase in the future due to the foreseen traffic growth. Therefore, there is an increasing urgency in the definition of green mobile network deployment strategies. This paper proposes a four-step design and power assessment methodology for mobile networks, taking into consideration both radio and transport segments. A number of mobile network deployment architectures for urban residential areas based on different radio (i.e., macro base station, distributed indoor radio, femto cell) and transport (i.e., microwave, copper, optical fiber) technologies are proposed and evaluated to identify the most energy efficient solution. The results show that with low traffic the conventional macro base station deployment with microwave based backhaul is the best option. However, with higher traffic values heterogeneous networks with macro base stations and indoor small cells are more energy efficient. The best small cell solution highly depends on the transport network architecture. In particular, our results show that a femto cell based deployment with optical fiber backhaul is the most energy efficient, even if a distributed indoor radio architecture (DRA) deployment with fiber fronthaul is also a competitive approach.
---
paper_title: Seven ways that HetNets are a cellular paradigm shift
paper_content:
Imagine a world with more base stations than cell phones: this is where cellular technology is headed in 10-20 years. This mega-trend requires many fundamental differences in visualizing, modeling, analyzing, simulating, and designing cellular networks vs. the current textbook approach. In this article, the most important shifts are distilled down to seven key factors, with the implications described and new models and techniques proposed for some, while others are ripe areas for future exploration.
---
paper_title: Optimal Combination of Base Station Densities for Energy-Efficient Two-Tier Heterogeneous Cellular Networks
paper_content:
In this paper, the optimal BS (Base Station) density for both homogeneous and heterogeneous cellular networks to minimize network energy cost is analyzed with stochastic geometry theory. For homogeneous cellular networks, both upper and lower bounds of the optimal BS density are derived. For heterogeneous cellular networks, our analysis reveals the best type of BSs to be deployed for capacity extension, or to be switched off for energy saving. Specifically, if the ratio between the micro BS cost and the macro BS cost is lower than a threshold, which is a function of path loss and their transmit power, then the optimal strategy is to deploy micro BSs for capacity extension or to switch off macro BSs (if possible) for energy saving with higher priority. Otherwise, the optimal strategy is the opposite. The optimal combination of macro and micro BS densities can be calculated numerically through our analysis, or alternatively be conservatively approximated with a closed-form solution. Based on the parameters from EARTH, numerical results show that in the dense urban scenario, compared to the traditional macro-only homogeneous cellular network with no BS sleeping, deploying micro BSs can reduce about 40% of the total energy cost, and further reduce up to 35% with BS sleeping capability.
---
paper_title: What Will 5G Be?
paper_content:
What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue.
---
paper_title: Transmission capacity of wireless networks
paper_content:
Transmission capacity (TC) is a performance metric for wireless networks that measures the spatial intensity of successful transmissions per unit area, subject to a constraint on the permissible outage probability (where outage occurs when the SINR at a receiver is below a threshold). This volume gives a unified treatment of the TC framework that has been developed by the authors and their collaborators over the past decade. The mathematical framework underlying the analysis (reviewed in Ch. 2) is stochastic geometry: Poisson point processes model the locations of interferers, and (stable) shot noise processes represent the aggregate interference seen at a receiver. Ch. 3 presents TC results (exact, asymptotic, and bounds) on a simple model in order to illustrate a key strength of the framework: analytical tractability yields explicit performance dependence upon key model parameters. Ch. 4 presents enhancements to this basic model --- channel fading, variable link distances, and multi-hop. Ch. 5 presents four network design case studies well-suited to TC: i) spectrum management, ii) interference cancellation, iii) signal threshold transmission scheduling, and iv) power control. Ch. 6 studies the TC when nodes have multiple antennas, which provides a contrast vs. classical results that ignore interference.
---
paper_title: Random Matrix Methods for Wireless Communications
paper_content:
Blending theoretical results with practical applications, this book provides an introduction to random matrix theory and shows how it can be used to tackle a variety of problems in wireless communications. The Stieltjes transform method, free probability theory, combinatoric approaches, deterministic equivalents and spectral analysis methods for statistical inference are all covered from a unique engineering perspective. Detailed mathematical derivations are presented throughout, with thorough explanation of the key results and all fundamental lemmas required for the reader to derive similar calculus on their own. These core theoretical concepts are then applied to a wide range of real-world problems in signal processing and wireless communications, including performance analysis of CDMA, MIMO and multi-cell networks, as well as signal detection and estimation in cognitive radio networks. The rigorous yet intuitive style helps demonstrate to students and researchers alike how to choose the correct approach for obtaining mathematically accurate results.
---
paper_title: Energy-Efficient Hybrid Analog and Digital Precoding for MmWave MIMO Systems With Large Antenna Arrays
paper_content:
Millimeter wave (mmWave) MIMO will likely use hybrid analog and digital precoding, which uses a small number of RF chains to reduce the energy consumption associated with mixed signal components like analog-to-digital components not to mention baseband processing complexity. However, most hybrid precoding techniques consider a fully connected architecture requiring a large number of phase shifters, which is also energy-intensive. In this paper, we focus on the more energy-efficient hybrid precoding with subconnected architecture, and propose a successive interference cancelation (SIC)-based hybrid precoding with near-optimal performance and low complexity. Inspired by the idea of SIC for multiuser signal detection, we first propose to decompose the total achievable rate optimization problem with nonconvex constraints into a series of simple subrate optimization problems, each of which only considers one subantenna array. Then, we prove that maximizing the achievable subrate of each subantenna array is equivalent to simply seeking a precoding vector sufficiently close (in terms of Euclidean distance) to the unconstrained optimal solution. Finally, we propose a low-complexity algorithm to realize SIC-based hybrid precoding, which can avoid the need for the singular value decomposition (SVD) and matrix inversion. Complexity evaluation shows that the complexity of SIC-based hybrid precoding is only about 10% as complex as that of the recently proposed spatially sparse precoding in typical mmWave MIMO systems. Simulation results verify that SIC-based hybrid precoding is near-optimal and enjoys higher energy efficiency than the spatially sparse precoding and the fully digital precoding.
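The following sketch illustrates the sub-connected architecture for a single-user, single-stream case: each RF chain drives one subarray through phase-only weights that co-phase the local channel entries, and a low-dimensional digital matched filter is applied on top. This simple co-phasing heuristic is not the paper's SIC-based algorithm, and the array size, number of RF chains, and SNR are illustrative.

import numpy as np

rng = np.random.default_rng(5)
M, n_rf, snr = 64, 4, 1.0                 # antennas, RF chains (= subarrays), transmit SNR (illustrative)
m_sub = M // n_rf
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)   # single-user MISO channel

# Analog stage: per-subarray phase-only weights that co-phase the local channel entries.
F_a = np.zeros((M, n_rf), dtype=complex)
for r in range(n_rf):
    idx = slice(r * m_sub, (r + 1) * m_sub)
    F_a[idx, r] = np.exp(1j * np.angle(h[idx])) / np.sqrt(m_sub)

# Digital stage: matched filter on the effective (reduced-dimension) channel.
h_eff = h @ F_a                            # 1 x n_rf effective channel
f_d = h_eff.conj() / np.linalg.norm(h_eff)

w_hybrid = F_a @ f_d
w_hybrid /= np.linalg.norm(w_hybrid)
w_digital = h.conj() / np.linalg.norm(h)   # fully digital MRT benchmark

rate_hyb = np.log2(1 + snr * np.abs(h @ w_hybrid) ** 2)
rate_dig = np.log2(1 + snr * np.abs(h @ w_digital) ** 2)
print(f"sub-connected hybrid: {rate_hyb:.2f} bit/s/Hz   fully digital MRT: {rate_dig:.2f} bit/s/Hz")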
---
paper_title: Device-to-device communication in 5G cellular networks: challenges, solutions, and future directions
paper_content:
In a conventional cellular system, devices are not allowed to directly communicate with each other in the licensed cellular bandwidth and all communications take place through the base stations. In this article, we envision a two-tier cellular network that involves a macrocell tier (i.e., BS-to-device communications) and a device tier (i.e., device-to-device communications). Device terminal relaying makes it possible for devices in a network to function as transmission relays for each other and realize a massive ad hoc mesh network. This is obviously a dramatic departure from the conventional cellular architecture and brings unique technical challenges. In such a two-tier cellular system, since the user data is routed through other users? devices, security must be maintained for privacy. To ensure minimal impact on the performance of existing macrocell BSs, the two-tier network needs to be designed with smart interference management strategies and appropriate resource allocation schemes. Furthermore, novel pricing models should be designed to tempt devices to participate in this type of communication. Our article provides an overview of these major challenges in two-tier networks and proposes some pricing schemes for different types of device relaying.
---
paper_title: Wireless Device-to-Device Caching Networks: Basic Principles and System Performance
paper_content:
As wireless video is the fastest growing form of data traffic, methods for spectrally efficient on-demand wireless video streaming are essential to both service providers and users. A key property of video on-demand is the asynchronous content reuse , such that a few popular files account for a large part of the traffic but are viewed by users at different times. Caching of content on wireless devices in conjunction with device-to-device (D2D) communications allows to exploit this property, and provide a network throughput that is significantly in excess of both the conventional approach of unicasting from cellular base stations and the traditional D2D networks for “regular” data traffic. This paper presents in a tutorial and concise form some recent results on the throughput scaling laws of wireless networks with caching and asynchronous content reuse, contrasting the D2D approach with other alternative approaches such as conventional unicasting, harmonic broadcasting , and a novel coded multicasting approach based on caching in the user devices and network-coded transmission from the cellular base station only. Somehow surprisingly, the D2D scheme with spatial reuse and simple decentralized random caching achieves the same near-optimal throughput scaling law as coded multicasting. Both schemes achieve an unbounded throughput gain (in terms of scaling law) with respect to conventional unicasting and harmonic broadcasting, in the relevant regime where the number of video files in the library is smaller than the total size of the distributed cache capacity in the network. To better understand the relative merits of these competing approaches, we consider a holistic D2D system design incorporating traditional microwave (2 GHz) and millimeter-wave (mm-wave) D2D links; the direct connections to the base station can be used to provide those rare video requests that cannot be found in local caches. We provide extensive simulation results under a variety of system settings and compare our scheme with the systems that exploit transmission from the base station only. We show that, also in realistic conditions and nonasymptotic regimes, the proposed D2D approach offers very significant throughput gains.
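A minimal sketch of the caching arithmetic behind such D2D schemes: with a Zipf request distribution and decentralized random caching (approximated here by per-file inclusion probabilities proportional to popularity), the probability that a request can be served from a local or neighbouring cache grows quickly with the number of devices in D2D range. The library size, cache size, Zipf exponent, and neighbourhood sizes are illustrative.

import numpy as np

def hit_probability(n_files, cache_size, zipf_exp, n_devices):
    ranks = np.arange(1, n_files + 1)
    popularity = ranks ** (-zipf_exp)
    popularity /= popularity.sum()                  # Zipf request distribution
    # Decentralized random caching: approximate per-device storage probability of each file.
    q = np.minimum(cache_size * popularity, 1.0)
    # A request is served locally if at least one of the n_devices in range stores the file.
    p_local = 1.0 - (1.0 - q) ** n_devices
    return float(np.dot(popularity, p_local))

for n_dev in [1, 5, 20]:
    p = hit_probability(n_files=1000, cache_size=20, zipf_exp=0.8, n_devices=n_dev)
    print(f"{n_dev:2d} devices in range -> local hit probability {p:.3f}")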
---
paper_title: Traffic off-balancing algorithm for energy efficient networks
paper_content:
The physical layer of high-end network systems uses arrays of multiple interfaces. From a load-balancing perspective, even a light load can be spread across many interfaces; however, this causes energy inefficiency because a large number of interfaces remain poorly utilized. To tackle this inefficiency, a traffic off-balancing algorithm that adaptively puts interfaces to sleep or wakes them up according to the offered traffic is investigated, using 40G/100G Ethernet as the reference model. We report that the suggested algorithm achieves energy efficiency while still satisfying the traffic transmission requirements.
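A minimal sketch of the traffic-adaptive sleep/awake idea, assuming a 100G interface built from ten 10G lanes and a fixed headroom margin; the function name and all figures are illustrative rather than taken from the paper.

import math

def active_lanes(offered_gbps, lane_capacity_gbps=10.0, n_lanes=10, margin=0.1):
    """Keep awake only as many physical lanes as the (margined) offered traffic requires."""
    demand = offered_gbps * (1.0 + margin)          # headroom to absorb short-term bursts
    return min(n_lanes, max(1, math.ceil(demand / lane_capacity_gbps)))

for load in [3, 18, 47, 95]:                         # offered traffic in Gb/s (illustrative)
    k = active_lanes(load)
    print(f"offered {load:3d} Gb/s -> {k} of 10 lanes awake, {10 - k} sleeping")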
---
paper_title: Millimeter Wave Mobile Communications for 5G Cellular: It Will Work!
paper_content:
The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices.
---
paper_title: Energy-aware resource allocation for device-to-device underlay communication
paper_content:
Device-to-device (D2D) communication as an underlay to cellular networks brings significant benefits to users' throughput and battery lifetime. The allocation of power and channel resources to D2D communication needs elaborate coordination, as D2D user equipments (UEs) cause interference to other UEs. In this paper, we propose a novel resource allocation scheme to improve the performance of D2D communication, with battery lifetime explicitly considered as the optimization goal. We first formulate the allocation problem as a non-cooperative resource allocation game in which D2D UEs are viewed as players competing for channel resources. Then, we add pricing to the game in order to improve its efficiency and propose an efficient auction algorithm. Simulations are performed to demonstrate the efficacy of the proposed algorithm.
---
paper_title: Energy Efficient Visible Light Communications Relying on Amorphous Cells
paper_content:
In this paper, we design an energy-efficient indoor visible light communications (VLC) system from a radically new perspective based on an amorphous user-to-network association structure. Explicitly, this intriguing problem is approached from three inter-linked perspectives, considering the cell formation, link-level transmission and system-level optimisation, critically appraising the related optical constraints. To elaborate, apart from proposing hitherto unexplored amorphous cells (A-Cells), we employ a powerful amalgam of asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) and transmitter pre-coding aided multi-input single-output (MISO) transmission. As far as the overall system-level optimisation is concerned, we propose a low-complexity solution dispensing with the classic Dinkelbach's algorithmic structure. Our numerical study compares a range of different cell formation strategies and investigates diverse design aspects of the proposed A-Cells. Specifically, our results show that the proposed A-Cells are capable of achieving a much higher energy efficiency per user than the conventional cell formation for a range of practical field-of-view (FoV) angles.
---
paper_title: Visible light communications for 5G wireless networking systems: from fixed to mobile communications
paper_content:
Visible light communication (VLC), considered a potential access option for 5G wireless communications, is gaining extensive attention. VLC offers strengths in energy efficiency and ultra-wide bandwidth, but suffers from a limited transmission range and from blockage by obstacles in the transmission path. This article provides an investigation of the latest progress in VLC research as a component of 5G wireless communication systems. It highlights the strengths and weaknesses of VLC in comparison with RF-based communications, especially in terms of spectrum, spatial reuse, security and energy efficiency, examines the various lighting sources proposed for VLC systems, and organizes the literature on VLC networking into two categories: fixed and mobile VLC communications.
---
paper_title: Energy Efficiency of Downlink Networks with Caching at Base Stations
paper_content:
Caching popular contents at base stations (BSs) can reduce the backhaul cost and improve the network throughput. Yet whether locally caching at the BSs can improve the energy efficiency (EE), a major goal for fifth generation cellular networks, remains unclear. Due to the entangled impact of various factors on EE such as interference level, backhaul capacity, BS density, power consumption parameters, BS sleeping, content popularity, and cache capacity, another important question is what are the key factors that contribute more to the EE gain from caching. In this paper, we attempt to explore the potential of EE of the cache-enabled wireless access networks and identify the key factors. By deriving closed-form expression of the approximated EE, we provide the condition when the EE can benefit from caching, find the optimal cache capacity that maximizes the network EE, and analyze the maximal EE gain brought by caching. We show that caching at the BSs can improve the network EE when power efficient cache hardware is used. When local caching has EE gain over not caching, caching more contents at the BSs may not provide higher EE. Numerical and simulation results show that the caching EE gain is large when the backhaul capacity is stringent, interference level is low, content popularity is skewed, and when caching at pico BSs instead of macro BSs.
---
paper_title: Five Disruptive Technology Directions for 5G
paper_content:
New research directions will lead to fundamental changes in the design of future 5th generation (5G) cellular networks. This paper describes five technologies that could lead to disruptive design changes in both architecture and components: device-centric architectures, millimeter wave, massive MIMO, smarter devices, and native support for machine-to-machine communication. The key ideas for each technology are described, along with their potential impact on 5G and the research challenges that remain.
---
paper_title: Assessing System-Level Energy Efficiency of mmWave-Based Wearable Networks
paper_content:
The emerging fifth-generation (5G) wireless technology will need to harness the largely unused millimeter-wave (mmWave) spectrum to meet the projected acceleration in mobile traffic demand. Today, the available range of mmWave-based solutions is already represented by the IEEE 802.11ad (WiGig), IEEE 802.15.3c, WirelessHD, and ECMA-387 standards, with more to come in the following years. As the key performance-related aspects of these enabling technologies are rapidly taking shape, the primary research challenge shifts to characterizing network energy efficiency, among other system-level parameters. This is particularly important in scenarios that are not handled by current 4G communication networks, including congested public places, homes, and offices. In these dense deployments, wireless wearable devices are increasingly proliferating to support diverse user needs. However, mmWave operation in crowded environments, and especially for multiple neighboring personal networks, is not yet well understood. Bridging this gap, we conduct a full-fledged energy efficiency assessment of mmWave-based “high-end” wearables that employ advanced antenna beamforming techniques. Our rigorous analytical results shed light on the underlying scaling laws for interacting mmWave-based networks based on IEEE 802.11ad and quantify the impact of beamforming quality on system energy efficiency under various conditions. Furthermore, we look at the system optimization potential subject to realistic hardware capabilities.
---
paper_title: Energy Efficient Resource Allocation for Mixed RF/VLC Heterogeneous Wireless Networks
paper_content:
Developing energy-efficient wireless communication networks has become crucial due to the associated environmental and financial benefits. Visible light communication (VLC) has emerged as a promising candidate for achieving energy-efficient wireless communications. Integrating VLC with radio frequency (RF)-based wireless networks has improved the achievable data rates of mobile users. In this paper, we investigate the energy efficiency benefits of integrating VLC with RF-based networks in a heterogeneous wireless environment. We formulate and solve the problem of power and bandwidth allocation for energy efficiency maximization of a heterogeneous network composed of a VLC system and an RF communication system. Then, we investigate the impact of the system parameters on the energy efficiency of the mixed RF/VLC heterogeneous network. Numerical experiments corroborate the performance superiority of the proposed hybrid system. The impact of the hybrid system parameters on the overall energy efficiency is also quantified.
---
paper_title: Fundamental Tradeoffs on Energy-Aware D2D Communication Underlaying Cellular Networks: A Dynamic Graph Approach
paper_content:
With the ever-increasing energy consumed in transmitting explosively growing mobile data, energy-efficient solutions need to be integrated into future mobile networks. The upcoming 5G networks support device-to-device (D2D) communication underlaying the cellular network, which enables nearby cellular users to communicate directly with high data rates and low transmit power. In this paper, targeting energy-aware D2D communications underlaying a cellular system, we investigate three fundamental questions: what energy-saving gains D2D communication can offer, which factors fundamentally drive the reduction in energy consumption, and how energy consumption trades off against other network factors such as available bandwidth, buffer size and service delay in large-scale D2D communication networks. To answer these challenging questions, we utilize a dynamic graph approach to model the system with human mobility for a realistic D2D communication scenario. Specifically, by formulating a mixed integer linear programming problem that minimizes the energy consumption for data transmission from the cellular base stations to the receivers through any possible transmission route, we obtain a theoretical lower bound on system energy consumption, which shows that cellular D2D communications decrease energy consumption by about 65% on average under the realistic scenario. Furthermore, the obtained fundamental tradeoffs reveal that, for large bandwidth, energy consumption can be kept at a low level by increasing the buffer size and the tolerable service delay.
---
paper_title: Living on the edge: The role of proactive caching in 5G wireless networks
paper_content:
This article explores one of the key enablers of beyond 4G wireless networks leveraging small cell network deployments, proactive caching. Endowed with predictive capabilities and harnessing recent developments in storage, context awareness, and social networks, peak traffic demands can be substantially reduced by proactively serving predictable user demands via caching at base stations and users' devices. In order to show the effectiveness of proactive caching, we examine two case studies that exploit the spatial and social structure of the network, where proactive caching plays a crucial role. First, in order to alleviate backhaul congestion, we propose a mechanism whereby files are proactively cached during off-peak periods based on file popularity and correlations among user and file patterns. Second, leveraging social networks and D2D communications, we propose a procedure that exploits the social structure of the network by predicting the set of influential users to (proactively) cache strategic contents and disseminate them to their social ties via D2D communications. Exploiting this proactive caching paradigm, numerical results show that important gains can be obtained for each case study, with backhaul savings and a higher ratio of satisfied users of up to 22 and 26 percent, respectively. Higher gains can be further obtained by increasing the storage capability at the network edge.
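As a toy illustration of popularity-driven proactive caching, the sketch below assumes a Zipf request distribution and a cache that greedily stores the most popular files during off-peak hours; the catalogue size and Zipf exponent are illustrative assumptions, not values from the article.

```python
import numpy as np

def zipf_popularity(num_files, alpha=0.8):
    """Normalised Zipf request probabilities (assumed popularity model)."""
    ranks = np.arange(1, num_files + 1, dtype=float)
    weights = 1.0 / ranks ** alpha
    return weights / weights.sum()

def cache_hit_ratio(popularity, cache_size):
    """Hit ratio when the cache proactively stores the top-`cache_size` files."""
    return np.sort(popularity)[::-1][:cache_size].sum()

pop = zipf_popularity(num_files=10_000, alpha=0.8)
for frac in (0.01, 0.05, 0.10, 0.20):
    c = int(frac * len(pop))
    print(f"cache = {frac:4.0%} of catalogue -> hit ratio {cache_hit_ratio(pop, c):.2f}")
```

The hit ratio upper-bounds the share of requests that can be served locally, i.e., the peak-hour backhaul traffic that proactive caching can offload.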
---
paper_title: Optimal Adaptive Random Multiaccess in Energy Harvesting Wireless Sensor Networks
paper_content:
Wireless sensors can integrate rechargeable batteries and energy-harvesting (EH) devices to enable long-term, autonomous operation, thus requiring intelligent energy management to limit the adverse impact of energy outages. This work considers a network of EH wireless sensors, which report packets with a random utility value to a fusion center (FC) over a shared wireless channel. Decentralized access schemes are designed, where each node performs a local decision to transmit/discard a packet, based on an estimate of the packet's utility, its own energy level, and the scenario state of the EH process, with the objective to maximize the average long-term aggregate utility of the packets received at the FC. Due to the non-convex structure of the problem, an approximate optimization is developed by resorting to a mathematical artifice based on a game theoretic formulation of the multiaccess scheme, where the nodes do not behave strategically, but rather attempt to maximize a common network utility with respect to their own policy. The symmetric Nash equilibrium (SNE) is characterized, where all nodes employ the same policy; its uniqueness is proved, and it is shown to be a local maximum of the original problem. An algorithm to compute the SNE is presented, and a heuristic scheme is proposed, which is optimal for large battery capacity. It is shown numerically that the SNE typically achieves near-optimal performance, within 3% of the optimal policy, at a fraction of the complexity, and two operational regimes of EH-networks are identified and analyzed: an energy-limited scenario , where energy is scarce and the channel is under-utilized, and a network-limited scenario , where energy is abundant and the shared wireless channel represents the bottleneck of the system.
---
paper_title: Modeling and Analysis of Energy Consumption for RF Transceivers in Wireless Cellular Systems
paper_content:
In this paper, a comprehensive model is provided to study the energy consumption of wireless cellular devices, by analyzing the relationship between the modulation order and the energy consumption of the power amplifier (PA) and other circuits in radio frequency transceivers. Two types of energy consumption for PAs are studied in detail: transmitted energy, which is provided to the antenna to transmit data, and energy dissipated as heat. First, the transmitted energy is studied for different modulation orders and different distances between the transmitter and receiver. Next, the dissipated energy with all corresponding parameters, such as the peak-to-average ratio (PAR) and the drain efficiency of the PA, is discussed. Other circuits are examined to show that, unlike in other models in the literature, their energy consumption changes with the modulation order. The results reinforce the idea that increasing the modulation order leads to higher energy consumption in the RF transceiver at large distances. The results also show that the energy dissipated due to the PAR and the drain efficiency is larger than the transmitted energy.
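A minimal sketch of this energy accounting is given below, assuming square M-QAM with the standard textbook peak-to-average power ratio expression and an effective PA efficiency of eta_max/PAR (a simple back-off assumption; the paper's exact circuit model may differ). All numeric values are illustrative.

```python
import numpy as np

def mqam_par(M):
    """Peak-to-average power ratio of square M-QAM (common textbook expression)."""
    return 3.0 * (np.sqrt(M) - 1.0) / (np.sqrt(M) + 1.0)

def pa_energy_split(tx_power_w, duration_s, par, eta_max=0.35):
    """Split PA energy into the part radiated by the antenna and the part
    dissipated as heat, assuming effective efficiency eta_max / PAR
    (a back-off assumption, not necessarily the paper's model)."""
    eta = eta_max / par
    consumed = tx_power_w * duration_s / eta     # energy drawn from the supply [J]
    radiated = tx_power_w * duration_s           # energy delivered to the antenna [J]
    dissipated = consumed - radiated             # energy lost as heat [J]
    return radiated, dissipated

# Higher modulation orders have a larger PAR, forcing the PA to back off and
# dissipate a larger share of the consumed energy as heat.
for M in (4, 16, 64):
    rad, dis = pa_energy_split(tx_power_w=0.1, duration_s=1e-3, par=mqam_par(M))
    print(f"{M:3d}-QAM: radiated {rad*1e6:6.1f} uJ, dissipated {dis*1e6:6.1f} uJ")
```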
---
paper_title: Optimum Transmission Policies for Battery Limited Energy Harvesting Nodes
paper_content:
Wireless networks with energy harvesting, battery-equipped nodes are quickly emerging as a viable option for future wireless networks with extended lifetime. Just as important as the design of the energy harvesting radios themselves are the design principles that this new networking paradigm calls for. In particular, unlike wireless networks considered to date, the energy replenishment process and the storage constraints of the rechargeable batteries need to be taken into account in designing efficient transmission strategies. In this work, such transmission policies for rechargeable nodes are considered, and optimum solutions for two related problems are identified. Specifically, the transmission policy that maximizes the short-term throughput, i.e., the amount of data transmitted within a finite time horizon, is found. In addition, the relation of this optimization problem to another, namely, the minimization of the transmission completion time for a given amount of data, is demonstrated, which leads to the solution of the latter as well. The optimum transmission policies are identified under the constraints of energy causality, i.e., the energy replenishment process, as well as energy storage, i.e., the battery capacity. For battery replenishment, a model with discrete packets of energy arrivals is considered. The necessary conditions that the throughput-optimal allocation satisfies are derived, and then the algorithm that finds the optimal transmission policy with respect to the short-term throughput and the minimum transmission completion time is given. Numerical results are presented to confirm the analytical findings.
---
paper_title: Energy-Efficient Hybrid Analog and Digital Precoding for MmWave MIMO Systems With Large Antenna Arrays
paper_content:
Millimeter wave (mmWave) MIMO will likely use hybrid analog and digital precoding, which uses a small number of RF chains to reduce the energy consumption associated with mixed-signal components such as analog-to-digital converters, as well as the baseband processing complexity. However, most hybrid precoding techniques consider a fully connected architecture requiring a large number of phase shifters, which is also energy-intensive. In this paper, we focus on the more energy-efficient hybrid precoding with subconnected architecture, and propose a successive interference cancelation (SIC)-based hybrid precoding with near-optimal performance and low complexity. Inspired by the idea of SIC for multiuser signal detection, we first propose to decompose the total achievable rate optimization problem with nonconvex constraints into a series of simple subrate optimization problems, each of which only considers one subantenna array. Then, we prove that maximizing the achievable subrate of each subantenna array is equivalent to simply seeking a precoding vector sufficiently close (in terms of Euclidean distance) to the unconstrained optimal solution. Finally, we propose a low-complexity algorithm to realize SIC-based hybrid precoding, which can avoid the need for singular value decomposition (SVD) and matrix inversion. Complexity evaluation shows that SIC-based hybrid precoding is only about 10% as complex as the recently proposed spatially sparse precoding in typical mmWave MIMO systems. Simulation results verify that SIC-based hybrid precoding is near-optimal and enjoys higher energy efficiency than the spatially sparse precoding and the fully digital precoding.
---
paper_title: Wireless Networks With RF Energy Harvesting: A Contemporary Survey
paper_content:
Radio frequency (RF) energy transfer and harvesting techniques have recently become alternative methods to power the next generation wireless networks. As this emerging technology enables proactive energy replenishment of wireless devices, it is advantageous in supporting applications with quality of service (QoS) requirement. In this paper, we present an extensive literature review on the research progresses in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of the RF-EHNs including system architecture, RF energy harvesting techniques and existing applications. Then, we present the background in circuit design as well as the state-of-the-art circuitry implementations, and review the communication protocols specially designed for RF-EHNs. We also explore various key design issues in the development of RF-EHNs according to the network types, i.e., single-hop network, multi-antenna network, relay network and cognitive radio network. Finally, we envision some open research directions.
---
paper_title: Traffic-Aware Cloud RAN: A Key for Green 5G Networks
paper_content:
Next generation 5G wireless networks envision innovative radio technologies for ultra dense deployment with improved coverage and higher data rates. However, the deployment of ultra dense 5G networks, with relatively smaller cells, raises significant challenges in network energy consumption. Emerging green cloud radio access networks (C-RANs) are providing assurance of energy efficient cellular operations for reduction of both greenhouse emissions and operators’ energy bill. Cellular traffic dynamics play a significant role in efficient network energy management. In this paper, we first identify the complexity of the optimal traffic awareness in 5G C-RAN and design a framework for traffic-aware energy optimization. The virtual base station cluster (VBSC) of C-RAN exploits an information theoretic approach to model and analyze the uncertainty of cellular traffic, captured by remote radio heads (RRH). Subsequently, using an online, stochastic game theoretic algorithm, the VBS instances optimize and learn the cellular traffic patterns. Efficient learning makes the C-RAN aware of the near-future traffic. Traffic awareness helps in selective switching of a subset of RRHs, thus reducing the overall energy consumption. Our VBS prototype implementation, testbed experiments, and simulation results, performed with actual cellular traffic traces, demonstrate that our framework results in almost 25% daily energy savings and 35% increased energy efficiency with a negligible overhead.
---
paper_title: Mixed-ADC Massive MIMO
paper_content:
Motivated by the demand for energy-efficient communication solutions in the next generation cellular network, a mixed-ADC architecture for massive multiple-input-multiple-output (MIMO) systems is proposed, which differs from previous works in that herein one-bit analog-to-digital converters (ADCs) partially replace the conventionally assumed high-resolution ADCs. The information-theoretic tool of generalized mutual information (GMI) is exploited to analyze the achievable data rates of the proposed system architecture and an array of analytical results of engineering interest are obtained. For fixed single-input-multiple-output (SIMO) channels, a closed-form expression of the GMI is derived, based on which the linear combiner is optimized. The analysis is then extended to ergodic fading channels, for which tight lower and upper bounds of the GMI are obtained. Impacts of dithering and imperfect channel state information (CSI) are also investigated, and it is shown that dithering can remarkably improve the system performance while imperfect CSI only introduces a marginal rate loss. Finally, the analytical framework is applied to the multiuser access scenario. Numerical results demonstrate that the mixed-ADC architecture with a relatively small number of high-resolution ADCs is able to achieve a large fraction of the channel capacity of conventional architecture, while reduce the energy consumption considerably even compared with antenna selection, for both single-user and multiuser scenarios.
---
paper_title: MIMO Beamforming Designs With Partial CSI Under Energy Harvesting Constraints
paper_content:
In this letter, we investigate multiple-input multiple-output (MIMO) communications under energy harvesting (EH) constraints. In the considered EH system, there is one information transmitting (ITx) node, one traditional information receiving (IRx) node and multiple EH nodes. EH nodes can transform the received electromagnetic waves into energy to prolong the network operation lifetime. When the ITx node sends signals to the destination, it should also optimize the beamforming/precoder matrix to simultaneously and efficiently charge the EH nodes. Additionally, the charged energy should be larger than a predefined threshold. Under the EH constraints, both the minimum mean-square-error (MMSE) and the mutual information are taken as performance metrics for the beamforming designs at the ITx node. To make the proposed algorithms suitable for practical implementation with affordable overhead, this work focuses on beamforming designs with partial CSI, which is its distinct contribution. Finally, numerical results are given to show the performance advantages of the proposed algorithms.
---
paper_title: Optimal Packet Scheduling in an Energy Harvesting Communication System
paper_content:
We consider the optimal packet scheduling problem in a single-user energy harvesting wireless communication system. In this system, both the data packets and the harvested energy are modeled to arrive at the source node randomly. Our goal is to adaptively change the transmission rate according to the traffic load and available energy, such that the time by which all packets are delivered is minimized. Under a deterministic system setting, we assume that the energy harvesting times and harvested energy amounts are known before the transmission starts. For the data traffic arrivals, we consider two different scenarios. In the first scenario, we assume that all bits have arrived and are ready at the transmitter before the transmission starts. In the second scenario, we consider the case where packets arrive during the transmissions, with known arrival times and sizes. We develop optimal off-line scheduling policies which minimize the time by which all packets are delivered to the destination, under causality constraints on both data and energy arrivals.
---
paper_title: Large scale antenna system with hybrid digital and analog beamforming structure
paper_content:
Large scale antenna systems (LSAS) are expected to significantly enhance the energy efficiency (EE) and spectrum efficiency (SE) of wireless communication systems. However, there are many open issues regarding the implementation of digital beamforming (BF) structures: calibration, complexity, and cost. In a practical LSAS deployment, hybrid digital and analog BF structures with active antennas can be an alternative choice. In this paper, an N (the number of transceivers) by M (the number of active antennas per transceiver) hybrid BF structure is investigated, where the analog BF is performed per transceiver and digital BF is performed across N transceivers. Analysis of the N by M BF structure includes: the optimal analog and digital BF design, EE-SE relationship at the green point (i.e. the point with highest EE) in EE-SE curve, impact of N on EE performance at a given SE value, and impact of N on the green point EE. Numerical simulations are provided to support the analysis.
---
paper_title: Cloud technologies for flexible 5G radio access networks
paper_content:
The evolution toward 5G mobile networks will be characterized by an increasing number of wireless devices, increasing device and service complexity, and the requirement to access mobile services ubiquitously. Two key enablers will allow the realization of the vision of 5G: very dense deployments and centralized processing. This article discusses the challenges and requirements in the design of 5G mobile networks based on these two key enablers. It discusses how cloud technologies and flexible functionality assignment in radio access networks enable network densification and centralized operation of the radio access network over heterogeneous backhaul networks. The article describes the fundamental concepts and shows how the 3GPP LTE architecture can evolve in this direction.
---
paper_title: Relaying Protocols for Wireless Energy Harvesting and Information Processing
paper_content:
An emerging solution for prolonging the lifetime of energy-constrained relay nodes in wireless networks is to exploit the ambient radio-frequency (RF) signal to simultaneously harvest energy and process information. In this paper, an amplify-and-forward (AF) relaying network is considered, where an energy-constrained relay node harvests energy from the received RF signal and uses that harvested energy to forward the source information to the destination. Based on the time switching and power splitting receiver architectures, two relaying protocols, namely, i) the time switching-based relaying (TSR) protocol and ii) the power splitting-based relaying (PSR) protocol, are proposed to enable energy harvesting and information processing at the relay. In order to determine the throughput, analytical expressions for the outage probability and the ergodic capacity are derived for the delay-limited and delay-tolerant transmission modes, respectively. The numerical analysis provides practical insights into the effect of various system parameters, such as the energy harvesting time, power splitting ratio, source transmission rate, source-to-relay distance, noise power, and energy harvesting efficiency, on the performance of wireless energy harvesting and information processing using AF relay nodes. In particular, the TSR protocol outperforms the PSR protocol in terms of throughput at relatively low signal-to-noise ratios and high transmission rates.
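A minimal Monte Carlo sketch of the TSR throughput tradeoff is given below, assuming unit-mean Rayleigh fading, the harvested-energy expression E_h = eta * alpha * T * P_s * |h|^2, and ignoring path loss and relay processing noise; all parameter values are illustrative assumptions, so the numbers should not be read as the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
P_S    = 1.0        # source transmit power [W]
ETA    = 0.6        # RF-to-DC conversion efficiency
SIGMA2 = 1e-3       # noise power [W]
R      = 2.0        # target rate [bit/s/Hz]
N_MC   = 200_000    # Monte Carlo samples

def tsr_throughput(alpha):
    """Delay-limited throughput of simplified TSR AF relaying over Rayleigh fading."""
    h2 = rng.exponential(1.0, N_MC)              # |h|^2, source -> relay
    g2 = rng.exponential(1.0, N_MC)              # |g|^2, relay -> destination
    # Energy harvested during the alpha*T phase powers the (1-alpha)*T/2 relaying phase
    P_r = 2.0 * ETA * alpha * P_S * h2 / (1.0 - alpha)
    snr1 = P_S * h2 / SIGMA2                     # first-hop SNR
    snr2 = P_r * g2 / SIGMA2                     # second-hop SNR
    snr_e2e = snr1 * snr2 / (snr1 + snr2 + 1.0)  # standard AF end-to-end SNR
    outage = np.mean(snr_e2e < 2.0 ** R - 1.0)
    return (1.0 - alpha) / 2.0 * R * (1.0 - outage)

for alpha in (0.1, 0.2, 0.3, 0.5, 0.7):
    print(f"alpha = {alpha:.1f} -> throughput {tsr_throughput(alpha):.3f} bit/s/Hz")
```

Increasing alpha gives the relay more harvested power (lower outage) but leaves less time for information transfer, which is the tradeoff the protocol optimises.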
---
paper_title: A Survey on Power-Amplifier-Centric Techniques for Spectrum- and Energy-Efficient Wireless Communications
paper_content:
In this paper, we provide a survey on techniques to improve the spectrum and energy efficiency of wireless communication systems. Recognizing the fact that power amplifier (PA) is one of the most critical components in wireless communication systems and consumes a significant fraction of the total energy, we take a bottom-up approach to focus on PA-centric designs. In the first part of the survey, we introduce the fundamental properties of the PA, such as linearity and efficiency . Next, we quantify the detrimental effects of the signal non-linearity and power inefficiency of the PA on the spectrum efficiency (SE) and energy efficiency (EE) of wireless communications. In the last part, we survey known mitigation techniques from three perspectives: PA design , signal design and network design . We believe that this broad understanding will help motivate holistic design approaches to mitigate the non-ideal effects in real-life PA devices, and accelerate cross-domain research to further enhance the available techniques.
---
paper_title: Energy-Efficient Resource Allocation in OFDMA Systems with Hybrid Energy Harvesting Base Station
paper_content:
We study resource allocation algorithm design for energy-efficient communication in an orthogonal frequency division multiple access (OFDMA) downlink network with hybrid energy harvesting base station (BS). Specifically, an energy harvester and a constant energy source driven by a non-renewable resource are used for supplying the energy required for system operation. We first consider a deterministic offline system setting. In particular, assuming availability of non-causal knowledge about energy arrivals and channel gains, an offline resource allocation problem is formulated as a non-convex optimization problem over a finite horizon taking into account the circuit energy consumption, a finite energy storage capacity, and a minimum required data rate. We transform this non-convex optimization problem into a convex optimization problem by applying time-sharing and exploiting the properties of non-linear fractional programming which results in an efficient asymptotically optimal offline iterative resource allocation algorithm for a sufficiently large number of subcarriers. In each iteration, the transformed problem is solved by using Lagrange dual decomposition. The obtained resource allocation policy maximizes the weighted energy efficiency of data transmission (weighted bit/Joule delivered to the receiver). Subsequently, we focus on online algorithm design. A conventional stochastic dynamic programming approach is employed to obtain the optimal online resource allocation algorithm which entails a prohibitively high complexity. To strike a balance between system performance and computational complexity, we propose a low complexity suboptimal online iterative algorithm which is motivated by the offline algorithm. Simulation results illustrate that the proposed suboptimal online iterative resource allocation algorithm does not only converge in a small number of iterations, but also achieves a close-to-optimal system energy efficiency by utilizing only causal channel state and energy arrival information.
---
paper_title: Wireless-Powered Cooperative Communications: Power-Splitting Relaying with Energy Accumulation
paper_content:
A harvest-use-store power splitting (PS) relaying strategy with distributed beamforming is proposed for wireless-powered multi-relay cooperative networks in this paper. Different from the conventional battery-free PS relaying strategy, harvested energy is prioritized to power information relaying while the remainder is accumulated and stored for future usage with the help of a battery in the proposed strategy, which supports an efficient utilization of harvested energy. However, PS affects throughput at subsequent time slots due to the battery operations including the charging and discharging. To this end, PS and battery operations are coupled with distributed beamforming. A throughput optimization problem to incorporate these coupled operations is formulated though it is intractable. To address the intractability of the optimization, a layered optimization method is proposed to achieve the optimal joint PS and battery operation design with non-causal channel state information (CSI), in which the PS and the battery operation can be analyzed in a decomposed manner. Then, a general case with causal CSI is considered, where the proposed layered optimization method is extended by utilizing the statistical properties of CSI. To reach a better tradeoff between performance and complexity, a greedy method that requires no information about subsequent time slots is proposed. Simulation results reveal the upper and lower bound on performance of the proposed strategy, which are reached by the layered optimization method with non-causal CSI and the greedy method, respectively. Moreover, the proposed strategy outperforms the conventional PS-based relaying without energy accumulation and time switching-based relaying strategy.
---
paper_title: Energy cooperation in cellular networks with renewable powered base stations
paper_content:
In this paper, we propose a model for energy cooperation between cellular base stations (BSs) with individual renewable energy sources, limited energy storages and connected by resistive power lines for energy sharing. When the renewable energy profile and energy demand profile at all BSs are deterministic or known ahead of time, we show that the optimal energy cooperation policy for the BSs can be found by solving a linear program. We show the benefits of energy cooperation in this regime. When the renewable energy and demand profiles are stochastic and only causally known at the BSs, we propose an online energy cooperation algorithm and show the optimality properties of this algorithm under certain conditions. Furthermore, the energy-saving performances of the developed offline and online algorithms are compared by simulations, and the effect of the availability of energy state information (ESI) on the performance gains of the BSs' energy cooperation is investigated.
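To make the offline case concrete, the sketch below solves a small linear program for two BSs that can exchange harvested energy over a lossy line, minimising the total energy drawn from the conventional grid; per-slot storage is omitted for brevity, and all profiles as well as the 80% transfer efficiency are illustrative assumptions rather than the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative per-slot harvest (H) and demand (D) profiles for two BSs [kWh]
H1 = np.array([6.0, 5.0, 1.0, 0.5]);  D1 = np.array([2.0, 2.0, 3.0, 3.0])
H2 = np.array([0.5, 1.0, 4.0, 6.0]);  D2 = np.array([3.0, 3.0, 2.0, 2.0])
beta = 0.8            # fraction of transferred energy surviving line losses (assumed)
T = len(H1)

# Variable vector: [e12(0..T-1), e21(0..T-1), g1(0..T-1), g2(0..T-1)]
n = 4 * T
c = np.concatenate([np.zeros(2 * T), np.ones(2 * T)])    # minimise total grid energy

A_ub, b_ub = [], []
for t in range(T):
    # BS1 demand:  e12_t - beta*e21_t - g1_t <= H1_t - D1_t
    row = np.zeros(n); row[t] = 1.0; row[T + t] = -beta; row[2 * T + t] = -1.0
    A_ub.append(row); b_ub.append(H1[t] - D1[t])
    # BS2 demand:  e21_t - beta*e12_t - g2_t <= H2_t - D2_t
    row = np.zeros(n); row[T + t] = 1.0; row[t] = -beta; row[3 * T + t] = -1.0
    A_ub.append(row); b_ub.append(H2[t] - D2[t])
    # A BS can only share energy it harvests in that slot (no storage in this sketch)
    row = np.zeros(n); row[t] = 1.0
    A_ub.append(row); b_ub.append(H1[t])
    row = np.zeros(n); row[T + t] = 1.0
    A_ub.append(row); b_ub.append(H2[t])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, None)] * n)
e12, e21 = np.split(res.x, 4)[:2]
print("energy sent BS1 -> BS2 per slot [kWh]:", np.round(e12, 2))
print("energy sent BS2 -> BS1 per slot [kWh]:", np.round(e21, 2))
print("grid energy with cooperation    [kWh]:", round(float(res.fun), 2))
print("grid energy without cooperation [kWh]:",
      round(float(np.maximum(D1 - H1, 0).sum() + np.maximum(D2 - H2, 0).sum()), 2))
```

Even this toy instance shows the mechanism: surplus renewable energy at one BS covers the deficit at the other, at the cost of the line-loss factor beta.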
---
paper_title: Wireless Information and Power Transfer: Energy Efficiency Optimization in OFDMA Systems
paper_content:
This paper considers orthogonal frequency division multiple access (OFDMA) systems with simultaneous wireless information and power transfer. We study the resource allocation algorithm design for maximization of the energy efficiency of data transmission (bits/Joule delivered to the receivers). In particular, we focus on power splitting hybrid receivers which are able to split the received signals into two power streams for concurrent information decoding and energy harvesting. Two scenarios are investigated considering different power splitting abilities of the receivers. In the first scenario, we assume receivers which can split the received power into a continuous set of power streams with arbitrary power splitting ratios. In the second scenario, we examine receivers which can split the received power only into a discrete set of power streams with fixed power splitting ratios. For both scenarios, we formulate the corresponding algorithm design as a non-convex optimization problem which takes into account the circuit power consumption, the minimum data rate requirements of delay constrained services, the minimum required system data rate, and the minimum amount of power that has to be delivered to the receivers. By exploiting fractional programming and dual decomposition, suboptimal iterative resource allocation algorithms are developed to solve the non-convex problems. Simulation results illustrate that the proposed iterative resource allocation algorithms approach the optimal solution within a small number of iterations and unveil the trade-off between energy efficiency, system capacity, and wireless power transfer: (1) wireless power transfer enhances the system energy efficiency by harvesting energy in the radio frequency, especially in the interference limited regime; (2) the presence of multiple receivers is beneficial for the system capacity, but not necessarily for the system energy efficiency.
---
paper_title: Designing intelligent energy harvesting communication systems
paper_content:
From being a scientific curiosity only a few years ago, energy harvesting (EH) is well on its way to becoming a game-changing technology in the field of autonomous wireless networked systems. The promise of long-term, uninterrupted and self-sustainable operation in a diverse array of applications has captured the interest of academia and industry alike. Yet the road to the ultimate network of perpetual communicating devices is plagued with potholes: ambient energy is intermittent and scarce, energy storage capacity is limited, and devices are constrained in size and complexity. In dealing with these challenges, this article will cover recent developments in the design of intelligent energy management policies for EH wireless devices and discuss pressing research questions in this rapidly growing field.
---
paper_title: Sum-rate optimal power policies for energy harvesting transmitters in an interference channel
paper_content:
This paper considers a two-user Gaussian interference channel with energy harvesting transmitters. Different than conventional battery powered wireless nodes, energy harvesting transmitters have to adapt transmission to availability of energy at a particular instant. In this setting, the optimal power allocation problem to maximize the sum throughput with a given deadline is formulated. The convergence of the proposed iterative coordinate descent method for the problem is proved and the short-term throughput maximizing offline power allocation policy is found. Examples for interference regions with known sum capacities are given with directional waterfilling interpretations. Next, stochastic data arrivals are addressed. Finally, online and/or distributed near-optimal policies are proposed. Performance of the proposed algorithms are demonstrated through simulations.
---
paper_title: Power Allocation Strategies in Energy Harvesting Wireless Cooperative Networks
paper_content:
In this paper, a wireless cooperative network is considered, in which multiple source-destination pairs communicate with each other via an energy harvesting relay. The focus of this paper is on the relay's strategies to distribute the harvested energy among the multiple users and their impact on the system performance. Specifically, a non-cooperative strategy that uses the energy harvested from the i-th source as the relay transmission power to the i-th destination is considered first, and asymptotic results show that its outage performance decays as log SNR/SNR. A faster decay rate, 1/SNR, can be achieved by two centralized strategies proposed next, of which a water filling based one can achieve optimal performance with respect to several criteria, at the price of high complexity. An auction based power allocation scheme is also proposed to achieve a better tradeoff between system performance and complexity. Simulation results are provided to confirm the accuracy of the developed analytical results.
---
paper_title: RF Energy Harvesting and Transport for Wireless Sensor Network Applications: Principles and Requirements
paper_content:
This paper presents an overview of principles and requirements for powering wireless sensors by radio-frequency (RF) energy harvesting or transport. The feasibility of harvesting is discussed, leading to the conclusion that RF energy transport is preferred for powering small sized sensors. These sensors are foreseen in future Smart Buildings. Transmitting in the ISM frequency bands, respecting the transmit power limits ensures that the International Commission on Non-Ionizing Radiation Protection (ICNIRP) exposure limits are not exceeded. With the transmit side limitations being explored, the propagation channel is next discussed, leading to the observation that a better than free-space attenuation may be achieved in indoors line-of-sight environments. Then, the components of the rectifying antenna (rectenna) are being discussed: rectifier, dc-dc boost converter, and antenna. The power efficiencies of all these rectenna subcomponents are being analyzed and finally some examples are shown. To make RF energy transport a feasible powering technology for low-power sensors, a number of precautions need to be taken. The propagation channel characteristics need to be taken into account by creating an appropriate transmit antenna radiation pattern. All subcomponents of the rectenna need to be impedance matched, and the power transfer efficiencies of the rectifier and the boost converter need to be optimized.
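A minimal RF energy-transport link-budget sketch is shown below, assuming free-space (Friis) propagation, a 4 W EIRP transmitter at 915 MHz (roughly the US ISM-band limit, used here purely as an assumption) and a constant rectifier efficiency; in practice the rectifier efficiency varies strongly with input power level, as the paper discusses.

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def friis_rx_power(eirp_w, freq_hz, distance_m, rx_gain_dbi=2.0):
    """Received RF power under free-space (Friis) propagation."""
    lam = C / freq_hz
    g_rx = 10 ** (rx_gain_dbi / 10)
    return eirp_w * g_rx * (lam / (4 * np.pi * distance_m)) ** 2

def harvested_dc_power(p_rf_w, rectifier_eff=0.5):
    """DC power after the rectifier (efficiency assumed constant for simplicity)."""
    return rectifier_eff * p_rf_w

for d in (1, 2, 5, 10):
    p_rf = friis_rx_power(eirp_w=4.0, freq_hz=915e6, distance_m=d)
    p_dc = harvested_dc_power(p_rf)
    print(f"d = {d:2d} m: RF input {p_rf*1e6:8.1f} uW, harvested DC {p_dc*1e6:8.1f} uW")
```

The rapid quadratic decay with distance is exactly why dedicated transmitters and careful rectenna matching are needed for practical indoor energy transport.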
---
paper_title: Optimal Energy Allocation for Wireless Communications With Energy Harvesting Constraints
paper_content:
We consider the use of energy harvesters, in place of conventional batteries with fixed energy storage, for point-to-point wireless communications. In addition to the challenge of transmitting in a channel with time selective fading, energy harvesters provide a perpetual but unreliable energy source. In this paper, we consider the problem of energy allocation over a finite horizon, taking into account channel conditions and energy sources that are time varying, so as to maximize the throughput. Two types of side information (SI) on the channel conditions and harvested energy are assumed to be available: causal SI (of the past and present slots) or full SI (of the past, present and future slots). We obtain structural results for the optimal energy allocation, via the use of dynamic programming and convex optimization techniques. In particular, if unlimited energy can be stored in the battery with harvested energy and the full SI is available, we prove the optimality of a water-filling energy allocation solution where the so-called water levels follow a staircase function.
---
paper_title: Energy Efficient Beamforming in MISO Heterogeneous Cellular Networks With Wireless Information and Power Transfer
paper_content:
The advent of simultaneous wireless information and power transfer (SWIPT) offers a promising approach to providing cost-effective and perpetual power supplies for energy-constrained mobile devices in heterogeneous cellular networks (HCNs). As energy efficiency (EE) has been envisioned as a key performance metric in 5G wireless networks, we consider a multiple-input single-output (MISO) femtocell overlaid co-channel with a macrocell to exploit the advantages of SWIPT while promoting the EE. The femto base station sends information to information decoding (ID) femto users (FUs) and transfers energy to energy harvesting (EH) FUs simultaneously, and also suppresses its interference to macro users. We maximize the information transmission efficiency (ITE) of ID FUs and the energy harvesting efficiency (EHE) of EH FUs, respectively, subject to the QoS requirements of all users, and investigate their relationship. We formulate these problems as fractional programs, which are nontrivial to solve due to the nonconvexity of the ITE and EHE. To tackle these problems, we devise two beamformers, namely zero-forcing (ZF) and mixed beamforming (MBF), and then propose an efficient algorithm to obtain the optimal power under both beamformers. Simulation results demonstrate that MBF provides better ITE and EHE than ZF, and that there exists a tradeoff between ITE and EHE in general.
---
paper_title: Simultaneous Information and Power Transfer for Broadband Wireless Systems
paper_content:
Far-field microwave power transfer (MPT) will free wireless sensors and other mobile devices from the constraints imposed by finite battery capacities. Integrating MPT with wireless communications to support simultaneous wireless information and power transfer (SWIPT) allows the same spectrum to be used for dual purposes without compromising the quality of service. A novel approach is presented in this paper for realizing SWIPT in a broadband system where orthogonal frequency division multiplexing and transmit beamforming are deployed to create a set of parallel sub-channels for SWIPT, which simplifies resource allocation. Based on a proposed reconfigurable mobile architecture, different system configurations are considered by combining single-user/multi-user systems, downlink/uplink information transfer, and variable/fixed coding rates. Optimizing the power control for these configurations results in a new class of multi-user power-control problems featuring the circuit-power constraints, specifying that the transferred power must be sufficiently large to support the operation of the receiver circuitry. Solving these problems gives a set of power-control algorithms that exploit channel diversity in frequency for simultaneously enhancing the throughput and the MPT efficiency. For the system configurations with variable coding rates, the algorithms are variants of water-filling that account for the circuit-power constraints. The optimal algorithms for those configurations with fixed coding rates are shown to sequentially allocate mobiles their required power for decoding in ascending order until the entire budgeted power is spent. The required power for a mobile is derived as simple functions of the minimum signal-to-noise ratio for correct decoding, the circuit power and sub-channel gains.
---
paper_title: Energy Efficiency of Downlink Transmission Strategies for Cloud Radio Access Networks
paper_content:
This paper studies the energy efficiency of the cloud radio access network (C-RAN), specifically focusing on two fundamental and different downlink transmission strategies, namely the data-sharing strategy and the compression strategy. In the data-sharing strategy, the backhaul links connecting the central processor (CP) and the base-stations (BSs) are used to carry user messages—each user’s messages are sent to multiple BSs; the BSs locally form the beamforming vectors then cooperatively transmit the messages to the user. In the compression strategy, the user messages are precoded centrally at the CP, which forwards a compressed version of the analog beamformed signals to the BSs for cooperative transmission. This paper compares the energy efficiencies of the two strategies by formulating an optimization problem of minimizing the total network power consumption subject to user target rate constraints, where the total network power includes the BS transmission power, BS activation power, and load-dependent backhaul power. To tackle the discrete and nonconvex nature of the optimization problems, we utilize the techniques of reweighted $\ell_1$ minimization and successive convex approximation to devise provably convergent algorithms. Our main finding is that both the optimized data-sharing and compression strategies in C-RAN achieve much higher energy efficiency as compared to the nonoptimized coordinated multipoint transmission, but their comparative effectiveness in energy saving depends on the user target rate. At low user target rate, data-sharing consumes less total power than compression; however, as the user target rate increases, the backhaul power consumption for data-sharing increases significantly leading to better energy efficiency of compression at the high user rate regime.
---
paper_title: Spatially Sparse Precoding in Millimeter Wave MIMO Systems
paper_content:
Millimeter wave (mmWave) signals experience orders-of-magnitude more pathloss than the microwave signals currently used in most wireless applications. MmWave systems must therefore leverage large antenna arrays, made possible by the decrease in wavelength, to combat pathloss with beamforming gain. Beamforming with multiple data streams, known as precoding, can be used to further improve mmWave spectral efficiency. Both beamforming and precoding are done digitally at baseband in traditional multi-antenna systems. The high cost and power consumption of mixed-signal devices in mmWave systems, however, make analog processing in the RF domain more attractive. This hardware limitation restricts the feasible set of precoders and combiners that can be applied by practical mmWave transceivers. In this paper, we consider transmit precoding and receiver combining in mmWave systems with large antenna arrays. We exploit the spatial structure of mmWave channels to formulate the precoding/combining problem as a sparse reconstruction problem. Using the principle of basis pursuit, we develop algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware. We present numerical results on the performance of the proposed algorithms and show that they allow mmWave systems to approach their unconstrained performance limits, even when transceiver hardware constraints are considered.
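The sparse-reconstruction viewpoint above is typically solved with greedy basis-pursuit-style routines. Below is a minimal generic orthogonal matching pursuit (OMP) sketch; it uses a random complex dictionary rather than an array-response codebook, so it illustrates the recovery principle rather than the paper's exact precoder design.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: find x with `sparsity` non-zeros s.t. A @ x ~ y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(sparsity):
        # Pick the dictionary column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(idx)
        # Least-squares fit on the selected support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, n, k = 32, 128, 4
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x_true = np.zeros(n, dtype=complex)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
y = A @ x_true

x_hat = omp(A, y, sparsity=k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the hybrid precoding setting, the dictionary columns would be candidate analog beamforming (array response) vectors and the recovered coefficients form the digital baseband precoder.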
---
paper_title: Broadcasting with an Energy Harvesting Rechargeable Transmitter
paper_content:
In this paper, we investigate the transmission completion time minimization problem in a two-user additive white Gaussian noise (AWGN) broadcast channel, where the transmitter is able to harvest energy from the nature, using a rechargeable battery. The harvested energy is modeled to arrive at the transmitter randomly during the course of transmissions. The transmitter has a fixed number of packets to be delivered to each receiver. Our goal is to minimize the time by which all of the packets for both users are delivered to their respective destinations. To this end, we optimize the transmit powers and transmission rates intended for both users. We first analyze the structural properties of the optimal transmission policy. We prove that the optimal total transmit power has the same structure as the optimal single-user transmit power. We also prove that there exists a cut-off power level for the stronger user. If the optimal total transmit power is lower than this cut-off level, all transmit power is allocated to the stronger user, and when the optimal total transmit power is larger than this cut-off level, all transmit power above this level is allocated to the weaker user. Based on these structural properties of the optimal policy, we propose an algorithm that yields the globally optimal off-line scheduling policy. Our algorithm is based on the idea of reducing the two-user broadcast channel problem into a single-user problem as much as possible.
---
paper_title: Transmission with Energy Harvesting Nodes in Fading Wireless Channels: Optimal Policies
paper_content:
Wireless systems comprised of rechargeable nodes have a significantly prolonged lifetime and are sustainable. A distinct characteristic of these systems is the fact that the nodes can harvest energy throughout the duration in which communication takes place. As such, transmission policies of the nodes need to adapt to these harvested energy arrivals. In this paper, we consider optimization of point-to-point data transmission with an energy harvesting transmitter which has a limited battery capacity, communicating in a wireless fading channel. We consider two objectives: maximizing the throughput by a deadline, and minimizing the transmission completion time of the communication session. We optimize these objectives by controlling the time sequence of transmit powers subject to energy storage capacity and causality constraints. We, first, study optimal offline policies. We introduce a directional water-filling algorithm which provides a simple and concise interpretation of the necessary optimality conditions. We show the optimality of an adaptive directional water-filling algorithm for the throughput maximization problem. We solve the transmission completion time minimization problem by utilizing its equivalence to its throughput maximization counterpart. Next, we consider online policies. We use stochastic dynamic programming to solve for the optimal online policy that maximizes the average number of bits delivered by a deadline under stochastic fading and energy arrival processes with causal channel state feedback. We also propose near-optimal policies with reduced complexity, and numerically study their performances along with the performances of the offline and online optimal policies under various different configurations.
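For a static channel and an unlimited battery, directional water-filling reduces to transmitting at the lowest feasible constant power over each stretch, changing power only at energy-arrival instants, since energy may only flow forward in time. The sketch below implements that special case; the fading gains and the finite-battery "right-permeable taps" of the paper are omitted, and the arrival values are illustrative.

```python
import numpy as np

def offline_power_policy(arrivals_J, epoch_lengths_s):
    """Offline throughput-optimal powers for an EH transmitter, static channel,
    unlimited battery. arrivals_J[i] is the energy that becomes available at the
    start of epoch i. Returns one constant power level per epoch."""
    E = np.asarray(arrivals_J, dtype=float)
    T = np.asarray(epoch_lengths_s, dtype=float)
    powers, i = [], 0
    while i < len(E):
        # Among all stretches starting at epoch i, pick the one with the smallest
        # average power: this keeps the schedule feasible (energy causality) and
        # makes the power profile non-decreasing over time.
        cum_E, cum_T = np.cumsum(E[i:]), np.cumsum(T[i:])
        j = int(np.argmin(cum_E / cum_T))
        powers.extend([cum_E[j] / cum_T[j]] * (j + 1))
        i += j + 1
    return np.array(powers)

E = [4.0, 1.0, 6.0, 2.0]     # energy arrivals [J] at the start of each epoch
T = [1.0, 1.0, 1.0, 1.0]     # epoch durations [s]
p = offline_power_policy(E, T)
rate = np.sum(np.log2(1.0 + p) * np.asarray(T))   # unit-gain AWGN throughput
print("per-epoch powers [W]:", p, " total throughput:", round(float(rate), 3), "bits/Hz")
```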
---
paper_title: 5G on the Horizon: Key Challenges for the Radio-Access Network
paper_content:
Toward the fifth generation (5G) of wireless/mobile broadband, numerous devices and networks will be interconnected and traffic demand will constantly rise. Heterogeneity is also expected to characterize the emerging wireless world, as a mixed deployment of cells of diverse sizes and of access points with different characteristics and technologies will be necessary. Wireless networks pose specific requirements that need to be fulfilled, and approaches for introducing intelligence are therefore being investigated by the research community. Such intelligence should provide energy- and cost-efficient solutions while a targeted application/service quality level is achieved. In particular, the introduction of intelligence in heterogeneous network deployments and the cloud radio-access network (RAN) is investigated. Finally, a discussion of emerging enabling technologies for applying intelligence focuses on the recent concepts of software-defined networking (SDN) and network function virtualization (NFV). This article provides an overview of delivering intelligence toward the 5G era of wireless/mobile broadband, taking into account the complex operating context and essential requirements such as QoE, energy efficiency, cost efficiency, and resource efficiency.
---
paper_title: A Learning Theoretic Approach to Energy Harvesting Communication System Optimization
paper_content:
A point-to-point wireless communication system in which the transmitter is equipped with an energy harvesting device and a rechargeable battery, is studied. Both the energy and the data arrivals at the transmitter are modeled as Markov processes. Delay-limited communication is considered assuming that the underlying channel is block fading with memory, and the instantaneous channel state information is available at both the transmitter and the receiver. The expected total transmitted data during the transmitter's activation time is maximized under three different sets of assumptions regarding the information available at the transmitter about the underlying stochastic processes. A learning theoretic approach is introduced, which does not assume any a priori information on the Markov processes governing the communication system. In addition, online and offline optimization problems are studied for the same setting. Full statistical knowledge and causal information on the realizations of the underlying stochastic processes are assumed in the online optimization problem, while the offline optimization problem assumes non-causal knowledge of the realizations in advance. Comparing the optimal solutions in all three frameworks, the performance loss due to the lack of the transmitter's information regarding the behaviors of the underlying Markov processes is quantified.
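As a toy illustration of learning a transmission policy without prior knowledge of the arrival statistics, the sketch below runs tabular Q-learning on a much simplified battery MDP (Bernoulli energy arrivals, a choice of how many energy units to spend per slot, logarithmic reward); the state space, reward and learning parameters are assumptions for illustration and are far simpler than the model studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

B_MAX  = 5            # battery capacity [energy units]
P_HARV = 0.4          # probability that one energy unit arrives per slot
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1
ACTIONS = (0, 1, 2)   # energy units spent on transmission in a slot

def step(battery, action):
    """Toy environment: spend energy, earn log2(1+spent) bits, then maybe harvest."""
    spent = min(action, battery)                  # cannot spend more than stored
    reward = np.log2(1.0 + spent)                 # bits delivered this slot
    battery = min(battery - spent + (rng.random() < P_HARV), B_MAX)
    return battery, reward

Q = np.zeros((B_MAX + 1, len(ACTIONS)))
state = 0
for _ in range(200_000):
    explore = rng.random() < EPS
    action = int(rng.integers(len(ACTIONS))) if explore else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Standard Q-learning temporal-difference update
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

print("greedy action (energy units spent) per battery level:", np.argmax(Q, axis=1))
```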
---
paper_title: Opportunistic Wireless Energy Harvesting in Cognitive Radio Networks
paper_content:
Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.
---
paper_title: Energy Efficiency Benefits of RAN-as-a-Service Concept for a Cloud-Based 5G Mobile Network Infrastructure
paper_content:
This paper focuses on energy efficiency aspects and related benefits of radio-access-network-as-a-service (RANaaS) implementation (using commodity hardware) as architectural evolution of LTE-advanced networks toward 5G infrastructure. RANaaS is a novel concept introduced recently, which enables the partial centralization of RAN functionalities depending on the actual needs as well as on network characteristics. In the view of future definition of 5G systems, this cloud-based design is an important solution in terms of efficient usage of network resources. The aim of this paper is to give a vision of the advantages of the RANaaS, to present its benefits in terms of energy efficiency and to propose a consistent system-level power model as a reference for assessing innovative functionalities toward 5G systems. The incremental benefits through the years are also discussed in perspective, by considering technological evolution of IT platforms and the increasing matching between their capabilities and the need for progressive virtualization of RAN functionalities. The description is complemented by an exemplary evaluation in terms of energy efficiency, analyzing the achievable gains associated with the RANaaS paradigm.
---
paper_title: Smoothed $L_p$-Minimization for Green Cloud-RAN With User Admission Control
paper_content:
The cloud radio access network (Cloud-RAN) has recently been proposed as one of the cost-effective and energy-efficient techniques for 5G wireless networks. By moving the signal processing functionality to a single baseband unit (BBU) pool, centralized signal processing and resource allocation are enabled in cloud-RAN, thereby providing the promise of improving the energy efficiency via effective network adaptation and interference management. In this paper, we propose a holistic sparse optimization framework to design green cloud-RAN by taking into consideration the power consumption of the fronthaul links, multicast services, as well as user admission control. Specifically, we first identify the sparsity structures in the solutions of both the network power minimization and user admission control problems, which call for adaptive remote radio head (RRH) selection and user admission. However, finding the optimal sparsity structures turns out to be NP-hard, with the coupled challenges of the $\ell_0$-norm-based objective functions and the nonconvex quadratic QoS constraints due to multicast beamforming. In contrast to previous works on convex but nonsmooth sparsity-inducing approaches, e.g., the group sparse beamforming algorithm based on the mixed $\ell_1/\ell_2$-norm relaxation, we adopt the nonconvex but smoothed $\ell_p$-minimization ($0 < p \le 1$) approach to promote sparsity in the multicast setting, thereby enabling efficient algorithm design based on the principle of the majorization–minimization (MM) algorithm and the semidefinite relaxation (SDR) technique. In particular, an iterative reweighted-$\ell_2$ algorithm is developed, which converges to a Karush–Kuhn–Tucker (KKT) point of the relaxed smoothed $\ell_p$-minimization problem from the SDR technique. We illustrate the effectiveness of the proposed algorithms with extensive simulations for network power minimization and user admission control in multicast cloud-RAN.
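The core computational idea, driving a nonconvex smoothed sparsity objective through a sequence of closed-form weighted-l2 problems, can be illustrated on a plain sparse-recovery instance. The sketch below is a generic iteratively reweighted least-squares (IRLS) routine without the beamforming QoS constraints, group structure or SDR step of the paper; the value of p, the smoothing schedule and the problem sizes are illustrative assumptions.

```python
import numpy as np

def irls_sparse_recovery(A, y, p=0.5, eps=1.0, n_iter=30):
    """Iteratively reweighted least squares for min ||x||_p^p subject to A x = y.

    Each iteration solves a weighted-l2 problem in closed form; the smoothing
    parameter eps is gradually reduced (a standard heuristic, values illustrative).
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]           # minimum-l2 starting point
    for _ in range(n_iter):
        w = (x ** 2 + eps) ** (1.0 - p / 2.0)          # weights ~ |x_i|^(2 - p)
        W = np.diag(w)
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, y)  # weighted-l2 solution
        eps = max(eps * 0.5, 1e-8)
    return x

rng = np.random.default_rng(2)
m, n, k = 40, 100, 6
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
y = A @ x_true

x_hat = irls_sparse_recovery(A, y, p=0.5)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the Cloud-RAN setting, the sparsity pattern would correspond to which RRHs (and their fronthaul links) can be switched off, rather than to individual vector entries as in this sketch.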
---
paper_title: Joint Optimization of Radio and Computational Resources for Multicell Mobile-Edge Computing
paper_content:
Migrating computationally intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider an MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources—the transmit precoding matrices of the MUs—and the computational resources—the CPU cycles/second assigned by the cloud to each MU—in order to minimize the overall users’ energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.
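A single-user sketch of the underlying energy-latency tradeoff is given below: it compares the mobile's energy for local execution against offloading under a hard deadline, using the classic dynamic-energy model E_local = kappa * C * f^2. All task, channel and CPU figures are illustrative assumptions, and the full multicell problem in the paper additionally optimises the uplink precoders and the cloud CPU split.

```python
# Illustrative task and platform parameters (assumptions, not values from the paper)
bits_in    = 1e6      # task input size to upload [bits]
cycles     = 3e9      # CPU cycles required by the task
f_local    = 1e9      # local CPU speed [cycles/s]
kappa      = 1e-27    # switched-capacitance energy coefficient
f_cloud    = 10e9     # CPU cycles/s granted by the cloud server
rate_up    = 5e6      # uplink rate [bit/s]
p_tx       = 0.5      # uplink transmit power [W]
deadline_s = 1.5      # latency constraint [s]

def local_execution():
    latency = cycles / f_local
    energy = kappa * cycles * f_local ** 2        # classic CMOS dynamic-energy model
    return latency, energy

def offloaded_execution():
    latency = bits_in / rate_up + cycles / f_cloud
    energy = p_tx * bits_in / rate_up             # the mobile only pays for the upload
    return latency, energy

for name, (lat, en) in (("local", local_execution()), ("offload", offloaded_execution())):
    ok = "meets" if lat <= deadline_s else "violates"
    print(f"{name:8s}: latency {lat:.2f} s ({ok} deadline), mobile energy {en:.3f} J")
```

With these assumed numbers, offloading both meets the deadline and cuts the mobile's energy by more than an order of magnitude, which is the regime in which joint radio/computational optimisation pays off.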
---
paper_title: Energy-Efficient Transmission for Wireless Energy Harvesting Nodes
paper_content:
Energy harvesting is increasingly gaining importance as a means to charge battery powered devices such as sensor nodes. Efficient transmission strategies must be developed for Wireless Energy Harvesting Nodes (WEHNs) that take into account both the availability of energy and data in the node. We consider a scenario where data and energy packets arrive to the node where the time instants and amounts of the packets are known (offline approach). In this paper, the best data transmission strategy is found for a finite battery capacity WEHN that has to fulfill some Quality of Service (QoS) constraints, as well as the energy and data causality constraints. As a result of our analysis, we can state that losing energy due to overflows of the battery is inefficient unless there is no more data to transmit and that the problem may not have a feasible solution. Finally, an algorithm that computes the data transmission curve minimizing the total transmission time that satisfies the aforementioned constraints has been developed.
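The sketch below checks, in a slotted offline setting with invented numbers, the two constraints any schedule for such a node must respect: energy causality and a finite battery (overflows waste harvested energy). It is only a feasibility check, not the paper's optimal transmission-curve algorithm.
```python
# Feasibility check of a slotted transmission schedule for an energy-harvesting node:
# energy causality must hold, and overflows of the finite battery waste harvested energy.
# The arrival and spending profiles below are invented.
def check_schedule(harvest, spend, capacity):
    battery, wasted = 0.0, 0.0
    for k, (e_in, e_out) in enumerate(zip(harvest, spend)):
        battery += e_in
        if battery > capacity:                 # overflow above the battery capacity is lost
            wasted += battery - capacity
            battery = capacity
        if e_out > battery + 1e-12:            # energy causality violated at slot k
            return False, k, wasted
        battery -= e_out
    return True, None, wasted

harvest = [3.0, 0.0, 4.0, 1.0]     # energy packet arrivals per slot (J)
spend   = [2.0, 1.0, 3.0, 2.0]     # energy the schedule plans to use per slot (J)
print(check_schedule(harvest, spend, capacity=5.0))   # -> (True, None, 0.0)
```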
---
paper_title: Cooperative Non-Orthogonal Multiple Access with Simultaneous Wireless Information and Power Transfer
paper_content:
In this paper, the application of simultaneous wireless information and power transfer (SWIPT) to nonorthogonal multiple access (NOMA) networks in which users are spatially randomly located is investigated. A new co-operative SWIPT NOMA protocol is proposed, in which near NOMA users that are close to the source act as energy harvesting relays to help far NOMA users. Since the locations of users have a significant impact on the performance, three user selection schemes based on the user distances from the base station are proposed. To characterize the performance of the proposed selection schemes, closed-form expressions for the outage probability and system throughput are derived. These analytical results demonstrate that the use of SWIPT will not jeopardize the diversity gain compared to the conventional NOMA. The proposed results confirm that the opportunistic use of node locations for user selection can achieve low outage probability and deliver superior throughput in comparison to the random selection scheme.
---
paper_title: Energy Harvesting Wireless Communications: A Review of Recent Advances
paper_content:
This paper summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access, and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes.
---
paper_title: Energy Cooperation in Energy Harvesting Communications
paper_content:
In energy harvesting communications, users transmit messages using energy harvested from nature during the course of communication. With an optimum transmit policy, the performance of the system depends only on the energy arrival profiles. In this paper, we introduce the concept of energy cooperation, where a user wirelessly transmits a portion of its energy to another energy harvesting user. This enables shaping and optimization of the energy arrivals at the energy-receiving node, and improves the overall system performance, despite the loss incurred in energy transfer. We consider several basic multi-user network structures with energy harvesting and wireless energy transfer capabilities: relay channel, two-way channel and multiple access channel. We determine energy management policies that maximize the system throughput within a given duration using a Lagrangian formulation and the resulting KKT optimality conditions. We develop a two-dimensional directional water-filling algorithm which optimally controls the flow of harvested energy in two dimensions: in time (from past to future) and among users (from energy-transferring to energy-receiving) and show that a generalized version of this algorithm achieves the boundary of the capacity region of the two-way channel.
---
paper_title: Performance of the Wideband Massive Uplink MIMO with One-Bit ADCs
paper_content:
Analog-to-digital converters (ADCs) consume a significant part of the total power in a massive MIMO base station. One-bit ADCs are one way to reduce power consumption. This paper presents an analysis of the spectral efficiency of single-carrier and OFDM transmission in massive MIMO systems that use one-bit ADCs. A closed-form achievable rate is derived for a wideband system with a large number of channel taps, assuming linear channel estimation and symbol detection. Quantization results in two types of error in the symbol detection. The circularly symmetric error becomes Gaussian in massive MIMO and vanishes as the number of antennas grows. The amplitude distortion, which severely degrades the performance of OFDM, is caused by variations in received interference energy between symbol durations. As the number of channel taps grows, the amplitude distortion vanishes and OFDM has the same performance as single-carrier transmission. A main conclusion of this paper is that wideband massive MIMO systems work well with one-bit ADCs.
---
paper_title: Capacity Analysis of One-Bit Quantized MIMO Systems with Transmitter Channel State Information
paper_content:
With bandwidths on the order of a gigahertz in emerging wireless systems, high-resolution analog-to-digital converters (ADCs) become a power consumption bottleneck. One solution is to employ low-resolution one-bit ADCs. In this paper, we analyze the flat fading multiple-input multiple-output (MIMO) channel with one-bit ADCs. Channel state information is assumed to be known at both the transmitter and receiver. For the multiple-input single-output channel, we derive the exact channel capacity. For the single-input multiple-output and MIMO channel, the capacity at infinite signal-to-noise ratio (SNR) is found. We also derive an upper bound at finite SNR, which is tight when the channel has full row rank. In addition, we propose an efficient method to design the input symbols to approach the capacity-achieving solution. We incorporate millimeter wave channel characteristics and find the bounds on the infinite SNR capacity. The results show how the number of paths and number of receive antennas impact the capacity.
---
paper_title: Large-scale antenna systems with hybrid analog and digital beamforming for millimeter wave 5G
paper_content:
With the severe spectrum shortage in conventional cellular bands, large-scale antenna systems in the mmWave bands can potentially help to meet the anticipated demands of mobile traffic in the 5G era. There are many challenging issues, however, regarding the implementation of digital beamforming in large-scale antenna systems: complexity, energy consumption, and cost. In a practical large-scale antenna deployment, hybrid analog and digital beamforming structures can be important alternative choices. In this article, optimal designs of hybrid beamforming structures are investigated, with the focus on an N (the number of transceivers) by M (the number of active antennas per transceiver) hybrid beamforming structure. Optimal analog and digital beamforming designs in a multi-user beamforming scenario are discussed. Also, the energy efficiency and spectrum efficiency of the N × M beamforming structure are analyzed, including their relationship at the green point (i.e., the point with the highest energy efficiency) on the energy efficiency-spectrum efficiency curve, the impact of N on the energy efficiency performance at a given spectrum efficiency value, and the impact of N on the green point energy efficiency. These results can be conveniently utilized to guide practical LSAS design for optimal energy/spectrum efficiency trade-off. Finally, a reference signal design for the hybrid beamforming structure is presented, which achieves better channel estimation performance than the method solely based on analog beamforming. It is expected that large-scale antenna systems with hybrid beamforming structures in the mmWave band can play an important role in 5G.
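The following back-of-the-envelope sketch illustrates the energy efficiency vs. spectrum efficiency trade-off and the "green point" for an array with N RF chains, using a generic power model (radiated power scaled by amplifier efficiency plus per-RF-chain and fixed overheads). Every constant here is an assumption, not the paper's model.
```python
# Back-of-the-envelope EE-SE curve for a hybrid array with N RF chains, using a generic
# power model P_total = P_tx/eta + N*P_rf + P_fix; every constant here is an assumption.
import numpy as np

B, N0 = 20e6, 1e-20                 # bandwidth (Hz) and noise power spectral density (W/Hz)
G = 1e-9                            # aggregate channel/beamforming power gain (assumed)
N, P_RF, P_FIX, ETA = 8, 1.0, 5.0, 0.4

p_tx = np.logspace(-3, 1.5, 400)                   # radiated power sweep (W)
se = np.log2(1.0 + p_tx * G / (N0 * B))            # spectral efficiency (bit/s/Hz)
ee = B * se / (p_tx / ETA + N * P_RF + P_FIX)      # energy efficiency (bit/Joule)
i = int(np.argmax(ee))                             # the "green point" of the EE-SE curve
print(f"green point: P_tx={p_tx[i]:.2f} W, SE={se[i]:.2f} bit/s/Hz, EE={ee[i]/1e6:.2f} Mbit/J")
```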
---
paper_title: Energy-efficient hybrid precoding based on successive interference cancelation for millimeter-wave massive MIMO systems
paper_content:
Millimeter-wave massive MIMO usually utilizes hybrid precoding to overcome the severe signal attenuation, where only a small number of RF chains is required. However, most hybrid precoding schemes consider the complicated fully connected architecture. In this paper, we focus on the more energy-efficient sub-connected architecture, and propose a low-complexity hybrid precoding scheme based on successive interference cancelation (SIC). The basic idea of SIC-based hybrid precoding is to decompose the total capacity optimization problem into a series of sub-problems, each of which only considers one sub-antenna array. Then, we can optimize the capacity of each sub-antenna array one by one until the last sub-antenna array is considered. Simulation results verify that SIC-based hybrid precoding can achieve near-optimal performance.
---
paper_title: Throughput Maximization for the Gaussian Relay Channel with Energy Harvesting Constraints
paper_content:
This paper considers the use of energy harvesters, instead of conventional time-invariant energy sources, in wireless cooperative communication. For the purpose of exposition, we study the classic three-node Gaussian relay channel with decode-and-forward (DF) relaying, in which the source and relay nodes transmit with power drawn from energy-harvesting (EH) sources. Assuming a deterministic EH model under which the energy arrival time and the harvested amount are known prior to transmission, the throughput maximization problem over a finite horizon of N transmission blocks is investigated. In particular, two types of data traffic with different delay constraints are considered: delay-constrained (DC) traffic (for which only one-block decoding delay is allowed at the destination) and no-delay-constrained (NDC) traffic (for which arbitrary decoding delay up to N blocks is allowed). For the DC case, we show that the joint source and relay power allocation over time is necessary to achieve the maximum throughput, and propose an efficient algorithm to compute the optimal power profiles. For the NDC case, although the throughput maximization problem is non-convex, we prove the optimality of a separation principle for the source and relay power allocation problems, based upon which a two-stage power allocation algorithm is developed to obtain the optimal source and relay power profiles separately. Furthermore, we compare the DC and NDC cases, and obtain the sufficient and necessary conditions under which the NDC case performs strictly better than the DC case. It is shown that NDC transmission is able to exploit a new form of diversity arising from the independent source and relay energy availability over time in cooperative communication, termed "energy diversity", even with time-invariant channels.
---
paper_title: Renewable energy in cellular networks: A survey
paper_content:
In recent years a great deal of research has been dedicated to energy efficiency in cellular networks. An emerging field of interest in that direction is the use of renewable energy sources such as wind and solar, whose role is bound to grow. In this paper we survey the existing works and present the fundamental principles and motivations necessary for a thorough understanding of the usage of renewable energy in cellular networks. Furthermore, we introduce a reference model for the renewable energy base station (REBS) and provide an analysis of its components. The synthesis of dedicated REBS algorithms, system architectures, deployment strategies and evaluation metrics provides a comprehensive overview and helps identify the prospects of this promising research field.
---
paper_title: A general framework for the optimization of energy harvesting communication systems with battery imperfections
paper_content:
Energy harvesting has emerged as a powerful technology for complementing current battery-powered communication systems in order to extend their lifetime. In this paper a general framework is introduced for the optimization of communication systems in which the transmitter is able to harvest energy from its environment. Assuming that the energy arrival process is known non-causally at the transmitter, the structure of the optimal transmission scheme, which maximizes the amount of transmitted data by a given deadline, is identified. Our framework includes models with continuous energy arrival as well as battery constraints. A battery that suffers from energy leakage is studied further, and the optimal transmission scheme is characterized for a constant leakage rate.
---
paper_title: Energy Efficiency and Interference Neutralization in Two-Hop MIMO Interference Channels
paper_content:
The issue of energy-aware resource allocation in an amplify-and-forward (AF) relay-assisted multiple-antenna interference channel (IC) is considered. A novel interference neutralization (IN) scheme is proposed for relay design and, based on the IN relay matrix design, two algorithms are developed to jointly allocate the users' transmit powers, beamforming (BF) and receive filters. The first algorithm considers a competitive scenario and employs a noncooperative game-theoretic approach to maximize the individual energy efficiency (EE) of each communication link, defined as the ratio of the achievable rate over the consumed power. The resulting algorithm converges to a unique fixed point, has limited complexity, and can be implemented in a distributed fashion. The second algorithm employs fractional programming tools and sequential convex optimization to centrally allocate the users' transmit powers, BF, and receive filters for global energy efficiency (GEE) maximization. The resulting algorithm is guaranteed to converge and has limited computational complexity. Numerical results show that the competitive IN design achieves virtually the same performance as the cooperative design if IN is feasible, while the gap is small if perfect IN is not achievable.
---
paper_title: Energy Efficiency in MIMO Underlay and Overlay Device-to-Device Communications and Cognitive Radio Systems
paper_content:
This paper addresses the problem of resource allocation for systems in which a primary and a secondary link share the available spectrum by an underlay or overlay approach. After observing that such a scenario models both cognitive radio and D2D communications, we formulate the problem as the maximization of the secondary energy efficiency subject to a minimum rate requirement for the primary user. This leads to challenging nonconvex, fractional problems. In the underlay scenario, we obtain the global solution by means of a suitable reformulation. In the overlay scenario, two algorithms are proposed. The first one yields a resource allocation fulfilling the first-order optimality conditions of the resource allocation problem, by solving a sequence of easier fractional programs. The second one enjoys a weaker optimality claim, but an even lower computational complexity. Numerical results demonstrate the merits of the proposed algorithms both in terms of energy-efficient performance and complexity, also showing that the two proposed algorithms for the overlay scenario perform very similarly, despite the different complexity.
---
paper_title: Energy-Efficient Power Control: A Look at 5G Wireless Technologies
paper_content:
This paper develops power control algorithms for energy efficiency (EE) maximization (measured in bit/Joule) in wireless networks. Unlike previous related works, minimum-rate constraints are imposed and the signal-to-interference-plus-noise ratio takes a more general expression, which allows one to encompass some of the most promising 5G candidate technologies. Both network-centric and user-centric EE maximizations are considered. In the network-centric scenario, the maximization of the global EE and the minimum EE of the network is performed. Unlike previous contributions, we develop centralized algorithms that are guaranteed to converge, with affordable computational complexity, to a Karush–Kuhn–Tucker point of the considered non-convex optimization problems. Moreover, closed-form feasibility conditions are derived. In the user-centric scenario, game theory is used to study the equilibria of the network and to derive convergent power control algorithms, which can be implemented in a fully decentralized fashion. Both scenarios above are studied under the assumption that single or multiple resource blocks are employed for data transmission. Numerical results assess the performance of the proposed solutions, analyzing the impact of minimum-rate constraints, and comparing the network-centric and user-centric approaches.
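Bit/Joule objectives of this ratio form are typically handled with fractional-programming tools. The sketch below shows the standard Dinkelbach iteration on a single link with a grid-searched inner problem; the channel and power-model constants are invented, and this simplification stands in for, rather than reproduces, the paper's multi-user KKT-based algorithms.
```python
# Sketch of Dinkelbach's method for maximizing EE(p) = R(p)/(p/eta + Pc) over 0 <= p <= Pmax
# on a single link, with a grid search as the inner solver; all constants are invented.
import numpy as np

B, g, sigma2 = 1e6, 1e-6, 1e-10     # bandwidth (Hz), channel gain, noise power (assumed)
ETA, PC, PMAX = 0.35, 1.0, 2.0      # amplifier efficiency, circuit power (W), power budget (W)

def rate(p):
    return B * np.log2(1.0 + p * g / sigma2)

p_grid = np.linspace(0.0, PMAX, 2001)
lam = 0.0                            # current EE estimate (bit/Joule)
for _ in range(30):
    f = rate(p_grid) - lam * (p_grid / ETA + PC)     # Dinkelbach subproblem objective
    p_star = p_grid[np.argmax(f)]
    lam_new = rate(p_star) / (p_star / ETA + PC)
    if abs(lam_new - lam) < 1e-6 * max(lam_new, 1.0):
        break
    lam = lam_new
print(f"EE-optimal power ~ {p_star:.3f} W, EE ~ {lam_new / 1e6:.3f} Mbit/J")
```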
---
paper_title: Energy-Efficient Scheduling and Power Allocation in Downlink OFDMA Networks With Base Station Coordination
paper_content:
This paper addresses the problem of energy-efficient resource allocation in the downlink of a cellular OFDMA system. Three definitions of the energy efficiency are considered for system design, accounting for both the radiated and the circuit power. User scheduling and power allocation are optimized across a cluster of coordinated base stations with a constraint on the maximum transmit power (either per subcarrier or per base station). The asymptotic noise-limited regime is discussed as a special case. Results show that the maximization of the energy efficiency is approximately equivalent to the maximization of the spectral efficiency for small values of the maximum transmit power, while there is a wide range of values of the maximum transmit power for which a moderate reduction of the data rate provides a large saving in terms of dissipated energy. Also, the performance gap among the considered resource allocation strategies reduces as the out-of-cluster interference increases.
---
paper_title: Precoding for Full Duplex Multiuser MIMO Systems: Spectral and Energy Efficiency Maximization
paper_content:
We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.
---
| Title: A Survey of Energy-Efficient Techniques for 5G Networks and Challenges Ahead
Section 1: INTRODUCTION
Description 1: This section introduces the primary concerns regarding energy consumption in wireless communication systems, emphasizing the need for energy efficiency in designing 5G networks.
Section 2: RESOURCE ALLOCATION
Description 2: This section discusses different techniques and mathematical tools for energy-efficient resource allocation in 5G networks, comparing them with traditional throughput-optimized communication methods.
Section 3: NETWORK PLANNING AND DEPLOYMENT
Description 3: This section explores various strategies for network densification and infrastructure planning, including dense heterogeneous networks and massive MIMO, to enhance energy efficiency.
Section 4: OFFLOADING TECHNIQUES
Description 4: This section outlines various offloading strategies, such as D2D communications, VLC, local caching, and mmWave cellular, aimed at improving network capacity and energy efficiency.
Section 5: ENERGY HARVESTING AND TRANSFER
Description 5: This section reviews methods of harvesting energy from the environment and radio-frequency signals, addressing the challenges posed by the randomness of energy availability.
Section 6: HARDWARE SOLUTIONS
Description 6: This section covers energy-efficient hardware strategies, including the green design of RF chains, simplified transmitter/receiver architectures, and cloud-based implementations of RAN.
Section 7: FUTURE RESEARCH CHALLENGES
Description 7: This section identifies the gaps in current research and suggests holistic approaches, interference and randomness management, emerging techniques, and new energy models for future studies.
Section 8: CONCLUSIONS
Description 8: This section concludes the survey by summarizing the importance of energy efficiency in 5G networks and highlighting the persistent challenges and prospects for further research and development. |
A review of state-of-the-art numerical methods for simulating flow through mechanical heart valves | 7 | ---
paper_title: Turbulent stresses downstream of three mechanical aortic valve prostheses in human beings.
paper_content:
High levels of turbulent stresses resulting from disturbed blood flow may cause damage to red blood cells and platelets. The purpose of this study was to evaluate the spatial distribution and temporal development of turbulent stresses downstream of three mechanical aortic valve prostheses in human subjects: the St. Jude Medical, the CarboMedics, and the Starr-Edwards silicone rubber ball. Blood velocity measurements were taken at 17 measuring points in the cross-sectional area of the ascending aorta 5 to 6 cm downstream of the aortic anulus with the use of a perivascular pulsed Doppler ultrasound system. Turbulence analysis was done for each of the 17 measuring points by calculating the radial Reynolds normal stresses within 50 msec overlapping time windows during systole. By coordinating the calculated Reynolds normal stress values for each time window and for all measuring points, computerized two-dimensional color-coded mapping of the turbulent stress distribution during systole was done. For the St. Jude Medical valves the highest Reynolds normal stresses (27 to 63 N/m^2) were found along the central slit near the vessel walls. The temporal development and spatial distribution of Reynolds normal stresses for the CarboMedics valves were quite similar to those of the St. Jude Medical valves with maximum Reynolds normal stress values ranging from 19 to 72 N/m^2. The typical Reynolds normal stress distribution for the Starr-Edwards silicone rubber ball valves was asymmetric, revealing the highest Reynolds normal stresses (11 to 56 N/m^2) at various locations in the annular region between the ball and the vessel wall. The spatial distribution and temporal development of turbulent stresses downstream of the three investigated mechanical aortic valve prostheses correlated well with the superstructure of the valves. The maximum Reynolds normal stresses for the three valve types were of the same order of magnitude, with exposure times sufficient to cause sublethal damage to red blood cells and platelets.
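The windowed Reynolds normal stress (RNS) computation described above can be illustrated in a few lines: the velocity-fluctuation variance within each 50 ms window is scaled by the blood density. The synthetic velocity trace, sampling rate and window overlap below are invented.
```python
# Sketch: Reynolds normal stress (rho * <u'^2>) from a velocity trace using 50 ms windows
# with 50% overlap. The synthetic velocity signal and sampling rate are invented.
import numpy as np

RHO = 1060.0                        # blood density (kg/m^3)
FS = 10_000                         # sampling rate (Hz), assumed
t = np.arange(0, 0.4, 1.0 / FS)     # one systolic interval (s)
rng = np.random.default_rng(1)
u = 1.2 * np.sin(np.pi * t / 0.4) + 0.15 * rng.standard_normal(t.size)  # mean flow + fluctuations

win, hop = int(0.050 * FS), int(0.025 * FS)
for start in range(0, u.size - win + 1, hop):
    seg = u[start:start + win]
    rns = RHO * np.var(seg)         # rho times the variance about the window mean
    print(f"window starting at t = {t[start]:.3f} s : RNS = {rns:5.1f} N/m^2")
```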
---
paper_title: In Vitro Hydrodynamic Characteristics Among Three Bileaflet Valves in the Mitral Position
paper_content:
The non–fully open phenomenon of the advancing standard medical bileaflet heart valves (the ATS valve) is frequently observed in clinical cases, even though there is no problem with their hemodynamic function. The movement of the leaflets was easily affected by the transvalvular flow because of the unique open pivot design of the ATS valve. In this paper, a comparative in vitro hydrodynamic test was conducted among 3 different types of bileaflet valves, and the effect of different shapes of downstream conduits, which induce different transvalvular flow, on hydrodynamic performance was studied. Three bileaflet valves, the ATS valve, CarboMedics valve (CM), and St. Jude Medical valve (SJM), with an annulus diameter of 29 mm for the mitral position were chosen throughout our experiments. First, pressure drops across the valves under steady flow were measured. Then, the valves were tested at the mitral position with our pneumatically driven pulsatile pump. In this pulsatile flow study, 2 different conduits (straight shape and abrupt enlargement shape) were in turn incorporated at the downstream portion of the mitral valve. A high-speed video camera was employed to observe leaflet movements. In a steady-flow test, the ATS and the SJM produced the same pressure drop, but the CM recorded a higher value. In the pulsatile study, it was observed that the ATS leaflets did not open fully in the mitral position when the downstream conduit with an abrupt enlargement shape was incorporated. However, the CM and the SJM always indicated a fully open movement regardless of the shape of downstream conduits. When the straight downstream conduit was incorporated, the ATS produced a similar pressure drop to that of the SJM, which coincided with the steady test results. When the enlargement conduit was incorporated, however, the ATS presented the lowest pressure drop despite the non–fully open movement. The conduit shape downstream of the valve had a significant influence on the closing volume. These findings indicate that the conduit shape at the valve outlet can affect the hydrodynamic characteristics of bileaflet valves.
---
paper_title: Turbulent shear stress measurements in the vicinity of aortic heart valve prostheses
paper_content:
A two-dimensional laser Doppler anemometer system has been used to measure the turbulent shear fields in the immediate downstream vicinity of a variety of mechanical and bioprosthetic aortic heart valves. The measurements revealed that all the mechanical valves studied created regions of elevated levels of turbulent shear stress during the major portion of systole. The tissue bioprostheses also created elevated levels of turbulence, but they were confined to narrow regions in the bulk of the flow field. The newer generation of bioprostheses creates turbulent shear stresses which are considerably lower than those created by the older generation tissue valve designs. All the aortic valves studied (mechanical and tissue) create turbulent shear stress levels which are capable of causing sub-lethal and/or lethal damage to blood elements.
---
paper_title: Vorticity dynamics of a bileaflet mechanical heart valve in an axisymmetric aorta
paper_content:
We present comprehensive particle image velocimetry measurements and direct numerical simulation (DNS) of physiological, pulsatile flow through a clinical quality bileaflet mechanical heart valve mounted in an idealized axisymmetric aorta geometry with a sudden expansion modeling the aortic sinus region. Instantaneous and ensemble-averaged velocity measurements as well as the associated statistics of leaflet kinematics are reported and analyzed in tandem to elucidate the structure of the velocity and vorticity fields of the ensuing flow-structure interaction. The measurements reveal that during the first half of the acceleration phase, the flow is laminar and repeatable from cycle to cycle. The valve housing shear layer rolls up into the sinus and begins to extract vorticity of opposite sign from the sinus wall. A start-up vortical structure is shed from the leaflets and is advected downstream as the leaflet shear layers become wavy and oscillatory. In the second half of flow acceleration the leaflet shea...
---
paper_title: Flow-Induced Platelet Activation in Bileaflet and Monoleaflet Mechanical Heart Valves
paper_content:
A study was conducted to measure in vitro the procoagulant properties of platelets induced by flow through Carbomedics bileaflet and Bjork-Shiley monoleaflet mechanical heart valves (MHVs). Valves were mounted in a left ventricular assist device, and platelets were circulated through them under pulsatile flow. Platelet activation states (PAS) were measured during circulation using a modified prothrombinase method. Computational fluid dynamics (CFD) simulations of turbulent, transient, and non-Newtonian blood flow patterns generated by the two valve designs were done using the Wilcox k-ω turbulence model, and platelet shear-stress histories (the integral of shear-stress exposure with respect to time) through the two MHVs were calculated. PAS measurements indicated that the bileaflet MHV activated platelets at a rate more than twice that observed with the monoleaflet MHV. Turbulent flow patterns were evident in CFD simulations for both valves, and corroborated the PAS observations, showing that, for particles close to the leaflet(s), shear-stress exposure in the bileaflet MHV can be more than four times that in the monoleaflet valve.
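The "shear-stress history" quantity mentioned above is simply the time integral of the shear stress a platelet sees along its trajectory. A minimal sketch with invented stress samples and time step:
```python
# Sketch: cumulative shear-stress exposure (integral of shear stress over time) along one
# platelet trajectory; the sampled stress values and the time step are invented.
import numpy as np

dt = 1e-3                                           # time between samples along the path (s)
tau = np.array([2.0, 5.0, 12.0, 30.0, 8.0, 3.0])    # instantaneous shear stress (N/m^2)
exposure = np.cumsum(tau) * dt                      # running shear-stress history (Pa*s)
print(f"total shear-stress history = {exposure[-1]:.3f} Pa*s")
```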
---
paper_title: Experimental Investigation of the Steady Flow Downstream of the St. Jude Bileaflet Heart Valve: A Comparison Between Laser Doppler Velocimetry and Particle Image Velocimetry Techniques
paper_content:
This study investigates turbulent flow, based on high Reynolds number, downstream of a prosthetic heart valve using both laser Doppler velocimetry (LDV) and particle image velocimetry (PIV). Until now, LDV has been the more commonly used tool in investigating the flow characteristics associated with mechanical heart valves. The LDV technique allows point-by-point velocity measurements and provides enough statistical information to quantify turbulent structure. The main drawback of this technique is the time-consuming nature of the data acquisition process in order to assess an entire flow field area. Another technique now used in fluid dynamics studies is the PIV measurement technique. This technique allows spatial and temporal measurement of the entire flow field. Using this technique, the instantaneous and average velocity flow fields can be investigated for different positions. This paper presents a comparison of PIV two-dimensional measurements to LDV measurements, performed under steady flow conditions, for a measurement plane parallel to the leaflets of a St. Jude Medical (SJM) bileaflet valve. Comparisons of mean velocity obtained by the two techniques are in good agreement except where there is instability in the flow. For second moment quantities the comparisons were less agreeable. This suggests that the PIV technique has sufficient temporal and spatial resolution to estimate mean velocity depending on the degree of instability in the flow and also provides sufficient images needed to duplicate mean flow but not for higher moment turbulence quantities such as maximum turbulent shear stress.
---
paper_title: Regurgitant flow field characteristics of the St. Jude bileaflet mechanical heart valve under physiologic pulsatile flow using particle image velocimetry.
paper_content:
The regurgitant flow fields of clinically used mechanical heart valves have been traditionally studied in vitro using flow visualization, ultrasound techniques, and laser Doppler velocimetry under steady and pulsatile flow. Detailed investigation of the forward and regurgitant flow fields of these valves can elucidate a valve's propensity for blood element damage, thrombus formation, or cavitation. Advances in particle image velocimetry (PIV) have allowed its use in the study of the flow fields of prosthetic valves. Unlike other flow field diagnostic systems, recent work using PIV has been able to relate particular regurgitant flow field characteristics of the Bjork-Shiley Monostrut valve to a propensity for cavitation. In this study, the regurgitant flow field of the St. Jude Medical bileaflet mechanical heart valve was assessed using PIV under physiologic pulsatile flow conditions. Data collected at selected time points prior to and after valve closure demonstrated the typical regurgitant jet flow patterns associated with the St. Jude valve, and indicated the formation of a strong regurgitant jet, in the B-datum plane, along with twin vortices near the leaflets. Estimated ensemble-average viscous shear rates suggested little potential for hemolysis when the hinge jets collided. However, the vortex motion near the occluder tips potentially provides a low-pressure environment for cavitation.
---
paper_title: Velocity measurements and flow patterns within the hinge region of a Medtronic Parallel bileaflet mechanical valve with clear housing.
paper_content:
BACKGROUND AND AIMS OF THE STUDY: During recent clinical trials the Medtronic Parallel bileaflet mechanical heart valve was found to have an unacceptable number of valves with thrombus formation when implanted in the mitral position. Thrombi were observed in the hinge region and also in the upstream portion of the valve housing in the vicinity of the hinge. It was hypothesized that the flow conditions inside the hinge may have contributed to the thrombus formation. METHODS: In order to investigate the flow structures within the hinge, laser Doppler anemometry (LDA) measurements were conducted in both steady and pulsatile flow at approximately 70 predetermined sites within the hinge region of a 27 mm Medtronic Parallel mitral valve with transparent housing. The pulsatile flow velocity measurements were animated in time using a graphical software package to visualize the hinge flow field throughout the cardiac cycle. RESULTS: The LDA measurements revealed that mean forward flow velocities through the hinge region were on the order of 0.10-0.20 m/s. In the inflow channel, a large vortical structure was present during diastole. Upon valve closure, peak reverse velocity reached 3 m/s close to the housing wall in the inflow channel. This area also experienced high turbulent shear stresses (> 6000 dynes/cm2) during the leakage flow phase. A disturbed, vortical flow was again present in the inflow channel after valve closure, while slightly above the leaflet peg and relief the flow was essentially stagnant. The high turbulent stresses near the top of the inflow channel, combined with a persistent vortex, implicate the inflow channel of the hinge as a likely region of thrombus formation. CONCLUSIONS: This experimental investigation revealed zones of flow stagnation in the inflow region of the hinge throughout the cardiac cycle and elevated turbulent shear stress levels in the inflow region during the leakage flow phase. These fluid mechanic phenomena are most likely a direct result of the complex geometry of the hinge of this valve. Although the LDA measurements were conducted at only a limited number of sites within the hinge, these results suggest that the hinge design can significantly affect the washout capacity and thrombogenic potential of the Medtronic Parallel bileaflet mechanical heart valve. The use of LDA within the confines of the hinge region of a mechanical heart valve is a new application, made possible by recent advances in manufacturing technologies and a proprietary process developed by Medtronic that allowed the production of a transparent valve housing. Together, these modalities represent a new method by which future valve designs can be assessed before clinical trials are initiated.
---
paper_title: An Analysis of Turbulent Shear Stresses in Leakage Flow Through a Bileaflet Mechanical Prostheses
paper_content:
In this work, estimates of turbulence were made from pulsatile flow laser Doppler velocimetry measurements using traditional phase averaging and averaging after the removal of cyclic variation. These estimates were compared with estimates obtained from steady leakage flow LDV measurements and an analytical method. The results of these studies indicate that leakage jets which are free and planar in shape may be more unstable than other leakage jets, and that cyclic variation does not cause a gross overestimation of the Reynolds stresses at large distances from the leakage jet orifice.
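The traditional phase-averaging estimate referred to above removes the ensemble mean at each phase of the cycle before forming Reynolds stresses. A short sketch on synthetic cycles (all numbers invented):
```python
# Sketch of phase averaging over repeated cycles: the ensemble mean at each phase is removed
# before forming the Reynolds shear stress -rho*<u'v'>. The cycles below are synthetic.
import numpy as np

RHO = 1060.0
n_cycles, n_phase = 50, 200
rng = np.random.default_rng(2)
phase = np.linspace(0.0, 2.0 * np.pi, n_phase)
u = np.sin(phase) + 0.2 * rng.standard_normal((n_cycles, n_phase))        # cycles x phase bins
v = 0.3 * np.cos(phase) + 0.1 * rng.standard_normal((n_cycles, n_phase))

u_fluct = u - u.mean(axis=0)                 # subtract the phase (ensemble) average
v_fluct = v - v.mean(axis=0)
reynolds_shear = -RHO * (u_fluct * v_fluct).mean(axis=0)   # one estimate per phase bin (N/m^2)
print(reynolds_shear.min(), reynolds_shear.max())
```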
---
paper_title: STEADY FLOW DYNAMICS OF PROSTHETIC AORTIC HEART VALVES : A COMPARATIVE EVALUATION WITH PIV TECHNIQUES
paper_content:
Particle Image Velocimetry (PIV), capable of providing full-field measurement of velocities and flow stresses, has become an invaluable tool in studying flow behaviour in prosthetic heart valves. This method was used to evaluate the performance of four prosthetic heart valves: a porcine bioprosthesis, a caged ball valve, and two single leaflet tilting disc valves with different opening angles. Flow visualization techniques, combined with velocity vector fields and Reynolds stress mappings in the aortic root obtained from PIV, and pressure measurements were used to give an overall picture of the flow field of the prosthetic heart valves under steady flow conditions. The porcine bioprosthesis exhibited the highest pressure loss and Reynolds stresses of all the valves tested. This was mainly due to the reduction in orifice area caused by the valve mounting ring and the valve stents. For the tilting disc valves, a larger opening angle resulted in a smoother flow profile, and thus lower Reynolds stresses and pressure drops. The St. Vincent valve exhibited the lowest pressure drop and Reynolds stresses.
---
paper_title: Fluid mechanics of heart valves.
paper_content:
Valvular heart disease is a life-threatening disease that afflicts millions of people worldwide and leads to approximately 250,000 valve repairs and/or replacements each year. Malfunction of a native valve impairs its efficient fluid mechanic/hemodynamic performance. Artificial heart valves have been used since 1960 to replace diseased native valves and have saved millions of lives. Unfortunately, despite four decades of use, these devices are less than ideal and lead to many complications. Many of these complications/problems are directly related to the fluid mechanics associated with the various mechanical and bioprosthetic valve designs. This review focuses on the state-of-the-art experimental and computational fluid mechanics of native and prosthetic heart valves in current clinical use. The fluid dynamic performance characteristics of caged-ball, tilting-disc, bileaflet mechanical valves and porcine and pericardial stented and nonstented bioprostheic valves are reviewed. Other issues related to heart valve performance, such as biomaterials, solid mechanics, tissue mechanics, and durability, are not addressed in this review.
---
paper_title: Characterization of Hemodynamic Forces Induced by Mechanical Heart Valves: Reynolds vs. Viscous Stresses
paper_content:
Bileaflet mechanical heart valves (BMHV) are widely used to replace diseased heart valves. Implantation of BMHV, however, has been linked with major complications, which are generally considered to be caused by mechanically induced damage of blood cells resulting from the non-physiological hemodynamic environment induced by BMHV, including regions of recirculating flow and elevated Reynolds (turbulence) shear stress levels. In this article, we analyze the results of 2D high-resolution velocity measurements and full 3D numerical simulation for pulsatile flow through a BMHV mounted in a model axisymmetric aorta to investigate the mechanical environment experienced by blood elements under physiologic conditions. We show that the so-called Reynolds shear stresses neither directly contribute to the mechanical load on blood cells nor are a proper measure of the mechanical load experienced by blood cells. We also show that the overall levels of the viscous stresses, which comprise the actual flow environment experienced by cells, are apparently too low to induce damage to red blood cells, but could potentially damage platelets. The maximum instantaneous viscous shear stress observed throughout a cardiac cycle is <15 N/m^2. Our analysis is restricted to the flow downstream of the valve leaflets and thus does not address other areas within the BMHV where potentially hemodynamically hazardous levels of viscous stresses could still occur (such as in the hinge gaps and leakage jets).
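The distinction drawn above can be made concrete with the two defining expressions, evaluated here on invented numbers: the viscous shear stress mu*du/dy is what a cell actually feels locally, whereas the Reynolds stress -rho*<u'v'> is a statistical momentum-flux term of the averaged flow.
```python
# The two stresses contrasted above, evaluated on invented numbers: the viscous shear stress
# mu*du/dy acts on individual cells, while -rho*<u'v'> is a statistical momentum-flux term.
MU = 3.5e-3        # dynamic viscosity of blood (Pa*s)
RHO = 1060.0       # blood density (kg/m^3)
du_dy = 2000.0     # local mean shear rate (1/s), assumed
uv_corr = -0.005   # <u'v'> velocity-fluctuation correlation (m^2/s^2), assumed

tau_viscous = MU * du_dy            # ~7 N/m^2
tau_reynolds = -RHO * uv_corr       # ~5.3 N/m^2
print(tau_viscous, tau_reynolds)
```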
---
paper_title: Turbulent stress measurements downstream of three bileaflet heart valve designs in pigs
paper_content:
OBJECTIVE ::: Mechanical heart valves can cause thromboembolic complications, possibly due to abnormal flow patterns that produce turbulence downstream of the valve. The objective of this study was to investigate whether three different bileaflet valve designs would exhibit clinically relevant differences in downstream turbulent stresses. ::: ::: ::: METHODS ::: Three bileaflet mechanical heart valves (Medtronic Advantage), CarboMedics Orbis Universal and St. Jude Medical Standard) were implanted into 19 female 90 kg pigs. Blood velocity was measured during open chest conditions in the cross sectional area downstream of the valves with 10 MHz ultrasonic probes connected to a modified Alfred Pulsed Doppler equipment. As a measure of turbulence, Reynolds normal stress (RNS) was calculated at three different cardiac output ranges (3-4, 4.5-5.5, 6-7 L/min). ::: ::: ::: RESULTS ::: Data from 12 animals were obtained. RNS correlated with increasing cardiac outputs. The highest instantaneous RNS observed in these experiments was 47 N/m2, and the mean RNS taken spatially over the cross sectional area of the aorta during systole was between 3 N/m2 and 15 N/m2. In none of the cardiac output ranges RNS values exceeded the lower critical limit for erythrocyte or thrombocyte damage for any of the valve designs. ::: ::: ::: CONCLUSIONS ::: Reynolds normal stress values were below 100 N/m2 for all three valve designs and the difference in design was not reflected in generation of turbulence. Hence, it is unlikely that any of the valve designs causes flow induced damage to platelets or erythrocytes.
---
paper_title: An immersed interface method for viscous incompressible flows involving rigid and flexible boundaries
paper_content:
We present an immersed interface method for the incompressible Navier-Stokes equations capable of handling both rigid and flexible boundaries. The immersed boundaries are represented by a number of Lagrangian control points. In order to ensure that the no-slip condition on the rigid boundary is satisfied, singular forces are applied on the fluid. The forces are related to the jumps in pressure and the jumps in the derivatives of both pressure and velocity, and are interpolated using cubic splines. The strength of the singular forces at the rigid boundary is determined by solving a small system of equations at each timestep. For flexible boundaries, the forces that the boundary exerts on the fluid are computed from the constitutive relation of the flexible boundary and are applied to the fluid through the jump conditions. The position of the flexible boundary is updated implicitly using a quasi-Newton method (BFGS) within each timestep. The Navier-Stokes equations are discretized on a staggered Cartesian grid by a second order accurate projection method for pressure and velocity and the overall scheme is second order accurate.
---
paper_title: A distributed Lagrange multiplier/fictitious domain method for particulate flows
paper_content:
Abstract A new Lagrange-multiplier based fictitious-domain method is presented for the direct numerical simulation of viscous incompressible flow with suspended solid particles. The method uses a finite-element discretization in space and an operator-splitting technique for discretization in time. The linearly constrained quadratic minimization problems which arise from this splitting are solved using conjugate-gradient algorithms. A key feature of the method is that the fluid–particle motion is treated implicitly via a combined weak formulation in which the mutual forces cancel—explicit calculation of the hydrodynamic forces and torques on the particles is not required. The fluid flow equations are enforced inside, as well as outside, the particle boundaries. The flow inside, and on, each particle boundary is constrained to be a rigid-body motion using a distributed Lagrange multiplier. This multiplier represents the additional body force per unit volume needed to maintain the rigid-body motion inside the particle boundary, and is analogous to the pressure in incompressible fluid flow, whose gradient is the force required to maintain the constraint of incompressibility. The method is validated using the sedimentation of two circular particles in a two-dimensional channel as the test problem, and is then applied to the sedimentation of 504 circular particles in a closed two-dimensional box. The resulting suspension is fairly dense, and the computation could not be carried out without an effective strategy for preventing particles from penetrating each other or the solid outer walls; in the method described herein, this is achieved by activating a repelling force on close approach, such as might occur as a consequence of roughness elements on the particle. The development of physically based mathematical methods for avoiding particle–particle and particle–wall penetration is a new problem posed by the direct simulation of fluidized suspensions. The simulation starts with the particles packed densely at the top of the sedimentation column. In the course of their fall to the bottom of the box, a fingering motion of the particles, which are heavier than the surrounding fluid, develops in a way reminiscent of the familiar dynamics associated with the Rayleigh–Taylor instability of heavy fluid above light. We also present here the results of a three-dimensional simulation of the sedimentation of two spherical particles. The simulation reproduces the familiar dynamics of drafting, kissing and tumbling to side-by-side motion with the line between centers across the flow at Reynolds numbers in the hundreds.
---
paper_title: A three-dimensional computational method for blood flow in the heart. 1. Immersed elastic fibers in a viscous incompressible fluid
paper_content:
This paper describes the numerical solution of the 3-dimensional equations of motion of a viscous incompressible fluid that contains an immersed system of elastic fibers. Implementation details such as vectorization and the efficient use of external memory are discussed. The method is applied to the damped vibrations of a fiber-wound toroidal tube, and empirical evidence of convergence is presented.
---
paper_title: Flow patterns around heart valves: A numerical method
paper_content:
The subject of this paper is the flow of a viscous incompressible fluid in a region containing immersed boundaries which move with the fluid and exert forces on the fluid. An example of such a boundary is the flexible leaflet of a human heart valve. It is the main achievement of the present paper that a method for solving the Navier-Stokes equations on a rectangular domain can now be applied to a problem involving this type of immersed boundary. This is accomplished by replacing the boundary by a field of force which is defined on the mesh points of the rectangular domain and which is calculated from the configuration of the boundary. In order to link the representations of the boundary and fluid, since boundary points and mesh points need not coincide, a semi-discrete analog of the δ function is introduced. Because the boundary forces are of order $h^{-1}$, and because they are sensitive to small changes in boundary configuration, they tend to produce numerical instability. This difficulty is overcome by an implicit method for calculating the boundary forces, a method which takes into account the displacements that will be produced by the boundary forces themselves. The numerical scheme is applied to the two-dimensional simulation of flow around the natural mitral valve.
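The force-spreading step built on the semi-discrete delta function can be sketched as follows. The kernel used here is the common 4-point cosine approximation (one of several admissible choices, not necessarily the one in the original paper), and the grid, point location and force are invented.
```python
# Sketch of the immersed-boundary force-spreading step: a Lagrangian point force F at
# position X is distributed to nearby Cartesian grid nodes through a smoothed delta
# function. The 4-point cosine kernel below is one common choice; grid, X and F are invented.
import numpy as np

def delta_h(r, h):
    r = np.abs(r)
    return np.where(r < 2 * h, (1.0 + np.cos(np.pi * r / (2 * h))) / (4 * h), 0.0)

def spread_force(f_grid, X, F, h):
    """Add the contribution of force F = (Fx, Fy) at point X = (x, y) to the grid field."""
    ny, nx = f_grid.shape[1], f_grid.shape[2]
    i0, j0 = int(X[0] / h), int(X[1] / h)
    for i in range(i0 - 2, i0 + 3):              # only nodes inside the kernel support
        for j in range(j0 - 2, j0 + 3):
            if 0 <= i < nx and 0 <= j < ny:
                w = delta_h(i * h - X[0], h) * delta_h(j * h - X[1], h)
                f_grid[:, j, i] += w * np.asarray(F)
    return f_grid

h = 0.1
f_grid = np.zeros((2, 32, 32))                   # (component, y, x) force density field
spread_force(f_grid, X=(1.234, 2.057), F=(0.0, -1.0), h=h)
print(f_grid[1].sum() * h * h)                   # integrates back to ~ -1.0 (discrete delta property)
```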
---
paper_title: Three-Dimensional Fluid-Structure Interaction Simulation of Bileaflet Mechanical Heart Valve Flow Dynamics
paper_content:
The wall shear stress induced by the leaflet motion during the valve-closing phase has been implicated with thrombus initiation with prosthetic valves. Detailed flow dynamic analysis in the vicinity of the leaflets and the housing during the valve-closure phase is of interest in understanding this relationship. A three-dimensional unsteady flow analysis past bileaflet valve prosthesis in the mitral position is presented incorporating a fluid-structure interaction algorithm for leaflet motion during the valve-closing phase. Arbitrary Lagrangian-Eulerian method is employed for incorporating the leaflet motion. The forces exerted by the fluid on the leaflets are computed and applied to the leaflet equation of motion to predict the leaflet position. Relatively large velocities are computed in the valve clearance region between the valve housing and the leaflet edge with the resulting relatively large wall shear stresses at the leaflet edge during the impact-rebound duration. Negative pressure transients are computed on the surface of the leaflets on the atrial side of the valve, with larger magnitudes at the leaflet edge during the closing and rebound as well. Vortical flow development is observed on the inflow (atrial) side during the valve impact-rebound phase in a location central to the leaflet and away from the clearance region where cavitation bubbles have been visualized in previously reported experimental studies.
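On the structural side, the fluid-structure coupling described above reduces to a rigid-body rotation equation for each leaflet driven by the fluid moment. The sketch below integrates such an equation with an explicit step; the moment model, inertia, time step and angle limits are invented placeholders for what a CFD solver would supply.
```python
# Minimal sketch of the structural side of the coupling: the fluid moment drives the leaflet
# rigid-body rotation I * theta_ddot = M_fluid(theta, t). All constants below are invented.
import math

I_LEAFLET = 2.0e-7                  # leaflet moment of inertia about its hinge (kg*m^2), assumed
DT = 1.0e-4                         # time step (s)
THETA_OPEN, THETA_CLOSED = math.radians(85.0), math.radians(30.0)

def fluid_moment(theta, t):
    """Placeholder closing moment during flow deceleration (a flow solver supplies this)."""
    return -2.0e-3 * math.sin(math.pi * t / 0.05) * math.cos(theta)

theta, omega, t = THETA_OPEN, 0.0, 0.0
while t < 0.05 and theta > THETA_CLOSED:
    alpha = fluid_moment(theta, t) / I_LEAFLET       # angular acceleration
    omega += alpha * DT                              # semi-implicit Euler update
    theta += omega * DT
    t += DT
print(f"leaflet angle after {t * 1e3:.1f} ms: {math.degrees(theta):.1f} deg")
```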
---
paper_title: A general reconstruction algorithm for simulating flows with complex 3D immersed boundaries on Cartesian grids
paper_content:
In the present note a general reconstruction algorithm for simulating incompressible flows with complex immersed boundaries on Cartesian grids is presented. In the proposed method an arbitrary three-dimensional solid surface immersed in the fluid is discretized using an unstructured, triangular mesh, and all the Cartesian grid nodes near the interface are identified. Then, the solution at these nodes is reconstructed via linear interpolation along the local normal to the body, in a way that the desired boundary conditions for both pressure and velocity fields are enforced. The overall accuracy of the resulting solver is second-order, as it is demonstrated in two test cases involving laminar flow past a sphere.
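The reconstruction step described above can be illustrated in one dimension: the value at a grid node lying close to the immersed surface is obtained by linear interpolation along the local normal between the boundary condition on the surface and the flow solution at an interior point. The numbers below are invented.
```python
# 1D sketch of the normal-direction reconstruction: the value at a grid node lying near the
# immersed surface is linearly interpolated along the local normal between the boundary
# condition on the surface and an interior flow value. All numbers are invented.
def reconstruct_near_wall(u_wall, u_interior, d_node, d_interior):
    """Linear profile along the normal: u(d) = u_wall + (u_interior - u_wall) * d / d_interior."""
    return u_wall + (u_interior - u_wall) * d_node / d_interior

# No-slip wall (u_wall = 0); the flow value 0.8 is known at distance 0.02 from the wall,
# and the near-wall node sits at distance 0.005 along the same normal.
print(reconstruct_near_wall(u_wall=0.0, u_interior=0.8, d_node=0.005, d_interior=0.02))  # 0.2
```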
---
paper_title: Numerical simulation of 3D fluid-structure interaction flow using an immersed object method with overlapping grids
paper_content:
The newly developed immersed object method (IOM) [Tai CH, Zhao Y, Liew KM. Parallel computation of unsteady incompressible viscous flows around moving rigid bodies using an immersed object method with overlapping grids. J Comput Phys 2005; 207(1): 151-72] is extended for 3D unsteady flow simulation with fluid-structure interaction (FSI), which is made possible by combining it with a parallel unstructured multigrid Navier-Stokes solver using a matrix-free implicit dual time stepping and finite volume method [Tai CH, Zhao Y, Liew KM. Parallel computation of unsteady three-dimensional incompressible viscous flow using an unstructured multigrid method. In: The second M.I.T. conference on computational fluid and solid mechanics, June 17-20, MIT, Cambridge, MA 02139, USA, 2003; Tai CH, Zhao Y, Liew KM. Parallel computation of unsteady three-dimensional incompressible viscous flow using an unstructured multigrid method, Special issue on 'Preconditioning methods: algorithms, applications and software environments'. Comput Struct 2004; 82(28): 2425-36]. This uniquely combined method is then employed to perform a detailed study of 3D unsteady flows with complex FSI. In the IOM, a body force term F is introduced into the momentum equations during the artificial compressibility (AC) sub-iterations so that a desired velocity distribution V_0 can be obtained on and within the object boundary, which need not coincide with the grid, by adopting the direct forcing method. An object mesh is immersed into the flow domain to define the boundary of the object. The advantage of this is that bodies of almost arbitrary shapes can be added without grid restructuring, a procedure which is often time-consuming and computationally expensive. It has enabled us to perform complex and detailed simulations of 3D unsteady blood flow and blood-leaflet interaction in a mechanical heart valve (MHV) under physiological conditions.
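The direct-forcing idea referenced above can be shown in a minimal one-dimensional form: the body force added during the sub-iterations is chosen so that the provisional velocity on nodes covered by the object is driven to the desired distribution V_0 (zero here, for a stationary object). The grid, time step and flagged nodes are invented.
```python
# Minimal 1D sketch of direct forcing: after the unforced update gives a provisional velocity
# u_star, the body force F = (V0 - u_star)/dt is applied on nodes flagged as covered by the
# object so that they take the target velocity V0 (zero here). Grid and numbers are invented.
import numpy as np

dt = 0.01
u_star = np.linspace(0.0, 1.0, 11)                       # provisional velocity field
inside = (np.arange(11) >= 4) & (np.arange(11) <= 6)     # nodes covered by the immersed object
v0 = 0.0                                                 # desired velocity on/within the object

force = np.zeros_like(u_star)
force[inside] = (v0 - u_star[inside]) / dt               # direct-forcing body force
u_new = u_star + dt * force                              # corrected field equals V0 on the object
print(u_new)
```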
---
paper_title: An immersed boundary method for complex incompressible flows
paper_content:
An immersed boundary method for time-dependent, three-dimensional, incompressible flows is presented in this paper. The incompressible Navier-Stokes equations are discretized using a low-diffusion flux splitting method for the inviscid fluxes and second-order central-differences for the viscous components. Higher-order accuracy is achieved by using weighted essentially non-oscillatory (WENO) or total variation diminishing (TVD) schemes. An implicit method based on artificial compressibility and dual-time stepping is used for time advancement. The immersed boundary surfaces are defined as clouds of points, which may be structured or unstructured. Immersed-boundary objects are rendered as level sets in the computational domain, and concepts from computational geometry are used to classify points as being outside, near, or inside the immersed boundary. The velocity field near an immersed surface is determined from separate interpolations of the components tangent and normal to the surface. The tangential velocity near the surface is constructed as a power-law function of the local wall normal distance. Appropriate choices of the power law enable the method to approximate the energizing effects of a turbulent boundary layer for higher Reynolds number flows. Five different flow problems (flow over a circular cylinder, an in-line oscillating cylinder, a NACA0012 airfoil, a sphere, and a stationary mannequin) are simulated using the present immersed boundary method, and the predictions show good agreement with previous computational and experimental results. Finally, the flow induced by realistic human walking motion is simulated as an example of a problem involving multiple moving immersed objects.
---
paper_title: A hybrid Cartesian/immersed boundary method for simulating flows with 3D, geometrically complex, moving bodies
paper_content:
A numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations in Cartesian domains containing immersed boundaries of arbitrary geometrical complexity moving with prescribed kinematics. The governing equations are discretized on a hybrid staggered/non-staggered grid layout using second-order accurate finite-difference formulas. The discrete equations are integrated in time via a second-order accurate dual-time-stepping, artificial compressibility iteration scheme. Unstructured, triangular meshes are employed to discretize complex immersed boundaries. The nodes of the surface mesh constitute a set of Lagrangian control points used to track the motion of the flexible body. At every instant in time, the influence of the body on the flow is accounted for by applying boundary conditions at Cartesian grid nodes located in the exterior but in the immediate vicinity of the body by reconstructing the solution along the local normal to the body surface. Grid convergence tests are carried out for the flow induced by an oscillating sphere in a cubic cavity, which show that the method is second-order accurate. The method is validated by applying it to calculate flow in a Cartesian domain containing a rigid sphere rotating at constant angular velocity as well as flow induced by a flapping wing. The ability of the method to simulate flows in domains with arbitrarily complex moving bodies is demonstrated by applying it to simulate flow past an undulating fish-like body and flow past an anatomically realistic planktonic copepod performing an escape-like maneuver.
---
paper_title: Vorticity dynamics of a bileaflet mechanical heart valve in an axisymmetric aorta
paper_content:
We present comprehensive particle image velocimetry measurements and direct numerical simulation (DNS) of physiological, pulsatile flow through a clinical quality bileaflet mechanical heart valve mounted in an idealized axisymmetric aorta geometry with a sudden expansion modeling the aortic sinus region. Instantaneous and ensemble-averaged velocity measurements as well as the associated statistics of leaflet kinematics are reported and analyzed in tandem to elucidate the structure of the velocity and vorticity fields of the ensuing flow-structure interaction. The measurements reveal that during the first half of the acceleration phase, the flow is laminar and repeatable from cycle to cycle. The valve housing shear layer rolls up into the sinus and begins to extract vorticity of opposite sign from the sinus wall. A start-up vortical structure is shed from the leaflets and is advected downstream as the leaflet shear layers become wavy and oscillatory. In the second half of flow acceleration the leaflet shea...
---
paper_title: Collagen fibers reduce stresses and stabilize motion of aortic valve leaflets during systole.
paper_content:
The effect of collagen fibers on the mechanics and hemodynamics of a trileaflet aortic valve contained in a rigid aortic root is investigated in a numerical analysis of the systolic phase. Collagen fibers are known to reduce stresses in the leaflets during diastole, but their role during systole has not been investigated in detail yet. It is demonstrated that also during systole these fibers substantially reduce stresses in the leaflets and provide smoother opening and closing. Compared to isotropic leaflets, collagen reinforcement reduces the fluttering motion of the leaflets. Due to the exponential stress-strain behavior of collagen, the fibers have little influence on the initial phase of the valve opening, which occurs at low strains, and therefore have little impact on the transvalvular pressure drop.
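The role that the exponential stress-strain behaviour plays in the opening phase can be illustrated with a small sketch of a Fung-type fiber law (the coefficients below are placeholders, not the material constants used in the paper):

import math

def fiber_stress(strain, a=1.0e3, b=20.0):
    """Exponential (Fung-type) fiber stress-strain law: the stress is nearly
    zero at small strains, so the fibers barely affect the initial valve
    opening, and rises steeply at larger strains, where they unload the
    leaflet.  a [Pa] and b [-] are illustrative material constants."""
    return a * (math.exp(b * strain) - 1.0)

for e in (0.0, 0.01, 0.05, 0.10):
    print(f"strain = {e:.2f}   fiber stress = {fiber_stress(e):.1f} Pa")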
---
paper_title: Curvilinear immersed boundary method for simulating fluid structure interaction with complex 3D rigid bodies
paper_content:
The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782-1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken's acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the FSI algorithm. The stabilizing role of under-relaxation is also clarified and an upper bound of the required for stability under-relaxation coefficient is derived.
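The strong-coupling stabilization described above (under-relaxation combined with Aitken acceleration) can be sketched for a single interface unknown as follows; fluid_then_structure is a stand-in for one fluid solve followed by the structural update, and the sample fixed-point map is only a toy problem chosen so that the unrelaxed iteration would diverge:

def aitken_fsi_iteration(fluid_then_structure, x0, omega0=0.5, tol=1e-8, max_iter=50):
    """Strongly coupled partitioned FSI iteration with Aitken's dynamic
    under-relaxation for a scalar interface variable x (e.g. a leaflet angle).
    fluid_then_structure(x) performs one fluid solve with the interface at x
    and returns the structural prediction x_tilde."""
    x, r_old, omega = x0, None, omega0
    for k in range(max_iter):
        x_tilde = fluid_then_structure(x)
        r = x_tilde - x                            # interface residual
        if abs(r) < tol:
            return x, k
        if r_old is not None:
            omega = -omega * r_old / (r - r_old)   # Aitken update of the relaxation factor
        x = x + omega * r                          # under-relaxed interface update
        r_old = r
    return x, max_iter

# Toy map with |g'| > 1: the plain (omega = 1) iteration diverges, the relaxed one converges
g = lambda x: 0.3 - 2.5 * x
print(aitken_fsi_iteration(g, x0=0.0))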
---
paper_title: Two-dimensional fluid-structure interaction simulation of bileaflet mechanical heart valve flow dynamics.
paper_content:
BACKGROUND AND AIM OF THE STUDY: Mechanical heart valve implantation requires long-term anticoagulation because of thromboembolic complications. Recent studies have indicated that the relatively high wall shear stresses and negative pressure transients developed during the valve closing phase may be dominant factors inducing thrombus initiation. The study aim was a two-dimensional (2D) functional simulation of flow past bileaflet heart valve prosthesis during the closing phase, incorporating the fluid-structure interaction analysis to induce motion of the leaflets. METHODS: The fluid-structure interaction model used was based on unsteady 2D Navier-Stokes equations with the arbitrary Lagrangian-Eulerian method for moving boundaries, coupled with the dynamic equation for leaflet motion. Parametric analysis of the effect of valve size, leaflet density, and the coefficient of resilience at the instant of impact of the leaflet with the housing were also performed. RESULTS: Comparing the predicted motion of the leaflet with previous experimental results validated the simulation. The results showed the presence of negative pressure transients near the inflow side of the leaflet at the instant of valve closure, and the negative pressure transients were augmented during the leaflet rebound process. Relatively high velocities and wall shear stresses, detrimental to the formed elements in blood were present in the clearance region between the leaflet and valve housing at the instant of valve closure. CONCLUSION: The simulation can be potentially applied to analyze the effects of valve geometry and dimensions, and the effect of leaflet material on the flow dynamics past the valve prosthesis during the opening and closing phases for design improvements in minimizing problems associated with thromboembolic complications.
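The leaflet dynamics in such simulations reduces to a single rigid-body equation for the opening angle, I * d(omega)/dt = M_fluid. A minimal time-integration sketch is given below (the moment value, inertia, and stop angles are placeholders, and the zero-velocity impact treatment is a deliberate simplification of the rebound behaviour discussed in the abstract):

def advance_leaflet(theta, omega, moment, inertia, dt, theta_min=0.0, theta_max=1.48):
    """One explicit step of the leaflet equation of motion
    I * d(omega)/dt = M_fluid and d(theta)/dt = omega, with hard stops at the
    fully closed (theta_min) and fully open (theta_max) positions.  moment is
    the fluid moment about the pivot returned by the flow solver."""
    omega_new = omega + dt * moment / inertia
    theta_new = theta + dt * omega_new        # semi-implicit Euler
    if theta_new <= theta_min or theta_new >= theta_max:
        theta_new = min(max(theta_new, theta_min), theta_max)
        omega_new = 0.0                       # crude model of impact with the housing
    return theta_new, omega_new

# Example: a constant closing moment acting on an initially fully open leaflet
theta, omega = 1.48, 0.0
for step in range(5):
    theta, omega = advance_leaflet(theta, omega, moment=-1e-4, inertia=1e-6, dt=1e-3)
    print(f"t = {(step + 1) * 1e-3:.3f} s   theta = {theta:.4f} rad")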
---
paper_title: A numerical method for solving the 3D unsteady incompressible Navier-Stokes equations in curvilinear domains with complex immersed boundaries
paper_content:
A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
---
paper_title: IMMERSED BOUNDARY METHODS
paper_content:
The term “immersed boundary method” was first used in reference to a method developed by Peskin (1972) to simulate cardiac mechanics and associated blood flow. The distinguishing feature of this method was that the entire simulation was carried out on a Cartesian grid, which did not conform to the geometry of the heart, and a novel procedure was formulated for imposing the effect of the immersed boundary (IB) on the flow. Since Peskin introduced this method, numerous modifications and refinements have been proposed and a number of variants of this approach now exist. In addition, there is another class of methods, usually referred to as “Cartesian grid methods,” which were originally developed for simulating inviscid flows with complex embedded solid boundaries on Cartesian grids (Berger & Aftosmis 1998, Clarke et al. 1986, Zeeuw & Powell 1991). These methods have been extended to simulate unsteady viscous flows (Udaykumar et al. 1996, Ye et al. 1999) and thus have capabilities similar to those of IB methods. In this review, we use the term immersed boundary (IB) method to encompass all such methods that simulate viscous flows with immersed (or embedded) boundaries on grids that do not conform to the shape of these boundaries. Furthermore, this review focuses mainly on IB methods for flows with immersed solid boundaries. Application of these and related methods to problems with liquid-liquid and liquid-gas boundaries was covered in previous reviews by Anderson et al. (1998) and Scardovelli & Zaleski (1999). Consider the simulation of flow past a solid body shown in Figure 1a. The conventional approach to this would employ structured or unstructured grids that conform to the body. Generating these grids proceeds in two sequential steps. First, a surface grid covering the boundaries Γ_b is generated. This is then used as a boundary condition to generate a grid in the volume Ω_f occupied by the fluid. If a finite-difference method is employed on a structured grid, then the differential form of the governing equations is transformed to a curvilinear coordinate system aligned with the grid lines (Ferziger & Peric 1996). Because the grid conforms to the surface of the body, the transformed equations can then be discretized in the
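In the classical (continuous-forcing) branch of the IB family traced back to Peskin here, the boundary acts on the Cartesian grid through forces spread with a regularized delta function. A minimal one-dimensional sketch using the standard 4-point cosine kernel follows (real implementations spread a product of one-dimensional kernels in each coordinate direction; the grid layout and names are illustrative):

import math

def delta_4pt(r):
    """Peskin's 4-point cosine regularized delta function in one dimension,
    with r the distance from the Lagrangian point in grid units."""
    r = abs(r)
    return 0.25 * (1.0 + math.cos(math.pi * r / 2.0)) if r < 2.0 else 0.0

def spread_force(x_lag, f_lag, n_cells, h):
    """Spread a Lagrangian point force f_lag located at x_lag onto a 1D grid
    of n_cells cells of width h (nodes at (i + 0.5) * h)."""
    f_grid = [0.0] * n_cells
    for i in range(n_cells):
        x_i = (i + 0.5) * h
        f_grid[i] += f_lag * delta_4pt((x_i - x_lag) / h) / h
    return f_grid

# Example: a unit force at x = 0.52 spread onto a grid with h = 0.1
f = spread_force(x_lag=0.52, f_lag=1.0, n_cells=10, h=0.1)
print([round(v, 3) for v in f])
print("discrete integral:", round(sum(f) * 0.1, 6))   # ~1.0: the total force is conserved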
---
paper_title: A ghost-cell immersed boundary method for flow in complex geometry
paper_content:
An efficient ghost-cell immersed boundary method (GCIBM) for simulating turbulent flows in complex geometries is presented. A boundary condition is enforced through a ghost cell method. The reconstruction procedure allows systematic development of numerical schemes for treating the immersed boundary while preserving the overall second-order accuracy of the base solver. Both Dirichlet and Neumann boundary conditions can be treated. The current ghost cell treatment is suitable for both staggered and non-staggered Cartesian grids. The accuracy of the current method is validated using flow past a circular cylinder and large eddy simulation of turbulent flow over a wavy surface. Numerical results are compared with experimental data and boundary-fitted grid results. The method is further extended to an existing ocean model (MITGCM) to simulate geophysical flow over a three-dimensional bump. The method is easily implemented as evidenced by our use of several existing codes.
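The ghost-cell construction reduces to a one-line formula once the value at the image (mirror) point of the ghost node is known. A minimal sketch for Dirichlet and Neumann conditions is given below (the multi-dimensional interpolation that produces the image-point value in the actual method is omitted; the names are illustrative):

def ghost_value_dirichlet(u_wall, u_image):
    """Ghost-cell value enforcing a Dirichlet condition when the boundary lies
    midway between the ghost node and its image point: linear extrapolation
    gives u_ghost = 2 * u_wall - u_image."""
    return 2.0 * u_wall - u_image

def ghost_value_neumann(u_image, dudn_wall, distance):
    """Ghost-cell value enforcing a Neumann condition du/dn = dudn_wall, with
    distance the ghost-to-image separation along the wall normal."""
    return u_image - dudn_wall * distance

# Example: no-slip wall (u_wall = 0) with an image-point velocity of 0.8
print(ghost_value_dirichlet(0.0, 0.8))    # -> -0.8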
---
paper_title: The Immersed Interface Method for Elliptic Equations with Discontinuous Coefficients and Singular Sources
paper_content:
The authors develop finite difference methods for elliptic equations of the form \[ \nabla \cdot (\beta (x)\nabla u(x)) + \kappa (x)u(x) = f(x)\] in a region $\Omega $ in one or two space dimension...
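As plain background for the class of problems treated in this paper (and not the immersed interface method's jump-corrected stencil itself), the sketch below solves the one-dimensional version of the variable-coefficient operator with a piecewise-constant beta, using harmonic averaging of beta at the cell faces so that the flux stays continuous across the coefficient jump:

import numpy as np

def solve_variable_coefficient_poisson(beta, f, h, u_left, u_right):
    """Solve d/dx(beta du/dx) = f on a uniform 1D grid with Dirichlet boundary
    values, using harmonic means of beta at cell faces."""
    n = len(f)
    A = np.zeros((n, n))
    b = np.array(f, dtype=float) * h * h
    beta = np.asarray(beta, dtype=float)
    for i in range(n):
        bm = 2 * beta[i] * beta[i - 1] / (beta[i] + beta[i - 1]) if i > 0 else beta[0]
        bp = 2 * beta[i] * beta[i + 1] / (beta[i] + beta[i + 1]) if i < n - 1 else beta[-1]
        A[i, i] = -(bm + bp)
        if i > 0:
            A[i, i - 1] = bm
        else:
            b[i] -= bm * u_left
        if i < n - 1:
            A[i, i + 1] = bp
        else:
            b[i] -= bp * u_right
    return np.linalg.solve(A, b)

# Example: beta jumps from 1 to 10 at x = 0.5, zero source term, u(0) = 0, u(1) = 1
n, h = 40, 1.0 / 41
beta = [1.0 if (i + 1) * h < 0.5 else 10.0 for i in range(n)]
u = solve_variable_coefficient_poisson(beta, [0.0] * n, h, u_left=0.0, u_right=1.0)
print(u[::8])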
---
paper_title: A three-dimensional computational analysis of fluid–structure interaction in the aortic valve
paper_content:
Abstract Numerical analysis of the aortic valve has mainly been focused on the closing behaviour during the diastolic phase rather than the kinematic opening and closing behaviour during the systolic phase of the cardiac cycle. Moreover, the fluid–structure interaction in the aortic valve system is most frequently ignored in numerical modelling. The effect of this interaction on the valve's behaviour during systolic functioning is investigated. The large differences in material properties of fluid and structure and the finite motion of the leaflets complicate blood–valve interaction modelling. This has impeded numerical analyses of valves operating under physiological conditions. A numerical method, known as the Lagrange multiplier based fictitious domain method, is used to describe the large leaflet motion within the computational fluid domain. This method is applied to a three-dimensional finite element model of a stented aortic valve. The model provides both the mechanical behaviour of the valve and the blood flow through it. Results show that during systole the leaflets of the stented valve appear to be moving with the fluid in an essentially kinematical process governed by the fluid motion.
---
paper_title: Combined Immersed-Boundary Finite-Difference Methods for Three-Dimensional Complex Flow Simulations
paper_content:
A second-order accurate, highly efficient method is developed for simulating unsteady three-dimensional incompressible flows in complex geometries. This is achieved by using boundary body forces that allow the imposition of the boundary conditions on a given surface not coinciding with the computational grid. The governing equations, therefore, can be discretized and solved on a regular mesh thus retaining the advantages and the efficiency of the standard solution procedures. Two different forcings are tested showing that while the quality of the results is essentially the same in both cases, the efficiency of the calculation strongly depends on the particular expression. A major issue is the interpolation of the forcing over the grid that determines the accuracy of the scheme; this ranges from zeroth-order for the most commonly used interpolations up to second-order for an ad hoc velocity interpolation. The present scheme has been used to simulate several flows whose results have been validated by experiments and other results available in the literature. Finally in the last example we show the flow inside an IC piston/cylinder assembly at high Reynolds number; to our knowledge this is the first example in which the immersed boundary technique is applied to a full three-dimensional complex flow with moving boundaries and with a Reynolds number high enough to require a subgrid-scale turbulence model.
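The boundary body forces used in this class of methods are commonly evaluated by direct forcing: the force is chosen so that the provisional velocity at the forcing points matches the desired boundary velocity. A minimal sketch follows (the off-grid interpolation that determines the accuracy of the scheme, discussed in the abstract, is omitted; the names are illustrative):

import numpy as np

def direct_forcing(u_star, v_target, forcing_mask, dt):
    """Compute the body-force field that drives the provisional velocity
    u_star to the desired boundary velocity v_target at the forcing points
    (forcing_mask == True) and is zero elsewhere:
        f = (v_target - u_star) / dt   at forcing points.
    Adding dt * f to u_star then imposes u = v_target there exactly."""
    f = np.zeros_like(u_star)
    f[forcing_mask] = (v_target[forcing_mask] - u_star[forcing_mask]) / dt
    return f

# Example: a 1D velocity sample with two forcing points belonging to a stationary body
u_star = np.array([1.0, 1.0, 0.7, 0.6, 1.0])
v_target = np.zeros_like(u_star)
mask = np.array([False, False, True, True, False])
dt = 1e-2
f = direct_forcing(u_star, v_target, mask, dt)
print(u_star + dt * f)    # the forced points are now exactly zero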
---
paper_title: A computational fluid-structure interaction analysis of a fiber-reinforced stentless aortic valve.
paper_content:
The importance of the aortic root compliance in the aortic valve performance has most frequently been ignored in computational valve modeling, although it has a significant contribution to the functionality of the valve. Aortic root aneurysm or (calcific) stiffening severely affects the aortic valve behavior and, consequently, the cardiovascular regulation. The compromised mechanical and hemodynamical performance of the valve are difficult to study both 'in vivo' and 'in vitro'. Computational analysis of the valve enables a study on system responses that are difficult to obtain otherwise. In this paper a numerical model of a fiber-reinforced stentless aortic valve is presented. In the computational evaluation of its clinical functioning the interaction of the valve with the blood is essential. Hence, the blood-tissue interaction is incorporated in the model using a combined fictitious domain/arbitrary Lagrange-Euler formulation, which is integrated within the Galerkin finite element method. The model can serve as a diagnostic tool for clinical purposes and as a design tool for improving existing valve prostheses or developing new concepts. Structural mechanical and fluid dynamical aspects are analyzed during the systolic course of the cardiac cycle. Results show that aortic root compliance largely influences the valve opening and closing configurations. Stresses in the delicate parts of the leaflets are substantially reduced if fiber-reinforcement is applied and the aortic root is able to expand.
---
paper_title: A two-dimensional fluid–structure interaction model of the aortic valve
paper_content:
Failure of synthetic heart valves is usually caused by tearing and calcification of the leaflets. Leaflet fiber-reinforcement increases the durability of these valves by unloading the delicate parts of the leaflets, maintaining their physiological functioning. The interaction of the valve with the surrounding fluid is essential when analyzing its functioning. However, the large differences in material properties of fluid and structure and the finite motion of the leaflets complicate blood-valve interaction modeling. This has, so far, obstructed numerical analyses of valves operating under physiological conditions. A two-dimensional fluid-structure interaction model is presented, which allows the Reynolds number to be within the physiological range, using a fictitious domain method based on Lagrange multipliers to couple the two phases. The extension to the three-dimensional case is straightforward. The model has been validated experimentally using laser Doppler anemometry for measuring the fluid flow and digitized high-speed video recordings to visualize the leaflet motion in corresponding geometries. Results show that both the fluid and leaflet behaviour are well predicted for different leaflet thicknesses.
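Imposing the fluid-leaflet coupling through Lagrange multipliers, as in the fictitious domain method used here, leads at the discrete level to a saddle-point (KKT) system. A minimal sketch with a generic constraint B u = g added to a symmetric system A u = b is shown below (the matrices are illustrative stand-ins, not the actual finite element operators):

import numpy as np

def solve_constrained(A, b, B, g):
    """Solve  A u + B^T lam = b,  B u = g  -- the saddle-point system that
    arises when a kinematic constraint (here, matching fluid and leaflet
    velocities) is imposed weakly via Lagrange multipliers lam."""
    n, m = A.shape[0], B.shape[0]
    K = np.block([[A, B.T], [B, np.zeros((m, m))]])
    rhs = np.concatenate([b, g])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]      # primal unknowns, multipliers

# Example: three unknowns with a single constraint u0 - u2 = 0
A = np.diag([2.0, 3.0, 4.0])
b = np.array([1.0, 1.0, 1.0])
B = np.array([[1.0, 0.0, -1.0]])
g = np.array([0.0])
u, lam = solve_constrained(A, b, B, g)
print(u, lam)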
---
paper_title: An adaptive, formally second order accurate version of the immersed boundary method
paper_content:
Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509–534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75–105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves indicates that the new methodology provides enhanced boundary layer resolution. Differences are also observed in the flow about the mitral valve leaflets.
---
paper_title: Computation of Solid-Liquid Phase Fronts in the Sharp Interface Limit on Fixed Grids
paper_content:
A finite-difference formulation is applied to track solid–liquid boundaries on a fixed underlying grid. The interface is not of finite thickness but is treated as a discontinuity and is explicitly tracked. The imposition of boundary conditions exactly on a sharp interface that passes through the Cartesian grid is performed using simple stencil readjustments in the vicinity of the interface. Attention is paid to formulating difference schemes that are globally second-order accurate in x and t. Error analysis and grid refinement studies are performed for test problems involving the diffusion and convection–diffusion equations, and for stable solidification problems. Issues concerned with stability and change of phase of grid points in the evolution of solid–liquid phase fronts are also addressed. It is demonstrated that the field calculation is second-order accurate while the position of the phase front is calculated to first-order accuracy. Furthermore, the accuracy estimates hold for the cases where there is a property jump across the interface. Unstable solidification phenomena are simulated and an attempt is made to compare results with previously published work. The results indicate the need to begin an effort to benchmark computations of instability phenomena.
---
paper_title: Sharp interface Cartesian grid method I: An easily implemented technique for 3D moving boundary computations
paper_content:
A Cartesian grid method is developed for the simulation of incompressible flows around stationary and moving three-dimensional immersed boundaries. The embedded boundaries are represented using level-sets and treated in a sharp manner without the use of source terms to represent boundary effects. The narrow-band distance function field in the level-set boundary representation facilitates implementation of the finite-difference flow solver. The resulting algorithm is implemented in a straightforward manner in three-dimensions and retains global second-order accuracy. The accuracy of the finite-difference scheme is established and shown to be comparable to finite-volume schemes that are considerably more difficult to implement. Moving boundaries are handled naturally. The pressure solver is accelerated using an algebraic multigrid technique adapted to be effective in the presence of moving embedded boundaries. Benchmarking of the method is performed against available numerical as well as experimental results.
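The level-set boundary representation makes the fluid/solid classification of grid nodes a one-line test on the sign of the distance function. A minimal sketch for a sphere is given below (the narrow-band storage and moving-boundary update of the paper are not reproduced; tolerances and names are illustrative):

import numpy as np

def classify_nodes(X, Y, Z, center, radius, h):
    """Classify Cartesian nodes against a sphere via the signed distance
    phi = |x - c| - R: phi > 0 outside (fluid), phi < 0 inside (solid);
    nodes with |phi| < h are flagged as immersed-boundary (band) nodes."""
    phi = np.sqrt((X - center[0])**2 + (Y - center[1])**2 + (Z - center[2])**2) - radius
    tag = np.where(phi > 0, 1, -1)           # 1 = fluid, -1 = solid
    tag = np.where(np.abs(phi) < h, 0, tag)  # 0 = immersed-boundary band
    return phi, tag

# Example: 32^3 grid on the unit cube, sphere of radius 0.25 at the centre
n = 32
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
phi, tag = classify_nodes(X, Y, Z, center=(0.5, 0.5, 0.5), radius=0.25, h=h)
print("fluid:", int(np.sum(tag == 1)), "solid:", int(np.sum(tag == -1)), "band:", int(np.sum(tag == 0)))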
---
paper_title: Analysis and Stabilization of Fluid-Structure Interaction Algorithm for Rigid-Body Motion
paper_content:
Fluid-structure interaction computations in geometries where different chambers are almost completely separated from each other by a movable rigid body but connected through very small gaps can encounter stability problems when a standard explicit coupling procedure is used for the coupling of the fluid flow and the movement of the rigid body. An example of such flows is the opening and closing of valves, when the valve motion is driven by the flow. A stability analysis is performed for the coupling procedure of the movement of a cylinder in a cylindrical tube, filled with fluid. Between the moving cylinder and the tube, a small gap is present, so that two chambers are formed. It is shown that a standard explicit coupling procedure or an implicit coupling procedure with explicit coupling in the subiteration steps can lead to unstable motion depending on the size of the gaps, the density of the rigid body, and the density of the fluid. It is proven that a reduction of the time-step size cannot stabilize the coupling procedure. An implicit coupling procedure with implicit coupling in the subiterations has to be used. An illustration is given on how such a coupling procedure can be implemented in a commercial computational fluid dynamics (CFD) software package. The CFD package FLUENT (Fluent, Inc.) is used. As an application, the opening and the closing of a prosthetic aortic valve is computed.
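The kind of instability analyzed here is often illustrated with a scalar added-mass model: in a loosely coupled (explicit) scheme the interface error behaves like a fixed-point iteration whose amplification factor is the ratio of added mass to structural mass, so refining the time step does not help. The sketch below is that standard illustration, not the paper's own derivation:

def explicit_coupling_error(mass_ratio, n_steps=20, e0=1e-6):
    """Scalar model of loosely coupled FSI: the interface error obeys
    e_{k+1} = -r * e_k with r = m_added / m_structure, so the explicit
    scheme diverges whenever r > 1 (heavy fluid, light structure, as for
    a valve leaflet in blood), independently of the time-step size."""
    e = e0
    for _ in range(n_steps):
        e = -mass_ratio * e
    return e

print("r = 0.5 :", explicit_coupling_error(0.5))   # error decays
print("r = 2.0 :", explicit_coupling_error(2.0))   # error blows up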
---
paper_title: Numerical simulation of the dynamics of a bileaflet prosthetic heart valve using a fluid-structure interaction approach
paper_content:
The main purpose of this study is to reproduce in silico the dynamics of a bileaflet mechanical heart valve (MHV; St Jude Hemodynamic Plus, 27 mm characteristic size) by means of a fully implicit fluid–structure interaction (FSI) method, and experimentally validate the results using an ultrafast cinematographic technique. The computational model was constructed to realistically reproduce the boundary condition (72 beats per minute (bpm), cardiac output 4.5 l/min) and the geometry of the experimental setup, including the valve housing and the hinge configuration. The simulation was carried out coupling a commercial computational fluid dynamics (CFD) package based on the finite-volume method with user-defined code for solving the structural domain, and exploiting the parallel performance of the whole numerical setup. Outputs are the leaflet excursion from opening to closure and the fluid dynamics through the valve. Results show a favorable comparison between the computed and the experimental data: the model captures the main features of the leaflet motion during systole. The use of parallel computing drastically limited the computational costs, showing linear scaling on 16 processors (despite the massive use of user-defined subroutines to manage the FSI process). The favorable agreement obtained between in vitro and in silico results of the leaflet displacements confirms the consistency of the numerical method used, and supports the application of FSI models as a major tool to optimize MHV design and eventually provide useful information to surgeons.
---
paper_title: Curvilinear immersed boundary method for simulating fluid structure interaction with complex 3D rigid bodies
paper_content:
The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782-1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken's acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the FSI algorithm. The stabilizing role of under-relaxation is also clarified and an upper bound of the required for stability under-relaxation coefficient is derived.
---
paper_title: High-Resolution Fluid–Structure Interaction Simulations of Flow Through a Bi-Leaflet Mechanical Heart Valve in an Anatomic Aorta
paper_content:
We have performed high-resolution fluid-structure interaction simulations of physiologic pulsatile flow through a bi-leaflet mechanical heart valve (BMHV) in an anatomically realistic aorta. The results are compared with numerical simulations of the flow through an identical BMHV implanted in a straight aorta. The comparisons show that although some of the salient features of the flow remain the same, the aorta geometry can have a major effect on both the flow patterns and the motion of the valve leaflets. For the studied configuration, for instance, the BMHV leaflets in the anatomic aorta open much faster and undergo a greater rebound during closing than the same valve in the straight axisymmetric aorta. Even though the characteristic triple-jet structure does emerge downstream of the leaflets for both cases, for the anatomic case the leaflet jets spread laterally and diffuse much faster than in the straight aorta due to the aortic curvature and complex shape of the anatomic sinus. Consequently the leaflet shear layers in the anatomic case remain laminar and organized for a larger portion of the accelerating phase as compared to the shear layers in the straight aorta, which begin to undergo laminar instabilities well before peak systole is reached. For both cases, however, the flow undergoes a very similar explosive transition to the small-scale, turbulent-like state just prior to reaching peak systole. The local maximum shear stress is used as a metric to characterize the mechanical environment experienced by blood cells. Pockets of high local maximum shear are found to be significantly more widespread in the anatomic aorta than in the straight aorta throughout the cardiac cycle. Pockets of high local maximum shear were located near the leaflets and in the aortic arc region. This work clearly demonstrates the importance of the aortic geometry on the flow phenomena in a BMHV and demonstrates the potential of our computational method to carry out image-based patient-specific simulations for clinically relevant studies of heart valve hemodynamics.
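The local maximum shear stress metric used above can be evaluated pointwise from the velocity-gradient tensor. One common way to compute it is sketched below (the viscosity value is illustrative for blood, and whether the paper includes additional contributions to the stress is not restated here):

import numpy as np

def local_max_shear_stress(grad_u, mu=3.5e-3):
    """Local maximum shear stress from the velocity-gradient tensor:
    tau = 2 * mu * S with S = 0.5 * (grad_u + grad_u^T); the maximum shear
    stress is half the spread of the principal stresses,
    (sigma_max - sigma_min) / 2.  mu in Pa*s (blood ~ 3.5e-3)."""
    S = 0.5 * (grad_u + grad_u.T)
    sigma = np.linalg.eigvalsh(2.0 * mu * S)    # principal viscous stresses
    return 0.5 * (sigma[-1] - sigma[0])

# Example: simple shear du/dy = 1000 1/s  ->  tau_max = mu * 1000 = 3.5 Pa
grad_u = np.array([[0.0, 1000.0, 0.0],
                   [0.0,    0.0, 0.0],
                   [0.0,    0.0, 0.0]])
print(local_max_shear_stress(grad_u))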
---
paper_title: Comparison of the Hemodynamic and Thrombogenic Performance of Two Bileaflet Mechanical Heart Valves Using a CFD/FSI Model
paper_content:
The hemodynamic and the thrombogenic performance of two commercially available bileaflet mechanical heart valves (MHVs)--the ATS Open Pivot Valve (ATS) and the St. Jude Regent Valve (SJM), was compared using a state of the art computational fluid dynamics-fluid structure interaction (CFD-FSI) methodology. A transient simulation of the ATS and SJM valves was conducted in a three-dimensional model geometry of a straight conduit with sudden expansion distal the valves, including the valve housing and detailed hinge geometry. An aortic flow waveform (60 beats/min, cardiac output 4 l/min) was applied at the inlet. The FSI formulation utilized a fully implicit coupling procedure using a separate solver for the fluid problem (FLUENT) and for the structural problem. Valve leaflet excursion and pressure differences were calculated, as well as shear stress on the leaflets and accumulated shear stress on particles released during both forward and backward flow phases through the open and closed valve, respectively. In contrast to the SJM, the ATS valve opened to less than maximal opening angle. Nevertheless, maximal and mean pressure gradients and velocity patterns through the valve orifices were comparable. Platelet stress accumulation during forward flow indicated that no platelets experienced a stress accumulation higher than 35 dyne x s/cm2, the threshold for platelet activation (Hellums criterion). However, during the regurgitation flow phase, 0.81% of the platelets in the SJM valve experienced a stress accumulation higher than 35 dyne x s/cm2, compared with 0.63% for the ATS valve. The numerical results indicate that the designs of the ATS and SJM valves, which differ mostly in their hinge mechanism, lead to different potential for platelet activation, especially during the regurgitation phase. This numerical methodology can be used to assess the effects of design parameters on the flow induced thrombogenic potential of blood recirculating devices.
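The thrombogenic comparison rests on accumulating shear stress along platelet pathlines and testing it against the 35 dyne x s/cm2 activation threshold. A minimal sketch of that post-processing step follows, assuming the stress histories along the pathlines have already been extracted from the flow solution (the linear accumulation below is a simplification of the damage models used in practice):

import numpy as np

HELLUMS_THRESHOLD = 35.0   # dyne*s/cm^2, platelet activation criterion

def stress_accumulation(tau_history, dt):
    """Linear stress accumulation along one platelet pathline:
    SA = sum(tau_i * dt), with tau in dyne/cm^2 and dt in seconds."""
    return float(np.sum(np.asarray(tau_history)) * dt)

def fraction_activated(all_tau_histories, dt):
    """Fraction of released platelets whose accumulated stress exceeds the
    Hellums activation threshold."""
    sa = np.array([stress_accumulation(h, dt) for h in all_tau_histories])
    return float(np.mean(sa > HELLUMS_THRESHOLD))

# Example: three synthetic pathlines sampled every 1 ms for 0.1 s
dt = 1e-3
histories = [np.full(100, 50.0),    #  5 dyne*s/cm^2 -> not activated
             np.full(100, 400.0),   # 40 dyne*s/cm^2 -> activated
             np.full(100, 200.0)]   # 20 dyne*s/cm^2 -> not activated
print(fraction_activated(histories, dt))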
---
paper_title: Partitioned analysis of coupled mechanical systems
paper_content:
Abstract This is a tutorial article that reviews the use of partitioned analysis procedures for the analysis of coupled dynamical systems. Attention is focused on the computational simulation of systems in which a structure is a major component. Important applications in that class are provided by thermomechanics, fluid–structure interaction and control–structure interaction. In the partitioned solution approach, systems are spatially decomposed into partitions. This decomposition is driven by physical or computational considerations. The solution is separately advanced in time over each partition. Interaction effects are accounted for by transmission and synchronization of coupled state variables. Recent developments in the use of this approach for multilevel decomposition aimed at massively parallel computation are discussed.
---
paper_title: Curvilinear immersed boundary method for simulating fluid structure interaction with complex 3D rigid bodies
paper_content:
The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782-1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken's acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the FSI algorithm. The stabilizing role of under-relaxation is also clarified and an upper bound of the required for stability under-relaxation coefficient is derived.
---
paper_title: Numerical Simulation of Flow in Mechanical Heart Valves: Grid Resolution and the Assumption of Flow Symmetry
paper_content:
A numerical method is developed for simulating unsteady, 3-D, laminar flow through a bileaflet mechanical heart valve with the leaflets fixed. The method employs a dual-timestepping artificial-compressibility approach together with overset (Chimera) grids and is second-order accurate in space and time. Calculations are carried out for the full 3-D valve geometry under steady inflow conditions on meshes with a total number of nodes ranging from 4.3×10^5 to 1.6×10^6. The computed results show that downstream of the leaflets the flow is dominated by two pairs of counter-rotating vortices, which originate on either side of the central orifice in the aortic sinus and rotate such that the common flow of each pair is directed away from the aortic wall. These vortices intensify with Reynolds number, and at a Reynolds number of approximately 1200 their complex interaction leads to the onset of unsteady flow and the break of symmetry with respect to both geometric planes of symmetry. Our results show the highly 3-D structure of the flow; question the validity of computationally expedient assumptions of flow symmetry; and demonstrate the need for highly resolved, fully 3-D simulations if computational fluid dynamics is to accurately predict the flow in prosthetic mechanical heart valves. DOI: 10.1115/1.1614817
---
paper_title: Unsteady Effects on the Flow Across Tilting Disk Valves
paper_content:
The present study simulates numerically the flow across two-dimensional tilting disk models of mechanical heart valves. The time-dependent Navier-Stokes equations are solved to assess the importance of unsteady effects in the fully open position of the valve. Flow cases with steady or physiological inflow conditions and with fixed or moving valves are solved. The simulations lead to mixed conclusions. It is obvious that steady inflow cases that account for vortex shedding only cannot model realistic physiological cases. In cases with imposed physiological inflow, the details of the flow field for fixed and moving valves might differ in the fully open position as well, although the gross features are quite similar. The fixed valve case consistently results in safe estimations of several critical quantities such as the axial force, the maximal shear stress on the valve, or the transvalvular pressure drop. Thus, fixed valve simulations can provide useful information for the design of prosthetic heart valves, as long as only the properties in the fully open position are sought. DOI: 10.1115/1.1427696
---
paper_title: Numerical Simulations of Fluid-Structure Interaction Problems in Biological Flows
paper_content:
University of Minnesota Ph.D. dissertation. June, 2008. Major: Mechanical Engineering. Advisor: Fotis Sotiropoulos. 1 computer file (PDF); xvi, 273 pages.
---
paper_title: Two-Dimensional Dynamic Simulation of Platelet Activation During Mechanical Heart Valve Closure
paper_content:
A major drawback in the operation of mechanical heart valve prostheses is thrombus formation in the near valve region. Detailed flow analysis in this region during the valve closure phase is of interest in understanding the relationship between shear stress and platelet activation. A fixed-grid Cartesian mesh flow solver is used to simulate the blood flow through a bi-leaflet mechanical valve employing a two-dimensional geometry of the leaflet with a pivot point representing the hinge region. A local mesh refinement algorithm allows efficient and fast flow computations with mesh adaptation based on the gradients of the flow field in the leaflet-housing gap at the instant of valve closure. Leaflet motion is calculated dynamically based on the fluid forces acting on it employing a fluid-structure interaction algorithm. Platelets are modeled and tracked as point particles by a Lagrangian particle tracking method which incorporates the hemodynamic forces on the particles. A platelet activation model is included to predict regions which are prone to platelet activation. Closure time of the leaflet is validated against experimental studies. Results show that the orientation of the jet flow through the gap between the housing and the leaflet causes the boundary layer from the valve housing to be drawn in by the shear layer separating from the leaflet. The interaction between the separating shear layers is seen to cause a region of intensely rotating flow with high shear stress and high residence time of particles leading to high likelihood of platelet activation in that region.
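The Lagrangian tracking step in such simulations amounts to integrating particle positions through the computed velocity field. A minimal sketch with a second-order (midpoint) step is shown below, assuming a callable velocity field u(x, t); the hemodynamic force model on the platelets and the local mesh refinement of the paper are not reproduced:

import numpy as np

def advect_particle(x, t, dt, velocity):
    """Advance a passive point particle by one midpoint (RK2) step through a
    velocity field given as velocity(x, t) -> np.ndarray."""
    x = np.asarray(x, dtype=float)
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    return x + dt * k2

# Example: solid-body rotation; after one full revolution the particle returns near (1, 0)
rotation = lambda x, t: np.array([-x[1], x[0]])
p = np.array([1.0, 0.0])
dt = 2.0 * np.pi / 1000
for _ in range(1000):
    p = advect_particle(p, 0.0, dt, rotation)
print(p, np.linalg.norm(p))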
---
paper_title: Three-Dimensional Fluid-Structure Interaction Simulation of Bileaflet Mechanical Heart Valve Flow Dynamics
paper_content:
The wall shear stress induced by the leaflet motion during the valve-closing phase has been implicated with thrombus initiation with prosthetic valves. Detailed flow dynamic analysis in the vicinity of the leaflets and the housing during the valve-closure phase is of interest in understanding this relationship. A three-dimensional unsteady flow analysis past bileaflet valve prosthesis in the mitral position is presented incorporating a fluid-structure interaction algorithm for leaflet motion during the valve-closing phase. Arbitrary Lagrangian-Eulerian method is employed for incorporating the leaflet motion. The forces exerted by the fluid on the leaflets are computed and applied to the leaflet equation of motion to predict the leaflet position. Relatively large velocities are computed in the valve clearance region between the valve housing and the leaflet edge with the resulting relatively large wall shear stresses at the leaflet edge during the impact-rebound duration. Negative pressure transients are computed on the surface of the leaflets on the atrial side of the valve, with larger magnitudes at the leaflet edge during the closing and rebound as well. Vortical flow development is observed on the inflow (atrial) side during the valve impact-rebound phase in a location central to the leaflet and away from the clearance region where cavitation bubbles have been visualized in previously reported experimental studies.
---
paper_title: Numerical simulation of 3D fluid-structure interaction flow using an immersed object method with overlapping grids
paper_content:
The newly developed immersed object method (IOM) [Tai CH, Zhao Y, Liew KM. Parallel computation of unsteady incompressible viscous flows around moving rigid bodies using an immersed object method with overlapping grids. J Comput Phys 2005; 207(1): 151-72] is extended for 3D unsteady flow simulation with fluid-structure interaction (FSI), which is made possible by combining it with a parallel unstructured multigrid Navier-Stokes solver using a matrix-free implicit dual time stepping and finite volume method [Tai CH, Zhao Y, Liew KM. Parallel computation of unsteady three-dimensional incompressible viscous flow using an unstructured multigrid method. In: The second M.I.T. conference on computational fluid and solid mechanics, June 17-20, MIT, Cambridge, MA 02139, USA, 2003; Tai CH, Zhao Y, Liew KM. Parallel computation of unsteady three-dimensional incompressible viscous flow using an unstructured multigrid method, Special issue on ''Preconditioning methods: algorithms, applications and software environments. Comput Struct 2004; 82(28): 2425-36]. This uniquely combined method is then employed to perform detailed study of 3D unsteady flows with complex FSI. In the IOM, a body force term F is introduced into the momentum equations during the artificial compressibility (AC) sub-iterations so that a desired velocity distribution V"0 can be obtained on and within the object boundary, which needs not coincide with the grid, by adopting the direct forcing method. An object mesh is immersed into the flow domain to define the boundary of the object. The advantage of this is that bodies of almost arbitrary shapes can be added without grid restructuring, a procedure which is often time-consuming and computationally expensive. It has enabled us to perform complex and detailed 3D unsteady blood flow and blood-leaflets interaction in a mechanical heart valve (MHV) under physiological conditions.
---
paper_title: Vorticity dynamics of a bileaflet mechanical heart valve in an axisymmetric aorta
paper_content:
We present comprehensive particle image velocimetry measurements and direct numerical simulation (DNS) of physiological, pulsatile flow through a clinical quality bileaflet mechanical heart valve mounted in an idealized axisymmetric aorta geometry with a sudden expansion modeling the aortic sinus region. Instantaneous and ensemble-averaged velocity measurements as well as the associated statistics of leaflet kinematics are reported and analyzed in tandem to elucidate the structure of the velocity and vorticity fields of the ensuing flow-structure interaction. The measurements reveal that during the first half of the acceleration phase, the flow is laminar and repeatable from cycle to cycle. The valve housing shear layer rolls up into the sinus and begins to extract vorticity of opposite sign from the sinus wall. A start-up vortical structure is shed from the leaflets and is advected downstream as the leaflet shear layers become wavy and oscillatory. In the second half of flow acceleration the leaflet shea...
---
paper_title: Numerical simulation of the dynamics of a bileaflet prosthetic heart valve using a fluid-structure interaction approach
paper_content:
The main purpose of this study is to reproduce in silico the dynamics of a bileaflet mechanical heart valve (MHV; St Jude Hemodynamic Plus, 27 mm characteristic size) by means of a fully implicit fluid–structure interaction (FSI) method, and experimentally validate the results using an ultrafast cinematographic technique. The computational model was constructed to realistically reproduce the boundary condition (72 beats per minute (bpm), cardiac output 4.5 l/min) and the geometry of the experimental setup, including the valve housing and the hinge configuration. The simulation was carried out coupling a commercial computational fluid dynamics (CFD) package based on the finite-volume method with user-defined code for solving the structural domain, and exploiting the parallel performance of the whole numerical setup. Outputs are the leaflet excursion from opening to closure and the fluid dynamics through the valve. Results show a favorable comparison between the computed and the experimental data: the model captures the main features of the leaflet motion during systole. The use of parallel computing drastically limited the computational costs, showing linear scaling on 16 processors (despite the massive use of user-defined subroutines to manage the FSI process). The favorable agreement obtained between in vitro and in silico results of the leaflet displacements confirms the consistency of the numerical method used, and supports the application of FSI models as a major tool to optimize MHV design and eventually provide useful information to surgeons.
---
paper_title: Collagen fibers reduce stresses and stabilize motion of aortic valve leaflets during systole.
paper_content:
The effect of collagen fibers on the mechanics and hemodynamics of a trileaflet aortic valve contained in a rigid aortic root is investigated in a numerical analysis of the systolic phase. Collagen fibers are known to reduce stresses in the leaflets during diastole, but their role during systole has not been investigated in detail yet. It is demonstrated that also during systole these fibers substantially reduce stresses in the leaflets and provide smoother opening and closing. Compared to isotropic leaflets, collagen reinforcement reduces the fluttering motion of the leaflets. Due to the exponential stress-strain behavior of collagen, the fibers have little influence on the initial phase of the valve opening, which occurs at low strains, and therefore have little impact on the transvalvular pressure drop.
---
paper_title: Curvilinear immersed boundary method for simulating fluid structure interaction with complex 3D rigid bodies
paper_content:
The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782-1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken's acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the FSI algorithm. The stabilizing role of under-relaxation is also clarified and an upper bound of the required for stability under-relaxation coefficient is derived.
---
paper_title: Two-dimensional fluid-structure interaction simulation of bileaflet mechanical heart valve flow dynamics.
paper_content:
BACKGROUND AND AIM OF THE STUDY: Mechanical heart valve implantation requires long-term anticoagulation because of thromboembolic complications. Recent studies have indicated that the relatively high wall shear stresses and negative pressure transients developed during the valve closing phase may be dominant factors inducing thrombus initiation. The study aim was a two-dimensional (2D) functional simulation of flow past bileaflet heart valve prosthesis during the closing phase, incorporating the fluid-structure interaction analysis to induce motion of the leaflets. METHODS: The fluid-structure interaction model used was based on unsteady 2D Navier-Stokes equations with the arbitrary Lagrangian-Eulerian method for moving boundaries, coupled with the dynamic equation for leaflet motion. Parametric analysis of the effect of valve size, leaflet density, and the coefficient of resilience at the instant of impact of the leaflet with the housing were also performed. RESULTS: Comparing the predicted motion of the leaflet with previous experimental results validated the simulation. The results showed the presence of negative pressure transients near the inflow side of the leaflet at the instant of valve closure, and the negative pressure transients were augmented during the leaflet rebound process. Relatively high velocities and wall shear stresses, detrimental to the formed elements in blood were present in the clearance region between the leaflet and valve housing at the instant of valve closure. CONCLUSION: The simulation can be potentially applied to analyze the effects of valve geometry and dimensions, and the effect of leaflet material on the flow dynamics past the valve prosthesis during the opening and closing phases for design improvements in minimizing problems associated with thromboembolic complications.
---
paper_title: High-Resolution Fluid–Structure Interaction Simulations of Flow Through a Bi-Leaflet Mechanical Heart Valve in an Anatomic Aorta
paper_content:
We have performed high-resolution fluid-structure interaction simulations of physiologic pulsatile flow through a bi-leaflet mechanical heart valve (BMHV) in an anatomically realistic aorta. The results are compared with numerical simulations of the flow through an identical BMHV implanted in a straight aorta. The comparisons show that although some of the salient features of the flow remain the same, the aorta geometry can have a major effect on both the flow patterns and the motion of the valve leaflets. For the studied configuration, for instance, the BMHV leaflets in the anatomic aorta open much faster and undergo a greater rebound during closing than the same valve in the straight axisymmetric aorta. Even though the characteristic triple-jet structure does emerge downstream of the leaflets for both cases, for the anatomic case the leaflet jets spread laterally and diffuse much faster than in the straight aorta due to the aortic curvature and complex shape of the anatomic sinus. Consequently the leaflet shear layers in the anatomic case remain laminar and organized for a larger portion of the accelerating phase as compared to the shear layers in the straight aorta, which begin to undergo laminar instabilities well before peak systole is reached. For both cases, however, the flow undergoes a very similar explosive transition to the small-scale, turbulent-like state just prior to reaching peak systole. The local maximum shear stress is used as a metric to characterize the mechanical environment experienced by blood cells. Pockets of high local maximum shear are found to be significantly more widespread in the anatomic aorta than in the straight aorta throughout the cardiac cycle. Pockets of high local maximum shear were located near the leaflets and in the aortic arc region. This work clearly demonstrates the importance of the aortic geometry on the flow phenomena in a BMHV and demonstrates the potential of our computational method to carry out image-based patient-specific simulations for clinically relevant studies of heart valve hemodynamics.
---
paper_title: Flow-driven opening of a valvular leaflet
paper_content:
The understanding of valvular opening is a central issue in cardiac flows, whose analysis is often prohibited by the unavailability of (in vivo) data about tissue properties. Asymptotic or approximate representations of fluid–structure interaction are thus sought. The dynamics of an accelerated stream, in a two-dimensional channel initially closed by a rigid inertialess movable leaflet, is studied as a simple model problem aimed at demonstrating the main phenomena contributing to the fluid–structure interaction. The problem is solved by the coupled numerical solution of equations for the flow and solid. The results show that the leaflet initially opens in a no-shedding regime, driven by fluid mass conservation and a predictable dynamics. Then the leaflet motion jumps, after the saturation of a very rapid intermediate vortex-shedding phase, to the asymptotic slower regime with a stable self-similar wake structure.
---
paper_title: A numerical method for solving the 3D unsteady incompressible Navier-Stokes equations in curvilinear domains with complex immersed boundaries
paper_content:
A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
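The fractional-step (projection) time integration mentioned above can be sketched in its simplest possible setting, a doubly periodic 2-D domain where the pressure Poisson equation is solved spectrally. This Python/NumPy sketch is only a schematic of the predictor/projection split; the paper's solver works on curvilinear staggered grids with Newton-Krylov and multigrid-preconditioned GMRES and handles boundaries and immersed bodies, none of which appear here.

```python
import numpy as np

# Schematic fractional-step (projection) update for incompressible flow on a
# doubly periodic 2-D grid; an FFT Poisson solve stands in for an iterative
# pressure solver, and boundaries/immersed bodies are omitted entirely.

n, L, dt, nu = 64, 2.0 * np.pi, 1.0e-3, 1.0e-2
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
u = np.sin(X) * np.cos(Y)                      # divergence-free initial field
v = -np.cos(X) * np.sin(Y)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX ** 2 + KY ** 2
K2_safe = K2.copy()
K2_safe[0, 0] = 1.0                            # avoid 0/0 for the mean pressure mode

def ddx(f, kdir):                              # spectral derivative along one direction
    return np.real(np.fft.ifft2(1j * kdir * np.fft.fft2(f)))

def step(u, v):
    # 1) predictor: explicit advection + diffusion, without the pressure gradient
    u_star = u + dt * (-(u * ddx(u, KX) + v * ddx(u, KY))
                       + nu * np.real(np.fft.ifft2(-K2 * np.fft.fft2(u))))
    v_star = v + dt * (-(u * ddx(v, KX) + v * ddx(v, KY))
                       + nu * np.real(np.fft.ifft2(-K2 * np.fft.fft2(v))))
    # 2) pressure Poisson equation:  lap(p) = div(u*) / dt
    div = ddx(u_star, KX) + ddx(v_star, KY)
    p_hat = -np.fft.fft2(div / dt) / K2_safe
    p_hat[0, 0] = 0.0
    p = np.real(np.fft.ifft2(p_hat))
    # 3) corrector: subtract grad(p) to project onto divergence-free fields
    return u_star - dt * ddx(p, KX), v_star - dt * ddx(p, KY)

for _ in range(100):
    u, v = step(u, v)
print('max |div u| after projection:', np.abs(ddx(u, KX) + ddx(v, KY)).max())
```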
---
paper_title: Flow in a Mechanical Bileaflet Heart Valve at Laminar and Near-Peak Systole Flow Rates: CFD Simulations and Experiments
paper_content:
Time-accurate, fully 3D numerical simulations and particle image velocity laboratory experiments are carried out for flow through a fully open bileaflet mechanical heart valve under steady (nonpulsatile) inflow conditions. Flows at two different Reynolds numbers, one in the laminar regime and the other turbulent (near-peak systole flow rate), are investigated. A direct numerical simulation is carried out for the laminar flow case while the turbulent flow is investigated with two different unsteady statistical turbulence modeling approaches, unsteady Reynolds-averaged Navier-Stokes (URANS) and detached-eddy simulation (DES) approach. For both the laminar and turbulent cases the computed mean velocity profiles are in good overall agreement with the measurements. For the turbulent simulations, however, the comparisons with the measurements demonstrate clearly the superiority of the DES approach and underscore its potential as a powerful modeling tool of cardiovascular flows at physiological conditions. The study reveals numerous previously unknown features of the flow.
---
paper_title: An Analysis of Turbulent Shear Stresses in Leakage Flow Through a Bileaflet Mechanical Prostheses
paper_content:
In this work, estimates of turbulence were made from pulsatile flow laser Doppler velocimetry measurements using traditional phase averaging and averaging after the removal of cyclic variation. These estimates were compared with estimates obtained from steady leakage flow LDV measurements and an analytical method. The results of these studies indicate that leakage jets which are free and planar in shape may be more unstable than other leakage jets, and that cyclic variation does not cause a gross overestimation of the Reynolds stresses at large distances from the leakage jet orifice.
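The phase-averaging estimate referred to above amounts to subtracting the phase-averaged mean from each cycle's samples and correlating the residuals. A toy Python/NumPy version on synthetic data is sketched below; the flow values are invented and the cyclic-variation question examined in the paper is deliberately ignored.

```python
import numpy as np

# Sketch of phase-averaged Reynolds stress estimation from cycle-resolved
# velocimetry: u'(phase) is each cycle's deviation from the phase-averaged
# mean, and the Reynolds shear stress is -rho * <u'v'> at each phase.

rng = np.random.default_rng(0)
n_cycles, n_phases, rho = 200, 100, 1060.0           # blood density [kg/m^3]
phase = np.linspace(0.0, 2 * np.pi, n_phases, endpoint=False)

# synthetic periodic mean flow plus uncorrelated turbulence-like fluctuations
u = np.sin(phase) + 0.10 * rng.standard_normal((n_cycles, n_phases))
v = 0.2 * np.cos(phase) + 0.05 * rng.standard_normal((n_cycles, n_phases))

u_mean = u.mean(axis=0)                              # phase-averaged <u>(phase)
v_mean = v.mean(axis=0)
u_fluct = u - u_mean                                 # fluctuations about the phase average
v_fluct = v - v_mean

reynolds_shear = -rho * (u_fluct * v_fluct).mean(axis=0)   # -rho<u'v'> per phase
print('peak |Reynolds shear stress| [Pa]:', np.abs(reynolds_shear).max())
```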
---
paper_title: A three-dimensional computational analysis of fluid–structure interaction in the aortic valve
paper_content:
Abstract Numerical analysis of the aortic valve has mainly been focused on the closing behaviour during the diastolic phase rather than the kinematic opening and closing behaviour during the systolic phase of the cardiac cycle. Moreover, the fluid–structure interaction in the aortic valve system is most frequently ignored in numerical modelling. The effect of this interaction on the valve's behaviour during systolic functioning is investigated. The large differences in material properties of fluid and structure and the finite motion of the leaflets complicate blood–valve interaction modelling. This has impeded numerical analyses of valves operating under physiological conditions. A numerical method, known as the Lagrange multiplier based fictitious domain method, is used to describe the large leaflet motion within the computational fluid domain. This method is applied to a three-dimensional finite element model of a stented aortic valve. The model provides both the mechanical behaviour of the valve and the blood flow through it. Results show that during systole the leaflets of the stented valve appear to be moving with the fluid in an essentially kinematical process governed by the fluid motion.
---
paper_title: A computational fluid-structure interaction analysis of a fiber-reinforced stentless aortic valve.
paper_content:
The importance of the aortic root compliance in the aortic valve performance has most frequently been ignored in computational valve modeling, although it has a significant contribution to the functionality of the valve. Aortic root aneurysm or (calcific) stiffening severely affects the aortic valve behavior and, consequently, the cardiovascular regulation. The compromised mechanical and hemodynamical performance of the valve are difficult to study both 'in vivo' and 'in vitro'. Computational analysis of the valve enables a study on system responses that are difficult to obtain otherwise. In this paper a numerical model of a fiber-reinforced stentless aortic valve is presented. In the computational evaluation of its clinical functioning the interaction of the valve with the blood is essential. Hence, the blood-tissue interaction is incorporated in the model using a combined fictitious domain/arbitrary Lagrange-Euler formulation, which is integrated within the Galerkin finite element method. The model can serve as a diagnostic tool for clinical purposes and as a design tool for improving existing valve prostheses or developing new concepts. Structural mechanical and fluid dynamical aspects are analyzed during the systolic course of the cardiac cycle. Results show that aortic root compliance largely influences the valve opening and closing configurations. Stresses in the delicate parts of the leaflets are substantially reduced if fiber-reinforcement is applied and the aortic root is able to expand.
---
paper_title: Vortex Shedding as a Mechanism for Free Emboli Formation in Mechanical Heart Valves
paper_content:
The high incidence of thromboembolic complications of mechanical heart valves (MHV) limits their success as permanent implants. The thrombogenicity of all MHV is primarily due to platelet activation by contact with foreign surfaces and by nonphysiological flow patterns. The latter include elevated flow stresses and regions of recirculation of blood that are induced by valve design characteristics. A numerical simulation of unsteady turbulent flow through a bileaflet MHV was conducted, using the Wilcox k-omega turbulence model for internal low-Reynolds-number flows, and compared to quantitative flow visualization performed in a pulse duplicator system using Digital Particle Image Velocimetry (DPIV). The wake of the valve leaflet during the deceleration phase revealed an intricate pattern of interacting shed vortices. Particle paths showed that platelets that were exposed to the highest flow stresses around the leaflets were entrapped within the shed vortices. Potentially activated, such platelets may tend to aggregate and form free emboli. Once formed, such free emboli would be convected downstream by the shed vortices, increasing the risk of systemic emboli.
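The particle-path analysis can be mimicked on any prescribed velocity field by integrating dx/dt = u(x, t). The sketch below (Python, assuming NumPy and SciPy) uses an invented travelling-vortex field as a stand-in for the computed leaflet wake; it only illustrates the pathline-integration step, not the platelet activation model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pathline integration through an analytically prescribed unsteady vortex
# pattern (a stand-in for the shed leaflet wake) to follow seeded particles.

def velocity(t, xy):
    x, y = xy
    # travelling array of counter-rotating vortices superposed on a mean flow
    u = 1.0 + 0.5 * np.sin(2 * np.pi * (x - 0.8 * t)) * np.cos(2 * np.pi * y)
    v = -0.5 * np.cos(2 * np.pi * (x - 0.8 * t)) * np.sin(2 * np.pi * y)
    return [u, v]

seeds = [(0.0, 0.05 * k) for k in range(-3, 4)]   # particles released near the leaflet tip
for seed in seeds:
    sol = solve_ivp(velocity, (0.0, 3.0), seed, max_step=0.01)
    print(f'seed {seed} -> final position ({sol.y[0, -1]:.2f}, {sol.y[1, -1]:.2f})')
```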
---
paper_title: Asymmetric opening of a simple bileaflet valve.
paper_content:
The opening motion of a bileaflet valve driven by a pulsed stream is studied in the limiting condition of rigid massless leaflets closing a two-dimensional channel. This simple arrangement allows us to write simple dynamical equations for the solid that are solved numerically with the Navier-Stokes equation for the fluid. The analysis is focused on the influence of asymmetry on the coupled fluid-valve dynamics when parameters are taken with references to cardiac valves. Results show that the wake generation from the leaflet's trailing edge can be partly inhibited; the primary vortex downstream does not occur in an intermediate range of asymmetry. The potential emergence of such a phenomenon in realistic cases would present implications in development of diagnostic schemes.
---
paper_title: Numerical Analysis of Three-Dimensional Björk–Shiley Valvular Flow in an Aorta
paper_content:
Laminar vortical flow around a fully opened Bjork-Shiley valve in an aorta is obtained by solving the three-dimensional incompressible Navier-Stokes equations. Used is a non iterative implicit finite-element Navier-Stokes code developed by the authors, which makes use of the well-known finite difference algorithm PISO. The code utilizes segregated formulation and efficient iterative matrix solvers such as PCGS and ICCG. Computational results show that the three-dimensional vortical flow is recirculating with large shear in the sinus region of the valve chamber. Passing through the valve, the flow is split into major upper and lower jet flows. The spiral vortices generated by the disk are advected in the wake and attenuated rapidly downstream by diffusion. It is shown also that the shear stress becomes maximum near the leading edge of the disk valve.
---
paper_title: Fluid mechanics of heart valves.
paper_content:
Valvular heart disease is a life-threatening disease that afflicts millions of people worldwide and leads to approximately 250,000 valve repairs and/or replacements each year. Malfunction of a native valve impairs its efficient fluid mechanic/hemodynamic performance. Artificial heart valves have been used since 1960 to replace diseased native valves and have saved millions of lives. Unfortunately, despite four decades of use, these devices are less than ideal and lead to many complications. Many of these complications/problems are directly related to the fluid mechanics associated with the various mechanical and bioprosthetic valve designs. This review focuses on the state-of-the-art experimental and computational fluid mechanics of native and prosthetic heart valves in current clinical use. The fluid dynamic performance characteristics of caged-ball, tilting-disc, bileaflet mechanical valves and porcine and pericardial stented and nonstented bioprostheic valves are reviewed. Other issues related to heart valve performance, such as biomaterials, solid mechanics, tissue mechanics, and durability, are not addressed in this review.
---
paper_title: A two-dimensional fluid–structure interaction model of the aortic valve
paper_content:
Failure of synthetic heart valves is usually caused by tearing and calcification of the leaflets. Leaflet fiber-reinforcement increases the durability of these valves by unloading the delicate parts of the leaflets, maintaining their physiological functioning. The interaction of the valve with the surrounding fluid is essential when analyzing its functioning. However, the large differences in material properties of fluid and structure and the finite motion of the leaflets complicate blood–valve interaction modeling. This has, so far, obstructed numerical analyses of valves operating under physiological conditions. A two-dimensional fluid–structure interaction model is presented, which allows the Reynolds number to be within the physiological range, using a fictitious domain method based on Lagrange multipliers to couple the two phases. The extension to the three-dimensional case is straightforward. The model has been validated experimentally using laser Doppler anemometry for measuring the fluid flow and digitized high-speed video recordings to visualize the leaflet motion in corresponding geometries. Results show that both the fluid and leaflet behaviour are well predicted for different leaflet thicknesses.
---
paper_title: Computational Approach for Probing the Flow Through Artificial Heart Devices
paper_content:
Computational fluid dynamics (CFD) has become an indispensable part of aerospace research and design. The solution procedure for incompressible Navier-Stokes equations can be used for biofluid mechanics research. The computational approach provides detailed knowledge of the flowfield complementary to that obtained by experimental measurements. This paper illustrates the extension of CFD techniques to artificial heart flow simulation. Unsteady incompressible Navier-Stokes equations written in three-dimensional generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudocompressibility approach. It uses an implicit upwind-differencing scheme together with the Gauss-Seidel line-relaxation method. The efficiency and robustness of the time-accurate formulation of the numerical algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated by experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapped grid embedding scheme are employed, respectively. Steady-state solutions for the flow through a tilting-disk heart valve are compared with experimental measurements. Good agreement is obtained. Aided by experimental data, the flow through an entire Penn State artificial heart model is computed.
---
paper_title: Characterization of Hemodynamic Forces Induced by Mechanical Heart Valves: Reynolds vs. Viscous Stresses
paper_content:
Bileaflet mechanical heart valves (BMHV) are widely used to replace diseased heart valves. Implantation of BMHV, however, has been linked with major complications, which are generally considered to be caused by mechanically induced damage of blood cells resulting from the non-physiological hemodynamics environment induced by BMHV, including regions of recirculating flow and elevated Reynolds (turbulence) shear stress levels. In this article, we analyze the results of 2D high-resolution velocity measurements and full 3D numerical simulation for pulsatile flow through a BMHV mounted in a model axisymmetric aorta to investigate the mechanical environment experienced by blood elements under physiologic conditions. We show that the so-called Reynolds shear stresses neither directly contribute to the mechanical load on blood cells nor is a proper measurement of the mechanical load experienced by blood cells. We also show that the overall levels of the viscous stresses, which comprise the actual flow environment experienced by cells, are apparently too low to induce damage to red blood cells, but could potentially damage platelets. The maximum instantaneous viscous shear stress observed throughout a cardiac cycle is <15 N/m(2). Our analysis is restricted to the flow downstream of the valve leaflets and thus does not address other areas within the BMHV where potentially hemodynamically hazardous levels of viscous stresses could still occur (such as in the hinge gaps and leakage jets).
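At a single point, a "maximum shear stress" metric of this kind reduces to half the spread of the principal values of the viscous stress tensor built from the local velocity gradient. A small 2-D helper in Python (assuming NumPy) makes the computation concrete; whether this matches the exact metric used in the paper is not claimed.

```python
import numpy as np

# Local maximum shear stress from a 2-D velocity gradient: form the viscous
# stress tensor tau = 2*mu*S (S the strain-rate tensor) and take half the
# difference of its principal values, i.e. the largest shear over all plane
# orientations at that point.

mu = 3.5e-3                        # dynamic viscosity of blood [Pa s]

def max_shear_stress(dudx, dudy, dvdx, dvdy):
    S = 0.5 * np.array([[2 * dudx, dudy + dvdx],
                        [dudy + dvdx, 2 * dvdy]])
    tau = 2.0 * mu * S             # viscous stress tensor
    eig = np.linalg.eigvalsh(tau)  # principal stresses, ascending
    return 0.5 * (eig[-1] - eig[0])

# a simple shear layer with rate 1000 1/s gives tau_max = mu * rate ~ 3.5 Pa
print(max_shear_stress(0.0, 1000.0, 0.0, 0.0))
```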
---
paper_title: Numerical Simulations of Fluid-Structure Interaction Problems in Biological Flows
paper_content:
University of Minnesota Ph.D. dissertation. June, 2008. Major: Mechanical Engineering. Advisor: Fotis Sotiropoulos. 1 computer file (PDF); xvi, 273 pages.
---
paper_title: High-Resolution Fluid–Structure Interaction Simulations of Flow Through a Bi-Leaflet Mechanical Heart Valve in an Anatomic Aorta
paper_content:
We have performed high-resolution fluid-structure interaction simulations of physiologic pulsatile flow through a bi-leaflet mechanical heart valve (BMHV) in an anatomically realistic aorta. The results are compared with numerical simulations of the flow through an identical BMHV implanted in a straight aorta. The comparisons show that although some of the salient features of the flow remain the same, the aorta geometry can have a major effect on both the flow patterns and the motion of the valve leaflets. For the studied configuration, for instance, the BMHV leaflets in the anatomic aorta open much faster and undergo a greater rebound during closing than the same valve in the straight axisymmetric aorta. Even though the characteristic triple-jet structure does emerge downstream of the leaflets for both cases, for the anatomic case the leaflet jets spread laterally and diffuse much faster than in the straight aorta due to the aortic curvature and complex shape of the anatomic sinus. Consequently the leaflet shear layers in the anatomic case remain laminar and organized for a larger portion of the accelerating phase as compared to the shear layers in the straight aorta, which begin to undergo laminar instabilities well before peak systole is reached. For both cases, however, the flow undergoes a very similar explosive transition to the small-scale, turbulent-like state just prior to reaching peak systole. The local maximum shear stress is used as a metric to characterize the mechanical environment experienced by blood cells. Pockets of high local maximum shear are found to be significantly more widespread in the anatomic aorta than in the straight aorta throughout the cardiac cycle. Pockets of high local maximum shear were located near the leaflets and in the aortic arc region. This work clearly demonstrates the importance of the aortic geometry on the flow phenomena in a BMHV and demonstrates the potential of our computational method to carry out image-based patient-specific simulations for clinically relevant studies of heart valve hemodynamics.
---
| Title: A Review of State-of-the-Art Numerical Methods for Simulating Flow through Mechanical Heart Valves
Section 1: Introduction
Description 1: Introduce the need for heart valve surgeries, complications with bileaflet mechanical heart valves (BMHV), and the significance of numerical simulations in understanding BMHV hemodynamics.
Section 2: Governing Equations and Numerical Methods
Description 2: Describe the fluid dynamics equations governing BMHV flows and review numerical methods for handling fluid-structure interaction problems.
Section 3: Leaflet Equations
Description 3: Discuss the equations governing the motion of BMHV leaflets and their numerical implementation.
Section 4: Boundary Conditions and Coupling for FSI Problems
Description 4: Detail the boundary conditions and methodologies for coupling fluid and structure dynamics in FSI problems involving BMHV.
Section 5: Stability of the FSI Coupling for MHV Simulations
Description 5: Analyze the stability and computational considerations for different FSI coupling methods used in BMHV simulations.
Section 6: Recent Simulations and Insights into the Flow Physics
Description 6: Summarize recent simulation studies, their findings, and insights gained into the flow physics of BMHV.
Section 7: Future Outlook
Description 7: Provide a future perspective on the challenges and potential developments in numerical simulation methods for BMHV. |
The minimal measurement number problem in phase retrieval: a review of recent developments | 5 | ---
paper_title: Generalized phase retrieval : measurement number, matrix recovery and beyond
paper_content:
Abstract In this paper, we develop a framework of generalized phase retrieval in which one aims to reconstruct a vector $x$ in $\mathbb{R}^d$ or $\mathbb{C}^d$ through quadratic samples $x^*A_1x, \ldots, x^*A_Nx$. The generalized phase retrieval includes as special cases the standard phase retrieval as well as the phase retrieval by orthogonal projections. We first explore the connections among generalized phase retrieval, low-rank matrix recovery and nonsingular bilinear form. Motivated by the connections, we present results on the minimal measurement number needed for recovering a matrix that lies in a set $W \subseteq \mathbb{C}^{d \times d}$. Applying the results to phase retrieval, we show that generic $d \times d$ matrices $A_1, \ldots, A_N$ have the phase retrieval property if $N \ge 2d-1$ in the real case and $N \ge 4d-4$ in the complex case for very general classes of $A_1, \ldots, A_N$, e.g. matrices with prescribed ranks or orthogonal projections. We also give lower bounds on the minimal measurement number required for generalized phase retrieval. For several classes of dimensions $d$ we obtain the precise values of the minimal measurement number. Our work unifies and enhances results from the standard phase retrieval, phase retrieval by projections and low-rank matrix recovery.
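A small numerical illustration of the quadratic sampling model $y_j = x^*A_jx$ and of the unavoidable global-phase ambiguity is given below (Python, assuming NumPy). The random Hermitian matrices and the choice $N = 4d-4$ are purely illustrative; the snippet does not attempt recovery, it only shows why recovery can be defined at best up to a unimodular factor.

```python
import numpy as np

# Generalized phase retrieval measurement model: y_j = x* A_j x with Hermitian
# A_j. The demo checks that x and e^{i theta} x give identical samples.

rng = np.random.default_rng(1)
d = 5
N = 4 * d - 4                                          # illustrative measurement count

A = rng.standard_normal((N, d, d)) + 1j * rng.standard_normal((N, d, d))
A = 0.5 * (A + np.conj(np.transpose(A, (0, 2, 1))))    # make each A_j Hermitian

def measure(x):
    # x* A_j x is real for Hermitian A_j
    return np.real(np.einsum('i,nij,j->n', np.conj(x), A, x))

x = rng.standard_normal(d) + 1j * rng.standard_normal(d)
y1 = measure(x)
y2 = measure(np.exp(1j * 0.7) * x)                     # same vector up to a global phase
print('max |y1 - y2| =', np.abs(y1 - y2).max())        # ~1e-15: indistinguishable
```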
---
paper_title: Saving phase: Injectivity and stability for phase retrieval
paper_content:
Recent advances in convex optimization have led to new strides in the phase retrieval problem over finite-dimensional vector spaces. However, certain fundamental questions remain: What sorts of measurement vectors uniquely determine every signal up to a global phase factor, and how many are needed to do so? Furthermore, which measurement ensembles lend stability? This paper presents several results that address each of these questions. We begin by characterizing injectivity, and we identify that the complement property is indeed a necessary condition in the complex case. We then pose a conjecture that 4M-4 generic measurement vectors are both necessary and sufficient for injectivity in M dimensions, and we prove this conjecture in the special cases where M=2,3. Next, we shift our attention to stability, both in the worst and average cases. Here, we characterize worst-case stability in the real case by introducing a numerical version of the complement property. This new property bears some resemblance to the restricted isometry property of compressed sensing and can be used to derive a sharp lower Lipschitz bound on the intensity measurement mapping. Localized frames are shown to lack this property (suggesting instability), whereas Gaussian random measurements are shown to satisfy this property with high probability. We conclude by presenting results that use a stochastic noise model in both the real and complex cases, and we leverage Cramer-Rao lower bounds to identify stability with stronger versions of the injectivity characterizations.
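In the real case, injectivity is equivalent to the complement property: for every index subset, either it or its complement spans $\mathbb{R}^M$. For small frames this can be checked by brute force, as in the Python/NumPy sketch below, which also illustrates why $2M-1$ generic vectors suffice while $2M-2$ cannot.

```python
import itertools
import numpy as np

# Brute-force complement property check for a small real frame: for every
# index subset S, either {phi_i : i in S} or its complement must span R^M.

def has_complement_property(Phi):
    N, M = Phi.shape                        # rows are the frame vectors
    for r in range(N + 1):
        for S in itertools.combinations(range(N), r):
            comp = [i for i in range(N) if i not in S]
            spans_S = bool(S) and np.linalg.matrix_rank(Phi[list(S)]) == M
            spans_C = bool(comp) and np.linalg.matrix_rank(Phi[comp]) == M
            if not (spans_S or spans_C):
                return False
    return True

rng = np.random.default_rng(0)
M = 3
print('N = 2M - 1 = 5 generic vectors:',
      has_complement_property(rng.standard_normal((2 * M - 1, M))))
print('N = 2M - 2 = 4 generic vectors:',
      has_complement_property(rng.standard_normal((2 * M - 2, M))))
```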
---
paper_title: Generalized phase retrieval : measurement number, matrix recovery and beyond
paper_content:
Abstract In this paper, we develop a framework of generalized phase retrieval in which one aims to reconstruct a vector $x$ in $\mathbb{R}^d$ or $\mathbb{C}^d$ through quadratic samples $x^*A_1x, \ldots, x^*A_Nx$. The generalized phase retrieval includes as special cases the standard phase retrieval as well as the phase retrieval by orthogonal projections. We first explore the connections among generalized phase retrieval, low-rank matrix recovery and nonsingular bilinear form. Motivated by the connections, we present results on the minimal measurement number needed for recovering a matrix that lies in a set $W \subseteq \mathbb{C}^{d \times d}$. Applying the results to phase retrieval, we show that generic $d \times d$ matrices $A_1, \ldots, A_N$ have the phase retrieval property if $N \ge 2d-1$ in the real case and $N \ge 4d-4$ in the complex case for very general classes of $A_1, \ldots, A_N$, e.g. matrices with prescribed ranks or orthogonal projections. We also give lower bounds on the minimal measurement number required for generalized phase retrieval. For several classes of dimensions $d$ we obtain the precise values of the minimal measurement number. Our work unifies and enhances results from the standard phase retrieval, phase retrieval by projections and low-rank matrix recovery.
---
paper_title: Phase retrieval and norm retrieval
paper_content:
Phase retrieval has become a very active area of research. We will classify when phase retrieval by Parseval frames passes to the Naimark complement and when phase retrieval by projections passes to the orthogonal complements. We introduce a new concept we call norm retrieval and show that this is what is necessary for passing phase retrieval to complements. This leads to a detailed study of norm retrieval and its relationship to phase retrieval. One fundamental result: a frame $\{\varphi_i\}_{i=1}^M$ yields phase retrieval if and only if $\{T\varphi_i\}_{i=1}^M$ yields norm retrieval for every invertible operator $T$.
---
paper_title: An algebraic characterization of injectivity in phase retrieval
paper_content:
Abstract A complex frame is a collection of vectors that span $\mathbb{C}^M$ and define measurements, called intensity measurements, on vectors in $\mathbb{C}^M$. In purely mathematical terms, the problem of phase retrieval is to recover a complex vector from its intensity measurements, namely the modulus of its inner product with these frame vectors. We show that any vector is uniquely determined (up to a global phase factor) from $4M-4$ generic measurements. To prove this, we identify the set of frames defining non-injective measurements with the projection of a real variety and bound its dimension.
---
paper_title: Phase Retrieval By Projections
paper_content:
The problem of recovering a vector from the absolute values of its inner products against a family of measurement vectors has been well studied in mathematics and engineering. A generalization of this phase retrieval problem also exists in engineering: recovering a vector from measurements consisting of norms of its orthogonal projections onto a family of subspaces. There exist semidefinite programming algorithms to solve this problem, but much remains unknown for this more general case. Can families of subspaces for which such measurements are injective be completely classified? What is the minimal number of subspaces required to have injectivity? How closely does this problem compare to the usual phase retrieval problem with families of measurement vectors? In this paper, we answer or make incremental steps toward these questions. We provide several characterizations of subspaces which yield injective measurements, and through a concrete construction, we prove the surprising result that phase retrieval can be achieved with $2M-1$ projections of arbitrary rank in $\mathbb{H}_M$. Finally we present several open problems as we discuss issues unique to the phase retrieval problem with subspaces.
---
paper_title: Saving phase: Injectivity and stability for phase retrieval
paper_content:
Recent advances in convex optimization have led to new strides in the phase retrieval problem over finite-dimensional vector spaces. However, certain fundamental questions remain: What sorts of measurement vectors uniquely determine every signal up to a global phase factor, and how many are needed to do so? Furthermore, which measurement ensembles lend stability? This paper presents several results that address each of these questions. We begin by characterizing injectivity, and we identify that the complement property is indeed a necessary condition in the complex case. We then pose a conjecture that 4M-4 generic measurement vectors are both necessary and sufficient for injectivity in M dimensions, and we prove this conjecture in the special cases where M=2,3. Next, we shift our attention to stability, both in the worst and average cases. Here, we characterize worst-case stability in the real case by introducing a numerical version of the complement property. This new property bears some resemblance to the restricted isometry property of compressed sensing and can be used to derive a sharp lower Lipschitz bound on the intensity measurement mapping. Localized frames are shown to lack this property (suggesting instability), whereas Gaussian random measurements are shown to satisfy this property with high probability. We conclude by presenting results that use a stochastic noise model in both the real and complex cases, and we leverage Cramer-Rao lower bounds to identify stability with stronger versions of the injectivity characterizations.
---
paper_title: Stable signal recovery from incomplete and inaccurate measurements
paper_content:
Suppose we wish to recover an $m$-dimensional real-valued vector $x_0$ (e.g. a digital signal or image) from incomplete and contaminated observations $y = Ax_0 + e$; $A$ is an $n \times m$ matrix with far fewer rows than columns ($n \ll m$) and $e$ is an error term. Is it possible to recover $x_0$ accurately based on the data $y$? To recover $x_0$, we consider the solution $x^*$ to the $\ell_1$-regularization problem $\min \|x\|_1$ subject to $\|Ax - y\|_2 \le \epsilon$, where $\epsilon$ is the size of the error term $e$. We show that if $A$ obeys a uniform uncertainty principle (with unit-normed columns) and if the vector $x_0$ is sufficiently sparse, then the solution is within the noise level, $\|x^* - x_0\|_2 \le C\epsilon$. As a first example, suppose that $A$ is a Gaussian random matrix; then stable recovery occurs for almost all such $A$'s provided that the number of nonzeros of $x_0$ is of about the same order as the number of observations. Second, suppose one observes few Fourier samples of $x_0$; then stable recovery occurs for almost any set of $p$ coefficients provided that the number of nonzeros is of the order of $n/[\log m]^6$. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights on the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
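The noise-aware decoder analysed above, minimize $\|x\|_1$ subject to $\|Ax-y\|_2 \le \epsilon$, can be prototyped directly with an off-the-shelf convex solver. The sketch below assumes CVXPY and NumPy are available; the problem sizes and noise level are illustrative only.

```python
import cvxpy as cp
import numpy as np

# l1 recovery from noisy incomplete measurements:
#   minimize ||x||_1  subject to  ||A x - y||_2 <= eps

rng = np.random.default_rng(0)
n, m, k, eps = 60, 200, 5, 1e-2                      # illustrative sizes

A = rng.standard_normal((n, m)) / np.sqrt(n)         # Gaussian sensing matrix
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
noise = (eps / 2) * rng.standard_normal(n) / np.sqrt(n)   # ||noise||_2 <= eps
y = A @ x0 + noise

x = cp.Variable(m)
problem = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm(A @ x - y, 2) <= eps])
problem.solve()
print('relative recovery error:', np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```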
---
paper_title: A strong restricted isometry property, with an application to phaseless compressed sensing
paper_content:
The many variants of the restricted isometry property (RIP) have proven to be crucial theoretical tools in the fields of compressed sensing and matrix completion. The study of extending compressed sensing to accommodate phaseless measurements naturally motivates a strong notion of restricted isometry property (SRIP), which we develop in this paper. We show that if $A \in \mathbb{R}^{m\times n}$ satisfies SRIP and phaseless measurements $|Ax_0| = b$ are observed about a $k$-sparse signal $x_0 \in \mathbb{R}^n$, then minimizing the $\ell_1$ norm subject to $ |Ax| = b $ recovers $x_0$ up to multiplication by a global sign. Moreover, we establish that the SRIP holds for the random Gaussian matrices typically used for standard compressed sensing, implying that phaseless compressed sensing is possible from $O(k \log (n/k))$ measurements with these matrices via $\ell_1$ minimization over $|Ax| = b$. Our analysis also yields an erasure robust version of the Johnson-Lindenstrauss Lemma.
---
paper_title: Stable Signal Recovery from Phaseless Measurements
paper_content:
The aim of this paper is to study the stability of the $\ell_1$ minimization for the compressive phase retrieval and to extend the instance-optimality in compressed sensing to the real phase retrieval setting. We first show that the $m={\mathcal O}(k\log(N/k))$ measurements is enough to guarantee the $\ell_1$ minimization to recover $k$-sparse signals stably provided the measurement matrix $A$ satisfies the strong RIP property. We second investigate the phaseless instance-optimality with presenting a null space property of the measurement matrix $A$ under which there exists a decoder $\Delta$ so that the phaseless instance-optimality holds. We use the result to study the phaseless instance-optimality for the $\ell_1$ norm. The results build a parallel for compressive phase retrieval with the classical compressive sensing.
---
paper_title: Robust Compressive Phase Retrieval via L1 Minimization With Application to Image Reconstruction
paper_content:
Phase retrieval refers to a classical nonconvex problem of recovering a signal from its Fourier magnitude measurements. Inspired by the compressed sensing technique, signal sparsity is exploited in recent studies of phase retrieval to reduce the required number of measurements, known as compressive phase retrieval (CPR). In this paper, l1 minimization problems are formulated for CPR to exploit the signal sparsity and alternating direction algorithms are presented for problem solving. For real-valued, nonnegative image reconstruction, the image of interest is shown to be an optimal solution of the formulated l1 minimization in the noise free case. Numerical simulations demonstrate that the proposed approach is fast, accurate and robust to measurements noises.
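A much simpler relative of the alternating-direction schemes referenced above alternates between (i) imposing the measured magnitudes with the current sign estimate and (ii) an ISTA-style soft-thresholded gradient step. The Python/NumPy sketch below is an illustrative baseline rather than the paper's algorithm, and it is started near the true signal because such iterations need a good initialization in practice.

```python
import numpy as np

# Illustrative alternating baseline for sparse (real) phase retrieval:
# (i) impose measured magnitudes with the current sign guess,
# (ii) soft-thresholded gradient step toward the resulting linear system.

rng = np.random.default_rng(3)
n, m, k = 128, 64, 5
step, lam = 0.1, 0.02

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = np.abs(A @ x_true)                                 # phaseless measurements

soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
x = x_true + 0.05 * rng.standard_normal(n)             # cheat: warm start near the truth
for _ in range(500):
    z = b * np.sign(A @ x)                             # (i) current sign estimate
    x = soft(x - step * (A.T @ (A @ x - z)), step * lam)   # (ii) sparse update

err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print('relative error up to global sign:', err / np.linalg.norm(x_true))
```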
---
paper_title: Compressive phase retrieval via generalized approximate message passing
paper_content:
In this paper, we propose a novel approach to compressive phase retrieval based on loopy belief propagation and, in particular, on the generalized approximate message passing (GAMP) algorithm. Numerical results show that the proposed PR-GAMP algorithm has excellent phase-transition behavior, noise robustness, and runtime. In particular, for successful recovery of synthetic Bernoulli-circular-Gaussian signals, PR-GAMP requires ≈4 times the number of measurements as a phase-oracle version of GAMP and, at moderate to large SNR, the NMSE of PR-GAMP is only ≈3 dB worse than that of phase-oracle GAMP. A comparison to the recently proposed convex-relation approach known as “CPRL” reveals PR-GAMP's superior phase transition and orders-of-magnitude faster runtimes, especially as the problem dimensions increase. When applied to the recovery of a 65k-pixel grayscale image from 32k randomly masked magnitude measurements, numerical results show a median PR-GAMP runtime of only 13.4 seconds.
---
paper_title: The minimal measurement number for low-rank matrices recovery
paper_content:
Abstract The paper presents several results that address a fundamental question in low-rank matrix recovery: how many measurements are needed to recover low-rank matrices? We begin by investigating the complex matrices case and show that $4nr-4r^2$ generic measurements are both necessary and sufficient for the recovery of rank-$r$ matrices in $\mathbb{C}^{n \times n}$. Thus, we confirm a conjecture which is raised by Eldar, Needell and Plan for the complex case. We next consider the real case and prove that the bound $4nr-4r^2$ is tight provided $n = 2^k + r$, $k \in \mathbb{Z}_+$. Motivated by Vinzant's work [19], we construct 11 matrices in $\mathbb{R}^{4 \times 4}$ by computer random search and prove they define injective measurements on rank-1 matrices in $\mathbb{R}^{4 \times 4}$. This disproves the conjecture raised by Eldar, Needell and Plan for the real case. Finally, we use the results in this paper to investigate the phase retrieval by projection and show fewer than $2n-1$ orthogonal projections are possible for the recovery of $x \in \mathbb{R}^n$ from the norm of them, which gives a negative answer for a question raised in [1].
---
| Title: The minimal measurement number problem in phase retrieval: a review of recent developments
Section 1: Introduction
Description 1: Provide an overview of the phase retrieval problem, its importance in various fields, and the fundamental goal of determining the minimal measurement number for unique phase retrieval.
Section 2: Minimal measurement number in real and complex fields
Description 2: Introduce and review the results related to the minimal measurement number $m_F(d)$ for $F = \mathbb{R}$ and $F = \mathbb{C}$.
Section 3: Phase retrieval for sparse signals
Description 3: Discuss the phase retrieval problem under the assumption that the signal $x_0$ is $s$-sparse, including conditions and results related to sparse phase retrieval.
Section 4: Generalized phase retrieval
Description 4: Present the concept of generalized phase retrieval and summarize the results, including the connections to other mathematical topics such as nonsingular bilinear forms and topology embedding.
Section 5: Conclusion
Description 5: Summarize the recent developments reviewed in the paper, highlight the relevance of algebraic geometry and topology in obtaining these results, and discuss generalized problems and open questions for future research. |
A Survey of User Interfaces for Computer Algebra Systems | 17 | ---
paper_title: User Interface Considerations for Algebraic Manipulation Systems
paper_content:
The background of mathematical manipulation systems is presented. Current user interface tools and issues are then surveyed, with a focus on the special problems of algebraic manipulation systems. A mathematical expression editor is used as an illustrative example.
---
paper_title: Maple V Language Reference Manual
paper_content:
This text describes the Maple Symbolic Computation System and the Maple V language. It describes the numeric and symbolic expressions that can be used in Maple V. All the basic data types, such as names, polynomials and functions, as well as structured data types, are covered. The book also gives a complete description of the programming language statements that are provided in the Maple V system and shows how a user can extend the functionality of the Maple V system by adding user-defined routines. The manual also provides a complete description of the Maple V system, including its 2D and 3D graphics. Maple V features a newly designed user interface on many systems. Separate appendices describe how to use Maple V on systems using the X Window System, DOS and the Macintosh.
---
paper_title: GI/S: A graphical user interface for symbolic computation systems
paper_content:
The design and implementation of GI/S, a Graphical user Interface for Symbolic computation systems, is described. The system provides a multiple window environment for the high-resolution 2-D display of mathematical expressions, the ability to select and manipulate parts of expressions with a mouse, and graphics plotting of mathematical expressions. GI/S also provides command line editing and a history mechanism for editing and re-executing past commands. GI/S is written in Franz Lisp for the Macsyma system and has been implemented on the Tektronix 4404 workstation.
---
paper_title: A structural view of the Cedar programming environment
paper_content:
This paper presents an overview of the Cedar programming environment, focusing on its overall structure—that is, the major components of Cedar and the way they are organized. Cedar supports the development of programs written in a single programming language, also called Cedar. Its primary purpose is to increase the productivity of programmers whose activities include experimental programming and the development of prototype software systems for a high-performance personal computer. The paper emphasizes the extent to which the Cedar language, with run-time support, has influenced the organization, flexibility, usefulness, and stability of the Cedar environment. It highlights the novel system features of Cedar, including automatic storage management of dynamically allocated typed values, a run-time type system that provides run-time access to Cedar data type definitions and allows interpretive manipulation of typed values, and a powerful device-independent imaging model that supports the user interface facilities. Using these discussions to set the context, the paper addresses the language and system features and the methodologies used to facilitate the integration of Cedar applications. A comparison of Cedar with other programming environments further identifies areas where Cedar excels and areas where work remains to be done.
---
paper_title: An interactive graphical interface for REDUCE
paper_content:
The availability of workstations with bit-mapped displays opens up new possibilities for displaying and interacting with mathematical expressions. This paper describes an interactive graphical interface to Reduce. This system displays the output from Reduce in its natural, two dimensional form. The interactivity of the workstation is used to advantage in several ways, including allowing subexpressions to be selected using a mouse and reentered into Reduce.
---
paper_title: MathScribe: a user interface for computer algebra systems
paper_content:
This paper describes MathScribe, a powerful user interface for computer algebra systems. The interface makes use of a bitmapped display, windows, menus, and a mouse. Significant new features of MathScribe are its display of both input and output in two-dimensional form, its ability to select previous expressions, and its computationally efficient manner of displaying large expressions.
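The heart of two-dimensional display is a recursive box layout: each subexpression reports its rendered rows together with a baseline, and composite forms such as fractions stack and centre their children. The toy ASCII renderer below (Python) is a sketch of that idea, not MathScribe's actual layout algorithm.

```python
# Toy recursive box layout for 2-D expression display: each node returns a
# list of text rows plus a baseline index; a fraction stacks its numerator
# over a rule over its denominator. Nodes are ('sym', s), ('frac', num, den)
# or ('seq', [children]).

def layout(node):
    kind = node[0]
    if kind == 'sym':
        return [node[1]], 0
    if kind == 'frac':
        num_rows, _ = layout(node[1])
        den_rows, _ = layout(node[2])
        width = max(max(len(r) for r in num_rows), max(len(r) for r in den_rows))
        centred = lambda rs: [r.center(width) for r in rs]
        rows = centred(num_rows) + ['-' * width] + centred(den_rows)
        return rows, len(num_rows)              # baseline sits on the fraction rule
    if kind == 'seq':
        parts = [layout(c) for c in node[1]]
        above = max(b for _, b in parts)
        below = max(len(r) - b for r, b in parts)
        rows = ['' for _ in range(above + below)]
        for part_rows, b in parts:              # glue parts side by side, baselines aligned
            pad = above - b
            width = max(len(r) for r in part_rows)
            for i in range(len(rows)):
                j = i - pad
                piece = part_rows[j] if 0 <= j < len(part_rows) else ''
                rows[i] += piece.ljust(width)
        return rows, above
    raise ValueError(kind)

expr = ('seq', [('sym', 'y = '), ('frac', ('sym', 'a + b'), ('sym', 'c')), ('sym', ' + 1')])
print('\n'.join(layout(expr)[0]))
```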
---
paper_title: EMACS the extensible, customizable self-documenting display editor
paper_content:
EMACS is a display editor which is implemented in an interpreted high level language. This allows users to extend the editor by replacing parts of it, to experiment with alternative command languages, and to share extensions which are generally useful. The ease of extension has contributed to the growth of a large set of useful features. This paper describes the organization of the EMACS system, emphasizing the way in which extensibility is achieved and used.
---
paper_title: PowerMath: a system for the Macintosh
paper_content:
PowerMath is a symbolic algebra system for the Macintosh computer. This paper outlines the design decisions that were made during its development, and explains how the novel Macintosh environment helped and hindered the development of the system. While the interior of PowerMath is fairly conventional, the user interface has many novel features. It is these that make PowerMath not just another micro-computer algebra system.
---
paper_title: Capabilities of the MUMATH-78 computer algebra system for the INTEL-8080 microprocessor (invited)
paper_content:
This paper describes the capabilities of a microcomputer algebra system intended for educational and personal use. Currently implemented for INTEL-8080 based microcomputers, the system offers a broad range of facilities from indefinite-precision rational arithmetic through symbolic integration, symbolic summation, matrix algebra, and solution of a nonlinear algebraic equation. The talk will include a filmed demonstration, and informal live demonstrations will be given afterward.
---
paper_title: Template-based Formula Editing in Kaava
paper_content:
This paper describes a user interface for entering mathematical formulas directly in two-dimensional notation. This interface is part of a small, experimental computer algebra system called Kaava. It demonstrates a data-driven structure editing style that partially accommodates the user's view of formulas as two-dimensional arrangements of symbols on the page.
---
paper_title: Mathematica - A System for Doing Mathematics by Computer
paper_content:
This book will be released simultaneously with Release 2.0 of Mathematica and will cover all the new features of Release 2.0. This new edition maintains the format of the original book and is the single most important user guide and reference for Mathematica--all users of Mathematica will need this edition. Includes 16 pages of full-color graphics.
---
paper_title: Standard handbook of engineering calculations
paper_content:
This book provides engineering calculation procedures for solving routine and non-routine problems found in engineering fields such as aeronautical and astronautical, architectural, marine, nuclear, and sanitary. Activities of design, operation, analysis, and economic evaluation are covered.
---
paper_title: A Multiple-Representation Paradigm for Document Development
paper_content:
Powerful personal workstations with high-resolution displays, pointing devices, and windowing environments have created many new possibilities in presenting information, accessing data, and efficient computing in general. In the context of document preparation, this workstation-based technology has made it possible for the user to directly manipulate a document in its final form. The central idea is that a document is immediately reprocessed as it is edited; no syntactic constructs are explicitly used to express the desired operations. This so-called direct manipulation approach differs substantially from the traditional source language model, in which document semantics (structures and appearances) are specified with interspersed markup commands. In the source language model, a document is first prepared with a text editor, its formatting and other related processors are then executed, usually in batch mode, and the result is obtained. In this dissertation, the concept of multiple representations is first examined. A complete document development environment's task domain is then identified and several aspects of such an environment under both source-language and direct-manipulation paradigms are compared and analyzed. A simple but robust framework is introduced to model multiple-representation systems in general. Based upon this framework, a top-down design methodology is derived. As a case study of this methodology, the design of VorTeX (Visually-ORiented TeX), a multiple-representation environment for document development, is described. The focus is on design options and decisions for solving the problems mentioned above. Specifically, the design and implementation of VorTeX's underlying representation transformation mechanisms in both the forward and backward directions (i.e., the incremental formatter in the forward direction and the reverse mapping engine in the backward direction) and the integration techniques used are discussed in detail. A prototype of the VorTeX system has been implemented and it works. Finally, this multiple representation paradigm for document development is evaluated and the underlying principles with implications to other application domains are discussed. Some research directions are pointed out at the end. (Abstract shortened with permission of author.)
---
paper_title: Incremental parsing of expressions
paper_content:
Abstract In syntax-directed editors, the user edits some object on the screen according to the structure of that object. The appearance of the object is derived from some internal representation maintained by the editor. If the object is an expression, the editor maintains a tree-representation internally. During editing of an expression, a minor change in the appearance of an expression (e.g., ab × c is to be changed into a + b × c ) may necessitate a significant restructuring of this tree-representation. This article describes an algorithm that incrementally adjusts the tree-representation of an expression, while the user edits the expression in terms of its appearance rather than in terms of tree-transformations.
---
paper_title: Incremental parsing without a parser
paper_content:
This article describes an algorithm for incremental parsing of expressions in the context of syntax-directed editors for programming languages. Since a syntax-directed editor represents programs as trees and statements and expressions as nodes in trees, making minor modifications in an expression can be difficult. Consider, for example, changing a "+" operator to a "*" operator or adding a short subexpression at a syntactically but not structurally correct position, such as inserting ") * (d" at the # mark in "(a + b # + c)". To make these changes in a typical syntax-directed editor, the user must understand the tree structure and type a number of tree-oriented construction and manipulation commands. This article describes an algorithm that allows the user to think in terms of the syntax of the expression as it is displayed on the screen (in infix notation) rather than in terms of its internal representation (which is effectively prefix), while maintaining the benefits of syntax-directed editing. This algorithm is significantly different from other incremental parsing algorithms in that it does not involve modifications to a traditional parsing algorithm or the overhead of maintaining a parser stack or any data structure other than the syntax tree. Instead, the algorithm applies tree transformations, in real-time as each token is inserted or deleted, to maintain a correct syntax tree.
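For contrast with the tree-transformation approach described above, the naive way to keep the tree consistent with the displayed infix text is simply to reparse the whole token string after every edit, for example with a small precedence-climbing parser. The Python sketch below shows that baseline on the abstract's own example edit; the point of the paper is precisely to avoid this full reparse while still avoiding a parser stack.

```python
import re

# Naive baseline: rebuild the expression tree from the flat infix text after
# every edit, using a tiny precedence-climbing (Pratt-style) parser.

PRECEDENCE = {'+': 1, '-': 1, '*': 2, '/': 2}

def tokenize(text):
    return re.findall(r'\d+|[A-Za-z]\w*|[()+\-*/]', text)

def parse(tokens):
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def parse_primary():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == '(':
            node = parse_expr(0)
            pos += 1                      # skip the closing ')'
            return node
        return tok                        # identifier or number leaf
    def parse_expr(min_prec):
        nonlocal pos
        left = parse_primary()
        while peek() in PRECEDENCE and PRECEDENCE[peek()] >= min_prec:
            op = tokens[pos]; pos += 1
            right = parse_expr(PRECEDENCE[op] + 1)
            left = (op, left, right)      # left-associative binary node
        return left
    return parse_expr(0)

# Before and after the edit that inserts ") * (d" at the # mark in "(a + b # + c)".
for text in ['(a+b+c)', '(a+b)*(d+c)']:
    print(text, '->', parse(tokenize(text)))
```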
---
paper_title: The TeXbook
paper_content:
A semiconductor device is described containing at least two insulating gate field effect transistors in a common wafer. One of the transistors exhibits high gain but the other transistor exhibits low gain as a result of selectively implanting into its channel neutral ions and crystal damage which reduce the effective mobility of charge carriers therein. In one embodiment, the low gain transistor serves as a load for the high gain transistor. In a second embodiment, the low gain transistor is a parasitic transistor formed between adjacent circuit elements.
---
paper_title: Mathematical formula editor for CAI
paper_content:
Many students in lower grades who study mathematics with computers have difficulty in inputting formulas by using existing methods. It would be much easier for them if they could input formulas naturally, as they appear in textbooks. This paper describes such an interface program module for use in CAI. This module makes it easy for students to input and edit complex formulas solely by key operations, without using a mouse. The difference between the module and existing mathematical expression editors is that it converts formulas into character strings syntactically. In this way, CAI programs can understand the meanings of the formulas.
---
paper_title: An Automated Consultant for MACSYMA
paper_content:
A consultant is necessary whenever one is faced with a problem solving situation in a domain one does not fully understand. The lack of knowledge may be incidental, as it is when the domain or device is fairly simple but time constraints make it impossible for the user to learn all that is necessary. In computer systems like MACSYMA, the level of commands is so close to the level of the task environment that the user is apt to confuse a simply defined procedure (like COEFF) with its mathematical counterpart (here coefficient) that it at best approximates. A computer program is described which has the capability of conversing with its user in English about a difficulty he has encountered, and providing information tailored to his need. The MACSYMA Advisor is a program distinct from MACSYMA with its own separate data base and expertise. For convenience the program can be called directly from MACSYMA and can access the user's data structures contained therein. The Advisor described here deals only with the "straightline" or nested use of MACSYMA commands and not loops or user-defined functions. The implementation of the Advisor relies heavily on an explicit, internal "model" of the user's state of knowledge, his goals, and his "plan" for achieving them.
---
paper_title: Tracing occurrences of patterns in symbolic computations
paper_content:
A report is made on the present state of development of a project to construct a tracing aid for users of symbolic computing systems that are written in LISP (or, in principle, any similar high-level language). The traces in question are intended to provide information which is primarily in terms that are natural for a user, e.g. on patterns of actions performed on his data, or patterns occurring in the data themselves during the operation of his program. Patterns are described in a syntax which is inspired by SNOBOL.
---
paper_title: Logic and computation in MATHPERT: an expert system for learning mathematics
paper_content:
MATHPERT (as in “Math Expert”) is an expert system in mathematics explicitly designed to support the learning of algebra, trigonometry, and first semester calculus. This paper gives an overview of the design of MATHPERT and goes into detail about some connections it has with automated theorem proving. These connections arise at the borderline between logic and computation, which is to be found when computational “operators” have logical side conditions that must be satisfied before they are applicable. The paper also explains how MATHPERT maintains and uses an internal model of its user to produce individually tailored explanations, and how it dynamically generates individualized and helpful error messages by comparing user errors to its own internal solution of the problem.
---
paper_title: A knowledge-based approach to user-friendliness in symbolic computing
paper_content:
An experiment in making a symbolic computing system tolerant of technical deficiencies in programming by novice users is outlined. The work uses a knowledge-base of information on general properties of symbolic computations, and a pattern-matcher which can specify instances of relevant properties in a user's program or inputs. Both of these components are implemented in LISP, and have been used in conjunction with REDUCE.
---
paper_title: Representation of inference in computer algebra systems with applications to intelligent tutoring
paper_content:
Presently computer algebra systems share with calculators the property that a sequence of computations is not a unified computational sequence, thereby allowing fallacies to occur. We argue that if computer algebra systems operate in a framework of strict mathematical proof, fallacies are eliminated. We show that this is possible in a working interactive system, REQD. We explain why computational algebra, done under the strict constraints of proof, is relevant to uses of computer algebra systems in instruction.
---
paper_title: The Progress Towards an Intelligent Assistent - A Discussion Paper
paper_content:
Powerful computer workstations, communication networks, algebraic and numeric software systems are changing the way mathematicians work. Expert systems already provide user-friendly interfaces to several software tools, but will AI play a still larger role in the future? We open a discussion on the interaction of AI and Symbolic Mathematical Computation, and pose many questions which should be addressed.
---
paper_title: A practical method for LR and LL syntactic error diagnosis and recovery
paper_content:
This paper presents a powerful, practical, and essentially language-independent syntactic error diagnosis and recovery method that is applicable within the frameworks of LR and LL parsing. The method generally issues accurate diagnoses even where multiple errors occur within close proximity, yet seldom issues spurious error messages. It employs a new technique, parse action deferral, that allows the most appropriate recovery in cases where this would ordinarily be precluded by late detection of the error. The method is practical in that it does not impose substantial space or time overhead on the parsing of correct programs, and in that its time efficiency in processing an error allows for its incorporation in a production compiler. The method is language independent, but it does allow for tuning with respect to particular languages and implementations through the setting of language-specific parameters.
---
paper_title: Mathematica - A System for Doing Mathematics by Computer
paper_content:
This book will be released simultaneously with Release 2.0 of Mathematica and will cover all the new features of Release 2.0. This new edition maintains the format of the original book and is the single most important user guide and reference for Mathematica--all users of Mathematica will need this edition. Includes 16 pages of full-color graphics.
---
paper_title: The TeXbook
paper_content:
A semiconductor device is described containing at least two insulating gate field effect transistors in a common wafer. One of the transistors exhibits high gain but the other transistor exhibits low gain as a result of selectively implanting into its channel neutral ions and crystal damage which reduce the effective mobility of charge carriers therein. In one embodiment, the low gain transistor serves as a load for the high gain transistor. In a second embodiment, the low gain transistor is a parasitic transistor formed between adjacent circuit elements.
---
paper_title: How to write a long formula
paper_content:
Standard mathematical notation works well for short formulas, but not for the longer ones often written by computer scientists. Notations are proposed to make one or two-page formulas easier to read and reason about.
---
paper_title: A System Independent Graphing Package for Mathematical Functions
paper_content:
SIG is a compact graphics system for the display of curves and surfaces defined by mathematical formulas in a symbolic system. It is available from Kent State University. SIG consists of two parts: xgraph and mgraph that run as concurrent processes. Xgraph is a stand-alone graphics facility written in C to work with the X Window System. Mgraph is the part that is symbolic system dependent. SIG achieves display device independence through X and portability through mgraph. Capabilities of SIG, its design and implementation, as well as plans for further development are presented.
---
paper_title: GI/S: A graphical user interface for symbolic computation systems
paper_content:
The design and implementation of GI/S, a Graphical user Interface for Symbolic computation systems, is described. The system provides a multiple window environment for the high-resolution 2-D display of mathematical expressions, the ability to select and manipulate parts of expressions with a mouse, and graphics plotting of mathematical expressions. GI/S also provides command line editing and a history mechanism for editing and re-executing past commands. GI/S is written in Franz Lisp for the Macsyma system and has been implemented on the Tektronix 4404 workstation.
---
paper_title: MathScribe: a user interface for computer algebra systems
paper_content:
This paper describes MathScribe, a powerful user interface for computer algebra systems. The interface makes use of a bitmapped display, windows, menus, and a mouse. Significant new features of MathScribe are its display of both input and output in two-dimensional form, its ability to select previous expressions, and its computationally efficient manner of displaying large expressions.
---
paper_title: IZIC: a Portable Language-Driven Tool for Mathematical Surfaces Visualization
paper_content:
This paper presents IZIC, a stand-alone high-quality 3D graphic tool driven by a command language. IZIC is an interactive version of ZICLIB, a 3D graphic library allowing efficient curve and surface manipulations using a virtual graphic device. Capabilities of ZICLIB include management of pseudo or true colors, illumination model, shading, transparency, etc. As an interactive tool, IZIC is run as a Unix server which can be driven from one or more Computer Algebra Systems, including Maple, Mathematica, and Ulysse, or through an integrated user interface such as CAS/PI. Connecting IZIC with a different system is a very simple task which can be achieved at run-time and requires no compilation. Also important is the possibility of driving IZIC both through its freely-reconfigurable menus-buttons user interface, and through its command language, allowing for instance the animation of surfaces in a very flexible way.
---
paper_title: PowerMath: a system for the Macintosh
paper_content:
PowerMath is a symbolic algebra system for the Macintosh computer. This paper outlines the design decisions that were made during its development, and explains how the novel Macintosh environment helped and hindered the development of the system. While the interior of PowerMath is fairly conventional, the user interface has many novel features. It is these that make PowerMath not just another micro-computer algebra system.
---
paper_title: Iris: design of an user interface program for symbolic algebra
paper_content:
We present the design of a user interface program that can be used with Maple and other symbolic algebra packages. Through the use of a standard communications protocol to such a program, symbolic algebra packages can shed the bulk of code not directly related to algebraic manipulations but can still use the facilities of a powerful user interface. This interface program is designed to be used on a variety of workstations in a consistent fashion.
---
paper_title: Building a Computer Algebra Environment by Composition of Collaborative Tools
paper_content:
Building a software environment for Computer Algebra is quite a complex issue. Such an environment may include one or more Symbolic Computation tools, some devices, such as plotting engines or code generators, and a way to link other scientific applications. It is also expected that any of these components may be run on a remote processor and that the whole system is used via a convenient graphical user interface. The natural extensibility of Computer Algebra software, as well as the diversity of the needs expressed by its users, necessitates a highly open and customizable software architecture allowing different kinds of extensions and adaptations. Our approach consists of building the environment by composition of separately developed packages, using state of the art software engineering technologies in the spirit of the tool integration paradigm. This way, the different software components should be able to exchange data and freely cooperate with each other, without being too tightly coupled as in a monolithic system. A prototype of such an environment is currently under development in the framework of the SAFIR project. It will be built using an implementation of the software bus concept developed for the next version of Centaur, and should include a set of components, developed both internally and externally, and a homogeneous user interface.
---
paper_title: Tcl and the Tk Toolkit
paper_content:
Copyright © 1993 Addison-Wesley Publishing Company, Inc. All rights reserved. Duplication of this draft is permitted by individuals for personal use only. Any other form of duplication or reproduction requires prior written permission of the author or publisher. This statement must be easily visible on the first page of any reproduced copies. The publisher does not offer warranties in regard to this draft.
---
paper_title: EMACS the extensible, customizable self-documenting display editor
paper_content:
EMACS is a display editor which is implemented in an interpreted high level language. This allows users to extend the editor by replacing parts of it, to experiment with alternative command languages, and to share extensions which are generally useful. The ease of extension has contributed to the growth of a large set of useful features. This paper describes the organization of the EMACS system, emphasizing the way in which extensibility is achieved and used. This report describes work done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.
---
paper_title: Your own handprinting recognition engine
paper_content:
This invention relates to a process for separating nickel, cobalt and copper wherein a molten mixture containing either or both of nickel and cobalt and copper as an alloy and a matte or either of them is mixed in the presence of a matte, metallic iron and carbon to form a high carbon ferrous alloy and a matte in two separate phases. Both nickel and cobalt or either of them is extracted predominantly in the high carbon ferrous alloy and copper is extracted predominantly in the matte.
---
paper_title: Computer input/output of mathematical expressions
paper_content:
Studying mathematics is, in part, a language problem. Naturally, mathematicians are more likely to resist using a computer as a tool in their work if the tedious task of learning a new language for mathematics is part of the bargain. Furthermore, until a new language is thoroughly learned, difficulty in communication will make it less likely that their new experience will be a successful one. Beyond that, if a new computer language does not provide the same visual clues as standard mathematical notation, it may never adequately serve the mathematician. It therefore seems fair to assume that the computer input/output of mathematical expressions in a form resembling standard notation is an important goal. This form of expression is considerably more complex and expensive to handle than those used in, for example, the programming language FORTRAN. But the alternative (e.g., correctly manipulating a one page FORTRAN expression) is often quite painful.
---
| Title: A Survey of User Interfaces for Computer Algebra Systems
Section 1: Introduction
Description 1: Discusses the scope of the paper, problems with current CAS interfaces, and the goals of the survey.
Section 2: Previous Surveys and Overview Works
Description 2: Reviews previous efforts and significant publications in improving CAS interfaces.
Section 3: History
Description 3: Provides a historical perspective on the development of CAS interfaces from batch processing systems to the present.
Section 4: Numerical Interfaces
Description 4: Describes systems for numerical calculations with two-dimensional editing capabilities developed by MathSoft Inc.
Section 5: Document Processing Systems
Description 5: Relates CAS interfaces to document processing systems that can typeset mathematics and discusses their methods.
Section 6: Artificial Intelligence and Education
Description 6: Explores AI-directed attempts to improve CAS usability and the use of CAS in education.
Section 7: Entering Expressions
Description 7: Discusses the various methods used for entering expressions in traditional and modern CAS interfaces.
Section 8: Selecting and Editing Expressions
Description 8: Describes how different interfaces allow users to select and edit parts of mathematical expressions.
Section 9: Direct Manipulation
Description 9: Discusses the concept and implementation of direct manipulation of mathematical expressions in CAS interfaces.
Section 10: Formatting and Displaying Expressions
Description 10: Examines the ways in which CAS interfaces format and display mathematical expressions.
Section 11: Ambiguous Notation
Description 11: Discusses the issues and solutions regarding ambiguous mathematical notations in CAS interfaces.
Section 12: Session Layout
Description 12: Explores how different CAS interfaces manage session layouts and the organization of input and output.
Section 13: Graphics
Description 13: Describes the integration of graphical capabilities in CAS interfaces for plotting curves and surfaces.
Section 14: Portability to Different CASs
Description 14: Discusses the challenges and solutions for creating user interfaces that are portable across different CASs.
Section 15: Extensibility of the User Interface
Description 15: Examines the importance of extensibility in CAS interfaces to handle new notations and functionalities.
Section 16: Efficiency Considerations
Description 16: Discusses the efficiency aspects of CAS interfaces in terms of space and time, especially when dealing with large expressions.
Section 17: Alternative Input Technologies
Description 17: Explores the use of alternative input technologies such as handwriting and speech recognition in CAS interfaces. |
Survey on Human Activity Recognition based on Acceleration Data | 8 | ---
paper_title: Healthy: A Diary System Based on Activity Recognition Using Smartphone
paper_content:
An activity-diary system, named Healthy, is presented in this paper. Healthy can infer a user's diary of physical activities and energy expenditure based on METs (Metabolic Equivalents) values by recognizing general human activities. In this system, we design a two-layer classifier which costs less energy and memory while keeping satisfactory accuracy. Our classifier divides the activities into two categories, periodic and nonperiodic, and a different sub-classifier is applied to each category. Meanwhile, we design a state listener to recognize more complicated activities. To further improve recognition accuracy, in the second-layer sub-classifier we put forward an adaptive framing algorithm based on the period length of periodic activities to determine the time span over which features are extracted. By testing Healthy in a real situation, we obtained an average recognition accuracy of 98.0%.
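As a small worked illustration of the METs-based energy-expenditure estimate mentioned above (not taken from the Healthy paper itself), the sketch below converts a recognized activity diary into kilocalories using the standard approximation kcal ≈ MET × body mass (kg) × duration (h); the MET values and the diary are illustrative assumptions.

```python
# Illustrative sketch (not the Healthy system): turning a recognized activity
# diary into an energy-expenditure estimate via MET values.
# The MET figures below are rough textbook values, used only as an example.

MET_TABLE = {"sitting": 1.3, "walking": 3.5, "running": 8.0, "cycling": 6.0}

def energy_kcal(activity: str, minutes: float, weight_kg: float) -> float:
    """kcal ~= MET * body mass (kg) * duration (hours)."""
    met = MET_TABLE.get(activity, 1.0)  # default to a resting value if unknown
    return met * weight_kg * (minutes / 60.0)

# Example diary produced by an activity recognizer: (activity, minutes).
diary = [("sitting", 420), ("walking", 60), ("running", 20)]
total = sum(energy_kcal(act, mins, weight_kg=70.0) for act, mins in diary)
print(f"Estimated daily expenditure: {total:.0f} kcal")
```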
---
paper_title: A Survey on Human Activity Recognition using Wearable Sensors
paper_content:
Providing accurate and opportune information on people's activities and behaviors is one of the most important tasks in pervasive computing. Innumerable applications can be visualized, for instance, in medical, security, entertainment, and tactical scenarios. Despite human activity recognition (HAR) being an active field for more than a decade, there are still key aspects that, if addressed, would constitute a significant turn in the way people interact with mobile devices. This paper surveys the state of the art in HAR based on wearable sensors. A general architecture is first presented along with a description of the main components of any HAR system. We also propose a two-level taxonomy in accordance to the learning approach (either supervised or semi-supervised) and the response time (either offline or online). Then, the principal issues and challenges are discussed, as well as the main solutions to each one of them. Twenty eight systems are qualitatively evaluated in terms of recognition performance, energy consumption, obtrusiveness, and flexibility, among others. Finally, we present some open problems and ideas that, due to their high relevance, should be addressed in future research.
---
paper_title: An activity monitoring system for elderly care using generative and discriminative models
paper_content:
An activity monitoring system allows many applications to assist in care giving for elderly in their homes. In this paper we present a wireless sensor network for unintrusive observations in the home and show the potential of generative and discriminative models for recognizing activities from such observations. Through a large number of experiments using four real world datasets we show the effectiveness of the generative hidden Markov model and the discriminative conditional random fields in activity recognition.
---
paper_title: Machine Recognition of Human Activities: A Survey
paper_content:
The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) "actions" and 2) "activities." "Actions" are characterized by simple motion patterns typically executed by a single human. "Activities" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.
---
paper_title: GPARS: a general-purpose activity recognition system
paper_content:
The fundamental problem of the existing Activity Recognition (AR) systems is that these are not general-purpose. An AR system trained in an environment would only be applicable to that environment. Such a system would not be able to recognize the new activities of interest. In this paper we propose a General-Purpose Activity Recognition System (GPARS) using simple and ubiquitous sensors. It would be applicable to almost any environment and would have the ability to handle growing amounts of activities and sensors in a graceful manner (Scalable). Given a set of activities to monitor, object names (with embedded sensors) and their corresponding locations, the GPARS first mines activity knowledge from the web, and then uses them as the basis of AR. The novelty of our system, compared to the existing general-purpose systems, lies in: (1) it uses more robust activity models, (2) it significantly reduces the mining time. We have tested our system with three real world datasets. It is observed that the accuracy of activity recognition using our system is more than 80%. Our proposed mechanism yields significant improvement (more than 30%) in comparison with its counterpart.
---
paper_title: Detection of posture and motion by accelerometry: a validation study in ambulatory monitoring
paper_content:
The suitable placement of a small number of calibrated piezoresistive accelerometer devices may suffice to assess postures and motions reliably. This finding, which was obtained in a previous investigation, led to the further development of this methodology and to an extension from the laboratory to conditions of daily life. The intention was to validate the accelerometric assessment against behavior observation and to examine the retest reliability. Twenty-four participants were recorded, according to a standard protocol consisting of nine postures/motions (repeated once) which served as reference patterns. The recordings were continued outside the laboratory. A participant observer classified the postures and motions. Four sensor placements (sternum, wrist, thigh, and lower leg) were used. The findings indicated that the detection of posture and motion based on accelerometry is highly reliable. The correlation between behavior observation and kinematic analysis was satisfactory, although some participants showed discrepancies regarding specific motions.
---
paper_title: Activity recognition based on RFID object usage for smart mobile devices
paper_content:
Activity recognition is a core aspect of ubiquitous computing applications. In order to deploy activity recognition systems in the real world, we need simple sensing systems with lightweight computational modules to accurately analyze sensed data. In this paper, we propose a simple method to recognize human activities using simple object information involved in activities. We apply activity theory for representing complex human activities and propose a penalized naive Bayes classifier for performing activity recognition. Our results show that our method reduces computation up to an order of magnitude in both learning and inference without penalizing accuracy, when compared to hidden Markov models and conditional random fields.
---
paper_title: Understanding Transit Scenes: A Survey on Human Behavior-Recognition Algorithms
paper_content:
Visual surveillance is an active research topic in image processing. Transit systems are actively seeking new or improved ways to use technology to deter and respond to accidents, crime, suspicious activities, terrorism, and vandalism. Human behavior-recognition algorithms can be used proactively for prevention of incidents or reactively for investigation after the fact. This paper describes the current state-of-the-art image-processing methods for automatic-behavior-recognition techniques, with focus on the surveillance of human activities in the context of transit applications. The main goal of this survey is to provide researchers in the field with a summary of progress achieved to date and to help identify areas where further research is needed. This paper provides a thorough description of the research on relevant human behavior-recognition methods for transit surveillance. Recognition methods include single person (e.g., loitering), multiple-person interactions (e.g., fighting and personal attacks), person-vehicle interactions (e.g., vehicle vandalism), and person-facility/location interactions (e.g., object left behind and trespassing). A list of relevant behavior-recognition papers is presented, including behaviors, data sets, implementation details, and results. In addition, algorithm's weaknesses, potential research directions, and contrast with commercial capabilities as advertised by manufacturers are discussed. This paper also provides a summary of literature surveys and developments of the core technologies (i.e., low-level processing techniques) used in visual surveillance systems, including motion detection, classification of moving objects, and tracking.
---
paper_title: Human activity recognition: Various paradigms
paper_content:
Action and activity representation and recognition are very demanding research areas in computer vision and man-machine interaction. Though plenty of research has been done in this arena, the field is still immature. Over the last decades, extensive research methodologies have been developed on human activity analysis and recognition for various applications. This paper overviews and analyses various recent methods for human activity recognition. We attempt to sum up the various methods related to human motion representation and recognition, categorize the most prominent recent methods, and finally identify the shortcomings and challenges that must be addressed in the future to develop robust action recognition approaches. This work covers research related to human action recognition, mainly from 2001 to date, with a critical assessment of the methods. We also present our own work, which addresses some of these shortcomings. This survey should help researchers understand and compare the related advancements in this area.
---
paper_title: A framework for whole-body gesture recognition from video feeds
paper_content:
The growth of technology continues to make both hardware and software affordable and accessible, creating space for the emergence of new applications. Rapid growth in computer vision and image processing applications has been evident in recent years. One area of interest in vision and image processing is the automated identification of objects in real-time or recorded video streams and the analysis of these identified objects. An important topic of research in this context is the identification of humans and the interpretation of their actions. Human motion identification and video processing have been used in critical crime investigations and highly technical applications usually involving skilled human experts. Although the technology has many uses that can be applied to everyday activities, it has not been put into such use due to requirements in sophisticated technology, human skill and high implementation costs. This paper presents a system, which is a major part of a project called moveIt (movements interpreted), that receives video as input to process and recognize gestures of the objects of interest (the whole human body). The basic functionality of this system is to receive a video stream as input and produce a gesture analysis of each object, with object detection, tracking, modeling and recognition of gestures as intermediate steps.
---
paper_title: Comparison of fusion methods based on DST and DBN in human activity recognition
paper_content:
Ambient assistive living environments require sophisticated information fusion and reasoning techniques to accurately identify activities of a person under care. In this paper, we explain, compare and discuss the application of two powerful fusion methods, namely dynamic Bayesian networks (DBN) and Dempster-Shafer theory (DST), for human activity recognition. Both methods are described, the implementation of activity recognition based on these methods is explained, and model acquisition and composition are suggested. We also provide functional comparison of both methods as well as performance comparison based on the publicly available activity dataset. Our findings show that in performance and applicability, both DST and DBN are very similar; however, significant differences exist in the ways the models are obtained. DST being top-down and knowledge-based, differs significantly in qualitative terms, when compared with DBN, which is data-driven. These qualitative differences between DST and DBN should therefore dictate the selection of the appropriate model to use, given a particular activity recognition application.
---
paper_title: A state classification method based on space-time signal processing using SVM for wireless monitoring systems
paper_content:
In this paper we focus on improving state classification methods that can be implemented in elderly care monitoring systems. The authors' group has previously proposed an indoor monitoring and security system (array sensor) that uses only one array antenna as the receiver. The clear advantages over conventional systems are the alleviation of the privacy concerns raised by the use of closed-circuit television (CCTV) cameras, and the elimination of installation difficulties. Our approach differs from the previous detection method, which uses an array of sensors and a threshold that can classify only two states: nothing and something happening. In this paper, we present a state classification method that uses only one feature obtained from the radio wave propagation, assisted by multiclass support vector machines (SVM), to classify the occurring states. The feature is the first eigenvector that spans the signal subspace of interest. The proposed method can be applied not only to indoor environments but also to outdoor environments such as vehicle monitoring systems. We performed experiments to classify seven states in an indoor setting: “No event,” “Walking,” “Entering into a bathtub,” “Standing while showering,” “Sitting while showering,” “Falling down,” and “Passing out;” and two states in an outdoor setting: “Normal state” and “Abnormal state.” The experimental results show that we can achieve 96.5 % and 100 % classification accuracy for indoor and outdoor settings, respectively.
---
paper_title: Recruitment Framework for Participatory Sensing Data Collections
paper_content:
Mobile phones have evolved from devices that are just used for voice and text communication to platforms that are able to capture and transmit a range of data types (image, audio, and location). The adoption of these increasingly capable devices by society has enabled a potentially pervasive sensing paradigm - participatory sensing. A coordinated participatory sensing system engages individuals carrying mobile phones to explore phenomena of interest using in situ data collection. For participatory sensing to succeed, several technical challenges need to be solved. In this paper, we discuss one particular issue: developing a recruitment framework to enable organizers to identify well-suited participants for data collections based on geographic and temporal availability as well as participation habits. This recruitment system is evaluated through a series of pilot data collections where volunteers explored sustainable processes on a university campus.
---
paper_title: EFM: evolutionary fuzzy model for dynamic activities recognition using a smartphone accelerometer
paper_content:
Activity recognition is an emerging field of research that enables a large number of human-centric applications in the u-healthcare domain. Currently, there are major challenges facing this field, including creating devices that are unobtrusive and handling uncertainties associated with dynamic activities. In this paper, we propose a novel Evolutionary Fuzzy Model (EFM) to measure the uncertainties associated with dynamic activities and relax the domain knowledge constraints which are imposed by domain experts during the development of fuzzy systems. Based on the time and frequency domain features, we define the fuzzy sets and estimate the natural grouping of data through expectation maximization of the likelihoods. A Genetic Algorithm (GA) is investigated and designed to determine the optimal fuzzy rules. To evaluate the EFM, we performed experiments on seven daily life activities of ten human subjects. Our experiments show significant improvement of 9 % in class-accuracy and 11 % in the F-measures of recognized activities compared to existing counterparts. The practical solution to dynamic activity recognition problems is expected to be an EFM, due to EFM's utilization of smartphones and natural way of handling uncertainties.
---
paper_title: IoT system for Human Activity Recognition using BioHarness 3 and Smartphone
paper_content:
This paper presents an Internet of Things (IoT) approach to Human Activity Recognition (HAR) using remote monitoring of vital signs in the context of a healthcare system for self-managed chronic heart patients. Our goal is to create a HAR-IoT system using learning algorithms to infer the activity performed within 4 categories (lie, sit, walk and run) as well as the time spent performing these activities, finally giving feedback during and after the activity. In this work, we also provide comprehensive insight into the implemented cloud-based system, the conclusions drawn after implementing two different learning algorithms, and the results of the overall system for larger implementations.
---
paper_title: Towards Physical Activity Recognition Using Smartphone Sensors
paper_content:
In recent years, the use of a smartphone accelerometer in physical activity recognition has been well studied. However, the role of a gyroscope and a magnetometer is yet to be explored, both when used alone as well as in combination with an accelerometer. For this purpose, we investigate the role of these three smartphone sensors in activity recognition. We evaluate their roles on four body positions using seven classifiers while recognizing six physical activities. We show that in general an accelerometer and a gyroscope complement each other, thereby making the recognition process more reliable. Moreover, in most cases, a gyroscope does not only improve the recognition accuracy in combination with an accelerometer, but it also achieves a reasonable performance when used alone. The results for a magnetometer are not encouraging because it causes over-fitting in training classifiers due to its dependence on directions. Based on our evaluations, we show that it is difficult to make an exact general statement about which sensor performs better than the others in all situations because their recognition performance depends on the smartphone's position, the selected classifier, and the activity being recognized. However, statements about their roles in specific situations can be made. We report our observations and results in detail in this paper, while our data-set and data-collection app is publicly available, thereby making our experiments reproducible.
---
paper_title: Fusion of Smartphone Motion Sensors for Physical Activity Recognition
paper_content:
For physical activity recognition, smartphone sensors, such as an accelerometer and a gyroscope, are being utilized in many research studies. So far, particularly, the accelerometer has been extensively studied. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role) and an accelerometer (in a lead role) has been used with the aim to improve the recognition performance. How and when are various motion sensors, which are available on a smartphone, best used for better recognition performance, either individually or in combination? This is yet to be explored. In order to investigate this question, in this paper, we explore how these various motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment where ten participants performed seven different activities carrying smart phones at different positions. Based on the analysis of this data set, we show that these sensors, except the magnetometer, are each capable of taking the lead roles individually, depending on the type of activity being recognized, the body position, the used data features and the classification method employed (personalized or generalized). We also show that their combination only improves the overall recognition performance when their individual performances are not very high, so that there is room for performance improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible.
---
paper_title: Wearable Internet of Things - from human activity tracking to clinical integration
paper_content:
Wearable devices for human activity tracking have been emerging rapidly. Most of them are capable of sending health statistics to smartphones, smartwatches or smart bands. However, they only provide the data for individual analysis and their data is not integrated into clinical practice. Leveraging on the Internet of Things (IoT), edge and cloud computing technologies, we propose an architecture which is capable of providing cloud based clinical services using human activity data. Such services could supplement the shortage of staff in primary healthcare centers thereby reducing the burden on healthcare service providers. The enormous amount of data created from such services could also be utilized for planning future therapies by studying recovery cycles of existing patients. We provide a prototype based on our architecture and discuss its salient features. We also provide use cases of our system in personalized and home based healthcare services. We propose an International Telecommunication Union based standardization (ITU-T) for our design and discuss future directions in wearable IoT.
---
paper_title: Energy-Efficient Motion Related Activity Recognition on Mobile Devices for Pervasive Healthcare
paper_content:
Activity recognition plays an important role in pervasive healthcare, such as health monitoring, assisted living and pro-active services. Despite the continuous and transparent sensing with various built-in sensors in mobile devices, activity recognition on mobile devices for pervasive healthcare is still a challenge due to resource constraints such as battery limitations and computation workload. In view of the demand for energy-efficient activity recognition, we propose a hierarchical method to recognize user activities based on a single tri-axial accelerometer in smartphones for health monitoring. Specifically, the contribution of this paper is two-fold. First, it is demonstrated that activity recognition based on a low sampling frequency is feasible for long-term activity monitoring. Second, this paper presents a hierarchical recognition scheme: the proposed algorithm reduces the use of time-consuming frequency-domain features and adjusts the size of the sliding window to improve recognition accuracy. Experimental results demonstrate the effectiveness of the proposed algorithm, with more than 85% recognition accuracy for 11 activities and 3.2 h of extended battery life for mobile phones. Our energy-efficient recognition algorithm extends the battery time for activity recognition on mobile devices and contributes to health monitoring for pervasive healthcare.
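A minimal sketch of this kind of hierarchical scheme is given below, assuming a cheap time-domain test first separates static from dynamic windows and the more expensive frequency-domain feature is computed only for the dynamic ones; the thresholds, sampling rate and two-stage split are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch of a two-stage (hierarchical) recognizer: a cheap time-domain
# check separates static from dynamic windows, and FFT-based features are
# computed only for the dynamic ones. All thresholds are illustrative.
import numpy as np

FS = 50.0           # assumed sampling rate (Hz)
STATIC_STD = 0.05   # assumed threshold on acceleration-magnitude variability (g)

def classify_window(acc: np.ndarray) -> str:
    """acc: (N, 3) window of tri-axial acceleration, in g."""
    mag = np.linalg.norm(acc, axis=1)
    if mag.std() < STATIC_STD:                # stage 1: cheap time-domain test
        return "static (sit/stand/lie)"
    # Stage 2: frequency-domain feature, only paid for when the window is dynamic.
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))
    dominant_hz = np.argmax(spectrum) * FS / len(mag)
    return "walking-like" if dominant_hz < 2.5 else "running-like"

window = np.random.default_rng(2).normal([0.0, 0.0, 1.0], 0.01, size=(256, 3))
print(classify_window(window))   # low variability -> classified as static
```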
---
paper_title: A tutorial on human activity recognition using body-worn inertial sensors
paper_content:
The last 20 years have seen ever-increasing research activity in the field of human activity recognition. With activity recognition having considerably matured, so has the number of challenges in designing, implementing, and evaluating activity recognition systems. This tutorial aims to provide a comprehensive hands-on introduction for newcomers to the field of human activity recognition. It specifically focuses on activity recognition using on-body inertial sensors. We first discuss the key research challenges that human activity recognition shares with general pattern recognition and identify those challenges that are specific to human activity recognition. We then describe the concept of an Activity Recognition Chain (ARC) as a general-purpose framework for designing and evaluating activity recognition systems. We detail each component of the framework, provide references to related research, and introduce the best practice methods developed by the activity recognition research community. We conclude with the educational example problem of recognizing different hand gestures from inertial sensors attached to the upper and lower arm. We illustrate how each component of this framework can be implemented for this specific activity recognition problem and demonstrate how different implementations compare and how they impact overall recognition performance.
---
paper_title: Activity recognition using cell phone accelerometers
paper_content:
Mobile devices are becoming increasingly sophisticated and the latest generation of smart cell phones now incorporates many diverse and powerful sensors. These sensors include GPS sensors, vision sensors (i.e., cameras), audio sensors (i.e., microphones), light sensors, temperature sensors, direction sensors (i.e., magnetic compasses), and acceleration sensors (i.e., accelerometers). The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and data mining applications. In this paper we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity a user is performing. To implement our system we collected labeled accelerometer data from twenty-nine users as they performed daily activities such as walking, jogging, climbing stairs, sitting, and standing, and then aggregated this time series data into examples that summarize the user activity over 10- second intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users passively---just by having them carry cell phones in their pockets. Our work has a wide range of applications, including automatic customization of the mobile device's behavior based upon a user's activity (e.g., sending calls directly to voicemail if a user is jogging) and generating a daily/weekly activity profile to determine if a user (perhaps an obese child) is performing a healthy amount of exercise.
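To make the windowing-and-features pipeline described above concrete, the following sketch segments a raw tri-axial accelerometer stream into fixed 10-second windows, computes simple per-axis statistics, and trains an off-the-shelf classifier; the sampling rate, feature set, classifier choice and synthetic data are assumptions for illustration, not the exact configuration of the cited study.

```python
# Hedged sketch of a typical accelerometer activity-recognition pipeline:
# fixed-length windows -> per-axis statistical features -> supervised classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

FS = 20          # assumed sampling rate (Hz)
WIN = 10 * FS    # 10-second windows, as in the aggregation scheme above

def window_features(xyz: np.ndarray) -> np.ndarray:
    """xyz: (WIN, 3) window -> feature vector (mean, std, min, max per axis)."""
    return np.concatenate([xyz.mean(0), xyz.std(0), xyz.min(0), xyz.max(0)])

def segment(stream: np.ndarray, labels: np.ndarray):
    """Cut a labelled (N, 3) stream into non-overlapping windows."""
    X, y = [], []
    for start in range(0, len(stream) - WIN + 1, WIN):
        X.append(window_features(stream[start:start + WIN]))
        y.append(labels[start])              # one label per window start
    return np.array(X), np.array(y)

# Synthetic stand-in data, only so the sketch runs end to end.
rng = np.random.default_rng(0)
stream = rng.normal(size=(6000, 3))
labels = rng.integers(0, 3, size=6000)       # e.g. 0=sit, 1=walk, 2=jog

X, y = segment(stream, labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```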
---
paper_title: A Study on Human Activity Recognition Using Accelerometer Data from Smartphones
paper_content:
This paper describes how to recognize certain types of human physical activities using acceleration data generated by a user's cell phone. We propose a recognition system in which a new digital low-pass filter is designed in order to isolate the component of gravity acceleration from that of body acceleration in the raw data. The system was trained and tested in an experiment with multiple human subjects in real-world conditions. Several classifiers were tested using various statistical features. High-frequency and low-frequency components of the data were taken into account. We selected five classifiers each offering good performance for recognizing our set of activities and investigated how to combine them into an optimal set of classifiers. We found that using the average of probabilities as the fusion method could reach an overall accuracy rate of 91.15%.
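The gravity/body-acceleration separation step can be made concrete with a generic low-pass filter, as sketched below; the Butterworth design, cut-off frequency and sampling rate are common illustrative choices, not the specific filter designed in the paper above.

```python
# Hedged sketch: separating gravity from body acceleration with a low-pass
# filter. Cut-off (0.3 Hz) and order (3) are illustrative, not the paper's design.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 50.0          # assumed accelerometer sampling rate (Hz)
CUTOFF = 0.3       # Hz; gravity varies slowly compared with body motion

def split_gravity_body(acc: np.ndarray):
    """acc: (N, 3) raw acceleration -> (gravity, body) components."""
    b, a = butter(N=3, Wn=CUTOFF / (FS / 2.0), btype="low")
    gravity = filtfilt(b, a, acc, axis=0)    # zero-phase low-pass per axis
    body = acc - gravity                     # high-frequency residual
    return gravity, body

# Toy signal: constant gravity on z plus a 2 Hz "walking" oscillation on x.
t = np.arange(0, 10, 1 / FS)
acc = np.column_stack([0.5 * np.sin(2 * np.pi * 2 * t),
                       np.zeros_like(t),
                       9.81 * np.ones_like(t)])
g, body = split_gravity_body(acc)
print("mean recovered gravity (z):", round(g[:, 2].mean(), 2))
```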
---
paper_title: Real-time activity recognition in mobile phones based on its accelerometer data
paper_content:
Context awareness is one of the important keys in a pervasive and ubiquitous environment. Activity recognition using the accelerometer sensor is one of the context-aware topics that has attracted many researchers, even up until today. Inspired by this research, we carried out the present study, which is a continuation of our previous works where we explored the possibility of using the accelerometer embedded in smartphones to recognize basic user activities through a client/server architecture. In this paper, we present our work in exploring the influence of training data size on recognition accuracy when building the classifier model, by studying two algorithms, Naive Bayes and an instance-based classifier (IBk, k=3). The results show that 13 out of 18 possible combinations for both algorithms achieved their best accuracy with a 90% training data size, thus supporting the assumption that, in most cases, a larger training set gives better classification accuracy than a small one. Based on the outcome of the study, the approach is then implemented in ActiWare, an activity-aware application prototype that uses the built-in accelerometer sensor in smartphones to perform real-time/online activity recognition. The recognition process is done by utilizing available phone resources locally, without the involvement of any external server connection. ActiWare manages to exhibit encouraging results by recognizing basic user activities with relatively little confusion when tested.
---
paper_title: DTWDIR: AN ENHANCED DTW ALGORITHM FOR AUTISTIC CHILD BEHAVIOUR MONITORING
paper_content:
Autism has symptoms that can hardly be recognized in the early stages of the disease, and it affects the child's mental health in the long term. Autism can be identified by parents monitoring the child and diagnosed by psychiatrists using an international standard checklist. The checklist questions should be answered by the parent and psychiatrist to determine the risk level of autism (high, medium, or low risk). It is hard for parents to monitor more than 20 of a child's behaviours at the same time, quite apart from the lack of accuracy in answering most of these questions. We propose a system for monitoring autistic child behaviours by analysing accelerometer data collected from a wearable mobile device. The behaviours are recognized using a novel algorithm called DTWDir, which is based on calculating the displacement and direction between two signals. DTWDir is evaluated by comparing it to the KNN, classical Dynamic Time Warping (DTW), and One Dollar Recognition ($1) algorithms. The results show that the accuracy of DTWDir is higher than that of the others.
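DTWDir itself is specific to the cited work, but the classical dynamic time warping distance it builds on can be sketched directly; the implementation below is the textbook O(nm) dynamic-programming formulation, not the authors' displacement-and-direction variant.

```python
# Hedged sketch: classical DTW distance between two 1-D acceleration signals.
# This is the standard dynamic-programming recurrence, not the DTWDir variant.
import numpy as np

def dtw_distance(s: np.ndarray, t: np.ndarray) -> float:
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A time-warped copy of a signal stays close under DTW despite the length change.
a = np.sin(np.linspace(0, 2 * np.pi, 60))
b = np.sin(np.linspace(0, 2 * np.pi, 80))
print("DTW(a, b) =", round(dtw_distance(a, b), 3))
```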
---
paper_title: Daily activity recognition based on DNN using environmental sound and acceleration signals
paper_content:
We propose a new method of recognizing daily human activities based on a Deep Neural Network (DNN), using multimodal signals such as environmental sound and subject acceleration. We conduct recognition experiments to compare the proposed method to other methods such as a Support Vector Machine (SVM), using real-world data recorded continuously over 72 hours. Our proposed method achieved a frame accuracy rate of 85.5% and a sample accuracy rate of 91.7% when identifying nine different types of daily activities. Furthermore, the proposed method outperformed the SVM-based method when an additional "Other" activity category was included. Therefore, we demonstrate that DNNs are a robust method of daily activity recognition.
---
paper_title: Iterative Learning for Human Activity Recognition from Wearable Sensor Data
paper_content:
Wearable sensor technologies are a key component in the design of applications for human activity recognition, in areas like healthcare, sports and safety. In this paper, we present an iterative learning method to classify human locomotion activities extracted from the Opportunity dataset by implementing a data-driven architecture. Data collected by twelve 3D acceleration sensors and seven inertial measurement units are de-noised using a wavelet filter, prior to the extraction of statistical parameters of kinematical features, such as Principal Components Analysis and Singular Value Decomposition of roll, pitch, yaw and the norm of the axial components. A novel approach is proposed to minimize the number of samples required to classify walk, stand, lie and sit human locomotion activities based on these features. The methodology consists in an iterative extraction of the best candidates for building the training dataset. The best training candidates are selected when the Euclidean distance between an input data and its cluster’s centroid is larger than the mean plus the standard deviation of all Euclidean distances between all input data and their corresponding clusters. The resulting datasets are then used to train an SVM multi-class classifier that produces the lowest prediction error. The learning method presented in this paper ensures a high level of robustness to variations in the quality of input data while only using a much lower number of training samples and therefore a much shorter training time, which is an important aspect given the large size of the dataset.
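The training-candidate selection rule described above (keep a sample when its Euclidean distance to its cluster centroid exceeds the mean plus the standard deviation of all such distances) can be written down directly; in the sketch below the threshold follows that verbal description, while the use of k-means and the stand-in data are illustrative assumptions.

```python
# Hedged sketch of the distance-based selection of training candidates: a sample
# is kept when its distance to its cluster centroid exceeds mean + std of all
# centroid distances. The k-means clustering step is an illustrative choice.
import numpy as np
from sklearn.cluster import KMeans

def select_training_candidates(X: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    centroids = km.cluster_centers_[km.labels_]      # each sample's own centroid
    dists = np.linalg.norm(X - centroids, axis=1)
    threshold = dists.mean() + dists.std()
    return np.where(dists > threshold)[0]            # indices of kept samples

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                        # stand-in feature vectors
idx = select_training_candidates(X)
print(f"kept {len(idx)} of {len(X)} samples for training")
```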
---
paper_title: Smartphone-based monitoring system for activities of daily living for elderly people and their relatives etc.
paper_content:
We developed a smartphone-based monitoring system to allay the anxiety of elderly people and that of their relatives, friends and caregivers by unobtrusively monitoring an elderly person's activities of daily living. A smartphone of the elderly person continuously recognizes indoor-outdoor activities by using only built-in sensors and uploads the activity log to a web server. By accessing the server, relatives etc. at remote locations can browse the log to make sure the elderly person is safe and sound. We conducted an evaluation experiment and confirmed that the proposed system had practical recognition accuracy and satisfied the users' needs.
---
paper_title: An Experimental Comparison Between Seven Classification Algorithms for Activity Recognition
paper_content:
Daily activity recognition is one of the most important areas attracting the attention of researchers. Automatic classification of activities of daily living (ADL) can be used to promote a healthier lifestyle, though it can be challenging when it comes to persons with intellectual disabilities, the elderly, or children. Thus, developing a technique to recognize activities with high quality is critical for such applications. In this work, seven algorithms are developed and evaluated for the classification of everyday activities like climbing the stairs, drinking water, getting up from bed, pouring water, sitting down on a chair, standing up from a chair, and walking. The algorithms of concern are K-Nearest Neighbor, Artificial Neural Network, Naive Bayes, Dynamic Time Warping, the $1 recognizer, Support Vector Machine, and a novel classifier (D$1). We explore how these different algorithms perform with regard to recognizing everyday activities. We also present a technique based on $1 and DTW to enhance the recognition accuracy of ADL. Our results show that we can achieve up to 83% accuracy for seven different activities.
---
paper_title: Sensor Placement for Activity Detection Using Wearable Accelerometers
paper_content:
Activities of daily living are important for assessing changes in physical and behavioural profiles of the general population over time, particularly for the elderly and patients with chronic diseases. Although accelerometers are widely integrated with wearable sensors for activity classification, the positioning of the sensors and the selection of relevant features for different activity groups still pose interesting research challenges. This paper investigates wearable sensor placement at different body positions and aims to provide a framework that can answer the following questions: (i) What is the ideal sensor location for a given group of activities? (ii) Of the different time-frequency features that can be extracted from wearable accelerometers, which ones are most relevant for discriminating different activity types?
---
paper_title: Mobile Online Activity Recognition System Based on Smartphone Sensors
paper_content:
In this paper, we propose an efficient and flexible framework for activity recognition based on smartphone sensors, the so-called Mobile Online Activity Recognition System (MOARS). This system comprises data collection, training, activity recognition, and feedback monitoring. It allows users to put their smartphones in any position and at any orientation. In our proposed framework, a set of power-based and frequency-based features is extracted from sensor data. Then, Random Forest, Naive Bayes, K-Nearest Neighbor (KNN), and Support Vector Machine (SVM) classification algorithms are deployed for recognizing a set of user activities. Our framework dynamically takes into account real-time user feedback to increase the accuracy of activity prediction. This framework can be applied to intelligent mobile applications. A number of experiments are carried out to show the high accuracy of MOARS in detecting user activities when walking or driving a motorbike.
---
paper_title: Accelerometry: providing an integrated, practical method for long-term, ambulatory monitoring of human movement.
paper_content:
Accelerometry offers a practical and low cost method of objectively monitoring human movements, and has particular applicability to the monitoring of free-living subjects. Accelerometers have been used to monitor a range of different movements, including gait, sit-to-stand transfers, postural sway and falls. They have also been used to measure physical activity levels and to identify and classify movements performed by subjects. This paper reviews the use of accelerometer-based systems in each of these areas. The scope and applicability of such systems in unsupervised monitoring of human movement are considered. The different systems and monitoring techniques can be integrated to provide a more comprehensive system that is suitable for measuring a range of different parameters in an unsupervised monitoring context with free-living subjects. An integrated approach is described in which a single, waist-mounted accelerometry system is used to monitor a range of different parameters of human movement in an unsupervised setting.
---
paper_title: Wearable Sensor Data Classification for Human Activity Recognition Based on an Iterative Learning Framework †
paper_content:
The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.
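A simplified sketch of the sample-selection idea (the candidates farthest from their cluster centroids feed an SVM); k-means clustering and Euclidean distance are assumptions here, and the distance measure presented in the paper may differ.

```python
# Simplified sketch of selecting training candidates that lie farthest from
# the centroids of data clusters, then training a multi-class SVM.
# The clustering method and distance measure of the original paper may differ.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def select_far_samples(X, y, n_clusters=10, fraction=0.2):
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    n_keep = max(1, int(fraction * len(X)))
    idx = np.argsort(dists)[-n_keep:]          # farthest from their centroid
    return X[idx], y[idx]

# usage sketch:
# X_sel, y_sel = select_far_samples(X_train, y_train)
# clf = SVC(kernel="rbf").fit(X_sel, y_sel)
```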
---
paper_title: Optimal Placement of Accelerometers for the Detection of Everyday Activities
paper_content:
This article describes an investigation to determine the optimal placement of accelerometers for the purpose of detecting a range of everyday activities. The paper investigates the effect of combining data from accelerometers placed at various bodily locations on the accuracy of activity detection. Eight healthy males participated within the study. Data were collected from six wireless tri-axial accelerometers placed at the chest, wrist, lower back, hip, thigh and foot. Activities included walking, running on a motorized treadmill, sitting, lying, standing and walking up and down stairs. The Support Vector Machine provided the most accurate detection of activities of all the machine learning algorithms investigated. Although data from all locations provided similar levels of accuracy, the hip was the best single location to record data for activity detection using a Support Vector Machine, providing small but significantly better accuracy than the other investigated locations. Increasing the number of sensing locations from one to two or more statistically increased the accuracy of classification. There was no significant difference in accuracy when using two or more sensors. It was noted, however, that the difference in activity detection using single or multiple accelerometers may be more pronounced when trying to detect finer grain activities. Future work shall therefore investigate the effects of accelerometer placement on a larger range of these activities.
---
paper_title: From smart to deep: Robust activity recognition on smartwatches using deep learning
paper_content:
The use of deep learning for the activity recognition performed by wearables, such as smartwatches, is an understudied problem. To advance current understanding in this area, we perform a smartwatch-centric investigation of activity recognition under one of the most popular deep learning methods — Restricted Boltzmann Machines (RBM). This study includes a variety of typical behavior and context recognition tasks related to smartwatches (such as transportation mode, physical activities and indoor/outdoor detection) to which RBMs have previously never been applied. Our findings indicate that even a relatively simple RBM-based activity recognition pipeline is able to outperform a wide-range of common modeling alternatives for all tested activity classes. However, usage of deep models is also often accompanied by resource consumption that is unacceptably high for constrained devices like watches. Therefore, we complement this result with a study of the overhead of specifically RBM-based activity models on representative smartwatch hardware (the Snapdragon 400 SoC, present in many commercial smartwatches). These results show, contrary to expectation, RBM models for activity recognition have acceptable levels of resource use for smartwatch-class hardware already on the market. Collectively, these two experimental results make a strong case for more widespread adoption of deep learning techniques within smartwatch designs moving forward.
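An RBM-based recognition pipeline can be prototyped, for example, with scikit-learn's BernoulliRBM feeding a linear classifier; this is a much simpler stand-in for the on-watch models evaluated in the paper.

```python
# Minimal RBM-based recognition pipeline: unsupervised RBM feature learning
# followed by a linear classifier. Inputs are assumed scaled to [0, 1]; this
# prototype is far simpler than the smartwatch models studied in the paper.
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rbm_pipeline = Pipeline([
    ("scale", MinMaxScaler()),                      # RBM expects [0, 1] inputs
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
# rbm_pipeline.fit(X_train, y_train); rbm_pipeline.score(X_test, y_test)
```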
---
paper_title: Detection of Daily Activities and Sports With Wearable Sensors in Controlled and Uncontrolled Conditions
paper_content:
Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89%, which was only one percentage point lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17 percentage points when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum of more complex activities in real-life settings.
---
paper_title: A Study on Human Activity Recognition Using Accelerometer Data from Smartphones
paper_content:
This paper describes how to recognize certain types of human physical activities using acceleration data generated by a user's cell phone. We propose a recognition system in which a new digital low-pass filter is designed in order to isolate the component of gravity acceleration from that of body acceleration in the raw data. The system was trained and tested in an experiment with multiple human subjects in real-world conditions. Several classifiers were tested using various statistical features. High-frequency and low-frequency components of the data were taken into account. We selected five classifiers each offering good performance for recognizing our set of activities and investigated how to combine them into an optimal set of classifiers. We found that using the average of probabilities as the fusion method could reach an overall accuracy rate of 91.15%.
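A hedged sketch of the two ideas mentioned: separating gravity from body acceleration with a low-pass filter, and fusing classifiers by averaging predicted probabilities. A standard Butterworth filter with a 0.3 Hz cutoff is used as an assumption in place of the paper's custom filter design.

```python
# Sketch of separating gravity from body acceleration with a low-pass filter
# and of fusing classifiers by averaging their predicted probabilities.
# A standard Butterworth filter stands in for the paper's custom digital
# low-pass filter; the 0.3 Hz cutoff is an assumption.
import numpy as np
from scipy.signal import butter, filtfilt

def split_gravity_body(acc, fs, cutoff=0.3, order=3):
    """acc: n_samples x 3 raw acceleration, fs: sampling rate in Hz."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    gravity = filtfilt(b, a, acc, axis=0)     # low-frequency component
    body = acc - gravity                      # remaining body acceleration
    return gravity, body

def fuse_by_average(classifiers, X):
    """Average of per-class probabilities over several fitted classifiers."""
    probas = np.mean([clf.predict_proba(X) for clf in classifiers], axis=0)
    return probas.argmax(axis=1)
```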
---
paper_title: Deep Activity Recognition Models with Triaxial Accelerometers
paper_content:
Despite the widespread installation of accelerometers in almost all mobile phones and wearable devices, activity recognition using accelerometers is still immature due to the poor recognition accuracy of existing recognition methods and the scarcity of labeled training data. We consider the problem of human activity recognition using triaxial accelerometers and deep learning paradigms. This paper shows that deep activity recognition models (a) provide better recognition accuracy of human activities, (b) avoid the expensive design of handcrafted features in existing systems, and (c) utilize the massive unlabeled acceleration samples for unsupervised feature extraction. Moreover, a hybrid approach of deep learning and hidden Markov models (DL-HMM) is presented for sequential activity recognition. This hybrid approach integrates the hierarchical representations of deep activity recognition models with the stochastic modeling of temporal sequences in the hidden Markov models. We show substantial recognition improvement on real world datasets over state-of-the-art methods of human activity recognition using triaxial accelerometers.
---
paper_title: Real-time activity recognition in mobile phones based on its accelerometer data
paper_content:
Context awareness is one of the important keys in a pervasive and ubiquitous environment. Activity recognition utilizing the accelerometer sensor is one of the context-aware research topics that has attracted many researchers, even up until today. Inspired by this research, we carried out the present study, a continuation of our previous works, where we explored the possibility of using the accelerometer embedded in smartphones for recognizing basic user activities through a client/server architecture. In this paper, we present our work in exploring the influence of training data size on recognition accuracy when building a classifier model, by studying two algorithms, Naive Bayes and an instance-based classifier (IBk, k=3). The results show that 13 out of 18 possible combinations for both algorithms gave 90% training data size as the best accuracy, supporting the assumption that a bigger training data set gives better classification accuracy than a small one in most cases. Based on the outcome of the study, the approach is then implemented in ActiWare, an activity-aware application prototype that uses the built-in accelerometer sensor in smartphones to perform real-time/online activity recognition. The recognition process is done by utilizing available phone resources locally, without the involvement of any external server connection. ActiWare exhibits encouraging results by recognizing basic user activities with relatively little confusion when tested.
---
paper_title: On preserving statistical characteristics of accelerometry data using their empirical cumulative distribution
paper_content:
The majority of activity recognition systems in wearable computing rely on a set of statistical measures, such as means and moments, extracted from short frames of continuous sensor measurements to perform recognition. These features implicitly quantify the distribution of data observed in each frame. However, feature selection remains challenging and labour intensive, rendering a more generic method to quantify distributions in accelerometer data much desired. In this paper we present the ECDF representation, a novel approach to preserve characteristics of arbitrary distributions for feature extraction, which is particularly suitable for embedded applications. In extensive experiments on six publicly available datasets we demonstrate that it outperforms common approaches to feature extraction across a wide variety of tasks.
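A compact sketch of an ECDF-style frame representation: each axis is summarized by its empirical distribution evaluated at equally spaced quantiles plus the mean; the number of points is a free parameter of the method.

```python
# Compact sketch of an ECDF-style frame representation: each sensor axis is
# summarized by its empirical distribution evaluated at n equally spaced
# quantiles, plus the mean. n_points is a free parameter.
import numpy as np

def ecdf_features(frame, n_points=15):
    """frame: n_samples x n_axes array of accelerometer readings."""
    qs = np.linspace(0, 100, n_points)
    feats = []
    for axis in range(frame.shape[1]):
        feats.extend(np.percentile(frame[:, axis], qs))  # inverse ECDF samples
        feats.append(frame[:, axis].mean())
    return np.array(feats)
```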
---
paper_title: Deep Recurrent Neural Networks for Human Activity Recognition
paper_content:
Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
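A minimal unidirectional LSTM classifier for fixed-length windows (PyTorch) gives the flavour of such models; the bidirectional, cascaded and variable-length architectures studied in the paper are more elaborate than this sketch.

```python
# Minimal unidirectional LSTM classifier for windows of tri-axial sensor data
# shaped (batch, time, channels). Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_channels=3, hidden=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])     # classify from the last time step

# usage sketch:
# model = LSTMClassifier()
# logits = model(torch.randn(8, 128, 3))   # 8 windows of 128 samples each
```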
---
paper_title: Deep learning for human activity recognition: A resource efficient implementation on low-power devices
paper_content:
Human Activity Recognition provides valuable contextual information for wellbeing, healthcare, and sport applications. Over the past decades, many machine learning approaches have been proposed to identify activities from inertial sensor data for specific applications. Most methods, however, are designed for offline processing rather than processing on the sensor node. In this paper, a human activity recognition technique based on a deep learning methodology is designed to enable accurate and real-time classification for low-power wearable devices. To obtain invariance against changes in sensor orientation, sensor placement, and in sensor acquisition rates, we design a feature generation process that is applied to the spectral domain of the inertial data. Specifically, the proposed method uses sums of temporal convolutions of the transformed input. Accuracy of the proposed approach is evaluated against the current state-of-the-art methods using both laboratory and real world activity datasets. A systematic analysis of the feature generation parameters and a comparison of activity recognition computation times on mobile devices and sensor nodes are also presented.
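The idea of convolving over a spectral-domain representation can be sketched as follows; the spectral transform and layer sizes are illustrative assumptions, not the paper's exact design.

```python
# Rough sketch of convolving over a spectral-domain representation: each
# window is mapped to per-axis magnitude spectra, then a small 1-D
# convolutional network with global pooling produces class scores.
import numpy as np
import torch
import torch.nn as nn

def to_spectral(window):
    """window: n_samples x 3 -> tensor of shape (3, n_freq_bins)."""
    spec = np.abs(np.fft.rfft(window - window.mean(axis=0), axis=0))
    return torch.tensor(spec.T, dtype=torch.float32)

class SpectralConvNet(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over the spectrum
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, 3, n_freq_bins)
        return self.fc(self.conv(x).squeeze(-1))
```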
---
paper_title: A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices
paper_content:
The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis where deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology, which combines features learned from inertial sensor data together with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral domain preprocessing is used before the data are passed onto the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
---
paper_title: Activity recognition using cell phone accelerometers
paper_content:
Mobile devices are becoming increasingly sophisticated and the latest generation of smart cell phones now incorporates many diverse and powerful sensors. These sensors include GPS sensors, vision sensors (i.e., cameras), audio sensors (i.e., microphones), light sensors, temperature sensors, direction sensors (i.e., magnetic compasses), and acceleration sensors (i.e., accelerometers). The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and data mining applications. In this paper we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity a user is performing. To implement our system we collected labeled accelerometer data from twenty-nine users as they performed daily activities such as walking, jogging, climbing stairs, sitting, and standing, and then aggregated this time series data into examples that summarize the user activity over 10-second intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users passively---just by having them carry cell phones in their pockets. Our work has a wide range of applications, including automatic customization of the mobile device's behavior based upon a user's activity (e.g., sending calls directly to voicemail if a user is jogging) and generating a daily/weekly activity profile to determine if a user (perhaps an obese child) is performing a healthy amount of exercise.
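A generic sketch of aggregating a raw accelerometer stream into fixed-length (e.g. 10-second) examples described by simple per-axis statistics; the authors' exact feature set may differ.

```python
# Sketch of aggregating a raw accelerometer stream into fixed-length examples
# (e.g. 10-second windows) described by simple per-axis statistics.
import numpy as np

def segment_and_featurize(acc, fs, window_sec=10.0):
    """acc: n_samples x 3 stream, fs: sampling rate in Hz."""
    win = int(window_sec * fs)
    examples = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        feats = []
        for axis in range(w.shape[1]):
            s = w[:, axis]
            feats += [s.mean(), s.std(), np.abs(np.diff(s)).mean()]
        mag = np.linalg.norm(w, axis=1)                  # resultant magnitude
        feats += [mag.mean(), mag.std()]
        examples.append(feats)
    return np.array(examples)
```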
---
paper_title: An activity monitoring system for elderly care using generative and discriminative models
paper_content:
An activity monitoring system allows many applications to assist in care giving for elderly in their homes. In this paper we present a wireless sensor network for unintrusive observations in the home and show the potential of generative and discriminative models for recognizing activities from such observations. Through a large number of experiments using four real world datasets we show the effectiveness of the generative hidden Markov model and the discriminative conditional random fields in activity recognition.
---
paper_title: Machine Recognition of Human Activities: A Survey
paper_content:
The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) "actions" and 2) "activities." "Actions" are characterized by simple motion patterns typically executed by a single human. "Activities" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.
---
paper_title: GPARS: a general-purpose activity recognition system
paper_content:
The fundamental problem of the existing Activity Recognition (AR) systems is that these are not general-purpose. An AR system trained in an environment would only be applicable to that environment. Such a system would not be able to recognize the new activities of interest. In this paper we propose a General-Purpose Activity Recognition System (GPARS) using simple and ubiquitous sensors. It would be applicable to almost any environment and would have the ability to handle growing amounts of activities and sensors in a graceful manner (Scalable). Given a set of activities to monitor, object names (with embedded sensors) and their corresponding locations, the GPARS first mines activity knowledge from the web, and then uses them as the basis of AR. The novelty of our system, compared to the existing general-purpose systems, lies in: (1) it uses more robust activity models, (2) it significantly reduces the mining time. We have tested our system with three real world datasets. It is observed that the accuracy of activity recognition using our system is more than 80%. Our proposed mechanism yields significant improvement (more than 30%) in comparison with its counterpart.
---
paper_title: Activity recognition based on RFID object usage for smart mobile devices
paper_content:
Activity recognition is a core aspect of ubiquitous computing applications. In order to deploy activity recognition systems in the real world, we need simple sensing systems with lightweight computational modules to accurately analyze sensed data. In this paper, we propose a simple method to recognize human activities using simple object information involved in activities. We apply activity theory for representing complex human activities and propose a penalized naive Bayes classifier for performing activity recognition. Our results show that our method reduces computation up to an order of magnitude in both learning and inference without penalizing accuracy, when compared to hidden Markov models and conditional random fields.
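The baseline flavour of object-usage-based recognition (without the penalization proposed in the paper) can be reproduced with a multinomial naive Bayes over counts of objects touched in an activity window, as in this toy sketch with hypothetical object names and labels.

```python
# Plain multinomial naive Bayes over counts of RFID-tagged objects used in an
# activity window. This reproduces only the baseline idea; the penalized
# naive Bayes classifier proposed in the paper adds a penalty term on top.
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# each sample is a dict of object-usage counts observed during one window
windows = [{"kettle": 2, "cup": 1}, {"toothbrush": 1, "tap": 2},
           {"pan": 1, "stove": 1, "tap": 1}]
labels = ["make_tea", "brush_teeth", "cook"]

model = make_pipeline(DictVectorizer(sparse=False), MultinomialNB())
model.fit(windows, labels)
print(model.predict([{"cup": 1, "kettle": 1}]))   # -> ['make_tea']
```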
---
paper_title: Fall Detection Based on Movement and Smart Phone Technology
paper_content:
Nowadays, recognizing human activities is an important subject; it is widely exploited and applied to many fields in real life, especially health care and context-aware applications. Research achievements are mainly focused on activities of daily living, which are useful for providing advice in health care applications. A fall is one of the biggest risks to the health and well-being of the elderly, especially in independent living, because falling accidents may be caused by a heart attack. Recognizing this activity still remains a difficult research area. Many systems equipped with wearable sensors have been proposed, but they are not useful if users forget to wear them or lack the ability to adapt to mobile systems without specific wearable sensors. In this paper, we develop a novel method based on analyzing the change of acceleration and orientation when a fall occurs. In this study, we recruited five volunteers for our experiment, covering various fall categories. The results are effective for recognizing fall activity. Our system is implemented on a Google Android smartphone, which already includes accelerometer and orientation sensors. This popular phone is used to collect accelerometer data, and the results show the feasibility of our method and contribute significantly to fall detection in health care.
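A deliberately simplified threshold-based sketch of the idea: a fall candidate is flagged when the acceleration magnitude shows a strong transient followed by a near-still period and a large orientation change. The thresholds below are placeholders, not the values determined by the authors.

```python
# Very simplified threshold-based fall detection from acceleration magnitude
# and orientation change. All thresholds are illustrative placeholders.
import numpy as np

def detect_fall(acc, pitch, fs, acc_thresh=2.5, angle_thresh=60.0,
                still_thresh=0.3):
    """acc: n x 3 acceleration in g, pitch: length-n orientation in degrees."""
    mag = np.linalg.norm(acc, axis=1)
    post = int(1.0 * fs)                      # inspect ~1 s after the impact
    for i in np.where(mag > acc_thresh)[0]:
        if i == 0 or i + post >= len(mag):
            continue
        orientation_change = abs(np.mean(pitch[i:i + post]) - np.mean(pitch[:i]))
        lying_still = np.all(np.abs(mag[i + post // 2:i + post] - 1.0)
                             < still_thresh)
        if lying_still and orientation_change > angle_thresh:
            return True                       # plausible fall candidate
    return False
```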
---
paper_title: Understanding Transit Scenes: A Survey on Human Behavior-Recognition Algorithms
paper_content:
Visual surveillance is an active research topic in image processing. Transit systems are actively seeking new or improved ways to use technology to deter and respond to accidents, crime, suspicious activities, terrorism, and vandalism. Human behavior-recognition algorithms can be used proactively for prevention of incidents or reactively for investigation after the fact. This paper describes the current state-of-the-art image-processing methods for automatic-behavior-recognition techniques, with focus on the surveillance of human activities in the context of transit applications. The main goal of this survey is to provide researchers in the field with a summary of progress achieved to date and to help identify areas where further research is needed. This paper provides a thorough description of the research on relevant human behavior-recognition methods for transit surveillance. Recognition methods include single person (e.g., loitering), multiple-person interactions (e.g., fighting and personal attacks), person-vehicle interactions (e.g., vehicle vandalism), and person-facility/location interactions (e.g., object left behind and trespassing). A list of relevant behavior-recognition papers is presented, including behaviors, data sets, implementation details, and results. In addition, algorithm's weaknesses, potential research directions, and contrast with commercial capabilities as advertised by manufacturers are discussed. This paper also provides a summary of literature surveys and developments of the core technologies (i.e., low-level processing techniques) used in visual surveillance systems, including motion detection, classification of moving objects, and tracking.
---
paper_title: Human activity recognition: Various paradigms
paper_content:
Action and activity representation and recognition are very demanding research areas in computer vision and man-machine interaction. Though plenty of research has been done in this arena, the field is still immature. Over the last decades, extensive research methodologies have been developed for human activity analysis and recognition for various applications. This paper overviews and analyzes various recent methods for human activity recognition. We attempt to sum up the various methods related to human motion representation and recognition, make an effort to categorize the most notable recent methods, and finally identify the shortcomings and challenges to be addressed in the future to develop robust action recognition approaches. This work endeavors to encompass research related exclusively to human action recognition, mainly from 2001 to date, with a critical assessment of the methods. We also present our own work, which addresses some of the shortcomings. It will widely benefit researchers in understanding and comparing the related advancements in this area.
---
paper_title: A framework for whole-body gesture recognition from video feeds
paper_content:
The growth of technology continues to make both hardware and software affordable and accessible, creating space for the emergence of new applications. Rapid growth in computer vision and image processing applications has been evident in recent years. One area of interest in vision and image processing is automated identification of objects in real-time or recorded video streams and analysis of these identified objects. An important topic of research in this context is the identification of humans and the interpretation of their actions. Human motion identification and video processing have been used in critical crime investigations and highly technical applications, usually involving skilled human experts. Although the technology has many uses that can be applied to everyday activities, it has not been put into such use due to requirements for sophisticated technology and human skill and high implementation costs. This paper presents a system, a major part of a project called moveIt (movements interpreted), that receives video as input and processes it to recognize gestures of the objects of interest (the whole human body). The basic functionality of this system is to receive a video stream as input and produce a gesture analysis of each object as output, through a staged process with object detection, tracking, modeling and gesture recognition as intermediate steps.
---
paper_title: User, device and orientation independent human activity recognition on mobile phones: challenges and a proposal
paper_content:
Smart phones equipped with a rich set of sensors are explored as alternative platforms for human activity recognition in the ubiquitous computing domain. However, there exist challenges that should be tackled before the successful acceptance of such systems by the masses. In this paper, we particularly focus on the challenges arising from the differences in user behavior and in the hardware. To investigate the impact of these factors on the recognition accuracy, we performed tests with 20 different users focusing on the recognition of basic locomotion activities using the accelerometer, gyroscope and magnetic field sensors. We investigated the effect of feature types, to represent the raw data, and the use of linear acceleration for user, device and orientation-independent activity recognition.
---
paper_title: Comparison of fusion methods based on DST and DBN in human activity recognition
paper_content:
Ambient assistive living environments require sophisticated information fusion and reasoning techniques to accurately identify activities of a person under care. In this paper, we explain, compare and discuss the application of two powerful fusion methods, namely dynamic Bayesian networks (DBN) and Dempster-Shafer theory (DST), for human activity recognition. Both methods are described, the implementation of activity recognition based on these methods is explained, and model acquisition and composition are suggested. We also provide functional comparison of both methods as well as performance comparison based on the publicly available activity dataset. Our findings show that in performance and applicability, both DST and DBN are very similar; however, significant differences exist in the ways the models are obtained. DST being top-down and knowledge-based, differs significantly in qualitative terms, when compared with DBN, which is data-driven. These qualitative differences between DST and DBN should therefore dictate the selection of the appropriate model to use, given a particular activity recognition application.
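The DST side of such a fusion rests on Dempster's rule of combination; a minimal implementation for mass functions over subsets of a small frame of discernment (represented as frozensets) is shown below with hypothetical activity names.

```python
# Minimal implementation of Dempster's rule of combination for two mass
# functions defined over subsets of a small frame of discernment.
def combine_dempster(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# usage sketch with hypothetical activities 'cook' and 'clean':
# m_sensor1 = {frozenset({'cook'}): 0.6, frozenset({'cook', 'clean'}): 0.4}
# m_sensor2 = {frozenset({'cook'}): 0.5, frozenset({'clean'}): 0.3,
#              frozenset({'cook', 'clean'}): 0.2}
# print(combine_dempster(m_sensor1, m_sensor2))
```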
---
paper_title: YOLO9000: Better, Faster, Stronger
paper_content:
We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.
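Detectors in this family rely on IoU-based non-maximum suppression over predicted boxes; the generic implementation below is a standard component sketch, not code taken from YOLOv2 itself.

```python
# Plain IoU-based non-maximum suppression, a standard post-processing step in
# YOLO-style detectors. Boxes are numpy arrays of (x1, y1, x2, y2).
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.45):
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        if len(order) == 1:
            break
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep
```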
---
paper_title: View invariant human action recognition using histograms of 3D joints
paper_content:
In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action 3D dataset and our algorithm outperforms Li et al. [25] on most of the cases.
---
paper_title: Head detection using Kinect camera and its application to fall detection
paper_content:
This article proposes a head detection algorithm for depth video provided by a Kinect camera and its application to fall detection. The proposed algorithm first detects possible head positions and then based on these positions, recognizes people by detecting the head and the shoulders. Searching for head positions is rapid because we only look for the head contour on the human outer contour. The human recognition is a modification of HOG (Histogram of Oriented Gradient) for the head and the shoulders. Compared with the original HOG, our algorithm is more robust to human articulation and back bending. The fall detection algorithm is based on the speed of the head and the body centroid and their distance to the ground. By using both the body centroid and the head, our algorithm is less affected by the centroid fluctuation. Besides, we also present a simple but effective method to verify the distance from the ground to the head and the centroid.
---
paper_title: A state classification method based on space-time signal processing using SVM for wireless monitoring systems
paper_content:
In this paper we focus on improving state classification methods that can be implemented in elderly care monitoring systems. The authors' group has previously proposed an indoor monitoring and security system (array sensor) that uses only one array antenna as the receiver. The clear advantages over conventional systems are the reduced privacy concern compared with closed-circuit television (CCTV) cameras and the elimination of installation difficulties. Our approach differs from the previous detection method, which uses an array of sensors and a threshold that can classify only two states: nothing happening and something happening. In this paper, we present a state classification method that uses only one feature obtained from the radio wave propagation, assisted by multiclass support vector machines (SVM), to classify the occurring states. The feature is the first eigenvector that spans the signal subspace of interest. The proposed method can be applied not only to indoor environments but also to outdoor environments such as vehicle monitoring systems. We performed experiments to classify seven states in an indoor setting: “No event,” “Walking,” “Entering into a bathtub,” “Standing while showering,” “Sitting while showering,” “Falling down,” and “Passing out;” and two states in an outdoor setting: “Normal state” and “Abnormal state.” The experimental results show that we can achieve 96.5% and 100% classification accuracy for indoor and outdoor settings, respectively.
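The single feature described (the first eigenvector spanning the signal subspace) can be obtained from the sample covariance of the array snapshots, as in this sketch; array geometry, calibration and other preprocessing are omitted.

```python
# Sketch of extracting the first eigenvector of the array snapshot covariance
# (the feature described in the paper) and feeding it to a multiclass SVM.
import numpy as np
from sklearn.svm import SVC

def first_eigenvector_feature(snapshots):
    """snapshots: n_antennas x n_samples complex array measurements."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    v = eigvecs[:, -1]                               # largest eigenvalue
    v = v * np.exp(-1j * np.angle(v[0]))             # fix the phase ambiguity
    return np.concatenate([v.real, v.imag])          # real-valued feature

# clf = SVC(kernel="rbf").fit(X_features, y_states)  # multiclass by default
```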
---
paper_title: Towards Automatic Feature Extraction for Activity Recognition from Wearable Sensors: A Deep Learning Approach
paper_content:
This paper presents a novel approach for activity recognition from accelerometer data. Existing approaches usually extract hand-crafted features that are used as input for classifiers. However, hand-crafted features are data dependent and could not be generalized for different application domains. To overcome these limitations, our approach relies on matrix factorization for dimensionality reduction and deep learning algorithm such as a stacked auto-encoder to automatically learn suitable features, which will be then fed into a softmax classifier for classification. Our approach has potential advantages over existing approaches in terms of automatic feature extraction and generalization across different application domains. The proposed approach is validated using extensive experiments on various publicly available datasets. We empirically demonstrate that our proposed approach accurately discriminates between human activities and performs better than several state-of-the-art approaches.
---
| Title: Survey on Human Activity Recognition based on Acceleration Data
Section 1: INTRODUCTION
Description 1: This section introduces the importance and applications of human activity recognition (HAR) and provides an overview of the paper structure.
Section 2: Sensor Approaches
Description 2: This section discusses the different types of sensors used in HAR, comparing external sensors and wearable sensors, and highlighting the limitations and motivations for using wearable sensors.
Section 3: Challenges Face HAR System Designers
Description 3: This section outlines the challenges faced by HAR system designers, including the selection of attributes, the construction of portable and cost-effective systems, and the need for online recognition for certain applications.
Section 4: Machine Learning Techniques
Description 4: This section describes the various machine learning techniques used in HAR, distinguishing between traditional machine learning approaches and deep learning techniques.
Section 5: ONLINE VS. OFFLINE HAR SYSTEMS
Description 5: This section compares online and offline HAR systems, discussing their training and classification phases and detailing the advantages and drawbacks of each approach.
Section 6: TRADITIONAL AND DEEP LEARNING TECHNIQUES
Description 6: This section contrasts traditional machine learning algorithms with deep learning techniques in the context of HAR, examining their performance, accuracy, and recent trends in their application.
Section 7: ISSUES AND CHALLENGES
Description 7: This section explores the issues and challenges associated with HAR, including sensor types, energy consumption, sampling rates, the need for high-end machines, and differences between machine learning and deep learning approaches.
Section 8: CONCLUSION
Description 8: This section summarizes the survey’s findings, highlighting the state-of-the-art in HAR using acceleration data, comparing traditional and deep learning algorithms, and discussing their respective advantages and challenges. |
The MPEG-7 Visual standard for content description - an overview | 9 | ---
paper_title: Digital video coding standards and their role in video communications
paper_content:
The efficient digital representation of image and video signals has been subject of considerable research over the past 20 years. With the growing availability of digital transmission links, progress in signal processing, VLSI technology and image compression research, visual communications has become more feasible than ever. Digital video coding technology has developed into a mature field and a diversity of products has been developed-targeted for a wide range of emerging applications, such as video on demand, digital TV/HDTV broadcasting, and multimedia image/video database services. With the increased commercial interest in video communications the need for international image and video coding standards arose. Standardization of video coding algorithms holds the promise of large markets for video communication equipment. Interoperability of implementations from different vendors enables the consumer to access video from a wider range of services and VLSI implementations of coding algorithms conforming to international standards can be manufactured at considerably reduced costs. The purpose of this paper is to provide an overview of today's image and video coding standards and their role in video communications. The different coding algorithms developed for each standard are reviewed and the commonalities between the standards are discussed. >
---
paper_title: MPEG-7 Systems: overview
paper_content:
This paper gives an overview of Part 1 of ISO/IEC 15938 (MPEG-7 Systems). It first presents the objectives of the MPEG-7 Systems activity. In the MPEG-1 and MPEG-2 standards, "Systems" referred only to overall architecture, multiplexing, and synchronization. In MPEG-4, in addition to these issues, the Systems part encompasses interactive scene description, content description, and programmability. MPEG-7 brings new challenges to the Systems expertise, such as languages for description representation, binary representation of descriptions, and delivery of descriptions either separately or jointly with the audio-visual content. The paper then presents the description of the MPEG-7 Systems specification, starting from the general architecture up to the description of the individual MPEG-7 Systems tools. Finally, a conclusion describes the status of the standardization effort, as well as future extensions of the specification.
---
paper_title: MPEG-7 Visual Shape Descriptors
paper_content:
This paper describes techniques and tools for shape representation and matching, developed in the context of MPEG-7 standardization. The application domains for each descriptor are considered, and the contour-based shape descriptor is presented in some detail. Example applications are also shown.
---
paper_title: MPEG-7 multimedia description schemes
paper_content:
MPEG-7 multimedia description schemes (MDSs) are metadata structures for describing and annotating audio-visual (AV) content. The description schemes (DSs) provide a standardized way of describing in XML the important concepts related to AV content description and content management in order to facilitate searching, indexing, filtering, and access. The DSs are defined using the MPEG-7 description definition language, which is based on the XML Schema language, and are instantiated as documents or streams. The resulting descriptions can be expressed in a textual form (i.e., human readable XML for editing, searching, filtering) or compressed binary form (i.e., for storage or transmission). In this paper, we provide an overview of the MPEG-7 MDSs and describe their targeted functionality and use in multimedia applications.
---
paper_title: Overview of MPEG-7 audio
paper_content:
MPEG-7 is a new ISO standard that facilitates searching for media content much as current text-based search engines ease retrieval of HTML content. This paper gives an overview of the MPEG-7 audio standard, in terms of the applications it might support, its structure, the process by which it was developed, and its specific descriptors and description schemes.
---
| Title: The MPEG-7 Visual Standard for Content Description - An Overview
Section 1: Abstract
Description 1: Provide a summary of the paper including its aims, methodologies, and key points of MPEG-7 standard development.
Section 2: Introduction
Description 2: Discuss the increasing volume of image and video collections, challenges in accessing visual information, and the emergence of content-based image retrieval.
Section 3: Scope of MPEG-7 Visual Standard
Description 3: Explain the goal and objective of the MPEG-7 Visual Standard, the types of visual descriptors specified, and their application examples.
Section 4: Development of the Standard
Description 4: Describe the standard development framework, the process followed by MPEG for creating the standard, and detail the Experimentation Model and Core Experiments.
Section 5: Visual Descriptors for Images and Video
Description 5: Provide an overview of the various visual descriptors developed, including their classifications and applications.
Section 6: Visual Color Descriptors
Description 6: Detail the color descriptors, their applications, and how they are used in image and video retrieval.
Section 7: Visual Texture Descriptors
Description 7: Cover the texture descriptors, their properties, and their suitability for different applications.
Section 8: Visual Shape Descriptors
Description 8: Discuss the shape descriptors, both 2-D and 3-D, and their use in image and video applications.
Section 9: Motion Descriptors for Video
Description 9: Describe the descriptors for capturing motion in video sequences, including camera motion and object motion descriptors.
Section 10: Summary and Conclusion
Description 10: Summarize the content of the paper, the importance of MPEG-7 Visual descriptors, and their potential applications. |
APPLICATION OF NEAR- AND MID-INFRARED SPECTROSCOPY COMBINED WITH CHEMOMETRICS FOR DISCRIMINATION AND AUTHENTICATION OF HERBAL PRODUCTS: A REVIEW - | 7 | ---
paper_title: An authenticity survey of herbal medicines from markets in China using DNA barcoding
paper_content:
Adulterant herbal materials are a threat to consumer safety. In this study, we used DNA barcoding to investigate the proportions and varieties of adulterant species in traditional Chinese medicine (TCM) markets. We used a DNA barcode database of TCM (TCMD) that was established by our group to investigate 1436 samples representing 295 medicinal species from 7 primary TCM markets in China. The results indicate that ITS2 barcodes could be generated for most of the samples (87.7%) using a standard protocol. Of the 1260 samples, approximately 4.2% were identified as adulterants. Adulteration was concentrated in medicinal species such as Ginseng Radix et Rhizoma (Renshen), Radix Rubi Parvifolii (Maomeigen), Dalbergiae odoriferae Lignum (Jiangxiang), Acori Tatarinowii Rhizoma (Shichangpu), Inulae Flos (Xuanfuhua), Lonicerae Japonicae Flos (Jinyinhua), Acanthopanacis Cortex (Wujiapi) and Bupleuri Radix (Chaihu). The survey revealed that adulterant species are present in the Chinese market, and these adulterants pose a risk to consumer health. Thus, regulatory measures should be adopted immediately. We suggest that a traceable platform based on DNA barcode sequences be established for TCM market supervision.
---
paper_title: Quality control of herbal medicines.
paper_content:
Different chromatographic and electrophoretic techniques commonly used in the instrumental inspection of herbal medicines (HM) are first comprehensively reviewed. Chemical fingerprints obtained by chromatographic and electrophoretic techniques, especially by hyphenated chromatographies, are strongly recommended for the purpose of quality control of herbal medicines, since they might represent appropriately the "chemical integrities" of the herbal medicines and therefore be used for authentication and identification of the herbal products. Based on the conception of phytoequivalence, the chromatographic fingerprints of herbal medicines could be utilized for addressing the problem of quality control of herbal medicines. Several novel chemometric methods for evaluating the fingerprints of herbal products, such as the method based on information theory, similarity estimation, chemical pattern recognition, spectral correlative chromatogram (SCC), multivariate resolution, etc. are discussed in detail with examples, which showed that the combination of chromatographic fingerprints of herbal medicines and the chemometric evaluation might be a powerful tool for quality control of herbal products.
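Similarity estimation between chromatographic fingerprints, one of the chemometric evaluations mentioned, is often done with the correlation coefficient or the cosine (congruence) value between a sample fingerprint and a reference fingerprint, as in this sketch.

```python
# Common similarity measures for comparing a sample's chromatographic
# fingerprint with a reference fingerprint. Fingerprints are assumed to be
# aligned vectors of intensities sampled on the same retention-time grid.
import numpy as np

def fingerprint_similarity(sample, reference):
    sample = np.asarray(sample, dtype=float)
    reference = np.asarray(reference, dtype=float)
    corr = np.corrcoef(sample, reference)[0, 1]          # correlation coefficient
    cosine = sample @ reference / (np.linalg.norm(sample) *
                                   np.linalg.norm(reference))
    return corr, cosine
```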
---
paper_title: Assessment of herbal medicinal products: Challenges, and opportunities to increase the knowledge base for safety assessment
paper_content:
Although herbal medicinal products (HMP) have been perceived by the public as relatively low risk, there has been more recognition of the potential risks associated with this type of product as the use of HMPs increases. Potential harm can occur via inherent toxicity of herbs, as well as from contamination, adulteration, plant misidentification, and interactions with other herbal products or pharmaceutical drugs. Regulatory safety assessment for HMPs relies on both the assessment of cases of adverse reactions and the review of published toxicity information. However, the conduct of such an integrated investigation has many challenges in terms of the quantity and quality of information. Adverse reactions are under-reported, product quality may be less than ideal, herbs have a complex composition and there is lack of information on the toxicity of medicinal herbs or their constituents. Nevertheless, opportunities exist to capitalise on newer information to increase the current body of scientific evidence. Novel sources of information are reviewed, such as the use of poison control data to augment adverse reaction information from national pharmacovigilance databases, and the use of more recent toxicological assessment techniques such as predictive toxicology and omics. The integration of all available information can reduce the uncertainty in decision making with respect to herbal medicinal products. The example of Aristolochia and aristolochic acids is used to highlight the challenges related to safety assessment, and the opportunities that exist to more accurately elucidate the toxicity of herbal medicines.
---
paper_title: Quality and safety of herbal medical products: regulation and the need for quality assurance along the value chains.
paper_content:
Herbal medicines and products derived from them are a diverse group of products for which different (and often limited) levels of evidence are available. As importantly, such products generally vary in their composition and are at the end of an often poorly understood value chain, which often links producers in biodiversity-rich countries with the large markets in the North. This paper discusses the current regulatory framework for such herbal medical products (with a focus on the UK) and shows, using examples from our own metabolomic research on Curcuma longa L. (turmeric, Zingiberaceae), how value chains impact the composition and quality (and thus the safety) of such products. Overall, our recent research demonstrates the need for studying the links between producers and consumers of commodities produced in provider countries, and that plant metabolomics offers a novel way of assessing the chemical variability along a value chain.
---
paper_title: Potential and limitations of non-targeted fingerprinting for authentication of food in official control
paper_content:
The investigation of so-called food fingerprints offers great potential for the characterization and identity verification of food, and this kind of non-targeted analysis has therefore gained increasing importance in recent years. These applications are usually based on spectroscopic and spectrometric data providing the capability for a comprehensive characterization of the investigated matrices. The subsequent multivariate statistical data analysis enables a general identification of many deviations from the expected product composition. Besides classical tests of food authenticity, a comprehensive analysis that also allows the detection of hazardous or safety-relevant manipulations and violations of the respective laws, e.g. with regard to non-authorized food additives or a prohibited use of technological processes, is urgently needed in food control. In the literature, several approaches already pursue the non-targeted observation of abnormalities in various foods, covering a broad variety of analytical methods. This review provides a current overview of the applicability of this approach using classic spectroscopic as well as spectrometric analytical techniques, on the basis of examples of the three most investigated food matrices: honey, olive oil and wine. Furthermore, difficulties and challenges regarding the use of food fingerprinting in official food control are discussed.
---
paper_title: Curcuminoid’s Content and Fingerprint Analysis for Authentication and Discrimination of Curcuma xanthorrhiza from Curcuma longa by High-Performance Liquid Chromatography-Diode Array Detector
paper_content:
An accurate and reliable method for the authentication and discrimination of Curcuma xanthorrhiza (CX) from Curcuma longa (CL), based on determining the curcuminoid content and analyzing the HPLC fingerprint combined with discriminant analysis (DA), was developed. Using the proposed method, it was found that CL had higher amounts of all curcuminoid compounds compared to CX. Therefore, these two closely related species could be authenticated and discriminated by the amounts of curcuminoids present in the samples. Authentication and discrimination of the two species were also achieved by comparing their HPLC fingerprint chromatograms using typical marker peaks. Discriminant analysis provided further support: the combination of HPLC fingerprint analysis and DA gave excellent results, with the two species clearly separated, including CX samples adulterated with CL. The developed method was successfully used for quality control of the two plants.
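A minimal sketch of the "curcuminoid content plus discriminant analysis" workflow is given below, assuming three curcuminoid peak areas as features. The numbers are synthetic placeholders and the model is illustrative, not the authors' calibration.

```python
# Classify C. longa (CL) vs. C. xanthorrhiza (CX) from HPLC peak-area features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Columns: [bisdemethoxycurcumin, demethoxycurcumin, curcumin] peak areas.
cl = rng.normal(loc=[4.0, 6.0, 12.0], scale=0.8, size=(20, 3))  # C. longa
cx = rng.normal(loc=[1.5, 2.5, 5.0],  scale=0.8, size=(20, 3))  # C. xanthorrhiza

X = np.vstack([cl, cx])
y = np.array(["CL"] * 20 + ["CX"] * 20)

lda = LinearDiscriminantAnalysis().fit(X, y)
unknown = np.array([[3.8, 5.5, 11.0]])          # unknown / possibly adulterated sample
print(lda.predict(unknown), lda.predict_proba(unknown).round(3))
```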
---
paper_title: Review of validation and reporting of non-targeted fingerprinting approaches for food authentication.
paper_content:
Food fingerprinting approaches are expected to become a very potent tool in authentication processes aiming at a comprehensive characterization of complex food matrices. By non-targeted spectrometric or spectroscopic chemical analysis with a subsequent (multivariate) statistical evaluation of acquired data, food matrices can be investigated in terms of their geographical origin, species variety or possible adulterations. Although many successful research projects have already demonstrated the feasibility of non-targeted fingerprinting approaches, their uptake and implementation into routine analysis and food surveillance is still limited. In many proof-of-principle studies, the prediction ability of only one data set was explored, measured within a limited period of time using one instrument within one laboratory. Thorough validation strategies that guarantee reliability of the respective data basis and that allow conclusion on the applicability of the respective approaches for its fit-for-purpose have not yet been proposed. Within this review, critical steps of the fingerprinting workflow were explored to develop a generic scheme for multivariate model validation. As a result, a proposed scheme for "good practice" shall guide users through validation and reporting of non-targeted fingerprinting results. Furthermore, food fingerprinting studies were selected by a systematic search approach and reviewed with regard to (a) transparency of data processing and (b) validity of study results. Subsequently, the studies were inspected for measures of statistical model validation, analytical method validation and quality assurance measures. In this context, issues and recommendations were found that might be considered as an actual starting point for developing validation standards of non-targeted metabolomics approaches for food authentication in the future. Hence, this review intends to contribute to the harmonization and standardization of food fingerprinting, both required as a prior condition for the authentication of food in routine analysis and official control.
---
paper_title: USE AND ABUSE OF CHEMOMETRICS IN CHROMATOGRAPHY
paper_content:
This article presents a selection of the relevant issues that emerge at the interface between chromatography and chemometrics. In the first part, we present advantages and drawbacks of applying signal-enhancement, warping and mixture-analysis methods. In the second part, we discuss typical examples of misuse and abuse of chemometrics that can occur with those less familiar with the data-processing approaches. Finally, we conclude that close collaboration between the communities of chromatographers and chemometricians will allow a deeper insight into the chromatographic systems being analyzed and permit new chromatographic problems to be solved in an efficient, elegant manner.
---
paper_title: The future of NMR-based metabolomics
paper_content:
The two leading analytical approaches to metabolomics are mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectroscopy. Although currently overshadowed by MS in terms of numbers of compounds resolved, NMR spectroscopy offers advantages both on its own and coupled with MS. NMR data are highly reproducible and quantitative over a wide dynamic range and are unmatched for determining structures of unknowns. NMR is adept at tracing metabolic pathways and fluxes using isotope labels. Moreover, NMR is non-destructive and can be utilized in vivo. NMR results have a proven track record of translating in vitro findings to in vivo clinical applications.
---
paper_title: Chemometrics coupled to vibrational spectroscopy and spectroscopic imaging for the analysis of solid-phase pharmaceutical products: A brief review on non-destructive analytical methods
paper_content:
Abstract This brief review reports on selected case studies aimed at verifying the authenticity of medicinal products, content uniformity of tablets and polymorphic forms in final products, and monitoring pharmaceutical cocrystallization processes. The studies combine chemometrics with vibrational spectroscopy or spectroscopic imaging, leading to non-destructive analytical methods. These methodologies allow one to analyze intact pharmaceutical formulations. Emphasis is directed to: (a) fighting against counterfeit pharmaceutical products, (b) spatial distribution of active pharmaceutical ingredients (API) in final products, (c) occurrence of polymorphic transitions in commercial tablets due to unsuitable storage conditions or excipient moisture, which could affect the apparent solubility, and (d) solubility enhancement of API polymorphic forms through a cocrystallization process.
---
paper_title: Methods for detection of pork adulteration in veal product based on FT-NIR spectroscopy for laboratory, industrial and on-site analysis
paper_content:
Abstract Three different methods for near infrared (NIR) based multivariate analyses were developed to reveal deliberate adulteration or accidental contamination of a pure veal product with pork and pork fat. More precisely, methods were established for laboratory use of high-performance Fourier transform-NIR (FT-NIR) desktop devices, methods suitable for industrial purposes such as in- and on-line application with a fibre optic probe, and methods applying a handheld spectrometer ready for on-site analyses. The methods were developed for the detection of pork adulteration in the meat and fat part of veal sausages. For this purpose, sausages were prepared in-house based on a commercial veal product. Adulterations up to 50% (in 10% steps) with pork and pork fat were analyzed, respectively. Principal component analyses (PCA) were developed for every setup, with prior data pre-treatment steps including wavelength selection, scattering corrections and derivatives of the spectral data. PCA scores were used as input data for support vector machine (SVM) classification and validation. Advantages and disadvantages of the equipment were discussed and the limits of detection for the setups were determined. Measurements were also carried out directly through the polymer packaging of the samples and compared to measurements through quartz cuvettes. Meat and fat adulteration could be detected down to the lowest level of contamination (10%) with the laboratory setup and the industrial fibre optics setup, for measurements through both quartz cuvettes and polymer packaging. Analyses with the on-site setup led to successful separation down to the lowest degree of contamination (10%, measurement through quartz cuvettes) for meat adulteration, and down to 20% and 40% contamination for fat adulteration when measuring through quartz cuvettes and through polymer packaging, respectively.
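The PCA-scores-into-SVM chain described above can be sketched as follows on synthetic "spectra"; the number of components, kernel and train/test split are illustrative assumptions rather than the paper's settings.

```python
# PCA compression of spectra followed by SVM classification (pure vs. adulterated).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_wavelengths = 300
pure = rng.normal(size=(60, n_wavelengths))                 # pure veal samples
adulterated = rng.normal(loc=0.3, size=(60, n_wavelengths)) # samples with added pork
X = np.vstack([pure, adulterated])
y = np.array([0] * 60 + [1] * 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print("test accuracy:", round(model.score(X_te, y_te), 3))
```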
---
paper_title: Edible oils and fats authentication by Fourier transform Raman spectrometry
paper_content:
The European project FAIR-CT96-5053 concerned the application of Fourier transform Raman and infrared spectroscopy in food chemistry and quality control. Our research mainly concerned the study of the potential of Raman spectroscopy and the comparison of its results with those achieved by infrared spectroscopy. The discrimination of virgin olive oil from other edible oils, and the detection and quantification of virgin olive oil adulteration, were investigated with this fast and non-destructive analytical technique.
---
paper_title: Trends in Chemometrics: Food Authentication, Microbiology, and Effects of Processing
paper_content:
In the last decade, the use of multivariate statistical techniques developed for analytical chemistry has been adopted widely in food science and technology. Usually, chemometrics is applied when there is a large and complex dataset, in terms of sample numbers, types, and responses. The results are used for authentication of geographical origin, farming systems, or even to trace adulteration of high value-added commodities. In this article, we provide an extensive practical and pragmatic overview on the use of the main chemometrics tools in food science studies, focusing on the effects of process variables on chemical composition and on the authentication of foods based on chemical markers. Pattern recognition methods, such as principal component analysis and cluster analysis, have been used to associate the level of bioactive components with in vitro functional properties, although supervised multivariate statistical methods have been used for authentication purposes. Overall, chemometrics is a useful aid when extensive, multiple, and complex real-life problems need to be addressed in a multifactorial and holistic context. Undoubtedly, chemometrics should be used by governmental bodies and industries that need to monitor the quality of foods, raw materials, and processes when high-dimensional data are available. We have focused on practical examples and listed the pros and cons of the most used chemometric tools to help the user choose the most appropriate statistical approach for analysis of complex and multivariate data.
---
paper_title: Pharmaceutical Applications of Chemometric Techniques
paper_content:
Chemometrics involves the application of various statistical methods for drawing vital information from various manufacturing-related processes. Multiway chemometric models such as parallel factor analysis (PARAFAC), Tucker-3 and N-way partial least squares (N-PLS), and bilinear models such as principal component regression (PCR) and partial least squares (PLS), are discussed in the paper. Chemometric approaches can be used to analyze data obtained from various instruments, including near infrared (NIR), attenuated total reflectance Fourier transform infrared (ATR-FTIR), high-performance liquid chromatography (HPLC), and terahertz pulse spectroscopy. The technique has been used in the quality assurance and quality control of pharmaceutical solid dosage forms. Moreover, the application of chemometric methods in the evaluation of the properties of pharmaceutical powders and in tablet parametric tests is also discussed in the review. Chemometrics is suggested as a useful method for real-time in-process testing and is a valuable process analytical tool.
---
paper_title: Discrimination of Rhizoma Corydalis from two sources by near-infrared spectroscopy supported by the wavelet transform and least-squares support vector machine methods
paper_content:
Abstract Near-infrared spectroscopy (NIRS) was applied for direct and rapid collection of characteristic spectra from Rhizoma Corydalis, a common traditional Chinese medicine (TCM), with the aim of developing a method for the classification of such substances according to their geographical origin. The powdered form of the TCM was collected from two such different sources, and their NIR spectra were pretreated by the wavelet transform (WT) method. A training set of Rhizoma Corydalis spectral objects was modeled with the least-squares support vector machine (LS-SVM), radial basis function artificial neural network (RBF-ANN), partial least-squares discriminant analysis (PLS-DA) and K-nearest neighbors (KNN) methods. All four chemometric models performed reasonably on the basis of spectral recognition and prediction criteria, and the LS-SVM method performed best with over 95% success on both criteria. Generally, there were no statistically significant differences among the four methods. Thus, the NIR spectroscopic method supported by any of the four chemometric models, especially the LS-SVM, is recommended for classifying TCM Rhizoma Corydalis samples according to their geographical origin.
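A rough sketch of this pretreatment-plus-classification chain is shown below: wavelet denoising of the spectra followed by two of the classifiers. scikit-learn has no LS-SVM, so an RBF-kernel SVC is used as a stand-in, and all data and parameters are illustrative.

```python
# Wavelet-transform pretreatment of NIR spectra followed by classifier comparison.
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def wt_denoise(spectrum, wavelet="db4", level=3, keep=0.1):
    """Soft-threshold detail coefficients and reconstruct the spectrum."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    thr = keep * np.max(np.abs(coeffs[-1]))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

rng = np.random.default_rng(7)
origin_a = rng.normal(loc=0.0, size=(30, 256)) + np.sin(np.linspace(0, 6, 256))
origin_b = rng.normal(loc=0.2, size=(30, 256)) + np.sin(np.linspace(0, 6, 256))
X = np.array([wt_denoise(s) for s in np.vstack([origin_a, origin_b])])
y = np.array([0] * 30 + [1] * 30)

for name, clf in [("RBF-SVC (LS-SVM stand-in)", SVC(kernel="rbf", C=10.0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=3))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```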
---
paper_title: Application of near infrared spectroscopy for authentication of Picea abies seed provenance
paper_content:
Authentication of seed provenance is an important issue to avoid the negative impact of poor adaptation of progenies when planted outside their natural environmental conditions. The objective of this study was to evaluate the potential of near infrared (NIR) spectroscopy as a rapid and non-destructive method for authentication of Picea abies L. Karst seed provenances. For this purpose, five seed lots each from Sweden, Finland, Poland and Lithuania were used. NIR reflectance spectra were recorded on individual seeds (n = 150 seeds × 5 seed lots × 4 provenances = 3000 seeds) using an XDS Rapid Content Analyzer from 780 to 2500 nm with a resolution of 0.5 nm. A classification model was developed by orthogonal projection to latent structures-discriminant analysis. The performance of the computed classification model was validated using two test sets: internal (the same seed lots as the model but excluded during model development; n = 600 seeds) and external (seed lots not included in the model; n = 1158 seeds). For the internal test, the model correctly recognized 99% of Swedish, Finnish and Polish samples and 97% of Lithuanian seeds. For the external test samples, the model correctly assigned 81% of Swedish, 96% of Finnish, 98% of Lithuanian and 93% of Polish seeds to their respective classes. The mean classification accuracy was 99 and 95% for the internal and external test set, respectively. The spectral differences among seed lots were attributed to differences in the chemical composition of the seeds, presumably fatty acids and proteins, which are the dominant storage reserves in P. abies seeds. In conclusion, the results demonstrate that NIR spectroscopy is a very promising method for monitoring putative seed provenances and in seed certification.
---
paper_title: Rapid discrimination of geographical origin and evaluation of antioxidant activity of Salvia miltiorrhiza var. alba by Fourier transform near infrared spectroscopy.
paper_content:
Radix Salvia miltiorrhiza Bge. var. alba C.Y. Wu and H.W. Li and Radix S. miltiorrhiza belong to the same genus. S. miltiorrhiza var. alba has a unique effectiveness for thromboangiitis in addition to the therapeutic efficacy of S. miltiorrhiza. It exhibits antioxidant activity (AA), while its quality and efficacy also vary with geographic location. Therefore, a rapid and nondestructive method based on Fourier transform near infrared spectroscopy (FT-NIRS) was developed for discrimination of the geographical origin and evaluation of the AA of S. miltiorrhiza var. alba. The discrimination of geographical origin was achieved by using discriminant analysis and the accuracy was 100%. Partial least squares (PLS) regression was employed to establish the model for evaluation of AA by NIRS. The spectral regions were selected by the interval PLS (i-PLS) method. Different pre-treatment methods were compared for the spectral pre-processing. The final optimal results of the PLS model showed that the correlation coefficients in the calibration set (Rc) and the prediction set (Rp), the root mean square error of prediction (RMSEP) and the residual prediction deviation (RPD) were 0.974, 0.950, 0.163 mg mL(-1) and 2.66, respectively. The results demonstrated that NIRS combined with chemometric methods could be a rapid and nondestructive tool to discriminate the geographical origin and evaluate the AA of S. miltiorrhiza var. alba. The developed NIRS method might have potential application in high-throughput screening of a large number of raw S. miltiorrhiza var. alba samples for AA.
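The calibration-and-prediction workflow with RMSEP and RPD figures of merit, as described above, can be sketched as follows on synthetic spectra; the number of latent variables and the data are illustrative assumptions.

```python
# PLS calibration of a quantitative property with RMSEP and RPD on a prediction set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 200))                      # NIR spectra (80 samples)
true_coef = rng.normal(size=200)
y = X @ true_coef * 0.01 + rng.normal(scale=0.1, size=80)  # reference AA values

X_cal, X_pred, y_cal, y_ref = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
y_hat = pls.predict(X_pred).ravel()

rmsep = np.sqrt(np.mean((y_ref - y_hat) ** 2))      # root mean square error of prediction
rpd = np.std(y_ref, ddof=1) / rmsep                 # residual prediction deviation
rp = np.corrcoef(y_ref, y_hat)[0, 1]
print(f"Rp={rp:.3f}  RMSEP={rmsep:.3f}  RPD={rpd:.2f}")
```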
---
paper_title: Fourier transform near- and mid-infrared spectroscopy can distinguish between the commercially important Pelargonium sidoides and its close taxonomic ally P. reniforme
paper_content:
Abstract Pelargonium sidoides is indigenous to South Africa and abundant in the Eastern Cape Province. Several herbal products have been formulated using P. sidoides of which Umckaloabo® is probably the most popular and successfully marketed in Germany. The objective of this study was to discriminate between P. sidoides and Pelargonium reniforme by FT-IR spectroscopy. Absorbance spectra were collected for P. sidoides (n = 96) and its close taxonomic ally P. reniforme (n = 57) in the near infrared (NIR) and mid infrared (MIR) regions. The spectroscopic data were analysed using chemometric computations including principal component analysis and orthogonal projections to latent structures discriminant analysis. Phytochemical variation of 5.79% in the NIR dataset (R2X(cum) = 0.962; Q2(cum) = 0.918) and 9.22% variation in the MIR dataset (R2X(cum) = 0.497; Q2(cum) = 0.658) was responsible for the separation of the two species. Seven absorption areas were identified as putative biomarkers responsible for the differences between the two species. These results indicate that FT-NIR and FT-MIR spectroscopy can be used to discriminate between these two closely related species which occupy a sympatric distribution in South Africa.
---
paper_title: Discrimination and prediction of cultivation age and parts of Panax ginseng by Fourier-transform infrared spectroscopy combined with multivariate statistical analysis
paper_content:
Panax ginseng C.A. Meyer is a herb used for medicinal purposes, and its discrimination according to cultivation age has been an important and practical issue. This study employed Fourier-transform infrared (FT-IR) spectroscopy with multivariate statistical analysis to obtain a prediction model for discriminating cultivation ages (5 and 6 years) and three different parts (rhizome, tap root, and lateral root) of P. ginseng. The optimal partial-least-squares regression (PLSR) models for discriminating ginseng samples were determined by selecting normalization methods, number of partial-least-squares (PLS) components, and variable influence on projection (VIP) cutoff values. The best prediction model for discriminating 5- and 6-year-old ginseng was developed using tap root, vector normalization applied after the second differentiation, one PLS component, and a VIP cutoff of 1.0 (based on the lowest root-mean-square error of prediction value). In addition, for discriminating among the three parts of P. ginseng, optimized PLSR models were established using data sets obtained from vector normalization, two PLS components, and VIP cutoff values of 1.5 (for 5-year-old ginseng) and 1.3 (for 6-year-old ginseng). To our knowledge, this is the first study to provide a novel strategy for rapidly discriminating the cultivation ages and parts of P. ginseng using FT-IR by selected normalization methods, number of PLS components, and VIP cutoff values.
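Variable selection by a VIP cutoff, as used above, can be illustrated with the standard VIP formula applied to a fitted scikit-learn PLS model; the data and the 1.0 cutoff below are illustrative, not the study's models.

```python
# Compute variable influence on projection (VIP) scores from a fitted PLS model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    t = pls.x_scores_          # (n_samples, n_components)
    w = pls.x_weights_         # (n_features, n_components), unit-norm columns
    q = pls.y_loadings_        # (n_targets, n_components)
    p = w.shape[0]
    # Sum of squares of y explained by each component.
    ss = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)
    return np.sqrt(p * (w ** 2 @ ss) / ss.sum())

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 50))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.2, size=60)   # 5 informative variables

pls = PLSRegression(n_components=3).fit(X, y)
vip = vip_scores(pls)
print("variables with VIP > 1.0:", np.where(vip > 1.0)[0])
```

Variables below the chosen cutoff would then be discarded and the PLS model refitted on the reduced variable set.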
---
paper_title: Identification of geographical origin of Lignosus samples using Fourier transform infrared and two-dimensional infrared correlation spectroscopy
paper_content:
Abstract Lignosus spp. is a medicinal mushroom that has been used as a folk remedy for ‘clearing heat’, eliminating phlegm, ‘moistening the lungs’ and as an anti-breast cancer agent. The objective of this study was to identify the active chemical constituents of the mushroom from a limited number of samples by using Fourier transform infrared (FTIR) and two-dimensional correlation Fourier transform infrared spectroscopy (2DIR). The sample M26/08 was purchased from a Chinese medicine shop in Kuala Lumpur, while M49/07 and M23/08 were collected from Semenyih and Kuala Lipis, respectively. The three samples had strong absorption peaks corresponding to the stretching vibration of the conjugated carbonyl C=O group. Both fresh samples M49/07 and M23/08 showed an identical peak at 1655 cm−1, whereas M26/08 showed a stretching vibration at 1648 cm−1. The peaks from 1260 cm−1 onwards were assigned to carbohydrates, including saccharides. The spectrum of M26/08 showed a region from 1260 cm−1 to 950 cm−1 which was 99.4% similar to that of M23/08. The chemical constituents of M26/08 and M23/08 were closely correlated (r = 0.97), whereas the correlation coefficient of M26/08 and M49/07 was 0.94. The use of second derivative and 2DIR spectroscopy enhanced the distinct differences to a more significant level. Although the geographical origin of M26/08 was unknown, its origin was determined by comparing it with M49/07 and M23/08. The visual and colorful 2DIR spectra provided dynamic structural information on the chemical components analyzed and demonstrated a powerful and useful approach for comparing the spectra of different samples.
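The synchronous 2D correlation map and the inter-sample correlation coefficients mentioned above can be computed along the following lines; the spectra are synthetic Gaussian bands, not the Lignosus data.

```python
# Synchronous 2D correlation spectrum (Noda-style) and pairwise spectral correlation.
import numpy as np

rng = np.random.default_rng(11)
wavenumbers = np.linspace(400, 1800, 350)
spectra = np.array([np.exp(-((wavenumbers - c) / 40.0) ** 2) +
                    0.02 * rng.normal(size=wavenumbers.size)
                    for c in (1655, 1648, 1650)])      # three samples

# Synchronous map: covariance of mean-centered dynamic spectra.
dynamic = spectra - spectra.mean(axis=0)
synchronous = dynamic.T @ dynamic / (spectra.shape[0] - 1)
print("synchronous map shape:", synchronous.shape)

# Pairwise correlation between whole spectra (e.g., sample 0 vs. sample 2).
r = np.corrcoef(spectra[0], spectra[2])[0, 1]
print("correlation coefficient:", round(r, 3))
```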
---
paper_title: Spectroscopy: Developments in instrumentation and analysis
paper_content:
This review presents the characteristics, advantages, limits and potential of three spectroscopic techniques: near-infrared spectroscopy (NIR), mid-infrared spectroscopy (MIR) and Raman spectroscopy. The theoretical aspects related with these techniques, the information that can supplied and the main features of the instrumentation are presented and briefly discussed. The last part of the review concerns the application of the spectroscopy to food analysis, with special emphasis on the lipid analysis. The illustrations and examples have been chosen to demonstrate the importance of spectroscopic techniques both in process (on-line) control and in laboratories for the analysis of major or minor compounds.
---
paper_title: Mid-Infrared Spectroscopy Coupled with Chemometrics: A Tool for the Analysis of Intact Food Systems and the Exploration of Their Molecular Structure-Quality Relationships - A Review
paper_content:
Public interest in food quality and methods of production has increased significantly in recent decades, due in part to changes in eating habits, consumer behavior, and the increased industrialization and globalization of food supply chains. Demand for high levels of quality and safety in food production obviously requires high standards in quality assurance and process control; satisfying this demand in turn requires appropriate analytical tools for food analysis both during and after production. Desirable features of such tools include speed, ease-of-use, minimal or no sample preparation, and the avoidance of sample destruction. These features are characteristic of a range of spectroscopic methods including the mid-infrared (MIR). While it is true that near-infrared (NIR) spectroscopy has achieved greater uptake by the food industry, reported applications of MIR in this sector have increased over the past decade or more. Foods represent significant analytical challenges. They are highly complex, variable and can be found in a number of different physical states: these include solids, dilute solutions, emulsions, foams, highly visco-elastic forms, and glassy states.
---
paper_title: Quality control of herbal medicines by using spectroscopic techniques and multivariate statistical analysis.
paper_content:
Herbal medicines play an important role in modern human life and have significant effects in treating diseases; however, the quality and safety of these herbal products have now become a serious issue due to increasing pollution of air, water, soil, etc. The present study proposes Fourier transform infrared spectroscopy (FTIR) along with the statistical method principal component analysis (PCA) to identify and discriminate herbal medicines for quality control. Herbal plants were characterized using FTIR spectroscopy. Characteristic peaks (strong and weak) were marked for each herbal sample in the fingerprint region (400–2000 cm−1). The ratio of the areas of any two marked characteristic peaks was found to be nearly constant for the same plant from different regions, and thus the present idea suggests an additional discrimination method for herbal medicines. PCA clusters the herbal medicines into different groups, clearly showing that this method can adequately discriminate different herbal medicines using FTIR data. Toxic metal contents (Cd, Pb, Cr, and As) were determined and the results compared with the permissible daily intake limits of heavy metals proposed by the World Health Organization (WHO).
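The peak-area-ratio idea can be illustrated with a simple trapezoidal integration of two characteristic bands; the spectrum and band windows below are illustrative only.

```python
# Ratio of the areas of two characteristic FTIR bands as a discrimination feature.
import numpy as np

rng = np.random.default_rng(13)
wavenumber = np.linspace(400, 2000, 1600)
absorbance = (np.exp(-((wavenumber - 1630) / 25.0) ** 2) +
              0.6 * np.exp(-((wavenumber - 1050) / 30.0) ** 2) +
              0.01 * rng.normal(size=wavenumber.size))

def band_area(x, y, lo, hi):
    """Trapezoidal area of the band between wavenumbers lo and hi."""
    mask = (x >= lo) & (x <= hi)
    xs, ys = x[mask], y[mask]
    return float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)))

ratio = band_area(wavenumber, absorbance, 1580, 1700) / \
        band_area(wavenumber, absorbance, 1000, 1120)
print("characteristic peak-area ratio:", round(ratio, 3))
```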
---
paper_title: Assessment of Herbal Medicines by Chemometrics - Assisted Interpretation of FTIR Spectra
paper_content:
Pharmacognostic analysis of medicinal herbs remains a challenging issue for analytical chemists, as herbs are complicated mixtures. Analytical separation techniques, for example high performance liquid chromatography (HPLC), gas chromatography (GC) and mass spectrometry (MS), are among the most popular methods of choice for quality control of raw materials and finished herbal products. The application of infrared (IR) spectroscopy in herbal analysis is still very limited compared to its applications in other areas (the food and beverage industry, microbiology, pharmaceuticals, etc.). This article attempts to expand the use of FTIR spectroscopy and, at the same time, to create interest among prospective researchers in herbal analysis. A case study was conducted by incorporating appropriate chemometric methods (Principal Component Analysis, PCA, and Soft Independent Modelling of Class Analogy, SIMCA) as tools for extracting relevant chemical information from the obtained infrared data. The developed method can be used as a quality control tool for rapid authentication of a wide variety of herbal samples.
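A hedged sketch of a SIMCA-style class model (one PCA model per authentic class, with acceptance based on the Q residual) is given below; thresholds and data are illustrative, not those of the case study.

```python
# SIMCA-style classification: per-class PCA models with Q-residual acceptance limits.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(21)
train = {"herb_A": rng.normal(0.0, 1.0, size=(40, 100)),
         "herb_B": rng.normal(0.8, 1.0, size=(40, 100))}

class_models = {}
for label, X in train.items():
    mean = X.mean(axis=0)
    pca = PCA(n_components=5).fit(X - mean)
    resid = (X - mean) - pca.inverse_transform(pca.transform(X - mean))
    q = np.sum(resid ** 2, axis=1)                          # training Q residuals
    class_models[label] = (mean, pca, np.percentile(q, 95))  # 95% acceptance limit

def simca_classify(x):
    accepted = []
    for label, (mean, pca, q_limit) in class_models.items():
        r = (x - mean) - pca.inverse_transform(pca.transform((x - mean)[None, :]))[0]
        if np.sum(r ** 2) <= q_limit:
            accepted.append(label)
    return accepted or ["none (possible adulterant)"]

print(simca_classify(rng.normal(0.0, 1.0, size=100)))
```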
---
paper_title: Fast discrimination of traditional Chinese medicine according to geographical origins with FTIR spectroscopy and advanced pattern recognition techniques.
paper_content:
Using Fourier transform infrared (FTIR) spectroscopy combined with three kinds of pattern recognition techniques, 53 traditional Chinese medicine danshen samples were rapidly discriminated according to their geographical origins. The results, ascertained by principal component analysis (PCA), showed that discrimination using FTIR spectroscopy is feasible. An effective model was built by employing Soft Independent Modeling of Class Analogy (SIMCA) together with PCA, and 82% of the samples were discriminated correctly. Using an artificial neural network (ANN) based on the back propagation (BP) algorithm, the origins of danshen were completely classified.
---
paper_title: APPLICATION OF NEAR- AND MID-INFRARED SPECTROSCOPY COMBINED WITH CHEMOMETRICS FOR DISCRIMINATION AND AUTHENTICATION OF HERBAL PRODUCTS: A REVIEW -
paper_content:
Herbal medicines and their preparations have been commonly used as preventive and health-promoting agents around the world, especially in developing countries. Motivated by economic profit, high-priced herbal medicines may be substituted or adulterated with less expensive ones; therefore, authentication methods must be developed to counter such adulteration practices. Owing to their fingerprinting capability, near-infrared (NIR) and mid-infrared (MIR) spectroscopy offer fast and reliable techniques for the authentication of herbal medicines. The data generated during authentication of herbal medicines are complex and difficult to interpret; as a consequence, the statistical approach known as chemometrics has been used to treat the data. The objective of the present review is to highlight recent applications of NIR and MIR spectroscopy and chemometric techniques (discrimination, classification, and quantification) for the discrimination and authentication of herbal medicines.
---
paper_title: Fourier transform mid-infrared spectroscopy and chemometrics to identify and discriminate Boletus edulis and Boletus tomentipes mushrooms
paper_content:
ABSTRACT Boletus edulis and Boletus tomentipes are two well-known mushroom species which are widely consumed in Yunnan province due to their high nutritional and medicinal values. Fourier transform mid-infrared spectroscopy can determine an exclusive spectral fingerprint of a sample and, when combined with appropriate chemometrics, can further be used to analyse its quality. In this study, identification and discrimination of B. edulis and B. tomentipes mushrooms from different geographical locations were performed based on Fourier transform mid-infrared spectroscopy and chemometrics. Principal component analysis, hierarchical cluster analysis, and partial least squares discriminant analysis allowed us to identify and discriminate mushroom samples depending on their unique metabolic spectral fingerprints. The range of 1800–400 cm−1, which exhibited the major characteristics of the mushroom samples, was selected for further analysis. The unsupervised principal component analysis and hierarchical cluster analysis showed that mushroom samples from different geographical locations could be effectively identified. Furthermore, the supervised partial least squares discriminant analysis method was used to successfully predict unknown mushroom samples based on the developed calibration model. In conclusion, these results indicated that the Fourier transform mid-infrared technique combined with appropriate chemometrics can be used as an effective and rapid strategy for quality control of B. edulis and B. tomentipes mushrooms with respect to their geographical locations. In addition, this technique can also be applied to other mushroom species for this purpose when coupled with reasonable chemometrics.
---
| Title: APPLICATION OF NEAR- AND MID-INFRARED SPECTROSCOPY COMBINED WITH CHEMOMETRICS FOR DISCRIMINATION AND AUTHENTICATION OF HERBAL PRODUCTS: A REVIEW
Section 1: INTRODUCTION
Description 1: Write about the widespread use of herbal medicine, the issues related to adulteration and misuse, and the necessity for authentication methods.
Section 2: DISCRIMINATION AND AUTHENTICATION TESTING
Description 2: Describe the various methods used for the identification, discrimination, and authentication of herbal ingredients, and introduce the targeted and non-targeted analytical approaches.
Section 3: INFRARED SPECTROSCOPY
Description 3: Explain what infrared spectroscopy is, the different regions of IR spectroscopy (NIR, MIR, and FIR), and its application in the analysis of herbal medicines.
Section 4: CHEMOMETRICS
Description 4: Define chemometrics and discuss its role in processing and evaluating the complex data obtained from IR spectroscopy for herbal medicine authentication.
Section 5: AUTHENTICATION OF HERBAL MEDICINE USING NEAR INFRARED SPECTROSCOPY
Description 5: Provide detailed examples and case studies of the use of NIR spectroscopy combined with chemometrics for the authentication of different herbal products.
Section 6: AUTHENTICATION OF HERBAL MEDICINES USING MIR SPECTROSCOPY
Description 6: Discuss the application of MIR spectroscopy in combination with chemometrics for the authentication and classification of various herbal medicines, including relevant case studies.
Section 7: CONCLUSION
Description 7: Summarize the key points discussed in the review, emphasizing the strengths and potential of using NIR and MIR spectroscopies combined with chemometrics for the authentication of herbal products. |
Advances in Software-Defined Technologies for Underwater Acoustic Sensor Networks: A Survey | 5 | ---
paper_title: Autonomous deployment of sensors for maximized coverage and guaranteed connectivity in Underwater Acoustic Sensor Networks
paper_content:
Self-deployment of sensors with maximized coverage in Underwater Acoustic Sensor Networks (UWASNs) is challenging due to the difficulty of access to 3-D underwater environments. The problem is further compounded if connectivity of the final network is required. One possible approach is to drop the sensors on the surface and then move them to certain depths in the water to maximize the 3-D coverage while maintaining connectivity. In this paper, we propose a purely distributed node deployment scheme for UWASNs which only requires random dropping of sensors on the water surface. The goal is to expand the initial network to 3-D with maximized coverage and guaranteed connectivity with a surface station. The idea is based on determining the connected dominating set of the initial network and then adjusting the depths of all dominatee and dominator neighbors of a particular dominator node to minimize the coverage overlaps among them while still keeping connectivity with the dominator. The process starts with a leader node and spans all the dominators in the network for repositioning. Simulation results indicate that connectivity can be guaranteed regardless of the transmission and sensing range ratio, with coverage very close to that of a coverage-aware deployment approach.
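The first stage of such a scheme, selecting dominators on the surface communication graph, can be sketched with a greedy dominating-set heuristic; the depth-adjustment and connectivity-preserving steps of the actual protocol are not reproduced here, and the ranges and positions are illustrative.

```python
# Greedy dominating-set selection on a random geometric graph of surface drop positions.
import numpy as np

rng = np.random.default_rng(4)
n_nodes, comm_range = 60, 0.25
pos = rng.random((n_nodes, 2))                      # surface drop positions

dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
adj = (dist <= comm_range)                          # includes self-loops

uncovered = set(range(n_nodes))
dominators = []
while uncovered:
    # Pick the node covering the most still-uncovered nodes.
    gains = [len(uncovered & set(np.flatnonzero(adj[i]))) for i in range(n_nodes)]
    best = int(np.argmax(gains))
    dominators.append(best)
    uncovered -= set(np.flatnonzero(adj[best]))

print(f"{len(dominators)} dominators selected out of {n_nodes} nodes")
```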
---
paper_title: A Survey on Software-Defined Wireless Sensor Networks: Challenges and Design Requirements
paper_content:
Software defined networking (SDN) brings about innovation, simplicity in network management, and configuration in network computing. Traditional networks often lack the flexibility to bring into effect instant changes because of the rigidity of the network and also the over dependence on proprietary services. SDN decouples the control plane from the data plane, thus moving the control logic from the node to a central controller. A wireless sensor network (WSN) is a great platform for low-rate wireless personal area networks with little resources and short communication ranges. However, as the scale of WSN expands, it faces several challenges, such as network management and heterogeneous-node networks. The SDN approach to WSNs seeks to alleviate most of the challenges and ultimately foster efficiency and sustainability in WSNs. The fusion of these two models gives rise to a new paradigm: Software defined wireless sensor networks (SDWSN). The SDWSN model is also envisioned to play a critical role in the looming Internet of Things paradigm. This paper presents a comprehensive review of the SDWSN literature. Moreover, it delves into some of the challenges facing this paradigm, as well as the major SDWSN design requirements that need to be considered to address these challenges.
---
paper_title: Impacts of Deployment Strategies on Localization Performance in Underwater Acoustic Sensor Networks
paper_content:
When setting up an underwater acoustic sensor network (UASN), node deployment is the first and foremost task, upon which many fundamental network services, such as network topology control, routing, and boundary detection, will be built. While node deployment in 2-D terrestrial wireless sensor networks has been extensively studied, little attention has been received by their 3-D counterparts. This paper aims at analyzing the impacts of node deployment strategies on localization performances in a 3-D environment. More specifically, the simulations conducted in this paper reveal that the regular tetrahedron deployment scheme outperforms the random deployment scheme and the cube deployment scheme in terms of reducing localization error and increasing localization ratio while maintaining the average number of neighboring anchor nodes and network connectivity. Given the fact that random deployment is the primary choice for most of practical applications to date, our results are expected to shed some light on the design of UASNs in the near future.
---
paper_title: Ocean current measurement using acoustic sensor network: ‘Challenges, simulation, deployement’
paper_content:
Underwater Acoustic Sensor Networks (UASNs) are increasingly attracting researchers' attention as they have a wide range of applications, among which oceanography and disaster prevention are the main focus of our research. In connection with these principal topics, in this paper we review the most challenging issues of UASNs that have been discussed in recent publications. These include simulating the underwater environment and networks as well as practical issues and deployment concerns. After investigating the main subjects, we address open areas for research in this field.
---
paper_title: On the design of green protocols for underwater sensor networks
paper_content:
Underwater sensor networks have enabled a new era in scientific and industrial underwater monitoring and exploration applications. However, these networks are energy-constrained and, more problematically, energy-hungry, as a consequence of the use of underwater acoustic links. In this work, we thoroughly review potential techniques for greening underwater sensor networks. In a top-down approach, we discuss the principal design aspects and challenges of the highlighted techniques. We also exemplify their use by surveying recent proposals in underwater sensor networks. Finally, we describe potential future research directions for energy conservation in underwater networks.
---
paper_title: Simulation of underwater sensor networks
paper_content:
This paper highlights some of the key areas of investigation in the network design and analysis of the Deployable Autonomous Distributed System (DADS). DADS is an exploratory research program sponsored by the Office of Naval Research. Simulation was essential in providing system trade-offs and parametric studies of critical system functions. The study of network operations in the DADS environment has resulted in a variety of requirements and lessons learned that are beyond the scope of traditional terrestrial wireless communications systems. This paper also discusses the results of a simulation study of DADS deployment in a typical operational scenario.
---
paper_title: Software Defined Wireless Sensor Networks: A Review
paper_content:
Wireless sensor networks (WSNs) have well-known limitations, such as battery energy, computing power and bandwidth resources, that sometimes limit their widespread use. Current research mainly concentrates on proposing solutions for node energy optimization, network load balancing and the improvement of WSN robustness; meanwhile, the software-defined networking (SDN) paradigm separates the forwarding plane from the control plane, simplifying management and configuration of the network and improving network extensibility and flexibility. It could further optimize WSN deployment and improve transmission performance. In this paper, we first describe the general architecture and the main features of software-defined networks; then, we analyze current integrated SD-WSN schemes and summarize these results in detail.
---
paper_title: Software-Defined Architectures and Technologies for Underwater Wireless Sensor Networks: A Survey
paper_content:
The ocean covers nearly two-thirds of the surface of the Earth, and there has been great interest in developing underwater wireless sensor networks (UWSNs) to help us explore the ocean realm. A great deal of effort has been devoted to it, and significant progress has been made since the beginning of the 2000s. However, most networks are currently developed in isolation and are inherently hardware-based and application-oriented with inflexible closed-form architectures, which are difficult to reconfigure, reprogram and evolve. They also lack the capability to share resources, and are far from service-oriented networks. These limitations impair their capacity for a wide range of applications. To further propel the development of UWSNs, next-generation UWSNs have been proposed recently, which are robust, flexible, adaptive, programmable, support resource sharing and are easy to manage and evolve. Moreover, a number of novel software-defined techniques and paradigms, such as software-defined radio, cognitive acoustic radio, network function virtualization, software-defined networking, the Internet of Underwater Things, and the sensor-cloud, have been emerging. These software-defined technologies have the capability of softwarizing network resources and then redefining them to satisfy diverse application requirements, improve resource utilization efficiency and simplify network management. Consequently, these evolving technologies are envisioned as critical building blocks and major driving forces, which will transform conventional UWSNs toward software-based, programmable, user-customizable, and service-oriented next-generation UWSNs. In this paper, we provide a comprehensive review of existing works on implementing these techniques, and also present discussions of future research. We hope to inspire more active research in these areas and take a step further toward realizing next-generation UWSNs.
---
paper_title: Development of software-defined acoustic communication platform and its evaluations
paper_content:
In recent years, research on underwater sensor networks has continued in order to investigate the environment and resources of the sea. Acoustic waves are used instead of radio waves for underwater wireless communication. However, dedicated hardware is very expensive, experiments at sea are very time-consuming, and large water spaces are necessary to study underwater acoustic communication. In this paper, we present a cheap and tractable software-defined acoustic communication platform running on PCs using MATLAB, and evaluate its characteristics under a variety of communication methods by changing modulation schemes, error correction codes, transmission power and frequency, using commercial speaker and microphone devices. Our current implementation achieves a data rate of up to 4.5 kbps.
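In the same spirit, a toy software-defined acoustic modem can be written in a few lines of NumPy: binary FSK modulation plus correlation-based demodulation. The tone frequencies, symbol rate and lightly noisy channel are illustrative choices, not the platform's actual parameters.

```python
# Toy software-defined acoustic modem: BFSK modulation and energy-detection demodulation.
import numpy as np

FS, BAUD, F0, F1 = 48_000, 100, 4_000, 6_000        # sample rate, symbol rate, two tones
SPB = FS // BAUD                                     # samples per bit

def modulate(bits):
    t = np.arange(SPB) / FS
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def demodulate(signal):
    t = np.arange(SPB) / FS
    ref0, ref1 = np.sin(2 * np.pi * F0 * t), np.sin(2 * np.pi * F1 * t)
    bits = []
    for k in range(len(signal) // SPB):
        chunk = signal[k * SPB:(k + 1) * SPB]
        # Decide each bit by comparing correlation against the two reference tones.
        bits.append(1 if abs(np.dot(chunk, ref1)) > abs(np.dot(chunk, ref0)) else 0)
    return bits

tx_bits = [1, 0, 1, 1, 0, 0, 1, 0]
noise = 0.1 * np.random.default_rng(0).normal(size=len(tx_bits) * SPB)
rx_bits = demodulate(modulate(tx_bits) + noise)
print("bit errors:", sum(a != b for a, b in zip(tx_bits, rx_bits)))
```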
---
paper_title: Challenges and issues in underwater acoustics sensor networks: A review
paper_content:
An Underwater Acoustic Sensor Network (UWASN) consists of sensors that are deployed underwater to gather information from unexplored parts of oceans or rivers. A UWASN consists of a variable number of floating and anchored sensors, sinks and vehicles that are deployed over an area to be explored. The main characteristics of UWASNs are the mobility of floating nodes, the capacity for data collection and recording, and battery-operated autonomous vehicles. Communication among underwater devices is possible through optical waves, radio/electromagnetic waves and acoustics. Of these, acoustic communication is best suited, as it can carry digital information through the underwater channel and travel longer distances. Communication can be classified into two types: single-hop and multi-hop. Underwater, multi-hop communication is used for sending data from end nodes to sink nodes. The main challenges for UWASNs are limited bandwidth, multipath fading, limited battery, limited data capacity and propagation delay. Hence, in this paper we focus on various issues and challenges in underwater wireless sensor networks for acoustic communications.
---
paper_title: Key technology and experimental research of underwater acoustic networks
paper_content:
In recent years, applications such as autonomous ocean environment monitoring, deep-seabed resource surveying, underwater sensor networks and military underwater detection networks have created a widespread need for underwater acoustic networks (UANs). This paper provides a brief overview of the characteristics and advances of UANs, and the main research contents and key technologies of UANs are analyzed, including physical layer technologies, media access control (MAC) protocols and routing protocols. Underwater acoustic communication (UWAC) in the physical layer is the basis for UANs, and its research covers multi-frequency shift keying (MFSK), multi-phase shift keying (MPSK), direct-sequence spread spectrum (DSSS), and orthogonal frequency division multiplexing (OFDM). Common MAC protocols include Time Division Multiple Access (TDMA), ALOHA and MACAW (MACA for Wireless), all of which are used in UANs. Routing protocols involve static routing, hybrid routing, and self-organizing routing. Experimental research on an underwater acoustic network in the Sanya area is introduced in detail. The network was composed of 15 nodes developed by three institutes. The modems of the deployed nodes used UWAC technologies such as MFSK, MPSK and OFDM. Each network node was equipped with a TD, CTD, acoustic Doppler current profiler (ADCP) or other ocean monitoring equipment; the gateway was connected to the shore station via radio, and the server of the shore station was connected to the Internet. The network ran for 43 days, and online monitoring of the ocean environment was realized. This paper analyzes the packet loss ratio, transmission delay and network energy efficiency of the nodes developed by the Hangzhou Applied Acoustics Research Institute: with a packet size of 404 bits, the node packet loss ratio was 2.9%, the average delay was 0.947 minutes per hop, and the energy efficiency was 0.7831 bit/J.
---
paper_title: Controllers in SDN: A Review Report
paper_content:
Software-defined networking (SDN) is a networking paradigm that changes the traditional network architecture by bringing all control functionality to a single location and making decisions centrally. Controllers are the brain of the SDN architecture; they perform the control decision tasks involved in routing packets. This centralized decision capability for routing enhances network performance. In this paper, we present a review of the various available SDN controllers. Along with an introduction to SDN, we discuss the prior work in the field. The review shows how the centralized decision capability of the controller changes the network architecture by adding network flexibility and programmability. We also discuss the two categories of controllers along with some popular available controllers. For each controller, we discuss the architectural overview, design aspects, and so on. We also evaluate performance characteristics using various metrics, such as throughput and response time. This paper points to the major state-of-the-art controllers used in industry and academia. Our review covers the major popular controllers used in the SDN paradigm.
---
paper_title: A literature review on Software-Defined Networking (SDN) research topics, challenges and solutions
paper_content:
Cloud computing data centers are becoming increasingly popular for the provisioning of computing resources. In the past, most of the research works focused on the effective use of the computational and storage resources by employing the Virtualization technology. Network automation and virtualization of data center LAN and WAN were not the primary focus. Recently, a key emerging trend in Cloud computing is that the core systems infrastructure, including compute resources, storage and networking, is increasingly becoming Software-Defined. In particular, instead of being limited by the physical infrastructure, applications and platforms will be able to specify their fine-grained needs, thus precisely defining the virtual environment in which they wish to run. Software-Defined Networking (SDN) plays an important role in paving the way for effectively virtualizing and managing the network resources in an on demand manner. Still, many research challenges remain: how to achieve network Quality of Service (QoS), optimal load balancing, scalability, and security. Hence, it is the main objective of this article to survey the current research work and describes the ongoing efforts to address these challenging issues.
---
paper_title: Localization, routing and its security in UWSN — A survey
paper_content:
Underwater sensor networks are a promising apparatus for exploration of the ocean. For localization and networking protocols, such sensor networks require new, robust solutions. Various localization algorithms and routing protocols have been proposed for terrestrial sensor networks, but there are very few localization and routing techniques for UWSNs. Compared to terrestrial sensor networks, the features of underwater sensor networks are fundamentally different. Underwater acoustic communication is characterized by stringent physical layer conditions with severe bandwidth restrictions. Underwater, the uneven speed of sound and the long propagation delay create a distinct set of challenges for localization and routing in UWSNs. This paper surveys the different localization and routing schemes that are applicable to underwater sensor networks, the challenges in meeting the requirements created by emerging applications for such networks, attacks and security requirements in UWSNs, and the security of localization and routing in UWSNs.
---
paper_title: A Split Architecture Approach to Terabyte-Scale Caching in a Protocol-Oblivious Forwarding Switch
paper_content:
Research has proven that in-network caching is an effective way of eliminating redundant network traffic. For a larger cache that scales up to terabytes, a network element must utilize block storage devices. Nevertheless, accessing block devices in packet forwarding paths could be a major performance bottleneck because storage devices are prone to be much slower than memory devices concerning bandwidth and latency. Software-defined networking (SDN) has entered into all aspects of network architecture by separating the control and forwarding plane to make it more programmable and application-aware. Protocol-oblivious forwarding (POF), which is an enhancement to current OpenFlow-based SDN forwarding architecture, enhances the network programmability further. In this paper, we proposed a novel split architecture to cope with the problem of speed mismatch between high-speed packet forwarding and low-speed block I/O operation over POF switches. The issues raised by this split architecture were first explored and could be summarized as packet dependency and protocol conversion. Then, we focused on solving these two problems and proposed an efficient and scalable design. Finally, we conducted extensive experiments to evaluate the split architecture along with the proposed approaches for packet dependency and protocol conversion.
---
paper_title: Research on water surface gateway deployment in underwater acoustic sensor networks
paper_content:
In underwater sensor networks, the deployment of multiple water surface gateways can effectively improve network capacity, reduce network delay and save the energy of underwater sensor nodes. Because underwater sensor nodes are easily affected by ocean currents and drift from their positions, the surface gateway deployment locations need regular dynamic optimization and updating. This paper designs an improved cuckoo optimization algorithm to achieve fair and efficient optimization of the surface gateway layout. At the same time, the influence of different numbers of gateways on network capacity and network delay is simulated, which provides scientific decision support for the overall optimization of underwater sensor networks.
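A greatly simplified cuckoo-search-style placement of K surface gateways is sketched below, minimizing the mean horizontal distance from nodes to their nearest gateway; the fitness function and Lévy-flight parameters are illustrative and do not reproduce the paper's improved algorithm.

```python
# Simplified cuckoo-search-style optimization of surface gateway positions.
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(8)
nodes = rng.random((100, 2)) * 1000.0               # sensor node x/y positions (m)
K, N_NESTS, ITERS, PA = 4, 15, 200, 0.25            # gateways, nests, iterations, abandon rate

def fitness(gws):
    d = np.linalg.norm(nodes[:, None, :] - gws[None, :, :], axis=2)
    return d.min(axis=1).mean()                     # mean distance to nearest gateway

def levy_step(shape, beta=1.5):
    # Mantegna's algorithm for Levy-flight step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, shape)
    v = rng.normal(0, 1, shape)
    return u / np.abs(v) ** (1 / beta)

nests = rng.random((N_NESTS, K, 2)) * 1000.0
scores = np.array([fitness(n) for n in nests])
for _ in range(ITERS):
    best = nests[scores.argmin()]
    for i in range(N_NESTS):
        cand = np.clip(nests[i] + 20.0 * levy_step((K, 2)) * (nests[i] - best), 0, 1000)
        if fitness(cand) < scores[i]:
            nests[i], scores[i] = cand, fitness(cand)
    # Abandon a fraction of the worst nests and re-seed them randomly.
    worst = scores.argsort()[-int(PA * N_NESTS):]
    nests[worst] = rng.random((len(worst), K, 2)) * 1000.0
    scores[worst] = [fitness(n) for n in nests[worst]]

print("best mean node-to-gateway distance (m):", round(scores.min(), 1))
```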
---
paper_title: Software defined networks for multitenant, multiplatform applications
paper_content:
Network management in virtualized computing domains is a challenging task given the ever-increasing scale of user interaction, network size, and application diversity. The Internet of Things (IoT), smart sensing and communicating devices over the IP domain, and strong QoS demands pose a multitude of new challenges. Traditional network management is simply unable to keep up with dynamically varying data management and transport requirements in large-scale multitenant networks. The switches and routers in conventional networks have to be reconfigured for changes in flow requirements, often manually. Software Defined Networks (SDN) decouple the complexity of state distribution from network specifications. The ‘control plane’ manages data flows, routing and forwarding states through vendor-neutral interfaces to the switches. The ‘data plane’ uses the underlying systems that forward traffic to the destinations. This paper reviews multi-platform, multitenant management support for virtualized domains and cloud computing under the SDN architecture. The scalability, controllability, dependability and security issues in SDN are discussed along with an overview of core SDN technologies, such as OpenFlow. A model SDN for distributed flow management in an enterprise environment is also presented, and the challenges and future directions of research are briefly discussed.
---
paper_title: Application identification system for SDN QoS based on machine learning and DNS responses
paper_content:
In recent years, the demand for application-specific quality of service (QoS) management has grown. To perform application-specific QoS effectively, a system able to classify flows at the application level is required. This paper presents an application identification system that can be integrated with a QoS management system in a software defined network (SDN). It describes the method used to obtain ground-truth flow labels from four mainstream operating systems (OS), and the method used to classify flows based on supervised machine learning and DNS responses. In our experiment, the average F-measure over all applications reached 93.48%. The testing data set contained 294 applications, where each platform version or executable of an application was counted as a separate application. The testing data set included Skype, Facebook, and other popular applications. Results showed that this system can identify application traffic on different platforms with high accuracy.
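As a rough illustration of this kind of supervised flow classification, the sketch below trains a random forest on toy per-flow features with labels that a real system would derive from observed DNS responses; the feature set, labels, and classifier choice are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch: supervised flow classification with DNS-derived labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Toy per-flow features: [mean packet size, packet count, duration (s), dst port]
X = rng.random((600, 4)) * [1500, 200, 30, 65535]
# Toy labels; in a real system they would come from matching each flow's server
# IP against recent DNS responses observed on the host OS.
y = rng.integers(0, 3, 600)          # 3 hypothetical applications

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("macro F1 on toy data:", f1_score(y_te, clf.predict(X_te), average="macro"))

# An SDN QoS module could then install per-application queue/meter rules for
# flows whose predicted label maps to a high-priority application.
```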
---
paper_title: Evolving trends and challenges in applied underwater acoustic modeling
paper_content:
Prominent trends and challenges in applied underwater acoustic modeling have been motivated largely by marine-mammal protection research focused on the mitigation of naval-sonar, seismic-source, and pile-driving noise. Channel modeling, underwater-acoustic networks, and communications technologies have evolved to support the increased bandwidths needed for undersea data collection. Energy-flux models, not traditionally used in naval sonar applications, have proved useful for assessing marine-mammal impacts. Developments in inverse sensing include seismic oceanography, which employs low-frequency marine seismic reflection data to image ocean dynamics. Interest in the polar regions has increased due to the well-publicized effects of global warming. Collectively, these trends have added new analytical tools to the existing inventory of propagation, noise, reverberation, and sonar performance models.
---
paper_title: Review on Clustering, Coverage and Connectivity in Underwater Wireless Sensor Networks: A Communication Techniques Perspective
paper_content:
With a wide scope to explore and harness oceanic resources of interest, the field of underwater wireless sensor networks (UWSNs) is attracting growing interest from researchers. Owing to real-time remote data monitoring requirements, underwater acoustic sensor networks (UASNs) have to a great extent emerged as the preferred network. In UASNs, the limited availability and non-rechargeability of energy resources, along with the relative inaccessibility of deployed sensor nodes for energy replenishment, have necessitated the evolution of several energy optimization techniques. Clustering is one such technique that increases system scalability and reduces energy consumption. Besides clustering, coverage and connectivity are two significant properties that decide the proper detection and communication of events of interest in UWSNs, given the unstable underwater environment. Underwater communication is also possible with non-acoustic techniques such as radio frequency, magnetic induction, and underwater free-space optics. In this paper, we survey clustering, coverage, and connectivity issues of UASNs and qualitatively compare their performance. In particular, the impact of these non-conventional communication techniques on clustering, coverage, and connectivity is demonstrated. Additionally, we highlight some key open issues related to UWSNs. This paper provides a broad view of existing algorithms for clustering, coverage, and connectivity based on acoustic communication, and it provides useful guidance to UWSN researchers from the perspective of other communication techniques.
---
paper_title: Software-defined open-architecture modems: Historical review and the NILUS approach
paper_content:
Flexible/adaptive modems that are reprogrammable/reconfigurable at all layers of the communication stack, either by a user or by means of autonomous decisions, are considered as an important enabler for interoperability and cognitive networking in the underwater domain. This paper reviews existing literature on software-defined open-architecture modems (SDOAMs) for underwater communications and networking, and zooms in on relevant R&D efforts currently taking place in a Netherlands-Norway defense cooperation.
---
paper_title: Software-Defined Wireless Networking Opportunities and Challenges for Internet-of-Things: A Review
paper_content:
With the emergence of the Internet-of-Things (IoT), there is now growing interest in simplifying wireless network control. This is a very challenging task, comprising information acquisition, information analysis, decision-making, and action implementation on large-scale IoT networks. This has resulted in research exploring the integration of software-defined networking (SDN) and IoT for simpler, easier, and less strained network control. SDN is a promising novel paradigm shift which has the capability to enable a simplified and robust programmable wireless network serving an array of physical objects and applications. This paper starts with the emergence of SDN and then highlights recent significant developments in the wireless and optical domains with the aim of integrating SDN and IoT. Challenges in SDN and IoT integration are also discussed from both security and scalability perspectives.
---
paper_title: Secure underwater acoustic networks: Current and future research directions
paper_content:
Underwater Acoustic Networks (UANs) are widely used in various applications such as climate change monitoring, pollution control and tracking, tactical surveillance and offshore exploration. However, limited consideration is given to the security of such networks, despite the fact that the unique characteristics of UANs make these networks vulnerable to various malicious attacks. In this paper, we address future directions for improving security in UANs. We start by reviewing and discussing the state-of-the-art security threats for underwater networks along with their existing solutions. We then identify the open research issues and challenges in the design of secure protocols for communication in UANs. We propose innovative approaches based on node cooperation, cross-layering, software-defined cognitive networking and context-aware communication in order to effectively provision new or strengthen existing security frameworks in UANs. By using these approaches, we address the problem of detecting malicious behaviours and rogue nodes in order to tackle the major security issues in UANs. We also investigate the use of a covert-channel-based detection mechanism which needs to be considered when monitoring or deploying UANs at sea. We believe that the issues raised and the possible solution approaches proposed in this paper will greatly help researchers working towards fortifying security in inherently insecure UANs.
---
paper_title: Joint Routing and Energy Management in UnderWater Acoustic Sensor Networks
paper_content:
Interest in underwater acoustic sensor networks (UW-ASNs) has rapidly increased with the desire to control the large portion of the world covered by oceans. Fundamental differences between underwater acoustic propagation and terrestrial radio propagation may impose the design of new networking protocols and management schemes. In this paper, we focus on these fundamental differences in order to conceive a balanced routing strategy that overcomes the energy holes problem. Indeed, energy management is one of the major concerns in UW-ASNs due to the limited energy budget of the underwater sensor nodes. In this paper, we tackle the problem of energy holes in UW-ASNs while taking into consideration the unique characteristics of the underwater channel. The main contribution of this study is an in-depth analysis of the impact of these unique underwater characteristics on balancing the energy consumption among all underwater sensors. We prove that we can evenly distribute the transmission load among sensor nodes provided that sensors adjust their communication power when they send or forward the periodically generated data. In particular, we propose a balanced routing strategy along with the associated deployment pattern that meticulously determines the load weight for each possible next hop, that leads to fair energy consumption among all underwater sensors. Consequently, the energy holes problem is overcome and hence the network lifetime is improved.
---
paper_title: Efficient Use of Space-Time Clustering for Underwater Acoustic Communications
paper_content:
Underwater acoustical communication channels are characterized by the spreading of received signals in space (direction of arrival) and in time (delay). The spread is often limited to a small number of space-time clusters. In this paper, the space-time clustering is exploited in a proposed receiver designed for guard-free orthogonal frequency-division multiplexing with superimposed data and pilot signals. For separation of space clusters, the receiver utilizes a vertical linear array (VLA) of hydrophones, whereas for combining delay-spread signals within a space cluster, a time-domain equalizer is used. We compare a number of space-time processing techniques, including a proposed reduced-complexity spatial filter, and show that techniques exploiting the space-time clustering demonstrate an improved detection performance. The comparison is done using signals transmitted by a moving transducer, and recorded on a 14-element nonuniform VLA in sea trials at distances of 46 and 105 km.
---
paper_title: An Underwater Sensor Network Deployment Algorithm Based on Submarine Depth
paper_content:
For three-dimensional underwater wireless sensor networks used in anti-submarine surveillance, a deployment method based on a prior probability model of submarine depth is proposed. According to the submarine depth information, fewer nodes are put to sleep in regions with a higher probability of submarine presence, increasing the density of active nodes there. In the other regions, more nodes are put to sleep to decrease the active node density and reduce the coverage ratio. Simulation results show that the algorithm can ensure higher coverage quality, reduce the overall energy consumption of the network, and extend the lifetime of underwater wireless sensor networks.
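The toy sketch below illustrates the sleep-scheduling idea described above: nodes in depth bands with a higher assumed prior probability of submarine presence sleep less often. The depth bands, prior values, and the mapping from prior to sleep probability are invented for illustration.

```python
# Illustrative depth-prior sleep scheduling (all numbers are assumptions).
import random

# Hypothetical prior probability of submarine presence per depth band (m)
DEPTH_PRIOR = {(0, 100): 0.05, (100, 300): 0.60, (300, 600): 0.30, (600, 1000): 0.05}
MAX_SLEEP = 0.8   # upper bound on the fraction of nodes allowed to sleep in a band

def sleep_probability(depth):
    for (lo, hi), prior in DEPTH_PRIOR.items():
        if lo <= depth < hi:
            # Lower prior -> more nodes may sleep; higher prior -> denser active coverage
            return MAX_SLEEP * (1.0 - prior / max(DEPTH_PRIOR.values()))
    return MAX_SLEEP

random.seed(0)
node_depths = [random.uniform(0, 1000) for _ in range(20)]          # node depths (m)
schedule = [(round(d), "sleep" if random.random() < sleep_probability(d) else "active")
            for d in node_depths]
print(schedule)
```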
---
paper_title: Underwater Acoustic Networks - Survey on Communication Challenges with Transmission Simulations
paper_content:
Underwater sensor networks may be used in many underwater applications, and some of these applications may transmit large amounts of data. Underwater acoustic communication is limited by several physical factors; time spread caused by multipath effects and frequency-selective fading are major concerns. The choice of modulation technique is important: due to the high data rate required in network applications, coherent modulation techniques should be considered. Among multiple access methods, Code Division Multiple Access is considered the most promising technique because of its robustness and resistance against frequency-selective fading. Also, by using a Rake receiver, multipath arrivals can be coherently combined. To get an idea of multipath effects for different propagation conditions, the acoustic modeling program EasyPLR is used. Sound speed profiles from different seasons of 2005 are implemented to display the seasonal variations. The results indicate multipath effects and time spread for all seasons at short range; for ranges up to 3 km, the time spread remains within tens of microseconds.
---
paper_title: A Novel SDN Scheme for QoS Path Allocation in Wide Area Networks
paper_content:
The massive adoption of Cloud services has led to an explosion of traffic transiting over the Cloud infrastructure. Such an impressive evolution of data demand will inevitably be the catalyst of operator infrastructure transformation. In this context, Software Defined Networking (SDN) is the technology that is shaping the future of carriers' networks. SDN considerably reduces the complexity of managing the network infrastructure while providing tremendous computational power compared to legacy devices. In this paper, we address the resource allocation issue in Wide Area Networks (WAN) while considering the requested QoS. To do so, we design an SD-WAN architecture to enhance network resource allocation and hence improve the QoS of distributed applications. We first formulate the path computation problem as an Integer Linear Program while taking into consideration both network application requirements and the network occupation status. The problem is then resolved in polynomial time leveraging the Branch-and-Cut algorithm. Results obtained with our experimental platform show that the proposed SD-WAN framework outperforms the most prominent related solutions in terms of applications' satisfaction level and consumption of network resources.
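As a much simpler stand-in for the ILP/Branch-and-Cut formulation referenced above, the sketch below computes a minimum-delay path restricted to links with enough residual bandwidth; the topology, delays, and bandwidth figures are made up for illustration.

```python
# Simplified QoS-aware path computation: Dijkstra over the subgraph of links
# that can carry the requested bandwidth (illustrative values only).
import heapq

# (u, v): (delay_ms, residual_bandwidth_mbps)
links = {("A", "B"): (10, 500), ("B", "D"): (10, 100),
         ("A", "C"): (15, 800), ("C", "D"): (15, 800), ("B", "C"): (5, 400)}

def neighbors(u, demand):
    for (a, b), (delay, bw) in links.items():
        if bw >= demand:
            if a == u:
                yield b, delay
            if b == u:
                yield a, delay

def qos_path(src, dst, demand_mbps):
    """Return (path, delay) over links whose residual bandwidth >= demand_mbps."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return list(reversed(path)), d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in neighbors(u, demand_mbps):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return None, float("inf")

print(qos_path("A", "D", demand_mbps=300))   # avoids the 100 Mb/s B-D link
```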
---
paper_title: Wireless Software Defined Networking: A Survey and Taxonomy
paper_content:
One of the primary architectural principles behind the Internet is the use of distributed protocols, which facilitates fault tolerance and distributed management. Unfortunately, having nodes (i.e., switches and routers) perform control decisions independently makes it difficult to control the network or even understand or debug its overall emergent behavior. As a result, networks are often inefficient, unstable, and fragile. This Internet architecture also poses a significant, often insurmountable, challenge to the deployment of new protocols and evolution of existing ones. Software defined networking (SDN) is a recent networking architecture with promising properties relative to these weaknesses in traditional networks. SDN decouples the control plane, which makes the network forwarding decisions, from the data plane, which mainly forwards the data. This decoupling enables more centralized control where coordinated decisions directly guide the network to desired operating conditions. Moreover, decoupling the control enables graceful evolution of protocols, and the deployment of new protocols without having to replace the data plane switches. In this survey, we review recent work that leverages SDN in wireless network settings, where they are not currently widely adopted or well understood. More specifically, we evaluate the use of SDN in four classes of popular wireless networks: cellular, sensor, mesh, and home networks. We classify the different advantages that can be obtained by using SDN across this range of networks, and hope that this classification identifies unexplored opportunities for using SDN to improve the operation and performance of wireless networks.
---
paper_title: Evolutionary Control of an Autonomous Field
paper_content:
An autonomous field of sensor nodes needs to acquire and track targets of interest traversing the field. Small detection ranges limit the detectability of the field. As detections occur in the field, they are transmitted acoustically to a master node. Both detection processing and acoustic communication drain a node's power source. In order to maximize field life, an approach must be developed to control the processes carried out in the field. In this paper we develop an adaptive threshold control scheme. This technique minimizes power consumption while still maintaining the field-level probability of detection. In this problem formulation, the power consumption of the field of sensor nodes is driven by the false alarm rate and the target detection rate at the individual sensor nodes. The control law is based upon a stochastic optimization technique known as evolutionary programming. At the end of the paper, a set of results is presented showing that, by dynamically adjusting sensor thresholds and routing structures, the controlled field will have twice the life of the fixed field.
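The toy sketch below shows the general shape of evolutionary-programming-style threshold adaptation: candidate per-node thresholds are mutated and selected to reduce an assumed power cost while keeping a field-level detection probability above a target. The detection, false-alarm, and power models are invented and are not the paper's.

```python
# Toy (mu + lambda) evolutionary adaptation of per-node detection thresholds.
import random
random.seed(0)

N_NODES, POP, GENS, PD_TARGET = 8, 20, 60, 0.95

def node_pd(th):  return max(0.0, 1.0 - th)          # toy per-node detection probability
def node_pfa(th): return max(0.0, 0.5 - 0.5 * th)    # toy per-node false-alarm rate

def cost(ths):
    miss = 1.0
    for t in ths:
        miss *= (1.0 - node_pd(t))
    field_pd = 1.0 - miss                                  # field-level detection probability
    power = sum(node_pfa(t) + node_pd(t) for t in ths)     # comms driven by alarm traffic
    return power + (1e3 if field_pd < PD_TARGET else 0.0)  # penalise missing the target

pop = [[random.random() for _ in range(N_NODES)] for _ in range(POP)]
for _ in range(GENS):
    children = [[min(1.0, max(0.0, t + random.gauss(0, 0.05))) for t in p] for p in pop]
    pop = sorted(pop + children, key=cost)[:POP]           # mutate, then keep the best

best = pop[0]
print("best thresholds:", [round(t, 2) for t in best], "cost:", round(cost(best), 2))
```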
---
paper_title: Underwater Acoustic Communications and Networks for the US Navy's Seaweb Program
paper_content:
At present, the realities of acoustic communications and undersea networks differ substantially from the expectations arising from academic theory and investigation. The issue is not that the fundamentals of communications theory do not apply, but rather the fact that the difficulties of the acoustic channel do not necessarily fit the assumptions underlying conventional applications of that theory. In this paper we describe many of the practical problems that have been addressed over the past ten years of modem development. Some of the issues addressed have an RF analog, but the severity of the channel and the combination of channel constraints and modem-platform operations makes acoustic communications a very different problem. In particular, we address these issues via their impact on physically small, battery powered, DSP-based, omni-directional modems. Next we describe progress toward using commercial acoustic modems as the basis for underwater networks. We discuss design choices at the physical, link, and network layers that are consistent with the compound constraints of the transmission channel and modem.
---
paper_title: OpenFlow: a radical new idea in networking
paper_content:
An open standard that enables software-defined networking.
---
paper_title: Firewall application for Floodlight SDN controller
paper_content:
In this article the authors give a general idea of software-defined networking (SDN). The paper contains a description of the Floodlight SDN controller and discusses the issue of network security management. The authors present a set of techniques that provide more security in the controller. These are implemented as a network application, and the basic mechanism is described.
---
paper_title: Mode decomposition using compressive sensing applied to the SW06 data
paper_content:
A method for mode decomposition is presented using compressive sensing. Simulated data are used to demonstrate its pros and cons compared with conventional methods based on eigenvector decomposition. Mode cross-correlation and mode coherence time are studied using narrowband signals collected on a vertical line array during the SW06 experiment, which were transmitted from a moving source with different speeds and along paths with different bottom bathymetry. It is found that the coherence time of the dominant modes is longer than the matched-field coherence time and more sensitive to the source speed than to the bathymetry. The decomposed modes can be divided into groups where members of each group are more coherent with each other than with members outside of the group.
---
paper_title: The DESERT underwater framework v2: Improved capabilities and extension tools
paper_content:
The DESERT Underwater emulation system (http://nautilus.dei.unipd.it/desert-underwater), originally designed for testing underwater acoustic networks, has been recently extended. The new framework now includes multi-modal communication functionalities encompassing low rate and high rate acoustics as well as optics, the capability of testing wireless telemetry for underwater equipment, a connection to the most recent version of the World Ocean Simulation System (WOSS), a modification to the RECORDS system for sea trial remote control, and an interface between external tools, e.g., Matlab, and the EvoLogics modem. In addition, experimental activities are now supported by an accurate real-time event scheduler which has been shown to support, among others, long experiments involving time-division multiple-access (TDMA)-based MAC protocols. These additional protocol schemes from the MAC to the application layer (most of which have been tested in controlled environments and sea trials) now make DESERT Underwater a comprehensive tool for underwater network simulation and experimentation. In this paper, we present the new functionalities developed over the last two years.
---
paper_title: A preliminary examination of the low-frequency ambient noise field in the South China Sea during the 2001 ASIAEX experiment
paper_content:
This correspondence presents a preliminary examination of the low frequency ambient noise field measured in the South China Sea component of the Asian Seas International Acoustics Experiment (ASIAEX), concentrating on the frequencies of 50, 100, 200, 400, 800, and 1200 Hz. A two-week-long time series of the noise at these frequencies is examined for structure in both the time and frequency domains. Three features of particular interest in these series are: 1) the noise due to a typhoon, which passed near the experimental site, 2) the weak tidal frequency variability of the noise field, which is probably due to internal tide induced variability in the propagation conditions, and 3) the vertical angle dependence of the noise, particularly as regards the shallow water "noise notch" phenomenon. The acoustic frequency dependence and the vertical dependence of the noise field are also examined over the course of the time series. A simple look at the noise variability statistics is presented. Finally, directions for further analysis are discussed.
---
paper_title: Auto-Configuration of ACL Policy in Case of Topology Change in Hybrid SDN
paper_content:
Software-defined networking (SDN) has emerged as a new network architecture, which decouples both the control and management planes from the data plane at forwarding devices. However, SDN deployment is not widely adopted due to the budget constraints of organizations, which are often reluctant to invest heavily in establishing a new network infrastructure from scratch. One feasible solution is to deploy a limited number of SDN-enabled devices along with traditional (legacy) network devices in the network of an organization, incrementally replacing the traditional network with SDN; this is called a hybrid SDN architecture. Network management and control in hybrid SDN are vital tasks that require significant effort and resources, and manual handling of these tasks is error prone. Whenever the network topology changes, network policies (e.g., access control lists) configured at the interfaces of forwarding devices (switches/routers) may be violated. That creates severe security threats for the whole network and degrades network performance. In this paper, we propose a new approach for hybrid SDN that auto-detects the interfaces of forwarding devices and the network policies that are affected by a change in network topology. In the proposed approach, we model the network-wide policy and the local policy at a forwarding device using a three-tuple and a six-tuple, respectively. We compute a graph to represent the topology of the network, and by using a graph difference technique we detect a possible change in topology. In the case of a topology change, we verify the policy for the updated topology by traversing a tree using the six-tuple. If the policy implementation is violated, the affected interfaces are indicated, as are the policies that need to be reconfigured. Policies are then configured on the updated topology according to the specification in an improved way. Simulation results show that our proposed approach enhances network efficiency in terms of successful packet delivery ratio, the ratio of packets that violated the policy, and normalized overhead.
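The minimal sketch below illustrates the graph-difference idea described above: compare old and new topology edge sets and flag policies whose installed paths use a changed link. The tuple layouts, topology, and policies are simplified stand-ins for the paper's three-tuple/six-tuple models.

```python
# Toy graph-difference check for policies affected by a topology change.
old_edges = {("s1", "s2"), ("s2", "s3"), ("s3", "s4")}
new_edges = {("s1", "s2"), ("s2", "s4"), ("s3", "s4")}   # s2-s3 replaced by s2-s4

removed = old_edges - new_edges
added = new_edges - old_edges
print("removed links:", removed, "added links:", added)

# Network-wide policies as (src_prefix, dst_prefix, action), each with the
# switch path on which its ACL entries were installed.
policies = [
    {"policy": ("10.0.1.0/24", "10.0.4.0/24", "deny"),  "path": ["s1", "s2", "s3", "s4"]},
    {"policy": ("10.0.2.0/24", "10.0.3.0/24", "allow"), "path": ["s3", "s4"]},
]

def uses_changed_link(path, changed):
    hops = set(zip(path, path[1:])) | set(zip(path[1:], path))   # both directions
    return any(edge in hops for edge in changed)

for p in policies:
    if uses_changed_link(p["path"], removed | added):
        print("re-verify / re-install ACLs for policy", p["policy"])
```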
---
paper_title: OpenRoads: empowering research in mobile networks
paper_content:
We present OpenRoads, an open-source platform for innovation in mobile networks. OpenRoads enables researchers to innovate on their own production networks by providing a wireless extension of OpenFlow; you can therefore think of OpenRoads as "OpenFlow Wireless". The OpenRoads architecture consists of three layers: flow, slicing and controller. These layers provide flexible control, virtualization and high-level abstraction, allowing researchers to implement wildly different algorithms and run them concurrently in one network. OpenRoads also incorporates multiple wireless technologies, specifically WiFi and WiMAX. We have deployed OpenRoads and used it as our production network. Our goal is for others to deploy OpenRoads and build their own experiments on it.
---
paper_title: A QoS Framework for SDN-Based Networks
paper_content:
Nowadays, traditional networks suffer from a lack of network-state information, difficult management, and hard-to-guarantee QoS. SDNs have recently overcome these limitations by providing network agility, programmability, and centralized network control. These features facilitate solving many security, performance, management and QoS issues. In this paper, we propose an SDN framework that leverages programmability and centralized control to provide a level of QoS. Knowing the state of the whole network helps optimize decisions towards enhancing network efficiency. The presented framework contains modules that provide monitoring, route determination, rule preparation, and configuration functionalities. The monitoring module analyzes port utilization and probes link delay. The route determination module relies on the shortest path algorithm, with or without QoS guarantees. Two QoS parameters, namely port utilization and delay, are considered in the monitoring and the route determination. The proposed framework is tested in a fat-tree topology with an OpenDayLight (ODL) controller. Experiments are conducted to prove the efficiency of the presented framework over the traditional standalone controller with built-in features. Results show that using the presented framework, with or without QoS, reduces the overall average delay by 57%, jitter by 25% and packet loss by 67%. Moreover, the monitored port utilization is reduced by 30% on average.
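The sketch below illustrates the monitoring side of such a framework: per-port utilization derived from two successive byte-counter samples of the kind a controller would obtain via port-statistics requests. The counter values, link speeds, and the 30% avoidance threshold are made up for illustration.

```python
# Toy port-utilization computation from successive tx_bytes counter samples.
POLL_INTERVAL_S = 5
LINK_SPEED_BPS = {"s1-eth1": 1e9, "s1-eth2": 1e9}

prev = {"s1-eth1": 1_000_000_000, "s1-eth2": 4_000_000_000}   # tx_bytes at time t
curr = {"s1-eth1": 1_250_000_000, "s1-eth2": 4_010_000_000}   # tx_bytes at t + 5 s

def utilization(port):
    bits_sent = (curr[port] - prev[port]) * 8
    return bits_sent / (POLL_INTERVAL_S * LINK_SPEED_BPS[port])

for port in prev:
    u = utilization(port)
    print(f"{port}: {u:.1%} utilized", "-> avoid in QoS paths" if u > 0.3 else "")
```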
---
paper_title: A survey on software defined network approaches for achieving energy efficiency in wireless sensor network
paper_content:
The proliferation of sensor technology in many areas has drawn more attention to Wireless Sensor Networks (WSNs) from both the research community and actual users. However, energy remains a constrained resource and continues to be a crucial issue for this technology. Since data transmission and communication consume a large share of a WSN's energy, the activity of the sensor nodes should be adjusted to prolong the lifetime of the network. To this end, this paper provides a survey of how the Software-Defined Networking (SDN) approach has been used to reduce energy consumption in WSNs. SDN was developed to facilitate centralized control of the network data path by decoupling the control and forwarding planes. With this paper, readers can better understand how SDN works and how effectively it can address the shortcomings of WSNs.
---
paper_title: Open source suites for underwater networking: WOSS and DESERT underwater
paper_content:
Simulation and experimentation of underwater networks entail many challenges, which for the former are mainly related to the accurate modeling of the channel behavior, while they are typically logistic in nature for the latter. In this article, we present our experience with WOSS and DESERT Underwater, two open source suites that address both classes of challenges. The suites build on and extend the capabilities of ns2 and NS-MIRACLE, two widely known software packages for network simulation. WOSS endows NS-MIRACLE with the capability to generate realistic channel patterns by automatically retrieving and processing the environmental boundary conditions that influence such patterns; DESERT Underwater makes it possible to evolve toward at-sea experiments by reusing the same code written for simulations, thereby minimizing the effort required for network deployment and control. Both suites have been widely tested and used in several projects: some examples are provided in this respect, including an account of some experiments carried out in collaboration with the NATO STO Centre for Maritime Research and Experimentation.
---
paper_title: Towards programmable enterprise WLANS with Odin
paper_content:
We present Odin, an SDN framework to introduce programmability in enterprise wireless local area networks (WLANs). Enterprise WLANs need to support a wide range of services and functionalities. This includes authentication, authorization and accounting, policy, mobility and interference management, and load balancing. WLANs also exhibit unique challenges. In particular, access point (AP) association decisions are not made by the infrastructure, but by clients. In addition, the association state machine combined with the broadcast nature of the wireless medium requires keeping track of a large amount of state changes. To this end, Odin builds on a light virtual AP abstraction that greatly simplifies client management. Odin does not require any client side modifications and its design supports WPA2 Enterprise. With Odin, a network operator can implement enterprise WLAN services as network applications. A prototype implementation demonstrates Odin's feasibility.
---
paper_title: SEANet G2: toward a high-data-rate software-defined underwater acoustic networking platform
paper_content:
Existing underwater acoustic networking platforms are for the most part based on inflexible hardware and software architectures that can support mostly point-to-point, low-data-rate, delay-tolerant applications. Most commercial devices provide neither the sufficient data rates nor the necessary flexibility to support future underwater networking applications and systems. This article discusses a new high-data-rate software-defined underwater acoustic networking platform, SEANet G2, able to support higher data rates (megabit/s data rates are foreseen over short range links), spectrum agility, and hardware/software flexibility in support of distributed networked monitoring operations. The article reports on the main architectural choices of the new platform, as well as some preliminary performance evaluation results. Data rates in the order of megabit/s were demonstrated in a controlled lab environment, and, for the first time to the best of our knowledge, data rates of 522 kbit/s were obtained in sea trials over short horizontal links (e.g., 10 m) for a BER lower than 10^-3.
---
paper_title: Revisiting traffic anomaly detection using software defined networking
paper_content:
Despite their exponential growth, home and small office/home office networks continue to be poorly managed. Consequently, the security of hosts in most home networks is easily compromised and these hosts are in turn used for large-scale malicious activities without the home users' knowledge. We argue that the advent of Software Defined Networking (SDN) provides a unique opportunity to effectively detect and contain network security problems in home and home office networks. We show how four prominent traffic anomaly detection algorithms can be implemented in an SDN context using OpenFlow-compliant switches and NOX as a controller. Our experiments indicate that these algorithms are significantly more accurate in identifying malicious activities in home networks than at the ISP. Furthermore, the efficiency analysis of our SDN implementations on a programmable home network router indicates that the anomaly detectors can operate at line rates without introducing any performance penalties for the home network traffic.
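To give a flavour of the lightweight detectors such work evaluates, the sketch below computes the normalized entropy of destination addresses in a measurement window, which drops sharply when a host floods or scans a single target. The window contents and the 0.5 threshold are invented, and this is not claimed to be the paper's exact algorithm.

```python
# Toy entropy-based traffic anomaly detector over per-window destination IPs.
import math
from collections import Counter

def normalized_entropy(items):
    counts = Counter(items)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

normal_window = [f"10.0.0.{i % 50}" for i in range(500)]                 # diverse destinations
attack_window = ["10.0.0.7"] * 450 + [f"10.0.0.{i}" for i in range(50)]  # one dominant target

for name, window in [("normal", normal_window), ("attack", attack_window)]:
    h = normalized_entropy(window)
    print(f"{name}: H = {h:.2f}", "-> ANOMALY" if h < 0.5 else "-> ok")
```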
---
paper_title: Performance optimisation of control channel in ForCES-based software defined network
paper_content:
The traditional network architecture cannot meet the growing needs of increasingly diverse network applications. The new-generation network architecture, software-defined networking (SDN), which features high openness, flexibility, scalability and controllability, has been widely studied. However, a series of new problems also arise in this architecture, such as security problems resulting from the high openness and performance problems resulting from the high flexibility. Currently, researchers mainly concentrate on the implementation and standardisation of SDN, and there is no good approach to evaluate and optimise system performance. This paper studies the unresolved performance problems in the SDN research area and proposes a method to analyse the control channel of ForCES (Forwarding and Control Element Separation)-based SDN using stochastic network calculus theory. First, the architecture of ForCES-based SDN is introduced. Then, the performance model of the channel is given. Based on the model, an optimisation method for performance is detailed. A simulation using NS-2 (Network Simulator, Version 2) verifies the correctness and reliability of the performance model.
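For orientation only, the block below states the standard deterministic network-calculus delay and backlog bounds; the paper itself works with stochastic network calculus, so this is illustrative of the style of analysis rather than its exact result.

```latex
% For a control-channel flow constrained by a leaky-bucket arrival curve
% \alpha(t) = b + r t and served with a rate-latency service curve
% \beta(t) = R\,[t - T]^{+} (with r \le R), the delay and backlog satisfy
\[
  D \;\le\; T + \frac{b}{R},
  \qquad
  B \;\le\; b + r\,T .
\]
```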
---
paper_title: Network Innovation using OpenFlow: A Survey
paper_content:
OpenFlow is currently the most commonly deployed Software Defined Networking (SDN) technology. SDN consists of decoupling the control and data planes of a network. A software-based controller is responsible for managing the forwarding information of one or more switches; the hardware only handles the forwarding of traffic according to the rules set by the controller. OpenFlow is an SDN technology proposed to standardize the way that a controller communicates with network devices in an SDN architecture. It was proposed to enable researchers to test new ideas in a production environment. OpenFlow provides a specification to migrate the control logic from a switch into the controller. It also defines a protocol for the communication between the controller and the switches. As discussed in this survey paper, OpenFlow-based architectures have specific capabilities that can be exploited by researchers to experiment with new ideas and test novel applications. These capabilities include software-based traffic analysis, centralized control, dynamic updating of forwarding rules and flow abstraction. OpenFlow-based applications have been proposed to ease the configuration of a network, to simplify network management and to add security features, to virtualize networks and data centers and to deploy mobile systems. These applications run on top of networking operating systems such as Nox, Beacon, Maestro, Floodlight, Trema or Node.Flow. Larger scale OpenFlow infrastructures have been deployed to allow the research community to run experiments and test their applications in more realistic scenarios. Also, studies have measured the performance of OpenFlow networks through modelling and experimentation. We describe the challenges facing the large scale deployment of OpenFlow-based networks and we discuss future research directions of this technology.
---
paper_title: ElasticTree: Saving Energy in Data Center Networks
paper_content:
Networks are a shared resource connecting critical IT infrastructure, and the general practice is to always leave them on. Yet, meaningful energy savings can result from improving a network's ability to scale up and down, as traffic demands ebb and flow. We present ElasticTree, a network-wide power manager, which dynamically adjusts the set of active network elements -- links and switches--to satisfy changing data center traffic loads. ::: ::: We first compare multiple strategies for finding minimum-power network subsets across a range of traffic patterns. We implement and analyze ElasticTree on a prototype testbed built with production OpenFlow switches from three network vendors. Further, we examine the trade-offs between energy efficiency, performance and robustness, with real traces from a production e-commerce website. Our results demonstrate that for data center workloads, ElasticTree can save up to 50% of network energy, while maintaining the ability to handle traffic surges. Our fast heuristic for computing network subsets enables ElasticTree to scale to data centers containing thousands of nodes. We finish by showing how a network admin might configure ElasticTree to satisfy their needs for performance and fault tolerance, while minimizing their network power bill.
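The toy sketch below captures the intuition behind such minimum-power subsets: keep only as many parallel links powered on as the current demand needs, with some headroom for surges. The link capacities, demand values, and 20% margin are assumptions, not ElasticTree's actual optimizer or heuristic.

```python
# Toy "links needed for current demand" calculation with headroom.
import math

LINK_CAPACITY_GBPS = 10
PARALLEL_LINKS = 4
HEADROOM = 1.2                      # keep 20% spare capacity for traffic surges

def links_needed(demand_gbps):
    needed = math.ceil(demand_gbps * HEADROOM / LINK_CAPACITY_GBPS)
    return max(1, min(PARALLEL_LINKS, needed))

for demand in [3, 9, 17, 33]:       # Gb/s of aggregate traffic between two pods
    on = links_needed(demand)
    print(f"demand {demand:2d} Gb/s -> keep {on} link(s) on, power off {PARALLEL_LINKS - on}")
```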
---
paper_title: Distributed SDN Control: Survey, Taxonomy, and Challenges
paper_content:
As opposed to the decentralized control logic underpinning the devising of the Internet as a complex bundle of box-centric protocols and vertically integrated solutions, the software-defined networking (SDN) paradigm advocates the separation of the control logic from hardware and its centralization in software-based controllers. These key tenets offer new opportunities to introduce innovative applications and incorporate automatic and adaptive control aspects, thereby, easing network management and guaranteeing the user’s quality of experience. Despite the excitement, SDN adoption raises many challenges including the scalability and reliability issues of centralized designs that can be addressed with the physical decentralization of the control plane. However, such physically distributed, but logically centralized systems bring an additional set of challenges. This paper presents a survey on SDN with a special focus on the distributed SDN control. Besides reviewing the SDN concept and studying the SDN architecture as compared to the classical one, the main contribution of this survey is a detailed analysis of state-of-the-art distributed SDN controller platforms which assesses their advantages and drawbacks and classifies them in novel ways (physical and logical classifications) in order to provide useful guidelines for SDN research and deployment initiatives. A thorough discussion on the major challenges of distributed SDN control is also provided along with some insights into emerging and future trends in that area.
---
paper_title: OpenFlow: enabling innovation in campus networks
paper_content:
This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools, and we encourage you to consider deploying OpenFlow in your university network too.
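The conceptual sketch below mimics the flow-table abstraction described above: packets are matched against prioritized entries installed by a remote controller and the first match's actions are applied, with a table-miss entry sending packets to the controller. Field names and actions are simplified; this is not a full OpenFlow implementation.

```python
# Toy flow table: prioritized match/action entries as a controller might install.
flow_table = [
    {"priority": 200, "match": {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "actions": ["output:3"]},
    {"priority": 100, "match": {"ip_dst": "10.0.0.5"},                "actions": ["output:2"]},
    {"priority": 0,   "match": {},                                    "actions": ["controller"]},
]

def lookup(packet):
    # Highest-priority entry whose match fields are all satisfied wins.
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return ["drop"]

print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))   # ['output:3']
print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 22}))   # ['output:2']
print(lookup({"ip_dst": "10.0.0.9"}))                  # ['controller'] -> table miss
```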
---
paper_title: ForSA — A New Software Defined Network Architecture Based on ForCES
paper_content:
In recent years, SDN (Software Defined Network) has become a hot research topic as a new network architecture. The well-known OpenFlow-based SDN has received a lot of attention, but it does not provide a flexible and effective method for describing network resources. As an open programmable technology, ForCES (Forwarding and Control Element Separation) has also attracted interest; however, ForCES is confined within a single network node and cannot be applied to the entire network. This paper proposes a new architecture, ForSA (ForCES-based SDN architecture). The architecture adds a configuration layer to the traditional SDN architecture, which resolves the unclear northbound interface between the application layer and the control layer. ForSA also provides compatibility with various forwarding devices in the forwarding layer.
---
paper_title: OptoCOMM and SUNSET to enable large data offloading in Underwater Wireless Sensor Networks
paper_content:
In this paper we present the initial implementation of an integrated optical and acoustic system that can enable large data transfers between mobile and static nodes in Underwater Wireless Sensor Networks (UWSNs). The proposed system is based on the OptoCOMM optical modem and on the SUNSET Software Defined Communication Stack (S-SDCS) framework. The OptoCOMM modem makes it possible to overcome the limits on maximum data rate and bandwidth imposed by acoustic communication by providing a data rate of 10 Mbps. SUNSET SDCS has instead been used to provide networking and fragmentation capabilities to efficiently offload large data volumes in UWSNs. The performance of the proposed approach has been evaluated through in-lab experiments where large files of arbitrary size have been optically transferred. The results show that our system is able to transfer up to 1.5 GBytes of data in a short time.
---
paper_title: Experimental demonstration of OpenFlow control of packet and circuit switches
paper_content:
OpenFlow is presented as a unified control plane and architecture for packet and circuit switched networks. We demonstrate a simple proof-of-concept testbed, where a bidirectional wavelength circuit is dynamically created to transport a TCP flow.
---
paper_title: Control and understanding: Owning your home network
paper_content:
Wireless home networks are increasingly deployed in people's homes worldwide. Unfortunately, home networks have evolved using protocols designed for backbone and enterprise networks, which are quite different in scale and character to home networks. We believe this evolution is at the heart of widely observed problems experienced by users managing and using their home networks. In this paper we investigate redesign of the home router to exploit the distinct social and physical characteristics of the home. We extract two key requirements from a range of ethnographic studies: users desire greater understanding of and control over their networks' behaviour. We present our design for a home router that focuses on monitoring and controlling network traffic flows, and so provides a platform for building user interfaces that satisfy these two user requirements. We describe and evaluate our prototype which uses NOX and OpenFlow to provide per-flow control, and a custom DHCP implementation to enable traffic isolation and accurate measurement from the IP layer. It also provides finer-grained per-flow control through interception of wireless association and DNS resolution. We evaluate the impact of these modifications, and thus the applicability of flow-based network management in the home.
---
paper_title: Software defined optical networks technology and infrastructure: Enabling software-defined optical network operations
paper_content:
Software-defined networking (SDN) enables programmable SDN control and management functions at a number of layers, allowing applications to control network resources or information across different technology domains, e.g., Ethernet, wireless, and optical. Current cloud-based services are pushing networks to new boundaries by deploying cutting edge optical technologies to provide scalable and flexible services. SDN combined with the latest optical transport technologies, such as elastic optical networks, enables network operators and cloud service providers to customize their infrastructure dynamically to user/application requirements and therefore minimize the extra capital and operational costs required for hosting new services. In this paper a unified control plane architecture based on OpenFlow for optical SDN tailored to cloud services is introduced. Requirements for its implementation are discussed considering emerging optical transport technologies. Implementations of the architecture are proposed and demonstrated across heterogeneous state-of-the-art optical, packet, and IT resource integrated cloud infrastructure. Finally, its performance is evaluated using cloud use cases and its results are discussed.
---
paper_title: B4: experience with a globally-deployed software defined wan
paper_content:
We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge. These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work.
---
paper_title: A Survey on Software-Defined Network and OpenFlow: From Concept to Implementation
paper_content:
Software-defined network (SDN) has become one of the most important architectures for the management of large-scale complex networks, which may require repolicing or reconfigurations from time to time. SDN achieves easy repolicing by decoupling the control plane from the data plane. Thus, the network routers/switches simply forward packets by following the flow table rules set by the control plane. Currently, OpenFlow is the most popular SDN protocol/standard and has a set of design specifications. Although SDN/OpenFlow is a relatively new area, it has attracted much attention from both academia and industry. In this paper, we conduct a comprehensive survey of the important topics in SDN/OpenFlow implementation, including the basic concept, applications, language abstraction, controller, virtualization, quality of service, security, and its integration with wireless and optical networks. We compare the pros and cons of different schemes and discuss the future research trends in this exciting area. This survey can help R&D practitioners in both industry and academia understand the latest progress of SDN/OpenFlow designs.
---
paper_title: Towards QoS-enabled SDN networks
paper_content:
SDN networks have become increasingly popular, especially for real-time systems, so optimizing software-defined network cost and performance is paramount. This requires the network to meet real-time requirements and provide guarantees on bandwidth and packet loss, as well as on end-to-end delay and jitter, which are essential indicators of QoS. This paper is an effort towards QoS-enabled SDN networks that establishes formulations for both delay and jitter. Furthermore, we evaluate the effect of end-to-end delay and jitter in SDN networks as QoS metrics in order to characterize their respective behavior.
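For context, the block below states generic end-to-end delay and jitter definitions of the kind such a formulation builds on (the notation is illustrative and not the paper's exact derivation).

```latex
% For a path of n switches, packet i experiences
\[
  D_i \;=\; \sum_{k=1}^{n}\bigl(d^{\text{proc}}_{k} + d^{\text{queue}}_{k}
            + d^{\text{trans}}_{k} + d^{\text{prop}}_{k}\bigr),
  \qquad
  J_i \;=\; \lvert D_i - D_{i-1}\rvert ,
\]
% where the per-hop terms are processing, queueing, transmission and
% propagation delay, and jitter is the delay variation between consecutive packets.
```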
---
paper_title: Extension of OpenFlow protocol to support optical transport network, and its implementation
paper_content:
By having the OpenFlow protocol support Optical Transport Networks (OTN) and work as the unified control interface for multilayer (L0-L4) networks, a simple and cost-effective multilayer Software Defined Networking (SDN) controller can be created. Our aim is to propose and standardize the extended OpenFlow protocol to support OTN in Open Networking Foundation (ONF). We describe the approved specification, and the process from proposal to standard. We applied this extension to our OpenFlow controller and OpenFlow agent and succeeded in demonstrating that the extension works as desired in single- and multi-vendor environments. Future directions for OpenFlow extension and modeling of multilayer integrated nodes are also discussed.
---
paper_title: An SDN based fully distributed NAT traversal scheme for IoT global connectivity
paper_content:
Existing NAT addresses the IP address exhaustion problem by binding private IP addresses to public IP addresses, and NAT traversal schemes such as hole punching enable end-to-end communication between devices located in different private networks. However, such technologies centralize the workload at the NAT gateway and increase transmission delay due to per-packet modification. In this paper, we propose an SDN-based fully distributed NAT traversal scheme, which can distribute the workload of NAT processing to devices and reduce transmission delay by using packet switching instead of packet modification. Furthermore, we describe an SDN-based IoT connectivity management architecture that supports IoT global connectivity with enhanced real-time performance and security.
---
paper_title: Design and evaluation of a low-cost, DIY-inspired, underwater platform to promote experimental research in UWSN
paper_content:
Underwater Acoustic Sensor Networks (UWSNs) are a challenging research area due to limited bandwidth, low data rates, severe multipath, and high variability in channel conditions. These complicated and non-linear channel characteristics render incorrect most simplifying assumptions used in simulations. We believe that, while researchers have proposed several novel protocols, their use of models and simulations as the only form of validation and protocol comparison remains removed from reality. We argue that research experimentation is hindered by two fundamental constraints: the high cost of underwater networking experiments, and the lack of a single, easily replicable platform for evaluation. We present here the Underwater Platform to Promote Experimental Research (UPPER): a low-cost and flexible underwater platform designed to enable cost-effective and repeatable experimentation. We utilize commercial off-the-shelf (COTS) components to provide an integrated HW/SW solution that interfaces to two versions of our custom hydrophones from laptops that act as a Software-Defined Radio (SDR)-based physical layer, while allowing higher-layer protocols to interact via a flexible API. With a total cost of $25 and $65 for each version of our underwater communication platform, we evaluate the platforms to demonstrate their data rates (50-600 bps) and range (5-10 m for v1, 30-50 m for v2), thus indicating a cost-range tradeoff. We believe our platform removes the barrier to validating simulation results in underwater environments while also allowing a fair comparison between protocols.
---
paper_title: Coralcon: An open source low-cost modem for underwater IoT applications
paper_content:
Most underwater deployments rely on acoustics to enable communication, and the equipment for underwater communication is usually very expensive. Simulations of underwater systems are insufficient for many researchers. An open source platform built with commercially available hardware at low cost, along with customizable software modules, may prove very useful for researchers. This paper focuses on the details of Coralcon, a low-cost, open-source modem designed to be integrated with the underwater Internet of Things (IoT). We implemented Coralcon with a customizable minimal software defined radio (SDR). Coralcon can be interfaced with other equipment to send and receive data reliably at rates up to 1000 bits per second with one antenna.
---
paper_title: Real-time video transmission over different underwater wireless optical channels using a directly modulated 520 nm laser diode
paper_content:
We experimentally demonstrate high-quality real-time video streaming over an underwater wireless optical communication (UWOC) link up to 5 m distance using phase-shift keying (PSK) modulation and quadrature amplitude modulation (QAM) schemes. The communication system uses software defined platforms connected to a commercial TO-9 packaged pigtailed 520 nm directly modulated laser diode (LD) with 1.2 GHz bandwidth as the optical transmitter and an avalanche photodiode (APD) module as the receiver. To simulate various underwater channels, we perform laboratory experiments on clear, coastal, harbor I, and harbor II ocean water types. The measured bit error rates of the received video streams are 1.0 × 10-9 for QPSK, 4-QAM, and 8-QAM and 9.9 × 10-9 for 8-PSK. We further evaluate the quality of the received live video images using structural similarity and achieve values of about 0.9 for the first three water types, and about 0.7 for harbor II. To the best of our knowledge, these results present the highest quality video streaming ever achieved in UWOC systems that resemble communication channels in real ocean water environments.
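The minimal sketch below shows Gray-mapped QPSK modulation and hard-decision demodulation over an AWGN channel, just to illustrate the kind of PSK signalling used in such links; the SNR, bit count, and absence of any underwater channel model are deliberate simplifications.

```python
# Minimal QPSK mod/demod over AWGN (illustrative, no underwater channel model).
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2000)

# Gray-mapped QPSK: pairs of bits -> complex symbols at +-1/sqrt(2)
i = 1 - 2 * bits[0::2]
q = 1 - 2 * bits[1::2]
symbols = (i + 1j * q) / np.sqrt(2)

snr_db = 10
noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)          # per-dimension noise std
received = symbols + noise_std * (rng.standard_normal(symbols.size)
                                  + 1j * rng.standard_normal(symbols.size))

# Hard-decision demapping: sign of I and Q components
rx_bits = np.empty_like(bits)
rx_bits[0::2] = (received.real < 0).astype(int)
rx_bits[1::2] = (received.imag < 0).astype(int)

print("BER:", np.mean(bits != rx_bits))
```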
---
paper_title: Software Defined Open Architecture Modem development at CMRE
paper_content:
This paper covers the first steps in creating a Software Defined Open Architecture Modem (SDOAM). Potentially useful operating environments, platforms and approaches are reviewed as well as relevant work on underwater digital communications at CMRE, including JANUS. A high-level architectural structure, based on a generalisation of the classic OSI communications stack, is proposed, identifying the modules that will make up the system and taking care to include all the features that the major stakeholders will want to see while minimising the investment that will need to be made to migrate from existing systems to SDOAMs. A key new element is the provision of policy engines that will negotiate the switching between different modules in the OSI-like layers. We also propose to formalise the cross-layer linking that is typically found in practical implementations by providing a fully-connected cross-layer framework for all processes to access. Finally, future developments and research directions are identified. The exact definitions of the interfaces between these modules remain to be specified, as is the software support environment on which they will operate.
---
paper_title: Internet of underwater things: Challenges and routing protocols
paper_content:
The Internet of Underwater Things (IoUT) is a novel class of Internet of Things (IoT) that enables practical applications toward developing smart cities. Underwater Wireless Sensor Networks (UWSNs) show great potential for the future but also pose new challenges for IoUT. In addition, routing protocols will play a major role in determining how data are forwarded among the substantial number of "things". In this paper, we call attention to the challenges for IoUT, provide an exhaustive investigation of cutting-edge routing protocols, and examine the connection between these challenges and routing protocols for IoUT.
---
paper_title: Software-defined underwater acoustic networks: toward a high-rate real-time reconfigurable modem
paper_content:
We review and discuss the challenges of adopting software-defined radio principles in underwater acoustic networks, and propose a software-defined acoustic modem prototype based on commercial off-the-shelf components. We first review current SDR-based architectures for underwater acoustic communications. Then we describe the architecture of a new software-defined acoustic modem prototype, and provide performance evaluation results in both indoor (water tank) and outdoor (lake) environments. We present three experimental testbed scenarios that demonstrate the real-time reconfigurable capabilities of the proposed prototype and show that it exhibits favorable characteristics toward spectrally efficient cognitive underwater networks, and high data rate underwater acoustic links. Finally, we discuss open research challenges for the implementation of next-generation software-defined underwater acoustic networks.
---
paper_title: Bitwise ranging through underwater acoustic communication with frequency hopped FSK utilizing the Goertzel algorithm
paper_content:
Acoustic communication is the most common technique to communicate with an autonomous underwater vehicle (AUV) during its mission. Furthermore, this communication channel is often used for ranging, to support the navigation of the vehicle and localize it in world coordinates, whereas the vehicles dead reckoning only gives a relative position to a starting point. Due to the severe multipath propagation under water, frequency hopped frequency shift keying (FH-FSK) is often used underwater to communicate over long ranges, whereas it can be otherwise deemed bandwidth inefficient. In this work, a version of FH-FSK is realized as a software defined radio (SDR) on a system on chip, that is connected to the peripherals necessary to transmit and receive signals via acoustic transducers. Besides the communication capabilities, we demonstrate the application of this modulation scheme to ranging for each transmitted data bit.
---
paper_title: Software-defined underwater acoustic networking platform and its applications
paper_content:
As underwater communications adopt acoustics as the primary modality, we are confronting several unique challenges such as highly limited bandwidth, severe fading, and long propagation delay. To cope with these, many MAC protocols and PHY layer techniques have been proposed. In this paper, we present a research platform that allows developers to easily implement and compare their protocols in an underwater network and configure them at runtime. We have built our platform using widely supported software that has been successfully used in terrestrial radio and network development. The flexibility of development tools such as software defined radio, TinyOS, and Linux have provided the ability for rapid growth in the community. Our platform adapts some of these tools to work well with the underwater environment while maintaining flexibility, ultimately providing an end-to-end networking approach for underwater acoustic development. To show its applicability, we further implement and evaluate channel allocation and time synchronization protocols on our platform.
---
paper_title: The design and experiment of a software-defined acoustic modem for underwater sensor network
paper_content:
In this paper, the design scheme and system implementation of a compact, low-cost, energy-efficient, real-time, software-defined acoustic (SDA) modem for underwater sensor network are described. The hardware and software performance of the SDA modem are presented by the underwater experiment results of different underwater communication and networking method using the same modem.
---
paper_title: Software defined radio (SDR) foundations, technology tradeoffs: A survey
paper_content:
Software radio has emerged as a focus of both academic research and commercial development for future wireless systems. This paper briefly reviews the foundation concepts of the Software Radio. It then characterizes the tradeoffs among core software-radio technologies. Object oriented analysis leads to the definition of the radio reference platform and the related layered object-oriented architecture supporting simultaneous hardware and software evolution. Research issues include layering, tunneling, virtual machines and intelligent agents.
---
paper_title: Methods of Hybrid Cognitive Radio Network: A Survey
paper_content:
In advanced radio communication systems, most of the radio spectrum remains underutilized. To fully utilize the radio spectrum, an efficient allocation of the scarce and expensive radio resources is most important and challenging. As a solution of this problem, cognitive radio network (CRN) can make better use of the radio spectrum by allowing the secondary users (SU) to opportunistically access and share the licensed spectrum using dynamic spectrum access (DSA) technology. There are two main sharing techniques in CRN based on the access technology: 1) over-lay; 2) under-lay. In over-lay mode, secondary user can access only vacant spectrum in the absence of primary user (PU) but in underlay mode, the SUs can coexist with the PUs in the same channel under the SINR (signal to interference plus noise ratio) constraints. To further improve the spectrum scarcity, both over-lay and under-lay approaches gets combined under SINR constraints to make a hybrid cognitive radio. In this paper, we provide a survey for different methods of hybrid cognitive radio in which under-lay and over-lay modes are merged using different techniques to efficiently utilize the scarce radio spectrum.
---
paper_title: Dolphins First: Dolphin-Aware Communications in Multi-Hop Underwater Cognitive Acoustic Networks
paper_content:
Acoustic communication is the most versatile and widely used technology for underwater wireless networks. However, the frequencies used by current acoustic modems are heavily overlapped with the cetacean communication frequencies, where the man-made noise of underwater acoustic communications may have harmful or even fatal impact on those lovely marine mammals, e.g., dolphins. To pursue the environmental friendly design for sustainable underwater monitoring and exploration, specifically, to avoid the man-made interference to dolphins, in this paper, we propose a cognitive acoustic transmission scheme, called dolphin-aware data transmission (DAD-Tx), in multi-hop underwater acoustic networks. Different from the collaborative sensing approach and the simplified modeling of dolphins’ activities in existing literature, we employ a probabilistic method to capture the stochastic characteristics of dolphins’ communications, and mathematically describe the dolphin-aware constraint. Under dolphin-awareness and wireless acoustic transmission constraints, we further formulate the DAD-Tx optimization problem aiming to maximize the end-to-end throughput. Since the formulated problem contains probabilistic constraint and is NP-hard, we leverage Bernstein approximation and develop a three-phase solution procedure with heuristic algorithms for feasible solutions. Simulation results show the effectiveness of the proposed scheme in terms of both network performance and dolphin awareness.
---
paper_title: Frequency offset estimation for index modulation-based cognitive underwater acoustic communications
paper_content:
As a manner of communications with the high spectral and energy efficiency, cognitive underwater acoustic communications based on index modulation (CUAC-IM) have been investigating by us in more recent years. Like other communication system, estimation algorithm on frequency offset (FO) caused by the relative motion of mobile station is crucial to CUAC-IM system. In this paper, system model, especially for applications of index modulation in cognitive underwater acoustic communications, will be presented firstly. Then, an estimation method on the FO over the above model will be proposed, which employs the high order statistics. Finally, computer simulation will be performed to verify the aforementioned solution of the FO. Simulation results demonstrate that the suggested algorithm is accurate and efficient.
---
paper_title: Research on Dynamic Spectrum Allocation Using Cognitive Radio Technologies
paper_content:
With the fast development of wireless communication technologies and the amazing increasing of user numbers in Internet of Things,the limited spectrum resources have become more and more scarce.However,today's spectrum resources are regulated by a fixed assignment policy and they are in inefficient usage.How to satisfy users' high mobility and mass date transmission requirements are new challenges.Cognitive Radio is one of these technologies that can offer users a seamless accessing environment,and solves the current spectrum inefficiency problems.It represents a great potential for the development of Internet of Things.In this paper,using Cognitive Radio technologies,we propose a cognitive radio users and networks cooperative spectrum allocation framework,then propose a dynamic spectrum allocation solution.This solution consists of two algorithms: One is a Spectrum Ranking Selecting algorithm(SRS) implemented at cognitive radio users,to meet their QoS and mobility requirements;and the other is a Joint Optimization Matching algorithm(JOM) implemented at the networks,by achieving the co-optimization between spectrum utilization and handoff rate to satisfy the mass data transmission requirement.With the cooperation between cognitive radio users and networks,our solution can construct an efficient dynamic spectrum allocation.Simulation results show that,compared with the traditional mapping algorithm,our solution can significantly improve the performance of networks in terms of throughput by 70% and spectrum handoff rate by 56%.
---
paper_title: OFDM-based spectrum-aware routing in underwater cognitive acoustic networks
paper_content:
With the long propagation delay of an acoustic signal in underwater communications systems, relay node selection is one of the key design factors, because it significantly improves end-to-end delay, thereby improving overall network performance. To this end, the authors propose orthogonal frequency division multiplexing-based spectrum-aware routing (OSAR), a scheme in which spectrum sensing is done by an energy detector, and each sensor node broadcasts its local sensing results to all one-hop nodes via an extended beacon message. Each sensor node then selects nodes that agree on an idle channel, consequentially forming a set of neighbouring nodes. The selection of a relay node is determined by calculating the transmission delay – the source/relay node selected is the one that has the minimum transmission delay from among all nodes in the neighbouring set. To evaluate OSAR, the authors perform extensive simulations via ns-MIRACLE for different numbers of channels using a BELLHOP model, and evaluate the average delay for different sensor nodes within the considered network. The results show a substantial decrease in delay as the number of sensor nodes increases in the network. In addition, the authors verify that the packet delivery ratio increases with increases in the number of sensor nodes, and prove better performance in the overhead ratio. The authors' simulation results verify that OSAR outperforms existing solutions.
---
paper_title: On Rethinking Cognitive Access for Underwater Acoustic Communications
paper_content:
In this paper, we investigate how to reformulate the concepts of cognitive access, originally developed for radio communications, in the framework of underwater acoustic communications. A straightforward application of the classical energy-detection-based cognitive approach, such as the one employed for radio communications, would result in a reduced spectrum utilization in an acoustic scenario. Actually, in the underwater scenario, acoustic signals sensed by a network node are likely to be due to communication sources as well as natural/artificial acoustic sources (e.g., mammals, ship engines, and so forth), differently from classical cognitive radio access, where each signal at the receiver is generated by a communication source. To maximize the access probability for cognitive acoustic nodes, we focus on understanding the nature of sensed interference. Toward this aim, we try to discriminate among natural and communications sources by classifying the images representing the time and frequency features of the received signals, obtained by means of the Wigner–Ville transform. Two different classifiers are considered here. The first one is targeted on finding natural interference while the second one looks for communication. Simulation results show how the herein described approach drastically enhances the access probability in an acoustic scenario with respect to a direct rephrasing of classical cognitive access. A possible protocol for implementing cognitive access is also described and its performance evaluated.
---
paper_title: Localization, routing and its security in UWSN — A survey
paper_content:
Underwater sensor networks are promising apparatus for the discovery of the ocean. For localization and networking protocol, this sensor network requires new robust solution. For terrestrial sensor network various localization algorithms and routing protocols have been proposed, there are very few localization and routing techniques for UWSN. Compare to terrestrial sensor network the features of underwater sensor network are basically different. Underwater acoustic communication is described by stringent physical layer condition with severe bandwidth restrictions. In underwater the uneven rate of sound and the long propagation delay create a distinct set of challenges for localization and routing in UWSN. This paper surveyed the different localization and routing schemes that are applicable to underwater sensor networks, the dispute in meeting the condition created by rising function for such network, attacks, security requirement in UWSN and localization and routing security in UWSN.
---
paper_title: A review on recent advances in spectrum sensing, energy efficiency and security threats in cognitive radio network
paper_content:
In the last decade there is a vast development in the wireless communication and new wireless devices so the demand of the radio spectrum is increasing, There is a need of efficient spectrum utilization because due to fixed assignment policy a huge portion of licensed spectrum is underutilized. To exploit the radio spectrum in a more intelligent and flexible way, regulatory bodies are reviewing their policies by adopting innovative communication technology. Cognitive Radio is a revolutionary technology which enables access to underutilized spectrum efficiently and dynamically without causing interference to the licensed users. In the last few years there has been significant development in cognitive radio technology. This paper reviews recent development and advances in three key areas of cognitive radio network — spectrum sensing, security and energy efficiency. In this paper the fundamentals of cognitive radio is first discussed and than existing work is reviewed.
---
paper_title: Joint Relay Selection and Power Allocation in Underwater Cognitive Acoustic Cooperative System with Limited Feedback
paper_content:
We study the problem of joint relay selection and power allocation in a underwater cooperative system with multiple users assisted by multiple relays. Due to the harsh underwater environments, the channel state information (CSI) at the transmitter is imperfect, which leads to the performance degrading in the underwater cooperative acoustic system. Therefore, we analyze the cooperative underwater acoustic channel with limited feedback to increase the sum-rate of the system. Meanwhile, different from other researches, we do not only focus on the single system scenario, but also consider the presence of nearby acoustic activities and the problem of joint relay selection and power allocation is solved in a cognitive acoustic (CA) scenario. Thus the codebook of interference CSI and the codebook of quantized relay selection and power allocation strategy are designed, respectively. Simulation results show that a few bits feedback can significantly improve the performance of the CA cooperative acoustic system.
---
paper_title: Receiver-Initiated Spectrum Management for Underwater Cognitive Acoustic Network
paper_content:
Cognitive acoustic (CA) is emerging as a promising technique for environment-friendly and spectrum-efficient underwater communications. Due to the unique features of underwater acoustic networks (UANs), traditional spectrum management systems designed for cognitive radio (CR) need an overhaul to work efficiently in underwater environments. In this paper, we propose a receiver-initiated spectrum management (RISM) system for underwater cognitive acoustic networks (UCANs). RISM seeks to improve the performance of UCANs through a collaboration of physical layer and medium access control (MAC) layer. It aims to provide efficient spectrum utilization and data transmissions with a small collision probability for CA nodes, while avoiding harmful interference with both “natural acoustic systems”, such as marine mammals, and “artificial acoustic systems”, like sonars and other UCANs. In addition, to solve the unique challenge of deciding when receivers start to retrieve data from their neighbors, we propose to use a traffic predictor on each receiver to forecast the traffic loads on surrounding nodes. This allows each receiver to dynamically adjust its polling frequency according to the variation of a network traffic. Simulation results show that the performance of RISM with smart polling scheme outperforms the conventional sender-initiated approach in terms of throughput, hop-by-hop delay, and energy efficiency.
---
paper_title: Full Spectrum Sharing in Cognitive Radio Networks Toward 5G: A Survey
paper_content:
With the development of wireless communication technology, the need for bandwidth is increasing continuously, and the growing need makes wireless spectrum resources more and more scarce. Cognitive radio (CR) has been identified as a promising solution for the spectrum scarcity, and its core idea is the dynamic spectrum access. It can dynamically utilize the idle spectrum without affecting the rights of primary users, so that multiple services or users can share a part of the spectrum, thus achieving the goal of avoiding the high cost of spectrum resetting and improving the utilization of spectrum resources. In order to meet the critical requirements of the fifth generation (5G) mobile network, especially the Wider-Coverage , Massive-Capacity , Massive-Connectivity , and Low-Latency four application scenarios, the spectrum range used in 5G will be further expanded into the full spectrum era, possibly from 1 GHz to 100 GHz. In this paper, we conduct a comprehensive survey of CR technology and focus on the current significant research progress in the full spectrum sharing towards the four scenarios. In addition, the key enabling technologies that may be closely related to the study of 5G in the near future are presented in terms of full-duplex spectrum sensing, spectrum-database based spectrum sensing, auction based spectrum allocation, carrier aggregation based spectrum access. Subsequently, other issues that play a positive role for the development research and practical application of CR, such as common control channel, energy harvesting, non-orthogonal multiple access, and CR based aeronautical communication are discussed. The comprehensive overview provided by this survey is expected to help researchers develop CR technology in the field of 5G further.
---
paper_title: Cognitive Routing in Software-Defined Underwater Acoustic Networks
paper_content:
There are two different types of primary users (natural acoustic and artificial acoustic), and there is a long propagation delay for acoustic links in underwater cognitive acoustic networks (UCANs). Thus, the selection of a stable route is one of the key design factors for improving overall network stability, thereby reducing end-to-end delay. Software-defined networking (SDN) is a novel approach that improves network intelligence. To this end, we propose a novel SDN-based routing protocol for UCANs in order to find a stable route between source and destination. A main controller is placed in a surface buoy that is responsible for the global view of the network, whereas local controllers are placed in different autonomous underwater vehicles (AUVs) that are responsible for a localized view of the network. The AUVs have fixed trajectories, and sensor nodes within transmission range of the AUVs serve as gateways to relay the gathered information to the controllers. This is an SDN-based underwater communications scheme whereby two nodes can only communicate when they have a consensus about a common idle channel. To evaluate our proposed scheme, we perform extensive simulations and improve network performance in terms of end-to-end delay, delivery ratio, and overhead.
---
paper_title: Design of underwater acoustic sensor communication systems based on software-defined networks in big data
paper_content:
The application based on big data is an important development trend of underwater acoustic sensor networks. However, traditional underwater acoustic sensor networks rely on the hardware infrastructure. The flexibility and scalability cannot be satisfied greatly. Due to the low performance of underwater acoustic sensor networks, it creates significant barriers to the implementation of big data. Software-defined network is regarded as a new infrastructure of next-generation network. It offers a novel solution for designing underwater acoustic sensor networks of high performance. In this article, a software-defined network–based solution is proposed to build the architecture of underwater acoustic sensor networks in big data. The design procedures of the data plane and control plane are described in detail. In the data plane, the works include the hardware design of OpenFlow-based virtual switch and the design of the physical layer based on software-defined radio. The hierarchical clustering technology and t...
---
paper_title: An SDN architecture for under water search and surveillance
paper_content:
Underwater Wireless Networking (UWN) schemes and applications have been attracting considerable interest with both industry and the research community. The nature of water, as a carrier medium, imposes very significant constraints on the both the characteristics and information carrying capacity of underwater communication channels. Currently, acoustic and optical are the two main physical platform/channel choices. Acoustics offers relative simplicity but low data rates. Optical has a considerable bandwidth advantage, but is much more complex to implement, exploit and manage. Leveraging both technologies enables exploitation of their complementarities and synergies. A Software Defined Networking architecture, which separates the control and data planes, enables full exploitation of this acousto-optic combination. In this configuration, the longer-ranged acoustic channel serves as the control plane, allowing the controller to issue mobility and network related commands to distant AUVs, whilst the shorter range (but higher throughput) optical channel serves as the data plane, thereby allowing for fast transfer of data. This paper presents such a system, employing the NATO approved JANUS underwater communications standard for the control channel.
---
paper_title: SoftWater: Software-defined networking for next-generation underwater communication systems
paper_content:
Abstract Underwater communication systems have drawn the attention of the research community in the last 15 years. This growing interest can largely be attributed to new civil and military applications enabled by large-scale networks of underwater devices (e.g., underwater static sensors, unmanned autonomous vehicles (AUVs), and autonomous robots), which can retrieve information from the aquatic and marine environment, perform in-network processing on the extracted data, and transmit the collected information to remote locations. Currently underwater communication systems are inherently hardware-based and rely on closed and inflexible architectural design. This imposes significant challenges into adopting new underwater communication and networking technologies, prevent the provision of truly-differentiated services to highly diverse underwater applications, and induce great barriers to integrate heterogeneous underwater devices. Software-defined networking (SDN), recognized as the next-generation networking paradigm, relies on the highly flexible, programmable, and virtualizable network architecture to dramatically improve network resource utilization, simplify network management, reduce operating cost, and promote innovation and evolution. In this paper, a software-defined architecture, namely SoftWater, is first introduced to facilitate the development of the next-generation underwater communication systems. More specifically, by exploiting the network function virtualization (NFV) and network virtualization concepts, SoftWater architecture can easily incorporate new underwater communication solutions, accordingly maximize the network capacity, can achieve the network robustness and energy efficiency, as well as can provide truly differentiated and scalable networking services. Consequently, the SoftWater architecture can simultaneously support a variety of different underwater applications, and can enable the interoperability of underwater devices from different manufacturers that operate on different underwater communication technologies based on acoustic, optical, or radio waves. Moreover, the essential network management tools of SoftWater are discussed, including reconfigurable multi-controller placement, hybrid in-band and out-of-band control traffic balancing, and utility-optimal network virtualization. Furthermore, the major benefits of SoftWater architecture are demonstrated by introducing software-defined underwater networking solutions, including the throughput-optimal underwater routing, SDN-enhanced fault recovery, and software-defined underwater mobility management. The research challenges to realize the SoftWater are also discussed in detail.
---
paper_title: Design and Implementation of SDN-Based Underwater Acoustic Sensor Networks With Multi-Controllers
paper_content:
Underwater acoustic sensor networks (UASNs) provide new opportunities for exploring oceans and consequently improving our understanding of the underwater world. UASNs usually rely on hardware infrastructure with poor flexibility and versatility. Compared with wireless sensor networks, UASNs are quite expensive to manufacture and deploy. Due to the unique data format, protocols, and service constraints of various applications, UASNs are typically deployed in a redundant manner, which not only leads to waste but also causes serious interference due to the presence of multiple signals in the same underwater region. Software-defined networking (SDN) provides an innovative means of improving the flexibility of underwater systems. In this paper, we present an SDN-based UASN framework, followed by the design of a clustering method in which learning automata and degree-constrained connected dominating sets are employed. We then propose a load balancing mechanism involving multiple controllers, based on the consistent hashing algorithm. Finally, we describe a simulation program (called UASNs hypervisor) that we developed and implemented to assess the network survival time, bit-error rate, and computational complexity. The experimental results show that the UASN was improved significantly. This work provides important theoretical and technical support for the implementation of SDN-based UASNs.
---
paper_title: Convex optimization–based multi-user detection in underwater acoustic sensor networks
paper_content:
Multi-carrier code-division multiple access is an important technical means for high-performance underwater acoustic sensor networks. Nevertheless, severe multiple access interference is a huge cha...
---
paper_title: Toward the Development of Secure Underwater Acoustic Networks
paper_content:
Underwater acoustic networks (UANs) have been recognized as an enabling technology for various applications in the maritime domain. The wireless nature of the acoustic medium makes UANs vulnerable to various malicious attacks, yet, limited consideration has been given to security challenges. In this paper, we outline a hybrid architecture that incorporates aspects of physical layer security, software defined networking, node cooperation, cross-layering, context-awareness, and cognition. The proposed architecture envisions strategies at the node as well as at the network level that adapt to environmental changes, the status of the network and the possible array of attacks. Several examples of attacks and countermeasures are discussed while deployment and functionality issues of the proposed architecture are taken into consideration. This work is not intended to represent a whatsoever proven solution but mainly to suggest future research directions to the scientific community working in the area of UANs.
---
paper_title: WaterCOM: An Architecture Model of Context-Oriented Middleware
paper_content:
Integrating physical and information space into applications increases application's complexity and development difficulty. In Ubiquitous environment, context collection, aggregation and notification raise complex scientific problems and new challenges. In this paper we address these challenges by proposing a conceptual context-oriented middleware architecture. We first discuss the reason to use context in ubiquitous computing, and context-oriented middleware requirements. Then we present our approach by describing a service-oriented architecture model. It provides a dynamic adaptation ability, supports multiple context models and multi-domain context consumer. Finally, we discuss the benefit of our conceptual approach by describing and comparing current context middlewares.
---
paper_title: Improving energy efficiency performance of ALOHA based underwater acoustic sensor networks
paper_content:
The goal of this paper is to explore the throughput and energy efficiency of ALOHA based underwater acoustic sensor networks (UASNs). Primarily, we frame analytical expressions for the throughput and energy efficiency of UW-ALOHA for single hop UASN. We then procure closed form expression for the optimal channel attempt rates that maximize energy efficiency and throughput Independently. In this case, we notice that the attempt rate that maximizes the energy efficiency leads to drop in the network throughput and vice versa. We then consider a cross layer energy optimization problem with the objective of maximizing the energy efficiency of the network while meeting an SNR constraint at the PHY layer and a throughput constraint at the MAC layer. We also consider underwater acoustic channel specific parameters like distance dependent bandwidth and spreading losses for the analysis. Using Karush-Kuhn Tucker conditions, we derive closed form solution for the optimal channel attempt rates that meet the desired objectives. Extensive performance evaluation results show that sensible selection of the attempt rates by the sensor nodes can improve the energy efficiency of the network significantly.
---
paper_title: A SDN-controlled underwater MAC and routing testbed
paper_content:
Efficient data communication among autonomous under-water vehicles (AUVs) is difficult. Challenges include the long propagation delays arising with acoustic communication solutions, and line-of-sight requirements for optical transceivers. Existing multi-hop routing approaches are not always appropriate due to node mobility. This work presents a centralized approach to network control, exploiting the observation that AUV networks will have a bounded number of nodes. The paper describes a SDN realization of AUV networking, and documents the implementation of a small-scale replica of the system in our testbed, which can be accessed remotely via a web page and SSH. We then demonstrate the functionality of our implementation by evaluating the performances of two existing MAC protocols namely Slotted FAMA [11] and UW-Aloha [13], in a multi-hop, underwater scenario.
---
paper_title: C-MAC: A TDMA-Based MAC Protocol for Underwater Acoustic Sensor Networks
paper_content:
Different from terrestrial wireless networks that use radio channel, underwater networks utilize acoustic channel, which poses great research challenges in medium access control (MAC) protocol design due to its low available bandwidth and high propagation delay characteristics. In addition, the high bit-error, high transmission energy cost, and complex multi-path effects in underwater environment make it even harder. In this paper, a suitable MAC protocol, named C-MAC (cellular MAC) for underwater acoustic sensor networks (UWANs) is proposed. C-MAC is a TDMA-based MAC protocol, which divides networks into many cells. Each cell is distributed a time slot; nodes in a cell, can only transmit packets in the cell’s time slot. Experiments show the protocol can avoid collision, minimize the energy consumption, and increase the throughput efficiency.
---
paper_title: Software-defined underwater acoustic networking platform
paper_content:
Currently acoustics is the primary modality for underwater communication even though it presents a difficult channel. To try and cope with the challenges of the channel many MAC protocols and PHY layer techniques have been proposed. In this paper we present a research platform that allows developers to easily implement and compare their protocols in an underwater network and configure them at runtime. We have built our platform using widely supported software that has been successfully used in terrestrial radio and network development. The flexibility of development tools such as software defined radio have provided the ability for rapid growth in the community. Our platform adapts some of these tools to work well with the underwater environment while maintaining flexibility, ultimately providing an end-to-end networking approach for underwater acoustic development.
---
paper_title: Shallow water acoustic channel modeling and MIMO-OFDM simulations
paper_content:
The uniqueness of underwater acoustic (UWA) channel and features like limited bandwidth, multipath spread, severe Doppler Effect and transmission loss make the UWA communication more challenging and intricate. The use of multiple inputs multiple outputs (MIMO) increases the spatial diversity of the system and the bandwidth limitation issue is solved by Orthogonal Frequency Division Multiplexing (OFDM) due to its high spectral efficiency, whereas attenuation coefficient plays a key role in determining the transmission loss. UWA Channel capacity is highly influenced by the transducer spacing and the delay spread. A simple and brief MIMO UWA communication model is presented here to measure the transmission loss by analyzing different models, the spatial coherence by analyzing the effect of spacing between transducers, ambient noise effect, multipath effect and consequently increase the spatial diversity and spectral efficiency by using MIMO and OFDM for shallow water communication. The ambient noise effects are found using Wenz' model whereas three different models; Thorp's, Fisher & Simmons'(F&S) and Francois & Garrison's (F&G) models are analyzed and compared to find the attenuation coefficient of the acoustic signal. Multipath propagation of the signal is considered following the Ray Theory and a model proposed by Bouvet & Loussert is used to find multipath effect. Finally, the modeled underwater channels are used to simulate and analyze the performance of MIMO-OFDM communication.
---
paper_title: Remotely operated underwater vehicle with surveillance system
paper_content:
Remotely operated underwater vehicles or ROVs are underwater robots that are used in science, entertainment, military and offshore oil industries. Their main function is to interact with the environment under the water in various ways. It is a very complicated system and uncommon in the developing or the underdeveloped countries around the world. Countries consisting many water bodies and prone to maritime incidents ROVs can be very useful in rescue missions. In this research work we built a ROV and equipped it with a surveillance system. Our ROV will be quite useful beside rescuers by monitoring the underwater and sending the video. It can also be used in military, scientific research, film making under the water and monitoring underwater industrial structures and underwater network devices.
---
paper_title: Network Function Virtualization: State-of-the-art and Research Challenges
paper_content:
Network function virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling network functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in operating expenses (OPEX) and capital expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of software defined networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products.
---
paper_title: Multiple-Input Multiple-Output Technique for Underwater Acoustic Communication System
paper_content:
The performance of an underwater acoustic communication (UAC) system is limited due to tough propagation conditions in the UAC channel. Multiple-Input Multiple-Output (MIMO) technique can improve the reliability of the data transmission system, increase its speed, increase its range, and reduce the energy consumption. The paper presents an implementation method of MIMO technique in the form of coding the Space-Time Block Code and its optimal case in the form of Alamouti coding. The results of simulation tests in a channel with flat Rayleigh fading were included, which were compared with the quality of the SISO system.
---
paper_title: GENI with a Network Processing Unit: Enriching SDN Application Experiments
paper_content:
This paper reports the integration of Dell's specialized split data plane (SDP) OpenFlow switch into the GENI testbed. In addition, the paper outlines the research directions in network science and engineering that such a switch may enable together with a new perspective on education in network programming. An SDP switch can be used to perform some specialized processing on flows with special hardware accelerators in addition to hosting any application (running on a Linux OS) that a user may insert on the path of a flow. The SDP switch is composed of a Dell switch (PowerConnect 7024) with an internal physical connection to a sub-unit, Network Processor Unit (NPU), by Cavium Networks. Hosting an OpenvSwitch on the NPU with open hosting of Linux applications enables software-defined networking experiments. The integration challenges/process associated with this unit is presented as a future reference to other such foreign box integrations.
---
paper_title: Adaptive combining multi-branch frequency-domain detector for underwater acoustic cooperative communication
paper_content:
In this paper, we propose an adaptive combining multi-branch frequency-domain detector (ACMFD) for amplify-and-forward (AF) underwater acoustic cooperative communication. In contrast to the existing methods, the proposed detector can adaptively combine the received signals from different nodes at destination, and don't need the assumption that full and perfect channel state information (CSI) of all the relayed paths at the receiver is known. Simulation results show that the proposed ACMFD has better performance than existing counterparts.
---
paper_title: Minimization of flow table for TCAM based openflow switches by virtual compression approach
paper_content:
Forwarding lookup in the open flow switch can be done for each arriving packet by every switch in path. Every switch maintains a set of IP prefix value in its lookup table. For a given IP address it finds a longest prefix match in the forwarding table that matches first fewer bits of destination address. Earlier, there are several technique that has been proposed to optimize the table space and to reduce the number of entries in the forwarding table. In TCAM prefix values needs to be stored in descending order of its length, it require more number of entries needs to be shifted. In this paper we propose a method to reduce the length of the prefix value up to 50 to 60% horizontally as well as vertically which will be stored in the forwarding table. It involves significant prefix methods to reduce number of entries in the forwarding table.
---
paper_title: TCAM-based flow lookup design on FPGA and its applications
paper_content:
TCAM is now emulated using FPGAs and based on memory known as RAM or register. Most of them are made just using register- or RAM-based resource individually when mapped to FPGA hardware. This paper presents a 4×4 flexible Basic-TCAM architecture that can be flexibly designed based selectively on memory-based or register-based FPGA resource or both of them in combined manner. In addition, it can also allow a scalable architecture. A wider and deeper TCAM of size 512×36 built based on it seizes only 73,728 bits and 1,503 logic utilization and supports competitive search latency of 1 clock cycle over Altera Cyclone V FPGA. A typical flow lookup design based on this Basic-TCAM architecture uses a TCAM of size 4×16 and a simple parser to do a flow lookup. This design application is then taken place in a simple VLAN-based switch system for further feasible application of the TCAM in this work.
---
paper_title: BER performance of underwater multiuser cooperative communication based on MMSE-DFD algorithm
paper_content:
An underwater multiuser cooperative communication system based on minimum mean-square error decision feedback detection (MMSE-DFD) algorithm is proposed, where communication between two source nodes and one destination node is assumed. In this paper, underwater acoustic (UWA) channel is modeled as a sparse multipath channel experiencing Rayleigh fading. Compared with the traditional multiuser cooperative communication, the proposed communication system is able to increase the throughput of the communication system, when the half-duplex mode is assumed. Furthermore, computer simulations and analyses show that through the time diversity obtained by the cooperative system, the spatial diversity provided by the transducer array at the destination node and the MMSE-DFD algorithm, the proposed communication system exhibits better BER performance than the traditional cooperative system.
---
paper_title: Coordinated anti-collision transmission with parity grouping for multi-hop underwater acoustic cooperative networks
paper_content:
Due to the low speed of underwater acoustic propagation and the adverse energy supply of underwater facility, it is a challenging task to reduce the end-to-end delay and the system energy consumption. In this paper, an improved scheme is proposed based on multi-hop coordinated transmission for underwater acoustic cooperative networks. The data packets are divided into parity groups and then transferred, where the odd-numbered node transfers the odd-numbered packet and the even-numbered node transfers the even-numbered packet accordingly. Arbitrary node can be a cooperative relay for the adjacent two nodes. In the meanwhile, adopting a proper transfer power, arbitrary two nodes separated by four nodes can deliver data packets to the next node without collision. The simulation results show that the proposed scheme can effectively reduce the average end-to-end delay and energy consumption with the acceptable outage probability.
---
paper_title: Software-Defined Network Function Virtualization: A Survey
paper_content:
Diverse proprietary network appliances increase both the capital and operational expense of service providers, meanwhile causing problems of network ossification. Network function virtualization (NFV) is proposed to address these issues by implementing network functions as pure software on commodity and general hardware. NFV allows flexible provisioning, deployment, and centralized management of virtual network functions. Integrated with SDN, the software-defined NFV architecture further offers agile traffic steering and joint optimization of network functions and resources. This architecture benefits a wide range of applications (e.g., service chaining) and is becoming the dominant form of NFV. In this survey, we present a thorough investigation of the development of NFV under the software-defined NFV architecture, with an emphasis on service chaining as its application. We first introduce the software-defined NFV architecture as the state of the art of NFV and present relationships between NFV and SDN. Then, we provide a historic view of the involvement from middlebox to NFV. Finally, we introduce significant challenges and relevant solutions of NFV, and discuss its future research directions by different application domains.
---
paper_title: A study of applications, challenges, and channel models on the Internet of Underwater Things
paper_content:
The Internet of Underwater Things (IoUT) is a novel class of Internet of Things (IoT), and one of the emerging technologies toward smart city. To support the concept of IoUT, Underwater Wireless Sensor Networks (UWSNs) have emerged as a promising system. In this paper, we survey the potential IoUT applications, point out the challenges and differences between UWSNs and traditional sensor networks, and investigate the channel models. We validated the models by simulations.
---
paper_title: Development of advanced Lithium-ion battery for underwater vehicle
paper_content:
Electric power storage is an important technology for all equipments of underwater vehicles however environmental pressure is high, the temperature is 5 degrees Celsius or less, and conditions are unsuitable for many chemical reactions in the deep sea. Battery capacity is mainly dependent on its mass; this means that the cruising range of underwater vehicles is proportional to the mass of the battery. To solve this problem, a high energy density battery and its enclosures are being developed. Concretely, batteries are enclosed with oil, to equate environmental pressure and to be insulated in seawater. This is called the oil compensated method, and is applied to various batteries. The oil-filled 29Ah Lithium-Ion battery for the vehicle was developed in 2010. And sea trial was carried out equipped autonomous underwater vehicle “MR-X1”. The paper introduces Lithium-Ion secondary battery and result of sea trail.
---
paper_title: An Acoustic Positioning Buoy
paper_content:
Real-time positioning for autonomous underwater vehicles (AUVs) is an important but very complex problem. This article documents the overall process of creating a buoy system for AUV positioning. Similar to Global Positioning System (GPS) electromagnetic signals, the buoys were designed to transmit acoustic signals. These signals would be received and processed by the AUV, allowing the AUV to position itself in real time. This is a system-level report and does not go into any great detail on advanced topics. This project was led by Dr. Harold (Bud) Vincent, a professor at the University of Rhode Island and an expert in underwater acoustics.
---
paper_title: Performance analysis of self interference cancellation in bidirectional relaying underwater acoustic communication(BiRUAC) system
paper_content:
Due to the underwater acoustic channel is more complexity than the wireless communication channel. It has more multipath interference and larger attenuation, so the rate of transmission in the underwater acoustic channel is lower and the probability of error is larger. Aim to those problems, we proposed a bidirectional relaying underwater acoustic communication system and analysis the performance of this algorithm. The algorithm utilized the self interference cancellation model to remove the self interference. In the performance analysis part, we set the different size of self interference cancellation model. The simulations show that our proposed algorithm has lower SER and higher rate of transmission.
---
paper_title: Research of EBAP algorithm for ad Hoc underwater acoustic network
paper_content:
Ad Hoc underwater acoustic network have become a hot field in wireless sensor networks. Researches involving in sensor placement, network organization, localization, tracking and routing protocols, etc. Among these, routing protocol not only determines the monitoring quality of sensor network, but also acts as the foundation of network organization, sensor deployment and other applications. A fairly detailed description of the Ad Hoc underwater acoustic network protocol present in this paper. A good coverage, energy balancing and high connectivity routing algorithm (Energy Balancing based on AODV Protocol, EBAP) is designed according to the dynamic environment. A simulation program and performed field measurement is also developed. The result shows that EBAP can provide wireless communication in an Ad Hoc underwater acoustic network environment with low-latency and high-connectivity.
---
paper_title: Underwater Acoustic Communication Using MIMO Hydrophone
paper_content:
Hydrophone is a type of microphone that are used in underwater for recording and also for listening to the underwater sound. Generally, Hydrophones are based upon the transducer named Piezo electric that can convert the sound signals into an electrical signals. Here the frequencies associated with the underwater acoustic communications are 10Hz-IMHz. Frequencies above 1 MHz is rarely used but they can be easily absorbed in underwater. In Underwater wireless communication the information is transmitted through the channel named as UAC channel. This paper deals with the gain brought by using the underwater MIMO channel techniques.
---
paper_title: Cross-Layer Balanced Relay Node Selection Algorithm for Opportunistic Routing in Underwater Ad-Hoc Networks
paper_content:
Due to the different transmission media, the underwater environment poses a more severe situation for routing algorithm design than that of the terrestrial environment. In this paper, we propose a weight based fuzzy logic (WBFL) relay node selection algorithm for the underwater Ad-hoc network, which can get the most balanced result than the previous algorithms without increasing the computation complexity. In this algorithm, the parameter scatters instead of the parameter values are inputted into the fuzzy logic inference system. By this innovation, the algorithm can take as much cross-layer parameters into account as possible during the relay node selection. Moreover, taking the node mobility into account, we propose a geographic based link lifetime prediction algorithm for underwater Ad-hoc network. The simulation results show that the WBFL algorithm can improve the network throughput at most 50% compare with the ExOR algorithm; moreover, the WBFL is effective and accurate on selecting relay nodes and the computation complexity is less than the previous algorithms.
---
paper_title: Cross-Layer Design for Network Lifetime Maximization in Underwater Wireless Sensor Networks
paper_content:
This paper investigates the cross-layer design problem with the goal of maximizing the network lifetime for energy-constrained underwater wireless sensor networks (UWSNs). We first jointly consider link scheduling, transmission power and transmission rate in a proposed optimization problem with the adoption of time division multiple access (TDMA) schedules. Then, we propose an iterative algorithm to solve the optimization problem. It alternates between (1) link scheduling and (2) computation of transmission powers and transmission rates. In fact, the convergence of such iterative algorithm can be mathematically and empirically supported. We evaluate our algorithm for several network topologies. Extensive simulation results demonstrate the superiority of the proposed approach.
---
paper_title: Cross-Layer Energy Minimization for Underwater ALOHA Networks
paper_content:
Underwater networks suffer from energy efficiency challenges due to difficulties in recharging underwater nodes. In addition, underwater acoustic networks show unique transmission characteristics such as frequency-dependent attenuation, which causes the transmission power to significantly depend on the bandwidth and the distance. We here investigate the cross-layer energy minimization problem in underwater ALOHA networks considering the unique transmission properties of the underwater medium. We first analyze the separate optimization of the physical (PHY) and multiple access control (MAC) layers to minimize energy consumption. We analytically obtain the energy-optimum channel access rate for the ALOHA MAC layer, which minimizes the energy consumption per successfully transmitted bit. We then formulate a cross-layer optimization problem, which jointly optimizes PHY and MAC layers to minimize energy consumption. We show that such cross-layer optimization reduces the energy consumption per bit as much as 66% in comparison with separate optimization of both layers. Cross-layer optimization achieves this energy efficiency by assigning higher MAC-layer resources to the nodes that have a longer distance to the base station, i.e., which experience a less efficient PHY layer. Moreover, cross-layer optimization significantly increases the amount data transferred until first node failure since it results in a more homogeneous energy consumption distribution among the nodes.
---
paper_title: Fast Content Updating Algorithm for an SRAM-Based TCAM on FPGA
paper_content:
Static random-access memory (SRAM)-based ternary content-addressable memory (TCAM), an alternative to traditional TCAM, where inclusion of SRAM improves the memory access speed, scalability, cost, and storage density compared to conventional TCAM. In order to confidently use the SRAM-based TCAMs in application, an update module (UM) is essential. The UM replaces the old TCAM contents with fresh contents. This letter proposes a fast update mechanism for an SRAM-based TCAM and implements it on Xilinx Virtex-6 field-programmable gate array. To the best of authors’ knowledge, this is the first ever proposal on content-update-module in an SRAM-based TCAM, which consumes least possible clock cycles to update a TCAM word.
---
paper_title: Load Balancing Mechanisms in the Software Defined Networks: A Systematic and Comprehensive Review of the Literature
paper_content:
With the expansion of the network and increasing their users, as well as emerging new technologies, such as cloud computing and big data, managing traditional networks is difficult. Therefore, it is necessary to change the traditional network architecture. Lately, to address this issue, a notion named software-defined network (SDN) has been proposed, which makes network management more conformable. Due to limited network resources and to meet the requirements of quality of service, one of the points that must be considered is load balancing issue that serves to distribute data traffic among multiple resources in order to maximize the efficiency and reliability of network resources. Load balancing is established based on the local information of the network in the conventional network. Hence, it is not very precise. However, SDN controllers have a global view of the network and can produce more optimized load balances. Although load balancing mechanisms are important in the SDN, to the best of our knowledge, there exists no precise and systematic review or survey on investigating these issues. Hence, this paper reviews the load balancing mechanisms which have been used in the SDN systematically based on two categories, deterministic and non-deterministic. Also, this paper represents benefits and some weakness regarded of the selected load balancing algorithms and investigates the metrics of their algorithms. In addition, the important challenges of these algorithms have been reviewed, so better load balancing techniques can be applied by the researchers in the future.
---
paper_title: The Controller Placement Problem in Software Defined Networking: A Survey
paper_content:
Recently, a variety of solutions have been proposed to tackle the controller placement problem in SDN. The objectives include minimizing the latency between controllers and their associated switches, enhancing reliability and resilience of the network, and minimizing deployment cost and energy consumption. In this article, we first survey the state-of-the-art solutions and draw a taxonomy based on their objectives, and then propose a new approach to minimize the packet propagation latency between controllers and switches. In order to encourage future research, we also identify the ongoing research challenges and open issues relevant to this problem.
---
| Title: Advances in Software-Defined Technologies for Underwater Acoustic Sensor Networks: A Survey
Section 1: Introduction
Description 1: Provide an overview of the motivation, significance, and the main contributions of the survey in the context of underwater acoustic sensor networks (UASNs) and Software-Defined Networking (SDN).
Section 2: Preliminaries and Background
Description 2: Present fundamental concepts and previous studies related to Underwater Acoustic Sensor Networks (UASNs) and Software-Defined Networks (SDN).
Section 3: Progress on Software-Defined Underwater Acoustic Sensor Networks
Description 3: Detail the developments and innovations in Software-Defined Radio (SDR), Cognitive Radio (CR), and SDN as applied to UASNs.
Section 4: Current Issues and Potential Research Areas for SDN-Based UASNs
Description 4: Discuss the inherent challenges, existing issues, and opportunities for future research in the integration of SDN with UASNs.
Section 5: Conclusion
Description 5: Summarize the findings of the survey, highlighting the benefits of SDN for UASNs and the primary areas requiring further development. |
A Survey on Pollution Monitoring Using Sensor Networks in Environment Protection | 8 | ---
paper_title: Development of a new sensor for total organic carbon (TOC) determination
paper_content:
A sensor to determine TOC is described. It is based on the photoassisted degradation of the organic compounds contributing to TOC and on the determination of the resulting CO2. The sensor was successfully tested on target molecules, demonstrating that the linear correlation constant between TOC values and produced CO2 varies according to the compound considered, so that absolute determination is not possible in largely heterogeneous solutions and results can only be referred to reference compounds on the TOC scale.
---
paper_title: Highly sensitive ozone sensor
paper_content:
A highly sensitive and reliable O3 gas sensor (below 100 ppb) has been studied using an In2O3 semiconductor thin film. The effects of various metal-oxide catalysts added to the film surface have been examined. The additives greatly affect the sensitivity characteristics of the O3 gas sensor, especially the temperature dependence of O3 sensitivity. The peak of the temperature-dependence curve for the Fe-oxide-added film is located at about 370 °C, but the peak for the Cs-oxide (or Rb-oxide)-added film is shifted up to about 500 °C. The peaks for other metal-oxide additives (Ba, Mg, Ni, Ce, Cr and Zr) are located between 400 and 460 °C. The peak for Mo-oxide (or W-oxide) is also shifted up to above 465 °C. The location of the peaks of the temperature-dependence curves is likely to relate to the standard enthalpies of formation (ΔH°f) of the hydroxides (or the oxyacids) from their oxides. The above phenomena suggest the effects of the surface hydroxyl groups on the O3 sensitivity. It is found that the Fe-oxide-added In2O3 semiconductor thin film has a high sensitivity and reliability to low concentrations of O3 below 100 ppb. The sensor is able to detect 8 ppb O3 with sufficient response. It also exhibits a power-law behaviour to O3 concentration between 8 ppb and 10 ppm, R_out ∝ C^0.38. It may be capable of detecting low concentrations of O3 down to 1 ppb. Thus, the present sensor has sufficiently high sensitivity and reliability to observe low concentrations of natural O3 in the atmosphere.
---
paper_title: Smart-sensor approach for a fibre-optic-based residual chlorine monitor
paper_content:
Developments of an optical-fibre-based sensor system for monitoring residual chlorine in water are discussed. The system, based on differential absorption spectroscopy, utilizes a novel miniature monolithic diode array spectrometer operating in the ultraviolet and visible (UV-Vis) region of the spectrum in combination with an optical flow-through cell of length 430 mm and a computer-controlled deuterium lamp source. The sensor, having a detection limit of 0.2 mg l−1 of free chlorine in water, relies on the fact that the OCl− ion, in which form dissolved chlorine exists at high pH (>9), strongly absorbs light at 290 nm. This paper describes the systematic approach that is used in the modelling and design of this sensor system. It also outlines the construction of the device and gives an evaluation of the performance in the laboratory environment.
---
paper_title: Solid Electrolyte CO2 Sensor Using NASICON and Li-based Binary Carbonate Electrode
paper_content:
A solid electrolyte CO2 sensor was developed by combining an Na+ conductor and a Li-based binary carbonate auxiliary electrode represented by Li2CO3–CaCO3 (1.8 : 1 in molar ratio: eutectic mixture). It responded to CO2 quickly and reversibly, following a Nernst equation excellently in the CO2 concentration range 10^2–10^5 ppm. In addition, Li2CO3–CaCO3 electrode was found to be stable to deliquescence even when kept under a highly humid condition at 30 °C for more than 700 h.
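A minimal readout sketch for such a Nernstian CO2 sensor, assuming a single-point calibration and a two-electron electrode reaction (n = 2 is typical for carbonate-based auxiliary electrodes, but the value here is an assumption rather than a figure from the paper):

```python
import numpy as np

R, F = 8.314, 96485.0          # gas constant (J/mol.K), Faraday constant (C/mol)

def co2_from_emf(emf_mv, emf_ref_mv, c_ref_ppm, temp_c=30.0, n_electrons=2):
    """Invert a Nernstian response E = E_ref - (RT/nF) ln(C/C_ref).

    emf_ref_mv is the EMF measured at a known concentration c_ref_ppm
    (single-point calibration). All EMFs are in millivolts.
    """
    slope_v = R * (temp_c + 273.15) / (n_electrons * F)      # RT/nF in volts (~13 mV at 30 C)
    return c_ref_ppm * np.exp((emf_ref_mv - emf_mv) / (slope_v * 1000.0))

# Example: calibrated at 1000 ppm; a 29 mV drop at 30 C corresponds to roughly
# an order of magnitude more CO2.
print(round(co2_from_emf(emf_mv=271.0, emf_ref_mv=300.0, c_ref_ppm=1000.0)))
```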
---
paper_title: Free Base Porphyrins as Ionophores for Heavy Metal Sensors
paper_content:
Two functionalized porphyrins, 5,10,15,20-tetrakis(3,4-dimethoxyphenyl)porphyrin (A) and 5,10,15,20-tetrakis(3-hydroxyphenyl)porphyrin (B), obtained and characterized by us, were used as ionophores (I) for preparing PVC-based membrane sensors selective to Ag+, Pb2+ and Cu2+. The membranes were prepared using three different plasticizers, bis(2-ethylhexyl) sebacate (DOS), dioctyl phthalate (DOP) and o-nitrophenyl octyl ether (NPOE), with potassium tetrakis(4-chlorophenyl)borate (KTClPB) as additive. The functional parameters (linear concentration range, slope and selectivity) of the sensors with membrane composition I:PVC:KTClPB:Plasticizer in different ratios were investigated. The best results were obtained for membranes with the I:PVC:KTClPB:Plasticizer ratio of 10:165:5:330. The influence of pH on the sensor response was studied. The sensors were used for a period of four months and their utility was tested on synthetic and real samples.
---
paper_title: Study and Production of Organic Phosphorus Sensor Fast Pesticides Detection
paper_content:
An organic phosphorus sensor has been constructed with PVC as the membrane carrier, mixed with phosphate and an electroactive material sensitive to organic phosphorus as the active component. We have tested the sensor characteristics: the response time is short (≤20 s), the detection minimum is lower than 0.1×10^-6, and the selectivity coefficient is lower than 0.1. It is a convenient and rapid instrument for assaying organophosphorus pesticides.
---
paper_title: Indium oxide-based gas sensor for selective detection of CO
paper_content:
Among various single-metal oxides, In2O3, SnO2 and ZnO were found to exhibit rather high sensitivities to CO at 300°C and above. Especially In2O3 appeared to be most attractive in view of the selectivity to CO over H2. It was further found that modifications of In2O3 with the addition of alkali metal carbonates, especially Rb2CO3, were very effective for enhancing the sensitivity and the selectivity to CO. The sensor element using Rb-In2O3 could detect 500-4000 ppm CO in wet air at 300°C satisfactorily, while its cross-sensitivities to other gases such as H2, CH4, C3H6, and NO were comparatively small. Catalytic activity test and TPD measurements showed that, as a reason for the promoting effects, the addition of Rb2CO3 to In2O3 enhances the catalytic activity of CO oxidation. In addition, XPS data indicated that, as another reason, Rb2CO3 forms the strongest p-n junctions with In2O3 among alkali metal carbonates.
---
paper_title: A Four-Terminal Water-Quality-Monitoring Conductivity Sensor
paper_content:
In this paper, a new four-electrode sensor for water conductivity measurements is presented. In addition to the sensor itself, all signal conditioning is implemented together with signal processing of the sensor outputs to determine the water conductivity. The sensor is designed for conductivity measurements in the range from 50 mS/m up to 5 S/m through the correct placement of the four electrodes inside the tube where the water flows. The implemented prototype is capable of supplying the sensor with the necessary current at the measurement frequency, acquiring the sine signals across the voltage electrodes of the sensor and across a sampling impedance to determine the current. A temperature sensor is also included in the system to measure the water temperature and, thus, compensate the water-conductivity temperature dependence. The main advantages of the proposed conductivity sensor include a wide measurement range, an intrinsic capability to minimize errors caused by fouling and polarization effects, and an automatic compensation of conductivity measurements caused by temperature variations.
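A minimal sketch of the conductivity computation described here: the conductance seen between the voltage electrodes is scaled by a geometric cell constant and then referred to 25 °C; the linear 2 %/°C temperature coefficient is a common assumption, not a value from the paper:

```python
def conductivity(v_sense_v, i_drive_a, cell_constant_per_m, temp_c,
                 alpha_per_degc=0.02, ref_temp_c=25.0):
    """Four-electrode conductivity reading referred to 25 degC.

    v_sense_v: voltage across the inner (voltage) electrodes [V]
    i_drive_a: current through the outer (current) electrodes [A]
    cell_constant_per_m: geometric cell constant K [1/m]
    alpha_per_degc: assumed linear temperature coefficient (~2 %/degC for many waters)
    """
    sigma_t = cell_constant_per_m * i_drive_a / v_sense_v            # S/m at water temperature
    return sigma_t / (1.0 + alpha_per_degc * (temp_c - ref_temp_c))  # S/m referred to 25 degC

# Example: 1 mA drive, 12.5 mV sensed, K = 10 1/m, water at 18 degC
print(round(conductivity(0.0125, 1e-3, 10.0, 18.0), 3), "S/m")
```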
---
paper_title: Monitoring chemical plumes in an environmental sensing chamber with a wireless chemical sensor network
paper_content:
This paper describes the development of a wireless chemical sensor network (WCSN) and an environmental sensing chamber (ESC) within which this WCSN was tested. The WCSN used in this work takes advantage of recent advances in low power wireless communication platforms and novel light emitting diode (LED) based chemical sensing techniques. Plumes of acetic acid were employed for testing and were detected by LED based colorimetric acid responsive chemical sensors. Wireless sensor nodes were positioned in fixed locations within the chamber and responses to plumes of acetic acid were monitored. Preliminary test data show that sensor response time and magnitude are related to sensor position and plume profile, and by operating the sensors collectively in a WCSN it was possible to track chemical plumes in real-time as they moved through the chamber. We envisage that it will be possible to use chemical sensors arranged in a WCSN such as this to map and predict chemical plume dynamics.
---
paper_title: Undersea wireless sensor network for ocean pollution prevention
paper_content:
The ability to effectively communicate underwater has numerous applications, such as oceanographic data collection, pollution monitoring, disaster prevention, assisted navigation, tactical surveillance applications, and exploration of natural underwater sea resources. In this paper, we have developed a completely decentralized ad-hoc wireless sensor network for ocean pollution detection. We mainly emphasize the deployment of sensors, the protocol stack, the synchronization algorithm and the routing algorithm in order to maximize the lifetime of the network and also to improve its Quality of Service (QoS).
---
paper_title: Underwater sensor networks: applications, advances and challenges
paper_content:
This paper examines the main approaches and challenges in the design and implementation of underwater wireless sensor networks. We summarize key applications and the main phenomena related to acoustic propagation, and discuss how they affect the design and operation of communication systems and networking protocols at various layers. We also provide an overview of communications hardware, testbeds and simulation tools available to the research community.
---
paper_title: A Water Pollution Detective System Based on Wireless Sensor Network
paper_content:
Traditional water pollution detection methods are time-consuming and complex. Therefore, an online detection method using wireless sensor networks to monitor water quality indices in real time is proposed. WSN nodes are installed in the waters to be monitored, and GPRS is used to transmit the collected data. An Internet-based management and query technology for WSN communication nodes (backbone nodes) is used for the display platform, which improves the detection capability and automation of the system. The energy-saving ZigBee routing protocol of the wireless sensor network (WSN) is discussed, and the system hardware and software are designed. The simulation results show that the proposed system can realize reliable and high-speed communication between the remote control center and the WSN. The system is suitable for continuous on-site monitoring of polluted water because of its small size and low cost.
---
paper_title: Underwater Acoustic Sensor Networks: Research Challenges
paper_content:
Underwater sensor nodes will find applications in oceanographic data collection, pollution monitoring, offshore exploration, disaster prevention, assisted navigation and tactical surveillance applications. Moreover, unmanned or autonomous underwater vehicles (UUVs, AUVs), equipped with sensors, will enable the exploration of natural undersea resources and gathering of scientific data in collaborative monitoring missions. Underwater acoustic networking is the enabling technology for these applications. Underwater networks consist of a variable number of sensors and vehicles that are deployed to perform collaborative monitoring tasks over a given area. In this paper, several fundamental key aspects of underwater acoustic communications are investigated. Different architectures for two-dimensional and three-dimensional underwater sensor networks are discussed, and the characteristics of the underwater channel are detailed. The main challenges for the development of efficient networking solutions posed by the underwater environment are detailed and a cross-layer approach to the integration of all communication functionalities is suggested. Furthermore, open research issues are discussed and possible solution approaches are outlined.
---
paper_title: Development and Application of a Next Generation Air Sensor Network for the Hong Kong Marathon 2015 Air Quality Monitoring
paper_content:
This study presents the development and evaluation of a next generation air monitoring system with both laboratory and field tests. A multi-parameter algorithm was used to correct for the impact of environmental conditions on the electrochemical sensors for carbon monoxide (CO) and nitrogen dioxide (NO2) pollutants. The field evaluation in an urban roadside environment in comparison to designated monitors showed good agreement with measurement error within 5% of the pollutant concentrations. Multiple sets of the developed system were then deployed in the Hong Kong Marathon 2015 forming a sensor-based network along the marathon route. Real-time air pollution concentration data were wirelessly transmitted and the Air Quality Health Index (AQHI) for the Green Marathon was calculated, which were broadcast to the public on an hourly basis. The route-specific sensor network showed somewhat different pollutant patterns than routine air monitoring, indicating the immediate impact of traffic control during the marathon on the roadside air quality. The study is one of the first applications of a next generation sensor network in international sport events, and it demonstrated the usefulness of the emerging sensor-based air monitoring technology in rapid network deployment to supplement existing air monitoring.
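The paper's multi-parameter correction algorithm is not spelled out in the abstract; a common way to realize such a correction is to fit the raw electrochemical reading together with temperature and relative humidity against a co-located reference by least squares, as in the illustrative sketch below (all data values are hypothetical):

```python
import numpy as np

# Hypothetical training data: raw electrochemical NO2 readings alongside a reference
# analyser, with temperature (degC) and relative humidity (%) logged at the same time.
raw = np.array([41.0, 55.0, 38.0, 62.0, 47.0, 70.0])
temp = np.array([24.0, 30.0, 22.0, 33.0, 27.0, 35.0])
rh = np.array([60.0, 75.0, 55.0, 80.0, 65.0, 85.0])
reference = np.array([38.0, 45.0, 36.0, 49.0, 41.0, 53.0])   # ppb from the reference monitor

# Fit corrected = a0 + a1*raw + a2*temp + a3*rh by ordinary least squares.
X = np.column_stack([np.ones_like(raw), raw, temp, rh])
coeffs, *_ = np.linalg.lstsq(X, reference, rcond=None)

def corrected(raw_v, temp_v, rh_v):
    """Apply the fitted multi-parameter correction to a new reading."""
    return coeffs @ np.array([1.0, raw_v, temp_v, rh_v])

print("coefficients:", np.round(coeffs, 3))
print("corrected reading:", round(corrected(50.0, 28.0, 70.0), 1), "ppb")
```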
---
paper_title: The detection of evaporating hazardous material released from moving sources using a gas sensor network
paper_content:
Sensor information resulting from distributed locations and/or a multitude of instruments and heterogeneous sensors can increase the reliability of safety and security applications. A gas-sensing platform was developed, communicating via a wireless sensor network based on IEEE 802.15.4 and/or Ethernet. Data from this network are aggregated via a central server feeding its information via TCP/IP into subsequent data fusion software. The usually limited spatio-temporal resolution of chemical sensors can be compensated by space-time sensor data fusion. Sensor nodes have been equipped with metal oxide gas sensors in order to identify hazardous materials [1]. A number of these nodes have then been placed alongside a corridor people had to pass to enter a restricted area. The data from the chemical sensors were fused with tracking data from laser range scanners and video systems. It has been shown that it was possible to attribute a chemical contamination to one individual within a group of moving people and discriminate between various fire-accelerating fuels and solvents. This was successfully demonstrated outside the laboratory with a test corridor built in a tent during a military tech-demo in Eckernforde, Germany.
---
paper_title: Design of a Water Environment Monitoring System Based on Wireless Sensor Networks
paper_content:
A water environmental monitoring system based on a wireless sensor network is proposed. It consists of three parts: data monitoring nodes, a data base station and a remote monitoring center. This system is suitable for complex and large-scale water environment monitoring, such as for reservoirs, lakes, rivers, swamps, and shallow or deep groundwaters. This paper is devoted to the explanation and illustration of our new water environment monitoring system design. The system successfully accomplished the online automatic monitoring of the water temperature and pH value of an artificial lake. The system's measurement capacity ranges from 0 to 80 °C for water temperature, with an accuracy of ±0.5 °C, and from 0 to 14 for pH value, with an accuracy of ±0.05 pH units. Sensors applicable to different water quality scenarios should be installed at the nodes to meet the monitoring demands for a variety of water environments and to obtain different parameters. The monitoring system thus promises broad applicability prospects.
---
paper_title: Wireless sensor devices for animal tracking and control
paper_content:
This paper describes some new wireless sensor hardware developed for pastoral and environmental applications. From our early experiments with Mote hardware we were inspired to develop our devices with improved radio range, solar power capability, mechanical and electrical robustness, and with unique combinations of sensors. Here we describe the design and evolution of a small family of devices: radio/processor board, a soil moisture sensor interface, and a single board multi-sensor unit for animal tracking experiments.
---
paper_title: SmartCoast: A Wireless Sensor Network for Water Quality Monitoring
paper_content:
The implementation of the Water Framework Directive (WFD) across the EU, and the growing international emphasis on the management of water quality, is giving rise to an expanding market for novel, miniaturized, intelligent monitoring systems for freshwater catchments, transitional and coastal waters. This paper describes the "SmartCoast" multi-sensor system for water quality monitoring. This system is aimed at providing a platform capable of meeting the monitoring requirements of the Water Framework Directive. The key parameters under investigation include temperature, phosphate, dissolved oxygen, conductivity, pH, turbidity and water level. The "plug and play" capabilities enabled by the wireless sensor network (WSN) platform developed at Tyndall, which allow sensors to be integrated as required, are described, as well as the custom sensors under development within the project.
---
paper_title: A Remote Wireless Sensor Networks for Water Quality Monitoring
paper_content:
To overcome the lack of real-time capability in traditional water quality detection methods, a novel remote water quality measurement and monitoring system based on wireless sensor network (WSN) and Code Division Multiple Access (CDMA) technology is proposed. The WSN can monitor the targets and the water quality information of the waters through the cooperation of a large number of sensors. The optimization of node distribution is studied with the intention of reducing the energy consumption and ensuring effective information acquisition in the wireless sensor network. The functions of remote detection and real-time monitoring of natural water are implemented through CDMA wireless data transmission. This system has a simple architecture and is not confined by geographical position. The experimental results show that the system runs stably and is convenient to operate.
---
paper_title: Wireless sensor network for real-time air pollution monitoring
paper_content:
This paper presents an ambient real-time air quality monitoring system. The system consists of several distributed monitoring stations that communicate wirelessly with a backend server using machine-to-machine communication. Each station is equipped with gaseous and meteorological sensors as well as data logging and wireless communication capabilities. The backend server collects real time data from the stations and converts it into information delivered to users through web portals and mobile applications. The system is implemented in pilot phase and four solar-powered stations are deployed over an area of 1 km2. Data over four months has been collected and performance analysis and assessment are performed. As the historical data bank becomes richer, more sophisticated operations can be performed.
---
paper_title: Gas sensor network for air-pollution monitoring
paper_content:
This paper describes the development of a gas sensor system to be used as a sensing node to form a dense real-time environmental monitoring network. Moreover, a new auto-calibration method is proposed to achieve the maintenance-free operation of the sensor network. The network connectivity can be used not only for data collection but also for the calibration and diagnosis of the sensors since the measured pollutant concentrations can be easily compared through the network with nearby sensors and governmental monitoring stations. Different pollutant concentrations are usually monitored at different sites. However, a case study on local NO2 distribution has shown that there exists a special condition under which pollutant concentrations become low and uniform in a certain local area. The baseline of the gas sensor response can be adjusted in this special occasion using the pollutant concentration values reported from the neighboring environmental monitoring stations. The experimental result has shown that NO2 concentration can be measured with sufficient accuracy by incorporating appropriate temperature and humidity compensation into calibration curves. Moreover, a case study on auto-calibration demonstrates its effectiveness in keeping the measurement accuracy of the sensor system in long-term operation.
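A minimal sketch of the auto-calibration idea described here: when the pollutant field is judged low and uniform across the network and a nearby reference station reports a known low value, each node's baseline offset is re-zeroed against that reference (the thresholds used below are assumptions, not values from the paper):

```python
import numpy as np

def autocalibrate_baselines(node_readings, reference_ppb,
                            uniform_std_ppb=2.0, low_level_ppb=10.0):
    """Return per-node baseline offsets, updated only in 'low and uniform' episodes.

    node_readings: simultaneous NO2 readings (ppb) from the sensor nodes.
    reference_ppb: value reported by a nearby governmental monitoring station.
    """
    readings = np.asarray(node_readings, dtype=float)
    low_and_uniform = (readings.std() < uniform_std_ppb) and (reference_ppb < low_level_ppb)
    if low_and_uniform:
        return readings - reference_ppb     # new offsets: subtract these from raw data
    return np.zeros_like(readings)          # keep the previous calibration (no update now)

offsets = autocalibrate_baselines([7.2, 8.1, 6.9, 7.5], reference_ppb=5.0)
print("baseline offsets to subtract:", offsets)
```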
---
paper_title: A 3D miniaturised programmable transceiver
paper_content:
Purpose – To describe the development of a three-dimensional programmable transceiver system of modular design for use as a development tool for a variety of wireless sensor node applications. Design/methodology/approach – As a stepping-stone towards the development of wireless nodes, a sensor networks programme was put in place to develop a 25 mm cube module, which was modular in construction, programmable and miniaturised in form factor. This was to facilitate the development of wireless sensor networks for a variety of different applications. The nodes are used as a platform for sensing and actuation of various parameters, for use in scalable, reconfigurable distributed autonomous sensing networks in a number of research projects currently underway in the Tyndall Institute, as well as other institutes and in a variety of research programs in the area of wireless sensor networks. Findings – The modular construction enables the heterogeneous implementation of a variety of technologies required in the ar...
---
paper_title: Multi-hop Reliability and Network Operation Routing
paper_content:
Many routing, power-aware and data diffusion protocols based on reliability and network operation have been designed especially for WSNs. A presentation of various ideas on multi-hop reliability and network-operation-based routing protocols is given here. Unattended sensor nodes deployed randomly in the network area require reliable routing. In recent years, extensive research has addressed various issues in the coordination and management of nodes for efficient operation and efficient routing. The task of finding and maintaining routes is nontrivial, as sudden changes in node status lead to unpredictable consequences. In this chapter, a comparative study of various routing protocols is presented to pave new ways for researchers. We classify reliability and network operation routing into five types: (i) coherent-based, (ii) QoS-based, (iii) multipath-based, (iv) query-based and (v) negotiation-based.
---
paper_title: Applications of Wireless Sensor Networks in Marine Environment Monitoring: A Survey
paper_content:
With the rapid development of society and the economy, an increasing number of human activities have gradually destroyed the marine environment. Marine environment monitoring is a vital problem and has increasingly attracted a great deal of research and development attention. During the past decade, various marine environment monitoring systems have been developed. The traditional marine environment monitoring system using an oceanographic research vessel is expensive and time-consuming and has a low resolution both in time and space. Wireless Sensor Networks (WSNs) have recently been considered as potentially promising alternatives for monitoring marine environments since they have a number of advantages such as unmanned operation, easy deployment, real-time monitoring, and relatively low cost. This paper provides a comprehensive review of the state-of-the-art technologies in the field of marine environment monitoring using wireless sensor networks. It first describes application areas, a common architecture of WSN-based oceanographic monitoring systems, a general architecture of an oceanographic sensor node, sensing parameters and sensors, and wireless communication technologies. Then, it presents a detailed review of some related projects, systems, techniques, approaches and algorithms. It also discusses challenges and opportunities in the research, development, and deployment of wireless sensor networks for marine environment monitoring.
---
paper_title: Environmental Monitoring System with Wireless Mesh Network Based on Embedded System
paper_content:
An environmental monitoring system based on a wireless mesh network, built around an embedded ARM9 S3C2410 microprocessor, is presented in this paper. The flexible and self-organizing wireless mesh network is used to achieve real-time acquisition and multi-hop wireless communication of the monitored atmospheric environment parameters such as SO2, NO2, NO, temperature, humidity and air pressure. The network structure of the system is established, the hardware architecture of the system is designed, and the system working procedure is given. The entire monitoring system can be quickly deployed and rapidly withdrawn without the support of a base station, has strong self-healing capability and network robustness, and can be used for a variety of occasional atmospheric environmental monitoring tasks.
---
paper_title: Sensor-based air-pollution measurement system for environmental monitoring network
paper_content:
Metropolitan cities in the world have long been suffering from serious air pollution problems. In Tokyo, the high levels of nitrogen oxides and ozone resulting from heavy traffic emission are of the greatest concern. However, the cost and size of the chemical analyzers have limited the number of environmental monitoring stations and, therefore, have resulted in insufficient spatial resolution in the measurement of pollutant distributions. The authors have been proposing a gas distribution analyzing system (GASDAS). The use of gas sensors enables compact and inexpensive sensing systems, and will lead to a significant increase in the density of monitoring sites. As a first step in the development of GASDAS, nitrogen dioxide and ozone monitoring systems have been developed. The experimental results have shown that the low-cost sensor systems with signal compensation features for the change in weather conditions can be used for the quantitative measurement of spatial pollutant distributions.
---
paper_title: Design and Deployment of a Remote Robust Sensor Network: Experiences from an Outdoor Water Quality Monitoring Network
paper_content:
This paper investigates a wireless sensor network deployment - monitoring water quality, e.g. salinity and the level of the underground water table - in a remote tropical area of northern Australia. Our goal is to collect real time water quality measurements together with the amount of water being pumped out in the area, and investigate the impacts of current irrigation practice on the environments, in particular underground water salination. This is a challenging task featuring wide geographic area coverage (mean transmission range between nodes is more than 800 meters), highly variable radio propagations, high end-to-end packet delivery rate requirements, and hostile deployment environments. We have designed, implemented and deployed a sensor network system, which has been collecting water quality and flow measurements, e.g., water flow rate and water flow ticks for over one month. The preliminary results show that sensor networks are a promising solution to deploying a sustainable irrigation system, e.g., maximizing the amount of water pumped out from an area with minimum impact on water quality.
---
paper_title: Monitoring water quality through a telematic sensor network and a fuzzy expert system
paper_content:
In this paper we present an expert system that monitors seawater quality and pollution in northern Greece through a sensor network called Andromeda. The expert system monitors sensor data collected by local monitoring stations and reasons about the current level of water suitability for various aquatic uses, such as swimming and piscicultures. The aim of the expert system is to help the authorities in the decision-making process in the battle against pollution of the aquatic environment, which is vital for public health and the economy of northern Greece. The expert system determines, using fuzzy logic, when certain environmental parameters exceed certain pollution limits, which are specified either by the authorities or by environmental scientists, and flags up appropriate alerts.
---
paper_title: Design of a Wireless Sensor Network for Long-term, In-Situ Monitoring of an Aqueous Environment
paper_content:
An aqueous sensor network is described consisting of an array of sensor nodes that can be randomly distributed throughout a lake or drinking water reservoir. The data of an individual node is transmitted to the host node via acoustic waves using intermediate nodes as relays. Each node of the sensor network is a data router, and contains sensors capable of measuring environmental parameters of interest. Depending upon the required application, each sensor node can be equipped with different types of physical, biological or chemical sensors, allowing long-term, wide area, in situ multi-parameter monitoring. In this work the aqueous sensor network is described, with application to pH measurement using magnetoelastic sensors. Beyond ensuring drinking water safety, possible applications for the aqueous sensor network include advanced industrial process control, monitoring of aquatic biological communities, and monitoring of waste-stream effluents.
---
paper_title: Design and Development of a Wireless Sensor Network Framework for Water Quality Remote Monitoring
paper_content:
This study involves the design and development of a wireless sensor network (WSN) that integrates several sensing modules into a fully-functional system. The overall system is composed of a remote server, a controller node, and several sensing modules. The controller node is implemented using an Android mobile phone with Bluetooth and 3G capabilities. Bluetooth is used to communicate with the various sensing modules; while 3G is used to relay data to the remote server. The sensing modules utilize an Arduino Mega 2560 (with the sensor circuits) and a Bluetooth shield. Test results show that this framework is a viable design for WSN systems and can be used for remote installations that can be continuously upgraded over time.
---
paper_title: Group-based underwater wireless sensor network for marine fish farms
paper_content:
The amount of uneaten feed and fecal waste generated by the fish in marine fish farms damages the fauna and flora, and it also reduces economic benefits because of the wasted feed. In this paper, we propose an underwater group-based sensor network in order to quantify accurately the amount of pollution deposited on the seabed. First, an analytical model lets us determine the best locations to place the sensor nodes. Our group-based wireless sensor network (WSN) proposal can also determine the amount of food that is wasted while it measures the amount of deposits generated. We describe the mobility of the nodes and how the group-based protocol operates, and we show several simulations in order to examine the traffic load and to verify the correct operation of the WSN.
---
paper_title: Ultra low power Wireless Gas Sensor Network for environmental monitoring applications
paper_content:
We present an environmental monitoring system based on a wireless sensor network for air quality measurement and natural gas leakage detection. The system is based on catalytic off-the-shelf gas sensors and on a novel sampling and processing strategy which reduces energy consumption by one order of magnitude. The characteristics of the sensing device have been extracted both with standard operation and with the newly proposed approach, to compare and validate our results. Moreover, the humidity present in the environment has been taken into account and included in the model, because it severely affects the measurements. The wireless sensor network developed has also been extended using a smartphone to enable data sharing over the Internet and to enhance the portability of the overall system.
---
paper_title: Development of wireless sensor network for combustible gas monitoring
paper_content:
This paper describes the development and the characterization of a wireless gas sensor network (WGSN) for the detection of combustible or explosive gases. The WGSN consists of a sensor node, a relay node, a network coordinator, and a wireless actuator. The sensor node attains early gas detection using an on-board 2D semiconductor sensor. Because the sensor consumes a substantial amount of power, which negatively affects the node lifetime, we employ a pulse heating profile to achieve significant energy savings. The relay node receives and forwards traffic from sensor nodes towards the network coordinator and vice versa. When an emergency is detected, the network coordinator alarms an operator through the GSM/GPRS or Ethernet network, and may autonomously control the source of gas emission through the wireless actuator. Our experimental results demonstrate how to determine the optimal temperature of the sensor's sensitive layer for methane detection, show the response time of the sensor to various gases, and evaluate the power consumption of the sensor node. The demonstrated WGSN could be used for a wide range of gas monitoring applications.
---
paper_title: Dynamic gas sensor network for air pollution monitoring and its auto-calibration
paper_content:
The use of a dynamic gas sensor network is proposed for air pollution monitoring, and its auto-calibration is discussed to achieve the maintenance-free operation. Although the gas sensor outputs generally show drift over time, frequent recalibration of a number of sensors in the network is a laborious task. To solve this problem, instead of the static network proposed in the related works, we propose to realize a dynamic gas sensor network by, e.g., placing sensors on vehicles running on the streets or placing some of them at fixed points and the others on vehicles. Since each sensor in the dynamic network often meets other sensors, calibration of that specific sensor can be performed by comparing the sensor outputs in such occasions. The sensors in the whole network can thus be calibrated eventually. The simulation results are presented to show that adjusting the sensor outputs to the average values of the sensors sharing the same site improves the measurement accuracy of the sensor network.
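A minimal simulation of the averaging rule described here, in which sensors that currently share a site pull their outputs toward the pair average; note that peer averaging alone removes relative drift but not a common-mode offset, which is why fixed, externally calibrated reference points are also useful. The drift values, gain and meeting schedule below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
true_level = 40.0                          # common pollutant level at a shared site (ppb)
drift = rng.normal(0.0, 5.0, size=5)       # each mobile sensor's unknown baseline drift

def rendezvous_calibrate(drift, meetings=50, gain=0.5):
    """Each meeting, two random sensors at the same site nudge their correction
    offsets toward the pair average of their readings (illustrative auto-calibration).
    Relative drift is removed; an absolute reference would anchor the common offset."""
    offset = np.zeros_like(drift)           # estimated correction per sensor
    for _ in range(meetings):
        i, j = rng.choice(len(drift), size=2, replace=False)
        ri = true_level + drift[i] - offset[i]
        rj = true_level + drift[j] - offset[j]
        avg = 0.5 * (ri + rj)
        offset[i] += gain * (ri - avg)      # pull both readings toward their average
        offset[j] += gain * (rj - avg)
    return offset

offset = rendezvous_calibrate(drift)
print("residual error per sensor:", np.round(drift - offset, 2))
```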
---
paper_title: Detecting and estimating biochemical dispersion of a moving source in a semi-infinite medium
paper_content:
Statistical methods for detecting and estimating biochemical dispersion by a moving source using model-based integrated sensor array processing are developed. Two possible cases are considered: a homogeneous semi-infinite medium (corresponding to the environment such as air above the ground for an airborne source) or a two-layer semi-infinite medium (e.g., shallow water). The proposed methods can be extended to more complex scenarios. The goals are to detect and localize the biochemical source, determine the space-time concentration distribution of the dispersion, and predict its cloud evolution. Potential applications include security, environmental monitoring, pollution control, simulating hazardous accidents, and explosives detection. Diffusion models of the biochemical substance concentration distribution are derived under various boundary and environmental conditions. A maximum-likelihood algorithm is used to estimate the biochemical concentration distribution in space and time, and the Cramer-Rao bound is computed to analyze its performance. Two detectors (generalized-likelihood ratio test (GLRT) and a mean-difference detector) are derived and then their performances are determined in terms of the probabilities of detection and false alarm. The results can be used to design the sensor array for optimal performance. Numerical examples illustrate the results of the concentration distribution and the performances of the proposed methods.
---
paper_title: A Study on Water Pollution Source Localization in Sensor Networks
paper_content:
Water pollution source localization is of great significance to water environment protection. In this paper, a study on water pollution source localization is presented. First, source detection is discussed. Then, the coarse localization methods and the localization methods based on diffusion models are introduced and analyzed, respectively. In addition, a localization method based on the concentration contour is proposed. Finally, the detection and localization methods are compared in experiments. The results show that the detection method using hypothesis testing is more stable. The performance of the coarse localization algorithm depends on node density. Localization based on a diffusion model can yield precise localization results; however, the results are not stable. The localization method based on the contour is better than the other two localization methods when the concentration contours are axisymmetric. Thus, in water pollution source localization, detection using hypothesis testing is preferable in the source detection step. If the concentration contours are axisymmetric, the localization method based on the contour is the first option. If the nodes are dense and there is no explicit diffusion model, the coarse localization algorithm can be used; otherwise, localization based on diffusion models is a good choice.
---
paper_title: Water Pollution Detection Based on Hypothesis Testing in Sensor Networks
paper_content:
Water pollution detection is of great importance in water conservation. In this paper, the water pollution detection problems at the network level and at the node level in sensor networks are discussed. The detection problems are considered both for the case in which the distribution of the monitoring noise is normal and for the case in which it is non-normal. The pollution detection problems are first analyzed based on hypothesis testing theory; then, the specific detection algorithms are given. Finally, two implementation examples are given to illustrate how the proposed detection methods are used for water pollution detection in sensor networks and to prove the effectiveness of the proposed detection methods.
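The abstract does not give the exact test statistics; a minimal sketch of a one-sided test for a mean shift under Gaussian monitoring noise with known variance (the simplest of the cases considered) is:

```python
import numpy as np
from scipy.stats import norm

def detect_pollution(samples, clean_mean, noise_std, false_alarm=0.01):
    """One-sided test of H0: mean = clean_mean vs H1: mean > clean_mean,
    assuming i.i.d. Gaussian monitoring noise with known standard deviation."""
    samples = np.asarray(samples, dtype=float)
    z = (samples.mean() - clean_mean) / (noise_std / np.sqrt(len(samples)))
    threshold = norm.ppf(1.0 - false_alarm)        # threshold set by the false-alarm rate
    return z > threshold, z, threshold

# Example: background concentration 2.0 mg/L with sigma = 0.3 mg/L (hypothetical values)
readings = [2.4, 2.6, 2.3, 2.5, 2.7, 2.4]
alarm, z, thr = detect_pollution(readings, clean_mean=2.0, noise_std=0.3)
print(f"z = {z:.2f}, threshold = {thr:.2f}, pollution detected: {alarm}")
```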
---
paper_title: Distributed Sequential Bayesian Estimation of a Diffusive Source in Wireless Sensor Networks
paper_content:
We develop an efficient distributed sequential Bayesian estimation method for applications relating to diffusive sources-localizing a diffusive source, determining its space-time concentration distribution, and predicting its cloud envelope evolution using wireless sensor networks. Potential applications include security, environmental and industrial monitoring, as well as pollution control. We first derive the physical model of the substance dispersion by solving the diffusion equations under different environment scenarios and then integrate the physical model into the distributed processing technologies. We propose a distributed sequential Bayesian estimation method in which the state belief is transmitted in the wireless sensor networks and updated using the measurements from the new sensor node. We propose two belief representation methods: a Gaussian density approximation and a new LPG function (linear combination of polynomial Gaussian density functions) approximation. These approximations are suitable for the distributed processing in wireless sensor networks and are applicable to different sensor network situations. We implement the idea of information-driven sensor collaboration and select the next sensor node according to certain criterions, which provides an optimal subset and an optimal order of incorporating the measurements into our belief update, reduces response time, and saves energy consumption of the sensor network. Numerical examples demonstrate the effectiveness and efficiency of the proposed methods
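The paper propagates a parametric belief (Gaussian or LPG) between nodes; the sketch below keeps the sequential-update structure but uses a simple grid belief over the source position and an assumed isotropic steady-state diffusion model, so it illustrates the idea rather than the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(2)

def concentration(src, q, sensor, d=1.0, eps=1e-3):
    """Assumed steady-state isotropic diffusion model C = q / (4*pi*d*r)."""
    r = max(np.hypot(*(np.asarray(sensor) - np.asarray(src))), eps)
    return q / (4.0 * np.pi * d * r)

# Unknown: source position on a 2D grid (release rate q assumed known here).
xs = np.linspace(0.0, 100.0, 101)
ys = np.linspace(0.0, 100.0, 101)
belief = np.ones((len(xs), len(ys)))
belief /= belief.sum()                              # uniform prior over the grid

true_src, q, sigma = (62.0, 35.0), 50.0, 0.02
sensors = [(10, 10), (90, 20), (50, 80), (70, 40), (30, 60)]

for s in sensors:                                   # one sequential update per node
    z = concentration(true_src, q, s) + rng.normal(0.0, sigma)
    # Likelihood of this measurement for every candidate source cell
    model = np.array([[concentration((x, y), q, s) for y in ys] for x in xs])
    like = np.exp(-0.5 * ((z - model) / sigma) ** 2)
    belief *= like
    belief /= belief.sum()                          # this belief is handed to the next node

ix, iy = np.unravel_index(np.argmax(belief), belief.shape)
print("MAP source estimate:", (xs[ix], ys[iy]), "true:", true_src)
```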
---
paper_title: Plume Source Position Estimation Using Sensor Networks
paper_content:
This paper proposes the use of a sensor network for estimating the location of a source that releases certain substance in the environment which is then propagated over a large area. More specifically, we use nonlinear least squares optimization to estimate the source position based on the concentration readings at the sensor nodes. Such a network can be of tremendous help to emergency personnel trying to protect people from terrorist attacks or responding to an accident. Our results indicate that in high uncertainty environments it pays off to use a large number of sensors in the estimation whereas in low uncertainty scenarios a few sensors achieve satisfactory results. In addition, our results point out the importance of choosing the appropriate parameters for the least squares optimization especially the start position for our algorithm. We compare our results to the closest point approach (CPA) where the source location is assumed to be the sensor node with the highest measurement
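A minimal sketch of the nonlinear least squares localization described here, fitting source position and release strength to node readings under an assumed isotropic dispersion model; as the abstract notes, the starting point matters, so a reading-weighted centroid is used as the initial guess:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def model(params, sensors, d=1.0, eps=1e-3):
    """Assumed isotropic dispersion model: concentration falls off as 1/r."""
    x0, y0, q = params
    r = np.maximum(np.hypot(sensors[:, 0] - x0, sensors[:, 1] - y0), eps)
    return q / (4.0 * np.pi * d * r)

sensors = np.array([[10, 10], [90, 20], [50, 80], [70, 40], [30, 60]], dtype=float)
true_params = np.array([62.0, 35.0, 50.0])        # x0, y0, release rate q
readings = model(true_params, sensors) + rng.normal(0.0, 0.01, size=len(sensors))

def residuals(params):
    return model(params, sensors) - readings

# Reading-weighted centroid of the sensor positions as the starting point, rough q guess.
w = readings / readings.sum()
start = np.array([sensors[:, 0] @ w, sensors[:, 1] @ w, 10.0])
fit = least_squares(residuals, start)
print("estimated (x0, y0, q):", np.round(fit.x, 2))
```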
---
paper_title: Water pollution source localization based on the contour in sensor networks
paper_content:
To overcome the shortcomings of the coarse pollution source localization algorithm and of the localization methods based on diffusion models in water, a pollution source localization algorithm based on the concentration contour is proposed. In this method, the location of the source is obtained from the geometrical configuration of the contour. In simulations based on data from MODFLOW, the proposed localization method is tested and compared with the localization methods based on diffusion models and with the CPA (Closest Point Approach) localization method. The results show that the performance of the proposed algorithm is better than that of the other two methods when the concentration contour is axisymmetric.
---
paper_title: Gas detection and source localization: A Bayesian approach
paper_content:
This paper discusses modeling solutions that support detection of gaseous chemical substances and source localization in applications that are characterized by large numbers of noisy information sources, absence of calibrated concentration measurements and lack of detailed knowledge about the physical processes. In particular, we introduce a solution based on discrete Bayesian networks which allows tractable exploitation of large quantities of spatio-temporally distributed heterogeneous observations. The emphasis is on using coarse models avoiding assumptions about detailed aspects of the gas propagation processes. By considering properties of Bayesian networks we discuss the consequences of modeling simplifications and show with the help of simulations that the resulting inference processes are robust with respect to the modeling deviations.
---
paper_title: Biochemical Transport Modeling and Bayesian Source Estimation in Realistic Environments
paper_content:
Early detection and estimation of the spread of a biochemical contaminant are major issues in many applications, such as homeland security and pollution monitoring. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of a contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion and use the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements
---
paper_title: Estimation of Pollutant-Emitting Point-Sources using Resource-Constrained Sensor Networks
paper_content:
We present an algorithm that makes an appropriate use of a Kalman filter combined with a geometric computation with respect to the localisation of a pollutant-emitting point source. Assuming resource-constrained inexpensive nodes and no specific placement distance to the source, our approach has been shown to perform well in estimating the coordinates and intensity of a source. Using local gossip to directionally propagate estimates, our algorithm initiates a real-time exchange of information that has as an ultimate goal to lead a packet from a node that initially sensed the event to a destination that is as close to the source as possible. The coordinates and intensity measurement of the destination comprise the final estimate. In this paper, we assert that this low-overhead coarse localisation method can rival more sophisticated and computationally-hungry solutions to the source estimation problem.
---
paper_title: Plume Source Localizing in Different Distributions and Noise Types Based on WSN
paper_content:
Accidental gas leaks from unknown sites can cause serious environmental pollution. One efficient way to address the problem is to track and locate the plume source position. This paper presents a wireless sensor network equipped with gas sensors to monitor the environment online and estimate the location of a gas source based on the concentration readings at the wireless sensor nodes. The Nonlinear Least Squares (NLS) method is proposed for localization. The effect on the estimation error of different sensor node distributions, different numbers of sensors and different types of background noise is studied by simulation. The simulation results show that when the number of nodes is large, the effect of the different distributions is not distinct, while when the number of nodes is small, the estimation error under the uniform distribution is more stable than under the random distribution. A suitable deployment method is discussed based on the simulation results. We also discuss the impact of the different noise types on the estimation error.
---
paper_title: Parameter estimation of a continuous chemical plume source
paper_content:
The problem is estimation of the strength (emission rate) and the location of a chemical source from which a contaminant is released continuously into the atmosphere. The concentration of the contaminant is measured at regular intervals by a network of spatially distributed chemical sensors. The transport of the contaminant is modeled by turbulent diffusion and fluctuating plume concentration. The source parameter estimation is solved in the sequential Bayesian framework, with the posterior expectation approximated using Monte Carlo integration. In order to deal with the vague prior, importance sampling is implemented using the progressive correction technique. The paper presents numerical analysis of statistical performance of the proposed algorithms for different sensor configurations.
---
paper_title: Plume Source Localization Based on Bayes Using Wireless Sensor Network
paper_content:
A wireless sensor network (WSN) was introduced for the predictive evaluation of plume source localization. Based on the Bayes principle, an improved particle filter (IPF) algorithm adapted to gas pollution source localization using a WSN was proposed. To improve the filter performance, a weighted centroid method, backoff-timer sorting and residual resampling were adopted to determine the initial point of the predicted location, to sort the information of the locating nodes and to reduce the sampling variance, respectively. A comparison of the simulated localization performance of the IPF, the extended Kalman filter (EKF) and an improved nonlinear least squares method (I-NLS) shows that WSNs are effective for plume source localization and that the IPF outperforms both the EKF and the I-NLS.
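As an informal illustration of two ingredients named in this abstract, a weighted-centroid initial guess and residual resampling can be sketched as below for a generic particle filter; this is not the authors' IPF, and the data layout is assumed.

```python
# Illustrative helpers only (not the paper's IPF): a weighted-centroid
# initial location guess and residual resampling for a generic particle filter.
import numpy as np

def weighted_centroid(sensor_xy, readings):
    """Initial source guess: sensor positions weighted by their readings."""
    w = np.asarray(readings, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(sensor_xy, dtype=float)

def residual_resample(particles, weights, rng=None):
    """Residual resampling: keep floor(N*w) copies, draw the rest multinomially."""
    rng = np.random.default_rng() if rng is None else rng
    particles = np.asarray(particles)
    w = np.asarray(weights, dtype=float)      # assumed normalized to sum 1
    n = len(w)
    counts = np.floor(n * w).astype(int)      # deterministic part
    n_left = n - counts.sum()
    if n_left > 0:
        residual = n * w - counts
        counts += rng.multinomial(n_left, residual / residual.sum())
    idx = np.repeat(np.arange(n), counts)
    return particles[idx], np.full(n, 1.0 / n)
```

In a full filter these helpers would sit inside the predict/update loop, with the weights coming from the plume measurement likelihood.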
---
paper_title: Stochastic algorithm for estimation of the model's unknown parameters via Bayesian inference
paper_content:
We have applied a methodology combining Bayesian inference with Markov chain Monte Carlo (MCMC) algorithms to the problem of atmospheric contaminant source localization. The input data for the algorithms are the concentrations of a given substance registered on-line by a sensor network. A fast-running Gaussian plume dispersion model is adopted as the forward model in the Bayesian inference approach to achieve rapid-response event reconstructions and to benchmark the proposed algorithms. We examined the effectiveness of different versions of MCMC in estimating the probability distributions of the atmospheric release parameters by scanning a five-dimensional parameter space. As a result, we obtained probability distributions of the source coordinates and dispersion coefficients, which we compared with the values assumed when creating the synthetic sensor data. Annealing and burn-in procedures were implemented to ensure robust and efficient parameter-space scans.
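A minimal random-walk Metropolis sketch of this kind of source reconstruction is given below, assuming a stand-in inverse-distance forward model, a uniform prior and sampling over the source coordinates only; the paper itself scans a five-dimensional parameter space with a Gaussian plume forward model, annealing and burn-in.

```python
# Minimal random-walk Metropolis sketch for source reconstruction from
# sensor concentrations. The forward model, priors and tuning are stand-ins,
# not the Gaussian plume model of the paper; only (x, y) is sampled here.
import numpy as np

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, (20, 2))

def forward(src):                              # stand-in dispersion model
    r = np.linalg.norm(sensors - src, axis=1).clip(min=1e-3)
    return 100.0 / r

obs = forward(np.array([40.0, 70.0])) + rng.normal(0, 0.5, 20)

def log_post(src, sigma=0.5):
    if not ((0 <= src) & (src <= 100)).all():  # uniform prior on the domain
        return -np.inf
    return -0.5 * np.sum((obs - forward(src)) ** 2) / sigma**2

chain, cur = [], np.array([50.0, 50.0])
cur_lp = log_post(cur)
for _ in range(5000):
    prop = cur + rng.normal(0, 2.0, 2)         # random-walk proposal
    lp = log_post(prop)
    if np.log(rng.uniform()) < lp - cur_lp:    # Metropolis accept/reject
        cur, cur_lp = prop, lp
    chain.append(cur)
burned = np.array(chain)[1000:]                # discard burn-in
print("posterior mean source:", burned.mean(axis=0))
```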
---
paper_title: On the convergence of interior-reflective Newton methods for nonlinear minimization subject to bounds
paper_content:
We consider a new algorithm, an interior-reflective Newton approach, for the problem of minimizing a smooth nonlinear function of many variables, subject to upper and/or lower bounds on some of the variables. This approach generates strictly feasible iterates by using a new affine scaling transformation and following piecewise linear paths (reflection paths). The interior-reflective approach does not require identification of an "activity set". In this paper we establish that the interior-reflective Newton approach is globally and quadratically convergent. Moreover, we develop a specific example of interior-reflective Newton methods which can be used for large-scale and sparse problems.
---
paper_title: An Interior Trust Region Approach for Nonlinear Minimization Subject to Bounds
paper_content:
We propose a new trust region approach for minimizing nonlinear functions subject to simple bounds. By choosing an appropriate quadratic model and scaling matrix at each iteration, we show that it is not necessary to solve a quadratic programming subproblem, with linear inequalities, to obtain an improved step using the trust region idea. Instead, a solution to a trust region subproblem is defined by minimizing a quadratic function subject only to an ellipsoidal constraint. The iterates generated by these methods are always strictly feasible. Our proposed methods reduce to a standard trust region approach for the unconstrained problem when there are no upper or lower bounds on the variables. Global and quadratic convergence of the methods is established; preliminary numerical experiments are reported.
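These two papers underlie the family of interior/trust-region reflective methods for bound-constrained minimization; SciPy's least_squares exposes a solver from this family as method='trf'. The snippet below is only a usage sketch on a toy bound-constrained problem, not a reimplementation of the algorithms.

```python
# Usage sketch only: SciPy's 'trf' (trust-region reflective) solver belongs
# to the family of interior/reflective trust-region methods for simple bounds.
# Toy problem: the Rosenbrock function written in residual form.
import numpy as np
from scipy.optimize import least_squares

def rosenbrock_residuals(x):
    # minimising ||r(x)||^2 with these residuals recovers the Rosenbrock function
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

x0 = np.array([0.0, 0.0])                      # feasible starting point
res = least_squares(rosenbrock_residuals, x0,
                    method="trf",                       # trust-region reflective
                    bounds=([-1.5, -1.5], [0.5, 3.0]))  # simple box bounds
print(res.x, res.cost)   # iterates remain strictly feasible with respect to the bounds
```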
---
paper_title: Bayesian source detection and parameter estimation of a plume model based on sensor network measurements
paper_content:
We consider a network of sensors that measure the intensities of a complex plume composed of multiple absorption–diffusion source components. We address the problem of estimating the plume parameters, including the spatial and temporal source origins and the parameters of the diffusion model for each source, based on a sequence of sensor measurements. The approach not only leads to multiple-source detection, but also the characterization and prediction of the combined plume in space and time. The parameter estimation is formulated as a Bayesian inference problem, and the solution is obtained using a Markov chain Monte Carlo algorithm. The approach is applied to a simulation study, which shows that an accurate parameter estimation is achievable. A preliminary version of this work was presented at the 9th ONR-GTRI Workshop on Target Tracking in Sensor Fusion, 2006.
---
paper_title: Offshore Pollution Source Localization in Static Water Using Wireless Sensor Networks
paper_content:
In water environments such as water reservoirs and lakes, the diffusion of pollutants is affected by boundaries. Firstly, offshore plume source diffusion in static water is analysed and a piecewise concentration model is proposed. The localization of a pollution source near an impervious boundary is studied. It is shown that the unknown parameters include not only the source position but also the mass flow rate and the initial diffusion time. To estimate the unknown parameters, we provide three algorithms, based respectively on a general model, an approximation function and the unscented Kalman filter (UKF). The first two algorithms employ the original concentration model and the piecewise concentration model respectively, and estimate the parameters by solving a constrained nonlinear least squares problem. Using the algorithm based on the general model, source parameters can be acquired promptly. The algorithm based on the approximation function is more robust than that based on the general model, although it can only be executed when sufficient samples are available. Considering the diffusion process, the algorithm based on the UKF achieves a good trade-off between computational complexity and estimation accuracy. The simulation data are generated by MODFLOW, a standard software package for the hydrological simulation of source diffusion. The three proposed algorithms are tested on the simulation data, and the results demonstrate their advantages and disadvantages.
---
paper_title: Application of Sensor Networks in Plume Source Position Estimation
paper_content:
Based on an attenuation model of the plume, localization of the plume source using the maximum likelihood algorithm and the nonlinear least squares algorithm is studied. The effect on the estimation error of different numbers of sensors and different levels of background noise is investigated by simulation. The results show that better accuracy can be obtained with the nonlinear least squares algorithm when the background noise is low. Conversely, the maximum likelihood algorithm is more robust to heavy noise than the nonlinear least squares algorithm.
---
paper_title: Study and Production of Organic Phosphorus Sensor Fast Pesticides Detection
paper_content:
An organic phosphorus sensor has been constructed with PVC as the membrane carrier, mixed with phosphate and the electrically sensitive organophosphorus material as the active component. We have tested the sensor characteristics: the response time was short (≤20 s), the detection minimum is lower than 0.1×10⁻⁶ and the selectivity coefficient is lower than 0.1. It was a convenient and rapid determination instrument for assaying organophosphorus pesticides.
---
paper_title: Coverage Control in Sensor Networks
paper_content:
This easy-to-read text focuses on challenges in coverage control in sensor networks, examines fundamental coverage problems, and presents the most recent advances and techniques in the field. Features: provides an introduction to sensors, sensor nodes, sensor networks, and sensor coverage models; supplies an informal definition and taxonomy for network-wide coverage control; explores the node placement optimization problem for coverage configuration before network deployment; investigates the coverage lifetime maximization problem by controlling coverage characteristics in a randomly deployed network; discusses the critical sensor-density problem for coverage configuration before network deployment; examines the sensor-activity scheduling problem of controlling network coverage characteristics in a randomly deployed network; introduces the node movement strategy problem for sensor networks containing mobile nodes; presents the challenges of building intrusion barriers and finding penetration paths.
---
paper_title: On the Connectivity of Ad Hoc Networks
paper_content:
This paper presents a framework for the calculation of stochastic connectivity properties of wireless multihop networks. Assuming that n nodes, each node with transmission range r0, are distributed according to some spatial probability density function, we study the level of connectivity of the resulting network topology from three viewpoints. First, we analyze the number of neighbors of a given node. Second, we study the probability that there is a communication path between two given nodes. Third, we investigate the probability that the entire network is connected, i.e. each node can communicate with every other node via a multihop path. For the last-mentioned issue, we compute a tight approximation for the critical (r0, n) pairs that are required to keep the network connected with a probability close to one. In fact, the problem is solved for the general case of a k-connected network, accounting for the robustness against node failures. These issues are studied for uniformly distributed nodes (with and without 'border effects'), Gaussian distributed nodes, and nodes that move according to the commonly used random waypoint mobility model. The results are of practical value for the design and simulation of wireless sensor and mobile ad hoc networks.
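To make the connectivity probabilities discussed here concrete, a small Monte Carlo sketch (illustrative only: uniformly placed nodes in a unit square, no analytical border-effect correction) is given below; it estimates the probability that n nodes with range r0 form a connected disk graph.

```python
# Illustrative Monte Carlo estimate of P(network connected) for n nodes
# placed uniformly in a unit square, each with transmission range r0.
import numpy as np
from collections import deque

def connected_prob(n, r0, trials=500, rng=np.random.default_rng(2)):
    hits = 0
    for _ in range(trials):
        pts = rng.uniform(0.0, 1.0, (n, 2))
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        adj = d <= r0                          # disk-graph adjacency
        seen, queue = {0}, deque([0])
        while queue:                           # BFS from node 0
            u = queue.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        hits += len(seen) == n
    return hits / trials

# Sweep r0 to see the sharp transition to full connectivity
for r0 in (0.10, 0.15, 0.20, 0.25):
    print(r0, connected_prob(100, r0))
```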
---
paper_title: Problems and challenges in water pollution monitoring and water pollution source localization using sensor networks
paper_content:
Sensor networks have been widely used in environment monitoring. Although there is some related work on water pollution monitoring and water pollution source localization using sensor networks, many problems have not been solved so far. In this paper, the current research status of water pollution monitoring and water pollution source localization in sensor networks is described first. Then, the fundamental problems in this research are analyzed and the challenges are identified.
---
| Title: A Survey on Pollution Monitoring Using Sensor Networks in Environment Protection
Section 1: Introduction
Description 1: Introduce the significance of pollution monitoring, the three key problems in it, and the advantages of using sensor networks for environmental protection.
Section 2: Sensors and Networks in Environment Monitoring
Description 2: Discuss the specific types of sensors used for air and water pollution monitoring, the cost implications, and the limitations in practical applications.
Section 3: Sensor Network Systems for Environment Monitoring
Description 3: Describe the structure of environmental monitoring systems, including sensor nodes, data transmission, network topologies, and communication protocols.
Section 4: Pollution Detection Using the Sensor Network
Description 4: Explain the processes and methods used in pollution detection, including coarse detection methods and hypothesis-testing problems.
Section 5: Pollution Source Localization
Description 5: Present physical models of pollutant diffusion, different localization algorithms, and discuss their applicability and performance.
Section 6: Challenges in Environment Monitoring and Pollution Detection
Description 6: Identify and discuss the challenges in sensor technology and communication, and the statistical problems related to pollution detection.
Section 7: Challenges in Pollution Source Localization Using Sensor Networks
Description 7: Discuss the difficulties in dynamic environments, the lack of regularity in pollutant migration, sparse node deployment, and the complexities of diffusion models.
Section 8: Conclusions
Description 8: Summarize the importance of pollution monitoring, review the state-of-the-art in sensor networks for environment monitoring, and outline the presented challenges. |
3D Printed Sensors for Biomedical Applications: A Review | 9 | ---
paper_title: An integrated optic ethanol vapor sensor based on a silicon-on-insulator microring resonator coated with a porous ZnO film
paper_content:
Optical structures fabricated on silicon-on-insulator technology provide a convenient platform for the implementation of highly compact, versatile and low cost devices. In this work, we demonstrate the promise of this technology for integrated low power and low cost optical gas sensing. A room temperature ethanol vapor sensor is demonstrated using a ZnO nanoparticle film as a coating on an SOI micro-ring resonator of 5 µm in radius. The local coating on the ring resonators is prepared from colloidal suspensions of ZnO nanoparticles of around 3 nm diameter. The porous nature of the coating provides a large surface area for gas adsorption. The ZnO refractive index change upon vapor adsorption shifts the microring resonance through evanescent field interaction. Ethanol vapor concentrations down to 100 ppm are detected with this sensing configuration and a detection limit below 25 ppm is estimated.
---
paper_title: Wearable Flexible Sensors: A Review
paper_content:
This paper provides a review of some of the significant research work done on wearable flexible sensors (WFSs). Sensors fabricated from flexible materials are attached to a person, along with an embedded system, to monitor a parameter and transfer the relevant data to a monitoring unit for further analysis. The use of wearable sensors plays an important role in monitoring a person's physiological parameters so that any malfunction in the body can be detected and minimized. This paper categorizes the work according to the materials used for designing the system, the network protocols, and the different types of activities being monitored. The challenges faced by current sensing systems and the future market opportunities for WFSs are also briefly explained in this paper.
---
paper_title: Silicon carbide coated MEMS strain sensor for harsh environment applications
paper_content:
We present poly-SiC coating and subsequent operation of a Si-based double-ended tuning fork (DETF) resonant strain sensor fabricated in the Bosch commercial foundry process. The coating is applied post release and, hence, has minimal impact on the front end of the microfabrication process. The deposition thickness of nanometer-thin SiC coating was optimized to provide enhanced corrosion resistance to silicon MEMS without compromising the electrical and mechanical performance of the original device. The coated DETF achieves a strain resolution of 0.2 με in a 10 Hz to 20 kHz bandwidth, which is comparable to the uncoated device. The coated DETF is locally heated with an IR lamp and is shown to operate up to 190 °C in air with a temperature sensitivity of -7.6 Hz/°C. The devices are also dipped in KOH at 80 °C for 5 minutes without etching the structures, confirming the poly-SiC coating provides a sufficient chemical barrier to the underlying silicon. The results demonstrate that SiC-coated poly-Si devices are an effective bridge between poly-Si and full poly-SiC films for applications requiring a high level of corrosion resistance and moderate operating temperatures (up to 200 °C) without compromising the performance characteristics of the original poly-Si device.
---
paper_title: Piezoresistive silicon thin film sensor array for biomedical applications
paper_content:
N-type hydrogenated nanocrystalline silicon thin film piezoresistors, with gauge factor −28, were deposited on rugged and flexible polyimide foils by hot-wire chemical vapor deposition using a tantalum filament heated to 1750 °C. The piezoresistive response under cyclic quasi-static and dynamic (up to 100 Hz) load conditions is reported. Test structures, consisting of microresistors having lateral dimensions in the range from 50 to 100 μm and thickness of 120 nm, were defined in an array by reactive ion etching. Metallic pads, forming ohmic contacts to the sensing elements, were defined by a lift-off process. A readout circuit for the array, consisting of a multiplexer on each row and column of the matrix, is proposed. The digital data will be processed, interpreted and stored internally by an ultra-low-power microcontroller, also responsible for two-way wireless data communication, e.g. from inside to outside the human body.
---
paper_title: Silicon-Nanowire-Based CMOS-Compatible Field-Effect Transistor Nanosensors for Ultrasensitive Electrical Detection of Nucleic Acids
paper_content:
We herein report the design of a novel semiconducting silicon nanowire field-effect transistor (SiNW-FET) biosensor array for ultrasensitive label-free and real-time detection of nucleic acids. Highly responsive SiNWs with narrow sizes and high surface-to-volume-ratios were "top-down" fabricated with a complementary metal oxide semiconductor compatible anisotropic self-stop etching technique. When SiNWs were covalently modified with DNA probes, the nanosensor showed highly sensitive concentration-dependent conductance change in response to specific target DNA sequences. This SiNW-FET nanosensor revealed ultrahigh sensitivity for rapid and reliable detection of 1 fM of target DNA and high specificity single-nucleotide polymorphism discrimination. As a proof-of-concept for multiplex detection with this small-size and mass producible sensor array, we demonstrated simultaneous selective detection of two pathogenic strain virus DNA sequences (H1N1 and H5N1) of avian influenza.
---
paper_title: Wireless sensor network survey
paper_content:
A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges.
---
paper_title: Design and fabrication of a hybrid silicon three-axial force sensor for biomechanical applications
paper_content:
This paper presents the design and development of a silicon-based three-axial force sensor to be used in a flexible smart interface for biomechanical measurements. Normal and shear forces are detected by combining responses from four piezoresistors obtained by ion implantation in a high aspect-ratio cross-shaped flexible element equipped with a 525 μm high silicon mesa. The mesa is obtained by a subtractive dry etching process of the whole handle layer of an SOI wafer. Piezoresistor size ranges between 6 and 10 μm in width, and between 30 and 50 μm in length. The sensor configuration follows a hybrid integration approach for interconnection and for future electronic circuitry system integration. The sensor's ability to measure both normal and shear forces with high linearity (∼99%) and low hysteresis is demonstrated by means of tests performed by applying forces from 0 to 2 N. In this paper the packaging design is also presented and materials for preliminary assembly of the flexible sensor array are described.
---
paper_title: 3D printed microfluidic devices: enablers and barriers
paper_content:
3D printing has the potential to significantly change the field of microfluidics. The ability to fabricate a complete microfluidic device in a single step from a computer model has obvious attractions, but it is the ability to create truly three dimensional structures that will provide new microfluidic capability that is challenging, if not impossible to make with existing approaches. This critical review covers the current state of 3D printing for microfluidics, focusing on the four most frequently used printing approaches: inkjet (i3DP), stereolithography (SLA), two photon polymerisation (2PP) and extrusion printing (focusing on fused deposition modeling). It discusses current achievements and limitations, and opportunities for advancement to reach 3D printing's full potential.
---
paper_title: A SiC MEMS Resonant Strain Sensor for Harsh Environment Applications
paper_content:
In this paper, we present a silicon carbide MEMS resonant strain sensor for harsh environment applications. The sensor is a balanced-mass double-ended tuning fork (BDETF) fabricated from 3C-SiC deposited on a silicon substrate. The SiC was etched in a plasma etch chamber using a silicon oxide mask, achieving a selectivity of 5:1 and etch rate of 2500 Å/min. The device resonates at atmospheric pressure and operates from room temperature to above 300 °C. The device was also subjected to 10 000 g shock (out-of-plane) without damage or shift in resonant frequency. The BDETF exhibits a strain sensitivity of 66 Hz/με and achieves a strain resolution of 0.11 με in a bandwidth from 10 to 20 kHz, comparable to state-of-the-art silicon sensors.
---
paper_title: A review of 3D-printed sensors
paper_content:
Nowadays, sensors play an important role in human life. Among the many manufacturing methods used in the fabrication of sensors, three-dimensional (3D) printing has gradually shown its advantages, particularly with commercial products. Physical sensors, biosensors, and chemical sensors can all be fabricated via 3D printing technology, through either directly printing sensing components, printing molds for casting sensors, or printing platforms to be integrated with commercial sensors. In this article, the varieties of features and applications of 3D printing technologies used in the fabrication of sensors are reviewed. Several types of 3D printing technologies are compared for better understanding of the tools. With the development of new or hybrid manufacturing methods and materials used in the 3D printing technology, this technology will show its great advantages and potential in the fabrication of highly sensitive nanosensors or compound sensors with 3D intricate structures.
---
paper_title: Novel Sensing Approach for LPG Leakage Detection—Part II: Effects of Particle Size, Composition, and Coating Layer Thickness
paper_content:
Prominent research has been going on to develop a low-cost, efficient gas sensing system. This paper presents a continuation of our earlier research work on developing a new sensing approach for gas detection at ambient conditions. It presents the optimization of the sensor response time by examining characteristic changes such as variation in the concentration of the dispersion medium, the thickness of the coating, and the size of the dispersed medium. Different concentrations of the dispersion medium in the coated suspension were tested to determine the optimal composition required to achieve the highest sensitivity of the tin oxide (SnO2) layer toward the tested gas. Control over adsorption and desorption of the gas molecules in the coated layer was achieved by investigating the particle size of the dispersed medium. The response time of the coated sensor was encouraging and shows promising potential for the development of a more efficient gas sensing system.
---
paper_title: Stretchable, Skin‐Mountable, and Wearable Strain Sensors and Their Potential Applications: A Review
paper_content:
There is a growing demand for flexible and soft electronic devices. In particular, stretchable, skin-mountable, and wearable strain sensors are needed for several potential applications including personalized health-monitoring, human motion detection, human-machine interfaces, soft robotics, and so forth. This Feature Article presents recent advancements in the development of flexible and stretchable strain sensors. The article shows that highly stretchable strain sensors are successfully being developed by new mechanisms such as disconnection between overlapped nanomaterials, crack propagation in thin films, and tunneling effect, different from traditional strain sensing mechanisms. Strain sensing performances of recently reported strain sensors are comprehensively studied and discussed, showing that appropriate choice of composite structures as well as suitable interaction between functional nanomaterials and polymers are essential for the high performance strain sensing. Next, simulation results of piezoresistivity of stretchable strain sensors by computational models are reported. Finally, potential applications of flexible strain sensors are described. This survey reveals that flexible, skin-mountable, and wearable strain sensors have potential in diverse applications while several grand challenges have to be still overcome.
---
paper_title: Wireless sensor networks: a survey
paper_content:
This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
---
paper_title: Tactile Sensing From Laser-Ablated Metallized PET Films
paper_content:
This paper reports the design, fabrication, and implementation of a novel sensor patch developed from commercial polyethylene terephthalate films metallized with aluminum on one side. The aluminum was ablated with a laser to form interdigitated electrodes for the sensor prototypes. The interdigitated electrodes were patterned on the substrate with a laser cutter. Characterization of the prototypes was carried out to determine their operating frequency, followed by experimentation. The prototypes have been used as tactile sensors, showing promising results for using these patches in applications with contact pressures considerably lower than normal human contact pressure.
---
paper_title: Novel Sensing Approach for LPG Leakage Detection: Part I—Operating Mechanism and Preliminary Results
paper_content:
Gas sensing technology has been a topical research area for quite some time. This paper showcases the research done on the detection mechanism for leakage of domestic cooking gas at ambient conditions. Micro-electro-mechanical systems-based interdigital sensors were fabricated on oxidized single-crystal silicon surfaces by a maskless photolithography technique. Electrochemical impedance analysis of these sensors was performed to detect liquefied petroleum gas (LPG) with and without coated particles of tin oxide (SnO2) in the form of a thin layer. A thin film of SnO2 was spin-coated on the sensing surface of the interdigital sensor to induce selectivity to LPG, which consists of a 60/40 mixture of propane and butane. This paper reports a novel strategy for gas detection under ambient temperature and humidity conditions. The response time of the coated sensor was encouraging and shows promising potential for the development of a complete, efficient gas sensing system.
---
paper_title: Selective Ultrathin Carbon Sheath on Porous Silicon Nanowires: Materials for Extremely High Energy Density Planar Micro-Supercapacitors
paper_content:
Microsupercapacitors are attractive energy storage devices for integration with autonomous microsensor networks due to their high-power capabilities and robust cycle lifetimes. Here, we demonstrate porous silicon nanowires synthesized via a lithography compatible low-temperature wet etch and encapsulated in an ultrathin graphitic carbon sheath, as electrochemical double layer capacitor electrodes. Specific capacitance values reaching 325 mF cm−2 are achieved, representing the highest specific ECDL capacitance for planar microsupercapacitor electrode materials to date.
---
paper_title: Advances in Optical Sensing and Bioanalysis Enabled by 3D Printing.
paper_content:
The recent explosion of 3D printing applications in scientific literature has expanded the speed and effectiveness of analytical technological development. 3D printing allows for manufacture that is simply designed in software and printed in-house with nearly no constraints on geometry, and analytical methodologies can thus be prototyped and optimized with little difficulty. The versatility of methods and materials available allows the analytical chemist or biologist to fine-tune both the structural and functional portions of their apparatus. This flexibility has more recently been extended to optical-based bioanalysis, with higher resolution techniques and new printing materials opening the door for a wider variety of optical components, plasmonic surfaces, optical interfaces, and biomimetic systems that can be made in the laboratory. There have been discussions and reviews of various aspects of 3D printing technologies in analytical chemistry; this Review highlights recent literature and trends in their applications to optical sensing and bioanalysis.
---
paper_title: The Boom in 3D-Printed Sensor Technology
paper_content:
Future sensing applications will include high-performance features, such as toxin detection, real-time monitoring of physiological events, advanced diagnostics, and connected feedback. However, such multi-functional sensors require advancements in sensitivity, specificity, and throughput with the simultaneous delivery of multiple detection in a short time. Recent advances in 3D printing and electronics have brought us closer to sensors with multiplex advantages, and additive manufacturing approaches offer a new scope for sensor fabrication. To this end, we review the recent advances in 3D-printed cutting-edge sensors. These achievements demonstrate the successful application of 3D-printing technology in sensor fabrication, and the selected studies deeply explore the potential for creating sensors with higher performance. Further development of multi-process 3D printing is expected to expand future sensor utility and availability.
---
paper_title: 3D printing: an emerging technology for sensor fabrication
paper_content:
Purpose: This study aims to provide a technical insight into sensors fabricated by three-dimensional (3D) printing methods. Design/methodology/approach: Following an introduction to 3D printing, this article first discusses printed sensors for strain and allied variables, based on a diverse range of principles and materials. It then considers ultrasonic and acoustic sensor developments and provides details of a sensor based on 3D printed electronic components for monitoring food quality in real-time. Finally, brief concluding comments are drawn. Findings: Several variants of the 3D printing technique have been used in the fabrication of a range of sensors based on many different operating principles. These exhibit good performance and sometimes unique characteristics. A key benefit is the ability to overcome the limitations of conventional manufacturing techniques by creating complex shapes from a wide range of sensing materials. Originality/value: 3D printing is a new and potentially important sensor fabrication technology, and this article provides details of a range of recently reported developments.
---
paper_title: A 3D-printed device for a smartphone-based chemiluminescence biosensor for lactate in oral fluid and sweat
paper_content:
Increasingly, smartphones are used as portable personal computers, revolutionizing communication styles and entire lifestyles. Using 3D-printing technology we have made a disposable minicartridge that can be easily prototyped to turn any kind of smartphone or tablet into a portable luminometer to detect chemiluminescence derived from enzyme-coupled reactions. As proof-of-principle, lactate oxidase was coupled with horseradish peroxidase for lactate determination in oral fluid and sweat. Lactate can be quantified in less than five minutes with detection limits of 0.5 mmol L−1 (corresponding to 4.5 mg dL−1) and 0.1 mmol L−1 (corresponding to 0.9 mg dL−1) in oral fluid and sweat, respectively. A smartphone-based device shows adequate analytical performance to offer a cost-effective alternative for non-invasive lactate measurement. It could be used to evaluate lactate variation in relation to the anaerobic threshold in endurance sport and for monitoring lactic acidosis in critical-care patients.
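The detection limits above are quoted in both mmol L−1 and mg dL−1; assuming a molar mass of roughly 90 g mol−1 for lactate (lactic acid), the conversion can be checked as follows.

```python
# Quick check of the quoted unit conversion (assumes a molar mass of
# ~90 g/mol for lactate/lactic acid).
MOLAR_MASS = 90.08                     # g/mol, lactic acid

def mmol_per_L_to_mg_per_dL(c_mmol_L):
    mg_per_L = c_mmol_L * MOLAR_MASS   # 1 mmol/L equals the molar mass in mg/L
    return mg_per_L / 10.0             # 1 dL = 0.1 L

print(mmol_per_L_to_mg_per_dL(0.5))    # ~4.5 mg/dL (oral fluid limit)
print(mmol_per_L_to_mg_per_dL(0.1))    # ~0.9 mg/dL (sweat limit)
```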
---
paper_title: Paper-based enzymatic reactors for batch injection analysis of glucose on 3D printed cell coupled with amperometric detection
paper_content:
This report describes for the first time the development of paper-based enzymatic reactors (PERs) for the detection of glucose (Glu) in an artificial serum sample using a 3D printed batch injection analysis (BIA) cell coupled with electrochemical detection. The fabrication of the PERs first involved the oxidation of the paper surface with a sodium periodate solution. The oxidized paper was then perforated with a paper punch to create microdisks and activated with a solution containing N-hydroxysuccinimide (NHS) and N-(3-dimethylaminopropyl)-N′-ethylcarbodiimide hydrochloride (EDC). Glucose oxidase (GOx) enzyme was then covalently immobilized on the paper surface to promote the enzymatic assay for the detection of Glu in the serum sample. After the addition of Glu to the PER surface, placed inside a plastic syringe, the analyte penetrated through the paper surface under vertical flow, promoting the enzymatic assay. The reaction product (H2O2) was collected with an electronic micropipette in a microtube and analyzed in the 3D BIA cell coupled with screen-printed electrodes (SPEs). The overall preparation time and the cost estimated per PER were 2.5 h and $0.02, respectively. As with the PERs, the use of a 3D printer allowed the fabrication of a BIA cell within 4 h at a cost of $5. The coupling of the SPE with the 3D printed cell exhibited great analytical performance, including repeatability and reproducibility lower than 2% as well as a high sampling rate (30 injections h−1) under low injection volume (10 μL). The limit of detection (LD) and linear range achieved with the proposed approach were 0.11 mmol L−1 and 1–10 mmol L−1, respectively. Lastly, the glucose concentration level was successfully determined using the proposed method, and the values found were not statistically different from the data obtained by a reference method at a confidence level of 95%.
---
paper_title: 3D-printed biosensor with poly(dimethylsiloxane) reservoir for magnetic separation and quantum dots-based immunolabeling of metallothionein
paper_content:
Currently, metallothioneins (MTs) are extensively investigated as molecular biomarkers, and a significant positive association of the MT amount has been observed in tumorous versus healthy tissue of various types of malignant tumors, including head and neck cancer. Thus, we proposed a biosensor with fluorescence detection, comprising paramagnetic nanoparticles (a nanomaghemite core with a shell containing gold nanoparticles) for the magnetic separation of MT, based on the affinity of its sulfhydryl groups toward gold. The biosensor was crafted from PDMS combined with 3D printing technology and contained a reservoir with a volume of 50 μL linked to an input (sample/detection components and washing/immunobuffer) and an output (waste). For the immunolabeling of the immobilized MT, anti-MT antibodies conjugated to CdTe quantum dots through a synthetic heptapeptide were employed. After optimization of the fundamental conditions of the immunolabeling (120 min, 20 °C, and 1250 rpm), we performed it on the surface of the paramagnetic nanoparticles in the biosensor reservoir, with evaluation of the fluorescence of the quantum dots (λexc 400 nm and λem 555 nm). The developed biosensor was applied to the quantification of MT in cell lines derived from spinocellular carcinoma (cell line 122P-N) and fibroblasts (122P-F), and levels of the biomarker were found to be about 90 nM in tumor cells and 37 nM in fibroblasts. The proposed system is able to work with low volumes (< 100 μL), with low acquisition costs and high portability.
---
paper_title: Enzyme-Immobilized 3D-Printed Reactors for Online Monitoring of Rat Brain Extracellular Glucose and Lactate
paper_content:
In this study we constructed a highly sensitive system for in vivo monitoring of the concentrations of rat brain extracellular glucose and lactate. This system involved microdialysis (MD) sampling and fluorescence determination in conjunction with a novel sample derivatization scheme in which glucose oxidase and lactate oxidase were immobilized in ABS flow bioreactors (manufactured through low-cost three-dimensional printing (3DP)), via fused deposition modeling, for online oxidization of sampled glucose and lactate, respectively, in rat brain microdialysate. After optimizing the experimental conditions for MD sampling, the manufacture of the designed flow reactors, the enzyme immobilization procedure, and the online derivatization scheme, the available sampling frequency was 15 h−1 and the system's detection limits reached as low as 0.060 mM for glucose and 0.059 mM for lactate, based on a 20-μL conditioned microdialysate; these characteristics were sufficient to reliably determine the concentrations of extracellular glucose and lactate in the brains of living rats. To demonstrate the system's applicability, we performed (i) spike analyses of offline-collected rat brain microdialysate and (ii) in vivo dynamic monitoring of the extracellular glucose and lactate in living rat brains, in addition to triggering neuronal depolarization by perfusing a high-K+ medium from the implanted MD probe. Our analytical results and demonstrations confirm that postprinting functionalization of analytical devices manufactured using 3DP technology can be a powerful strategy for extending the diversity and adaptability of currently existing analytical configurations.
---
paper_title: 3D printed chip for electrochemical detection of influenza virus labeled with CdS quantum dots
paper_content:
In this study, we report a new three-dimensional (3D), bead-based microfluidic chip developed for rapid, sensitive and specific detection of influenza hemagglutinin. The principle of the microfluidic chip is based on the implementation of a two-step procedure that includes isolation on paramagnetic beads and electrochemical detection. As the platform for the isolation process, streptavidin-modified MPs were used, conjugated via a biotinylated glycan (through streptavidin–biotin affinity) followed by linkage of hemagglutinin to the glycan. Vaccine hemagglutinin (HA vaxi) was first labeled with CdS quantum dots (QDs). Detection of the isolation product by voltammetry was the end point of the procedure. The suggested method can also be used for the detection of other specific substances that are important for the control, diagnosis or therapy of infectious diseases.
---
paper_title: Customizable 3D Printed ‘Plug and Play’ Millifluidic Devices for Programmable Fluidics
paper_content:
Three-dimensional (3D) printing has been actively sought after in recent years as a promising novel technology to construct complex objects, whose scope spans from the nano- to over the millimeter scale. Previously we utilized a fused deposition modeling (FDM)-based 3D printer to construct complex 3D chemical fluidic systems, and here we demonstrate the construction of 3D millifluidic structures for programmable liquid handling and control of biological samples. Basic fluidic operation devices, such as water-in-oil (W/O) droplet generators for producing compartmentalized monodisperse droplets and a sensor-integrated chamber for online monitoring of cellular growth, are presented. In addition, chemical surface treatment techniques are used to construct a valve-based flow selector for liquid flow control and interconnectable modular devices for networking fluidic parts. As such, this work paves the way for complex operations, such as mixing, flow control, and monitoring of reaction/cell culture progress, to be carried out by constructing both passive and active components in 3D printed structures, whose designs can be shared online so that anyone with a 3D printer can reproduce them.
---
paper_title: 3D-printed supercapacitor-powered electrochemiluminescent protein immunoarray.
paper_content:
Herein we report a low-cost, sensitive, supercapacitor-powered electrochemiluminescent (ECL) protein immunoarray fabricated by an inexpensive 3-dimensional (3D) printer. The immunosensor detects three cancer biomarker proteins in serum within 35 min. The 3D-printed device employs hand screen printed carbon sensors with gravity flow for sample/reagent delivery and washing. Prostate cancer biomarker proteins, prostate specific antigen (PSA), prostate specific membrane antigen (PSMA) and platelet factor-4 (PF-4) in serum were captured on the antibody-coated carbon sensors, followed by delivery of detection-antibody-coated Ru(bpy)3(2+) (RuBPY)-doped silica nanoparticles in a sandwich immunoassay. ECL light was initiated from RuBPY in the silica nanoparticles by electrochemical oxidation with tripropylamine (TPrA) co-reactant using supercapacitor power, and ECL was captured with a CCD camera. The supercapacitor was rapidly photo-recharged between assays using an inexpensive solar cell. Detection limits were 300–500 fg mL−1 for the 3 proteins in undiluted calf serum. Assays of 6 prostate cancer patient serum samples gave good correlation with conventional single-protein ELISAs. This technology could provide sensitive onsite cancer diagnostic tests in resource-limited settings with the need for only moderate-level training.
---
paper_title: Application of 3D Printing Technology in Increasing the Diagnostic Performance of Enzyme-Linked Immunosorbent Assay (ELISA) for Infectious Diseases
paper_content:
Enzyme-linked immunosorbent assay (ELISA)-based diagnosis is the mainstay for measuring antibody response in infectious diseases and for supporting pathogen identification, of potential use in infectious disease outbreaks and in the clinical care of individual patients. The development of laboratory diagnostics using readily available 3D printing technologies provides a timely opportunity for further expansion of this technology into immunodetection systems. Utilizing available 3D printing platforms, a '3D well' was designed and developed to have an increased surface area compared with that of 96-well plates. The ease and rapidity of development of the 3D well prototype allowed its rapid validation through the diagnostic performance of ELISA in infectious disease without modifying current laboratory practices for ELISA. The improved sensitivity of the 3D well, up to 2.25-fold higher than that of the 96-well ELISA, offers potential for the expansion of this technology towards miniaturization and lab-on-a-chip platforms to reduce the time and the volume of reagents and samples needed for such assays in the laboratory diagnosis of infectious and other diseases, including applications in other disciplines.
---
paper_title: Smartphone-interfaced 3D printed toxicity biosensor integrating bioluminescent “sentinel cells”
paper_content:
In this work, we report the design, fabrication, and preliminary assessment of analytical performance of a smartphone-based bioluminescence (BL) whole-cell toxicity biosensor. Genetically engineered human embryonic kidney cells, constitutively expressing a powerful green-emitting luciferase mutant, were used as "sentinel cells" and integrated into 3D printed ready-to-use cartridges, also containing assay reagents. Customizable, low-cost smartphone adaptors were created using a desktop 3D printer to provide a mini-darkbox and an aligned optical interface between the smartphone camera and the cell cartridge for BL signals acquisition. The developed standalone compact device, which also includes disposable droppers for sample and reagents addition, allows the user to perform the toxicity assay within 30 min following the procedure provided by a custom-developed application running on Android (Tox-App). As proof-of-concept we analyzed real samples including ubiquitous products used in everyday life. The results showed good correlation with those obtained with laboratory instrumentation and commercially available toxicity assays, thus supporting potential applications of the proposed device for portable real-life needs.
---
paper_title: A Simple, Low-Cost Conductive Composite Material for 3D Printing of Electronic Sensors
paper_content:
3D printing technology can produce complex objects directly from computer-aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost desktop 3D printers such as the RepRap and Fab@Home has meant that a wider user base now has access to desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices, along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes.
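As a generic illustration of how a printed piezoresistive 'carbomorph' element might be read out (not a circuit described in the paper), a known series resistor and an ADC can recover the sensor resistance from a voltage-divider reading; all component values below are assumptions.

```python
# Illustrative voltage-divider readout for a printed piezoresistive element
# (values are assumptions, not from the paper): the printed track sits on the
# low side of a divider with a known reference resistor, and the node voltage
# is sampled by an ADC.
V_SUPPLY = 3.3        # V
R_REF    = 10_000.0   # ohms, known series resistor on the high side
ADC_MAX  = 4095       # 12-bit ADC full scale

def sensor_resistance(adc_counts):
    v = V_SUPPLY * adc_counts / ADC_MAX        # divider voltage across the sensor
    return R_REF * v / (V_SUPPLY - v)

# Relative resistance change between a relaxed and a flexed reading
r_rest, r_flex = sensor_resistance(1800), sensor_resistance(2300)
print((r_flex - r_rest) / r_rest)
```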
---
paper_title: Application of fusion deposition modelling for rapid investment casting – a review
paper_content:
The rapid prototyping technologies are being used in the various fields of engineering. The ability of producing very small and intricate details makes the rapid prototyping technologies suitable for making patterns for investment casting. Moreover, by using rapid prototyping technologies the patterns can be produced without the necessity of costly hard tooling. The rapid prototyping technologies are considered very useful when only limited numbers of pieces are promptly required as in making prototypes, design iterations and design optimisations. The fusion deposition modelling (FDM) is a rapid prototyping technology that can use a number of materials which can be effectively used for making patterns for investment casting. Different non-wax materials are available which can be used for making patterns and can be burnt easily during autoclaving/firing. This paper reviews the suitability of FDM for making patterns for investment casting. The direct and indirect methods of producing casting patterns along ...
---
paper_title: A portable low-cost long-term live-cell imaging platform for biomedical research and education
paper_content:
Time-resolved visualization and analysis of slow dynamic processes in living cells has revolutionized many aspects of in vitro cellular studies. However, existing technology applied to time-resolved live-cell microscopy is often immobile, costly and requires a high level of skill to use and maintain. These factors limit its utility to field research and educational purposes. The recent availability of rapid prototyping technology makes it possible to quickly and easily engineer purpose-built alternatives to conventional research infrastructure which are low-cost and user-friendly. In this paper we describe the prototype of a fully automated low-cost, portable live-cell imaging system for time-resolved label-free visualization of dynamic processes in living cells. The device is light-weight (3.6 kg), small (22×22×22 cm) and extremely low-cost (
---
paper_title: Simple, Cost-Effective 3D Printed Microfluidic Components for Disposable, Point-of-Care Colorimetric Analysis
paper_content:
The fabrication of microfluidic chips can be simplified and accelerated by three-dimensional (3D) printing. However, all of the current designs of 3D printed microchips require off-chip bulky equipment to operate, which hindered their applications in the point-of-care (POC) setting. In this work, we demonstrate a new class of movable 3D printed microfluidic chip components, including torque-actuated pump and valve, rotary valve, and pushing valve, which can be operated manually without any off-chip bulky equipment such as syringe pump and gas pressure source. By integrating these components, we developed a user-friendly 3D printed chip that can perform general colorimetric assays. Protein quantification was performed on artificial urine samples as a proof-of-concept model with a smartphone used as the imaging platform. The protein was quantified linearly and was within the physiologically relevant range for humans. We believe that the demonstrated components and designs can expand the functionalities and ...
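The abstract states that protein was quantified linearly from smartphone images; a generic colorimetric calibration of that kind (not the authors' exact pipeline, with made-up standards and intensities) can be sketched as follows.

```python
# Generic colorimetric calibration sketch (not the authors' exact pipeline):
# fit a line between mean spot intensity and known standard concentrations,
# then invert it to read an unknown sample.
import numpy as np

standards_mg_dL = np.array([0.0, 10.0, 30.0, 100.0, 300.0])   # assumed standards
mean_intensity  = np.array([0.05, 0.11, 0.24, 0.62, 1.55])    # made-up readings

slope, intercept = np.polyfit(standards_mg_dL, mean_intensity, 1)

def to_concentration(intensity):
    return (intensity - intercept) / slope

print(to_concentration(0.40))   # sample intensity -> estimated mg/dL
```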
---
paper_title: Automated 3D-printed unibody immunoarray for chemiluminescence detection of cancer biomarker proteins
paper_content:
A low cost three-dimensional (3D) printed clear plastic microfluidic device was fabricated for fast, low cost automated protein detection. The unibody device features three reagent reservoirs, an efficient 3D network for passive mixing, and an optically transparent detection chamber housing a glass capture antibody array for measuring chemiluminescence output with a CCD camera. Sandwich type assays were built onto the glass arrays using a multi-labeled detection antibody-polyHRP (HRP = horseradish peroxidase). Total assay time was ∼30 min in a complete automated assay employing a programmable syringe pump so that the protocol required minimal operator intervention. The device was used for multiplexed detection of prostate cancer biomarker proteins prostate specific antigen (PSA) and platelet factor 4 (PF-4). Detection limits of 0.5 pg mL-1 were achieved for these proteins in diluted serum with log dynamic ranges of four orders of magnitude. Good accuracy vs. ELISA was validated by analyzing human serum samples. This prototype device holds good promise for further development as a point-of-care cancer diagnostics tool.
---
paper_title: Review of Rapid Prototyping-Technology for the Future
paper_content:
The term “Rapid Prototyping” (RP) refers to a class of technologies that can automatically construct physical models from computer-aided design (CAD) data, i.e. a group of techniques used to quickly fabricate a scale model of a physical part or assembly from three-dimensional CAD data. “Three-dimensional printers” allow designers to quickly create tangible prototypes of their designs rather than two-dimensional pictures. Such models have numerous uses: besides design testing, they make excellent visual aids for communicating ideas with co-workers or customers. For example, an aerospace engineer might mount a model aerofoil in a wind tunnel to measure lift and drag forces. Across the world, engineering shares a common language and the common goal of improving the quality of life of mankind without any boundary restrictions, and addressing the challenges posed by our times requires the global engineering community to collaborate.
---
paper_title: Development of Aptamer-Based Point-of-Care Diagnostic Devices for Malaria Using Three-Dimensional Printing Rapid Prototyping
paper_content:
We present the adaptation of an aptamer-tethered enzyme capture (APTEC) assay into point-of-care device prototypes with potential for malaria diagnosis. The assay functions by capturing the malaria biomarker Plasmodium falciparum lactate dehydrogenase (PfLDH) from samples and using its intrinsic enzymatic activity to generate a visualizable blue color in response to Plasmodium-positive samples. Using three-dimensional (3D) printing rapid prototyping, a paper-based syringe test and a magnetic bead-based well test were developed. Both were found to successfully detect recombinant PfLDH at ng mL−1 concentrations using low sample volumes (20 μL) and could function with purified or spiked whole blood samples with facile sample preparation. The syringe test was found to be more analytically sensitive but required additional preparation steps, while the well test required fewer steps and hence may be better suited for future clinical testing. Additionally, the development reagents required for the color respon...
---
paper_title: Disposable electrochemical sensor prepared using 3D printing for cell and tissue diagnostics
paper_content:
In this paper we present a novel electrochemical sensor with a unique 3D architecture allowing for direct measurements on contact with, or in close proximity to, biological samples. For biomedical applications, the all-polymer architecture can be mounted on special probes that can access the region under test with no need for biopsy, as is done today with conventional 2D electrodes. The chip consists of a biocompatible substrate comprising an electrochemical cell with two gold electrodes (working and counter) and an Ag/AgCl quasi-reference electrode. The metal electrodes on the biochip front (sensing) side are fabricated by conventional electroplating and patterning methods. The chip itself is made from PDMS cast from a polymer master fabricated by 3D printing. The electrical communication between the biochip front and back sides is enabled by through-hole via-contacts filled with conductive PDMS containing 60 wt% graphite powder. The electroactivity of the working electrodes was verified by cyclic voltammetry of the ferrocyanide/ferricyanide redox reaction. Amperometric in-vitro detection of the biomarker alkaline phosphatase from three different colon cancer cell lines, directly in a cell culture plate while maintaining their biological environment, was successfully demonstrated. The sensor exhibits stable voltammetric signatures and a significant amperometric response to the enzyme in repeated tests. This approach paves the way to performing direct, non-invasive diagnostics on top of an exposed cell layer for both in-vivo and in-vitro applications.
---
paper_title: Ultrarapid detection of pathogenic bacteria using a 3D immunomagnetic flow assay.
paper_content:
We developed a novel 3D immunomagnetic flow assay for the rapid detection of pathogenic bacteria in a large-volume food sample. Antibody-functionalized magnetic nanoparticle clusters (AbMNCs) were magnetically immobilized on the surfaces of a 3D-printed cylindrical microchannel. The injection of a Salmonella-spiked sample solution into the microchannel produced instant binding between the AbMNCs and the Salmonella bacteria due to their efficient collisions. Nearly perfect capture of the AbMNCs and AbMNCs-Salmonella complexes was achieved under a high flow rate by stacking permanent magnets with spacers inside the cylindrical separator to maximize the magnetic force. The concentration of the bacteria in solution was determined using ATP luminescence measurements. The detection limit was better than 10 cfu/mL, and the overall assay time, including the binding, rinsing, and detection steps for a 10 mL sample took less than 3 min. To our knowledge, the 3D immunomagnetic flow assay described here provides the fastest high-sensitivity, high-capacity method for the detection of pathogenic bacteria.
---
paper_title: Applications of additive manufacturing in dentistry: A review
paper_content:
Additive manufacturing (AM) or 3D printing has been hailed as the third industrial revolution as it has caused a paradigm shift in the way objects have been manufactured. Conventionally, converting a raw material to a fully finished and assembled, usable product comprises several steps which can be eliminated by using this process as functional products can be created directly from the raw material at a fraction of the time originally consumed. Thus, AM has found applications in several sectors including automotive, aerospace, printed electronics, and healthcare. AM is increasingly being used in the healthcare sector, given its potential to fabricate patient-specific customized implants with required accuracy and precision. Implantable heart valves, rib cages, and bones are some of the examples where AM technologies are used. A vast variety of materials including ceramics, metals, polymers, and composites have been processed to fabricate intricate implants using 3D printing. The applications of AM in dentistry include maxillofacial implants, dentures, and other prosthetic aids. It may also be used in surgical training and planning, as anatomical models can be created at ease using AM. This article gives an overview of the AM process and reviews in detail the applications of 3D printing in dentistry.
---
paper_title: 3D-printed microfluidic automation
paper_content:
Microfluidic automation - the automated routing, dispensing, mixing, and/or separation of fluids through microchannels - generally remains a slowly-spreading technology because device fabrication requires sophisticated facilities and the technology's use demands expert operators. Integrating microfluidic automation in devices has involved specialized multi-layering and bonding approaches. Stereolithography is an assembly-free, 3D-printing technique that is emerging as an efficient alternative for rapid prototyping of biomedical devices. Here we describe fluidic valves and pumps that can be stereolithographically printed in optically-clear, biocompatible plastic and integrated within microfluidic devices at low cost. User-friendly fluid automation devices can be printed and used by non-engineers as replacement for costly robotic pipettors or tedious manual pipetting. Engineers can manipulate the designs as digital modules into new devices of expanded functionality. Printing these devices only requires the digital file and electronic access to a printer.
---
paper_title: 3D printed modules for integrated microfluidic devices
paper_content:
Due to the ever-increasing demand for integrated microfluidic devices, an advanced fabrication method is required to expand their capabilities in many research areas. We propose a three-dimensional (3D) printing technique for the production of functional modules and demonstrate their assembly into an integrated microfluidic device for non-expert users.
---
paper_title: 3D printed stratospheric probe as a platform for determination of DNA damage based on carbon quantum dots/DNA complex fluorescence increase
paper_content:
We present the use of carbon quantum dots (CQDs), passivated with polyethylene glycol, as a fluorescent recognition probe for DNA damage. The synthesized CQDs were characterized in detail using optical and electrochemical methods. Further, the fluorescent behavior of the CQDs was monitored in the presence of genomic DNA isolated from Staphylococcus aureus. Under laboratory conditions, after 30 min of exposure to UV irradiation (λ = 254 nm), the DNA/CQD complex significantly increased its fluorescence. A stratospheric probe was then designed and built using 3D printing (with acrylonitrile–butadiene–styrene as the material). The CQDs were used to evaluate DNA damage under stratospheric conditions (up to 20,000 m) by determining the fluorescence increase (λexc = 245 nm, λem = 400 nm), together with other parameters (temperature, humidity, altitude, pressure, UV intensity, and X-ray irradiation). The obtained data showed that the sensor utilizing the DNA/CQD complex was able to identify DNA damage together with the external conditions. It was shown that the proposed concept is able to operate at temperatures lower than −70 °C. The proposed protocol may be applicable as a biosensor for long-term space missions, such as the International Space Station or missions to the Moon or Mars.
---
paper_title: Processing Issues and the Characterization of Soft Electrochemical 3D Sensor
paper_content:
In this work we present the processing and characterization issues of a 3D electrochemical sensor made on a polymeric substrate using through-substrate via contacts filled with a conductive polymer. We highlight and demonstrate the main purpose of this architecture for “touch” sensing applications. The chip consists of a PDMS substrate comprising an electrochemical cell with two gold electrodes (working and counter) and an Ag/AgCl quasi-reference electrode. In the present 3D architecture the electrodes are located on one side of the substrate while the contacts are located at the back side, allowing electrical signals to flow vertically to the potentiostat and signal-processing units. The electrical communication between the biochip front and back sides is enabled by conducting vias fabricated by cast molding. The via-contacts are filled with conductive PDMS containing 60 wt% graphite powder. Electrochemical characterization of the chip is carried out by measuring the redox behavior of the ferricyanide/ferrocyanide couple in cyclic voltammetry. The 3D sensor exhibits stable voltammetric signatures in repeated tests. The enzymatic activity of alkaline phosphatase in two cell lines was detected and quantified by a chronoamperometric assay. These results show that the described system is suitable for non-invasive diagnostics in both in-vitro and in-vivo applications.
---
paper_title: Simple, Cost-Effective 3D Printed Microfluidic Components for Disposable, Point-of-Care Colorimetric Analysis
paper_content:
The fabrication of microfluidic chips can be simplified and accelerated by three-dimensional (3D) printing. However, all of the current designs of 3D printed microchips require off-chip bulky equipment to operate, which hindered their applications in the point-of-care (POC) setting. In this work, we demonstrate a new class of movable 3D printed microfluidic chip components, including torque-actuated pump and valve, rotary valve, and pushing valve, which can be operated manually without any off-chip bulky equipment such as syringe pump and gas pressure source. By integrating these components, we developed a user-friendly 3D printed chip that can perform general colorimetric assays. Protein quantification was performed on artificial urine samples as a proof-of-concept model with a smartphone used as the imaging platform. The protein was quantified linearly and was within the physiologically relevant range for humans. We believe that the demonstrated components and designs can expand the functionalities and ...
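To illustrate how a colorimetric readout of this kind can be turned into a concentration estimate, the following sketch fits a linear calibration to mean color intensities of known standards and inverts it for an unknown sample. The intensity values, concentrations, and channel choice are hypothetical assumptions for illustration only and do not reproduce the authors' smartphone processing pipeline.

```python
import numpy as np

# Hypothetical mean blue-channel intensities measured from smartphone images of
# calibration standards (arbitrary units) and their known protein concentrations
# in mg/dL; real values would come from the imaged 3D printed chip.
standard_conc = np.array([0.0, 15.0, 30.0, 100.0, 300.0])        # mg/dL
standard_intensity = np.array([212.0, 196.0, 181.0, 118.0, 42.0])

# Least-squares linear fit: intensity = slope * concentration + intercept
slope, intercept = np.polyfit(standard_conc, standard_intensity, 1)

def concentration_from_intensity(mean_intensity: float) -> float:
    """Invert the linear calibration to estimate concentration."""
    return (mean_intensity - intercept) / slope

sample_intensity = 150.0   # hypothetical reading from the sample region of the image
print(f"Estimated protein concentration: {concentration_from_intensity(sample_intensity):.1f} mg/dL")
```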
---
paper_title: Automated 3D-printed unibody immunoarray for chemiluminescence detection of cancer biomarker proteins
paper_content:
A low cost three-dimensional (3D) printed clear plastic microfluidic device was fabricated for fast, low cost automated protein detection. The unibody device features three reagent reservoirs, an efficient 3D network for passive mixing, and an optically transparent detection chamber housing a glass capture antibody array for measuring chemiluminescence output with a CCD camera. Sandwich type assays were built onto the glass arrays using a multi-labeled detection antibody-polyHRP (HRP = horseradish peroxidase). Total assay time was ∼30 min in a complete automated assay employing a programmable syringe pump so that the protocol required minimal operator intervention. The device was used for multiplexed detection of prostate cancer biomarker proteins prostate specific antigen (PSA) and platelet factor 4 (PF-4). Detection limits of 0.5 pg mL-1 were achieved for these proteins in diluted serum with log dynamic ranges of four orders of magnitude. Good accuracy vs. ELISA was validated by analyzing human serum samples. This prototype device holds good promise for further development as a point-of-care cancer diagnostics tool.
---
paper_title: A 3D Printed Fluidic Device that Enables Integrated Features
paper_content:
Fluidic devices fabricated using conventional soft lithography are well suited as prototyping methods. Three-dimensional (3D) printing, commonly used for producing design prototypes in industry, allows for one step production of devices. 3D printers build a device layer by layer based on 3D computer models. Here, a reusable, high throughput, 3D printed fluidic device was created that enables flow and incorporates a membrane above a channel in order to study drug transport and affect cells. The device contains 8 parallel channels, 3 mm wide by 1.5 mm deep, connected to a syringe pump through standard, threaded fittings. The device was also printed to allow integration with commercially available membrane inserts whose bottoms are constructed of a porous polycarbonate membrane; this insert enables molecular transport to occur from the channel to above the well. When concentrations of various antibiotics (levofloxacin and linezolid) are pumped through the channels, approximately 18-21% of the drug migrates through the porous membrane, providing evidence that this device will be useful for studies where drug effects on cells are investigated. Finally, we show that mammalian cells cultured on this membrane can be affected by reagents flowing through the channels. Specifically, saponin was used to compromise cell membranes, and a fluorescent label was used to monitor the extent, resulting in a 4-fold increase in fluorescence for saponin treated cells.
---
paper_title: 3D-printed fluidic devices enable quantitative evaluation of blood components in modified storage solutions for use in transfusion medicine
paper_content:
A fluidic device constructed with a 3D-printer can be used to investigate stored blood components with subsequent high-throughput calibration and readout with a standard plate reader.
---
paper_title: A 3D printed dry electrode for ECG/EEG recording
paper_content:
In this paper, the design, fabrication and testing of a 3D printed dry electrode are proposed. 3D printing represents a genuine breakthrough for the development and mass production of dry medical electrodes. In fact, it allows fast and low-cost production of high-precision three-dimensional shapes. This technique is reliable and efficient, and facilitates controllability over the whole process. Initially, 3D-capable design software is used to draw the electrode model. The resulting file is simply loaded into a 3D printer whose resolution is 42 μm on the x- and y-axes and 16 μm on the z-axis. The electrode is made of an insulating acrylic-based photopolymer. It consists of 180 conical needles (distance = 250 μm) on a truncated conical base. The metallization process involves two steps: sputtering of titanium as an adhesion-promotion layer and evaporation of gold to lower the impedance and prevent oxidation of the electrode. After electrode characterization, experimental results are presented and compared with planar wet Ag/AgCl electrodes for ECG–EEG recording.
---
paper_title: Challenges and limitations of patient-specific vascular phantom fabrication using 3D Polyjet printing
paper_content:
Additive manufacturing (3D printing) technology offers a great opportunity towards the development of patient-specific vascular anatomic models for medical device testing and physiological condition evaluation. However, the development process is not yet well established and there are various limitations depending on the printing materials, the technology and the printer resolution. Patient-specific neuro-vascular anatomy was acquired from computed tomography angiography and rotational digital subtraction angiography (DSA). The volumes were imported into a Vitrea 3D workstation (Vital Images Inc.) and the vascular lumen of various vessels and pathologies was segmented using a “marching cubes” algorithm. The results were exported as stereolithography (STL) files and were further processed by smoothing, trimming, and wall extrusion (to add a custom wall to the model). The models were printed using a Polyjet printer, Eden 260V (Objet-Stratasys). To verify the accuracy of the phantom geometry, the phantom was reimaged using rotational DSA, and the new data were compared with the initial patient data. The most challenging part of the phantom manufacturing was removal of support material. This aspect could be a serious hurdle in building very tortuous phantoms or small vessels. The accuracy of the printed models was very good: distance analysis showed average differences of 120 μm between the patient and phantom reconstructed volume dimensions. Most errors were due to residual support material left in the lumen of the phantom. Despite the post-printing challenges experienced during support cleaning, this technology could be of tremendous benefit to medical research, such as in device development and testing.
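The core of the segmentation-to-print workflow described above, extracting an iso-surface with a marching cubes algorithm and exporting it as an STL mesh, can be sketched with open-source tools. The snippet below is only a schematic under assumed inputs: a synthetic sphere volume stands in for the segmented vascular lumen, and scikit-image plus numpy-stl stand in for the commercial Vitrea workstation used in the study.

```python
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl package

# Synthetic "vessel" volume: a sphere of radius 20 voxels inside a 64^3 grid
# stands in for the segmented vascular lumen from CTA/DSA data.
grid = np.indices((64, 64, 64)).astype(float) - 32.0
volume = 20.0 - np.sqrt((grid ** 2).sum(axis=0))   # positive inside the sphere

# Marching cubes extracts the iso-surface at level 0 (the lumen boundary).
verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)

# Pack the triangles into an STL mesh for the downstream printing workflow.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    for j in range(3):
        surface.vectors[i][j] = verts[face[j], :]
surface.save("phantom_lumen.stl")
```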
---
paper_title: 3D printed microfluidic devices with integrated versatile and reusable electrodes
paper_content:
We report two 3D printed devices that can be used for electrochemical detection. In both cases, the electrode is housed in commercially available, polymer-based fittings so that the various electrode materials (platinum, platinum black, carbon, gold, silver) can be easily added to a threaded receiving port printed on the device; this enables a module-like approach to the experimental design, where the electrodes are removable and can be easily repolished for reuse after exposure to biological samples. The first printed device represents a microfluidic platform with a 500 × 500 μm channel and a threaded receiving port to allow integration of either polyetheretherketone (PEEK) nut-encased glassy carbon or platinum black (Pt-black) electrodes for dopamine and nitric oxide (NO) detection, respectively. The embedded 1 mm glassy carbon electrode had a limit of detection (LOD) of 500 nM for dopamine and a linear response (R2 = 0.99) for concentrations between 25–500 μM. When the glassy carbon electrode was coated with 0.05% Nafion, significant exclusion of nitrite was observed when compared to signal obtained from equimolar injections of dopamine. When using flow injection analysis with a Pt/Pt-black electrode and standards derived from NO gas, a linear correlation (R2 = 0.99) over a wide range of concentrations (7.6–190 μM) was obtained, with the LOD for NO being 1 μM. The second application showcases a 3D printed fluidic device that allows collection of the biologically relevant analyte adenosine triphosphate (ATP) while simultaneously measuring the release stimulus (reduced oxygen concentration). The hypoxic sample (4.8 ± 0.5 ppm oxygen) released 2.4 ± 0.4 times more ATP than the normoxic sample (8.4 ± 0.6 ppm oxygen). Importantly, the results reported here verify the reproducible and transferable nature of using 3D printing as a fabrication technique, as devices and electrodes were moved between labs multiple times during completion of the study.
---
paper_title: New perspectives in shake flask pH control using a 3D-printed control unit based on pH online measurement
paper_content:
Online pH control during microbial shake flask cultivation has not been established due to the lack of a practical combination of an online sensor system and an appropriate control unit. The objective of this investigation was to develop a minimum-scale dosage apparatus, the shake flask controller (“SFC”), which can control the pH during a complete cultivation and serves as a technical example for the application of small liquid-dispensing lab devices. A well-evaluated optical, chemosensor-based, noninvasive, multisensory platform prototype for online DO (dissolved oxygen), pH and biomass measurement served as the sensor. The SFC was designed as a cap-integrated, semi-autonomous control unit. Minimum-scale working parts such as commercial mp6 piezoelectric micropumps and miniature solenoid valves were combined with a selective laser sintering (SLS) printed backbone. It is intended to extend its application range to the control of enzymatic assays, polymerization processes and cell disruption methods, or to the precise dispensing of special chemicals such as inducers or inhibitors. It was shown that pH control within a range of 0.1 pH units could be maintained under different cultivation conditions. A proportional-integral-derivative (PID) controller and an adaptive proportional controller were successfully applied to calculate the balancing solution volume. SLS-based 3D printing using polyamide, combined with state-of-the-art micropumps, proved to be well suited for minimum-size, autoclavable lab devices.
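Since the balancing-solution volume is computed by a PID loop, a minimal discrete PID sketch is given below. The gains, setpoint, sampling interval, and pump-volume clamp are illustrative assumptions, not the tuning or dosing logic reported for the SFC.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, not the paper's tuning)."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical loop: read pH from the optical sensor every 30 s and convert the
# controller output into a micro-pump dose of base (positive) or acid (negative).
controller = PID(kp=50.0, ki=2.0, kd=5.0, setpoint=7.0, dt=30.0)

def dose_from_ph(current_ph):
    output = controller.update(current_ph)
    volume_ul = max(min(output, 100.0), -100.0)   # clamp to assumed pump limits, in microlitres
    return volume_ul

print(dose_from_ph(6.8))   # e.g. a positive value means dispensing base
```

A production controller would also need anti-windup handling when the dose saturates at the pump limits; that detail is omitted here for brevity.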
---
paper_title: Selective laser sintering: A qualitative and objective approach
paper_content:
This article presents an overview of selective laser sintering (SLS) work as reported in various journals and proceedings. Selective laser sintering was first applied mainly to polymers and nylon to create prototypes for audio-visual aids and fit-to-form tests. Gradually it was expanded to include metals and alloys to manufacture functional prototypes and develop rapid tooling. The growth gained momentum with the entry of commercial entities such as DTM Corporation and EOS GmbH Electro Optical Systems. Computational modeling has been used to understand the SLS process, optimize the process parameters, and enhance the efficiency of the sintering machine.
---
paper_title: A review on selective laser sintering/melting (SLS/SLM) of aluminium alloy powders: Processing, microstructure, and properties
paper_content:
Manufacturing businesses aiming to deliver their new customised products more quickly and gain more consumer markets for their products will increasingly employ selective laser sintering/melting (SLS/SLM) for fabricating high quality, low cost, repeatable, and reliable aluminium alloy powdered parts for automotive, aerospace, and aircraft applications. However, aluminium powder is known to be uniquely bedevilled with the tenacious surface oxide film which is difficult to avoid during SLS/SLM processing. The tenacity of the surface oxide film inhibits metallurgical bonding across the layers during SLS/SLM processing and this consequently leads to initiation of spheroidisation by Marangoni convection. Due to the paucity of publications on SLS/SLM processing of aluminium alloy powders, we review the current state of research and progress from different perspectives of the SLS/SLM, powder metallurgy (P/M) sintering, and pulsed electric current sintering (PECS) of ferrous, non-ferrous alloys, and composite powders as well as laser welding of aluminium alloys in order to provide a basis for follow-on-research that leads to the development of high productivity, SLS/SLM processing of aluminium alloy powders. Moreover, both P/M sintering and PECS of aluminium alloys are evaluated and related to the SLS process with a view to gaining useful insights especially in the aspects of liquid phase sintering (LPS) of aluminium alloys; application of LPS to SLS process; alloying effect in disrupting the surface oxide film of aluminium alloys; and designing of aluminium alloy suitable for the SLS/SLM process. Thereafter, SLS/SLM parameters, powder properties, and different types of lasers with their effects on the processing and densification of aluminium alloys are considered. The microstructure and metallurgical defects associated with SLS/SLM processed parts are also elucidated by highlighting the mechanism of their formation, the main influencing factors, and the remedial measures. Mechanical properties such as hardness, tensile, and fatigue strength of SLS/SLM processed parts are reported. The final part of this paper summarises findings from this review and outlines the trend for future research in the SLS/SLM processing of aluminium alloy powders.
---
paper_title: Metal Additive Manufacturing: A Review
paper_content:
This paper reviews the state-of-the-art of an important, rapidly emerging, manufacturing technology that is alternatively called additive manufacturing (AM), direct digital manufacturing, free form fabrication, or 3D printing, etc. A broad contextual overview of metallic AM is provided. AM has the potential to revolutionize the global parts manufacturing and logistics landscape. It enables distributed manufacturing and the productions of parts-on-demand while offering the potential to reduce cost, energy consumption, and carbon footprint. This paper explores the material science, processes, and business consideration associated with achieving these performance gains. It is concluded that a paradigm shift is required in order to fully exploit AM potential.
---
paper_title: 3D Printed Stretchable Capacitive Sensors for Highly Sensitive Tactile and Electrochemical Sensing
paper_content:
Developments of innovative strategies for the fabrication of stretchable sensors are of crucial importance for their applications in wearable electronic systems. In this work, we report the successful fabrication of stretchable capacitive sensors using a novel 3D printing method for highly sensitive tactile and electrochemical sensing applications. Unlike conventional lithographic or templated methods, the programmable 3D printing technique can fabricate complex device structures in a cost-effective and facile manner. We designed and fabricated stretchable capacitive sensors with interdigital and double-vortex designs and demonstrated their successful applications as tactile and electrochemical sensors. Especially, our stretchable sensors exhibited a detection limit as low as 1 × 10−6 M for NaCl aqueous solution, which could have significant potential applications when integrated in electronics skins.
---
paper_title: Temperature sensor realized by inkjet printing process on flexible substrate
paper_content:
The objective of this study is to realize a printed, flexible temperature sensor for surface temperature measurement of the human body. The sensor is a thermistor composed of silver (Ag) deposited on a polyimide substrate (Kapton HN). The meander was patterned by inkjet printing with a drop-on-demand Jetlab4 (Microfab Technologies Inc.). The resistance temperature coefficients were studied in the temperature range of 20–60 °C with voltages between 0 and 1 V. The stability over time was also measured without a protective layer on the sensor. The sensitive area of the sensor, the silver line width and the gap between the electrical conductors were 6.2 cm², 300 μm and 60 μm, respectively. The mean sensitivity of the temperature sensor was 2.23 × 10⁻³ °C⁻¹. The results show good linearity and less than 5% hysteresis over the measurement range.
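Given the reported mean sensitivity of 2.23 × 10⁻³ °C⁻¹, the printed silver meander can be read out with the usual linear resistance-temperature model R(T) = R₀[1 + α(T − T₀)]. The sketch below shows this conversion; the reference resistance R₀ is a hypothetical value, since the abstract does not report one.

```python
ALPHA = 2.23e-3      # 1/°C, mean sensitivity reported in the abstract
T_REF = 20.0         # °C, reference temperature (assumed start of the 20-60 °C range)
R_REF = 150.0        # ohms at T_REF -- hypothetical, not given in the abstract

def temperature_from_resistance(r_measured: float) -> float:
    """Linear model: R(T) = R_REF * (1 + ALPHA * (T - T_REF)), solved for T."""
    return T_REF + (r_measured / R_REF - 1.0) / ALPHA

# Example: a 3% resistance increase corresponds to roughly +13.5 °C above T_REF.
print(temperature_from_resistance(154.5))
```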
---
paper_title: 3D multifunctional integumentary membranes for spatiotemporal cardiac measurements and stimulation across the entire epicardium
paper_content:
Tools for cardiac physiological mapping are important for basic and clinical cardiac research. Here the authors use 3D printing to create a thin, elastic silicone sheath that fits tightly around the entire epicardium and contains sensors to measure a variety of physiological parameters of the beating heart ex vivo.
---
paper_title: Biocompatible inkjet printing technique for designed seeding of individual living cells.
paper_content:
Inkjet printers are capable of printing at high resolution by ejecting extremely small ink drops. Established printing technology will be able to seed living cells, at micrometer resolution, in arrangements similar to biological tissues. We describe the use of a biocompatible inkjet head and our investigation of the feasibility of microseeding with living cells. Living cells are easily damaged by heat; therefore, we used an electrostatically driven inkjet system that was able to eject ink without generating significant heat. Bovine vascular endothelial cells were prepared and suspended in culture medium, and the cell suspension was used as "ink" and ejected onto culture disks. Microscopic observation showed that the endothelial cells were situated in the ejected dots in the medium, and that the number of cells in each dot was dependent on the concentration of the cell suspension and ejection frequency chosen. After the ejected cells were incubated for a few hours, they adhered to the culture disks. Using our non-heat-generating, electrostatically driven inkjet system, living cells were safely ejected onto culture disks. This microseeding technique with living cells has the potential to advance the field of tissue engineering.
---
paper_title: Development and validation of a 3D-printed interfacial stress sensor for prosthetic applications.
paper_content:
A novel capacitance-based sensor designed for monitoring mechanical stresses at the stump-socket interface of lower-limb amputees is described. It provides a practical means of measuring pressure and shear stresses simultaneously. In particular, it comprises a flexible frame (20 mm × 20 mm) with a thickness of 4 mm. By employing rapid prototyping technology in its fabrication, it offers a low-cost and versatile solution, with the capability of adopting the bespoke shapes of lower-limb residua. The sensor was first analysed using finite element analysis (FEA) and then evaluated using lab-based electromechanical tests. The results validate that the sensor is capable of monitoring both pressure and shear at stresses up to 350 kPa and 80 kPa, respectively. A post-signal-processing model is developed to deduce pressure and shear stresses, respectively. The effective separation of pressure and shear signals can be potentially advantageous for sensor calibration in clinical applications. The sensor also demonstrates high linearity (approx. 5-8%) and high pressure (approx. 1.3 kPa) and shear (approx. 0.6 kPa) stress resolution performance. Accordingly, the sensor offers the potential for exploitation as an assistive tool to both evaluate prosthetic socket fitting in clinical settings and alert amputees in home settings to excessive loading at the stump-socket interface, effectively preventing stump tissue breakdown at an early stage.
---
paper_title: A digital light processing 3D printer for fast and high-precision fabrication of soft pneumatic actuators
paper_content:
In this paper, we built a desktop digital light processing (DLP) 3D printer and fabricated soft pneumatic actuators of multiple sizes integrally, with high speed and high precision. The printing process is based on the projection microstereolithography method. The composition of the printing system and the key parameters of the printing process are presented. Evaluation experiments demonstrate that our printer can print objects with features as small as 87.5 μm. We first printed single pneumatic network (pneu-net) actuators integrally and conducted a series of actuation experiments and finite element analyses to test the actuators’ deformation capability. We further designed a soft pneumatic gripper containing three micro pneu-net actuators with 0.4 mm wide square air channels and 0.2 mm thick chamber walls, and fabricated the gripper integrally in less than 30 min using our printer. The grasping capability of the gripper was verified through experiments as well. The results presented in this work demonstrate the performance of the DLP printer we built and show the convenience of fabricating micro soft pneumatic actuators integrally using the DLP 3D printing approach with high speed and precision.
---
paper_title: Digital Light Processing for high-brightness high-resolution applications
paper_content:
Electronic projection display technology for high-brightness applications had its origins in the Gretag Eidophor, an oil film-based projection system developed in the early 1940s. A number of solid state technologies have challenged the Eidophor, including CRT-addressed LCD light valves and active-matrix-addressed LCD panels. More recently, in response to various limitations of the LCD technologies, high-brightness systems have been developed based on Digital Light Processing technology. At the heart of the DLP projection display is the Digital Micromirror Device, a semiconductor-based array of fast, reflective digital light switches that precisely control a light source using a binary pulsewidth modulation technique. This paper describes the design, operation, performance, and advantages of DLP- based projection systems for high-brightness, high- resolution applications. It also presents the current status of high-brightness products that will soon be on the market.
---
paper_title: Fabrication of biocompatible lab-on-chip devices for biomedical applications by means of a 3D-printing process
paper_content:
A new microfluidic assembly method for semiconductor-based biosensors using 3D-printing technologies was proposed for a rapid and cost-efficient design of new sensor systems. The microfluidic unit is designed and printed by a 3D-printer in just a few hours and assembled on a light-addressable potentiometric sensor (LAPS) chip using a photo resin. The cell growth curves obtained from culturing cells within microfluidics-based LAPS systems were compared with cell growth curves in cell culture flasks to examine biocompatibility of the 3D-printed chips. Furthermore, an optimal cell culturing within microfluidics-based LAPS chips was achieved by adjusting the fetal calf serum concentrations of the cell culture medium, an important factor for the cell proliferation.
---
paper_title: Flexible three-dimensional measurement technique based on a digital light processing projector
paper_content:
A new method of 3D measurement based on a digital light processing (DLP) projector is presented. The projection model of the DLP projector is analyzed, and the relationship between the fringe patterns of the DLP and the fringe strips projected into 3D space is established; the 3D shape of the object can then be obtained from this relationship. A calibration method for this model is also presented: the model parameters can be obtained with a calibration plate, and there is no requirement for the plate to be moved precisely. This new 3D shape measurement method does not impose the restrictions of classical methods: the camera and projector can be placed in arbitrary positions, and it is unnecessary to arrange the system layout in parallel, vertical, or other stringent geometric configurations. The experiments show that this method is flexible and easy to carry out. The system calibration can be finished quickly, and the system is applicable to many shape measurement tasks.
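Fringe-projection systems of this kind typically recover a phase map from several shifted sinusoidal patterns before applying the calibrated projection model. The snippet below shows the standard four-step phase-shifting computation on synthetic images; it is a generic illustration of the fringe-analysis step, not the specific projection model or calibration proposed in the paper.

```python
import numpy as np

# Synthetic test: a known phase map and four fringe images shifted by pi/2.
height, width = 64, 64
x = np.linspace(0, 4 * np.pi, width)
true_phase = np.tile(x, (height, 1))                 # stand-in for shape-induced phase
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
images = [128 + 100 * np.cos(true_phase + s) for s in shifts]

# Standard four-step phase-shifting formula.
i0, i1, i2, i3 = images
wrapped_phase = np.arctan2(i3 - i1, i0 - i2)         # wrapped to (-pi, pi]

# The wrapped phase would next be unwrapped and converted to height through the
# calibrated camera-projector model (not reproduced here).
print(wrapped_phase.shape, wrapped_phase.min(), wrapped_phase.max())
```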
---
paper_title: Autonomous Chemical Sensing Interface for Universal Cell Phone Readout
paper_content:
Exploiting the ubiquity of cell phones for quantitative chemical sensing imposes strong demands on interfacing devices. They should be autonomous, disposable, and integrate all necessary calibration and actuation elements. In addition, a single design should couple universally to a variety of cell phones, and operate in their default configuration. Here, we demonstrate such a concept and its implementation as a quantitative glucose meter that integrates finger pumps, unidirectional valves, calibration references, and focusing optics on a disposable device configured for universal video acquisition.
---
paper_title: Research of a Novel 3D Printed Strain Gauge Type Force Sensor †
paper_content:
A 3D printed force sensor with a composite structure, developed by combining digital light processing (DLP) based printing and inkjet printing technologies, is described in this paper. The sensor offers cost-effectiveness and time-saving advantages compared to traditional sensor manufacturing processes. In this work, the substrate of the force sensor was printed by a DLP-based 3D printer using a transparent high-temperature resin, and the strain gauge of the force sensor was inkjet printed using poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT/PSS) conductive ink. Finite element (FE) simulation was conducted to find the print origin of the strain gauge. The relationship between the mechanical properties of the post-cured resin and the curing time was investigated, and the resistance of the printed strain gauges was characterized to optimize the process parameters. Afterward, the force sensor was characterized. Experimental results show that the sensitivity of the sensor is 2.92% N−1 and the linearity error is 3.1485% full scale (FS) within the range of 0–160 mN, and the effective gauge factor of the strain gauge is about 0.98. The resistance drift is less than 0.004 kΩ within an hour. These figures show that the device can perform as a force sensor and that 3D printing technology may have great potential for application in sensor fabrication.
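With the reported effective gauge factor (about 0.98) and sensitivity (2.92% N⁻¹), the fractional resistance change of the printed gauge maps approximately linearly to strain and to applied force. The sketch below shows that conversion; the unloaded baseline resistance is a hypothetical value not given in the abstract.

```python
GAUGE_FACTOR = 0.98      # effective gauge factor reported in the abstract
SENSITIVITY = 0.0292     # fractional resistance change per newton (2.92% N^-1)
R_BASELINE = 12_000.0    # ohms -- hypothetical unloaded resistance of the printed gauge

def strain_and_force(r_loaded: float):
    """Convert a measured resistance into strain and applied force."""
    delta_ratio = (r_loaded - R_BASELINE) / R_BASELINE
    strain = delta_ratio / GAUGE_FACTOR          # GF = (dR/R) / strain
    force_n = delta_ratio / SENSITIVITY          # sensitivity = (dR/R) per newton
    return strain, force_n

# Example: a 0.29% resistance increase corresponds to roughly 0.1 N,
# well inside the 0-160 mN range reported for the sensor.
print(strain_and_force(R_BASELINE * 1.0029))
```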
---
paper_title: Piezoelectric microphone via a digital light processing 3D printing process
paper_content:
In nature, sensors possess complex interlocking 3D structures and extremely localized material properties that allow processing of incredibly complex information in a small space. Acoustic sensor design is limited by fabrication processes, often MEMS based, where there is limited scope for fully 3D creations due to planar fabrication methods. Here we investigate the application of 3D printing via digital light processing (DLP) to integrate piezoelectric, conductive and structural polymer layers to create a complete electro-mechanical device. We demonstrate a working piezoelectric acoustic sensor, fabricated using a commercially available 3D printer, capable of sending electric signals that can be picked up by pre-amp circuitry. We show that the 3D printing of mechanically sensitive membranes with thicknesses down to 35 μm and tunable resonant frequencies is possible, and further show that it is possible to create a fully working electro-acoustic device by embedding 3D printed piezoelectric and conductive parts. Realizing this design opens up the possibility of generating truly 3D structured functional prints that may be used in bio-inspired design.
---
paper_title: 3D Printing of Carbon Nanotubes-Based Microsupercapacitors
paper_content:
A novel 3D printing procedure is presented for fabricating carbon nanotube (CNT)-based microsupercapacitors. The 3D printer uses a CNT ink slurry with a moderate solid content and prints a stream of continuous droplets. Appropriate control of a heated base is applied to facilitate solvent removal and adhesion between printed layers and to improve the structural integrity without delamination or distortion upon drying. The 3D-printed electrodes for microsupercapacitors are characterized by SEM, laser scanning confocal microscopy, and step profilometry. The effect of process parameters on 3D printing is also studied. The final solid-state microsupercapacitors are assembled with the printed multilayer CNT structures and poly(vinyl alcohol)-H3PO4 gel as the interdigitated microelectrodes and electrolyte. The electrochemical performance of the 3D printed microsupercapacitors is also tested, showing a significant areal capacitance and excellent cycle stability.
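Areal capacitance of such interdigitated microsupercapacitors is commonly estimated by integrating a cyclic-voltammetry branch and normalizing by scan rate, voltage window, and footprint area. The sketch below applies that standard estimate to synthetic CV data; the curve, scan rate, and electrode area are assumptions, not measurements from the paper.

```python
import numpy as np

# Synthetic single-branch cyclic-voltammetry data: potential (V) and current (A).
# Real curves would come from the potentiostat; these values are made up.
potential = np.linspace(0.0, 0.8, 200)               # V
current = 1.5e-4 + 2.0e-5 * np.sin(4.0 * potential)  # A

SCAN_RATE = 0.05        # V/s, assumed
ELECTRODE_AREA = 0.25   # cm^2 interdigitated footprint, assumed

# Standard estimate for one sweep direction:
# C = (integral of |I| dV) / (scan rate * voltage window), then divide by area.
window = potential[-1] - potential[0]
charge_like = np.sum(0.5 * (np.abs(current[:-1]) + np.abs(current[1:])) * np.diff(potential))
areal_capacitance = charge_like / (SCAN_RATE * window * ELECTRODE_AREA)   # F/cm^2
print(f"Areal capacitance ~ {1000.0 * areal_capacitance:.1f} mF/cm^2")
```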
---
paper_title: 4D Printing: Multi‐Material Shape Change
paper_content:
How might 4D printing overcome the obstacles that are hampering the rolling out and scaling up of 3D printing? Skylar Tibbits, Director of the Self-Assembly Lab at the Massachusetts Institute of Technology (MIT), describes how the Lab has partnered up with Stratasys Ltd, an industry leader in the development of 4D Printing, and is making the development of self-assembly programmable materials and adaptive technologies for industrial application in building design and construction its focus.
---
paper_title: Occupancy Detection at Smart Home Using Real-Time Dynamic Thresholding of Flexiforce Sensor
paper_content:
Monitoring the activities of the occupant is of paramount importance in ambient-assisted living environments. Sensors are widely used to collect, store, and analyze the continuous stream of data from observation of the occupant's day-to-day activities. The output of any sensor may gradually change with time, and it may reach a point where it becomes difficult to distinguish between different situations from the sensor’s output. Due to this change of the sensor’s output with time, it is hard for the system to discriminate accurately between regular and irregular states. This paper deals with the problems with the output of FlexiForce sensors that arise when the sensor operates for a long period to monitor the activities of the inhabitant. A real-time dynamic thresholding method has been designed and introduced to identify different situations clearly and avoid confusion regarding the output of the sensor used.
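One common way to realize a real-time dynamic threshold of this kind is to track a slowly drifting baseline and flag occupancy when the reading departs from it by a margin. The sketch below follows that idea with assumed smoothing and margin parameters; it is not the authors' specific thresholding scheme.

```python
class DynamicThreshold:
    """Occupancy decision with a drifting baseline (illustrative parameters)."""

    def __init__(self, alpha=0.01, margin=0.25):
        self.alpha = alpha        # how quickly the baseline follows slow drift
        self.margin = margin      # fractional rise above baseline that means "occupied"
        self.baseline = None

    def update(self, reading: float) -> bool:
        if self.baseline is None:
            self.baseline = reading
            return False
        occupied = reading > self.baseline * (1.0 + self.margin)
        if not occupied:
            # Only adapt the baseline on "empty" samples so genuine occupancy
            # does not get absorbed into the threshold.
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * reading
        return occupied


detector = DynamicThreshold()
for value in [1.00, 1.01, 1.02, 1.60, 1.58, 1.03]:   # simulated force-sensor stream
    print(value, detector.update(value))
```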
---
paper_title: 4D printing smart biomedical scaffolds with novel soybean oil epoxidized acrylate
paper_content:
Photocurable, biocompatible liquid resins are highly desired for stereolithography-based 3D bioprinting. Here we solidified a novel renewable soybean oil epoxidized acrylate, using a 3D laser printing technique, into smart and highly biocompatible scaffolds capable of supporting the growth of multipotent human bone marrow mesenchymal stem cells (hMSCs). Porous scaffolds were readily fabricated by simply adjusting the printer infill density; the surface structures of the polymerized soybean oil epoxidized acrylate were significantly affected by laser frequency and printing speed. Shape memory tests confirmed that the scaffold fixed a temporary shape at -18 °C and fully recovered its original shape at human body temperature (37 °C), indicating great potential for 4D printing applications. Cytotoxicity analysis proved that the printed scaffolds had significantly higher hMSC adhesion and proliferation than traditional polyethylene glycol diacrylate (PEGDA), and showed no statistical difference from polylactic acid (PLA) and polycaprolactone (PCL). This research is believed to significantly advance the development of biomedical scaffolds with renewable plant oils and advanced 3D fabrication techniques.
---
paper_title: Active origami by 4D printing
paper_content:
Recent advances in three dimensional (3D) printing technology that allow multiple materials to be printed within each layer enable the creation of materials and components with precisely controlled heterogeneous microstructures. In addition, active materials, such as shape memory polymers, can be printed to create an active microstructure within a solid. These active materials can subsequently be activated in a controlled manner to change the shape or configuration of the solid in response to an environmental stimulus. This has been termed 4D printing, with the 4th dimension being the time-dependent shape change after the printing. In this paper, we advance the 4D printing concept to the design and fabrication of active origami, where a flat sheet automatically folds into a complicated 3D component. Here we print active composites with shape memory polymer fibers precisely printed in an elastomeric matrix and use them as intelligent active hinges to enable origami folding patterns. We develop a theoretical model to provide guidance in selecting design parameters such as fiber dimensions, hinge length, and programming strains and temperature. Using the model, we design and fabricate several active origami components that assemble from flat polymer sheets, including a box, a pyramid, and two origami airplanes. In addition, we directly print a 3D box with active composite hinges and program it to assume a temporary flat shape that subsequently recovers to the 3D box shape on demand.
---
paper_title: Biomimetic 4D printing
paper_content:
Printed hydrogel composites with plant-inspired architectures dynamically change shape on immersion in water to yield prescribed complex morphologies.
---
paper_title: 3D–4D Printed Objects: New Bioactive Material Opportunities
paper_content:
One of the main objectives of 3D printing in health science is to mimic biological functions. To reach this goal, a fourth dimension can be added to 3D-printed objects, which are then characterized by their ability to evolve over time and under external stimuli by modifying their shape, properties or composition. Such abilities promise great opportunities for biosensing and biomimetic systems to progress towards systems that more closely mimic physiology. Herein, two 4D printing examples for biosensing and biomimetic applications using 3D-printed enzymes are presented. The first is based on the printing of the enzymatic couple glucose oxidase/peroxidase for the chemiluminescent detection of glucose, and the second uses printed alkaline phosphatase to generate in situ programmed and localized calcification of the printed object.
---
| Title: 3D Printed Sensors for Biomedical Applications: A Review
Section 1: Introduction
Description 1: Introduce the importance of sensors and 3D printing in biomedical applications and provide a background context.
Section 2: Fused Deposition Modelling
Description 2: Discuss the process of Fused Deposition Modelling (FDM) and its applications in fabricating 3D printed sensors for biomedical use, with examples from recent research.
Section 3: Stereolithography
Description 3: Describe the Stereolithography (SLA) method, its advantages, and specific biomedical sensor applications, supported by examples from the literature.
Section 4: Polyjet Process
Description 4: Explain the Polyjet printing process, the materials used, and its specific applications in 3D printed biomedical sensors, with relevant examples.
Section 5: Selective Laser Sintering
Description 5: Explore the Selective Laser Sintering (SLS) process, the range of materials it can process, and its applications in creating biomedical sensors, citing recent studies.
Section 6: 3D Inkjet Printing
Description 6: Detail the 3D Inkjet Printing process, types of inks used, and its applications in biomedical sensing, with specific examples of research work.
Section 7: Digital Light Processing
Description 7: Cover the Digital Light Processing (DLP) technique, its photocuring process, and applications in biomedical sensors, along with examples from recent research.
Section 8: Current Challenges and Future Opportunities
Description 8: Analyze the current challenges in 3D printed biomedical sensors, discuss potential solutions, and explore opportunities for future research and development.
Section 9: Conclusions
Description 9: Summarize the key points discussed in the review, highlight the major findings, and provide final thoughts on the future prospects of 3D printed sensors in biomedical applications. |
Internet of Things in Marine Environment Monitoring: A Review | 15 | ---
paper_title: Cyber-Physical Systems and Events
paper_content:
This paper discusses event-based semantics in the context of the emerging concept of Cyber Physical Systems and describes two related formal models concerning policy-based coordination and Interactive Agents.
---
paper_title: Middleware: middleware challenges and approaches for wireless sensor networks
paper_content:
Using middleware to bridge the gap between applications and low-level constructs is a novel approach to resolving many wireless sensor network issues and enhancing application development. This survey discusses representative WSN middleware, presenting the state of the research
---
paper_title: An IoT-Aware Architecture for Smart Healthcare Systems
paper_content:
Over the last few years, the convincing forward steps in the development of Internet of Things (IoT)-enabling solutions are spurring the advent of novel and fascinating applications. Among others, mainly radio frequency identification (RFID), wireless sensor network (WSN), and smart mobile technologies are leading this evolutionary trend. In the wake of this tendency, this paper proposes a novel, IoT-aware, smart architecture for automatic monitoring and tracking of patients, personnel, and biomedical devices within hospitals and nursing institutes. Staying true to the IoT vision, we propose a smart hospital system (SHS), which relies on different, yet complementary, technologies, specifically RFID, WSN, and smart mobile, interoperating with each other through a Constrained Application Protocol (CoAP)/IPv6 over low-power wireless personal area network (6LoWPAN)/representational state transfer (REST) network infrastructure. The SHS is able to collect, in real time, both environmental conditions and patients’ physiological parameters via an ultra-low-power hybrid sensing network (HSN) composed of 6LoWPAN nodes integrating UHF RFID functionalities. Sensed data are delivered to a control center where an advanced monitoring application (MA) makes them easily accessible by both local and remote users via a REST web service. The simple proof of concept implemented to validate the proposed SHS has highlighted a number of key capabilities and aspects of novelty, which represent a significant step forward compared to the actual state of the art.
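Because the monitoring application exposes the sensed data through a REST web service, a remote client can retrieve them with an ordinary HTTP request. The sketch below shows such a client call against a hypothetical endpoint; the URL, resource path, and response format are assumptions, as the abstract does not specify the API.

```python
import requests

# Hypothetical REST endpoint exposed by the monitoring application; the real
# resource paths and authentication scheme are not specified in the abstract.
BASE_URL = "http://monitoring.example.org/api"

def latest_patient_vitals(patient_id: str) -> dict:
    """Fetch the most recent sensed parameters for one patient."""
    response = requests.get(f"{BASE_URL}/patients/{patient_id}/vitals", timeout=5)
    response.raise_for_status()
    return response.json()

print(latest_patient_vitals("patient-42"))
```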
---
paper_title: Effective design of WSNs: From the lab to the real world
paper_content:
Distributed environmental monitoring with wireless sensor networks (WSNs) has been one of the most challenging research activities faced by the embedded-systems community in the last decade. Here, the need for pervasive, reliable and accurate monitoring systems has pushed research towards the realization of credible deployments able to survive in harsh environments for a long time. Designing an effective WSN requires a good deal of engineering work, not to mention the research contribution needed to provide a credible deployment. In fact, to address our application, we look for a monitoring framework that is scalable, adaptive with respect to topological changes in the network, power-aware in its middleware components and endowed with energy harvesting mechanisms to grant a long lifetime to the network. The paper addresses all the main aspects related to the design of a WSN, ranging from the (possible) need for an ad-hoc embedded system, to sensing, local and remote transmission, data storage and visualization; particular attention is devoted to energy harvesting and management aspects at the unit and network level. Two applications, namely monitoring the marine environment and forecasting the collapse of rock faces in mountaineering areas, are the experimental leitmotiv of the presentation.
---
paper_title: An IoT architecture for things from industrial environment
paper_content:
Currently, there are significant changes in industrial process control, intelligent building control and automation technologies, driven by pressure to reduce operating costs and to integrate important advances in telecommunications and software. Software has become an essential factor in production and enterprise-wide systems. Internet connectivity has fundamentally changed the arrangements for monitoring and control, and the use of open/public standards and personal computing systems (PCs, tablets, smartphones) brings significant benefits to their users and producers. This has led to the definition of Industry 4.0, which brings the concept of the Internet of Things into industry. In this article, we present an Internet of Things architecture based on OPC.NET specifications which can be used in both industrial environments and smart buildings.
---
paper_title: A Survey on Smart Grid Communication Infrastructures: Motivations, Requirements and Challenges
paper_content:
A communication infrastructure is essential to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial to both the construction and operation of a smart grid. In this paper, we present the background and motivation for communication infrastructures in smart grid systems. We also summarize the major requirements that smart grid communications must meet. From the experience of several industrial trials of smart grids with communication infrastructures, we expect that traditional carbon-fuel-based power plants can cooperate with emerging distributed renewable energy sources, such as wind and solar, to reduce carbon fuel consumption and the consequent greenhouse gas emissions such as carbon dioxide. Consumers can minimize their energy expenses by adjusting the operation of their intelligent home appliances to avoid peak hours and utilize renewable energy instead. We further explore the challenges for a communication infrastructure as part of a complex smart grid system. Since a smart grid system might have millions of consumers and devices, the demands on its reliability and security are extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality and eliminate electricity blackouts. Security is a challenging issue since ongoing smart grid systems face increasing vulnerabilities as more and more automation, remote monitoring/control and supervision entities are interconnected.
---
paper_title: Applications of Wireless Sensor Networks in Marine Environment Monitoring: A Survey
paper_content:
With the rapid development of society and the economy, an increasing number of human activities have gradually destroyed the marine environment. Marine environment monitoring is a vital problem and has increasingly attracted a great deal of research and development attention. During the past decade, various marine environment monitoring systems have been developed. The traditional marine environment monitoring system using an oceanographic research vessel is expensive and time-consuming and has a low resolution both in time and space. Wireless Sensor Networks (WSNs) have recently been considered as potentially promising alternatives for monitoring marine environments since they have a number of advantages such as unmanned operation, easy deployment, real-time monitoring, and relatively low cost. This paper provides a comprehensive review of the state-of-the-art technologies in the field of marine environment monitoring using wireless sensor networks. It first describes application areas, a common architecture of WSN-based oceanographic monitoring systems, a general architecture of an oceanographic sensor node, sensing parameters and sensors, and wireless communication technologies. Then, it presents a detailed review of some related projects, systems, techniques, approaches and algorithms. It also discusses challenges and opportunities in the research, development, and deployment of wireless sensor networks for marine environment monitoring.
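In the common architecture described above, each sensor node samples oceanographic parameters, packages them with a node identifier and timestamp, and forwards the packet over its radio link to a gateway or base station. The sketch below illustrates only the sampling-and-packaging step with simulated values; the node identifier, parameter set, and transport are assumptions.

```python
import json
import time
import random

NODE_ID = "buoy-07"   # hypothetical identifier for one sensor buoy

def read_sensors():
    """Stand-in for real oceanographic sensor drivers (values are simulated)."""
    return {
        "water_temperature_c": round(random.uniform(8.0, 22.0), 2),
        "salinity_psu": round(random.uniform(30.0, 37.0), 2),
        "ph": round(random.uniform(7.8, 8.3), 2),
        "turbidity_ntu": round(random.uniform(0.0, 10.0), 2),
    }

def build_packet():
    """Build the JSON payload a node would hand to its radio/gateway link."""
    return json.dumps({
        "node": NODE_ID,
        "timestamp": int(time.time()),
        "readings": read_sensors(),
    })

# In a real deployment this payload would be sent over ZigBee/LoRa/GPRS or a
# similar link to the base station; here we just print one sample packet.
print(build_packet())
```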
---
paper_title: Energy Efficient Coverage Problems in Wireless Ad Hoc Sensor Networks
paper_content:
Coverage is a typical problem in wireless sensor networks concerning how well the issued sensing tasks are fulfilled. In general, sensing coverage represents how well an area is monitored by sensors. The quality of a sensor network is reflected by the levels of coverage and connectivity that it offers. The coverage problem has been studied extensively, especially when combined with connectivity and energy efficiency. Constructing a connected, fully covered, and energy-efficient sensor network is valuable for real-world applications due to the limited resources of sensor nodes. In this paper, we survey recent contributions addressing energy-efficient coverage problems in the context of static WASNs, networks in which sensor nodes do not move once they are deployed, and present the algorithms, assumptions, and results in some detail. A comprehensive comparison among these approaches is given from the perspective of design objectives, assumptions, algorithm attributes and related results.
---
paper_title: Wireless Sensor Networks for Oceanographic Monitoring: A Systematic Review
paper_content:
Monitoring of the marine environment has come to be a field of scientific interest in the last ten years. The instruments used in this work have ranged from small-scale sensor networks to complex observation systems. Among small-scale networks, Wireless Sensor Networks (WSNs) are a highly attractive solution in that they are easy to deploy, operate and dismantle and are relatively inexpensive. The aim of this paper is to identify, appraise, select and synthesize all high quality research evidence relevant to the use of WSNs in oceanographic monitoring. The literature is systematically reviewed to offer an overview of the present state of this field of study and identify the principal resources that have been used to implement networks of this kind. Finally, this article details the challenges and difficulties that have to be overcome if these networks are to be successfully deployed.
---
paper_title: IoT and Cloud Computing in Automation of Assembly Modeling Systems
paper_content:
After the technologies of integrated circuits, personal computers, and the Internet, the Internet of Things (IoT) is the latest information technology (IT) that is radically changing business paradigms. However, IoT's influence in the manufacturing sector has not yet been fully explored. On the other hand, existing computer-aided software tools are experiencing a bottleneck in dealing with complexity, dynamics, and uncertainties in their applications in modern enterprises. It is argued that the adoption of IoT and cloud computing in enterprise systems (ESs) would overcome this bottleneck. In this paper, the challenges in generating assembly plans for complex products are discussed. IoT and cloud computing are proposed to help a conventional assembly modeling system evolve into an advanced system that is capable of dealing with complexity and changes automatically. To achieve this goal, an assembly modeling system is automated, and the proposed system includes the following innovations: 1) a modularized architecture to make the system robust, reliable, flexible, and expandable; 2) integrated object-oriented templates to facilitate interfaces and reuse of system components; and 3) automated algorithms to retrieve relational assembly matrices for assembly planning. Assembly modeling for aircraft engines is used as an example to illustrate the system's effectiveness.
---
paper_title: A service-oriented architecture for the transportation Cyber-Physical Systems
paper_content:
The emerging concept of CPS (Cyber-Physical Systems) is introduced into transportation systems in the present paper, satisfying the need for tight fusion of transportation physical systems and transportation cyber systems. Considering the characteristics of a transportation system, a service-oriented architecture for Transportation Cyber-Physical Systems (T-CPS) is put forward, comprising perception, communication, computation, control and service layers, and the function of each layer is carefully designed. Finally, some key techniques and applications of T-CPS are discussed, which can lay the foundations for the next generation of ITS (Intelligent Transportation Systems).
---
paper_title: Survey in Smart Grid and Smart Home Security: Issues, Challenges and Countermeasures
paper_content:
The electricity industry is now on the verge of a new era: one that promises, through the evolution of the existing electrical grids to smart grids, more efficient and effective power management, better reliability, reduced production costs, and more environmentally friendly energy generation. Numerous initiatives across the globe, led by both industry and academia, reflect the mounting interest around not only the enormous benefits but also the great risks introduced by this evolution. This paper focuses on issues related to the security of the smart grid and the smart home, which we present as an integral part of the smart grid. Based on several scenarios, we aim to present some of the most representative threats to the smart home/smart grid environment. The threats detected are categorized according to specific security goals set for the smart home/smart grid environment, and their impact on the overall system security is evaluated. A review of contemporary literature is then conducted with the aim of presenting promising security countermeasures with respect to the identified security goals for each presented scenario. An effort to shed light on open issues and future research directions concludes this paper.
---
paper_title: Requirements for Testing and Validating the Industrial Internet of Things
paper_content:
The latest advances in industry have been accomplished within the 4th Industrial Revolution, commonly referred to as Industrie 4.0. This industrial revolution is boosted by the application of Internet of Things (IoT) technologies in industrial contexts, also known as the Industrial Internet of Things (IIoT), which is being supported by the implementation of Cyber-Physical Production Systems (CPPS). In this context, most of the existing work concentrates on developing IIoT models and CPPS architectures, lacking the identification of validation requirements for these platforms. By rushing to release state-of-the-art IIoT applications, developers usually forget to implement methodologies to validate these applications. In this paper, we propose a list of requirements for IIoT platform validation, based on its architecture as well as on requirements established by industrial reality. A CPPS case study is presented in order to illustrate some of these requirements and how validation of this type of system could be achieved.
---
paper_title: Future Internet: The Internet of Things Architecture, Possible Applications and Key Challenges
paper_content:
The Internet is continuously changing and evolving. The main communication form of the present Internet is human-to-human. The Internet of Things (IoT) can be considered as the future evolution of the Internet that realizes machine-to-machine (M2M) learning. Thus, IoT provides connectivity for everyone and everything. The IoT embeds some intelligence in Internet-connected objects to communicate, exchange information, take decisions, invoke actions and provide amazing services. This paper addresses the existing development trends, the generic architecture of IoT, its distinguishing features and possible future applications. This paper also forecasts the key challenges associated with the development of IoT. The IoT is gaining increasing popularity in academia, industry and government, and has the potential to bring significant personal, professional and economic benefits.
---
paper_title: Study and application on the architecture and key technologies for IOT
paper_content:
The IOT represents the third scientific and economic wave in the global information industry, after the computer and the Internet; it has attracted great attention from governments, enterprises and academia, and has opened a huge new market for the communication industry. At present, major global operators and equipment suppliers have begun to provide M2M business and solutions. This paper mainly introduces the concept of IOT and analyses its structure: the perception layer, the network layer and the application layer. It sets forth the key technologies of IOT, such as RFID and network communication. Finally, it discusses the future development and reform trends of IOT.
---
paper_title: Research on the architecture of Internet of Things
paper_content:
The Internet of Things is a technological revolution that represents the future of computing and communications. It is not a simple extension of the Internet or of the telecommunications network. It has the features of both the Internet and the telecommunications network, and also has its own distinguishing features. Through analysing the currently accepted three-layer structure of the Internet of Things, we argue that the three-layer structure cannot express the full features and connotation of the Internet of Things. After reanalysing the technical framework of the Internet and the Logical Layered Architecture of the Telecommunication Management Network, we establish a new five-layer architecture of the Internet of Things. We believe this architecture is more helpful for understanding the essence of the Internet of Things, and we hope it will aid its development.
---
paper_title: Towards an Optimal Network Topology in Wireless Sensor Networks: A Hybrid Approach
paper_content:
As the demand for wireless sensor networks increases in both the military and civilian sectors, the need for a stable networking scheme has grown. Wireless sensor networks (WSNs) must be robust networks capable of handling many adverse conditions. If these adverse conditions cannot be handled effectively, they can result in data loss and, under extreme conditions, total network failure. Through the use of a hybrid network topology, a WSN can be stable and reliable throughout the life of its nodes. The paper introduces a new network topology for the WSN environment called the leader-based enhanced butterfly (LEB) network, which is based on our previous work, the enhanced butterfly network. The proposed network guarantees a maximum of n+1 reaching steps in ⌊2·2^n⌋ nodes. And also, in order to demonstrate the
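As a rough illustration of the hop bound quoted above (a sketch under our own assumptions, not code from the paper), the following Python snippet recovers the largest supported n from a given node count and reports the guaranteed maximum number of hops.

```python
import math

def leb_max_hops(num_nodes: int) -> int:
    """Illustrative calculation of the hop bound stated for the LEB topology:
    a network of floor(2 * 2**n) nodes is claimed to be reachable in at most
    n + 1 steps.  Given a node count, recover the largest n that the network
    supports and return the corresponding bound (assumption-based sketch)."""
    if num_nodes < 2:
        raise ValueError("at least two nodes are needed")
    # floor(2 * 2**n) <= num_nodes  =>  n <= log2(num_nodes / 2)
    n = int(math.floor(math.log2(num_nodes / 2)))
    return n + 1

# Example: a 64-node deployment (n = 5) would be bounded by 6 hops.
for nodes in (4, 16, 64, 256):
    print(nodes, "nodes -> at most", leb_max_hops(nodes), "hops")
```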
---
paper_title: Applications of Wireless Sensor Networks in Marine Environment Monitoring: A Survey
paper_content:
With the rapid development of society and the economy, an increasing number of human activities have gradually destroyed the marine environment. Marine environment monitoring is a vital problem and has increasingly attracted a great deal of research and development attention. During the past decade, various marine environment monitoring systems have been developed. The traditional marine environment monitoring system using an oceanographic research vessel is expensive and time-consuming and has a low resolution both in time and space. Wireless Sensor Networks (WSNs) have recently been considered as potentially promising alternatives for monitoring marine environments since they have a number of advantages such as unmanned operation, easy deployment, real-time monitoring, and relatively low cost. This paper provides a comprehensive review of the state-of-the-art technologies in the field of marine environment monitoring using wireless sensor networks. It first describes application areas, a common architecture of WSN-based oceanographic monitoring systems, a general architecture of an oceanographic sensor node, sensing parameters and sensors, and wireless communication technologies. Then, it presents a detailed review of some related projects, systems, techniques, approaches and algorithms. It also discusses challenges and opportunities in the research, development, and deployment of wireless sensor networks for marine environment monitoring.
---
paper_title: Wired and wireless sensor networks for industrial applications
paper_content:
Distributed architectures for industrial applications are a new opportunity to realize cost-effective, flexible, scalable and reliable systems. Direct interfacing of sensors and actuators to the industrial communication network improves system performance, because process data and diagnostics can be simultaneously available to many systems and also shared on the Web. However, sensors, especially low-cost ones, cannot use standard communication protocols suitable for computers and PLCs. In fact, sensors typically require a cyclic, isochronous and hard real-time exchange of a few data items, whereas PCs and PLCs exchange large amounts of data with soft real-time constraints. Looking at industrial communication systems, this separation is clearly visible: several fieldbuses have been designed for specific sensor application areas, whereas high-level industrial equipment uses wired/wireless Ethernet and Internet technologies. Recently, traditional fieldbuses have been replaced by Real-Time Ethernet protocols, which are "extended" versions of Ethernet that meet real-time operation requirements. Besides, real-time wireless sensor networking seems promising, as demonstrated by the growing research activity. In this paper, an overview of the state of the art of real-time sensor networks for industrial applications is presented. Particular attention has been paid to the description of methods and instrumentation for performance measurement in this kind of architecture.
---
paper_title: Connectivity and coverage maintenance in wireless sensor networks
paper_content:
One of the main design challenges for wireless sensor networks (WSNs) is to obtain a long system lifetime without sacrificing original system performance such as communication connectivity and sensing coverage. A large number of sensor nodes are deployed in a redundant fashion in dense sensor networks, which leads to higher energy consumption. We propose a distributed framework for energy-efficient connectivity and coverage maintenance in WSNs. In our framework, each sensor performs self-scheduling to separately control the states of its RF and sensing units based on a dynamic coordinated reconstruction mechanism. A novel energy-balanced distributed connected dominating set algorithm is presented for connectivity maintenance, and a distributed node sensing schedule is proposed to maintain the network coverage according to the surveillance requirements. We implemented our framework in C++, and the simulation results show that it outperforms several related works by considerably improving the energy performance of sensor networks and thus effectively extending network lifetime.
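To make the backbone idea behind connectivity maintenance concrete, the following minimal Python sketch builds a connected dominating set with a centralized greedy heuristic; it is an illustration only, not the energy-balanced distributed algorithm described in the paper, and the adjacency map and node identifiers are assumptions.

```python
def greedy_connected_dominating_set(adj):
    """Minimal, centralized greedy sketch of a connected dominating set (CDS):
    start from the highest-degree node and grow the backbone by repeatedly
    adding the neighbour that dominates the most still-uncovered nodes.
    Assumes a connected input graph; `adj` maps a node id to the set of its
    neighbours.  Not the paper's distributed, energy-balanced algorithm."""
    nodes = set(adj)
    start = max(nodes, key=lambda v: len(adj[v]))          # highest-degree seed
    cds = {start}
    covered = {start} | adj[start]
    while covered != nodes:
        # candidates: neighbours of the current backbone not yet in it
        frontier = {u for v in cds for u in adj[v]} - cds
        best = max(frontier, key=lambda u: len(adj[u] - covered))
        cds.add(best)
        covered |= adj[best] | {best}
    return cds

# Toy 6-node topology (hypothetical): nodes 2 and 3 form the relay backbone.
adj = {0: {2}, 1: {2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3}, 5: {3}}
print(greedy_connected_dominating_set(adj))   # -> {2, 3}
```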
---
paper_title: Wireless Sensor Networks for Ecology
paper_content:
Field biologists and ecologists are starting to open new avenues of inquiry at greater spatial and temporal resolution, allowing them to "observe the unobservable" through the use of wireless sensor networks. Sensor networks facilitate the collection of diverse types of data (from temperature to imagery and sound) at frequent intervals, even multiple times per second, over large areas, allowing ecologists and field biologists to engage in intensive and expansive sampling and to unobtrusively collect new types of data. Moreover, real-time data flows allow researchers to react rapidly to events, thus extending the laboratory to the field. We review some existing uses of wireless sensor networks, identify possible areas of application, and review the underlying technologies in the hope of stimulating additional use of this promising technology to address the grand challenges of environmental science.
---
paper_title: Toward Practical MAC Design for Underwater Acoustic Networks
paper_content:
Recently, various medium access control (MAC) protocols have been proposed for underwater acoustic networks (UANs). These protocols have significantly improved the performance of MAC layer in theory. However, two critical characteristics, low transmission rates and long preambles, found in the commercial modem-based real systems, severely degrade the performance of existing MAC protocols in the real world. Thus, a new practical MAC design is demanded. Toward an efficient approach, this paper analyzes the impact of these two modem characteristics on the random access-based MAC and handshake-based MAC, which are two major categories of MAC protocols for UANs. We further develop the nodal throughput and collision probability models for representative solutions of these two MAC protocol categories. Based on the analyses, we believe time sharing-based MAC is very promising. Along this line, we propose a time sharing-based MAC and analyze its nodal throughput. Both analytical and simulation results show that the time sharing-based solution can achieve significantly better performance.
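As a back-of-the-envelope illustration of why time sharing is attractive on such links (a sketch under assumed, not paper-reported, parameter values), the Python snippet below sizes a TDMA-style slot so that it covers the long modem preamble, the low-rate payload transmission and the worst-case acoustic propagation delay.

```python
def tdma_slot_length(payload_bits, bit_rate_bps, preamble_s, max_range_m,
                     sound_speed_mps=1500.0, guard_s=0.05):
    """Illustrative slot sizing for a time-sharing (TDMA-style) underwater MAC.
    A slot must cover the modem preamble, the payload transmission time and
    the worst-case acoustic propagation delay, plus a small guard interval.
    All parameter values are assumptions, not figures from the paper."""
    tx_time = payload_bits / bit_rate_bps
    propagation = max_range_m / sound_speed_mps
    return preamble_s + tx_time + propagation + guard_s

# Example: a 2000-bit frame on a 1 kbit/s acoustic link with a 0.5 s preamble
# and a 1.5 km maximum range -> roughly 3.55 s per slot.
slot = tdma_slot_length(payload_bits=2000, bit_rate_bps=1000,
                        preamble_s=0.5, max_range_m=1500)
print(f"slot length: {slot:.2f} s; frame of 10 nodes: {10 * slot:.1f} s")
```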
---
paper_title: Underwater Acoustic Networks - Issues and Solutions
paper_content:
Underwater Acoustic Networks (UANs) are very unique and can be deployed for commercial and military applications. Research on UANs has attracted increasing attention in recent years. This survey paper first introduces the concept of UANs and then reviews some recent developments within this research area. It also lists some practical and potential research issues of UANs, ranging from energy saving and deployment to issues at the different protocol layers. Finally, some suggestions and promising solutions are given for these issues.
---
paper_title: Research on marine environmental monitoring system based on the Internet of Things technology
paper_content:
In this paper, a marine environmental monitoring system based on Internet of Things technology is researched and demonstrated. First, the system requirements and the overall framework of the marine environmental monitoring system are introduced. Then, the paper discusses how surface and underwater wireless sensor networks are built and how ZigBee and CDMA modes are applied in the system. Finally, the paper expounds the requirements of the information subsystem in the marine environmental monitoring system.
---
paper_title: Prototype autonomous mini-buoy for use in a wireless networked, ocean surface sensor array
paper_content:
We report the design, prototype construction and initial testing of a small minibuoy that is aimed at use in a coordinated, wireless networked array of buoys for near-surface ocean sensing. This vehicle is designed to fill the gap between larger ocean surface vessels and/or moored buoys and subsurface gliders. The size and cost is low enough that these versatile sensor platforms can be deployed easily and in quantity. Since these minibuoys are mobile, they can keep station in currents as large as 25 cm/s or move as an adaptive, coordinated sensor array for high resolution in both time and space. The buoy is about 74 cm (29 in) long, 41 cm (16 in) wide (max) and weighs about 14.5 kg (32 lbs); hence, it can be deployed easily from small craft. Deployment times are about 1 to 2 days or more - longer with solar power. The buoy structure is fiberglass and PVC with two 2 W DC motors. Control is done with GPS and magnetic heading sensors and a PID scheme to maintain course. Communication is via a 900 MHz system with a range of 1 to 2 km and plans for a longer range HF/VHF or satellite system. The initial sensor system is designed for ocean hyperspectral observations as surface truth for airborne system calibration and validation and other ocean color applications. Acoustic, wave, air & water temperature sensors as well as GPS are included. The Mark I prototype has been successfully tested in a pool with manual control.
---
paper_title: A demonstration of wireless sensing for long term monitoring of water quality
paper_content:
At a time when technological advances are providing new sensor capabilities, novel network capabilities, long-range communications technologies and data interpreting and delivery formats via the World Wide Web, we have never before had such opportunities to sense and analyse the environment around us. However, challenges remain. While measurement and detection of environmental pollutants can be successful under laboratory-controlled conditions, continuous in-situ monitoring remains one of the most challenging aspects of environmental sensing. This paper describes the development and testing of a multi-sensor heterogeneous real-time water monitoring system. A multi-sensor system was deployed in the River Lee, Co. Cork, Ireland, to monitor water quality parameters such as pH, temperature, conductivity, turbidity and dissolved oxygen. The River Lee comprises a tidal water system that provides an interesting test site to monitor. The multi-sensor system set-up is described, and results of the sensor deployment and the various challenges are discussed.
---
paper_title: A low cost reconfigurable sensor network for coastal monitoring
paper_content:
Sensor networks have developed rapidly and extended their fields of application since their first appearance in military uses. The monitoring of physical parameters in natural habitats is a typical application for assessing the risks of worsening the ecosystem. Among natural environments of scientific interest are the coastlines near cities whose industrial activities impact them, such as San Jorge Gulf, Chubut, Argentina. For that purpose, a low-cost wireless sensor network is being developed to deploy nodes over the coastline for measuring physical parameters, providing a large coverage area of study. In this paper, the node architecture based on the reconfigurable mixed-signal array called PSoC® (Programmable System on Chip) from Cypress® is presented. The use of these devices allows, on the one hand, easy interfacing with sensors and communication devices using a single chip and, on the other hand, remote reprogramming of the hardware while the system is running, enabling it to perform different functions or to improve its performance.
---
paper_title: A system for monitoring marine environments based on Wireless Sensor Networks
paper_content:
In this paper a Wireless Sensor Network (WSN) for monitoring a coastal shallow-water marine environment is presented. The study area is located in the Mar Menor coastal lagoon, situated in the southeast of Spain and separated from the Mediterranean Sea by La Manga, a narrow strip of land 22 km long crossed by three channels that regulate the water circulation between the lagoon and the Mediterranean Sea. In order to characterise the hydrodynamic behaviour of the lagoon and other oceanographic parameters, a WSN has been developed. It is composed of several sensor nodes or buoys. These sensor nodes take oceanographic data and send them to the sink node using wireless communication. The description of this system, the buoy prototype and the user application are presented in this paper.
---
paper_title: Multi-modal sensor networks for more effective sensing in Irish coastal and freshwater environments
paper_content:
The world's oceans represent a vital resource for global economies, and huge economic opportunity remains unexploited. However, along with this potential comes a responsibility to understand the effects that various developments may have on the natural ecosystem. This, along with a variety of other issues, necessitates continuous and reliable monitoring of the marine and freshwater environment. The potential for innovative technology development for marine and freshwater monitoring and knowledge generation is huge, and recent years have seen major leaps forward in sensor technology for such purposes. Despite these advances, however, a number of issues remain. In our research we advocate a multi-modal approach to create smarter, more efficient monitoring networks, while enhancing the use of in-situ wireless sensor networks (WSNs). In particular we focus on the use of visual sensors, modelled outputs and context information to support a conventional in-situ wireless sensor network, creating a multi-modal environmental monitoring network. Here we provide an overview of a selection of our work on the use of visual sensing through networked cameras or satellite imagers in three very diverse test sites: a river catchment, a busy port and a coastal environment.
---
paper_title: Deployment of Wireless Sensor Networks for intelligent information retrieval in marine environment
paper_content:
Wireless Sensor Networks are a highly promising technique for monitoring environmental conditions in marine areas. In marine areas it is very important to know the state of the sea in order to decide whether gateways from neighboring areas should be opened to allow water activities. Messages stating the sea conditions are sent from nearby sea areas. This paper proposes a novel framework for the deployment of Wireless Sensor Networks in the marine environment to carry out information retrieval and generate intelligent data. This method uses SentiWordNet, a lexical resource used for opinion mining. The framework extracts intelligent information from the aggregate message by first filtering the message and transforming it by lemmatization, then gathering the opinion words and calculating their scores with the help of SentiWordNet. Thereby the polarity of the sentence is calculated and the message is conveyed. The proposed method is executed on messages sent from various marine areas and the results demonstrate the efficiency and accuracy of the method.
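The scoring step described above can be sketched in a few lines of Python; the miniature lexicon below is a hypothetical stand-in for SentiWordNet, and the tokenisation and score values are assumptions rather than the framework's actual implementation.

```python
import re

# Hypothetical miniature lexicon standing in for SentiWordNet:
# word -> (positivity, negativity) scores in [0, 1].
OPINION_LEXICON = {
    "calm": (0.75, 0.0), "safe": (0.8, 0.0), "clear": (0.6, 0.1),
    "rough": (0.0, 0.7), "dangerous": (0.0, 0.9), "stormy": (0.0, 0.8),
}

def message_polarity(message: str) -> float:
    """Sketch of the scoring step: tokenise, keep opinion words found in the
    lexicon and return the summed (pos - neg) score.  Lemmatisation is reduced
    to lower-casing here; a real pipeline would use a proper lemmatiser."""
    tokens = re.findall(r"[a-z]+", message.lower())
    score = 0.0
    for token in tokens:
        if token in OPINION_LEXICON:
            pos, neg = OPINION_LEXICON[token]
            score += pos - neg
    return score

msg = "Sea is calm and clear near the gateway, conditions are safe"
print(message_polarity(msg))   # positive score -> gateway may be opened
```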
---
paper_title: Integration of micro-sensor technology and remote sensing for monitoring coastal water quality in a municipal beach and other areas in Cyprus
paper_content:
The main objective of the proposed project is the monitoring of coastal waters using satellite remote sensing and wireless sensor technology employed on a buoy, with emphasis firstly on municipal beaches and further on areas where systematic sampling is required. The use of remote sensing data to assess the quality of water bodies has proven to be successful not only in inland waters but also in coastal water areas, as shown by several previously conducted studies. The reflectance signature of municipal coastal water is monitored using a GER 1500 field spectroradiometer. Simultaneous measurements of turbidity and temperature have been acquired. Cross-validation of water quality measurements from both the micro-sensor and remote sensing is planned. An overall methodology that integrates both micro-sensor technology and satellite remote sensing is presented.
---
paper_title: Building Novel VHF-Based Wireless Sensor Networks for the Internet of Marine Things
paper_content:
Traditional marine monitoring systems such as oceanographic and hydrographic research vessels use either wireless sensor networks with limited coverage, or expensive satellite communication that is not suitable for small and mid-sized vessels. This paper proposes a novel Internet of Marine Things data acquisition and cartography system for the marine environment using Very High Frequency (VHF) radio, available on the majority of ships. The proposed system is equipped with many sensors, such as sea depth, temperature, and wind speed and direction, and the collected data are sent to 5G edge cloudlets connected to sink/base station nodes on shore. The sensory data are ultimately aggregated at a central cloud on the internet to produce up-to-date cartography systems. Several observations and obstacles unique to the marine environment are discussed and feed into the solutions presented. The impact of marine sparsity on the network is examined, and a novel hybrid Mobile Ad-hoc/Delay Tolerant routing protocol is proposed to switch automatically between Mobile Ad-hoc Network and Delay Tolerant Network routing according to the network connectivity. The low-rate data transmission offered by VHF radio has been investigated in terms of the network bottlenecks and the data collection rate achievable near the sinks. A data synchronization and transmission approach has also been proposed at the 5G network core using Information Centric Networks.
---
paper_title: SEA-LABS: a wireless sensor network for sustained monitoring of coral reefs
paper_content:
This paper describes SEA-LABS (Sensor Exploration Apparatus utilizing Low-power Aquatic Broadcasting System), a low-cost, power-efficient Wireless Sensor Network (WSN) for sustained, real-time monitoring of shallow water coral reefs. The system is designed to operate in remote, hard-to-access areas of the world, which limits the ability to perform on-site data retrieval and periodic system maintenance (e.g., battery replacement/recharging). SEA-LABS thus provides a customized solution to shallow-water environmental monitoring addressing the trade-offs between power conservation and the system's functional requirements, namely data sensing and processing as well as real-time, wireless communication. We present SEA-LABS' architecture and its current implementation. Finally, we share our experience deploying SEA-LABS in the Monterey Bay.
---
paper_title: A layered approach to in situ data management on a wireless sensor network
paper_content:
A multi-layered algorithm is proposed that provides a scalable and adaptive method for handling data on a wireless sensor network. Statistical tests, local feedback and global genetic-style material exchange ensure that limited resources, such as battery and bandwidth, are used efficiently by manipulating data at the source, and that important features in the time series are not lost when compression needs to be made. The approach leads to a more 'hands-off' implementation, which is demonstrated by a real-world oceanographic deployment of the system.
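As a loose illustration of source-side data reduction of this kind (a generic sketch, not the paper's multi-layered algorithm), the Python fragment below transmits a reading only when it deviates from a running window mean by more than a chosen number of standard deviations; the window size, threshold and sample values are assumptions.

```python
from statistics import mean, pstdev

def reduce_stream(samples, window=20, k=2.0):
    """Generic sketch of source-side data reduction: keep a sample only when it
    deviates from the running window mean by more than k standard deviations,
    so notable features survive while routine readings are suppressed."""
    history, kept = [], []
    for t, value in enumerate(samples):
        if len(history) >= window:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(value - mu) > k * sigma:
                kept.append((t, value))            # transmit this reading
        history.append(value)
        history[:] = history[-window:]             # sliding window
    return kept

# Hypothetical temperature stream with a single spike at index 40.
readings = [20.1, 20.2, 20.0, 20.1] * 10 + [23.5] + [20.1] * 10
print(reduce_stream(readings))   # only the 23.5 C spike is reported
```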
---
paper_title: A Robust, Adaptive, Solar-Powered WSN Framework for Aquatic Environmental Monitoring
paper_content:
The paper proposes an environmental monitoring framework based on wireless sensor network technology characterized by energy harvesting, robustness with respect to a large class of perturbations and real-time adaptation to the network topology. The fully designed and developed ad hoc system, based on clusters relying on a star topology, encompasses a sensing activity, a one-step local transmission from sensor nodes to the gateway, a remote data transmission from the gateway to the control center, data storage in a DB and real-time visualization. Hardware and software modules have been either carefully selected or designed to guarantee a high quality of service, optimal solar energy harvesting, storage and energy awareness. A monitoring system integrating the outlined framework has been deployed in Queensland, Australia, for monitoring underwater luminosity and temperature, information necessary to derive the health status of the coral barrier. At the same time, the acquired data can be used to provide quantitative indications related to cyclone formation in tropical areas.
---
paper_title: A Smart Sensor Network for Sea Water Quality Monitoring
paper_content:
Measurement of chlorophyll concentration is gaining ever more importance in evaluating the status of the marine ecosystem. For wide-area monitoring, a reliable wireless sensor network architecture is required. In this paper, we present a network of smart sensors, based on the ISO/IEC/IEEE 21451 suite of standards, for in-situ, continuous space-time monitoring of surface water bodies, in particular seawater. The system is meant to be an important tool for evaluating water quality and a valid support for strategic decisions concerning critical environmental issues. The aim of the proposed system is to capture possible extreme events and collect data over long-term periods.
---
paper_title: An underwater wireless group-based sensor network for marine fish farms sustainability monitoring
paper_content:
One of the main problems in marine fish farm sustainability is the amount of uneaten feed and fecal waste dispersed and deposited on the seabed under the cages. It damages the fauna and flora and decreases the economic benefits because of the wastage of uneaten food. Several national governments and international associations have published laws and rules about the maximum permitted pollution on the seabed in order to avoid a high impact on the environment. In this paper, we propose an underwater wireless group-based sensor network in order to quantify and monitor the accurate amount of pollution deposited on the seabed. First, we present an analytical model and study the best location to place the sensor nodes. The mobility of the nodes and the group-based protocol operation are described. Our wireless group-based sensor network proposal is able to determine the amount of food that is wasted while it measures the amount of deposits generated. These data can be used to compute and estimate more accurately the amount of food that should be thrown into the cage. Finally, several simulations are presented in order to show the network traffic and to verify the correct operation of the wireless sensor system.
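A crude mass-balance sketch (entirely hypothetical: the faecal fraction, cage footprint and flux values below are illustrative assumptions, not data from the paper) shows how deposit measurements from such a network could be turned into an estimate of uneaten feed.

```python
def estimate_wasted_feed(deposit_flux_g_m2_day, cage_footprint_m2,
                         faecal_fraction=0.4):
    """Hypothetical mass-balance sketch: average the deposit flux measured by
    the seabed sensors, scale it to the whole cage footprint and subtract an
    assumed faecal share to approximate the mass of uneaten feed."""
    mean_flux = sum(deposit_flux_g_m2_day) / len(deposit_flux_g_m2_day)
    total_deposit_g = mean_flux * cage_footprint_m2
    uneaten_feed_g = total_deposit_g * (1.0 - faecal_fraction)
    return uneaten_feed_g / 1000.0        # grams per day -> kg per day

# Five hypothetical sensor readings (g per m^2 per day) under a 400 m^2 cage.
fluxes = [12.0, 9.5, 14.2, 11.1, 10.8]
print(f"estimated uneaten feed: {estimate_wasted_feed(fluxes, 400):.1f} kg/day")
```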
---
paper_title: A Hierarchical Communication Architecture for Oceanic Surveillance Applications
paper_content:
The interest in monitoring applications using underwater sensor networks has been growing in recent years. The severe communication restrictions imposed by underwater channels make efficient monitoring a challenging task. Though a lot of research has been conducted on underwater sensor networks, there are only a few concrete applications to real-world case studies. In this work, hence, we propose a general three-tier architecture leveraging low-cost wireless technologies for acoustic communications between underwater sensors and standard technologies, ZigBee and Wireless Fidelity (WiFi), for water surface communications. We have selected a suitable Medium Access Control (MAC) layer after comparing it with some common MAC protocols. The performance of the overall system in terms of Signal Discarding Rate (SDR), signalling delay at the surface gateway and the percentage of true detection has been evaluated by simulation, pointing out good results that give evidence in favour of its applicability.
---
paper_title: Wireless sensor networks in coastal marine environments: a study case outcome
paper_content:
In this paper the results of an experimental wireless sensor network application in a coastal shallow water marine environment are presented. The study focuses on the practical aspects of deployment, data gathering and retrieval events. The trial sensor network was used to retrieve temperature and illuminance data from the seabed of Moreton Bay, Australia. The application described possesses features and implements technical solutions that distinguish it from previous deployments. For example, the particular mooring system maintains the buoys horizontal on the water's surface even in strong tidal current conditions, thus enabling reliable communication at 2.4 GHz. In this application, the underwater sensors were wired to surface wireless nodes, and this arrangement led to various difficulties in the network's deployment, maintenance and retrieval phases. For this reason, the knowledge acquired through this experience is presented in this paper to provide insight into, and to further stress, the importance of using fully wireless systems in monitoring applications for the marine environment.
---
paper_title: Architecture of wireless sensor network for monitoring aquatic environment of marine shellfish
paper_content:
Aquatic environment monitoring plays an important role in the artificial breeding of marine shellfish. Wireless sensor networks (WSNs) are an efficient approach for monitoring large-scale coastal beaches with densely distributed smart nodes. Combined with a project on aquatic environment monitoring and assessment of marine shellfish in Zhejiang province, China, a new WSN structure combining network clustering and route enhancement is proposed with the aim of facilitating node deployment in offshore and tidal zones and regular network maintenance. The topology of the wireless sensing cluster, the hardware and software design of the wireless node, and the routing enhancement are elaborated. The testing results show that the proposed scheme is capable of robust data transmission. The method proposed in the paper is also suitable for other water quality monitoring systems.
---
paper_title: IEEE 802.15.4 based wireless monitoring of pH and temperature in a fish farm
paper_content:
In recent years the number of papers related to wireless sensor networks has increased substantially. Most of them focus on issues such as routing algorithms, network lifetime and, more recently, Multiple Input Multiple Output wireless networks. In contrast with those studies, we present a practical application of wireless networks: the sensing of pH and temperature for a fish farm. The application requires two different kinds of modules: the sensor itself and the wireless module. The sensor collects and transmits the information to a wireless module using a wired connection. Once the information reaches the wireless node, it is forwarded to the central unit through a wireless protocol. The central unit starts and manages the network, as well as stores all the received data. The sensor module includes a pH sensor based on a specially designed ISFET and a commercial temperature sensor. The wireless node collects the sensed data by means of asynchronous wired serial polling communication. The use of this kind of protocol allows a single master to be connected with multiple slaves. In our particular case, we have connected one master with four slaves using a transmission rate of 9600 b/s. The wireless transmission follows the IEEE 802.15.4 standard and implements a routing protocol based on the ZigBee standard. The number of nodes distributed in the fish farm has been limited to 30 and the maximum number of hops to 6. Moreover, between the MAC and the routing layer an energy management layer has been included. This layer reduces the power consumption of the wireless network using an RF activity duty cycle for the reception stage at the final end device of around 0.02%.
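To give a feel for what a reception duty cycle of around 0.02% means for node lifetime, the following back-of-the-envelope Python calculation uses typical 802.15.4-class current draws and a common battery capacity; all the numbers are assumptions rather than figures from the paper, and sensing, transmission and battery self-discharge are ignored.

```python
def battery_lifetime_days(duty_cycle, rx_current_ma, sleep_current_ma,
                          battery_mah):
    """Back-of-the-envelope estimate of end-device lifetime from an RF duty
    cycle.  Current draws and battery capacity are illustrative assumptions
    (typical 802.15.4-class values); sensing/TX and self-discharge ignored."""
    avg_current_ma = duty_cycle * rx_current_ma + (1 - duty_cycle) * sleep_current_ma
    return battery_mah / avg_current_ma / 24.0

# A 0.02 % reception duty cycle with ~20 mA in RX, 5 uA asleep, 2400 mAh cells.
days = battery_lifetime_days(duty_cycle=0.0002, rx_current_ma=20.0,
                             sleep_current_ma=0.005, battery_mah=2400.0)
print(f"~{days:.0f} days, i.e. about {days / 365:.1f} years (upper bound)")
```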
---
paper_title: A Low-Cost Sensor Buoy System for Monitoring Shallow Marine Environments
paper_content:
Monitoring of marine ecosystems is essential to identify the parameters that determine their condition. The data derived from the sensors used to monitor them are a fundamental source for the development of mathematical models with which to predict the behaviour of conditions of the water, the sea bed and the living creatures inhabiting it. This paper is intended to explain and illustrate a design and implementation for a new multisensor monitoring buoy system. The system design is based on a number of fundamental requirements that set it apart from other recent proposals: low cost of implementation, the possibility of application in coastal shallow-water marine environments, suitable dimensions for deployment and stability of the sensor system in a shifting environment like the sea bed, and total autonomy of power supply and data recording. The buoy system has successfully performed remote monitoring of temperature and marine pressure (SBE 39 sensor), temperature (MCP9700 sensor) and atmospheric pressure (YOUNG 61302L sensor). The above requirements have been satisfactorily validated by operational trials in a marine environment. The proposed buoy sensor system thus seems to offer a broad range of applications.
---
paper_title: Design of a Wireless Sensor Network for Long-term, In-Situ Monitoring of an Aqueous Environment
paper_content:
An aqueous sensor network is described consisting of an array of sensor nodes that can be randomly distributed throughout a lake or drinking water reservoir. The data of an individual node is transmitted to the host node via acoustic waves using intermediate nodes as relays. Each node of the sensor network is a data router, and contains sensors capable of measuring environmental parameters of interest. Depending upon the required application, each sensor node can be equipped with different types of physical, biological or chemical sensors, allowing long-term, wide area, in situ multi-parameter monitoring. In this work the aqueous sensor network is described, with application to pH measurement using magnetoelastic sensors. Beyond ensuring drinking water safety, possible applications for the aqueous sensor network include advanced industrial process control, monitoring of aquatic biological communities, and monitoring of waste-stream effluents.
---
paper_title: OceanSense: A practical wireless sensor network on the surface of the sea
paper_content:
In this paper, we present a practical wireless sensor network for environmental monitoring (OceanSense) deployed on the sea. The system is mainly composed of TelosB motes, which are deployed on the surface of the sea collecting environmental data, such as temperature, light and RSSI from the testbed. The motes communicate with a base station, which transmits collected data to a visualization system running on a database server. The data can be accessed using a browser-based web application. The OceanSense has been running for more than half a year, providing environmental monitoring data for further study.
---
paper_title: Sensor Networking in Aquatic Environments - Experiences and New Challenges
paper_content:
In this paper we present the design and implementation of a small-scale marine sensor network. The network monitors the temperature in the Baltic Sea at different heights from the water surface down to the bottom. Unlike many other wireless sensor networks, this network contains both a wired and a wireless part. One of the major challenges is that the network is hard to access after its deployment, and hence both hardware and software must be robust and reliable. We also present the design of an advanced buoy system featuring a diving unit that achieves a better vertical resolution and discuss remaining challenges of sensor networking in aquatic environments.
---
paper_title: Water monitoring system using Wireless Sensor Network (WSN): Case study of Kuwait beaches
paper_content:
Wireless Sensor Networks (WSNs) can be regarded as one of the most important technologies of the new century. During the last decades, many achievements, especially in micro-sensor technology and low-power electronics, have made WSN applications a reality. WSNs have enabled a great number of surveillance and supervision applications, especially in hostile and critical environments such as monitoring the sea. We present software and hardware platforms for augmented sensor networks, which are connected temporarily to back-end infrastructures for data storage and user interaction, and which make particular use of actuators or devices with rich computing resources to manage complex signal processing tasks. In our proposed solution, the network sensors deployed on the sea surface monitor water characteristics such as temperature, pH and dissolved oxygen, and provide convenient services for end users, who can manage the data remotely via a website with spreadsheets or through console applications. This project introduces the architecture of the WSN system, the node hardware, data acquisition, data processing at the gateway, and data visualization. Traditional schemes depend on labor-intensive work and expensive hardware; we present better solutions with dedicated sensors to measure the characteristics of the water in Kuwait.
---
paper_title: Group-based underwater wireless sensor network for marine fish farms
paper_content:
The uneaten feed and fecal waste generated by the fish in marine fish farms damage the fauna and flora, and also reduce the economic benefits because of the wastage of uneaten food. In this paper, we propose an underwater group-based sensor network in order to quantify accurately the amount of pollution deposited on the seabed. First, an analytical model lets us determine the best location to place the sensor nodes. Our group-based wireless sensor network (WSN) proposal can also determine the amount of food that is wasted while it measures the amount of deposits generated. We describe the mobility of the nodes and how the group-based protocol operates, and we show several simulations in order to examine the traffic load and to verify the correct operation of the WSN.
---
paper_title: LakeNet: An Integrated Sensor Network for Environmental Sensing in Lakes
paper_content:
Field investigations in the hydrologic sciences often are limited by the ability to collect data at the high spatiotemporal resolution necessary to build accurate predictive models or to control complex engineered systems in real time. Here, we describe LakeNet, an embedded wireless sensor network constructed by an interdisciplinary team of hydrogeologists, environmental engineers, and electrical engineers at the University of Notre Dame. Off-the-shelf temperature, dissolved oxygen, and pH probes are suspended from floating, waterproof cases with electronics, forming sensor pods. Wireless transmission to relay stations and a PC gateway enable researchers to interact with the network remotely to alter sampling patterns, download data, and analyze data trends using the gateway's recursive processing of raw data. LakeNet functions as a “smart” network, in which each pod is aware of surrounding pods. Ongoing research will allow in-network computation to detect change points in the data stream, thus triggering...
---
paper_title: OceanSense: Monitoring the Sea with Wireless Sensor Networks
paper_content:
In this project, we explore the possibility of deploying networked sensors on the Ocean's surface, to monitor depth and temperature, as well as other valuable environmental parameters. Sea depth monitoring is a critical task to ensure the safe operation of harbors. Traditional schemes largely rely on labor-intensive work and expensive hardware. We present a new solution for measuring the sea depth with Restricted Floating Sensors. To address the problem of node localization on the changeable sea environment, we propose Perpendicular Intersection (PI), a novel mobile-assisted localization scheme. In the OceanSense project, we propose the concept of passive diagnosis as well as the PAD approach which is both lightweight and adaptive to network dynamics. The OceanSense system has been working for over 16 months and provides large amounts of valuable data about the sea.
---
paper_title: A novel design of water environment monitoring system based on WSN
paper_content:
The importance of maintaining a good water environment highlights the increasing need for advanced technologies. This paper proposes a novel design of a water environment monitoring system based on wireless sensor networks (WSN). The system consists of three parts: sensor nodes, sink nodes and a data monitoring center. The sensor nodes can be constructed with arbitrary single- or multi-parameter sensor modules such as pH, dissolved oxygen (DO), conductivity and temperature. The measurement ranges are 0 to 14 for pH, 0 to 20 mg/L for DO, and 0 to 2 S/cm for conductivity. The sink nodes communicate with the local or remote data monitoring center via RS232 or 3G/GPRS. The results show that the system can be effectively applied to water areas such as aquaculture sites, lakes and rivers for distributed automatic water environment monitoring.
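The stated measurement ranges suggest a simple sanity check that a sink node might apply before forwarding readings over RS232 or 3G/GPRS; the field names and dictionary-based representation in this Python sketch are assumptions, not part of the paper's design.

```python
# Measurement ranges quoted in the abstract; field names are assumptions.
SENSOR_RANGES = {
    "ph":        (0.0, 14.0),   # pH value
    "do_mg_l":   (0.0, 20.0),   # dissolved oxygen, mg/L
    "cond_s_cm": (0.0, 2.0),    # conductivity, S/cm
}

def validate_reading(reading: dict) -> dict:
    """Keep only the fields whose values fall inside the declared ranges; a
    sink node could apply such a filter before relaying data to the
    monitoring centre (sketch, not the paper's implementation)."""
    clean = {}
    for field, value in reading.items():
        low, high = SENSOR_RANGES.get(field, (float("-inf"), float("inf")))
        if low <= value <= high:
            clean[field] = value
    return clean

print(validate_reading({"ph": 7.4, "do_mg_l": 8.2, "cond_s_cm": 3.1}))
# -> {'ph': 7.4, 'do_mg_l': 8.2}   (out-of-range conductivity dropped)
```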
---
paper_title: Development and Evaluation of Wave Sensor Nodes for Ocean Wave Monitoring
paper_content:
Ocean wave monitoring is a significant activity in oceanography and forecasting. There are several techniques utilized for this activity ranging from local point buoys and wide-area satellites. This study focuses on constructing modular wave sensors that can be utilized efficiently in a local area wave monitoring system. This paper presents this development in different stages, namely, wave sensor construction, laboratory experiments and testing, field deployments, preprocessing technique application, and evaluation.
---
paper_title: Design and implementation of smart environment monitoring and analytics in real-time system framework based on internet of underwater things and big data
paper_content:
Internet of Things (IoT) technology is developing rapidly. One IoT implementation is the Internet of Underwater Things (IoUT), which is used for underwater monitoring. This paper discusses an integrated smart environment system framework based on IoUT and big data. It consists of an open platform that processes data from a Remotely Operated Vehicle (ROV) and a portable sensor fitted with water-parameter sensors, used as input devices to collect and temporarily store oxidation-reduction potential, pH, electrical conductivity, total dissolved solids, salinity, dissolved oxygen and temperature data for the monitored rivers. Coral reef monitoring is used to prevent coral bleaching: an underwater camera takes pictures that are sent to the data center and then analysed. A Wireless Mesh Network Access Point is used as the way the ROVs 'talk' to each other while monitoring a wide river. All data collected from the ROVs and the portable sensor are then stored and analysed on a data center platform based on a Hadoop multi-node cluster. The data are visualized as charts or tables and can be accessed worldwide. We found the processing of the data in the open platform to be relatively straightforward using a combination of SQL; however, the use of HDFS (Hadoop Distributed File System) as the data center, together with more versatile frameworks such as MapReduce, Hive, Spark and Redis, provides a more flexible set of components that particularly facilitates working with larger volumes and more heterogeneous data sources.
---
paper_title: Autonomous systems in remote areas of the ocean using BLUECOM+ communication network
paper_content:
The authors present a series of sea trials with autonomous systems using a long-range communication network. Continuous monitoring of the oceans and real-time data gathering are key issues in future marine challenges. To achieve long-range communication between land and ships tens of kilometres apart, the authors used the BlueCom+ project research trials to test their robotic systems. The BlueCom+ project intends to fill the gap of long-range, high-bandwidth communication. The usefulness of the system was demonstrated with autonomous platforms, such as a small unmanned surface vehicle (ROAZ USV) used for bathymetric mapping, and an underwater acoustic positioning and communication system was also tested.
---
paper_title: Implementation study of a home position monitoring system for marine environment
paper_content:
According to the requirements of the marine observation environment, a real-time data collection architecture for marine monitoring systems, based on wireless sensor networks, is proposed in this paper. The architecture is a two-level in-situ monitoring technology: the higher level uses GPS for absolute positioning and the lower level uses a wireless sensor array for relative positioning. The buoys and the base station are equipped with costly GPS receivers, while the large number of sensor nodes are equipped with cheap RF SoCs. Within the wireless sensor network, we propose a position estimating algorithm (PEA) that allows sensors to determine their location without any centralized computation. In addition, PEA provides a location verification mechanism that verifies the location claims of the sensors before data collection. We show that PEA bounds the ability of an attacker to spoof sensors' locations with a relatively low-density deployment of reference points. We confirm PEA against attacks analytically and via simulations. A novel entropy-based incremental fuzzy clustering algorithm is also designed to cluster real-time data efficiently, preserving real-time performance while improving clustering accuracy through a series of improved policies. Simulation results show that the proposed algorithm is well adapted to the requirements of marine observation.
---
paper_title: The Recopesca Project : a new example of participative approach to collect fisheries and in situ environmental data
paper_content:
Faced with the lack of data needed to precisely assess the spatial distribution of catches and fishing effort and to characterize the environment of the fishing areas, Ifremer has implemented a new project, Recopesca, since 2005. It consists in fitting out a sample of voluntary fishing vessels with sensors recording data on fishing effort (and, in the mid term, catches) and physical parameters such as temperature or salinity. Recopesca aims at setting up a network of sensors, for scientific purposes, to collect data and improve resource assessment and diagnostics on fisheries, as well as the environmental data required for ecosystem-based management initiatives. The challenge was to develop sensors that cause no trouble for the fishermen, are tough enough to be fitted on fishing gear, and are self-powered and autonomous. Since the sample of targeted vessels is intended to be representative of all the metiers and fleets, the sensors are modular and scalable to collect new data. Different sensors have been implemented: (i) a temperature-salinity sensor, able to record physical parameters, depth and duration of immersion for passive and active gears, and (ii) a specific sensor to record the number or length of passive gears. A GPS monitors the position of the vessels and the temperature or salinity profiles and series. Each sensor is equipped with a radio device transferring the data to an on-board receiver, called the 'concentrator', which sends the data to Ifremer central databases by GPRS. An anti-rolling weighing scale has been developed and is currently under test to record catches per species and fishing operation. The presentation will show the first data and results of this participative approach.
---
paper_title: Collective Efficacy of Support Vector Regression With Smoothness Priority in Marine Sensor Data Prediction
paper_content:
Marine data prediction plays an increasingly important role in marine environmental monitoring. The support vector machine (SVM) is viewed as a useful machine learning tool in marine data processing, but it is not completely suitable for abruptly fluctuating, noisy, non-stationary and abnormal data. To address this issue, this paper proposes a novel machine learning framework for marine sensor data prediction, i.e., a support vector regression architecture with smoothness priority. This is a united and consistent system with functions of data acquisition, smoothing and nonlinear approximation. Here, the smoothing step is used to process the outliers and noise of the acquired marine sensor data. Thereafter, a nonlinear approximator based on the SVM is constructed for marine time series prediction. This architecture is the first attempt to consider the collective efficacy of the smoother and the SVM in marine data processing tasks. The experimental results show that the model significantly surpasses a single SVM in real-world marine data prediction. In addition, standard statistical evaluation methods, such as QQ-plots, PDF, CDF and box plots, are utilized to verify its superior nonlinear approximation capacity.
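As a rough illustration of the smoothness-then-regression idea (not the authors' code), the sketch below suppresses outliers with a median filter, damps noise with a moving average, and then fits a scikit-learn SVR on lagged samples of a synthetic sensor record; the window sizes, SVR hyperparameters and the synthetic series itself are assumptions.

```python
# Minimal "smoothness priority + SVR" sketch; all parameters are assumptions.
import numpy as np
from scipy.signal import medfilt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def smooth(series, kernel=5, window=3):
    cleaned = medfilt(series, kernel_size=kernel)                        # outlier suppression
    return np.convolve(cleaned, np.ones(window) / window, mode="same")   # noise damping

def lagged_matrix(series, lags=6):
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    return X, series[lags:]

# Synthetic stand-in for a marine sensor record (periodic signal + noise + spikes).
rng = np.random.default_rng(0)
t = np.arange(500)
raw = 20 + 2 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.3, t.size)
raw[rng.choice(t.size, 10, replace=False)] += 5

X, y = lagged_matrix(smooth(raw))
split = int(0.8 * len(y))
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```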
---
paper_title: Marine depth mapping algorithm based on the edge computing in Internet of things
paper_content:
In recent years, research into marine environmental monitoring systems has become especially popular. The construction of an Internet of Things between devices plays a central role in marine environment detection systems. In order to protect and utilize marine resources, it is urgent to realize the collection and treatment of marine information. We have established an Internet of Things system for the large volumes of ocean data collected by sensors, and the data are computed in terminal devices such as the sensors themselves. In the process of calculating and arranging the data, this paper presents a new method for computing data contours whose aim is to describe the contour of the data quickly; the distribution of contour lines can be calculated accurately in a short time. The paper introduces the principle and calculation process of the method, carries out big-data simulations in practice and analyses the simulation results. Finally, the advantages of the method are illustrated by comparison simulations: on the same data and under the same conditions, the method improves both computational efficiency and accuracy, which gives it practical significance.
---
paper_title: The Big Data Processing Algorithm for Water Environment Monitoring of the Three Gorges Reservoir Area
paper_content:
Owing to the increasing volume and complexity of data caused by the uncertain environment, the water environment monitoring system in the Three Gorges Reservoir Area faces considerable pressure in data handling. In order to identify water quality quickly and effectively, this paper presents a new big data processing algorithm for water quality analysis. The algorithm adopts a fast fuzzy C-means clustering approach to analyze water environment monitoring data. The fast clustering algorithm is based on the fuzzy C-means and hard C-means clustering algorithms, and the result of hard clustering is used to initialize the fuzzy clustering, which speeds up convergence. With this fast clustering analysis, the quality of water samples can be identified. Both theoretical and simulated results show that the algorithm can quickly and efficiently analyze water quality in the Three Gorges Reservoir Area, significantly improving the efficiency of big data processing. Moreover, the proposed processing algorithm provides a reliable scientific basis for water pollution control in the Three Gorges Reservoir Area.
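A minimal sketch of the hard-then-fuzzy initialization idea is given below: K-means supplies the initial cluster centers and a standard fuzzy C-means loop then refines the memberships. This is a generic reconstruction under stated assumptions (toy three-feature water-quality data, m = 2, Euclidean distance), not the paper's exact algorithm.

```python
# Sketch of the "hard-then-fuzzy" idea: K-means initializes the centers, then a
# plain fuzzy C-means loop refines memberships. Illustrative toy data only.
import numpy as np
from sklearn.cluster import KMeans

def fuzzy_cmeans(X, c, m=2.0, max_iter=100, tol=1e-5):
    # hard clustering step guides the initial centers of the fuzzy step
    centers = KMeans(n_clusters=c, n_init=10, random_state=0).fit(X).cluster_centers_
    for _ in range(max_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        new_centers = (u.T ** m) @ X / np.sum(u.T ** m, axis=1, keepdims=True)
        if np.linalg.norm(new_centers - centers) < tol:
            return new_centers, u
        centers = new_centers
    return centers, u

# Toy feature matrix (e.g., pH, DO, turbidity) drawn around three "quality classes".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.2, (50, 3))
               for loc in ([7.0, 8.0, 1.0], [6.5, 4.0, 5.0], [8.5, 9.0, 0.5])])
centers, u = fuzzy_cmeans(X, c=3)
labels = u.argmax(axis=1)      # crisp water-quality class per sample
print(np.bincount(labels))
```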
---
paper_title: Correlation Analysis Method for Ocean Monitoring Big Data in a Cloud Environment
paper_content:
The current correlation analysis method for ocean monitoring big data is time-consuming and its stability is poor. A new method is proposed to overcome these problems. The ocean monitoring big data were divided into eight categories and analysed individually; the ocean monitoring data model, the ocean coast big data model and the ocean monitoring data sets were then analysed and computed according to a spatial correlation function to determine the spatial correlation coefficient of the ocean monitoring big data and to collect the spatial correlation of its characteristic elements. Experimental results showed that the proposed method improved the speed and stability of the correlation analysis of ocean monitoring big data, and advanced the class...
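Purely as an illustration of computing a spatial correlation coefficient between monitoring stations (the paper's own data categories and correlation function are not reproduced here), the sketch below correlates pairs of synthetic station time series and reports the coefficient against each pair's separation; the coordinates and records are invented.

```python
# Illustrative pairwise spatial correlation between synthetic monitoring stations.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_stations, n_samples = 6, 200
coords = rng.uniform(0, 10, (n_stations, 2))                  # station positions (km)
shared = rng.normal(0, 1, n_samples)                          # common oceanographic signal
series = np.array([shared + rng.normal(0, 0.5 + 0.1 * i, n_samples)
                   for i in range(n_stations)])               # synthetic records

for i, j in combinations(range(n_stations), 2):
    dist = np.linalg.norm(coords[i] - coords[j])
    r = np.corrcoef(series[i], series[j])[0, 1]               # spatial correlation coefficient
    print(f"stations {i}-{j}: separation {dist:5.2f} km, correlation {r:+.2f}")
```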
---
paper_title: Acoustic Diversity Classifier for Automated Marine Big Data Analysis
paper_content:
In recent years, big data has increasingly drawn the attention of the R&D community. With the advent of marine data, monitoring marine big data has become a new trend that advocates assessing human impact on the marine environment. Nevertheless, there is a lack of support for acoustic sound classification in such environments, covering the diverse data that can exist (i.e., fish sounds, human activity sounds and environmental sounds). In this paper, we address this gap by proposing a deep learning-based approach that enables these acoustic sounds to be classified efficiently, aiming to automate the support of marine sound analysis in big data architectures. A set of experiments has been conducted using a real marine dataset to demonstrate the feasibility and effectiveness of our approach.
---
paper_title: POSEIDON - Passive-acoustic Ocean Sensor for Entertainment and Interactive Data-gathering in Opportunistic Nautical-activities
paper_content:
Recent years have seen increased interest in Passive Acoustic Monitoring (PAM) applications for studying cetaceans. However, PAM systems remain expensive underwater systems targeted at industrial and military purposes. While the use of smartphones as acoustic sensors has been observed in terrestrial environments, ocean and nautical PAM applications remain largely unexplored. This paper presents the design, deployment and testing of the POSEIDON system, used for real-time augmentation of whale-watching experiences. We collect and use acoustic samples of cetaceans' vocal calls (clicks, moans and whistles) and apply machine learning for offline model training and prediction. When discriminating the calls, we find that Extra Trees and Gradient Boosting outperform other classifiers (>0.95 confidence threshold). The collected samples are at the disposal of citizen scientists and marine biologists. Future studies involve real-time on-boat user testing.
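The sketch below illustrates, on assumed synthetic call features, the kind of comparison the abstract describes: fitting scikit-learn's ExtraTreesClassifier and GradientBoostingClassifier and keeping only predictions whose class probability clears a 0.95 confidence threshold. The feature extraction from raw audio and the dataset itself are placeholders, not the POSEIDON data.

```python
# Illustrative classifier comparison with a 0.95 confidence threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)   # stand-in for call features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (ExtraTreesClassifier(n_estimators=200, random_state=0),
            GradientBoostingClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)
    confident = proba.max(axis=1) >= 0.95                     # confidence threshold
    acc = (proba[confident].argmax(axis=1) == y_te[confident]).mean() if confident.any() else float("nan")
    print(f"{type(clf).__name__}: confident on {confident.mean():.0%} of samples, "
          f"accuracy on those {acc:.2f}")
```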
---
paper_title: A new wave of marine evidence-based management: emerging challenges and solutions to transform monitoring, evaluating, and reporting
paper_content:
Sustainable management and conservation of the world’s oceans requires effective monitoring, evaluation and reporting. Despite the growing political and social imperative for these activities, there are some persistent and emerging challenges that marine practitioners face in undertaking these activities. In 2015, a diverse group of marine practitioners came together to discuss the emerging challenges associated with marine monitoring, evaluation and reporting, and potential solutions to address these challenges. Three emerging challenges were identified: (1) the need to incorporate environmental, social and economic dimensions in evaluation and reporting; (2) the implications of big data, creating challenges in data management and interpretation; and, (3) dealing with uncertainty throughout monitoring, evaluation and reporting activities. We point to key solutions to address these challenges across monitoring, evaluation and reporting activities: 1) integrating models into marine management systems to help understand, interpret, and manage the environmental and socio-economic dimensions of uncertain and complex marine systems; 2) utilising big data sources and new technologies to collect, process, store, and analyse data; and 3) applying approaches to evaluate, account for, and report on the multiple sources and types of uncertainty. These solutions point towards a potential for a new wave of evidence-based marine management, through more innovative monitoring, rigorous evaluation and transparent reporting. Effective collaboration and institutional support across the science–management–policy interface will be crucial to deal with emerging challenges, and implement the tools and approaches embedded within these solutions.
---
paper_title: Machine Learning Automatic Model Selection Algorithm for Oceanic Chlorophyll-a Content Retrieval
paper_content:
Ocean color remote sensing is of great importance in the monitoring of aquatic environments. The number of optical imaging sensors onboard satellites has been increasing in the past decades, allowing information about various water quality parameters of the world's oceans and inland waters to be retrieved. This is done by using various regression algorithms to retrieve water quality parameters from remotely sensed multi-spectral data for a given sensor and environment. There are a great number of such algorithms for estimating water quality parameters, with different performances; hence, choosing the most suitable model for a given purpose can be challenging. This is especially true for optically complex aquatic environments. In this paper, we present the concept of an Automatic Model Selection Algorithm (AMSA) aimed at determining the best model for a given matchup dataset. AMSA automatically chooses between regression models to estimate the parameter of interest. AMSA also determines the number and combination of features to use in order to obtain the best model. We show how an AMSA can be built for a certain application. The example AMSA we present here is designed to estimate oceanic Chlorophyll-a for global and optically complex waters by using four Machine Learning (ML) feature ranking methods and three ML regression models. We use a synthetic and two real matchup datasets to find the best models. Finally, we use two images from optically complex waters to illustrate the predictive power of the best models. Our results indicate that AMSA has great potential to be used for operational purposes and can be a useful objective tool for finding the most suitable model for a given sensor, water quality parameter and environment.
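As a hedged sketch of the model-selection loop (the feature-ranking method and the candidate regressors below are stand-ins, not the paper's four ranking methods and three regressors, and the synthetic data replaces real band-ratio/chlorophyll-a matchups), the code ranks features by mutual information and then grid-searches over the number of top-ranked features and a small set of regressors, keeping the best cross-validated combination.

```python
# Generic "automatic model selection" loop: rank features, then pick the best
# (feature count x regressor) pair by cross-validated R2. Choices are illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

ranking = np.argsort(mutual_info_regression(X, y))[::-1]      # feature ranking step
candidates = {
    "ridge": Ridge(alpha=1.0),
    "forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "mlp": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(32,),
                                                        max_iter=2000, random_state=0)),
}

best = ("none", -np.inf, None)
for k in range(2, X.shape[1] + 1):                            # number of top-ranked features
    cols = ranking[:k]
    for name, model in candidates.items():
        score = cross_val_score(model, X[:, cols], y, cv=5, scoring="r2").mean()
        if score > best[1]:
            best = (name, score, cols)
print(f"selected model: {best[0]}, R2={best[1]:.3f}, features={list(best[2])}")
```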
---
paper_title: Link Scheduling Method for Underwater Acoustic Sensor Networks Based on Correlation Matrix
paper_content:
The unique characteristics of the underwater acoustic communication channel force the development of new protocols for underwater acoustic sensor networks. In particular, the much slower propagation speed of acoustic signals underwater must be taken into consideration. This paper constructs a link scheduling model at the medium access control (MAC) layer for multihop underwater acoustic sensor networks. The proposed model employs a correlation matrix to describe the conflict relationships among links and uses propagation delays to generate a conflict matrix for collision detection. In order to minimize the frame length, a power control strategy aimed at reducing the effect of link interference is introduced. A heuristic algorithm is then presented to solve the conflict-free scheduling problem based on the model. Simulation results show that the link scheduling method and the power control strategy can effectively improve network performance with respect to throughput and end-to-end delay.
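To make the conflict-matrix idea concrete, the sketch below greedily assigns each link the earliest slot that does not clash with already-scheduled conflicting links. It is a generic greedy colouring over an example conflict matrix, not the paper's specific heuristic (which also folds in propagation delays and power control).

```python
# Greedy slot assignment over a link conflict matrix (1 = links cannot share a slot).
import numpy as np

def schedule_links(conflict):
    """Return a slot index per link; frame length = max slot + 1."""
    n = conflict.shape[0]
    slots = [-1] * n
    for link in range(n):                       # could instead order links by degree
        used = {slots[j] for j in range(n) if conflict[link, j] and slots[j] >= 0}
        s = 0
        while s in used:
            s += 1
        slots[link] = s
    return slots

conflict = np.array([[0, 1, 1, 0, 0],           # example matrix for 5 acoustic links
                     [1, 0, 1, 1, 0],
                     [1, 1, 0, 0, 1],
                     [0, 1, 0, 0, 1],
                     [0, 0, 1, 1, 0]])
print(schedule_links(conflict))                 # e.g. [0, 1, 2, 0, 1]
```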
---
paper_title: Dynamic transmission power control based on exact sea surface movement modeling in underwater acoustic sensor networks
paper_content:
Prediction of sea surface movement can be an important tool for the estimation of the time-variant acoustic channel, because signal attenuation caused by reflection accounts for a large proportion of the path loss. Although a number of studies have proposed resource allocation schemes based on channel modeling, they did not consider the reflection loss and the time-variant characteristics. This paper suggests a transmission power control scheme based on prediction of the time-variant channel using the RMS (root mean square) wave height, for low power consumption and stable throughput. The proposed scheme adopts a transfer function that includes the reflection coefficient, overlooked in other papers, using the Kirchhoff approximation. In addition, it defines the transmission power needed to guarantee a pre-specified SNR (signal-to-noise ratio) threshold using the transfer function. The BELLHOP and WAFO simulators were utilized to build a simulation environment similar to the actual ocean. The simulation results show that the proposed method is practical, by considering the impact of reflection on the power control, and reduces energy consumption by 32.79% compared with existing methods that do not use adaptive power control based on channel condition.
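A back-of-the-envelope sketch of the power-control rule, in dB terms, is given below: the transmit power is the smallest value that still meets a pre-specified SNR threshold given the current channel loss. The loss model here is generic (Thorp absorption plus practical spreading plus a reflection-loss placeholder); the paper instead derives the reflection term from the RMS wave height via the Kirchhoff approximation, which is only represented by the `reflection_loss_db` argument.

```python
# Minimal SNR-threshold power control in dB; loss model and noise level are assumptions.
import math

def thorp_absorption_db_per_km(f_khz):
    f2 = f_khz ** 2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

def required_tx_power_db(distance_km, f_khz, noise_db, snr_threshold_db,
                         reflection_loss_db=0.0, spreading_k=1.5):
    spreading = 10 * spreading_k * math.log10(distance_km * 1000.0)   # dB re 1 m
    absorption = thorp_absorption_db_per_km(f_khz) * distance_km
    path_loss = spreading + absorption + reflection_loss_db
    return snr_threshold_db + noise_db + path_loss

# Rougher sea surface (larger RMS wave height) -> larger reflection loss -> more power.
for refl in (0.0, 3.0, 6.0):
    p = required_tx_power_db(distance_km=2.0, f_khz=20.0, noise_db=50.0,
                             snr_threshold_db=10.0, reflection_loss_db=refl)
    print(f"reflection loss {refl:.0f} dB -> required source level {p:.1f} dB")
```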
---
paper_title: An overview of the internet of underwater things
paper_content:
Approximately 71% of the Earth's surface is covered by ocean, a continuous body of water that is customarily divided into several principal oceans and smaller seas. Ocean temperatures determine climate and wind patterns that affect life on land. Freshwater in lakes and rivers covers less than 1%, and its contamination seriously damages ecosystems. The Internet of Underwater Things (IoUT) is defined as a world-wide network of smart interconnected underwater objects that enables the monitoring of vast unexplored water areas. The purpose of this paper is to analyze how to benefit from the IoUT to learn from, exploit and preserve natural underwater resources. In this paper, the IoUT is introduced and its main differences with respect to the Internet of Things (IoT) are outlined. Furthermore, the proposed IoUT architecture is described. Important application scenarios that illustrate the interaction of IoUT components are proposed, and critical challenges are identified and addressed.
---
paper_title: Underwater Sensor Network Applications: A Comprehensive Survey
paper_content:
There is no escaping the fact that a huge amount of unexploited resources lies underwater, in the oceans that cover almost 70% of the Earth. Yet the aquatic world has largely been untouched by the recent advances in wireless sensor networks (WSNs) and their pervasive penetration into modern-day research and industrial development. The current pace of research in the area of underwater sensor networks (UWSNs) is slow due to the difficulties in transferring state-of-the-art WSNs to their underwater equivalent. Most underwater deployments rely on acoustics to enable communication, combined with special sensors capable of withstanding the harsh environment of the oceans. However, sensing and subsequent transmission vary across different subsea environments; for example, deep sea exploration requires an altogether different approach to communication compared with shallow water. This paper focuses on comprehensively gathering the most recent developments in UWSN applications and their deployments. We have classified the underwater applications into five main classes, namely monitoring, disaster, military, navigation and sports, to cover the large spectrum of UWSN. The applications are further divided into relevant subclasses. We also show the challenges and opportunities faced by recent deployments of UWSN.
---
paper_title: Underwater Wireless Sensor Networks: A New Challenge for Topology Control–Based Systems
paper_content:
Underwater wireless sensor networks (UWSNs) will pave the way for a new era of underwater monitoring and actuation applications. The envisioned landscape of UWSN applications will help us learn more about our oceans, as well as about what lies beneath them. They are expected to change the current reality where no more than 5% of the volume of the oceans has been observed by humans. However, to enable large deployments of UWSNs, networking solutions toward efficient and reliable underwater data collection need to be investigated and proposed. In this context, the use of topology control algorithms for a suitable, autonomous, and on-the-fly organization of the UWSN topology might mitigate the undesired effects of underwater wireless communications and consequently improve the performance of networking services and protocols designed for UWSNs. This article presents and discusses the intrinsic properties, potentials, and current research challenges of topology control in underwater sensor networks. We propose to classify topology control algorithms based on the principal methodology used to change the network topology. They can be categorized in three major groups: power control, wireless interface mode management, and mobility assisted–based techniques. Using the proposed classification, we survey the current state of the art and present an in-depth discussion of topology control solutions designed for UWSNs.
---
paper_title: A survey on Underwater Wireless Sensor Networks and applications
paper_content:
In this article, a survey of the different technologies in the area of Underwater Wireless Sensor Networks (UWSN) is presented. The characteristics of these networks differ from those of terrestrial networks, and their architecture is vulnerable to various issues such as large propagation delays, mobility of floating sensor nodes, limited link capacity and multiple message receptions due to reflections from the sea bottom and sea surface. This article presents an overview of the underlying technologies in UWSN and focuses on the most important research approaches towards UWSN architecture, routing, MAC and localization protocols, energy consumption and security, while highlighting their most illustrative real-life applications.
---
paper_title: An energy-efficient asynchronous wake-up scheme for underwater acoustic sensor networks
paper_content:
In addition to the requirements of the terrestrial sensor network where performance metrics such as throughput and packet delivery delay are often emphasized, energy efficiency becomes an even more significant and challenging issue in underwater acoustic sensor networks, especially when long-term deployment is required. In this paper, we tackle the problem of energy conservation in underwater acoustic sensor networks for long-term marine monitoring applications. We propose an asynchronous wake-up scheme based on combinatorial designs to minimize the working duty cycle of sensor nodes. We prove that network connectivity can be properly maintained using such a design even with a reduced duty cycle. We study the utilization ratio of the sink node and the scalability of the network using multiple sink nodes. Simulation results show that the proposed asynchronous wake-up scheme can effectively reduce the energy consumption for idle listening and can outperform other cyclic difference set-based wake-up schemes. More significantly, high performance is achieved without sacrificing network connectivity. Copyright © 2015John Wiley & Sons, Ltd.
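The combinatorial-design idea can be illustrated with a tiny example based on a (7,3,1) cyclic difference set: each node stays awake only in the slots of the set (rotated by its own clock offset), yet any two rotations are guaranteed to overlap in at least one slot per frame, so neighbours can always rendezvous without clock agreement. This is an illustrative toy, not necessarily the exact design used in the paper.

```python
# Toy wake-up schedule built from the (7,3,1) cyclic difference set {1, 2, 4} mod 7.
DIFFERENCE_SET = {1, 2, 4}
FRAME = 7

def awake_slots(rotation):
    return {(s + rotation) % FRAME for s in DIFFERENCE_SET}

# Any two nodes, whatever their clock offsets, share at least one awake slot per frame.
for r1 in range(FRAME):
    for r2 in range(FRAME):
        assert awake_slots(r1) & awake_slots(r2), "no common awake slot!"
print(f"guaranteed rendezvous with a duty cycle of {len(DIFFERENCE_SET)}/{FRAME}")
```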
---
paper_title: Optimizing Resurfacing Schedules to Maximize Value of Information in UWSNs
paper_content:
In Underwater Sensor Networks (UWSNs) with a high volume of data recording activity, a mobile sink such as an Autonomous Underwater Vehicle (AUV) can be used to offload data from the sensor nodes. When the AUV approaches an underwater node, it can use high-data-rate optical communication. However, the data is not considered delivered when it is transferred from the sensor node to the AUV, but only when the AUV has resurfaced and transferred the data to the sink. If the data is not time sensitive, it is sufficient for the AUV to resurface only once at the end of its data collection path. However, for time-sensitive data, it is more advantageous for the AUV to resurface multiple times during its path and upload the data collected since the previous resurfacing. Thus, a resurfacing schedule needs to complement the path planning process. In this paper we use the metric of Value of Information (VoI) as the optimization criterion to capture the time-sensitive nature of the collected information. We propose a genetic algorithm based approach to determine the resurfacing schedule for an AUV that is already provided with the sequence of nodes to be visited.
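A compact genetic-algorithm sketch of the resurfacing-schedule problem follows: given a fixed visit order and visit times, a binary genome decides after which visits the AUV resurfaces, and the fitness is the delivered VoI (assumed here to decay exponentially with delivery delay) minus a fixed resurfacing cost. All parameters (visit times, decay rate, surfacing cost, GA settings) are invented for illustration and are not taken from the paper.

```python
# Toy GA for a resurfacing schedule maximizing a decaying Value of Information.
import math
import random

random.seed(0)
VISIT_TIMES = [10, 25, 40, 70, 95, 120, 150]   # minutes at which nodes are visited
DECAY, SURFACE_COST = 0.02, 1.0

def fitness(genome):                            # genome[i] = 1 -> resurface after visit i
    schedule = list(genome)
    schedule[-1] = 1                            # always surface at the end of the path
    total, pending = 0.0, []
    for t, surface in zip(VISIT_TIMES, schedule):
        pending.append(t)                       # data collected at time t
        if surface:                             # deliver everything collected so far
            total += sum(math.exp(-DECAY * (t - tc)) for tc in pending) - SURFACE_COST
            pending = []
    return total

def evolve(pop_size=30, gens=100, n=len(VISIT_TIMES)):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < 0.2:           # bit-flip mutation
                i = random.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("resurface after visits:", [i for i, g in enumerate(best) if g or i == len(best) - 1],
      "VoI =", round(fitness(best), 2))
```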
---
paper_title: Modeling the sleep interval effects in duty-cycled underwater sensor networks
paper_content:
Lately, there has been growing interest in combining opportunistic routing (OR) and low duty-cycling methodologies in underwater sensor network (UWSN) applications. This combination improves data collection reliability and prolongs the network lifetime. When sensor nodes operate in a duty-cycling manner, the proper selection of the sleep interval and its on-the-fly adjustment must be addressed. Both tasks are challenging when opportunistic routing protocols are used at the network layer, as they must consider the presence of the next-hop candidate set. In this paper, we propose a modeling framework to evaluate the effects of the sleep interval on the energy consumption of duty-cycled UWSNs that employ an opportunistic routing protocol at the network layer. We investigate the sleep interval control problem in OR scenarios, formulating it as an optimization problem with the goal of extending the network lifetime. Our simulation results show that different fixed sleep-interval duty cycles do not affect the average energy consumption, whereas sleep interval control can prolong the UWSN lifetime.
---
paper_title: Balanced Energy Consumption Based Adaptive Routing for IoT Enabling Underwater WSNs
paper_content:
Applications of Internet of Things underwater wireless sensor networks, such as imaging underwater life, environmental monitoring, and supervising geological processes on the ocean floor, demand a prolonged network lifetime. However, these networks face many challenges, such as high path loss, limited available bandwidth, limited battery power, and high attenuation. For a longer network lifetime, balanced and efficient energy consumption are equally important. In this paper, we propose a new routing protocol, called balanced energy adaptive routing (BEAR), to prolong the lifetime of UWSNs. The proposed BEAR protocol operates in three phases: 1) an initialization phase; 2) a tree construction phase; and 3) a data transmission phase. In the initialization phase, all nodes share information related to their residual energy level and location. In the tree construction phase, BEAR exploits the location information for: a) selecting neighbour nodes and b) choosing the facilitating and successor nodes based on the value of a cost function. In order to balance the energy consumption among the successor and facilitator nodes, BEAR chooses nodes whose residual energy is higher than the average residual energy of the network. The results of our extensive simulations show that BEAR outperforms its counterpart protocols in terms of network lifetime.
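The forwarder-selection logic can be sketched as below (illustrative weights and a simplified cost function, not the exact BEAR formulation): among neighbours that are closer to the sink and whose residual energy is at least the network average, pick the one minimising a cost that trades remaining distance against residual energy.

```python
# Simplified BEAR-style next-hop choice; weights and cost shape are assumptions.
import math

def pick_next_hop(node, sink, neighbours, avg_energy, w_dist=0.7, w_energy=0.3):
    def d(a, b):
        return math.dist(a["pos"], b["pos"])
    candidates = [n for n in neighbours
                  if d(n, sink) < d(node, sink) and n["energy"] >= avg_energy]
    if not candidates:
        return None                                 # caller may relax the energy rule
    d_max = max(d(n, sink) for n in candidates)
    e_max = max(n["energy"] for n in candidates)
    return min(candidates, key=lambda n: w_dist * d(n, sink) / d_max
                                         - w_energy * n["energy"] / e_max)

sink = {"pos": (0.0, 0.0, 0.0)}
node = {"pos": (50.0, 0.0, -100.0), "energy": 4.0}
neighbours = [{"pos": (30.0, 5.0, -80.0), "energy": 5.0},
              {"pos": (35.0, -5.0, -60.0), "energy": 2.0},
              {"pos": (60.0, 0.0, -90.0), "energy": 6.0}]
avg = sum(n["energy"] for n in neighbours) / len(neighbours)
print(pick_next_hop(node, sink, neighbours, avg))
```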
---
paper_title: QERP: Quality-of-Service (QoS) Aware Evolutionary Routing Protocol for Underwater Wireless Sensor Networks
paper_content:
Quality-of-service (QoS) aware reliable data delivery is a challenging issue in underwater wireless sensor networks (UWSNs). This is due to impairments of the acoustic transmission caused by excessive noise, extremely long propagation delays, high bit error rate, low bandwidth capacity, multipath effects, and interference. To address these challenges, meet the commonly used UWSN performance indicators, and overcome the inefficiencies of the existing clustering-based routing schemes, a novel QoS aware evolutionary cluster based routing protocol (QERP) has been proposed for UWSN-based applications. The proposed protocol improves packet delivery ratio, and reduces average end-to-end delay and overall network energy consumption. Our comparative performance evaluations demonstrate that QERP is successful in achieving low network delay, high packet delivery ratio, and low energy consumption.
---
paper_title: Enhanced dynamic duty cycled multiple rendezvous multi-channel media access control (DMM-MAC) protocol for underwater sensor network based marine eco system
paper_content:
A Wireless Sensor Network (WSN) is a self-organized network in which sensor nodes collect data in a distributed environment. The consumption of energy, and hence the lifetime of the network, is a major issue in any WSN application, particularly in Underwater Sensor Networks (UWSNs). The concepts of multiple channels and duty cycling, which reduce transmission collisions and idle listening, can be used to conserve energy. Media access control protocols such as Multiple Rendezvous Multichannel MAC (MM-MAC) and Dynamic Duty Cycled Multiple Rendezvous Multichannel MAC (DMM-MAC) are used for handling larger volumes of data in UWSNs. Only one modem is used in DMM-MAC, and it operates in a more realistic multi-hop environment without using information about distances or propagation delays to neighbour nodes, which is a basic requirement for unattended wireless sensor network applications. In this paper, underwater sensor networks applied to monitoring the marine ecosystem are proposed. The logic used for DMM-MAC is enhanced and implemented, and the results are compared with Time Division Multiple Access (TDMA) medium access control, which is the basis for both MM-MAC and DMM-MAC. By employing the enhanced dynamic duty cycling, sensor nodes running DMM-MAC are able to handle larger volumes of sensor data effectively. The simulation results show that delay is reduced, throughput and packet delivery rate are improved, and the overhead for packet transmission is greatly reduced with DMM-MAC. DMM-MAC also provides better network performance than MM-MAC in an unattended environment.
---
paper_title: Future Internet: The Internet of Things Architecture, Possible Applications and Key Challenges
paper_content:
The Internet is continuously changing and evolving. The main communication form of the present Internet is human-to-human. The Internet of Things (IoT) can be considered the future evolution of the Internet, realizing machine-to-machine (M2M) learning. Thus, IoT provides connectivity for everyone and everything. The IoT embeds intelligence in Internet-connected objects to communicate, exchange information, take decisions, invoke actions and provide amazing services. This paper addresses the existing development trends, the generic architecture of IoT, its distinguishing features and possible future applications. The paper also forecasts the key challenges associated with the development of IoT. The IoT is gaining popularity in academia, industry and government, and has the potential to bring significant personal, professional and economic benefits.
---
paper_title: Intelligence in IoT-Based 5G Networks: Opportunities and Challenges
paper_content:
The requirement of high data rates, low latency, efficient use of spectrum, and coexistence of different network technologies are major considerations in Internet of Things (IoT)-based fifth generation (5G) networks. To achieve the above requirements, the incorporation of artificial intelligence (AI) is required to make efficient decisions based on the massive data generated by the large number of IoT devices. AI methods analyze the data to extract patterns and make sense of the data to prescribe action to the end devices. In this work, we first give an overview, discussing the challenges and relevant solutions of the 5G and IoT technologies including the IoT-based 5G enabling technologies. We discuss the need for AI in future IoT-based 5G networks from the perspective of Kipling's method. In addition, we review the intelligent use of spectrum through full duplex and cognitive radio technologies.
---
| Title: Internet of Things in Marine Environment Monitoring: A Review
Section 1: Introduction
Description 1: Write an introduction to the Internet of Things (IoT) and its relevance in marine environment monitoring and its evolution from wireless sensor networks (WSNs).
Section 2: Overview of IoT in Marine Environment Monitoring
Description 2: Provide an overview of IoT in marine environment monitoring, including various applications, common system architectures, typical sensing nodes and sensing parameters, and related wireless communication technologies.
Section 3: IoT-Based Marine Environment Monitoring Applications
Description 3: Describe different IoT-based marine environment monitoring application areas such as ocean sensing, water quality monitoring, coral reef monitoring, marine fish farm monitoring, and wave and current monitoring.
Section 4: Common IoT-Based System Architectures for Marine Environment Monitoring and Protection
Description 4: Discuss different IoT system architectures proposed for marine environment monitoring and protection, detailing layers like perception, transmission, data pre-processing, application, and business layers.
Section 5: A General Marine Environment Monitoring Sensor Node
Description 5: Explain the general architecture of a marine environment monitoring sensor node, including its main modules like sensing module, microcontroller, wireless transceiver module, and power supply module.
Section 6: Typical Sensors and Sensing Parameters
Description 6: Review typical sensors used for measuring physical and chemical parameters in marine environments and discuss the decision criteria for selecting sensors based on deployment requirements.
Section 7: Wireless Communication Technologies
Description 7: Outline different wireless communication technologies applicable to IoT-based marine environment monitoring, emphasizing their requirements in terms of reliability, energy efficiency, and transmission characteristics.
Section 8: A Review of Existing Marine Environment Monitoring Projects and Systems
Description 8: Provide a comprehensive review of various marine environment monitoring projects and systems developed over the past few decades, summarizing their features, applications, and deployment settings.
Section 9: Data Analysis
Description 9: Discuss the challenges and advancements in data analysis techniques for IoT-based marine environment monitoring, including methods like big data analytics, clustering algorithms, and machine learning models.
Section 10: Network Topology Control
Description 10: Review network topology control techniques specific to wireless sensor networks and their applicability to IoT-based marine environment monitoring, emphasizing their impact on network performance and lifecycle.
Section 11: New Communication Routing Protocols
Description 11: Highlight recent advancements in communication routing protocols designed for marine environment monitoring systems, focusing on their mechanisms and benefits.
Section 12: Energy Management
Description 12: Discuss energy management strategies and energy harvesting options for IoT devices in marine environments, emphasizing the importance of efficient energy utilization and renewable energy sources.
Section 13: Standardization
Description 13: Address the need for standardization in IoT devices, platforms, communication protocols, and data management for marine environment monitoring, outlining the challenges and importance of international cooperation.
Section 14: Marine Environment Protection
Description 14: Discuss the role of IoT and big data analytics in active marine environment protection measures, including real-time interventions and autonomous disaster response systems.
Section 15: Conclusions
Description 15: Summarize the key findings and insights from the review, highlighting the advancements, applications, and future challenges and opportunities in IoT-based marine environment monitoring and protection. |
Characterisation of Phosphate Accumulating Organisms and Techniques for Polyphosphate Detection: A Review | 18 | ---
paper_title: Phosphorus management in Europe in a changing world
paper_content:
Food production in Europe is dependent on imported phosphorus (P) fertilizers, but P use is inefficient and losses to the environment high. Here, we discuss possible solutions by changes in P management. We argue that not only the use of P fertilizers and P additives in feed could be reduced by fine-tuning fertilization and feeding to actual nutrient requirements, but also P from waste has to be completely recovered and recycled in order to close the P balance of Europe regionally and become less dependent on the availability of P-rock reserves. Finally, climate-smart P management measures are needed, to reduce the expected deterioration of surface water quality resulting from climate-change-induced P loss.
---
paper_title: The Relevance of Phosphorus and Iron Chemistry to the Recovery of Phosphorus from Wastewater: A Review
paper_content:
The addition of iron is a convenient way for removing phosphorus from wastewater, but this is often considered to limit phosphorus recovery. Struvite precipitation is currently used to recover phosphorus, and this approach has attracted much interest. However, it requires the use of enhanced biological phosphorus removal (EBPR). EBPR is not yet widely applied and the recovery potential is low. Other phosphorus recovery methods, including sludge application to agricultural land or recovering phosphorus from sludge ash, also have limitations. Energy-producing wastewater treatment plants increasingly rely on phosphorus removal using iron, but the problem (as in current processes) is the subsequent recovery of phosphorus from the iron. In contrast, phosphorus is efficiently mobilized from iron by natural processes in sediments and soils. Iron-phosphorus chemistry is diverse, and many parameters influence the binding and release of phosphorus, including redox conditions, pH, presence of organic substances, and particle morphology. We suggest that the current poor understanding of iron and phosphorus chemistry in wastewater systems is preventing processes being developed to recover phosphorus from iron-phosphorus rich wastes like municipal wastewater sludge. Parameters that affect phosphorus recovery are reviewed here, and methods are suggested for manipulating iron-phosphorus chemistry in wastewater treatment processes to allow phosphorus to be recovered.
---
paper_title: Optimizing the production of Polyphosphate from Acinetobacter towneri
paper_content:
Inorganic polyphosphates (PolyP) are linear polymers of a few to several hundred orthophosphate residues linked by energy-rich phosphoanhydride bonds. Four isolates were screened from a soil sample. By MALDI-TOF analysis, they were identified as Bacillus cereus, Acinetobacter towneri, B. megaterium and B. cereus. The production of PolyP in the four isolates was studied in phosphate uptake medium and sulfur-deficient medium at pH 7. These organisms showed significant production of PolyP after 22 h of incubation. PolyP was extracted from the cells using the alkaline lysis method. Among the isolates, Acinetobacter towneri was found to have a high accumulation of PolyP (24.57% w/w as P) in sulfur-deficient medium. Media optimization for sulfur deficiency was carried out using response surface methodology (RSM). It was shown that an increase in phosphate level in the presence of glucose, under sulfur-limiting conditions, enhanced phosphate accumulation by Acinetobacter towneri, and these conditions can be simulated for the effective removal of phosphate from wastewater sources.
---
paper_title: Phosphorus recovery from wastewater through microbial processes
paper_content:
Waste streams offer a compelling opportunity to recover phosphorus (P). 15–20% of world demand for phosphate rock could theoretically be satisfied by recovering phosphorus from domestic waste streams alone. For very dilute streams, including domestic wastewater, it is necessary to concentrate phosphorus in order to make recovery and reuse feasible. This review discusses enhanced biological phosphorus removal (EBPR) as a key technology to achieve this. EBPR relies on polyphosphate accumulating organisms (PAOs) to take up phosphorus from waste streams, so concentrating phosphorus in biomass. The P-rich biosolids can be either directly applied to land, or solubilized and phosphorus recovered as a mineral product. Direct application is effective, but the product is bulky and carries contaminant risks that need to be managed. Phosphorus release can be achieved using either thermochemical or biochemical methods, while recovery is generally by precipitation as struvite. We conclude that while EBPR technology is mature, the subsequent phosphorus release and recovery technologies need additional development.
---
paper_title: A method for screening polyphosphate-accumulating mutants which remove phosphate efficiently from synthetic wastewater
paper_content:
The biological process for phosphorus removal from wastewater is based on the use of bacteria capable of accumulating inorganic polyphosphate (polyP). We previously showed that a phoU mutation leads to polyP accumulation in Escherichia coli. The phoU mutant could be easily screened on agar plates containing 5-bromo-4-chloro-3-indolyl-phosphate (X-Pi) after N-methyl-N′-nitro-N-nitrosoguanidine (NTG) mutagenesis. Here, we demonstrate that this method is also useful for screening polyP-accumulating mutants of bacterial strains isolated from soil and activated sludge samples.
---
paper_title: Advances in enhanced biological phosphorus removal: from micro to macro scale
paper_content:
The enhanced biological phosphorus removal (EBPR) process has been implemented in many wastewater treatment plants worldwide. While the EBPR process is indeed capable of efficient phosphorus (P) removal performance, disturbances and prolonged periods of insufficient P removal have been observed at full-scale plants on numerous occasions under conditions that are seemingly favourable for EBPR. Recent studies in this field have utilised a wide range of approaches to address this problem, from studying the microorganisms that are primarily responsible for or detrimental to this process, to determining their biochemical pathways and developing mathematical models that facilitate better prediction of process performance. The overall goal of each of these studies is to obtain a more detailed insight into how the EBPR process works, where the best way of achieving this objective is through linking together the information obtained using these different approaches. This review paper critically assesses the recent advances that have been achieved in this field, particularly relating to the areas of EBPR microbiology, biochemistry, process operation and process modelling. Potential areas for future research are also proposed. Although previous research in this field has undoubtedly improved our level of understanding, it is clear that much remains to be learned about the process, as many unanswered questions still remain. One of the challenges appears to be the integration of the existing and growing scientific knowledge base with the observations and applications in practice, which this paper hopes to partially achieve.
---
paper_title: Inorganic polyphosphate: essential for growth and survival.
paper_content:
Inorganic polyphosphate (Poly P) is a polymer of tens to hundreds of phosphate residues linked by "high-energy" phosphoanhydride bonds as in ATP. Found in abundance in all cells in nature, it is unique in its likely role in the origin and survival of species. Here, we present extensive evidence that the remarkable properties of Poly P as a polyanion have made it suited for a crucial role in the emergence of cells on earth. Beyond that, Poly P has proved in a variety of ways to be essential for growth of cells, their responses to stresses and stringencies, and the virulence of pathogens. In this review, we pay particular attention to the enzyme, polyphosphate kinase 1 (Poly P kinase 1 or PPK1), responsible for Poly P synthesis and highly conserved in many bacterial species, including 20 or more of the major pathogens. Mutants lacking PPK1 are defective in motility, quorum sensing, biofilm formation, and virulence. Structural studies are cited that reveal the conserved ATP-binding site of PPK1 at atomic resolution and reveal that the site can be blocked with minute concentrations of designed inhibitors. Another widely conserved enzyme is PPK2, which has distinctive kinetic properties and is also implicated in the virulence of some pathogens. Thus, these enzymes, absent in yeast and animals, are novel attractive targets for treatment of many microbial diseases. Still another enzyme featured in this review is one discovered in Dictyostelium discoideum that becomes an actin-like fiber concurrent with the synthesis, step by step, of a Poly P chain made from ATP. The Poly P-actin fiber complex, localized in the cell, lengthens and recedes in response to metabolic signals. Homologs of DdPPK2 are found in pathogenic protozoa and in the alga Chlamydomonas. Beyond the immediate relevance of Poly P as a target for anti-infective drugs, a large variety of cellular operations that rely on Poly P will be considered.
---
paper_title: Isolation and Phylogenetic Analysis of Polyphosphate Accumulating Organisms in Water and Sludge of Intensive Catfish Ponds in the Mekong Delta, Vietnam
paper_content:
Polyphosphate accumulating organisms were isolated from water and sludge samples of intensive catfish ponds in the Mekong Delta, Vietnam. Estimates of intracellular polyphosphate concentration for each monoculture indicated that the intracellular polyphosphate content varied from 2 mg/l to 148.1 mg/l after 6 days of incubation in the medium. Of 191 isolates, twenty-one took up and stored intracellular phosphate at levels from 19.6 to 148.1 mg/l. They were shaped like rods, short rods or cocci, and a few were slightly curved or straight rods. The majority are gram-positive (76.2%) and the remainder are gram-negative. The partial 16S rRNA genes of these isolates were sequenced and compared with bacterial 16S rRNA genes in GenBank using the BlastN program. A phylogenetic tree constructed on the basis of the 16S rRNA gene sequences demonstrated that the population of high phosphate accumulating bacteria obtained from catfish pond samples was affiliated with four major bacterial lineages. The twenty-one isolates fell into four classes: Bacilli, Actinobacteria, Beta-proteobacteria and Gamma-proteobacteria. The majority of the strains showed excess phosphate accumulation. Strains related to Bacillus sp. were the dominant group, constituting up to 52.4% of all identified isolates, but the highest phosphate accumulating bacteria were Burkholderia vietnamiensis TVT003L within the class Beta-proteobacteria, Acinetobacter radioresistens TGT013L within Gamma-proteobacteria and Arthrobacter protophomiae VLT002L within the class Actinobacteria. Loeffler's methylene blue staining and electron microscopy examination confirmed that the bacteria stored polyphosphate granules intracellularly.
---
paper_title: Methods for Detection and Quantification of Polyphosphate and Polyphosphate Accumulating Microorganisms in Aquatic Sediments
paper_content:
It has been speculated that the microbial P pool is highly variable in the uppermost layer of various aquatic sediments, especially when an excessive P accumulation in form of polyphosphate (Poly-P) occurs. Poly-P storage is a universal feature of many different organisms and has been technically optimised in wastewater treatment plants (WWTP) with enhanced biological phosphorus removal (EBPR). In the recent past, new insights into mechanisms of P elimination in WWTP almost exclusively depended on the development and application of novel methods like 31P-NMR spectroscopy and molecular methods for identifying Poly-P accumulating microorganisms (PAO). The aim of the present review is to compile current methods potentially available for detection and quantification of Poly-P in sediments and to complement it with yet unpublished results to validate their application in natural sediments. The most powerful tool for reliable Poly-P quantification in sediments is the liquid 31P-NMR technique which has been successfully used for Poly-P measurements in a variety of aquatic sediments. But the microorganisms as well as mechanisms involved in Poly-P storage and cycling are largely unknown. Therefore, we also intend to stimulate future studies focusing on these encouraging topics in sediment research via the implementation of novel methods.
---
paper_title: Methods for detection and visualization of intracellular polymers stored by polyphosphate-accumulating microorganisms.
paper_content:
Polyphosphate-accumulating microorganisms (PAOs) are important in enhanced biological phosphorus (P) removal. Considerable effort has been devoted to understanding the biochemical nature of enhanced biological phosphorus removal (EBPR), and it has been shown that intracellular polymer storage plays an important role in the metabolism of PAOs. The storage capacity of PAOs gives them a competitive advantage over other microorganisms present that are not able to accumulate internal reserves. Intracellular polymers stored by PAOs include polyphosphate (poly-P), polyhydroxyalkanoates (PHAs) and glycogen. Staining procedures for qualitative visualization of polymers by optical microscopy, and combinations of these procedures with molecular tools for in situ identification, are described here. The strengths and weaknesses of widely used polymer quantification methods that require destruction of samples are also discussed. Finally, the potential of in vivo nuclear magnetic resonance (NMR) spectroscopy for on-line measurement of intracellular reserves is reported.
---
paper_title: Advances in techniques for phosphorus analysis in biological sources.
paper_content:
In general, conventional P analysis methods suffer not only from the fastidious extraction and pre-treatment procedures required but also from generally low specificity and poor resolution regarding P composition and its temporal and spatial dynamics. More powerful yet feasible P analysis tools are in demand to help elucidate the biochemical nature, roles and dynamics of various phosphorus-containing molecules in vitro and in vivo. Recent advances in analytical chemistry, especially in molecular and atomic spectrometry such as NMR, Raman and X-ray techniques, have enabled unique P analysis capabilities relevant to submicron-scale biochemical processes in individual cells and in natural samples, without introducing overly complex and invasive pretreatment steps. Great potential remains to be explored in wider, more combined and integrated applications of these techniques to allow new possibilities and more powerful P analysis in biological systems. This review provides a comprehensive summary of the available methods and recent developments in analytical techniques and their applications for characterization and quantification of various forms of phosphorus, particularly polyphosphate, in different biological sources.
---
paper_title: Identification of functionally relevant populations in enhanced biological phosphorus removal processes based on intracellular polymers profiles and insights into the metabolic diversity and heterogeneity.
paper_content:
This study proposed and demonstrated the application of a new Raman microscopy-based method for metabolic state-based identification and quantification of functionally relevant populations, namely polyphosphate accumulating organisms (PAOs) and glycogen accumulating organisms (GAOs), in enhanced biological phosphorus removal (EBPR) system via simultaneous detection of multiple intracellular polymers including polyphosphate (polyP), glycogen, and polyhydroxybutyrate (PHB). The unique Raman spectrum of different combinations of intracellular polymers within a cell at a given stage of the EBPR cycle allowed for its identification as PAO, GAO, or neither. The abundance of total PAOs and GAOs determined by Raman method were consistent with those obtained with polyP staining and fluorescence in situ hybridization (FISH). Different combinations and quantities of intracellular polymer inclusions observed in single cells revealed the distribution of different sub-PAOs groups among the total PAO populations, which exhibit phenotypic and metabolic heterogeneity and diversity. These results also provided evidence for the hypothesis that different PAOs may employ different extents of combination of glycolysis and TCA cycle pathways for anaerobic reducing power and energy generation and it is possible that some PAOs may rely on TCA cycle solely without glycolysis. Sum of cellular level quantification of the internal polymers associated with different population groups showed differentiated and distributed trends of glycogen and PHB level between PAOs and GAOs, which could not be elucidated before with conventional bulk measurements of EBPR mixed cultures.
---
paper_title: Dominant and novel clades of Candidatus Accumulibacter phosphatis in 18 globally distributed full-scale wastewater treatment plants
paper_content:
Here we employed quantitative real-time PCR (qPCR) assays for polyphosphate kinase 1 (ppk1) and 16S rRNA genes to assess the relative abundances of dominant clades of Candidatus Accumulibacter phosphatis (referred to as Accumulibacter) in 18 globally distributed full-scale wastewater treatment plants (WWTPs) from six countries. Accumulibacter were not only detected in the 6 WWTPs performing biological phosphorus removal, but also inhabited the other 11 WWTPs employing conventional activated sludge (AS), with abundances ranging from 0.02% to 7.0%. Among the AS samples, clades IIC and IID were found to be dominant among the five Accumulibacter clades. The relative abundance of each clade in the Accumulibacter lineage significantly correlated (p < 0.05) with the influent total phosphorus and chemical oxygen demand rather than with geographical factors (e.g. latitude), which suggests that local wastewater characteristics and WWTP configurations, rather than geographical location, determine the proliferation of Accumulibacter clades in full-scale WWTPs. Moreover, two novel Accumulibacter clades (IIH and II-I), which had not been previously detected, were discovered in two enhanced biological phosphorus removal (EBPR) WWTPs. The results deepen our understanding of Accumulibacter diversity in environmental samples.
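As a worked illustration of how clade abundances are typically derived from such assays (not the study's exact calculation), the sketch below converts hypothetical Ct values to gene copies via standard curves and expresses clade-specific ppk1 copies relative to 16S rRNA gene copies; the slopes, intercepts, Ct values and 16S copy-number correction are all placeholder assumptions.

```python
# Hypothetical ppk1/16S relative-abundance calculation from qPCR standard curves.
def copies_from_ct(ct, slope, intercept):
    """Standard curve: Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

clade_ct = {"IIC": 24.1, "IID": 25.3, "IA": 29.8}    # placeholder clade-specific ppk1 Ct values
total_16s_ct = 16.5

ppk1 = {c: copies_from_ct(ct, slope=-3.4, intercept=38.0) for c, ct in clade_ct.items()}
total_16s = copies_from_ct(total_16s_ct, slope=-3.3, intercept=37.0)

for clade, copies in ppk1.items():
    # ppk1 is assumed single-copy; 16S copies divided by an assumed average of ~2 per genome.
    rel = copies / (total_16s / 2.0) * 100
    print(f"clade {clade}: {rel:.2f}% of the community")
```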
---
paper_title: Phosphorus accumulation by bacteria isolated from a continuous-flow two-sludge system.
paper_content:
In this article, polyphosphate-accumulating organisms (PAOs) from a lab-scale continuous-flow two-sludge system were isolated and identified, and the phosphorus accumulation characteristics of the isolates under anoxic and aerobic conditions were investigated. Two kinds of PAOs were found in the anoxic zones of the two-sludge system: one utilized only oxygen as electron acceptor, and the other utilized either nitrate or oxygen as electron acceptor. Of the eight isolates, five were capable of utilizing both nitrate and oxygen as electron acceptors to take up phosphorus to some extent. Three of these five isolates showed good phosphorus accumulation capacities under both anoxic and aerobic conditions, two identified as Alcaligenes and one as Pseudomonas. Streptococcus showed weak anoxic phosphorus accumulation because of its weak denitrification capacity, but good phosphorus accumulation capacity under aerobic conditions. One isolate identified as Enterobacteriaceae proved to be a special type of PAO that could take up only small amounts of phosphorus under anoxic conditions, although its denitrification capacity and aerobic phosphorus accumulation capacity were excellent.
---
paper_title: Extraction and detection methods for polyphosphate storage in autotrophic planktonic organisms
paper_content:
Different extraction procedures were employed to characterise the polyphosphate granules in autotrophic planktonic organisms, the green microalga Chlorella vulgaris and the cyanobacterium Synechocystis sp. strain PCC 6803. The effectiveness of these methods was assessed using epifluorescence microscopic analysis of DAPI-stained specimens as well as by electron spectroscopic imaging. The results clearly indicate that NaOH and hot water treatment followed by filtration of the extracts are suitable to obtain a cell-free suspension of intact polyphosphate granules without hydrolysing the polymers. The methods described are useful to gain physiological information on the phosphorus status of autotrophic planktonic organisms.
---
paper_title: Inorganic polyphosphate in industry, agriculture and medicine: Modern state and outlook
paper_content:
Inorganic polyphosphates (PolyP) are linear polymers containing a few to several hundred orthophosphate residues linked by energy-rich phosphoanhydride bonds. PolyPs are widely used as reagents in water treatment, fertilizers, flame retardants and food additives due to their unique properties, inexpensiveness, nontoxicity and biodegradability. The practice of enhanced biological phosphorus removal (EBPR), based on PolyP accumulation by sludge bacteria, is an accepted and low-cost strategy for controlling eutrophication. PolyPs are present in the cells of all living organisms, from bacteria to mammals. They perform numerous functions in the cells: phosphate and energy storage, sequestration and storage of cations, formation of membrane channels, cell envelope formation and function, gene activity control, regulation of enzyme activities, stress response and stationary phase adaptation. PolyPs participate in bone tissue development and in the blood coagulation cascade and are promising candidates in therapy for bone and blood diseases. They may also have application in creating novel bone substitute materials, serving as carriers for prolonged action drugs, and acting as a phosphodonor in enzymatic synthesis of biologically active compounds. The importance of polyphosphate kinases in the virulence of pathogens forms a basis for the development of new antibiotics. Further study of PolyP biochemistry and cell biology can be applied to medicine, environmental protection and agriculture.
---
paper_title: A high throughput method and culture medium for rapid screening of phosphate accumulating microorganisms
paper_content:
A novel PA Medium (PAM) for efficient screening of phosphate-accumulating organisms (PAOs) was developed taking Serratia marcescens NBRI1213 as model organism. The defined National Botanical Research Institute's growth medium (NBRI) supplemented with 0.1% maltose, designed for quantitative estimation of phosphate accumulation, was designated as PAM. Our work suggests the use of PAM for efficient qualitative screening and as a microbiological medium for preferential selection of PAOs on Petri plates. For qualitative screening of PAOs, Toluidine blue-O dye (TBO) was supplemented in PAM, designated as PAM-TBO. Qualitative analysis of phosphate accumulated by various groups correlated well with grouping based upon quantitative analysis of PAOs, effect of carbon, nitrogen, salts, and phosphate accumulation-defective transposon mutants. To significantly increase sample throughput, the efficiency of screening PAOs was further enhanced by adaptation of the PAM-TBO assay to a microtiter plate-based method. It is envisaged that the use of this medium will be beneficial for quick screening of PAOs from the environment.
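A microtiter-plate screen of this kind ultimately reduces to calling hits against control wells. The sketch below is a hypothetical illustration of that step only: the well readings, the choice of control wells, the direction of the signal change and the z-score cutoff are all assumptions, not details taken from the study.

```python
# Hypothetical hit-calling step for a microtiter-plate screen: wells whose
# signal deviates strongly (here, modelled as depletion) from control wells are
# flagged as candidate PAOs. Values, control wells and cutoff are invented.

import statistics

plate = {  # well -> assay readout (arbitrary units)
    "A1": 0.92, "A2": 0.88, "A3": 0.35, "A4": 0.90,
    "B1": 0.41, "B2": 0.87, "B3": 0.89, "B4": 0.30,
}
controls = [plate[w] for w in ("A1", "A2", "A4")]
mu, sd = statistics.mean(controls), statistics.stdev(controls)

hits = sorted(w for w, v in plate.items() if (v - mu) / sd < -3.0)
print("candidate PAO wells:", hits)  # -> ['A3', 'B1', 'B4']
```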
---
paper_title: Differentiation of polyphosphate metabolism between the extra- and intraradical hyphae of arbuscular mycorrhizal fungi
paper_content:
Regulation of polyphosphate metabolism is reported in arbuscular mycorrhizal symbiosis. Marigold (Tagetes patula) plants inoculated with Glomus coronatum or Glomus etunicatum were grown in mesh bags. Exopolyphosphatase activity in extra- and intraradical hyphae was measured and characterized. The hyphae were stained with Neutral red to show acidic vacuoles in which polyphosphate synthesis might occur. Exopolyphosphatase activity was differently expressed between the extra- and intraradical hyphae, as indicated by different pH optima; high activity was observed at pH 5.0 in the intraradical hyphae of both fungal species. Km values were lower at neutral pH with long-chain polyphosphate, whereas acidic activity showed lower Km with short-chain polyphosphate. Both extra- and intraradical hyphae had acidic vacuoles. Polyphosphate occurred in the hyphae of the high-P, but not the low-P treatment. By contrast, exopolyphosphatase activity and vacuolar acidity were relatively constant irrespective of polyphosphate status. The fungi have at least two different exopolyphosphatase-type enzymes which are differently expressed between extra- and intraradical hyphae; polyphosphate accumulation might be a dynamic balance between synthesis and hydrolysis.
---
paper_title: Methods for Detection and Quantification of Polyphosphate and Polyphosphate Accumulating Microorganisms in Aquatic Sediments
paper_content:
It has been speculated that the microbial P pool is highly variable in the uppermost layer of various aquatic sediments, especially when an excessive P accumulation in form of polyphosphate (Poly-P) occurs. Poly-P storage is a universal feature of many different organisms and has been technically optimised in wastewater treatment plants (WWTP) with enhanced biological phosphorus removal (EBPR). In the recent past, new insights into mechanisms of P elimination in WWTP almost exclusively depended on the development and application of novel methods like 31P-NMR spectroscopy and molecular methods for identifying Poly-P accumulating microorganisms (PAO). The aim of the present review is to compile current methods potentially available for detection and quantification of Poly-P in sediments and to complement it with yet unpublished results to validate their application in natural sediments. The most powerful tool for reliable Poly-P quantification in sediments is the liquid 31P-NMR technique which has been successfully used for Poly-P measurements in a variety of aquatic sediments. But the microorganisms as well as mechanisms involved in Poly-P storage and cycling are largely unknown. Therefore, we also intend to stimulate future studies focusing on these encouraging topics in sediment research via the implementation of novel methods.
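For orientation, the quantification step of a liquid 31P-NMR measurement boils down to apportioning total P among integrated peak areas. The sketch below illustrates that arithmetic with invented areas and approximate chemical-shift windows; it is not a reconstruction of any specific study's data processing.

```python
# Invented numbers; chemical-shift windows are only approximate. Each P pool is
# taken as its integrated area divided by the total integrated area, scaled by
# an independently measured total P content of the extract.

peak_areas = {
    "orthophosphate (~6 ppm)": 54.0,
    "monoester P (5 to 3 ppm)": 21.0,
    "pyrophosphate (~-4 ppm)": 6.0,
    "polyP end groups (~-5 ppm)": 4.0,
    "polyP internal groups (~-20 ppm)": 15.0,
}
total_area = sum(peak_areas.values())
total_P_mg_per_g = 1.8  # total P of the extract, e.g. from ICP-OES

for pool, area in peak_areas.items():
    frac = area / total_area
    print(f"{pool:35s} {100 * frac:5.1f}%  ({frac * total_P_mg_per_g:.2f} mg P/g)")

# Rough mean polyP chain length, assuming two end-group P atoms per chain:
internal = peak_areas["polyP internal groups (~-20 ppm)"]
end = peak_areas["polyP end groups (~-5 ppm)"]
print("approximate chain length:", round(2 * (internal + end) / end), "P residues")
```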
---
paper_title: Methods for detection and visualization of intracellular polymers stored by polyphosphate-accumulating microorganisms.
paper_content:
Polyphosphate-accumulating microorganisms (PAOs) are important in enhanced biological phosphorus (P) removal. Considerable effort has been devoted to understanding the biochemical nature of enhanced biological phosphorus removal (EBPR) and it has been shown that intracellular polymer storage plays an important role in PAO's metabolism. The storage capacity of PAOs gives them a competitive advantage over other microorganisms present that are not able to accumulate internal reserves. Intracellular polymers stored by PAOs include polyphosphate (poly-P), polyhydroxyalkanoates (PHAs) and glycogen. Staining procedures for qualitative visualization of polymers by optical microscopy and combinations of these procedures with molecular tools for in situ identification are described here. The strengths and weaknesses of widely used polymer quantification methods that require destruction of samples, are also discussed. Finally, the potential of in vivo nuclear magnetic resonance (NMR) spectroscopy for on-line measurement of intracellular reserves is reported.
---
paper_title: Dynamics of polyphosphate-accumulating bacteria in wastewater treatment plant microbial communities detected via DAPI (4',6'-diamidino-2-phenylindole) and tetracycline labeling.
paper_content:
Wastewater treatment plants with enhanced biological phosphorus removal represent a state-of-the-art technology. Nevertheless, the process of phosphate removal is prone to occasional failure. One reason is the lack of knowledge about the structure and function of the bacterial communities involved. Most of the bacteria are still not cultivable, and their functions during the wastewater treatment process are therefore unknown or subject of speculation. Here, flow cytometry was used to identify bacteria capable of polyphosphate accumulation within highly diverse communities. A novel fluorescent staining technique for the quantitative detection of polyphosphate granules on the cellular level was developed. It uses the bright green fluorescence of the antibiotic tetracycline when it complexes the divalent cations acting as a countercharge in polyphosphate granules. The dynamics of cellular DNA contents and cell sizes as growth indicators were determined in parallel to detect the most active polyphosphate-accumulating individuals/subcommunities and to determine their phylogenetic affiliation upon cell sorting. Phylotypes known as polyphosphate-accumulating organisms, such as a "Candidatus Accumulibacter"-like phylotype, were found, as well as members of the genera Pseudomonas and Tetrasphaera. The new method allows fast and convenient monitoring of the growth and polyphosphate accumulation dynamics of not-yet-cultivated bacteria in wastewater bacterial communities.
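Conceptually, the cytometric detection reduces to a two-parameter gate on DNA (DAPI) and polyP (tetracycline) fluorescence. The following sketch illustrates such a gate on synthetic list-mode data; channel names, distributions and gate boundaries are assumptions for illustration only.

```python
# Synthetic list-mode data and arbitrary gate boundaries, for illustration only.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
dapi = rng.lognormal(mean=2.0, sigma=0.6, size=n)   # DNA-content channel
tc = rng.lognormal(mean=1.0, sigma=0.9, size=n)     # tetracycline (polyP) channel

dna_gate = dapi > np.percentile(dapi, 20)           # drop debris / low-DNA events
polyp_gate = tc > np.percentile(tc, 90)             # bright polyP-granule signal

polyp_cells = dna_gate & polyp_gate
print(f"polyP-positive fraction: {polyp_cells.mean():.1%} of all events")
# Sorting the events inside this gate is what enables the downstream
# identification of the polyP accumulators.
```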
---
paper_title: A novel method for determination of inorganic polyphosphates using the fluorescent dye fura-2.
paper_content:
A method for determining inorganic polyphosphate, which is based on the Mn2+-induced quenching of the fluorescence of the calcium indicator fura-2, is described. The effect of Mn2+ ions on fura-2 fluorescence is gradually abolished in the presence of increasing concentrations of polyphosphate; this allows the quantification both of synthetic polyphosphates and of the naturally occurring polymer isolated from tissues or cells. The described method has some advantages compared to conventional procedures for detection of polyphosphates based on the metachromatic effect on toluidine blue. It can be applied for the determination of pyrophosphate, tripolyphosphate and other short-chain polyphosphates not detectable by toluidine blue, and it can be used for measurement of both pyrophosphatase and exopolyphosphatase activity.
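The assay is essentially a standard-curve measurement: Mn2+ quenches fura-2, added polyP sequesters Mn2+, and the recovered fluorescence scales with polyP concentration. A minimal calibration sketch, with invented standard-curve values, might look as follows.

```python
# Invented standard-curve values; the point is only the calibrate-then-
# interpolate logic, not real assay parameters.

import numpy as np

std_polyP = np.array([0, 1, 2, 4, 8, 16])           # µM (as P) polyP standards
fluorescence = np.array([12, 18, 25, 37, 63, 112])  # recovered fura-2 signal (a.u.)

slope, intercept = np.polyfit(std_polyP, fluorescence, 1)  # linear calibration

def polyp_from_signal(f):
    """Interpolate a polyP concentration (µM as P) from a fluorescence reading."""
    return (f - intercept) / slope

unknown_signal = 48.0
print(f"estimated polyP: {polyp_from_signal(unknown_signal):.1f} µM (as P)")
```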
---
paper_title: Biodiversity of Polyphosphate Accumulating Bacteria in Eight WWTPs with Different Modes of Operation
paper_content:
Enhanced biological phosphorus removal (EBPR) from wastewater has been successfully used for more than three decades and is considered to be an environmentally friendly wastewater-treatment process. Biologically, this process is realized by incorporation of phosphate as polyphosphate (polyP) granules in activated sludge bacteria. Important groups of bacteria responsible for P removal have been identified, but the full microbial diversity involved in this process is still unknown. This paper reports on the microbial composition of activated sludge communities in eight wastewater-treatment plants (WWTPs) with different sizes and modes of operation. The polyphosphate accumulating organisms (PAOs) within this complex biocenosis were identified by fluorescent dye staining and classified by in situ hybridization techniques. Of the bacteria in the aerobic basin, 5–13% contained polyP granules. In addition, flow cytometry was used to quantify PAOs after tetracycline staining and to separate these cells. ...
---
paper_title: Fixation procedures for flow cytometric analysis of environmental bacteria
paper_content:
Analysis of environmental bacteria on the single cell level often requires fixation to store the cells and to keep them in a state as near life-like as possible. Fixation procedures should furthermore counteract the increase of autofluorescence, cell clogging, and distortion of surface characteristics. Additionally, they should meet the specific fixation demands of both aerobically and anaerobically grown bacteria. A fixation method was developed based on metal solutions in combination with sodium azide. The fixation efficiencies of aluminium, barium, bismuth, cobalt, molybdenum, nickel, and tungsten salts were evaluated by flow cytometric measurement of the DNA contents as a bacterial population/community stability marker. Statistical equivalence testing was employed to permit highly reliable flow cytometric pattern evaluation. Investigations were carried out with pure cultures representing environmentally important metabolic and respiratory pathways as controls and with activated sludge as an example of highly diverse bacterial communities. A mixture of 5 mM each of barium chloride and nickel chloride and 10% sodium azide was found to be a suitable fixative for all tested bacteria. The described method provided good sample stability for at least 9 days.
---
paper_title: Diversity of nitrite reductase genes in "Candidatus Accumulibacter phosphatis"-dominated cultures enriched by flow-cytometric sorting.
paper_content:
"Candidatus Accumulibacter phosphatis" is considered a polyphosphate-accumulating organism (PAO) though it has not been isolated yet. To reveal the denitrification ability of this organism, we first concentrated this organism by flow cytometric sorting following fluorescence in situ hybridization (FISH) using specific probes for this organism. The purity of the target cells was about 97% of total cell count in the sorted sample. The PCR amplification of the nitrite reductase genes (nirK and nirS) from unsorted and sorted cells was performed. Although nirK and nirS were amplified from unsorted cells, only nirS was detected from sorted cells, indicating that "Ca. Accumulibacter phosphatis" has nirS. Furthermore, nirS fragments were cloned from unsorted (Ba clone library) and sorted (Bd clone library) cells and classified by restriction fragment length polymorphism analysis. The most dominant clone in clone library Ba, which represented 62% of the total number of clones, was not found in clone library Bd. In contrast, the most dominant clone in clone library Bd, which represented 59% of the total number of clones, represented only 2% of the total number of clones in clone library Ba, indicating that this clone could be that of "Ca. Accumulibacter phosphatis." The sequence of this nirS clone exhibited less than 90% similarity to the sequences of known denitrifying bacteria in the database. The recovery of the nirS genes makes it likely that "Ca. Accumulibacter phosphatis" behaves as a denitrifying PAO capable of utilizing nitrite instead of oxygen as an electron acceptor for phosphorus uptake.
---
paper_title: Correlation of Community Dynamics and Process Parameters As a Tool for the Prediction of the Stability of Wastewater Treatment
paper_content:
Wastewater treatment often suffers from instabilities and the failure of specific functions such as biological phosphorus removal by polyphosphate accumulating organisms. Since most of the microorganisms involved in water clarification are unknown, it is challenging to operate the process while accounting for the permanently varying abiotic parameters and the complex composition and unrevealed metabolic capacity of a wastewater microbial community. Fulfilling the demands for water quality irrespective of substrate inflow conditions can pose severe problems given the limited management resources of municipal wastewater treatment plants. We used flow cytometric analyses of cellular DNA and polyphosphate to create patterns mirroring dynamics in community structure. These patterns were resolved into up to 15 subclusters, the presence and abundances of which correlated with abiotic data. The study used biostatistics to determine the kind and strength of the correlation. Samples investigated were obtained from a...
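As a hedged illustration of the kind of biostatistics described (not the study's actual analysis), a rank correlation between a subcluster's abundance time series and an abiotic parameter could be computed as follows; the time series are fabricated.

```python
# Fabricated time series; real analyses would span many cytometric subclusters
# and abiotic parameters, with multiple-testing correction.

from scipy.stats import spearmanr

subcluster_abundance = [4.1, 5.0, 6.2, 6.0, 7.5, 8.1, 7.9, 9.2]  # % of events
influent_phosphate = [2.0, 2.4, 3.1, 2.9, 3.8, 4.0, 3.9, 4.6]    # mg P/L

rho, p_value = spearmanr(subcluster_abundance, influent_phosphate)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# A strong, significant correlation would nominate the subcluster as a candidate
# indicator population for the measured parameter.
```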
---
paper_title: Dynamics of polyphosphate-accumulating bacteria in wastewater treatment plant microbial communities detected via DAPI (4',6'-diamidino-2-phenylindole) and tetracycline labeling.
paper_content:
Wastewater treatment plants with enhanced biological phosphorus removal represent a state-of-the-art technology. Nevertheless, the process of phosphate removal is prone to occasional failure. One reason is the lack of knowledge about the structure and function of the bacterial communities involved. Most of the bacteria are still not cultivable, and their functions during the wastewater treatment process are therefore unknown or subject of speculation. Here, flow cytometry was used to identify bacteria capable of polyphosphate accumulation within highly diverse communities. A novel fluorescent staining technique for the quantitative detection of polyphosphate granules on the cellular level was developed. It uses the bright green fluorescence of the antibiotic tetracycline when it complexes the divalent cations acting as a countercharge in polyphosphate granules. The dynamics of cellular DNA contents and cell sizes as growth indicators were determined in parallel to detect the most active polyphosphate-accumulating individuals/subcommunities and to determine their phylogenetic affiliation upon cell sorting. Phylotypes known as polyphosphate-accumulating organisms, such as a "Candidatus Accumulibacter"-like phylotype, were found, as well as members of the genera Pseudomonas and Tetrasphaera. The new method allows fast and convenient monitoring of the growth and polyphosphate accumulation dynamics of not-yet-cultivated bacteria in wastewater bacterial communities.
---
paper_title: Myeloma cells contain high levels of inorganic polyphosphate which is associated with nucleolar transcription
paper_content:
Background. In hematology there has recently been increasing interest in inorganic polyphosphate. This polymer accumulates in platelet granules and its functions include modulating various stages in blood coagulation, inducing angiogenesis, and provoking apoptosis of plasma cells. In this work, we evaluate the characteristics of intracellular polyphosphate in myeloma cell lines, in primary myeloma cells from patients, and in other human B-cell populations from healthy donors. Design and Methods. We have developed a novel method for detecting levels of polyphosphate in cell populations using flow cytometry. We have also studied polyphosphate localization and characteristics, using confocal microscopy and enzymatic analysis experiments. Results. We have found that myeloma plasma cells present higher levels of intracellular polyphosphate than normal plasma cells and other B-cell populations. Localization experiments indicated that polyphosphate accumulates at high levels in the nucleolus of myeloma cells. As the principal function of the nucleolus involves the transcription of ribosomal DNA genes, we have found changes in the cellular distribution of polyphosphate after the inhibition of nucleolar transcription. In addition, we have found that RNA polymerase I activity, responsible for transcription in the nucleolus, is also modulated by polyphosphate in a dose-dependent manner. Conclusions. Our results show an unusually high accumulation of polyphosphate in the nucleoli of myeloma cells and a functional relationship of this polymer with nucleolar transcription.
---
paper_title: In situ identification of polyphosphate- and polyhydroxyalkanoate-accumulating traits for microbial populations in a biological phosphorus removal process
paper_content:
Polyphosphate- and polyhydroxyalkanoate (PHA)-accumulating traits of predominant microorganisms in an efficient enhanced biological phosphorus removal (EBPR) process were investigated systematically using a suite of non-culture-dependent methods. Results of 16S rDNA clone library and fluorescence in situ hybridization (FISH) with rRNA-targeted, group-specific oligonucleotide probes indicated that the microbial community consisted mostly of the α- (9.5% of total cells), β- (41.3%) and γ- (6.8%) subclasses of the class Proteobacteria, Flexibacter–Cytophaga (4.5%) and the Gram-positive high G+C (HGC) group (17.9%). With individual phylogenetic groups or subgroups, members of Candidatus Accumulibacter phosphatis in the β-2 subclass, a novel HGC group closely related to Tetrasphaera spp., and a novel γ-proteobacterial group were the predominant populations. Furthermore, electron microscopy with energy-dispersive X-ray analysis was used to validate the staining specificity of 4′,6-diamidino-2-phenylindole (DAPI) for intracellular polyphosphate and revealed the composition of polyphosphate granules accumulated in predominant bacteria as mostly P, Ca and Na. As a result, DAPI and PHA staining procedures could be combined with FISH to identify directly the polyphosphate- and PHA-accumulating traits of different phylogenetic groups. Members of Accumulibacter phosphatis and the novel gamma-proteobacterial group were observed to accumulate both polyphosphate and PHA. In addition, one novel rod-shaped group, closely related to coccus-shaped Tetrasphaera, and one filamentous group resembling Candidatus Nostocoidia limicola in the HGC group were found to accumulate polyphosphate but not PHA. No cellular inclusions were detected in most members of the α-Proteobacteria and the Cytophaga–Flavobacterium group. The diversified functional traits observed suggested that different substrate metabolisms were used by predominant phylogenetic groups in EBPR processes.
---
paper_title: Characteristics of Denitrifying Phosphate Accumulating Organisms in an Anaerobic-Intermittently Aerobic Process
paper_content:
The anaerobic-intermittently aerobic (AIA) process was operated for enhanced biological phosphorus and nitrogen removal for over 2 years. A bench-scale AIA reactor operated in a continuous-flow anaerobic and alternating anoxic–aerobic mode was demonstrated to accomplish nitrification, denitrification, and phosphorus removal. In the anaerobic zone, the carbon source was taken up, polyhydroxyalkanoates (PHAs) were formed, and phosphorus release was accomplished. Simultaneous phosphate uptake and denitrification by denitrifying phosphate accumulating organisms (DePAOs) was observed even though the PHAs in cells were oxidized in the aerobic phase before the anoxic phase. Ammonium was oxidized to nitrate in the aerobic phase, and nitrate was reduced to nitrogen gas in the anoxic phase. As the nitrate concentration increased, the phosphate uptake rate and denitrification rate decreased, whereas the release of phosphate was accelerated with the addition of the external carbon source. The secondary phosphate...
---
paper_title: Anaerobic glyoxylate cycle activity during simultaneous utilization of glycogen and acetate in uncultured Accumulibacter enriched in enhanced biological phosphorus removal communities
paper_content:
Enhanced biological phosphorus removal (EBPR) communities protect waterways from nutrient pollution and enrich microorganisms capable of assimilating acetate as polyhydroxyalkanoate (PHA) under anaerobic conditions. Accumulibacter, an important uncultured polyphosphate-accumulating organism (PAO) enriched in EBPR, was investigated to determine the central metabolic pathways responsible for producing PHA. Acetate uptake and assimilation to PHA in Accumulibacter was confirmed using fluorescence in situ hybridization (FISH)-microautoradiography and post-FISH chemical staining. Assays performed with enrichments of Accumulibacter using an inhibitor of glyceraldehyde-3-phosphate dehydrogenase inferred anaerobic glycolysis activity. Significant decrease in anaerobic acetate uptake and PHA production rates were observed using inhibitors targeting enzymes within the glyoxylate cycle. Bioinformatic analysis confirmed the presence of genes unique to the glyoxylate cycle (isocitrate lyase and malate synthase) and gene expression analysis of isocitrate lyase demonstrated that the glyoxylate cycle is likely involved in PHA production. Reduced anaerobic acetate uptake and PHA production was observed after inhibition of succinate dehydrogenase and upregulation of a succinate dehydrogenase gene suggested anaerobic activity. Cytochrome b/b6 activity inferred that succinate dehydrogenase activity in the absence of external electron acceptors may be facilitated by a novel cytochrome b/b6 fusion protein complex that pushes electrons uphill to more electronegative electron carriers. Identification of phosphoenolpyruvate carboxylase and phosphoenolpyruvate carboxykinase genes in Accumulibacter demonstrated the potential for interconversion of C3 intermediates of glycolysis and C4 intermediates of the glyoxylate cycle. Our findings along with previous hypotheses from analysis of microbiome data and metabolic models for PAOs were used to develop a model for anaerobic carbon metabolism in Accumulibacter.
---
paper_title: Biological phosphorus removal from real wastewater in a sequencing batch reactor operated as aerobic/extended-idle regime
paper_content:
Recently, it has been reported that biological phosphorus removal (BPR) could be achieved in a sequencing batch reactor (SBR) with aerobic/extended-idle (A/EI) regime using synthetic medium. This paper first examined the feasibility and stability of the A/EI regime treating real domestic wastewater. The results showed that the A/EI-SBR removed 1.32 ± 0.03–3.55 ± 0.04 mg of phosphorus per g of volatile suspended solids during the steady-state operation, suggesting that BPR from domestic wastewater could be well realised in the A/EI regime. Then, another SBR operated as the conventional anaerobic/oxic (A/O) regime was conducted to compare the soluble orthophosphate (SOP) removal with the A/EI regime. The results clearly showed that the A/EI regime achieved higher SOP removal than the A/O regime. Finally, the mechanism for the A/EI-SBR driving superior SOP removal was investigated. It was found that the sludge cultured by the A/EI regime had more polyphosphate accumulating organisms and less glycogen accumulating organisms than that by the A/O regime. Further investigations showed that the A/EI-SBR had a lower glycogen transformation and a higher PHB/PHV ratio, which correlated well with the superior phosphorus removal.
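The "mg P per g VSS" metric quoted above is a simple ratio of the phosphorus mass removed per cycle to the biomass inventory. A back-of-envelope sketch with assumed (not reported) values:

```python
# Assumed cycle values (not the study's data), showing how the metric is formed.

sop_in = 8.0        # mg P/L soluble orthophosphate in the feed
sop_out = 0.5       # mg P/L in the decanted effluent
v_exchanged = 3.0   # L of wastewater treated per cycle
mlvss = 2.8         # g/L mixed liquor volatile suspended solids
v_reactor = 6.0     # L working volume

p_removed_mg = (sop_in - sop_out) * v_exchanged   # mg P removed in the cycle
biomass_g = mlvss * v_reactor                     # g VSS held in the reactor
print(f"specific removal: {p_removed_mg / biomass_g:.2f} mg P / g VSS")  # ~1.34
```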
---
paper_title: Simultaneous COD, nitrogen, and phosphate removal by aerobic granular sludge
paper_content:
Aerobic granular sludge technology offers a possibility to design compact wastewater treatment plants based on simultaneous chemical oxygen demand (COD), nitrogen and phosphate removal in one sequencing batch reactor. In earlier studies, it was shown that aerobic granules, cultivated with an aerobic pulse-feeding pattern, were not stable at low dissolved oxygen concentrations. Selection for slow-growing organisms such as phosphate-accumulating organisms (PAO) was shown to be a measure for improved granule stability, particularly at low oxygen concentrations. Moreover, this allows long feeding periods needed for economically feasible full-scale applications. Simultaneous nutrient removal was possible, because of heterotrophic growth inside the granules (denitrifying PAO). At low oxygen saturation (20%) high removal efficiencies were obtained; 100% COD removal, 94% phosphate (P-) removal and 94% total nitrogen (N-) removal (with 100% ammonium removal). Experimental results strongly suggest that P-removal occurs partly by (biologically induced) precipitation. Monitoring the laboratory scale reactors for a long period showed that N-removal efficiency highly depends on the diameter of the granules. © 2005 Wiley Periodicals, Inc.
---
paper_title: Operation and control of SBR processes for enhanced biological nutrient removal from wastewater
paper_content:
In recent decades, societal awareness of environmental issues has increased considerably, and there is an increasing need to improve the effluent quality of domestic wastewater treatment processes. This thesis describes the application of Sequencing Batch Reactor (SBR) technology for Biological Nutrient Removal (BNR) from wastewater. In particular, the work presented evolves from nitrogen removal to biological nutrient removal (i.e. nitrogen plus phosphorus removal), with special attention to operational strategy design, the identification of possible reactor cycle controls, and the influence of influent composition on process efficiency. In this sense, the use of ethanol as an external carbon source (when low influent Carbon:Phosphorus (C:P) or Carbon:Nitrogen (C:N) ratios are present) is also studied as an alternative to maintain BNR efficiency.
---
paper_title: Ecophysiology of polyphosphate-accumulating organisms and glycogen-accumulating organisms in a continuously aerated enhanced biological phosphorus removal process
paper_content:
Aims: To investigate the ecophysiology of populations of polyphosphate-accumulating organisms (PAO) and glycogen-accumulating organisms (GAO) in communities of a novel acetate fed process removing phosphate from wastewater. Attempts were made to see if acetate could be replaced by an alternative carbon source which did not support the growth of the GAO. Methods and Results: A continuously aerated sequencing batch reactor was operated with different acetate feed levels. Fluorescence in situ hybridization (FISH) showed that Defluviicoccus GAO numbers increased at lower acetate feed levels. With FISH/microautoradiography (MAR) both detected morphotypes of Defluviicoccus assimilated a wider range of substrates aerobically than Accumulibacter PAO. Their uptake profile differed from that reported for the same phylotype in full scale anaerobic : aerobic EBPR plants. Conclusions: This suggests that replacing acetate with another substrate is unlikely to provide Accumulibacter with a selective advantage in this process. Why Defluviicoccus appeared to out-compete Accumulibacter at lower acetate concentrations was not clear. Data suggest physiological and morphological diversity may exist within a single Defluviicoccus phylotype. Significance and Impact of the Study: This study implies that the current FISH probes for Defluviicoccus GAO may not reveal the full extent of their biodiversity, and that more information is required before strategies for their control can be devised.
---
paper_title: Optimizing the production of Polyphosphate from Acinetobacter towneri
paper_content:
Inorganic polyphosphates (PolyP) are linear polymers of a few to several hundred orthophosphate residues linked by energy-rich phosphoanhydride bonds. Four isolates were screened from a soil sample. By MALDI-TOF analysis, they were identified as Bacillus cereus, Acinetobacter towneri, B. megaterium and B. cereus. The production of PolyP by the four isolates was studied in phosphate uptake medium and sulfur-deficient medium at pH 7. These organisms showed significant production of PolyP after 22 h of incubation. PolyP was extracted from the cells using an alkaline lysis method. Among the isolates, Acinetobacter towneri was found to have a high (24.57% w/w as P) accumulation of PolyP in sulfur-deficient medium. Medium optimization for sulfur deficiency was carried out using response surface methodology (RSM). It was shown that an increase in phosphate level in the presence of glucose, under sulfur-limiting conditions, enhanced phosphate accumulation by Acinetobacter towneri, and these conditions can be simulated for the effective removal of phosphate from wastewater sources.
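Response surface methodology fits a second-order polynomial to a designed set of factor levels and locates its stationary point. The sketch below shows that fitting step on an invented two-factor, central-composite-style data set; it is illustrative only and does not use the study's design or data.

```python
# Invented coded factor levels (x1 = phosphate, x2 = glucose) and yields; the
# point is the quadratic least-squares fit and its stationary point.

import numpy as np

x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1.41, 1.41, 0, 0])
x2 = np.array([-1, 1, -1, 1, 0, 0, 0, 0, 0, -1.41, 1.41])
y = np.array([11, 14, 15, 21, 24, 25, 23, 12, 18, 13, 17])  # PolyP yield (a.u.)

# Model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b11, b22, b12 = coef
print("fitted coefficients:", np.round(coef, 2))

# Stationary point of the fitted surface (candidate optimum, coded units):
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
optimum = np.linalg.solve(A, -np.array([b1, b2]))
print("stationary point (coded x1, x2):", np.round(optimum, 2))
```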
---
paper_title: Accumulation of phosphate and polyphosphate by Cryptococcus humicola and Saccharomyces cerevisiae in the absence of nitrogen.
paper_content:
The search for new phosphate-accumulating microorganisms is of interest in connection with the problem of excess phosphate in the environment. The ability of some yeast species belonging to the ascomycetes and basidiomycetes to accumulate phosphate (Pi) in nitrogen-deficient medium was studied. The ascomycetous Saccharomyces cerevisiae and Kuraishia capsulata and the basidiomycetous Cryptococcus humicola, Cryptococcus curvatus, and Pseudozyma fusiformata were the best at Pi removal. The cells of Cryptococcus humicola and S. cerevisiae took up 40% of Pi from media containing Pi and glucose (5 and 30 mM, respectively), and up to 80% upon addition of 5 mM MgSO4. The cells accumulated Pi mostly in the form of polyphosphate (PolyP). In the presence of Mg2+, the content of PolyP with longer average chain length increased in both yeasts; both had numerous inclusions fluorescing in the yellow region of the spectrum, typical of DAPI-PolyP complexes. Among the yeast species tested, Cryptococcus humicola is a promising new model organism for studying phosphorus removal from media and biomineralization in microbial cells.
---
paper_title: Accumulation of inorganic polyphosphates in Saccharomyces cerevisiae under nitrogen deprivation: Stimulation by magnesium ions and peculiarities of localization
paper_content:
The yeast Saccharomyces cerevisiae was shown to have a high potential as a phosphate-accumulating organism under growth suppression by nitrogen limitation. The cells took up over 40% of phosphate from the medium containing 30 mM glucose and 5 mM potassium phosphate and over 80% of phosphate on addition of 5 mM magnesium sulfate. The major part of accumulated Pi was reserved as polyphosphates. The content of polyphosphates was ∼57 and ∼75% of the phosphate accumulated by the cells in the absence and presence of magnesium ions, respectively. The content of long-chain polyphosphates increased in the presence of magnesium ions, 5-fold for polymers with the average length of ∼45 phosphate residues, 3.7-fold for polymers with the average chain length of ∼75 residues, and more than 10-fold for polymers with the average chain length of ∼200 residues. On the contrary, the content of polyphosphates with the average chain length of ∼15 phosphate residues decreased threefold. According to the data of electron and confocal microscopy and X-ray microanalysis, the accumulated polyphosphates were localized in the cytoplasm and vacuoles. The cytoplasm of the cells accumulating polyphosphates in the presence of magnesium ions had numerous small phosphorus-containing inclusions; some of them were associated with large electron-transparent inclusions and the cytoplasmic membrane.
---
paper_title: Methods for Detection and Quantification of Polyphosphate and Polyphosphate Accumulating Microorganisms in Aquatic Sediments
paper_content:
It has been speculated that the microbial P pool is highly variable in the uppermost layer of various aquatic sediments, especially when an excessive P accumulation in form of polyphosphate (Poly-P) occurs. Poly-P storage is a universal feature of many different organisms and has been technically optimised in wastewater treatment plants (WWTP) with enhanced biological phosphorus removal (EBPR). In the recent past, new insights into mechanisms of P elimination in WWTP almost exclusively depended on the development and application of novel methods like 31P-NMR spectroscopy and molecular methods for identifying Poly-P accumulating microorganisms (PAO). The aim of the present review is to compile current methods potentially available for detection and quantification of Poly-P in sediments and to complement it with yet unpublished results to validate their application in natural sediments. The most powerful tool for reliable Poly-P quantification in sediments is the liquid 31P-NMR technique which has been successfully used for Poly-P measurements in a variety of aquatic sediments. But the microorganisms as well as mechanisms involved in Poly-P storage and cycling are largely unknown. Therefore, we also intend to stimulate future studies focusing on these encouraging topics in sediment research via the implementation of novel methods.
---
paper_title: Advances in techniques for phosphorus analysis in biological sources.
paper_content:
In general, conventional P analysis methods suffer not only from the fastidious extraction and pre-treatment procedures required but also from the generally low specificity and poor resolution regarding P composition and its temporal and spatial dynamics. More powerful yet feasible P analysis tools are in demand to help elucidate the biochemical nature, roles and dynamics of various phosphorus-containing molecules in vitro and in vivo. Recent advances in analytical chemistry, especially in molecular and atomic spectrometry such as NMR, Raman and X-ray techniques, have enabled unique capabilities for P analysis relevant to submicron-scale biochemical processes in individual cells and in natural samples without introducing overly complex and invasive pretreatment steps. Great potential remains to be explored in wider, more combined and integrated applications of these techniques, opening new possibilities for more powerful P analysis in biological systems. This review provides a comprehensive summary of the available methods and recent developments in analytical techniques and their applications for the characterization and quantification of various forms of phosphorus, particularly polyphosphate, in different biological sources.
---
paper_title: Identification of functionally relevant populations in enhanced biological phosphorus removal processes based on intracellular polymers profiles and insights into the metabolic diversity and heterogeneity.
paper_content:
This study proposed and demonstrated the application of a new Raman microscopy-based method for metabolic state-based identification and quantification of functionally relevant populations, namely polyphosphate accumulating organisms (PAOs) and glycogen accumulating organisms (GAOs), in enhanced biological phosphorus removal (EBPR) systems via simultaneous detection of multiple intracellular polymers including polyphosphate (polyP), glycogen, and polyhydroxybutyrate (PHB). The unique Raman spectrum of the different combinations of intracellular polymers within a cell at a given stage of the EBPR cycle allowed its identification as PAO, GAO, or neither. The abundances of total PAOs and GAOs determined by the Raman method were consistent with those obtained with polyP staining and fluorescence in situ hybridization (FISH). The different combinations and quantities of intracellular polymer inclusions observed in single cells revealed the distribution of different sub-groups among the total PAO population, which exhibit phenotypic and metabolic heterogeneity and diversity. These results also provided evidence for the hypothesis that different PAOs may employ different combinations of the glycolysis and TCA cycle pathways for anaerobic reducing power and energy generation, and that some PAOs may rely on the TCA cycle alone, without glycolysis. Summing the cellular-level quantification of the internal polymers associated with the different population groups revealed distinct trends in glycogen and PHB levels between PAOs and GAOs, which could not be resolved previously with conventional bulk measurements of EBPR mixed cultures.
---
paper_title: Myeloma cells contain high levels of inorganic polyphosphate which is associated with nucleolar transcription
paper_content:
Background. In hematology there has recently been increasing interest in inorganic polyphosphate. This polymer accumulates in platelet granules and its functions include modulating various stages in blood coagulation, inducing angiogenesis, and provoking apoptosis of plasma cells. In this work, we evaluate the characteristics of intracellular polyphosphate in myeloma cell lines, in primary myeloma cells from patients, and in other human B-cell populations from healthy donors. Design and Methods. We have developed a novel method for detecting levels of polyphosphate in cell populations using flow cytometry. We have also studied polyphosphate localization and characteristics, using confocal microscopy and enzymatic analysis experiments. Results. We have found that myeloma plasma cells present higher levels of intracellular polyphosphate than normal plasma cells and other B-cell populations. Localization experiments indicated that polyphosphate accumulates at high levels in the nucleolus of myeloma cells. As the principal function of the nucleolus involves the transcription of ribosomal DNA genes, we have found changes in the cellular distribution of polyphosphate after the inhibition of nucleolar transcription. In addition, we have found that RNA polymerase I activity, responsible for transcription in the nucleolus, is also modulated by polyphosphate in a dose-dependent manner. Conclusions. Our results show an unusually high accumulation of polyphosphate in the nucleoli of myeloma cells and a functional relationship of this polymer with nucleolar transcription.
---
paper_title: A high throughput method and culture medium for rapid screening of phosphate accumulating microorganisms
paper_content:
A novel PA Medium (PAM) for efficient screening of phosphate-accumulating organisms (PAOs) was developed taking Serratia marcescens NBRI1213 as model organism. The defined National Botanical Research Institute's growth medium (NBRI) supplemented with 0.1% maltose, designed for quantitative estimation of phosphate accumulation, was designated as PAM. Our work suggests the use of PAM for efficient qualitative screening and as a microbiological medium for preferential selection of PAOs on Petri plates. For qualitative screening of PAOs, Toluidine blue-O dye (TBO) was supplemented in PAM, designated as PAM-TBO. Qualitative analysis of phosphate accumulated by various groups correlated well with grouping based upon quantitative analysis of PAOs, effect of carbon, nitrogen, salts, and phosphate accumulation-defective transposon mutants. To significantly increase sample throughput, the efficiency of screening PAOs was further enhanced by adaptation of the PAM-TBO assay to a microtiter plate-based method. It is envisaged that the use of this medium will be beneficial for quick screening of PAOs from the environment.
---
paper_title: Sensitive fluorescence detection of polyphosphate in polyacrylamide gels using 4′,6-diamidino-2-phenylindol
paper_content:
PAGE is commonly used to identify and resolve inorganic polyphosphates (polyP). We now report highly sensitive and specific staining methods for polyP in polyacrylamide gels based on the fluorescent dye, 4',6-diamidino-2-phenylindol (DAPI). DAPI bound to polyP in gels fluoresced yellow while DAPI bound to nucleic acids or glycosaminoglycans fluoresced blue. Inclusion of EDTA prevented staining of glycosaminoglycans by DAPI. We also identified conditions under which DAPI that was bound to polyP (but not nucleic acids or other anionic polymers) rapidly photobleached. This allowed us to develop an even more sensitive and specific negative staining method that distinguishes polyP from nucleic acids and glycosaminoglycans. The lower LOD using DAPI negative staining was 4 pmol (0.3 ng) phosphate per band, compared to conventional toluidine blue staining with a lower LOD of 250 pmol per band.
---
paper_title: Accumulation of phosphate and polyphosphate by Cryptococcus humicola and Saccharomyces cerevisiae in the absence of nitrogen.
paper_content:
The search for new phosphate-accumulating microorganisms is of interest in connection with the problem of excess phosphate in the environment. The ability of some yeast species belonging to the ascomycetes and basidiomycetes to accumulate phosphate (Pi) in nitrogen-deficient medium was studied. The ascomycetous Saccharomyces cerevisiae and Kuraishia capsulata and the basidiomycetous Cryptococcus humicola, Cryptococcus curvatus, and Pseudozyma fusiformata were the best at Pi removal. The cells of Cryptococcus humicola and S. cerevisiae took up 40% of Pi from media containing Pi and glucose (5 and 30 mM, respectively), and up to 80% upon addition of 5 mM MgSO4. The cells accumulated Pi mostly in the form of polyphosphate (PolyP). In the presence of Mg2+, the content of PolyP with longer average chain length increased in both yeasts; both had numerous inclusions fluorescing in the yellow region of the spectrum, typical of DAPI-PolyP complexes. Among the yeast species tested, Cryptococcus humicola is a promising new model organism for studying phosphorus removal from media and biomineralization in microbial cells.
---
paper_title: Accumulation of inorganic polyphosphates in Saccharomyces cerevisiae under nitrogen deprivation: Stimulation by magnesium ions and peculiarities of localization
paper_content:
The yeast Saccharomyces cerevisiae was shown to have a high potential as a phosphate-accumulating organism under growth suppression by nitrogen limitation. The cells took up over 40% of phosphate from the medium containing 30 mM glucose and 5 mM potassium phosphate and over 80% of phosphate on addition of 5 mM magnesium sulfate. The major part of accumulated Pi was reserved as polyphosphates. The content of polyphosphates was ∼57 and ∼75% of the phosphate accumulated by the cells in the absence and presence of magnesium ions, respectively. The content of long-chain polyphosphates increased in the presence of magnesium ions, 5-fold for polymers with the average length of ∼45 phosphate residues, 3.7-fold for polymers with the average chain length of ∼75 residues, and more than 10-fold for polymers with the average chain length of ∼200 residues. On the contrary, the content of polyphosphates with the average chain length of ∼15 phosphate residues decreased threefold. According to the data of electron and confocal microscopy and X-ray microanalysis, the accumulated polyphosphates were localized in the cytoplasm and vacuoles. The cytoplasm of the cells accumulating polyphosphates in the presence of magnesium ions had numerous small phosphorus-containing inclusions; some of them were associated with large electron-transparent inclusions and the cytoplasmic membrane.
---
paper_title: Myeloma cells contain high levels of inorganic polyphosphate which is associated with nucleolar transcription
paper_content:
Background. In hematology there has recently been increasing interest in inorganic polyphosphate. This polymer accumulates in platelet granules and its functions include modulating various stages in blood coagulation, inducing angiogenesis, and provoking apoptosis of plasma cells. In this work, we evaluate the characteristics of intracellular polyphosphate in myeloma cell lines, in primary myeloma cells from patients, and in other human B-cell populations from healthy donors. Design and Methods. We have developed a novel method for detecting levels of polyphosphate in cell populations using flow cytometry. We have also studied polyphosphate localization and characteristics, using confocal microscopy and enzymatic analysis experiments. Results. We have found that myeloma plasma cells present higher levels of intracellular polyphosphate than normal plasma cells and other B-cell populations. Localization experiments indicated that polyphosphate accumulates at high levels in the nucleolus of myeloma cells. As the principal function of the nucleolus involves the transcription of ribosomal DNA genes, we have found changes in the cellular distribution of polyphosphate after the inhibition of nucleolar transcription. In addition, we have found that RNA polymerase I activity, responsible for transcription in the nucleolus, is also modulated by polyphosphate in a dose-dependent manner. Conclusions. Our results show an unusually high accumulation of polyphosphate in the nucleoli of myeloma cells and a functional relationship of this polymer with nucleolar transcription.
---
paper_title: Accumulation of inorganic polyphosphates in Saccharomyces cerevisiae under nitrogen deprivation: Stimulation by magnesium ions and peculiarities of localization
paper_content:
The yeast Saccharomyces cerevisiae was shown to have a high potential as a phosphate-accumulating organism under growth suppression by nitrogen limitation. The cells took up over 40% of phosphate from the medium containing 30 mM glucose and 5 mM potassium phosphate and over 80% of phosphate on addition of 5 mM magnesium sulfate. The major part of accumulated Pi was reserved as polyphosphates. The content of polyphosphates was ∼57 and ∼75% of the phosphate accumulated by the cells in the absence and presence of magnesium ions, respectively. The content of long-chain polyphosphates increased in the presence of magnesium ions, 5-fold for polymers with the average length of ∼45 phosphate residues, 3.7-fold for polymers with the average chain length of ∼75 residues, and more than 10-fold for polymers with the average chain length of ∼200 residues. On the contrary, the content of polyphosphates with the average chain length of ∼15 phosphate residues decreased threefold. According to the data of electron and confocal microscopy and X-ray microanalysis, the accumulated polyphosphates were localized in the cytoplasm and vacuoles. The cytoplasm of the cells accumulating polyphosphates in the presence of magnesium ions had numerous small phosphorus-containing inclusions; some of them were associated with large electron-transparent inclusions and the cytoplasmic membrane.
---
paper_title: Biological phosphorus removal from abattoir wastewater at very short sludge ages mediated by novel PAO clade Comamonadaceae.
paper_content:
Recent increases in global phosphorus costs, together with the need to remove phosphorus from wastewater to comply with water discharge regulations, make phosphorus recovery from wastewater economically and environmentally attractive. Biological phosphorus (Bio-P) removal process can effectively capture the phosphorus from wastewater and concentrate it in a form that is easily amendable for recovery in contrast to traditional (chemical) phosphorus removal processes. However, Bio-P removal processes have historically been operated at medium to long solids retention times (SRTs, 10-20 days typically), which inherently increases the energy consumption while reducing the recoverable carbon fraction and hence makes it incompatible with the drive towards energy self-sufficient wastewater treatment plants. In this study, a novel high-rate Bio-P removal process has been developed as an energy efficient alternative for phosphorus removal from wastewater through operation at an SRT of less than 4 days. The process was most effective at an SRT of 2-2.5 days, achieving >90% phosphate removal. Further reducing the SRT to 1.7 days resulted in a loss of Bio-P activity. 16S pyrotag sequencing showed the community changed considerably with changes in the SRT, but that Comamonadaceae was consistently abundant when the Bio-P activity was evident. FISH analysis combined with DAPI staining confirmed that bacterial cells of Comamonadaceae arranged in tetrads contained polyphosphate, identifying them as the key polyphosphate accumulating organisms at these low SRT conditions. Overall, this paper demonstrates a novel, high-rate phosphorus removal process that can be effectively integrated with short SRT, energy-efficient carbon removal and recovery processes.
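Operating "at an SRT of 2-2.5 days" is a matter of controlling the ratio of the reactor solids inventory to the solids wasted per day. A worked example with assumed values:

```python
# Assumed plant figures; SRT = solids held in the reactor / solids leaving per day.

v_reactor = 8.0      # m3 mixed-liquor volume
x_mlss = 2500.0      # g/m3 mixed liquor suspended solids
q_waste = 3.0        # m3/d wasted directly from the mixed liquor
q_effluent = 40.0    # m3/d effluent flow
x_effluent = 15.0    # g/m3 solids escaping with the effluent

solids_inventory = v_reactor * x_mlss                            # g
solids_out_per_day = q_waste * x_mlss + q_effluent * x_effluent  # g/d
srt_days = solids_inventory / solids_out_per_day
print(f"SRT = {srt_days:.1f} d")  # about 2.5 d with these numbers
```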
---
paper_title: Isolation and Phylogenetic Analysis of Polyphosphate Accumulating Organisms in Water and Sludge of Intensive Catfish Ponds in the Mekong Delta, Vietnam
paper_content:
Polyphosphate accumulating organisms were isolated from water and sludge samples of intensive catfish ponds in the Mekong Delta, Vietnam. Estimation of intracellular polyphosphate concentration in each monoculture indicated that the content of intracellular polyphosphate varied from 2 mg/l to 148.1 mg/l after 6 days of incubation in the medium. Of 191 isolates, twenty-one took up and stored intracellular phosphate at 19.6 to 148.1 mg/l. They were shaped like rods, short rods or cocci, and a few of them were slightly curved or straight rods. The majority of them were gram-positive (76.2%) and the remainder gram-negative. The partial 16S rRNA genes of these isolates were sequenced and compared with bacterial 16S rRNA genes in GenBank using the BlastN program. A phylogenetic tree constructed on the basis of the 16S rRNA gene sequences demonstrated that the population of highly phosphate-accumulating bacteria obtained from the catfish pond samples was affiliated with four major bacterial lineages. The twenty-one bacterial isolates fell into four classes: Bacilli, Actinobacteria, Beta-proteobacteria and Gamma-proteobacteria. The majority of the strains showed excess phosphate accumulation. Strains related to Bacillus sp. were the dominant group, constituting up to 52.4% of all identified isolates, but the strongest phosphate-accumulating bacteria were Burkholderia vietnamiensis TVT003L within the class Beta-proteobacteria, Acinetobacter radioresistens TGT013L within the Gamma-proteobacteria and Arthrobacter protophomiae VLT002L within the class Actinobacteria. Methyl blue (Loeffler's) staining and electron microscopy examination confirmed that the bacteria stored polyphosphate granules intracellularly.
---
paper_title: Advances in techniques for phosphorus analysis in biological sources.
paper_content:
In general, conventional P analysis methods suffer not only from the fastidious extraction and pre-treatment procedures required but also from the generally low specificity and poor resolution regarding P composition and its temporal and spatial dynamics. More powerful yet feasible P analysis tools are in demand to help elucidate the biochemical nature, roles and dynamics of various phosphorus-containing molecules in vitro and in vivo. Recent advances in analytical chemistry, especially in molecular and atomic spectrometry such as NMR, Raman and X-ray techniques, have enabled unique capabilities for P analysis relevant to submicron-scale biochemical processes in individual cells and in natural samples without introducing overly complex and invasive pretreatment steps. Great potential remains to be explored in wider, more combined and integrated applications of these techniques, opening new possibilities for more powerful P analysis in biological systems. This review provides a comprehensive summary of the available methods and recent developments in analytical techniques and their applications for the characterization and quantification of various forms of phosphorus, particularly polyphosphate, in different biological sources.
---
paper_title: Identification of functionally relevant populations in enhanced biological phosphorus removal processes based on intracellular polymers profiles and insights into the metabolic diversity and heterogeneity.
paper_content:
This study proposed and demonstrated the application of a new Raman microscopy-based method for metabolic state-based identification and quantification of functionally relevant populations, namely polyphosphate accumulating organisms (PAOs) and glycogen accumulating organisms (GAOs), in enhanced biological phosphorus removal (EBPR) systems via simultaneous detection of multiple intracellular polymers including polyphosphate (polyP), glycogen, and polyhydroxybutyrate (PHB). The unique Raman spectrum of the different combinations of intracellular polymers within a cell at a given stage of the EBPR cycle allowed its identification as PAO, GAO, or neither. The abundances of total PAOs and GAOs determined by the Raman method were consistent with those obtained with polyP staining and fluorescence in situ hybridization (FISH). The different combinations and quantities of intracellular polymer inclusions observed in single cells revealed the distribution of different sub-groups among the total PAO population, which exhibit phenotypic and metabolic heterogeneity and diversity. These results also provided evidence for the hypothesis that different PAOs may employ different combinations of the glycolysis and TCA cycle pathways for anaerobic reducing power and energy generation, and that some PAOs may rely on the TCA cycle alone, without glycolysis. Summing the cellular-level quantification of the internal polymers associated with the different population groups revealed distinct trends in glycogen and PHB levels between PAOs and GAOs, which could not be resolved previously with conventional bulk measurements of EBPR mixed cultures.
---
paper_title: Copper ions stimulate polyphosphate degradation and phosphate efflux in Acidithiobacillus ferrooxidans.
paper_content:
For some bacteria and algae, it has been proposed that inorganic polyphosphates and transport of metal-phosphate complexes could participate in heavy metal tolerance. To test for this possibility in Acidithiobacillus ferrooxidans, a microorganism with a high level of resistance to heavy metals, the polyphosphate levels were determined when the bacterium was grown in or shifted to the presence of a high copper concentration (100 mM). Under these conditions, cells showed a rapid decrease in polyphosphate levels with a concomitant increase in exopolyphosphatase activity and a stimulation of phosphate efflux. Copper in the range of 1 to 2 μM greatly stimulated exopolyphosphatase activity in cell extracts from A. ferrooxidans. The same was seen to a lesser extent with cadmium and zinc. Bioinformatic analysis of the available A. ferrooxidans ATCC 23270 genomic sequence did not show a putative pit gene for phosphate efflux but rather an open reading frame similar in primary and secondary structure to that of the Saccharomyces cerevisiae phosphate transporter that is functional at acidic pH (Pho84). Our results support a model for metal detoxification in which heavy metals stimulate polyphosphate hydrolysis and the metal-phosphate complexes formed are transported out of the cell as part of a possibly functional heavy metal tolerance mechanism in A. ferrooxidans.
---
paper_title: Microscopy evidence of bacterial microfossils in phosphorite crusts of the Peruvian shelf: Implications for phosphogenesis mechanisms
paper_content:
Abstract Phosphorites are sedimentary formations enriched in Ca-phosphate minerals. The precipitation of these minerals is thought to be partly mediated by the activity of microorganisms. The vast majority of studies on phosphorites have focused on a petrological and geochemical characterization of these rocks. However, detailed descriptions are needed at the sub-micrometer scale at which crucial information can be retrieved about traces of past or modern microbial activities. Here, scanning electron microscopy (SEM) analyses of a recent phosphorite crust from the upwelling-style phosphogenesis area off Peru revealed that it contained a great number of rod-like and coccus-like shaped micrometer-sized (~ 1.1 μm and 0.5 μm, respectively) objects, referred to as biomorphs. Some of these biomorphs were filled with carbonate fluoroapatite (CFA, a calcium-phosphate phase common in phosphorites); some were empty; some were surrounded by one or two layers of pyrite. Transmission electron microscopy (TEM) and energy dispersive X-ray spectrometry (EDXS) analyses were performed on focused ion beam (FIB) milled ultrathin foils to characterize the texture of CFA and pyrite in these biomorphs at the few nanometer scale. Non-pyritized phosphatic biomorphs were surrounded by a thin (5–15 nm thick) rim appearing as a void on TEM images. Bundles of CFA crystals sharing the same crystallographic orientations (aligned along their c -axis) were found in the interior of some biomorphs. Pyrite formed a thick (~ 35–115 nm) layer with closely packed crystals surrounding the pyritized biomorphs, whereas pyrite crystals at distance from the biomorphs were smaller and distributed more sparsely. Scanning transmission X-ray microscopy (STXM) analyses performed at the C K-edge provided maps of organic and inorganic carbon in the samples. Inorganic C, mainly present as carbonate groups in the CFA lattice, was homogeneously distributed, whereas organic C was concentrated in the rims of the phosphatic biomorphs. Finally, STXM analyses at the Fe L 2,3 -edges together with TEM-EDXS analyses, revealed that some pyritized biomorphs experienced partial oxidation. The mineralogical features of these phosphatic biomorphs are very similar to those formed by bacteria having precipitated phosphate minerals intra- and extracellularly in laboratory experiments. Similarly, pyritized biomorphs resemble bacteria encrusted by pyrite. We therefore interpret phosphatic and pyritized biomorphs present in the Peruvian phosphorite crust as microorganisms fossilized near the boundary of zones of sulfate reduction. The implications of these observations are then discussed in the light of the different possible and non-exclusive microbially-driven phosphogenesis mechanisms that have been proposed in the past: (i) Organic matter mineralization, in particular mediated by iron reducing bacteria and/or sulfate-reducing bacteria (SRB), (ii) reduction of iron-(oxyhydr)oxides by iron-reducing bacteria and/or SRB, and (iii) polyphosphate metabolism in sulfide-oxidizing bacteria, possibly associated with SRB.
---
paper_title: Precipitation of Phosphate Minerals by Microorganisms Isolated from a Fixed-Biofilm Reactor Used for the Treatment of Domestic Wastewater
paper_content:
The ability of bacteria isolated from a fixed-film bioreactor to precipitate phosphate crystals for the treatment of domestic wastewater in both artificial and natural media was studied. When this was demonstrated in artificial solid media for crystal formation, precipitation took place rapidly, and crystal formation began 3 days after inoculation. The percentage of phosphate-forming bacteria was slightly higher than 75%. Twelve major colonies with phosphate precipitation capacity were the dominant heterotrophic platable bacteria growing aerobically in artificial media. According to their taxonomic affiliations (based on partial sequencing of the 16S rRNA), the 12 strains belonged to the following genera of Gram-negative bacteria: Rhodobacter, Pseudoxanthobacter, Escherichia, Alcaligenes, Roseobacter, Ochrobactrum, Agromyce, Sphingomonas and Paracoccus. The phylogenetic tree shows that most of the identified populations were evolutionarily related to the Alphaproteobacteria (91.66% of sequences). The minerals formed were studied by X-ray diffraction, scanning electron microscopy (SEM), and energy dispersive X-ray microanalysis (EDX). All of these strains formed phosphate crystals and precipitated struvite (MgNH4PO4·6H2O), bobierrite [Mg3(PO4)2·8H2O] and baricite [(MgFe)3(PO4)2·8H2O]. The results obtained in this study show that struvite and spherulite crystals did not show any cell marks. Moreover, phosphate precipitation was observed in the bacterial mass but also near the colonies. Our results suggest that the microbial population contributed to phosphate precipitation by changing the media as a consequence of their metabolic activity. Moreover, the results of this research suggest that bacteria play an active role in the mineral precipitation of soluble phosphate from urban wastewater in submerged fixed-film bioreactors.
---
paper_title: Accumulation of inorganic polyphosphates in Saccharomyces cerevisiae under nitrogen deprivation: Stimulation by magnesium ions and peculiarities of localization
paper_content:
The yeast Saccharomyces cerevisiae was shown to have a high potential as a phosphate-accumulating organism under growth suppression by nitrogen limitation. The cells took up over 40% of phosphate from the medium containing 30 mM glucose and 5 mM potassium phosphate and over 80% of phosphate on addition of 5 mM magnesium sulfate. The major part of accumulated Pi was reserved as polyphosphates. The content of polyphosphates was ∼57 and ∼75% of the phosphate accumulated by the cells in the absence and presence of magnesium ions, respectively. The content of long-chain polyphosphates increased in the presence of magnesium ions, 5-fold for polymers with the average length of ∼45 phosphate residues, 3.7-fold for polymers with the average chain length of ∼75 residues, and more than 10-fold for polymers with the average chain length of ∼200 residues. On the contrary, the content of polyphosphates with the average chain length of ∼15 phosphate residues decreased threefold. According to the data of electron and confocal microscopy and X-ray microanalysis, the accumulated polyphosphates were localized in the cytoplasm and vacuoles. The cytoplasm of the cells accumulating polyphosphates in the presence of magnesium ions had numerous small phosphorus-containing inclusions; some of them were associated with large electron-transparent inclusions and the cytoplasmic membrane.
---
paper_title: Biological phosphorus removal from abattoir wastewater at very short sludge ages mediated by novel PAO clade Comamonadaceae.
paper_content:
Recent increases in global phosphorus costs, together with the need to remove phosphorus from wastewater to comply with water discharge regulations, make phosphorus recovery from wastewater economically and environmentally attractive. Biological phosphorus (Bio-P) removal process can effectively capture the phosphorus from wastewater and concentrate it in a form that is easily amendable for recovery in contrast to traditional (chemical) phosphorus removal processes. However, Bio-P removal processes have historically been operated at medium to long solids retention times (SRTs, 10-20 days typically), which inherently increases the energy consumption while reducing the recoverable carbon fraction and hence makes it incompatible with the drive towards energy self-sufficient wastewater treatment plants. In this study, a novel high-rate Bio-P removal process has been developed as an energy efficient alternative for phosphorus removal from wastewater through operation at an SRT of less than 4 days. The process was most effective at an SRT of 2-2.5 days, achieving >90% phosphate removal. Further reducing the SRT to 1.7 days resulted in a loss of Bio-P activity. 16S pyrotag sequencing showed the community changed considerably with changes in the SRT, but that Comamonadaceae was consistently abundant when the Bio-P activity was evident. FISH analysis combined with DAPI staining confirmed that bacterial cells of Comamonadaceae arranged in tetrads contained polyphosphate, identifying them as the key polyphosphate accumulating organisms at these low SRT conditions. Overall, this paper demonstrates a novel, high-rate phosphorus removal process that can be effectively integrated with short SRT, energy-efficient carbon removal and recovery processes.
---
paper_title: Methods for Detection and Quantification of Polyphosphate and Polyphosphate Accumulating Microorganisms in Aquatic Sediments
paper_content:
It has been speculated that the microbial P pool is highly variable in the uppermost layer of various aquatic sediments, especially when an excessive P accumulation in form of polyphosphate (Poly-P) occurs. Poly-P storage is a universal feature of many different organisms and has been technically optimised in wastewater treatment plants (WWTP) with enhanced biological phosphorus removal (EBPR). In the recent past, new insights into mechanisms of P elimination in WWTP almost exclusively depended on the development and application of novel methods like 31P-NMR spectroscopy and molecular methods for identifying Poly-P accumulating microorganisms (PAO). The aim of the present review is to compile current methods potentially available for detection and quantification of Poly-P in sediments and to complement it with yet unpublished results to validate their application in natural sediments. The most powerful tool for reliable Poly-P quantification in sediments is the liquid 31P-NMR technique which has been successfully used for Poly-P measurements in a variety of aquatic sediments. But the microorganisms as well as mechanisms involved in Poly-P storage and cycling are largely unknown. Therefore, we also intend to stimulate future studies focusing on these encouraging topics in sediment research via the implementation of novel methods.
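As an illustration of how the 31P-NMR quantification highlighted above typically works at the data level, the sketch below integrates the poly-P resonance region of a processed spectrum against an internal standard. The chemical-shift windows and the use of a standard such as MDPA are common practice rather than values taken from this review, and the spectrum arrays are assumed inputs.
```python
# Sketch (assumed windows and standard, not prescribed by the review): quantify poly-P
# phosphorus from a processed liquid 31P-NMR spectrum by comparing the area of the
# internal poly-P resonance region (around -20 ppm) with that of an internal standard
# of known concentration.
import numpy as np

def band_area(ppm: np.ndarray, intensity: np.ndarray, lo: float, hi: float) -> float:
    """Numerically integrate the spectrum over a chemical-shift window [lo, hi] ppm."""
    mask = (ppm >= lo) & (ppm <= hi)
    return abs(np.trapz(intensity[mask], ppm[mask]))   # abs() copes with a descending ppm axis

def polyp_concentration(ppm, intensity, std_conc_mM,
                        std_window=(16.0, 18.0),       # internal standard, e.g. MDPA
                        polyp_window=(-22.0, -18.0)):  # internal poly-P residues
    """Poly-P phosphorus concentration estimated relative to the internal standard."""
    return std_conc_mM * band_area(ppm, intensity, *polyp_window) \
                       / band_area(ppm, intensity, *std_window)

# Demonstration on a synthetic two-peak spectrum:
ppm = np.linspace(25, -30, 5000)
intensity = np.exp(-(ppm - 17.0) ** 2 / 0.02) + 3.0 * np.exp(-(ppm + 20.0) ** 2 / 0.05)
print(f"poly-P ~ {polyp_concentration(ppm, intensity, std_conc_mM=1.0):.2f} mM P")
```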
---
paper_title: Inorganic polyphosphate: essential for growth and survival.
paper_content:
Inorganic polyphosphate (Poly P) is a polymer of tens to hundreds of phosphate residues linked by "high-energy" phosphoanhydride bonds as in ATP. Found in abundance in all cells in nature, it is unique in its likely role in the origin and survival of species. Here, we present extensive evidence that the remarkable properties of Poly P as a polyanion have made it suited for a crucial role in the emergence of cells on earth. Beyond that, Poly P has proved in a variety of ways to be essential for growth of cells, their responses to stresses and stringencies, and the virulence of pathogens. In this review, we pay particular attention to the enzyme, polyphosphate kinase 1 (Poly P kinase 1 or PPK1), responsible for Poly P synthesis and highly conserved in many bacterial species, including 20 or more of the major pathogens. Mutants lacking PPK1 are defective in motility, quorum sensing, biofilm formation, and virulence. Structural studies are cited that reveal the conserved ATP-binding site of PPK1 at atomic resolution and reveal that the site can be blocked with minute concentrations of designed inhibitors. Another widely conserved enzyme is PPK2, which has distinctive kinetic properties and is also implicated in the virulence of some pathogens. Thus, these enzymes, absent in yeast and animals, are novel attractive targets for treatment of many microbial diseases. Still another enzyme featured in this review is one discovered in Dictyostelium discoideum that becomes an actin-like fiber concurrent with the synthesis, step by step, of a Poly P chain made from ATP. The Poly P-actin fiber complex, localized in the cell, lengthens and recedes in response to metabolic signals. Homologs of DdPPK2 are found in pathogenic protozoa and in the alga Chlamydomonas. Beyond the immediate relevance of Poly P as a target for anti-infective drugs, a large variety of cellular operations that rely on Poly P will be considered.
---
paper_title: Recovery of phosphorus from dairy manure: a pilot-scale study.
paper_content:
Phosphorus was recovered from dairy manure via a microwave-enhanced advanced oxidation process (MW/H2O2-AOP) followed by struvite crystallization in a pilot-scale continuous flow operation. Soluble phosphorus in dairy manure increased by over 50% after the MW/H2O2-AOP, and the settleability of suspended solids was greatly improved. More than 50% of clear supernatant was obtained after microwave treatment, and the maximum volume of supernatant was obtained at a hydrogen peroxide dosage of 0.3% and pH 3.5. By adding oxalic acid into the supernatant, about 90% of calcium was removed, while more than 90% of magnesium was retained. As a result, the resulting solution was well suited for struvite crystallization. Nearly 95% of phosphorus in the treated supernatant was removed and recovered as struvite.
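For a sense of the mass balance behind the struvite crystallization step described above, the following sketch converts recovered phosphorus into an equivalent struvite yield; the feed concentration used in the example is an illustrative placeholder, not a value from the pilot study.
```python
# Sketch: equivalent struvite (MgNH4PO4.6H2O) yield for a given amount of phosphorus
# recovered from the treated supernatant (1:1 P:struvite molar ratio). The example feed
# concentration is hypothetical.
M_P = 30.974          # g/mol, phosphorus
M_STRUVITE = 245.41   # g/mol, MgNH4PO4.6H2O

def struvite_yield_g(volume_L: float, p_mg_per_L: float, removal_fraction: float) -> float:
    """Mass of struvite corresponding to the phosphorus removed from the supernatant."""
    p_recovered_g = volume_L * p_mg_per_L * removal_fraction / 1000.0
    return p_recovered_g / M_P * M_STRUVITE

# Example: 1 m3 of supernatant at 60 mg P/L with 95% removal (the level reported above)
print(f"{struvite_yield_g(1000, 60, 0.95):.0f} g struvite")   # roughly 450 g
```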
---
paper_title: Determination of carbon, carbonate, nitrogen, and phosphorus in freshwater sediments by near-infrared reflectance spectroscopy: Rapid analysis and a check on conventional analytical methods
paper_content:
Sediments are typically analyzed for C, N, and P for characterization, sediment quality assessment, and in nutrient and contaminant studies. Cost and time required for analysis of these constituents by conventional chemical techniques can be limiting factors in these studies. Determination of these constituents by near-infrared reflectance spectroscopy (NIRS) may be a rapid, cost-effective method provided the technology can be applied generally across aquatic ecosystems. In this study, we explored the feasibility of using NIRS to predict total C, CO3−2 organic C, N, and P in deep-water sediment cores from four Canadian lakes varying over 19 degrees of latitude. Concentration ranges of constituents in the samples (dry weight basis) were total C, 12-55; CO3−2, 6-26; organic C, 7-31; N, 0.6-3.1; and P, 0.22-2.1 mg g−1. Coefficients of determination, r2, between results from conventional chemical analysis and NIR-predicted concentrations, based on calibrations across all the four lakes, were 0.97-0.99 for total C, organic C, and N. Prediction for CO3−2 was good for the hard water lake from a calibration across all four lakes, but this constituent in the three soft water lakes was better predicted by a calibration across the soft water lakes. The NIR calibration for P fell below acceptable levels for the technique, but proved useful in the identification of outliers from the chemical method that were later removed with the re-analysis of several samples. This study demonstrated that NIRS is useful for rapid, simultaneous, cost-effective analysis of total C, CO3−2, organic C, N, and P in dried sediments from lakes at widely varying latitudes. Also, this study showed that NIRS is an independent analytical tool useful for the identification of outliers that may be due to error during the analysis or to distinctive composition of the samples.
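A calibration of the kind reported above can be prototyped with standard chemometrics tooling; partial least squares regression is a common choice for NIRS, though the original study's exact procedure may differ. In the sketch below the spectra and reference concentrations are synthetic stand-ins.
```python
# Sketch (synthetic data, PLS assumed as the chemometric model): regress reference
# chemistry (e.g. total C) on NIR spectra and report r2 on held-out samples.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def calibrate_nirs(spectra: np.ndarray, reference: np.ndarray, n_components: int = 8):
    """Fit a PLS calibration and return the model plus r2 on a held-out test set."""
    X_train, X_test, y_train, y_test = train_test_split(
        spectra, reference, test_size=0.3, random_state=0)
    model = PLSRegression(n_components=n_components)
    model.fit(X_train, y_train)
    predicted = model.predict(X_test).ravel()
    return model, r2_score(y_test, predicted)

# Synthetic demonstration: 120 sediment samples, 700 wavelength channels.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 700))
total_c = spectra[:, 100] * 5 + spectra[:, 400] * 2 + 30 + rng.normal(scale=0.5, size=120)
model, r2 = calibrate_nirs(spectra, total_c)
print(f"NIR-predicted vs. measured total C: r2 = {r2:.2f}")
```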
---
paper_title: Phosphorus recovery from wastewater through microbial processes
paper_content:
Waste streams offer a compelling opportunity to recover phosphorus (P). 15–20% of world demand for phosphate rock could theoretically be satisfied by recovering phosphorus from domestic waste streams alone. For very dilute streams, including domestic wastewater, it is necessary to concentrate phosphorus in order to make recovery and reuse feasible. This review discusses enhanced biological phosphorus removal (EBPR) as a key technology to achieve this. EBPR relies on polyphosphate accumulating organisms (PAOs) to take up phosphorus from waste streams, so concentrating phosphorus in biomass. The P-rich biosolids can be either directly applied to land, or solubilized and phosphorus recovered as a mineral product. Direct application is effective, but the product is bulky and carries contaminant risks that need to be managed. Phosphorus release can be achieved using either thermochemical or biochemical methods, while recovery is generally by precipitation as struvite. We conclude that while EBPR technology is mature, the subsequent phosphorus release and recovery technologies need additional development.
---
paper_title: Evaluation of Intracellular Polyphosphate Dynamics in Enhanced Biological Phosphorus Removal Process using Raman Microscopy
paper_content:
A Raman microscopy method was developed and successfully applied to evaluate the dynamics of intracellular polyphosphate in polyphosphate-accumulating organisms (PAOs) in enhanced biological phosphorus removal (EBPR) processes. Distinctive Raman spectra of polyphosphates allowed for both identification of PAOs and quantification of intracellular polyphosphate during various metabolic phases in a lab-scale EBPR process. Observation of polyphosphate at individual cell level indicated that there are distributed states of cells in terms of polyphosphate content at any given time, suggesting that agent-based distributive modeling would more accurately reflect the behavior of an EBPR process than the traditional average-state based modeling. The results, for the first time, showed that the polyphosphate depletion or replenishment observed at the overall population level were collective results from shifts/transition in the distribution of abundance of PAOs with different amounts of polyphosphate inclusions duri...
---
paper_title: Single-cell analysis of bacteria by Raman microscopy: spectral information on the chemical composition of cells and on the heterogeneity in a culture
paper_content:
In the acetone–butanol (ABE) fermentation process, the utilised organisms from the group of the solventogenic Clostridia go through a complex cell-cycle. The role of different cell types in product formation is not understood in detail yet. We aim to use Raman spectroscopy to characterise the population distribution in Clostridium cultures. Cell suspensions were dried on calcium fluoride carriers. Raman spectra of single cells were obtained using a confocal Raman microscope (Dilor, Lille, France). The laser beam was focused on individual cells through the microscope objective. Spectra with good signal-to-noise ratio were obtained. Cells of different morphology, but also apparently similar cells, showed different spectra. Several cell components could be detected and varied in quantity. Compared to other methods for single-cell analysis, the new method is much more time-consuming to analyse one individual cell. However, a large amount of chemical information is obtained from each single cell in a non-destructive, non-invasive way. Raman microscopy appears to be a suitable method for studying population distributions in bacterial cultures.
---
paper_title: Application of Raman Microscopy for Simultaneous and Quantitative Evaluation of Multiple Intracellular Polymers Dynamics Functionally Relevant to Enhanced Biological Phosphorus Removal Processes
paper_content:
Polyphosphate (poly-P), polyhydroxyalkanoates (PHAs), and glycogen are the key functionally relevant intracellular polymers involved in the enhanced biological phosphorus removal (EBPR) process. Further understanding of the mechanisms of EBPR has been hampered by the lack of cellular level quantification tools to accurately measure the dynamics of these polymers during the EBPR process. In this study, we developed a novel Raman microscopy method for simultaneous identification and quantification of poly-P, PHB, and glycogen abundance in each individual cell and their distribution among the populations in EBPR. Validation of the method was demonstrated via a batch phosphorus uptake and release test, in which the total intracellular polymers abundance determined via Raman approach correlated well with those measured via conventional bulk chemical analysis (correlation coefficient r = 0.8 for poly-P, r = 0.94 for PHB, and r = 0.7 for glycogen). Raman results, for the first time, clearly showed the distributions of microbial cells containing different abundance levels of the three intracellular polymers under the same environmental conditions (at a given time point), indicating population heterogeneity exists. The results revealed the intracellular distribution and dynamics of the functionally relevant polymers in different metabolic stages of the EBPR process and elucidated the association of cellular metabolic state with the fate of these polymers during various substrates availability conditions.
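The per-cell scoring and bulk validation underlying the correlations quoted above can be sketched as follows; the band windows are approximate literature positions for poly-P, PHB and glycogen rather than calibrated values from the paper, and the spectra and bulk measurements are assumed inputs.
```python
# Sketch (approximate band windows, assumed inputs): score each cell's poly-P, PHB and
# glycogen from baseline-corrected Raman band areas, then correlate population totals
# with bulk chemical measurements across time points.
import numpy as np
from scipy.stats import pearsonr

BANDS_CM1 = {"polyP": (1155, 1185), "PHB": (1720, 1745), "glycogen": (465, 495)}

def band_area(shift, spectrum, lo, hi):
    """Area of one band after subtracting a straight-line local baseline."""
    mask = (shift >= lo) & (shift <= hi)
    baseline = np.linspace(spectrum[mask][0], spectrum[mask][-1], mask.sum())
    return float(np.trapz(np.clip(spectrum[mask] - baseline, 0.0, None), shift[mask]))

def cell_scores(shift, cell_spectra):
    """Rows = cells, columns = (polyP, PHB, glycogen) scores."""
    return np.array([[band_area(shift, s, *BANDS_CM1[k]) for k in BANDS_CM1]
                     for s in cell_spectra])

def correlate_with_bulk(raman_totals, bulk_values):
    """Pearson r between Raman-derived population totals and bulk chemistry."""
    r, _ = pearsonr(raman_totals, bulk_values)
    return r
```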
---
paper_title: Characterization of intact subcellular bodies in whole bacteria by cryo-electron tomography and spectroscopic imaging.
paper_content:
We illustrate the combined use of cryo-electron tomography and spectroscopic difference imaging in the study of subcellular structure and subcellular bodies in whole bacteria. We limited our goal and focus to bodies with a distinct elemental composition that was in a sufficiently high concentration to provide the necessary signal-to-noise level at the relatively large sample thicknesses of the intact cell. This combination proved very powerful, as demonstrated by the identification of a phosphorus-rich body in Caulobacter crescentus. We also confirmed the presence of a body rich in carbon, demonstrated that these two types of bodies are readily recognized and distinguished from each other, and provided, for the first time to our knowledge, structural information about them in their intact state. In addition, we also showed the presence of a similar type of phosphorus-rich body in Deinococcus grandis, a member of a completely unrelated bacteria genus. Cryo-electron microscopy and tomography allowed the study of the biogenesis and morphology of these bodies at resolutions better than 10 nm, whereas spectroscopic difference imaging provided a direct identification of their chemical composition.
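The elemental contrast behind the phosphorus-rich body identification described above can be approximated with a simple two-window calculation on aligned energy-filtered images; the sketch below assumes such images as inputs and is not the authors' processing chain.
```python
# Sketch (assumed aligned energy-filtered TEM images): map excess intensity at the
# phosphorus edge relative to a pre-edge image, then flag pixels well above background.
import numpy as np

def phosphorus_map(pre_edge: np.ndarray, post_edge: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Two-window elemental contrast: relative excess intensity at the element edge."""
    return (post_edge - pre_edge) / (pre_edge + eps)

def segment_bodies(p_map: np.ndarray, sigma_threshold: float = 3.0) -> np.ndarray:
    """Boolean mask of pixels whose contrast lies well above the background level."""
    background = np.median(p_map)
    spread = np.std(p_map)
    return p_map > background + sigma_threshold * spread
```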
---
paper_title: Direct Labeling of Polyphosphate at the Ultrastructural Level in Saccharomyces cerevisiae by Using the Affinity of the Polyphosphate Binding Domain of Escherichia coli Exopolyphosphatase
paper_content:
Inorganic polyphosphate (polyP) is a linear polymer of orthophosphate and has many biological functions in prokaryotic and eukaryotic organisms. To investigate polyP localization, we developed a novel technique using the affinity of the recombinant polyphosphate binding domain (PPBD) of Escherichia coli exopolyphosphatase to polyP. An epitope-tagged PPBD was expressed and purified from E. coli. Equilibrium binding assay of PPBD revealed its high affinity for long-chain polyP and its weak affinity for short-chain polyP and nucleic acids. To directly demonstrate polyP localization in Saccharomyces cerevisiae on resin sections prepared by rapid freezing and freeze-substitution, specimens were labeled with PPBD containing an epitope tag and then the epitope tag was detected by an indirect immunocytochemical method. A goat anti-mouse immunoglobulin G antibody conjugated with Alexa 488 for laser confocal microscopy or with colloidal gold for transmission electron microscopy was used. When the S. cerevisiae was cultured in yeast extract-peptone-dextrose medium (10 mM phosphate) for 10 h, polyP was distributed in a dispersed fashion in vacuoles in successfully cryofixed cells. A few polyP signals of the labeling were sometimes observed in cytosol around vacuoles with electron microscopy. Under our experimental conditions, polyP granules were not observed. Therefore, it remains unclear whether the method can detect the granule form. The method directly demonstrated the localization of polyP at the electron microscopic level for the first time and enabled the visualization of polyP localization with much higher specificity and resolution than with other conventional methods.
---
paper_title: Polyphosphate kinase from activated sludge performing enhanced biological phosphorus removal
paper_content:
A novel polyphosphate kinase (PPK) was retrieved from an uncultivated organism in activated sludge carrying out enhanced biological phosphorus removal (EBPR). Acetate-fed laboratory-scale sequencing batch reactors were used to maintain sludge with a high phosphorus content (approximately 11% of the biomass). PCR-based clone libraries of small subunit rRNA genes and fluorescent in situ hybridization (FISH) were used to verify that the sludge was enriched in Rhodocyclus-like beta-Proteobacteria known to be associated with sludges carrying out EBPR. These organisms comprised approximately 80% of total bacteria in the sludge, as assessed by FISH. Degenerate PCR primers were designed to retrieve fragments of putative ppk genes from a pure culture of Rhodocyclus tenuis and from organisms in the sludge. Four novel ppk homologs were found in the sludge, and two of these (types I and II) shared a high degree of amino acid similarity with R. tenuis PPK (86 and 87% similarity, respectively). Dot blot analysis of total RNA extracted from sludge demonstrated that the Type I ppk mRNA was present, indicating that this gene is expressed during EBPR. Inverse PCR was used to obtain the full Type I sequence from sludge DNA, and a full-length PPK was cloned, overexpressed, and purified to near homogeneity. The purified PPK has a specific activity comparable to that of other PPKs, has a requirement for Mg(2+), and does not appear to operate in reverse. PPK activity was found mainly in the particulate fraction of lysed sludge microorganisms.
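To make the degenerate-primer step concrete, the sketch below enumerates the oligonucleotide pool encoded by an IUPAC-coded primer; the example primer is made up, not one of the primers used in the study.
```python
# Sketch: expand an IUPAC degenerate primer into its pool of concrete oligos and report
# its degeneracy. The example primer is hypothetical.
import math
from itertools import product

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand_degenerate(primer: str):
    """Yield every non-degenerate sequence represented by the primer."""
    return ("".join(bases) for bases in product(*(IUPAC[b] for b in primer.upper())))

def degeneracy(primer: str) -> int:
    """Number of distinct oligos in the primer pool."""
    return math.prod(len(IUPAC[b]) for b in primer.upper())

primer = "GAYATHGCNAARTTYGG"                 # hypothetical example
print(degeneracy(primer), "oligos, e.g.", next(expand_degenerate(primer)))
```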
---
paper_title: Radiolabelled proteomics to determine differential functioning of Accumulibacter during the anaerobic and aerobic phases of a bioreactor operating for enhanced biological phosphorus removal
paper_content:
Proteins synthesized by the mixed microbial community of two sequencing batch reactors run for enhanced biological phosphorus removal (EBPR) during aerobic and anaerobic reactor phases were compared, using mass spectrometry-based proteomics and radiolabelling. Both sludges were dominated by polyphosphate-accumulating organisms belonging to Candidatus Accumulibacter and the majority of proteins identified matched closest to these bacteria. Enzymes from the Embden-Meyerhof-Parnas pathway were identified, suggesting this is the major glycolytic pathway for these Accumulibacter populations. Enhanced aerobic synthesis of glyoxylate cycle enzymes suggests this cycle is important during the aerobic phase of EBPR. In one sludge, several TCA cycle enzymes showed enhanced aerobic synthesis, suggesting this cycle is unimportant anaerobically. The second sludge showed enhanced synthesis of TCA cycle enzymes under anaerobic conditions, suggesting full or partial TCA cycle operation anaerobically. A phylogenetic analysis of Accumulibacter polyphosphate kinase genes from each sludge demonstrated that different Accumulibacter populations dominated the two sludges. Thus, TCA cycle activity differences may be due to Accumulibacter strain differences. The major fatty acids present in Accumulibacter-dominated sludge include palmitic, hexadecenoic and cis-vaccenic acid and fatty acid content increased by approximately 20% during the anaerobic phase. We hypothesize that this is associated with increased anaerobic phospholipid membrane biosynthesis, to accommodate intracellular polyhydroxyalkanoate granules.
---
paper_title: Metaproteomics Provides Functional Insight into Activated Sludge Wastewater Treatment
paper_content:
Background: Through identification of highly expressed proteins from a mixed culture activated sludge system this study provides functional evidence of microbial transformations important for enhanced biological phosphorus removal (EBPR). Methodology/Principal findings: A laboratory-scale sequencing batch reactor was successfully operated for different levels of EBPR, removing around 25, 40 and 55 mg/l P. The microbial communities were dominated by the uncultured polyphosphate-accumulating organism "Candidatus Accumulibacter phosphatis". When EBPR failed, the sludge was dominated by tetrad-forming alpha-Proteobacteria. Representative and reproducible 2D gel protein separations were obtained for all sludge samples. 638 protein spots were matched across gels generated from the phosphate removing sludges. 111 of these were excised and 46 proteins were identified using recently available sludge metagenomic sequences. Many of these closely match proteins from "Candidatus Accumulibacter phosphatis" and could be directly linked to the EBPR process. They included enzymes involved in energy generation, polyhydroxyalkanoate synthesis, glycolysis, gluconeogenesis, glycogen synthesis, glyoxylate/TCA cycle, fatty acid beta oxidation, fatty acid synthesis and phosphate transport. Several proteins involved in cellular stress response were detected. Conclusions/Significance: Importantly, this study provides direct evidence linking the metabolic activities of "Accumulibacter" to the chemical transformations observed in EBPR. Finally, the results are discussed in relation to current EBPR metabolic models.
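On the identification side, matching observed peptides back to metagenome-derived protein sequences is the essential step; the following is a simplified sketch of that lookup (a naive in-silico tryptic digest plus substring search), with hypothetical file and peptide names, not the search engine actually used.
```python
# Sketch (hypothetical file/peptide names): find which metagenome-derived proteins
# could explain a peptide observed by mass spectrometry, using a naive trypsin digest.
from Bio import SeqIO

def tryptic_peptides(protein: str, min_len: int = 6):
    """Naive trypsin digest: cleave after K or R unless followed by P."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
            if i + 1 - start >= min_len:
                peptides.append(protein[start:i + 1])
            start = i + 1
    if len(protein) - start >= min_len:
        peptides.append(protein[start:])
    return peptides

def proteins_matching(peptide: str, fasta_path: str):
    """IDs of database proteins whose tryptic digest contains the peptide."""
    return [rec.id for rec in SeqIO.parse(fasta_path, "fasta")
            if peptide in tryptic_peptides(str(rec.seq))]

# e.g. proteins_matching("LGEHNIDVLEGNEQFINAAK", "sludge_metagenome_proteins.faa")
```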
---
| Title: Characterisation of Phosphate Accumulating Organisms and Techniques for Polyphosphate Detection: A Review
Section 1: Introduction
Description 1: Outline the importance of phosphorus in food production, current sources of phosphorus, geopolitical considerations, and the rationale for researching phosphate-accumulating organisms (PAOs) and polyphosphate (Poly-P) detection techniques.
Section 2: Isolation of PAOs
Description 2: Discuss the various forms of phosphate accumulation, the isolation attempts of PAOs, and the conditions affecting phosphate uptake and release in different microorganisms.
Section 3: Poly-P Detection and Identification of PAOs
Description 3: Provide an overview of the methods used to identify and quantify Poly-P in microbial cells, focusing on different detection techniques and the specific information they offer.
Section 4: Staining Techniques-Light and Epifluorescence Microscopy (LEM)
Description 4: Describe the staining techniques used for detecting Poly-P through light and epifluorescence microscopy, including the specific dyes and protocols used.
Section 5: Flow Cytometry (FC)
Description 5: Detail the application of flow cytometry in Poly-P detection, including the staining procedures, data acquisition, analysis, and separation of cells.
Section 6: Fluorescence in Situ Hybridization (FISH) Analysis
Description 6: Explain the use of FISH in identifying PAOs, studying gene expression, and targeting specific bacterial clades in environmental samples.
Section 7: Extraction Procedures and Polyphosphate Quantification (EXT)
Description 7: Summarize the methods for extracting Poly-P from microbial cells and the subsequent quantification techniques.
Section 8: Polyacrylamide Gel Electrophoresis (PAGE)
Description 8: Discuss the use of gel electrophoresis in determining the degree of polymerization of Poly-P, including extraction and staining methods.
Section 9: Electron Microscopy (EM)
Description 9: Explain the role of electron microscopy in visualizing Poly-P granules in cells and its combination with other analytical techniques.
Section 10: X-Ray Analysis Techniques (X-RAY)
Description 10: Describe how X-ray analysis provides information about the composition of Poly-P granules and its application in combination with microscopy.
Section 11: Nuclear Magnetic Resonance Spectroscopy (NMRS)
Description 11: Outline the use of NMR spectroscopy to study the physiology of PAOs and the presence of Poly-P in different sample types.
Section 12: RAMAN Spectromicroscopy (RAM)
Description 12: Detail the application of RAMAN spectromicroscopy in identifying and quantifying Poly-P within individual cells and examining microbiota composition.
Section 13: Enzyme Assays (EA)
Description 13: Discuss enzyme-based methods for detecting Poly-P, highlighting the enzymatic reactions and the analysis of resulting products.
Section 14: Cryoelectron Tomography and Spectroscopic Imaging (CTSI)
Description 14: Describe the technique of cryoelectron tomography combined with spectroscopic imaging for studying the structure of PAOs and Poly-P.
Section 15: Mass Spectrometry (MS)
Description 15: Provide an overview of the use of mass spectrometry in analyzing Poly-P and related compounds in microbial samples.
Section 16: Proteic Affinity (PA)
Description 16: Explain the technique based on protein affinity for detecting and locating Poly-P granules within cells.
Section 17: "Omics" Techniques (OMICS)
Description 17: Summarize the use of metagenomics, metatranscriptomics, and metaproteomics in studying PAOs and their roles in the EBPR process.
Section 18: Conclusions
Description 18: Highlight the diversity of techniques available for isolating and detecting Poly-P, the need for further research on PAO culture, and potential alternative approaches to phosphate recovery. |
Malware Analysis and Classification: A Survey | 6 | ---
paper_title: Dynamic Analysis of Malicious Code
paper_content:
Malware analysis is the process of determining the purpose and functionality of a given malware sample (such as a virus, worm, or Trojan horse). This process is a necessary step to be able to develop effective detection techniques for malicious code. In addition, it is an important prerequisite for the development of removal tools that can thoroughly delete malware from an infected machine. Traditionally, malware analysis has been a manual process that is tedious and time-intensive. Unfortunately, the number of samples that need to be analyzed by security vendors on a daily basis is constantly increasing. This clearly reveals the need for tools that automate and simplify parts of the analysis process. In this paper, we present TTAnalyze, a tool for dynamically analyzing the behavior of Windows executables. To this end, the binary is run in an emulated operating system environment and its (security-relevant) actions are monitored. In particular, we record the Windows native system calls and Windows API functions that the program invokes. One important feature of our system is that it does not modify the program that it executes (e.g., through API call hooking or breakpoints), making it more difficult to detect by malicious code. Also, our tool runs binaries in an unmodified Windows environment, which leads to excellent emulation accuracy. These factors make TTAnalyze an ideal tool for quickly getting an understanding of the behavior of an unknown malware.
---
paper_title: A survey on automated dynamic malware-analysis techniques and tools
paper_content:
Anti-virus vendors are confronted with a multitude of potentially malicious samples today. Receiving thousands of new samples every day is not uncommon. The signatures that detect confirmed malicious threats are mainly still created manually, so it is important to discriminate between samples that pose a new unknown threat and those that are mere variants of known malware. This survey article provides an overview of techniques based on dynamic analysis that are used to analyze potentially malicious samples. It also covers analysis programs that employ these techniques to assist human analysts in assessing, in a timely and appropriate manner, whether a given sample deserves closer manual inspection due to its unknown malicious behavior.
---
paper_title: Ether: malware analysis via hardware virtualization extensions
paper_content:
Malware has become the centerpiece of most security threats on the Internet. Malware analysis is an essential technology that extracts the runtime behavior of malware, and supplies signatures to detection systems and provides evidence for recovery and cleanup. The focal point in the malware analysis battle is how to detect versus how to hide a malware analyzer from malware during runtime. State-of-the-art analyzers reside in or emulate part of the guest operating system and its underlying hardware, making them easy to detect and evade. In this paper, we propose a transparent and external approach to malware analysis, which is motivated by the intuition that for a malware analyzer to be transparent, it must not induce any side-effects that are unconditionally detectable by malware. Our analyzer, Ether, is based on a novel application of hardware virtualization extensions such as Intel VT, and resides completely outside of the target OS environment. Thus, there are no in-guest software components vulnerable to detection, and there are no shortcomings that arise from incomplete or inaccurate system emulation. Our experiments are based on our study of obfuscation techniques used to create 25,000 recent malware samples. The results show that Ether remains transparent and defeats the obfuscation tools that evade existing approaches.
---
paper_title: TTAnalyze: A Tool for Analyzing Malware
paper_content:
Malware analysis is the process of determining the purpose and functionality of a given malware sample (such as a virus, worm, or Trojan horse). This process is a necessary step to be able to develop effective detection techniques for malicious code. In addition, it is an important prerequisite for the development of removal tools that can thoroughly delete malware from an infected machine. Traditionally, malware analysis has been a manual process that is tedious and time-intensive. Unfortunately, the number of samples that need to be analyzed by security vendors on a daily basis is constantly increasing. This clearly reveals the need for tools that automate and simplify parts of the analysis process. In this paper, we present TTAnalyze, a tool for dynamically analyzing the behavior of Windows executables. To this end, the binary is run in an emulated operating system environment and its (security-relevant) actions are monitored. In particular, we record the Windows native system calls and Windows API functions that the program invokes. One important feature of our system is that it does not modify the program that it executes (e.g., through API call hooking or breakpoints), making it more difficult to detect by malicious code. Also, our tool runs binaries in an unmodified Windows environment, which leads to excellent emulation accuracy. These factors make TTAnalyze an ideal tool for quickly getting an understanding of the behavior of an unknown malware.
---
paper_title: Toward Automated Dynamic Malware Analysis Using CWSandbox
paper_content:
Malware is notoriously difficult to combat because it appears and spreads so quickly. In this article, we describe the design and implementation of CWSandbox, a malware analysis tool that fulfills our three design criteria of automation, effectiveness, and correctness for the Win32 family of operating systems
---
paper_title: Unknown malcode detection via text categorization and the imbalance problem
paper_content:
Today's signature-based anti-viruses are very accurate, but are limited in detecting new malicious code. Currently, dozens of new malicious codes are created every day, and this number is expected to increase in the coming years. Recently, classification algorithms were used successfully for the detection of unknown malicious code. These studies used a test collection of limited size with the same malicious-to-benign file ratio in both the training and test sets, which does not reflect real-life conditions. In this paper we present a methodology for the detection of unknown malicious code, based on text categorization concepts. We performed an extensive evaluation using a test collection that contains more than 30,000 malicious and benign files, in which we investigated the imbalance problem. In real-life scenarios, the malicious file content is expected to be low, about 10% of the total files. For practical purposes, it is unclear as to what the corresponding percentage in the training set should be. Our results indicate that greater than 95% accuracy can be achieved through the use of a training set that contains below 20% malicious file content.
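To make the text-categorization view concrete, the following sketch builds binary byte n-gram features from raw executables and fits a classifier on a deliberately imbalanced training set; the n-gram length, feature count, file-handling details and the choice of a decision-tree learner are illustrative assumptions, not details taken from the paper.

```
# Sketch: byte n-gram features for malicious/benign classification (illustrative only).
from collections import Counter
from pathlib import Path
from sklearn.tree import DecisionTreeClassifier

N = 4          # n-gram length (assumption)
TOP_K = 500    # number of n-gram features kept (assumption)

def ngrams(data: bytes, n: int = N):
    return (data[i:i + n] for i in range(len(data) - n + 1))

def build_vocabulary(files):
    df = Counter()
    for path in files:
        df.update(set(ngrams(Path(path).read_bytes())))
    return [g for g, _ in df.most_common(TOP_K)]

def vectorize(path, vocab):
    present = set(ngrams(Path(path).read_bytes()))
    return [1 if g in present else 0 for g in vocab]

def train(benign_files, malicious_files):
    # An imbalanced training set (e.g. ~10-20% malicious files) can be passed in directly.
    files = list(benign_files) + list(malicious_files)
    labels = [0] * len(benign_files) + [1] * len(malicious_files)
    vocab = build_vocabulary(files)
    X = [vectorize(f, vocab) for f in files]
    return DecisionTreeClassifier().fit(X, labels), vocab
```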
---
paper_title: Function length as a tool for malware classification
paper_content:
The proliferation of malware is a serious threat to computer and information systems throughout the world. Anti-malware companies are continually challenged to identify and counter new malware as it is released into the wild. In attempts to speed up this identification and response, many researchers have examined ways to efficiently automate classification of malware as it appears in the environment. In this paper, we present a fast, simple and scalable method of classifying Trojans based only on the lengths of their functions. Our results indicate that function length may play a significant role in classifying malware, and, combined with other features, may result in a fast, inexpensive and scalable method of malware classification.
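A minimal sketch of the function-length idea, assuming the lengths (in bytes) of a binary's functions are already available from a disassembler: bucket them into a fixed histogram that can serve as a feature vector. The bin edges and normalisation are assumptions for illustration.

```
# Sketch: turn a list of function lengths into a fixed-length feature vector.
import numpy as np

# Logarithmically spaced bins cover short helpers up to very large functions (assumed edges).
BIN_EDGES = np.logspace(1, 5, num=16)   # roughly 10 bytes ... 100 kB

def function_length_features(function_lengths):
    counts, _ = np.histogram(function_lengths, bins=BIN_EDGES)
    total = counts.sum()
    # Normalise so binaries with different numbers of functions remain comparable.
    return counts / total if total else counts

# Example: a hypothetical Trojan with many tiny thunks and a few large routines.
print(function_length_features([12, 18, 25, 31, 40, 44, 512, 2048, 16384]))
```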
---
paper_title: Improving malware classification: bridging the static/dynamic gap
paper_content:
Malware classification systems have typically used some machine learning algorithm in conjunction with either static or dynamic features collected from the binary. Recently, more advanced malware has introduced mechanisms to avoid detection in these views by using obfuscation techniques to avoid static detection and execution-stalling techniques to avoid dynamic detection. In this paper we construct a classification framework that is able to incorporate both static and dynamic views into a unified framework in the hopes that, while a malicious executable can disguise itself in some views, disguising itself in every view while maintaining malicious intent will prove to be substantially more difficult. Our method uses kernels to place a similarity metric on each distinct view and then employs multiple kernel learning to find a weighted combination of the data sources which yields the best classification accuracy in a support vector machine classifier. Our approach opens up new avenues of malware research which will allow the research community to elegantly look at multiple facets of malware simultaneously, and which can easily be extended to integrate any new data sources that may become popular in the future.
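The static/dynamic fusion can be sketched as a convex combination of two precomputed kernel (similarity) matrices fed to a support vector machine; the paper learns the combination weights with multiple kernel learning, so the fixed weight below is a simplification.

```
# Sketch: combine a "static view" kernel and a "dynamic view" kernel for one SVM.
import numpy as np
from sklearn.svm import SVC

def combine_kernels(K_static, K_dynamic, w=0.5):
    """Convex combination of two same-sized kernel (similarity) matrices."""
    return w * np.asarray(K_static) + (1.0 - w) * np.asarray(K_dynamic)

def train_on_views(K_static, K_dynamic, labels, w=0.5):
    K = combine_kernels(K_static, K_dynamic, w)
    clf = SVC(kernel="precomputed")
    clf.fit(K, labels)
    return clf

# Prediction later requires the combined kernel between test and training samples:
# clf.predict(combine_kernels(K_static_test_train, K_dynamic_test_train, w))
```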
---
paper_title: OPEM: A Static-Dynamic Approach for Machine-Learning-Based Malware Detection
paper_content:
Malware is any computer software potentially harmful to both computers and networks. The amount of malware is growing every year and poses a serious global security threat. Signature-based detection is the most extended method in commercial antivirus software, however, it consistently fails to detect new malware. Supervised machine learning has been adopted to solve this issue. There are two types of features that supervised malware detectors use: (i) static features and (ii) dynamic features. Static features are extracted without executing the sample whereas dynamic ones require an execution. Both approaches have their advantages and disadvantages. In this paper, we propose for the first time, OPEM, a hybrid unknown malware detector which combines the frequency of occurrence of operational codes (statically obtained) with the information of the execution trace of an executable (dynamically obtained). We show that this hybrid approach enhances the performance of both approaches when run separately.
---
paper_title: Scalable, Behavior-Based Malware Clustering
paper_content:
Anti-malware companies receive thousands of malware samples every day. To process this large quantity, a number of automated analysis tools were developed. These tools execute a malicious program in a controlled environment and produce reports that summarize the program’s actions. Of course, the problem of analyzing the reports still remains. Recently, researchers have started to explore automated clustering techniques that help to identify samples that exhibit similar behavior. This allows an analyst to discard reports of samples that have been seen before, while focusing on novel, interesting threats. Unfortunately, previous techniques do not scale well and frequently fail to generalize the observed activity well enough to recognize related malware.
---
paper_title: Learning to detect malicious executables in the wild
paper_content:
In this paper, we describe the development of a fielded application for detecting malicious executables in the wild. We gathered 1971 benign and 1651 malicious executables and encoded each as a training example using n-grams of byte codes as features. Such processing resulted in more than 255 million distinct n-grams. After selecting the most relevant n-grams for prediction, we evaluated a variety of inductive methods, including naive Bayes, decision trees, support vector machines, and boosting. Ultimately, boosted decision trees outperformed other methods with an area under the ROC curve of 0.996. Results also suggest that our methodology will scale to larger collections of executables. To the best of our knowledge, ours is the only fielded application for this task developed using techniques from machine learning and data mining.
---
paper_title: Malware images: visualization and automatic classification
paper_content:
We propose a simple yet effective method for visualizing and classifying malware using image processing techniques. Malware binaries are visualized as gray-scale images, with the observation that for many malware families, the images belonging to the same family appear very similar in layout and texture. Motivated by this visual similarity, a classification method using standard image features is proposed. Neither disassembly nor code execution is required for classification. Preliminary experimental results are quite promising with 98% classification accuracy on a malware database of 9,458 samples with 25 different malware families. Our technique also exhibits interesting resilience to popular obfuscation techniques such as section encryption.
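The visualization step is easy to sketch: read the binary, pick an image width from the file size, and reshape the bytes into a grey-scale matrix; the width table below loosely follows the paper's convention but the exact values, and the omission of the texture (GIST) features used for classification, are simplifications.

```
# Sketch: render a binary as a grey-scale image matrix.
import numpy as np

def bytes_to_image(path):
    data = np.fromfile(path, dtype=np.uint8)
    # Width chosen from file size (assumed thresholds, loosely following the paper's table).
    kb = data.size // 1024
    width = 32 if kb < 10 else 64 if kb < 30 else 128 if kb < 60 else 256 if kb < 100 else 512
    height = data.size // width
    return data[: height * width].reshape(height, width)

# img = bytes_to_image("sample.exe")   # 2-D uint8 array, viewable with matplotlib's imshow
# Texture features (e.g. GIST) would then be computed on `img` for classification.
```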
---
paper_title: A comparative assessment of malware classification using binary texture analysis and dynamic analysis
paper_content:
AI techniques play an important role in automated malware classification. Several machine-learning methods have been applied to classify or cluster malware into families, based on different features derived from dynamic review of the malware. While these approaches demonstrate promise, they are themselves subject to a growing array of counter measures that increase the cost of capturing these binary features. Further, feature extraction requires a time investment per binary that does not scale well to the daily volume of binary instances being reported by those who diligently collect malware. Recently, a new type of feature extraction, used by a classification approach called binary-texture analysis, was introduced in [16]. We compare this approach to existing malware classification approaches previously published. We find that, while binary texture analysis is capable of providing comparable classification accuracy to that of contemporary dynamic techniques, it can deliver these results 4000 times faster than dynamic techniques. Also surprisingly, the texture-based approach seems resilient to contemporary packing strategies, and can robustly classify a large corpus of malware with both packed and unpacked samples. We present our experimental results from three independent malware corpora, comprised of over 100 thousand malware samples. These results suggest that binary-texture analysis could be a useful and efficient complement to dynamic analysis.
---
paper_title: Discriminant malware distance learning on structural information for automated malware classification
paper_content:
The voluminous malware variants that appear in the Internet have posed severe threats to its security. In this work, we explore techniques that can automatically classify malware variants into their corresponding families. We present a generic framework that extracts structural information from malware programs as attributed function call graphs, in which rich malware features are encoded as attributes at the function level. Our framework further learns discriminant malware distance metrics that evaluate the similarity between the attributed function call graphs of two malware programs. To combine various types of malware attributes, our method adaptively learns the confidence level associated with the classification capability of each attribute type and then adopts an ensemble of classifiers for automated malware classification. We evaluate our approach with a number of Windows-based malware instances belonging to 11 families, and experimental results show that our automated malware classification method is able to achieve high classification accuracy.
---
paper_title: Semi-supervised Learning for Unknown Malware Detection
paper_content:
Malware is any kind of computer software potentially harmful to both computers and networks. The amount of malware is increasing every year and poses a serious global security threat. Signature-based detection is the most widely used commercial antivirus method, however, it consistently fails to detect new malware. Supervised machine-learning models have been used to solve this issue, but the usefulness of supervised learning is far from perfect because it requires that a significant amount of malicious code and benign software be identified and labelled beforehand. In this paper, we propose a new method of malware protection that adopts a semi-supervised learning approach to detect unknown malware. This method is designed to build a machine-learning classifier using a set of labelled (malware and legitimate software) and unlabelled instances. We performed an empirical validation demonstrating that the labelling efforts are lower than when supervised learning is used, while maintaining high accuracy rates.
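A minimal sketch of the semi-supervised setting, assuming feature vectors are already extracted: a graph-based label-propagation model from scikit-learn is fitted on data where unlabelled samples carry the label -1. The choice of LabelSpreading and its parameters is illustrative; the paper evaluates its own set of semi-supervised learners.

```
# Sketch: learning from partially labelled malware/goodware feature vectors.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def fit_partially_labelled(X, y):
    """y uses 1 = malware, 0 = legitimate, -1 = unlabelled (scikit-learn convention)."""
    model = LabelSpreading(kernel="rbf", gamma=0.5)   # parameters are illustrative
    model.fit(np.asarray(X), np.asarray(y))
    return model

X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.85, 0.2], [0.15, 0.85], [0.88, 0.15]]
y = [1, -1, 0, -1, -1, 0]                 # only three samples are labelled
model = fit_partially_labelled(X, y)
print(model.transduction_)                # inferred labels for every sample
```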
---
paper_title: The WEKA data mining software: an update
paper_content:
More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.
---
paper_title: Automated malware classification based on network behavior
paper_content:
Over the past decade malware, i.e., malicious software, has become a major security threat on the Internet. Today anti-virus companies receive thousands of malicious samples every day. However, the vast majority of these samples are variants of existing malware. Due to the sheer number of malware variants it is important to accurately determine whether a sample belongs to a known malware family or exhibits a new behavior and thus requires further analysis and a separate detection signature. Despite the importance of network activity, the existing research on malware analysis does not fully leverage the malware network behavior for classification. In this paper, we propose an automated malware classification system that focuses on the network behavior of malware samples. Our approach employs behavioral profiles that summarize the network behavior of malware samples. The proposed approach is applied to a real world malware corpus. Our experimental results show the effectiveness of the proposed approach in classifying malware samples based only on the network activity exhibited by the samples.
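The idea of summarising a sample's network activity into a behavioural profile and matching it against known families can be sketched as below; the profile fields, the Jaccard similarity and the nearest-family threshold are illustrative assumptions rather than the paper's exact feature set.

```
# Sketch: network behavioural profiles compared with Jaccard similarity.
def profile(events):
    """events: iterable of (protocol, destination_port, action) tuples from a traffic trace."""
    return {f"{proto}:{port}:{action}" for proto, port, action in events}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(sample_profile, family_profiles, threshold=0.5):
    best = max(family_profiles, key=lambda f: jaccard(sample_profile, family_profiles[f]))
    return best if jaccard(sample_profile, family_profiles[best]) >= threshold else "unknown"

families = {
    "botA": profile([("tcp", 6667, "connect"), ("udp", 53, "query")]),
    "wormB": profile([("tcp", 445, "scan"), ("tcp", 445, "exploit")]),
}
sample = profile([("tcp", 6667, "connect"), ("udp", 53, "query"), ("tcp", 80, "get")])
print(classify(sample, families))
```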
---
paper_title: Collective classification for unknown malware detection
paper_content:
Malware is any type of computer software harmful to computers and networks. The amount of malware is increasing every year and poses a serious global security threat. Signature-based detection is the most broadly used commercial antivirus method, however, it fails to detect new and previously unseen malware. Supervised machine-learning models have been proposed in order to solve this issue, but the usefulness of supervised learning is far from perfect because it requires a significant amount of malicious code and benign software to be identified and labelled beforehand. In this paper, we propose a new method that adopts a collective learning approach to detect unknown malware. Collective classification is a type of semi-supervised learning that presents an interesting method for optimising the classification of partially-labelled data. In this way, we propose here, for the first time, collective classification algorithms to build different machine-learning classifiers using a set of labelled (as malware and legitimate software) and unlabelled instances. We perform an empirical validation demonstrating that the labelling efforts are lower than when supervised learning is used, while maintaining high accuracy rates.
---
paper_title: Fast malware classification by automated behavioral graph matching
paper_content:
Malicious software (malware) is a serious problem in the Internet. Malware classification is useful for detection and analysis of new threats for which signatures are not available, or possible (due to polymorphism). This paper proposes a new malware classification method based on maximal common subgraph detection. A behavior graph is obtained by capturing system calls during the execution (in a sandboxed environment) of the suspicious software. The method has been implemented and tested on a set of 300 malware instances in 6 families. Results demonstrate that, compared with previous classification methods, the method effectively groups the malware instances, is fast, and has a low false positive rate when presented with benign software.
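Exact maximal common subgraph detection is expensive, so the sketch below approximates the intuition by comparing the labelled edge sets of two behaviour graphs built from system-call traces; this edge-overlap proxy is a deliberate simplification and not the matching algorithm used in the paper.

```
# Sketch: rough behaviour-graph similarity via shared labelled edges.
def behaviour_edges(syscall_trace):
    """Build directed edges between consecutive system calls of one execution."""
    return {(a, b) for a, b in zip(syscall_trace, syscall_trace[1:])}

def graph_similarity(trace_a, trace_b):
    ea, eb = behaviour_edges(trace_a), behaviour_edges(trace_b)
    return len(ea & eb) / max(len(ea), len(eb), 1)   # fraction of common structure

t1 = ["OpenFile", "WriteFile", "RegSetValue", "CreateProcess"]
t2 = ["OpenFile", "WriteFile", "RegSetValue", "Sleep", "CreateProcess"]
print(graph_similarity(t1, t2))   # high overlap suggests the same family
```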
---
paper_title: An approach for malware behavior identification and classification
paper_content:
Malware is one of the major security threats that can disrupt computer operation. However, commercial anti-virus and anti-spyware products that use signature-based matching to detect malware cannot cope with these kinds of threats. Nowadays malware writers try to avoid detection by using several techniques such as polymorphism, metamorphism and hiding techniques. In order to overcome this issue, we propose a new framework for malware behavior identification and classification that applies a dynamic approach. This framework consists of two major processes: behavior identification and malware classification. These two processes are integrated as interrelated steps in our proposed framework. The result of this study is a new framework that is able to identify and classify malware based on its behavior.
---
paper_title: An automated classification system based on the strings of trojan and virus families
paper_content:
Classifying malware correctly is an important research issue for anti-malware software producers. This paper presents an effective and efficient malware classification technique based on string information using several well-known classification algorithms. In our testing we extracted the printable strings from 1367 samples, including unpacked trojans and viruses and clean files. Information describing the printable strings contained in each sample was input to various classification algorithms, including tree-based classifiers, a nearest neighbour algorithm, statistical algorithms and AdaBoost. Using k-fold cross validation on the unpacked malware and clean files, we achieved a classification accuracy of 97%. Our results reveal that strings from library code (rather than malicious code itself) can be utilised to distinguish different malware families.
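A sketch of a string-based pipeline: extract printable ASCII strings from each sample, keep the most frequent strings as binary features, and train a classifier. The four-character minimum, the feature count and the Bernoulli naive Bayes learner are assumptions for illustration.

```
# Sketch: bag-of-printable-strings features for malware family classification.
import re
from collections import Counter
from pathlib import Path
from sklearn.naive_bayes import BernoulliNB

STRING_RE = re.compile(rb"[ -~]{4,}")     # runs of >= 4 printable ASCII chars (assumed cutoff)

def strings_in(path):
    return {s.decode("ascii", "ignore") for s in STRING_RE.findall(Path(path).read_bytes())}

def build_features(samples, top_k=1000):
    per_sample = [strings_in(p) for p in samples]
    df = Counter()
    for s in per_sample:
        df.update(s)                       # document frequency of each string
    vocab = [s for s, _ in df.most_common(top_k)]
    X = [[1 if v in s else 0 for v in vocab] for s in per_sample]
    return X, vocab

# X, vocab = build_features(list_of_paths)
# clf = BernoulliNB().fit(X, labels)      # labels = malware family or clean
```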
---
paper_title: Data mining methods for detection of new malicious executables
paper_content:
A serious security threat today is malicious executables, especially new, unseen malicious executables often arriving as email attachments. These new malicious executables are created at the rate of thousands every year and pose a serious security threat. Current anti-virus systems attempt to detect these new malicious programs with heuristics generated by hand. This approach is costly and oftentimes ineffective. We present a data mining framework that detects new, previously unseen malicious executables accurately and automatically. The data mining framework automatically found patterns in our data set and used these patterns to detect a set of new malicious binaries. Comparing our detection methods with a traditional signature-based method, our method more than doubles the current detection rates for new malicious executables.
---
paper_title: Fast Effective Rule Induction
paper_content:
Many existing rule learning systems are computationally expensive on large noisy datasets. In this paper we evaluate the recently-proposed rule learning algorithm IREP on a large and diverse collection of benchmark problems. We show that while IREP is extremely efficient, it frequently gives error rates higher than those of C4.5 and C4.5rules. We then propose a number of modifications resulting in an algorithm RIPPERk that is very competitive with C4.5rules with respect to error rates, but much more efficient on large samples. RIPPERk obtains error rates lower than or equivalent to C4.5rules on 22 of 37 benchmark problems, scales nearly linearly with the number of training examples, and can efficiently process noisy datasets containing hundreds of thousands of examples.
---
paper_title: Analysis of Machine learning Techniques Used in Behavior-Based Malware Detection
paper_content:
The increase of malware exploiting the Internet daily has become a serious threat. Manual heuristic inspection in malware analysis is no longer considered effective or efficient against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naive Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MLP). Based on the analysis of the tests and experimental results of all 5 classifiers, the overall best performance was achieved by the J48 decision tree with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.
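The comparison described above can be outlined with scikit-learn stand-ins for the five classifiers (a CART decision tree in place of J48) and cross-validated accuracy; the feature matrix, hyperparameters and scoring choice are assumptions for illustration.

```
# Sketch: comparing five classifiers on sparse behaviour-report feature vectors.
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier          # CART, standing in for J48
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

CLASSIFIERS = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=500),
}

def compare(X, y, folds=10):
    for name, clf in CLASSIFIERS.items():
        scores = cross_val_score(clf, X, y, cv=folds, scoring="accuracy")
        print(f"{name:14s} mean accuracy = {scores.mean():.3f}")

# compare(feature_matrix, labels)   # feature_matrix built from sandbox behaviour reports
```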
---
paper_title: Automated classification and analysis of internet malware
paper_content:
Numerous attacks, such as worms, phishing, and botnets, threaten the availability of the Internet, the integrity of its hosts, and the privacy of its users. A core element of defense against these attacks is anti-virus (AV) software--a service that detects, removes, and characterizes these threats. The ability of these products to successfully characterize these threats has far-reaching effects--from facilitating sharing across organizations, to detecting the emergence of new threats, and assessing risk in quarantine and cleanup. In this paper, we examine the ability of existing host-based anti-virus products to provide semantically meaningful information about the malicious software and tools (or malware) used by attackers. Using a large, recent collection of malware that spans a variety of attack vectors (e.g., spyware, worms, spam), we show that different AV products characterize malware in ways that are inconsistent across AV products, incomplete across malware, and that fail to be concise in their semantics. To address these limitations, we propose a new classification technique that describes malware behavior in terms of system state changes (e.g., files written, processes created) rather than in sequences or patterns of system calls. To address the sheer volume of malware and diversity of its behavior, we provide a method for automatically categorizing these profiles of malware into groups that reflect similar classes of behaviors and demonstrate how behavior-based clustering provides a more direct and effective way of classifying and analyzing Internet malware.
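The state-change view can be sketched by representing each sample as the set of system-state changes it caused (files written, processes created, registry keys set) and clustering samples hierarchically on Jaccard distance; the encoding of the changes and the distance cutoff are illustrative assumptions.

```
# Sketch: clustering malware by system state-change profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def jaccard_distance(a, b):
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def cluster_profiles(profiles, cutoff=0.6):
    n = len(profiles)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = jaccard_distance(profiles[i], profiles[j])
    Z = linkage(squareform(dist), method="average")
    return fcluster(Z, t=cutoff, criterion="distance")   # cluster id per sample

profiles = [
    {"file:C:/a.dll", "proc:svchost", "reg:Run/a"},
    {"file:C:/a.dll", "proc:svchost", "reg:Run/b"},
    {"file:C:/tmp.exe", "proc:cmd"},
]
print(cluster_profiles(profiles))
```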
---
paper_title: Graph-based malware detection using dynamic analysis
paper_content:
We introduce a novel malware detection algorithm based on the analysis of graphs constructed from dynamically collected instruction traces of the target executable. These graphs represent Markov chains, where the vertices are the instructions and the transition probabilities are estimated by the data contained in the trace. We use a combination of graph kernels to create a similarity matrix between the instruction trace graphs. The resulting graph kernel measures similarity between graphs on both local and global levels. Finally, the similarity matrix is sent to a support vector machine to perform classification. Our method is particularly appealing because we do not base our classifications on the raw n-gram data, but rather use our data representation to perform classification in graph space. We demonstrate the performance of our algorithm on two classification problems: benign software versus malware, and the Netbull virus with different packers versus other classes of viruses. Our results show a statistically significant improvement over signature-based and other machine learning-based detection methods.
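The graph-construction step can be sketched as follows: estimate a Markov transition matrix over a fixed instruction vocabulary from a dynamic instruction trace and compare two traces with a single Gaussian kernel on the flattened matrices; the vocabulary, kernel choice and bandwidth are assumptions, and the paper combines several graph kernels before the SVM.

```
# Sketch: instruction-trace Markov chains compared with a Gaussian kernel.
import numpy as np

VOCAB = ["mov", "push", "pop", "call", "ret", "jmp", "cmp", "add"]   # assumed instruction set
INDEX = {ins: i for i, ins in enumerate(VOCAB)}

def markov_matrix(trace):
    M = np.zeros((len(VOCAB), len(VOCAB)))
    for a, b in zip(trace, trace[1:]):
        M[INDEX[a], INDEX[b]] += 1
    row_sums = M.sum(axis=1, keepdims=True)
    return np.divide(M, row_sums, out=np.zeros_like(M), where=row_sums > 0)

def gaussian_kernel(trace_a, trace_b, sigma=1.0):
    d = markov_matrix(trace_a) - markov_matrix(trace_b)
    return float(np.exp(-np.sum(d * d) / (2 * sigma ** 2)))

print(gaussian_kernel(["push", "mov", "call", "ret"], ["push", "mov", "call", "jmp", "ret"]))
# The kernel matrix over all samples would then feed an SVC(kernel="precomputed").
```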
---
| ```
Title: Malware Analysis and Classification: A Survey
Section 1: Introduction
Description 1: Introduce the concept of malware, its variations, and the threat it poses to computer systems and networks.
Section 2: Malware Analysis
Description 2: Discuss the importance of analyzing malware to understand its risks and intentions.
Section 3: Static Analysis
Description 3: Explain the methodologies and tools used for analyzing malware without executing it, along with its limitations.
Section 4: Dynamic Analysis
Description 4: Describe the approach of analyzing malware by executing it in a controlled environment and the corresponding tools and techniques.
Section 5: Machine Learning for Detecting and Classifying Malwares
Description 5: Elaborate on various machine learning approaches used for identifying and classifying malware, detailing specific algorithms and their effectiveness.
Section 6: Conclusion
Description 6: Summarize the key points discussed in the paper about malware analysis techniques and the challenges faced by machine learning technologies in this domain.
``` |
A Survey of MAC Protocols for Cognitive Radio Body Area Networks | 6 | ---
paper_title: A Survey on Wireless Body Area Networks: Technologies and Design Challenges
paper_content:
Interest in Wireless Body Area Networks (WBANs) has increased significantly in recent years thanks to advances in microelectronics and wireless communications. Owing to the very stringent application requirements in terms of reliability, energy efficiency, and low device complexity, the design of these networks requires the definition of new protocols with respect to those used in general-purpose wireless sensor networks. This motivates the research and standardisation efforts of recent years. This survey paper aims at reporting an overview of the main WBAN applications, technologies and standards, issues in WBAN design, and evolutions. Some case studies are reported, based on both real implementations and experimentation in the field, and on simulations. These results aim to provide useful insights for WBAN designers and to highlight the main issues affecting the performance of these kinds of networks.
---
paper_title: A Survey on M2M Systems for mHealth: A Wireless Communications Perspective
paper_content:
In the new era of connectivity, marked by the explosive number of wireless electronic devices and the need for smart and pervasive applications, Machine-to-Machine (M2M) communications are an emerging technology that enables the seamless device interconnection without the need of human interaction. The use of M2M technology can bring to life a wide range of mHealth applications, with considerable benefits for both patients and healthcare providers. Many technological challenges have to be met, however, to ensure the widespread adoption of mHealth solutions in the future. In this context, we aim to provide a comprehensive survey on M2M systems for mHealth applications from a wireless communication perspective. An end-to-end holistic approach is adopted, focusing on different communication aspects of the M2M architecture. Hence, we first provide a systematic review of Wireless Body Area Networks (WBANs), which constitute the enabling technology at the patient's side, and then discuss end-to-end solutions that involve the design and implementation of practical mHealth applications. We close the survey by identifying challenges and open research issues, thus paving the way for future research opportunities.
---
paper_title: A Comprehensive Survey of MAC Protocols for Wireless Body Area Networks
paper_content:
In this paper, we present a comprehensive study of Medium Access Control (MAC) protocols developed for Wireless Body Area Networks (WBANs). In WBANs, small battery operated on-body or implanted biomedical sensor nodes are used to monitor physiological signs such as temperature, blood pressure, ElectroCardioGram (ECG), ElectroEncephaloGraphy (EEG) etc. We discuss design requirements for WBANs with major sources of energy dissipation. Then, we further investigate the existing designed protocols for WBANs with focus on their strengths and weaknesses. Paper ends up with concluding remarks and open research issues for future work.
---
paper_title: WiseMAC: an ultra low power MAC protocol for the downlink of infrastructure wireless sensor networks
paper_content:
This work proposes WiseMAC (Wireless Sensor MAC) for the downlink of infrastructure wireless sensor networks. WiseMAC is a novel energy-efficient medium access control protocol based on synchronized preamble sampling. The trade-off between power consumption and delay is analyzed, focusing on low traffic. WiseMAC is compared analytically with the power management protocol used in the IEEE 802.15.4 ZigBee standard. It is shown that WiseMAC can provide significantly lower power consumption for the same delay.
---
paper_title: An energy-efficient MAC protocol for wireless sensor networks
paper_content:
This paper proposes S-MAC, a medium-access control (MAC) protocol designed for wireless sensor networks. Wireless sensor networks use battery-operated computing and sensing devices. A network of these devices will collaborate for a common application such as environmental monitoring. We expect sensor networks to be deployed in an ad hoc fashion, with individual nodes remaining largely inactive for long periods of time, but then becoming suddenly active when something is detected. These characteristics of sensor networks and applications motivate a MAC that is different from traditional wireless MACs such as IEEE 802.11 in almost every way: energy conservation and self-configuration are primary goals, while per-node fairness and latency are less important. S-MAC uses three novel techniques to reduce energy consumption and support self-configuration. To reduce energy consumption in listening to an idle channel, nodes periodically sleep. Neighboring nodes form virtual clusters to auto-synchronize on sleep schedules. Inspired by PAMAS, S-MAC also sets the radio to sleep during transmissions of other nodes. Unlike PAMAS, it only uses in-channel signaling. Finally, S-MAC applies message passing to reduce contention latency for sensor-network applications that require store-and-forward processing as data move through the network. We evaluate our implementation of S-MAC over a sample sensor node, the Mote, developed at University of California, Berkeley. The experiment results show that, on a source node, an 802.11-like MAC consumes 2-6 times more energy than S-MAC for traffic load with messages sent every 1-10 s.
---
paper_title: Distributed Coordinated Spectrum Sharing MAC Protocol for Cognitive Radio
paper_content:
Recently, cognitive radio (CR) technology has been gathering more and more attention because of its capacity to deal with the scarcity of the precious spectrum resource. Within the domain of CR technology, channel management is of utmost importance due to its key role in enhancing transmission performance while keeping interference to primary users to a minimum. An 802.11 WLAN-based ad hoc protocol using cognitive radio is proposed in this paper. It provides detection of, and protection for, incumbent systems around the communicating pair by separating the spectrum into data channels and a common control channel. By adding the available channel list to the RTS and CTS, the communicating pair can learn which data subchannels are available (i.e., free of incumbent signals). We propose an ENNI (exchanging of neighbor nodes information) mechanism to deal with the hidden incumbent device problem. The simulation results show that by using our protocol the hidden incumbent device problem (HIDP) can be solved successfully.
---
paper_title: A MAC protocol for cognitive wireless body area sensor networking
paper_content:
In this paper, a Cognitive Radio based Medium Access Control (CR-MAC) protocol for Wireless Body Area Sensor Networks (WBASN) that utilizes cognitive radio transmission is proposed. In this proposal, the sensor nodes are classified into nodes carrying life-critical health information and nodes carrying non-critical health information. The CR-MAC protocol prioritizes critical packets' access to the transmission medium by transmitting them with higher power while transmitting lower priority packets at lower transmission power. At the receiver, a higher priority packet experiences a collision only when more than one critical packet is transmitted in the same time slot, while non-critical packets experience a collision when there is more than one transmission in the same time slot. The protocol is evaluated analytically. The obtained results demonstrate a differentiated service system which prioritizes critical traffic access to the transmission medium and increases the critical traffic throughput.
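To make the prioritisation idea concrete, the short calculation below evaluates a toy slotted model in which a critical packet is lost only if another critical node transmits in the same slot, while a non-critical packet is lost if any other node transmits; the node counts and transmission probability are assumptions and the model is far simpler than the analytical evaluation in the paper.

```
# Toy model: per-slot success probabilities for critical vs non-critical traffic.
def success_probabilities(n_critical, n_noncritical, p_tx):
    # Critical packet survives unless another critical node transmits in the same slot.
    p_crit = (1 - p_tx) ** (n_critical - 1)
    # Non-critical packet survives only if no other node (of either class) transmits.
    p_noncrit = (1 - p_tx) ** (n_critical + n_noncritical - 1)
    return p_crit, p_noncrit

p_c, p_n = success_probabilities(n_critical=3, n_noncritical=7, p_tx=0.1)
print(f"critical success/slot: {p_c:.3f}, non-critical success/slot: {p_n:.3f}")
```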
---
paper_title: Dynamic Channel Adjustable Asynchronous Cognitive Radio MAC Protocol for Wireless Medical Body Area Sensor Networks
paper_content:
Medical body area networks (MBANs) impose several requirements on the medium access control layer in various respects: energy efficiency, QoS provisioning, and reliability. A cognitive radio (CR) network should be able to sense its environment and adapt its communication to utilize the unused licensed spectrum without interfering with licensed users. As CR nodes need to hop from channel to channel to make the most of spectrum opportunities, we consider asynchronous medium access control (MAC) protocols to be a solution for these networks. The DCAA-MAC protocol presented in this paper has been designed specifically for wireless body area networks with cognitive radio capability. The DCAA-MAC protocol achieves energy efficiency, low latency, and no synchronization overhead by providing asynchronous operation and fast channel switching. Analytical models show that it operates at low energy consumption and scales well with network size.
---
paper_title: Study on ZigBee technology
paper_content:
Wireless Sensor Networks are being gradually introduced in different application scenarios. ZigBee is one of the most widely used transceiver standards in wireless sensor networks. ZigBee, built over IEEE 802.15.4, defines specifications for low data rate WPANs (LR-WPANs) to support low-power monitoring and control devices. This paper presents a detailed study of the ZigBee wireless standard, the IEEE 802.15.4 specification, ZigBee device types, the protocol stack architecture and its applications.
---
paper_title: A Survey on MAC Strategies for Cognitive Radio Networks
paper_content:
Dynamic spectrum policies combined with software defined radio are powerful means to improve the overall spectral efficiency allowing the development of new wireless services and technologies. Medium Access Control (MAC) protocols exploit sensing stimuli to build up a spectrum opportunity map (cognitive sensing). Available resources are scheduled (dynamic spectrum allocation), improving coexistence between users that belong to heterogeneous systems (dynamic spectrum sharing). Furthermore, MAC protocols may allow cognitive users to vacate selected channels when their quality becomes unacceptable (dynamic spectrum mobility). The contribution of this survey is threefold. First, we show the fundamental role of the MAC layer and identify its functionalities in a cognitive radio (CR) network. Second, a classification of cognitive MAC protocols is proposed. Third, advantages, drawbacks, and further design challenges of cognitive MAC protocols are discussed.
---
paper_title: Asynchronous MAC protocol for spectrum agility in Wireless Body Area Sensor Networks
paper_content:
A Wireless Body Area Sensor Network (WBASN) is a special-purpose Wireless Sensor Network (WSN) that supports remote monitoring and entertainment applications. Energy consumption plays an important role in the design of this specific sensor network. Unfortunately, the performance of WBASNs decreases in high-interference environments such as the Industrial, Scientific and Medical (ISM) band, where wireless spectrums are getting crowded. In this paper, an energy-efficient Medium Access Control (MAC) protocol named C-RICER (Cognitive-Receiver Initiated CyclEd Receiver) is specifically designed for WBASNs to work cognitively in high-interference environments. The C-RICER protocol adapts both transmission power and channel frequency to reduce interference and thus the energy consumption. The protocol is simulated using the OMNeT++ simulator. Simulation results show that, depending on the interference level, C-RICER is able to outperform the traditional RICER protocol in terms of energy consumption, packet delay, and network throughput.
---
paper_title: Physical layer designs for WBAN systems in IEEE 802.15.6 proposals
paper_content:
This paper presents the trend of physical layer designs for WBAN systems in IEEE 802.15.6 proposals. According to the technical requirement of the WBAN task group, many companies and research institutes have proposed physical layer architectures to provide fundamental technologies for the WBAN communication systems. Since there are various service scenarios for in-body or on-body applications, the physical layer proposals include UWB as well as narrowband techniques. Hence we summarize the design issues for the physical layer proposals with the category of narrowband and UWB signals. The key features of the proposals are described with the frequency bands, modulations, and other technical aspects.
---
paper_title: Body Area Networks: A Survey
paper_content:
Advances in wireless communication technologies, such as wearable and implantable biosensors, along with recent developments in the embedded computing area are enabling the design, development, and implementation of body area networks. This class of networks is paving the way for the deployment of innovative healthcare monitoring applications. In the past few years, much of the research in the area of body area networks has focused on issues related to wireless sensor designs, sensor miniaturization, low-power sensor circuitry, signal processing, and communications protocols. In this paper, we present an overview of body area networks, and a discussion of BAN communications types and their related issues. We provide a detailed investigation of sensor devices, physical layer, data link layer, and radio technology aspects of BAN research. We also present a taxonomy of BAN projects that have been introduced/proposed to date. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make BANs truly ubiquitous for a wide range of applications.
---
paper_title: Channel ranking algorithms for cognitive coexistence of IEEE 802.15.4
paper_content:
Widespread proliferation of competing technologies in the licence-free Industrial, Scientific and Medical (ISM) radio band is squeezing the room in the frequency, temporal and spatial domains for the reliable operation of low-power, low-cost IEEE 802.15.4 devices. In this context, providing some intelligence to IEEE 802.15.4 devices to analyze the environment and find the least interfered channel can result in significant improvement in performance and reliability. In this paper we prototyped a test bed that emulates wireless channels in order to evaluate the performance of IEEE 802.15.4 devices under varying IEEE 802.11 activities in different environments. The limiting thresholds for sustainable operation are determined. Based on the thresholds, two algorithms have been proposed to rank the channels according to interference signal strength and activity level. The effectiveness of the algorithms is also verified by ranking both emulated and real channels.
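The ranking step can be sketched as a weighted score over per-channel measurements of interference signal strength and activity level; the RSSI values, duty cycles and weights below are made-up numbers used only to show the ordering logic, not results from the test bed.

```
# Sketch: rank IEEE 802.15.4 channels by measured interference strength and activity.
def rank_channels(measurements, w_rssi=0.6, w_activity=0.4):
    """measurements: {channel: (mean_interference_rssi_dbm, activity_duty_cycle 0..1)}"""
    def score(item):
        _, (rssi_dbm, duty) = item
        rssi_norm = (rssi_dbm + 100) / 100      # map roughly -100..0 dBm to 0..1
        return w_rssi * rssi_norm + w_activity * duty
    # Least interfered (lowest score) first.
    return [ch for ch, _ in sorted(measurements.items(), key=score)]

channels = {11: (-92, 0.05), 14: (-60, 0.40), 18: (-75, 0.20), 26: (-95, 0.02)}
print(rank_channels(channels))   # -> [26, 11, 18, 14]
```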
---
paper_title: HCVP: A Hybrid Cognitive Validation Platform for WBAN
paper_content:
Cognitive Radio (CR) is widely anticipated to address spectrum scarcity and interference issues in future wireless communications. As a promising technology, many CR-related studies have sprung up to improve network performance. However, most existing works mainly focus on theoretical analysis and software simulation without verifying feasibility and performance in practical network scenarios. In this paper, a Hybrid Cognitive Validation Platform (HCVP) is developed to realize practical situations by integrating computer software and hardware devices. Compared with existing FPGA-based platforms, our HCVP is easier to deploy and configure, by replacing FPGA with low-power programmable SoC chips. Moreover, with excellent energy efficiency, HCVP can be applied to a wide scope of applications, especially battery-powered networks. To evaluate the performance of the HCVP, we implement a WBAN scenario with same-frequency interference traffic considered. An adaptive CR MAC algorithm is adopted to minimize the impact of interference. Experiments close to reality are built based on HCVP, and the results show that the introduction of the CR algorithm significantly alleviates the negative impact of collisions caused by interference and improves system performance.
---
paper_title: IEEE 802.11 Wireless Local Area Networks
paper_content:
The draft IEEE 802.11 wireless local area network (WLAN) specification is approaching completion. In this article, the IEEE 802.11 protocol is explained, with particular emphasis on the medium access control sublayer. Performance results are provided for packetized data and a combination of packetized data and voice over the WLAN. Our performance investigation reveals that an IEEE 802.11 network may be able to carry traffic with time-bounded requirements using the point coordination function. However, our findings suggest that packetized voice traffic must be handled in conjunction with an echo canceler.
---
paper_title: Energy-efficient and reliability-driven cooperative communications in cognitive body area networks
paper_content:
We study the potential of cognition and cooperation in Body Area Networks (BANs). On one hand, most BAN-based applications involve end-to-end transmission across heterogeneous networks. Cognitive communication has been known to be an effective technology for addressing network heterogeneity. On the other hand, a BAN is normally required to provide reliable communications and operate at a very low power level to conserve energy and reduce the electromagnetic radiation impact on the human body. Cooperative communication has been known to enhance transmission reliability and maintain low transmission power. However, the joint cognitive and cooperative mechanism has not yet been investigated in the literature. In this paper, we propose a network architecture for cognitive and cooperative communications in BANs. An intelligent mobile device is introduced as either a cognitive gateway to interconnect heterogeneous networks, or a cooperative relay node to achieve transmission diversity. Two cooperative transmission schemes, Energy-conserved Cooperative Transmission and Reliability-driven Cooperative Transmission, are presented for different applications that have distinct energy consumption or reliability requirements. Optimization problems are formulated to optimally allocate power in the cooperative transmission. Results indicate that cooperative transmission schemes can significantly decrease the Bit Error Rate (BER) and reduce energy consumption compared to the non-cooperative schemes. The BER gain is over one order of magnitude in the high SNR region, while the energy consumption can be reduced by up to 50% in the low BER region.
---
paper_title: Electromagnetic interference from radio frequency identification inducing potentially hazardous incidents in critical care medical equipment.
paper_content:
CONTEXT ::: Health care applications of autoidentification technologies, such as radio frequency identification (RFID), have been proposed to improve patient safety and also the tracking and tracing of medical equipment. However, electromagnetic interference (EMI) by RFID on medical devices has never been reported. ::: ::: ::: OBJECTIVE ::: To assess and classify incidents of EMI by RFID on critical care equipment. ::: ::: ::: DESIGN AND SETTING ::: Without a patient being connected, EMI by 2 RFID systems (active 125 kHz and passive 868 MHz) was assessed under controlled conditions during May 2006, in the proximity of 41 medical devices (in 17 categories, 22 different manufacturers) at the Academic Medical Centre, University of Amsterdam, Amsterdam, The Netherlands. Assessment took place according to an international test protocol. Incidents of EMI were classified according to a critical care adverse events scale as hazardous, significant, or light. ::: ::: ::: RESULTS ::: In 123 EMI tests (3 per medical device), RFID induced 34 EMI incidents: 22 were classified as hazardous, 2 as significant, and 10 as light. The passive 868-MHz RFID signal induced a higher number of incidents (26 incidents in 41 EMI tests; 63%) compared with the active 125-kHz RFID signal (8 incidents in 41 EMI tests; 20%); difference 44% (95% confidence interval, 27%-53%; P < .001). The passive 868-MHz RFID signal induced EMI in 26 medical devices, including 8 that were also affected by the active 125-kHz RFID signal (26 in 41 devices; 63%). The median distance between the RFID reader and the medical device in all EMI incidents was 30 cm (range, 0.1-600 cm). ::: ::: ::: CONCLUSIONS ::: In a controlled nonclinical setting, RFID induced potentially hazardous incidents in medical devices. Implementation of RFID in the critical care environment should require on-site EMI tests and updates of international standards.
---
paper_title: Data security and privacy in wireless body area networks
paper_content:
The wireless body area network has emerged as a new technology for e-healthcare that allows the data of a patient's vital body parameters and movements to be collected by small wearable or implantable sensors and communicated using short-range wireless communication techniques. WBAN has shown great potential in improving healthcare quality, and thus has found a wide range of applications from ubiquitous health monitoring and computer assisted rehabilitation to emergency medical response systems. The security and privacy protection of the data collected from a WBAN, either while stored inside the WBAN or during their transmission outside of the WBAN, is a major unsolved concern, with challenges coming from stringent resource constraints of WBAN devices, and the high demand for both security/privacy and practicality/usability. In this article we look into two important data security issues: secure and dependable distributed data storage, and fine-grained distributed data access control for sensitive and private patient medical data. We discuss various practical issues that need to be taken into account while fulfilling the security and privacy requirements. Relevant solutions in sensor networks and WBANs are surveyed, and their applicability is analyzed.
---
paper_title: On Cognitive Radio-based Wireless Body Area Networks for medical applications
paper_content:
The Wireless Body Area Network (WBAN) is envisioned to provide a wide range of health-care services to patients in medical environments such as hospitals and clinics. This increases the deployment of wireless platforms in medical environments and brings new challenges, such as interference with neighboring medical devices and the degradation of Quality of Service (QoS) performance, which may be critical to patient safety. Cognitive Radio (CR) is a next-generation wireless communication technology in which artificial intelligence is widely adopted to provide self-learning, so that a device can observe, learn, and act upon its operating environment. The application of CR in the medical wireless environment can address the aforementioned challenges. In this paper, we present a review of the limited literature on CR-based WBAN, highlighting some pioneering schemes in this area. We present two architectures and two state-of-the-art applications of CR (i.e., Electro-Magnetic Interference (EMI) reduction and QoS enhancement), as well as a number of schemes in CR-based WBAN. While there are numerous research efforts investigating CR and WBAN respectively, research into CR-based WBAN remains in its infancy. This paper discusses various open issues related to CR-based WBAN in order to spark new interest in this research area.
---
| Title: A Survey of MAC Protocols for Cognitive Radio Body Area Networks
Section 1: Introduction
Description 1: Introduce the concept of Cognitive Radio Body Area Networks (CRBANs) and provide an overview of the advancements and applications in this field.
Section 2: MAC Design Issues in CRBANs
Description 2: Discuss the critical design challenges for MAC protocols in CRBANs, including spectrum access, energy efficiency, cross-layer design, opportunistic sensing, and optimized spectrum decision.
Section 3: MAC Protocols for CRBANs
Description 3: Review and highlight existing MAC protocols designed for CRBANs with details on their key characteristics and features.
Section 4: Comparison of MAC Protocols
Description 4: Compare and discuss the MAC protocols designed for CRBANs with respect to various parameters such as collision ratio, channel access parameters, and energy consumption.
Section 5: Open Research Issues and Challenges
Description 5: Present the ongoing challenges and open research issues in designing reliable and efficient MAC protocols for CRBANs.
Section 6: Conclusions
Description 6: Summarize the key findings from the survey and suggest future research directions for MAC protocols in CRBANs. |
Observers for linear distributed-parameter systems: A survey | 7 | ---
paper_title: Toward a practical theory for distributed parameter systems
paper_content:
The control of distributed parameter (DP) systems represents a real challenge, both from a theoretical and a practical point of view, to the systems engineer. Distributed parameter systems arise in various application areas, such as chemical process systems, aerospace systems, magneto-hydrodynamic systems, and communications systems, to mention just a few. Thus, there is sufficient motivation for research directed toward the analysis, synthesis, and design techniques for DP systems. On the surface, it may appear that the available theory for distributed parameter systems is almost at the same level as that associated with lumped systems. However, there exists a much wider gap between the theory and its applications. In the remainder of this correspondence, we shall briefly discuss the reasons for this gap and suggest certain tentative approaches which may contribute to the development of a theory and computational algorithms which take into account some of the practical problems associated with the design of controllers for DP systems. In order to make these concepts clear it becomes necessary to briefly review, in an informal manner, what a DP system is and in what sense it differs, from both a mathematical and a practical point of view, from a conventional lumped system.
---
paper_title: Nonlinear Observers—A State-of-the-Art Survey
paper_content:
The state-of-the-art of nonlinear state estimators or “observers” is reviewed. The use of these observers in real time nonlinear compensators is evaluated in terms of their on-line computational requirements. Their robustness properties are evaluated in terms of the extent to which the design requires a “perfect” model.
---
paper_title: Sliding mode observers: a survey
paper_content:
Sliding mode observers have unique properties, in that the ability to generate a sliding motion on the error between the measured plant output and the output of the observer ensures that a sliding mode observer produces a set of state estimates that are precisely commensurate with the actual output of the plant. It is also the case that analysis of the average value of the applied observer injection signal, the so-called equivalent injection signal, contains useful information about the mismatch between the model used to define the observer and the actual plant. These unique properties, coupled with the fact that the discontinuous injection signals which were perceived as problematic for many control applications have no disadvantages for software-based observer frameworks, have generated a ground swell of interest in sliding mode observer methods in recent years. This article presents an overview of both linear and non-linear sliding mode observer paradigms. The use of the equivalent injection signal in problems relating to fault detection and condition monitoring is demonstrated. A number of application specific results are also described. The literature in the area is presented and qualified in the context of continuing developments in the broad areas of the theory and application of sliding mode observers.
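The two properties highlighted above, finite-time convergence of the output error and the information carried by the equivalent injection, can be illustrated with a deliberately simple scalar sketch. The plant, disturbance, gains, and filter constant below are assumptions chosen for illustration, not an example taken from the survey.

```python
# Minimal sketch (not from the cited survey): scalar plant x' = a*x + d(t) with an
# unknown bounded disturbance d, and a sliding-mode observer
#   xhat' = a*xhat + rho*sign(x - xhat),   with rho > sup|d|.
# The low-pass-filtered injection (the "equivalent injection") approximates d(t),
# up to filter lag and chattering ripple.
import numpy as np

a, rho, dt, T = -1.0, 2.0, 1e-3, 10.0
n = int(T / dt)
x, xhat, d_eq = 1.0, 0.0, 0.0
tau = 0.05                                   # time constant of the averaging filter
log = []
for k in range(n):
    t = k * dt
    d = np.sin(2 * np.pi * 0.5 * t)          # unknown disturbance, |d| <= 1 < rho
    nu = rho * np.sign(x - xhat)             # discontinuous injection
    x += dt * (a * x + d)                    # plant (Euler step)
    xhat += dt * (a * xhat + nu)             # observer
    d_eq += dt * (nu - d_eq) / tau           # filtered (equivalent) injection
    log.append((t, x - xhat, d, d_eq))
t, e, d, d_eq = np.array(log).T
print("final |output error|:", abs(e[-1]))
print("final disturbance vs. filtered injection:", d[-1], d_eq[-1])
```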
---
paper_title: Observing the State of a Linear System
paper_content:
In much of modern control theory designs are based on the assumption that the state vector of the system to be controlled is available for measurement. In many practical situations only a few output quantities are available. Application of theories which assume that the state vector is known is severely limited in these cases. In this paper it is shown that the state vector of a linear system can be reconstructed from observations of the system inputs and outputs. It is shown that the observer, which reconstructs the state vector, is itself a linear system whose complexity decreases as the number of output quantities available increases. The observer may be incorporated in the control of a system which does not have its state vector available for measurement. The observer supplies the state vector, but at the expense of adding poles to the over-all system.
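A minimal sketch of the construction described above, for a concrete two-state system with a single measured output, is given below; the particular matrices and the hand-picked observer gain are illustrative assumptions. The observer is a copy of the plant driven by the output mismatch, and the estimation error decays according to the eigenvalues of A - LC.

```python
# Minimal sketch of a full-order Luenberger observer
#   xhat' = A xhat + B u + L (y - C xhat)
# for a concrete two-state example (the specific A, B, C, L values are illustrative).
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                 # only the first state is measured
L = np.array([[3.5], [4.0]])               # chosen so that A - L C is Hurwitz
print("observer error eigenvalues:", np.linalg.eigvals(A - L @ C))

dt, n = 1e-3, 5000
x = np.array([[1.0], [0.0]])               # true initial state (unknown to the observer)
xhat = np.zeros((2, 1))                    # observer starts at the origin
for k in range(n):
    u = np.array([[np.sin(0.002 * k)]])
    y = C @ x                              # measured output
    x = x + dt * (A @ x + B @ u)           # plant (Euler step)
    xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat))   # observer
print("final estimation error:", (x - xhat).ravel())
```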
---
paper_title: Some recent applications of distributed parameter systems theory - A survey
paper_content:
A survey of some recent applications of distributed parameter systems theory is presented. The practical areas discussed range from process control problems in an industrial plant to the identification, monitoring and control of air and water quality in our environment. Some new, promising areas of application are discussed along with suggestions for future research emphasis.
---
paper_title: A finite dimensional sliding mode observer for a spatially continuous process
paper_content:
Modeling and control of systems represented by partial differential equations (PDEs) is an interesting research field, as the process under investigation is infinite dimensional while most commonly used techniques are for finite dimensional systems. This paper considers the development of a finite dimensional observer obtained after an appropriate model reduction stage. A sliding mode observer is considered as it utilizes the sign of a quantity and ensures good reconstruction performance. The paper compares the fictitious state variables obtained by projecting the instantaneous snapshots onto an eigenbasis with the state variables predicted by the observer. The results emphasize that the designed observer functions well over a set of operating conditions, which are elaborated in the paper.
---
paper_title: Some recent applications of distributed parameter systems theory - A survey
paper_content:
A survey of some recent applications of distributed parameter systems theory is presented. The practical areas discussed range from process control problems in an industrial plant to the identification, monitoring and control of air and water quality in our environment. Some new, promising areas of application are discussed along with suggestions for future research emphasis.
---
paper_title: Observation and Control for Operator Semigroups
paper_content:
The evolution of the state of many systems modeled by linear partial differential equations (PDEs) or linear delay-differential equations can be described by operator semigroups. The state of such a system is an element in an infinite-dimensional normed space, whence the name "infinite-dimensional linear system". The study of operator semigroups is a mature area of functional analysis, which is still very active. The study of observation and control operators for such semigroups is relatively more recent. These operators are needed to model the interaction of a system with the surrounding world via outputs or inputs. The main topics of interest about observation and control operators are admissibility, observability, controllability, stabilizability and detectability. Observation and control operators are an essential ingredient of well-posed linear systems (or more generally system nodes). In this book we deal only with admissibility, observability and controllability. We deal only with operator semigroups acting on Hilbert spaces. This book is meant to be an elementary introduction into the topics mentioned above. By "elementary" we mean that we assume no prior knowledge of finite-dimensional control theory, and no prior knowledge of operator semigroups or of unbounded operators. We introduce everything needed from these areas. We do assume that the reader has a basic understanding of bounded operators on Hilbert spaces, differential equations, Fourier and Laplace transforms, distributions and Sobolev spaces on
---
paper_title: Design of optimal controllers for distributed systems using finite dimensional state observers
paper_content:
The problem of constructing an "observer" to enable us to implement an approximate optimal control for a distributed parameter system is examined where the state is measured at a few pre-specified points. The observer is formulated as the output of a dynamical system described by a set of ordinary differential equations. Both distributed and boundary control problems are studied and the observer-formulation is set up for both cases. Some reasonable assumptions have been made in order that the approximation introduced by the eigenfunction expansion technique be satisfactory. For the case of the boundary control problem, a simple example is solved to illustrate the method.
---
paper_title: An Introduction to Infinite-Dimensional Linear Systems Theory
paper_content:
1 Introduction.- 1.1 Motivation.- 1.2 Systems theory concepts in finite dimensions.- 1.3 Aims of this book.- 2 Semigroup Theory.- 2.1 Strongly continuous semigroups.- 2.2 Contraction and dual semigroups.- 2.3 Riesz-spectral operators.- 2.4 Delay equations.- 2.5 Invariant subspaces.- 2.6 Exercises.- 2.7 Notes and references.- 3 The Cauchy Problem.- 3.1 The abstract Cauchy problem.- 3.2 Perturbations and composite systems.- 3.3 Boundary control systems.- 3.4 Exercises.- 3.5 Notes and references.- 4 Inputs and Outputs.- 4.1 Controllability and observability.- 4.2 Tests for approximate controllability and observability.- 4.3 Input-output maps.- 4.4 Exercises.- 4.5 Notes and references.- 5 Stability, Stabilizability, and Detectability.- 5.1 Exponential stability.- 5.2 Exponential stabilizability and detectability.- 5.3 Compensator design.- 5.4 Exercises.- 5.5 Notes and references.- 6 Linear Quadratic Optimal Control.- 6.1 The problem on a finite-time interval.- 6.2 The problem on the infinite-time interval.- 6.3 Exercises.- 6.4 Notes and references.- 7 Frequency-Domain Descriptions.- 7.1 The Callier-Desoer class of scalar transfer functions.- 7.2 The multivariable extension.- 7.3 State-space interpretations.- 7.4 Exercises.- 7.5 Notes and references.- 8 Hankel Operators and the Nehari Problem.- 8.1 Frequency-domain formulation.- 8.2 Hankel operators in the time domain.- 8.3 The Nehari extension problem for state linear systems.- 8.4 Exercises.- 8.5 Notes and references.- 9 Robust Finite-Dimensional Controller Synthesis.- 9.1 Closed-loop stability and coprime factorizations.- 9.2 Robust stabilization of uncertain systems.- 9.3 Robust stabilization under additive uncertainty.- 9.4 Robust stabilization under normalized left-coprime-factor uncertainty.- 9.5 Robustness in the presence of small delays.- 9.6 Exercises.- 9.7 Notes and references.- A. Mathematical Background.- A.1 Complex analysis.- A.2 Normed linear spaces.- A.2.1 General theory.- A.2.2 Hilbert spaces.- A.3 Operators on normed linear spaces.- A.3.1 General theory.- A.3.2 Operators on Hilbert spaces.- A.4 Spectral theory.- A.4.1 General spectral theory.- A.4.2 Spectral theory for compact normal operators.- A.5 Integration and differentiation theory.- A.5.1 Integration theory.- A.5.2 Differentiation theory.- A.6 Frequency-domain spaces.- A.6.1 Laplace and Fourier transforms.- A.6.2 Frequency-domain spaces.- A.6.3 The Hardy spaces.- A.7 Algebraic concepts.- A.7.1 General definitions.- A.7.2 Coprime factorizations over principal ideal domains.- A.7.3 Coprime factorizations over commutative integral domains.- References.- Notation.
---
paper_title: Nonlinear Observers—A State-of-the-Art Survey
paper_content:
The state-of-the-art of nonlinear state estimators or “observers” is reviewed. The use of these observers in real time nonlinear compensators is evaluated in terms of their on-line computational requirements. Their robustness properties are evaluated in terms of the extent to which the design requires a “perfect” model.
---
paper_title: Sliding mode observers: a survey
paper_content:
Sliding mode observers have unique properties, in that the ability to generate a sliding motion on the error between the measured plant output and the output of the observer ensures that a sliding mode observer produces a set of state estimates that are precisely commensurate with the actual output of the plant. It is also the case that analysis of the average value of the applied observer injection signal, the so-called equivalent injection signal, contains useful information about the mismatch between the model used to define the observer and the actual plant. These unique properties, coupled with the fact that the discontinuous injection signals which were perceived as problematic for many control applications have no disadvantages for software-based observer frameworks, have generated a ground swell of interest in sliding mode observer methods in recent years. This article presents an overview of both linear and non-linear sliding mode observer paradigms. The use of the equivalent injection signal in problems relating to fault detection and condition monitoring is demonstrated. A number of application specific results are also described. The literature in the area is presented and qualified in the context of continuing developments in the broad areas of the theory and application of sliding mode observers.
---
paper_title: An introduction to observers
paper_content:
Observers which approximately reconstruct missing state-variable information necessary for control are presented in an introductory manner. The special topics of the identity observer, a reduced-order observer, linear functional observers, stability properties, and dual observers are discussed.
---
paper_title: Observing the State of a Linear System
paper_content:
In much of modern control theory designs are based on the assumption that the state vector of the system to be controlled is available for measurement. In many practical situations only a few output quantities are available. Application of theories which assume that the state vector is known is severely limited in these cases. In this paper it is shown that the state vector of a linear system can be reconstructed from observations of the system inputs and outputs. It is shown that the observer, which reconstructs the state vector, is itself a linear system whose complexity decreases as the number of output quantities available increases. The observer may be incorporated in the control of a system which does not have its state vector available for measurement. The observer supplies the state vector, but at the expense of adding poles to the over-all system.
---
paper_title: A finite dimensional sliding mode observer for a spatially continuous process
paper_content:
Modeling and control of systems represented by partial differential equations (PDEs) is an interesting research field, as the process under investigation is infinite dimensional while most commonly used techniques are for finite dimensional systems. This paper considers the development of a finite dimensional observer obtained after an appropriate model reduction stage. A sliding mode observer is considered as it utilizes the sign of a quantity and ensures good reconstruction performance. The paper compares the fictitious state variables obtained by projecting the instantaneous snapshots onto an eigenbasis with the state variables predicted by the observer. The results emphasize that the designed observer functions well over a set of operating conditions, which are elaborated in the paper.
---
paper_title: Observers and parameter determination for distributed parameter systems
paper_content:
The aim of the paper is to investigate the estimation of unknown states and unknown functions for distributed parameter systems. First we construct finite dimensional state observers to estimate the unknown states and give the error estimates for the distributed parameter systems of parabolic type with unknown input sources. Next we consider the determination of unknown input distribution functions using these estimated states. The problems of unknown function determination are not necessarily well-posed even if the distributed parameter systems are identifiable. A well-posed approximation method by regularization is applied to obtain the approximate functions which depend continuously on the measurement data. We also construct finite dimensional state observers for the distributed parameter systems of hyperbolic type with unknown input sources. Simple numerical examples are presented.
---
paper_title: Observers for linear multivariable systems with applications
paper_content:
This paper presents an algorithm for the design of asymptotic state estimators (observers) for index-invariant uniformly observable time-varying linear finite-dimensional multivariable systems. The results obtained indicate that asymptotic estimators can be employed in optimally designed regulators provided an increase from the optimal cost is tolerable. It is also shown that any uniformly observable and uniformly controllable plant with index-invariant observability and controllability matrices can be stabilized with an observer.
---
paper_title: Design of optimal controllers for distributed systems using finite dimensional state observers
paper_content:
The problem of constructing an "observer" to enable us to implement an approximate optimal control for a distributed parameter system is examined where the state is measured at a few pre-specified points. The observer is formulated as the output of a dynamical system described by a set of ordinary differential equations. Both distributed and boundary control problems are studied and the observer-formulation is set up for both cases. Some reasonable assumptions have been made in order that the approximation introduced by the eigenfunction expansion technique be satisfactory. For the case of the boundary control problem, a simple example is solved to illustrate the method.
---
paper_title: Discrete-Time Observers and Parameter Determination for Distributed Parameter Systems with Discrete-Time Input–Output Data
paper_content:
The aim of this paper is to study the estimation of unknown states and unknown input distribution functions for distributed parameter systems with discrete-time input–output data. First we construct finite dimensional discrete-time state observers to estimate unknown states and give the error estimates for distributed parameter systems with unknown inputs. Next we consider the determination of unknown input distribution functions using these estimated states. We discuss the relationship between observability and identifiability. The problem of determination of unknown functions is not well posed in general even if the distributed parameter systems are identifiable. We present and discuss a feasible approximation method by regularization which gives a constructive procedure to obtain approximately a true input distribution function. We also investigate limit properties of approximate solutions as the number of sampling periods tends to $ + \infty $.
---
paper_title: Toward a practical theory for distributed parameter systems
paper_content:
The control of distributed parameter (DP) systems represents a real challenge, both from a theoretical and a practical point of view, to the systems engineer. Distributed parameter systems arise in various application areas, such as chemical process systems, aerospace systems, magneto-hydrodynamic systems, and communications systems, to mention just a few. Thus, there is sufficient motivation for research directed toward the analysis, synthesis, and design techniques for DP systems. On the surface, it may appear that the available theory for distributed parameter systems is almost at the same level as that associated with lumped systems. However, there exists a much wider gap between the theory and its applications. In the remainder of this correspondence, we shall briefly discuss the reasons for this gap and suggest certain tentative approaches which may contribute to the development of a theory and computational algorithms which take into account some of the practical problems associated with the design of controllers for DP systems. In order to make these concepts clear it becomes necessary to briefly review, in an informal manner, what a DP system is and in what sense it differs, from both a mathematical and a practical point of view, from a conventional lumped system.
---
paper_title: Sliding mode observers: a survey
paper_content:
Sliding mode observers have unique properties, in that the ability to generate a sliding motion on the error between the measured plant output and the output of the observer ensures that a sliding mode observer produces a set of state estimates that are precisely commensurate with the actual output of the plant. It is also the case that analysis of the average value of the applied observer injection signal, the so-called equivalent injection signal, contains useful information about the mismatch between the model used to define the observer and the actual plant. These unique properties, coupled with the fact that the discontinuous injection signals which were perceived as problematic for many control applications have no disadvantages for software-based observer frameworks, have generated a ground swell of interest in sliding mode observer methods in recent years. This article presents an overview of both linear and non-linear sliding mode observer paradigms. The use of the equivalent injection signal in problems relating to fault detection and condition monitoring is demonstrated. A number of application specific results are also described. The literature in the area is presented and qualified in the context of continuing developments in the broad areas of the theory and application of sliding mode observers.
---
paper_title: Backstepping observers for a class of parabolic PDEs
paper_content:
In this paper we design exponentially convergent observers for a class of parabolic partial integro-differential equations (P(I)DEs) with only boundary sensing available. The problem is posed as a problem of designing an invertible coordinate transformation of the observer error system into an exponentially stable target system. Observer gain (output injection function) is shown to satisfy a well-posed hyperbolic PDE that is closely related to the hyperbolic PDE governing backstepping control gain for the state-feedback problem. For several physically relevant problems the observer gains are obtained in closed form. The observer gains are then used for an output-feedback design in both collocated and anti-collocated setting of sensor and actuator. The order of the resulting compensator can be substantially lowered without affecting stability. Explicit solutions of a closed loop system are found in particular cases.
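Schematically, the construction can be written down for a scalar reaction-diffusion example with a single boundary measurement; this particular configuration (and the omission of boundary conditions) is an assumption for exposition only, since the paper treats a more general class of parabolic P(I)DEs.

```latex
% Illustrative configuration only (plant, measured boundary, and omitted boundary
% conditions are assumptions for exposition):
%   plant:     u_t = u_{xx} + \lambda u,   measured output  y(t) = u(0,t)
%   observer:  a copy of the plant driven by output injection with a spatial gain p_1(x)
\hat{u}_t(x,t) = \hat{u}_{xx}(x,t) + \lambda\,\hat{u}(x,t)
                 + p_1(x)\,\bigl[\,y(t) - \hat{u}(0,t)\,\bigr],
% so that the estimation error \tilde{u} = u - \hat{u} satisfies
\tilde{u}_t = \tilde{u}_{xx} + \lambda\,\tilde{u} - p_1(x)\,\tilde{u}(0,t).
% The gain p_1 is chosen so that an invertible Volterra integral transformation
% \tilde{u} = (I - \mathcal{P})\,\tilde{w} maps this error system into an exponentially
% stable target system such as \tilde{w}_t = \tilde{w}_{xx}; the kernel of \mathcal{P}
% then satisfies a well-posed hyperbolic PDE, mirroring the state-feedback backstepping kernel.
```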
---
paper_title: Observers for systems characterized by semigroups
paper_content:
The theory of observers is generalized from finite dimensional linear systems to abstract linear systems characterized by semigroups on Banach spaces. Sufficient conditions are given for both identity and reduced-order observers to exist for the abstract system. It is shown that the spectrum of a closed-loop control system using an observer is the union of the spectrum of the observer and the spectrum of the closed-loop system with state feedback. The observer theory for the abstract system is used to show that observability is a sufficient condition for the existence of an observer for a system modeled by a linear functional differential equation.
---
paper_title: Implementation of distributed parameter state observers
paper_content:
With the aid of simplifying assumptions, a one-dimensional mathematical model for a three-dimensional aluminium slab (100 cm long, 25 cm wide, 2 cm thick) has been developed, and the modelling parameters of the apparatus have been determined experimentally [14]. The observer problem considered here is the real-time state reconstruction of the slab temperature profile using only a limited number of thermocouple measurements.
---
paper_title: Sensors and observers in distributed parameter systems
paper_content:
Luenberger observer theory is extended to distributed parameter systems. This extension is based on the consideration of sensors. For systems with infinite dimensional state spaces, it is possible to construct the state vector asymptotically (or a part of the state vector) by a ‘good’ choice of sensors. We show that the link between detectability and sensor structure may be of some interest in the construction of observers.
---
paper_title: Receding window observer and dynamic feedback control of discrete infinite dimensional systems
paper_content:
A discrete-time infinite-dimensional system is considered. The receding window filtering idea is introduced to reconstruct the state of the system asymptotically from incomplete measurements. This observer is used in the stabilizing dynamic feedback control of the system. Robustness properties of such dynamic controllers are derived.
---
paper_title: Observer theory for distributed-parameter systems
paper_content:
This paper examines the problem of the approximate reconstruction of the unknown state variables in distributed-parameter systems. New results on the observer theory for important classes of linear and non-linear operator, partial differential, and partial differential-integral equations in describing distributed-parameter systems are presented. The specific developments employ the recent results on Lyapunov stability theory, along with the theory of linear and non-linear semigroup operators, and their infinitesimal generators. The questions of observability, stability of the state reconstruction error dynamics associated with the proposed observer structure are discussed. The theoretical results are illustrated with some applications to problems of the kinetic lumping of complex distributed-parameter chemical reaction systems, as well as the observer design for linear and non-linear distributed-parameter diffusion systems.
---
paper_title: Feedback stabilization of a class of distributed systems and construction of a state estimator
paper_content:
In this paper, we study feedback stabilization of a class of distributed systems governed by partial differential equations of parabolic type and its application to constructing a state estimator for asymptotic state identification. It is proved that, when a controller (an observation) can be arbitrarily constructed, observability (controllability) of the system is necessary and sufficient for stabilizing the system so that it has an arbitrarily large damping constant. As an application of this result, it is shown that a state estimator can be constructed, the output of which approaches asymptotically the real state of the system with an arbitrary convergence rate.
---
paper_title: Output-feedback boundary control of an uncertain heat equation with noncollocated observation: A sliding-mode approach
paper_content:
The boundary stabilization problem of a one-dimensional unstable heat conduction system with boundary disturbance is investigated using a sliding-mode approach. This infinite-dimensional system, mathematically modeled by a parabolic partial differential equation (PDE), is actuated through a Dirichlet-type boundary actuator, with sensing only at the opposite end. By applying the Volterra integral transformation, a stabilizing boundary control law is obtained that achieves exponential stability in the ideal situation when there are no system uncertainties. The associated Lyapunov function is used for designing an infinite-dimensional sliding manifold, on which the system exhibits the same type of stability and robustness against bounded exogenous boundary disturbances. By utilizing a similar transformation, an infinite-dimensional sliding-mode observer is proposed to reconstruct the system's states, and this observer is robust to boundary disturbances. Moreover, the relative degree of the chosen sliding function with respect to the output-feedback boundary control input is zero. A continuous control law satisfying the reaching condition is obtained by passing a discontinuous (signum) signal through an integrator.
---
paper_title: Feedback stabilization of distributed parameter systems by a functional observer
paper_content:
Feedback stabilization of unstable parabolic equations is of great interest. A simple example illustrates that it is not always possible to stabilize such equations by means of static feedback schemes when both observation and control can be realized only through the boundary. In view of this, a functional observer of Luenberger type is derived and then utilized in order to stabilize unstable parabolic equations for which observation of the state and control can be carried out only through the boundary.
---
paper_title: Adaptive observers for structurally perturbed infinite dimensional systems
paper_content:
The aim of this investigation is to construct an adaptive observer for a class of infinite-dimensional plants having a structured perturbation with an unknown constant parameter, such as the case of static output feedback with an unknown gain. The adaptive observer uses the nominal dynamics of the unperturbed plant and an adaptation law based on the Lyapunov redesign method. We obtain conditions on the system to ensure uniform boundedness of the estimator dynamics and the parameter estimates, and convergence of the estimator error. Examples illustrating the approach are presented along with some numerical results.
---
paper_title: Adaptive observers for slowly time varying infinite dimensional systems
paper_content:
We consider a class of infinite dimensional systems with an unknown time varying perturbation in the input term. The goal here is twofold, namely to estimate the state and identify the unknown parameter in the input term using only input and output measurements. An adaptive observer along with a parameter adaptive law that is based on Lyapunov redesign is presented and, under certain conditions imposed on the plant, is shown to achieve state error convergence. Parameter convergence can be established by imposing the additional condition of persistence of excitation. Examples that illustrate the applicability of this approach to a parabolic partial differential equation and a delay system are included along with some numerical results.
---
paper_title: The generation of adaptive law structures for globally convergent adaptive observers
paper_content:
Observing the state of an unknown linear system by means of a parametrized representation of the standard Luenberger observer with an additional adaptive loop (adaptive observer) is considered. A general technique on how to construct suitable adaptive laws for this additional loop is presented and it is proven that these adaptive laws always result in global and arbitrarily fast convergence of the adaptive process. Because the adaptive laws can assume a variety of different structures, both structural and parametric degrees of freedom in the adaptive law (rather than the so far available parametric degrees alone) are obtained, which can be used in a future optimization of the adaptive observer performance.
---
paper_title: Finite-dimensional adaptive observers applied to distributed parameter systems
paper_content:
The application of adaptive observers to distributed parameter systems is considered. The effects of the infinite-dimensional unmodeled dynamics (residuals) on the observer state and parameter estimates are analyzed. For the purpose of the analysis, the state and parameter estimates of the nth-order observer are interpreted as estimates of an nth-order reduced-order model (ROM) of the infinite-dimensional plant. These estimates can then be compared to the states and parameters of a true ROM and bounds for the errors can be derived. It is proven that if the input provides definite excitation and the residual energy is bounded over a finite time interval, the estimation errors are ultimately bounded, with no assumption regarding stability of the plant or boundedness of the plant input.
---
paper_title: Adaptive monitoring and accommodation of nonlinear actuator faults in positive real infinite dimensional systems
paper_content:
We consider a class of positive real infinite dimensional systems which are subjected to incipient actuator faults. The actuator fault is modelled as a time-varying transition from an initial (linear or even hysteretic) map into another hysteretic map at the onset of the fault occurrence. An infinite dimensional adaptive detection observer is utilized to generate a residual signal in order to detect the fault occurrence and to assist in the fault accommodation. This is done via an automated control reconfiguration which utilizes information on the new hysteretic map and adjusts the controller via a static right inverse of the new actuator map. A robust modification is utilized in order to avoid false alarms caused by unmodelled dynamics. An example is included to illustrate the applicability of the proposed detection scheme.
---
paper_title: An adaptive observer for single-input single-output linear systems
paper_content:
A full order adaptive observer is described for observing the state of a single-input single-output observable continuous differential system with unknown parameters. Convergence of the observer states to those of the system is accomplished by directly changing the parameters of the observer using an adaptive law based upon Lyapunov stability theory. Observer eigenvalues may be freely chosen. Some restriction is placed upon the system input in that it must be sufficiently rich in frequencies in order to ensure convergence.
---
paper_title: Adaptive Observers for a Class of Infinite Dimensional Systems
paper_content:
Abstract An adaptive observer for an infinite dimensional plant with a structured perturbation is developed to estimate the state of the plant. The class under consideration includes a collocated (passive) feedback with uncertain feedback gain. The observer is based on the nominal part of the plant together with an adaptation law for estimating the uncertain gain. The proposed adaptation law is based on the Lyapunov stability method. It is proved that the resulting estimator dynamics are stable in the sense that the tracking error converges to zero and the parameter estimate is bounded.
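A finite-dimensional caricature of this type of Lyapunov-based adaptive observer is sketched below to make the structure concrete; the plant form, the SPR/passivity-type condition, and the notation are assumptions for illustration and differ from the infinite-dimensional hypotheses used in the paper.

```latex
% Finite-dimensional caricature (assumed for illustration; scalar output y and
% scalar uncertain gain \theta):
%   plant with uncertain collocated gain:   \dot{x} = A x + \theta B y,   y = C x
%   adaptive observer and adaptation law:
\dot{\hat{x}} = A \hat{x} + \hat{\theta} B y + L\,(y - C \hat{x}), \qquad
\dot{\hat{\theta}} = \gamma\,(y - C \hat{x})\, y .
% With e = x - \hat{x} and \tilde{\theta} = \theta - \hat{\theta}, the Lyapunov function
V = e^{\top} P e + \tilde{\theta}^{2}/\gamma
% has its indefinite cross term cancelled by this adaptation law provided
P (A - L C) + (A - L C)^{\top} P \le -Q, \qquad P B = C^{\top}
% (an SPR/passivity-type condition), which gives \dot{V} \le -e^{\top} Q e; by standard
% arguments the state estimation error then converges to zero while \hat{\theta} stays bounded.
```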
---
paper_title: Control of Distributed Parameter Systems with Spillover using an Augmented Observer
paper_content:
Modern modal control methods for flexible structures, based on a truncated model of the structure's dynamics, have control and observation spillover which can reduce the stability margin of the controlled structure. The sensor output is often filtered to reduce observation spillover; however, filtering introduces signal distortion which perturbs the closed-loop system pole locations. This can reduce stability margins and jeopardize convergence of the observer. When the filter equation is included in the standard observer equations, the separation principle between the controller and the observer no longer holds, and the closed-loop poles cannot be located using standard pole placement methods. A new method is presented in which a first-order filter is included in an augmented observer. The separation principle, as well as the controllability and observability properties of the original system, are shown to be retained when using the augmented observer. Filter- and spillover-induced pole perturbations in the different methods are illustrated with a numerical example.
---
paper_title: Dynamics And Control Of Structures
paper_content:
Newtonian Mechanics. Principles of Analytical Mechanics. Concepts from Linear System Theory. Lumped-Parameter Structures. Control of Lumped-Parameter Systems: Classical Approach. Control of Lumped-Parameter Systems: Modern Approach. Distributed-Parameter Structures: Exact and Approximate Methods. Control of Distributed Structures. A Review of Literature on Structural Control. References. Author Index. Subject Index.
---
paper_title: Feedback control of flexible systems
paper_content:
Feedback control is developed for the class of flexible systems described by the generalized wave equation with damping. The control force distribution is provided by a number of point force actuators and the system displacements and/or their velocities are measured at various points. A feedback controller is developed for a finite number of modes of the flexible system and the controllability and observability conditions necessary for successful operation are displayed. The control and observation spillover due to the residual (uncontrolled) modes is examined and the combined effect of control and observation spillover is shown to lead to potential instabilities in the closed-loop system. Some remedies for spillover, including a straightforward phase-locked loop prefilter, are suggested to remove the instability mechanism. The concepts of this paper are illustrated by some numerical studies on the feedback control of a simply-supported Euler-Bernoulli beam with a single actuator and sensor.
---
paper_title: Second-order observers for second-order distributed parameter systems in R2
paper_content:
Abstract This note addresses observer design for second-order distributed parameter systems in R^2. In particular, second-order distributed parameter systems without distributed damping are studied. Based on a finite number of measurements, an exponentially stable observer is designed. The existence, uniqueness and stability of solutions of the observer are established using semigroup theory.
---
paper_title: Robust Observer-Based Control of a Vibrating Beam
paper_content:
A new multiple-input multiple-output time domain stability criterion for large flexible structures is illustrated by application to observer-based control of a Bernoulli-Euler beam. Upper norm bounds of the state transition matrix of the residual dynamics and total spillover matrix (impulse response matrix of the residual model) are investigated. The approach can be easily extended to a practical consideration of the unavoidable saturation of the actuators and gives an insight into the stabilization analysis of saturating control of flexible structures.
---
paper_title: Natural consensus filters for second order infinite dimensional systems
paper_content:
Abstract This work proposes consensus filters for a class of second order infinite dimensional systems. The proposed structure of the consensus filters is that of a local filter written in the natural setting of a second order formulation with the additional coupling that enforces consensus by minimizing the disagreement between all local filters. An advantage of the second order formulation imposed on the local filters is the natural interpretation that they retain, namely that the time derivative of the estimated position is equal to the estimated velocity. Stability analysis of the collective dynamics is achieved via the use of a parameter-dependent Lyapunov functional, and which guarantees that asymptotically, all filters agree with each other and that they all converge to the true state of the second order system.
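A lumped (finite-dimensional) caricature of this filter structure is sketched below: each local filter is a Luenberger-type observer driven by its own measurement, plus a term that penalizes disagreement with neighbouring filters. The plant, output maps, gains, graph, and coupling strength are illustrative assumptions, and the second-order infinite-dimensional setting of the paper is not reproduced here.

```python
# Lumped illustrative sketch (not the paper's infinite-dimensional formulation):
# N local filters, each a Luenberger observer on its own measurement, coupled by
# a consensus term gamma * sum_j (xhat_j - xhat_i) over neighbouring filters j.
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, -0.2]])      # shared plant model (a lightly damped mode)
Cs = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]), np.array([[1.0, 1.0]])]
Ls = [np.array([[2.0], [3.0]]), np.array([[0.5], [2.0]]), np.array([[1.0], [1.0]])]
neighbours = {0: [1], 1: [0, 2], 2: [1]}      # a simple path graph
gamma, dt, n = 1.0, 1e-3, 8000

x = np.array([[1.0], [0.0]])                  # true state (unknown to the filters)
xhats = [np.zeros((2, 1)) for _ in Cs]
for k in range(n):
    x = x + dt * (A @ x)                      # plant (Euler step)
    new = []
    for i, (C, L) in enumerate(zip(Cs, Ls)):
        consensus = sum(xhats[j] - xhats[i] for j in neighbours[i])
        new.append(xhats[i] + dt * (A @ xhats[i]
                                    + L @ (C @ x - C @ xhats[i])
                                    + gamma * consensus))
    xhats = new
print("max disagreement between filters:",
      max(np.linalg.norm(xhats[i] - xhats[j]) for i in range(3) for j in range(3)))
print("estimation error of filter 0:", np.linalg.norm(x - xhats[0]))
```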
---
paper_title: Natural second-order observers for second-order distributed parameter systems
paper_content:
The aim of this manuscript is to present an alternative method for designing state observers for second-order distributed parameter systems without resorting to a first-order formulation. This method has the advantage of utilizing the algebraic structure that second-order systems enjoy with the obvious computational savings in observer gain calculations. The proposed scheme ensures that the derivative of the estimated position is indeed the estimate of the velocity component and to achieve such a result, a parameter-dependent Lyapunov function was utilized to ensure the asymptotic convergence of the state estimation error.
---
paper_title: Design of consensus and adaptive consensus filters for distributed parameter systems
paper_content:
This work establishes an abstract framework that considers the distributed filtering of spatially varying processes using a sensor network. It is assumed that the sensor network consists of groups of sensors, each of which provides a number of state measurements from sensing devices that are not necessarily identical and which only transmit their information to their own sensor group. A modification to the local spatially distributed filters provides the non-adaptive case of spatially distributed consensus filters which penalize the disagreement amongst themselves in a dynamic manner. A subsequent modification to this scheme incorporates the adaptation of the consensus gains in the disagreement terms of all local filters. Both the well-posedness of these two consensus spatially distributed filters and the convergence of the associated observation errors to zero in appropriate norms are presented. Their performance is demonstrated on three different examples of a diffusion partial differential equation with point measurements.
---
paper_title: Natural consensus filters for second order infinite dimensional systems
paper_content:
Abstract This work proposes consensus filters for a class of second order infinite dimensional systems. The proposed structure of the consensus filters is that of a local filter written in the natural setting of a second order formulation with the additional coupling that enforces consensus by minimizing the disagreement between all local filters. An advantage of the second order formulation imposed on the local filters is the natural interpretation that they retain, namely that the time derivative of the estimated position is equal to the estimated velocity. Stability analysis of the collective dynamics is achieved via the use of a parameter-dependent Lyapunov functional, and which guarantees that asymptotically, all filters agree with each other and that they all converge to the true state of the second order system.
---
paper_title: Consensus and Cooperation in Networked Multi-Agent Systems
paper_content:
This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analysis for the algorithms are provided. Our analysis framework is based on tools from matrix theory, algebraic graph theory, and control theory. We discuss the connections between consensus problems in networked dynamic systems and diverse applications including synchronization of coupled oscillators, flocking, formation control, fast consensus in small-world networks, Markov processes and gossip-based algorithms, load balancing in networks, rendezvous in space, distributed sensor fusion in sensor networks, and belief propagation. We establish direct connections between spectral and structural properties of complex networks and the speed of information diffusion of consensus algorithms. A brief introduction is provided on networked systems with nonlocal information flow that are considerably faster than distributed systems with lattice-type nearest neighbor interactions. Simulation results are presented that demonstrate the role of small-world effects on the speed of consensus algorithms and cooperative control of multivehicle formations.
---
paper_title: End-point sensing and state observation of a flexible-link robot
paper_content:
This paper presents an end-point sensor system and the development of an observer to reconstruct the states of a flexible-link robot. The sensor system includes a tip displacement sensor and an accelerometer. Based on the assumed-models method, an observer is developed using the Kalman filtering algorithm. Experimental results are given to demonstrate the effectiveness of the observer.
---
paper_title: Robust discrete‐time nonlinear sliding mode state estimation of uncertain nonlinear systems
paper_content:
In this paper, we propose a discrete-time nonlinear sliding mode observer for state and unknown input estimations of a class of single-input/single-output nonlinear uncertain systems. The uncertainties are characterized by a state-dependent vector and a scalar disturbance/unknown input. The discrete-time model is derived through Taylor series expansion together with nonlinear state transformation. A design methodology that combines the discrete-time sliding mode (DSM) and a nonlinear observer design is adopted, and a strategy is developed to guarantee the convergence of the estimation error to a bound within the specified boundary layer. A relation between sliding mode gain and boundary layer is established for the existence of DSM, and the estimation is made robust to external disturbances and uncertainties. The unknown input or disturbance can also be estimated through the sliding mode. The conditions for the asymptotical stability of the estimation error are analysed. Application to a bioreactor is given and the simulation results demonstrate the effectiveness of the proposed scheme.
---
paper_title: A robust observer for discrete time nonlinear systems
paper_content:
Abstract This paper extends the results developed in (Ciccarella et al., 1993) and presents a robust observer for discrete time nonlinear systems. A simple, robust and easy to implement algorithm is given whose convergence properties are guaranteed for autonomous and forced systems. Combined parameter and state estimation is made for a numerical example, which compares the robust observer to the observer given in (Ciccarella et al., 1993).
---
| Title: Observers for Linear Distributed-Parameter Systems: A Survey
Section 1: INTRODUCTION
Description 1: Provide an overview of Distributed-Parameter Systems (DPSs) and the need for observers. Discuss the motivation for the survey and outline the paper's structure.
Section 2: LUMPED-PARAMETER OBSERVER DESIGN
Description 2: Briefly recall the observer design problem for lumped-parameter systems, including the mathematical background and key concepts.
Section 3: DISTRIBUTED-PARAMETER SYSTEMS
Description 3: Introduce models of DPSs, focusing on their representation via partial differential equations and abstract state space equations. Discuss the concept of lumping and order reduction methods.
Section 4: DISTRIBUTED-PARAMETER OBSERVERS
Description 4: Present the various approaches to observer design for DPSs, including finite-dimensional and infinite-dimensional observers. Discuss generalization of observer theory to infinite-dimensional systems and provide an overview of specific observer design methods.
Section 5: SECOND-ORDER DISTRIBUTED-PARAMETER SYSTEMS
Description 5: Discuss observer design for second-order DPSs such as flexible structures and vibration systems. Include descriptions of natural observers and other specialized methods for second-order systems.
Section 6: DISTRIBUTED ESTIMATION
Description 6: Cover recent developments in distributed estimation using sensor networks. Explain how local observers and consensus methods are applied in this context.
Section 7: SUMMARY AND CHALLENGES
Description 7: Summarize the survey's findings and identify future research challenges, including robustness against model uncertainty, discrete-time observer analysis, and applications to real-world systems. |
Operating room planning and scheduling: A literature review | 9 | ---
paper_title: Surgical demand scheduling: a review.
paper_content:
Abstract This article reviews the literature on scheduling of patient demand for surgery and outlines an approach to improving overall performance of hospital surgical suites. Reported scheduling systems are categorized into those that schedule patients in advance of the surgical date and those that schedule available patients on the day of surgery. Approaches to estimating surgical procedure times are also reviewed, and the article concludes with a discussion of the failure to implement the majority of reported scheduling schemes.
---
paper_title: Scheduling subject to resource constraints: classification and complexity
paper_content:
Abstract In deterministic sequencing and scheduling problems, jobs are to be processed on machines of limited capacity. We consider an extension of this class of problems, in which the jobs require the use of additional scarce resources during their execution. A classification scheme for resource constraints is proposed and the computational complexity of the extended problem class is investigated in terms of this classification. Models involving parallel machines, unit-time jobs and the maximum completion time criterion are studied in detail; other models are briefly discussed.
---
paper_title: A Review of the Application of Mathematical Programming to Tactical and Strategic Health and Social Services Problems
paper_content:
The paper first considers tactical allocation problems in the health service (particularly the hospital) field which have been tackled by the mathematical programming approach. Examples of such problem areas are those of hospital admissions scheduling, nurse staff assignment, menu planning and the location of hospitals. However, examples of the successful implementation of such models in the literature have only been found for the area of menu planning.
---
paper_title: Where are the costs in perioperative care : analysis of hospital costs and charges for inpatient surgical care
paper_content:
Background: Many health-care institutions are emphasizing cost reduction programs as a primary tool for managing profitability. The goal of this study was to elucidate the proportion of anesthesia costs relative to perioperative costs as determined by charges and actual costs.
---
paper_title: Surgical process scheduling: a structured review.
paper_content:
There is no generally accepted definition of surgical process scheduling available in the literature; nursing researchers, physicians, administrators, and management scientists each view scheduling differently. To overcome this communication problem, a number of authors have proposed conceptual frameworks for surgical process scheduling. These frameworks have unfortunately been either unsatisfactory or incomplete. In this paper, we describe a conceptual framework for surgical process scheduling and use it to classify the existing literature. Results from the review indicate that while operational aspects of advance and allocation scheduling are well understood, further research should be directed towards resolving scheduling issues at strategic and administrative levels. In addition, techniques for integrating operating room (OR) scheduling with other hospital operations are required.
---
paper_title: Achieving operating room efficiency through process integration.
paper_content:
As healthcare organizations look for ways to gain new efficiencies and reduce costs, they are examining surgical services with a critical eye. In many cases, the operating room (OR) was not included in enterprisewide reengineering efforts, thereby limiting the positive impact of those efforts. Healthcare organizations are recognizing that every point along the patient care continuum is interrelated. To truly maximize reengineering efforts, they need to integrate the entire process and information flow within the OR and across the enterprise.
---
paper_title: The Aging Population and Its Impact on the Surgery Workforce
paper_content:
The population is expanding and aging. According to the US Census Bureau, the domestic population will increase 7.9% by 2010, and 17.0% by 2020. The fastest growing segment of this population consists of individuals over the age of 65; their numbers are expected to increase 13.3% by 2010 and 53.2% by 2020. Two main factors are responsible for these forecasts. First, we are living longer; life expectancy has increased from 66.7 years for individuals born in 1946 to 76.1 years for those born in 1996 [1]. Second, the baby boomers (those born between 1946 and 1964) are a wave of population density that will begin to hit retirement age in 2011 [2]. Older individuals require more medical services relative to their younger counterparts. The National Hospital Discharge Survey (NHDS) reported that in 1999, patients aged 65 years or older comprised 12% of the population, but constituted 40% of hospital discharges and 48% of days of inpatient care [3]. As the proportion of elderly patients in the population increases, the medical system will face new challenges. Will there be enough surgeons to meet the increased demand for surgical services? The last decade has been notable for a perception of balance with regard to the physician supply relative to demand. Against this calm background, we sought to isolate and predict the effect of the aging population on the use of surgical services and the need for surgeons. We hypothesized that the surgical workload will increase significantly over the next 2 decades due in large part to the aging of the US population. Toward evaluating this hypothesis, we employed an approach based upon historical patterns of care. Data from national surveys of medical and surgical services were used to establish a profile of age-specific rates of surgical use. This profile was then used to model the impact of forecasted population shifts on surgical work.
---
paper_title: Project scheduling : a research handbook
paper_content:
Scope and Relevance of Project Scheduling.- The Project Scheduling Process.- Classification of Project Scheduling Problems.- Temporal Analysis: The Basic Deterministic Case.- Temporal Analysis: Advanced Topics.- The Resource-Constrained Project Scheduling Problem.- Resource-Constrained Scheduling: Advanced Topics.- Project Scheduling with Multiple Activity Execution Modes.- Stochastic Project Scheduling.- Robust and Reactive Scheduling.
---
paper_title: Capacity Management in Health Care Services: Review and Future Research Directions
paper_content:
Health care has undergone a number of radical changes during the past five years. These include increased competition, fixed-rate reimbursement systems, declining hospital occupancy rates, and growth in health maintenance organizations and preferred provider organizations. Given these changes in the manner in which health care is provided, contracted, and paid for, it is appropriate to review the past research on capacity management and to determine its relevance to the changing industry. This paper provides a review, classification, and analysis of the literature on this topic. In addition, future research needs are discussed and specific problem areas not dealt with in the previous literature are targeted.
---
paper_title: Managing uncertainty in orthopaedic trauma theatres
paper_content:
The management of acute healthcare involves coping with large uncertainty in demand. This uncertainty is a prevailing feature of orthopaedic care, and many scarce resources are devoted to providing contingent theatre time for orthopaedic trauma patients. However, given the variability and uncertainty in demand, much of this theatre time is not used. Simulation was used to explore the balance between maximising the utilisation of the theatre sessions, avoiding too many overruns, and ensuring a reasonable quality of care in a typical hospital in the United Kingdom. The simulation was developed to examine a policy of including planned, elective patients within the trauma session: it appears that if patients are willing to accept a possibility of their treatment being cancelled, substantially greater throughputs can be achieved. A number of approximations were examined as an alternative to the full simulation: the simpler model offers reasonable accuracy and easier implementation.
---
paper_title: Optimization of operating room allocation using linear programming techniques
paper_content:
Background: New and innovative approaches must be used to rationally allocate scarce resources such as operating room time while simultaneously optimizing the associated financial return. In this article we use the technique of linear programming to optimize allocation of OR time among a group of surgeons based on professional fee generation. Study design: For the period of December 1, 2000, to July 31, 2002, the following individualized data were obtained for the Division of General Surgery at Duke University Medical Center: allocated OR time (hours), case mix as determined by CPT codes, total OR time used, and normalized professional charges and receipts. Inpatient, outpatient, and emergency cases were included. The Solver linear programming routine in Microsoft Excel (Microsoft Corp.) was used to determine the optimal mix of surgical OR time allocation to maximize professional receipts. Results: Our model of optimized OR allocation would maximize weekly professional revenues at $237,523, a potential increase of 15% over the historical value of $207,700, or an annualized increase of approximately $1.5 million. Conclusions: Our results suggest that mathematical modeling techniques used in operations research, management science, or decision science may rationally optimize OR allocation to maximize revenue or to minimize costs. These techniques may optimize allocation of scarce resources in the context of the goals specific to individual academic departments of surgery.
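The kind of allocation model described above can be reproduced in miniature with any LP solver. The sketch below is a hypothetical example using the open-source PuLP library rather than the Excel Solver used in the study: it allocates a fixed weekly budget of OR hours among four invented surgeons to maximize professional receipts, subject to illustrative minimum and maximum allocations per surgeon. All figures are made up.

# Minimal sketch of OR-time allocation as a linear program (hypothetical data,
# PuLP used in place of the Excel Solver described in the study).
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value, PULP_CBC_CMD

receipts_per_hour = {"A": 900.0, "B": 1400.0, "C": 650.0, "D": 1100.0}  # $/OR hour (invented)
min_hours = {"A": 8, "B": 8, "C": 8, "D": 8}      # protect each surgeon's baseline access
max_hours = {"A": 40, "B": 30, "C": 25, "D": 35}  # historical demand ceilings (invented)
total_or_hours = 100                              # weekly OR hours available

prob = LpProblem("or_time_allocation", LpMaximize)
hours = {s: LpVariable(f"hours_{s}", lowBound=min_hours[s], upBound=max_hours[s])
         for s in receipts_per_hour}

# Objective: maximize total professional receipts.
prob += lpSum(receipts_per_hour[s] * hours[s] for s in hours)
# Capacity: allocated hours cannot exceed the weekly OR budget.
prob += lpSum(hours.values()) <= total_or_hours

prob.solve(PULP_CBC_CMD(msg=0))
for s in sorted(hours):
    print(f"surgeon {s}: {hours[s].varValue:.1f} h/week")
print("projected weekly receipts:", round(value(prob.objective)))

The minimum-hours bounds stand in for the case-mix and equity restrictions a real department would impose; without them, the LP would simply give all hours to the highest-earning surgeon.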
---
paper_title: A Norm Utilisation for Scarce Hospital Resources: Evidence from Operating Rooms in a Dutch University Hospital
paper_content:
BACKGROUND: Utilisation of operating rooms is high on the agenda of hospital managers and researchers. Many efforts in the area of maximising utilisation have been focussed on finding the holy grail of 100% utilisation. The utilisation that can be realised, however, depends on the patient mix and the willingness to accept the risk of working in overtime. MATERIALS AND METHODS: This is a mathematical modelling study that investigates the association between utilisation, the patient mix that is served, and the risk of working in overtime. Prospectively, consecutively, and routinely collected data from an operating room department in a Dutch university hospital are used. Basic statistical principles are used to establish the relation between realistic utilisation rates, patient mixes, and the accepted risk of overtime. RESULTS: Accepting a low risk of overtime combined with a complex patient mix results in a low utilisation rate. If the accepted risk of overtime is higher and the patient mix is less complex, the utilisation rate that can be reached is closer to 100%. CONCLUSION: Because of the inherent variability of healthcare processes, the holy grail of 100% utilisation is unlikely to be found. The method proposed in this paper calculates a realistic benchmark utilisation that incorporates the patient mix characteristics and the willingness to accept a risk of overtime.
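The trade-off the authors describe between patient mix, accepted overtime risk, and achievable utilisation can be illustrated with a small Monte Carlo experiment. The sketch below is not the paper's model: it simply assumes lognormally distributed case durations with an invented mean and searches for the largest planned caseload whose probability of overrunning an 8-hour session stays within a chosen overtime risk.

# Illustration (not the paper's model): achievable utilisation of an 8-hour OR
# session as a function of case-duration variability and accepted overtime risk.
import math, random
random.seed(1)

def overtime_risk(n_cases, mean_min, sigma, session_min=480, reps=5000):
    """Probability that n_cases lognormal cases overrun the session."""
    mu = math.log(mean_min) - 0.5 * sigma ** 2   # lognormal mean equals mean_min
    over = sum(
        1 for _ in range(reps)
        if sum(random.lognormvariate(mu, sigma) for _ in range(n_cases)) > session_min
    )
    return over / reps

def achievable_utilization(mean_min, sigma, accepted_risk, session_min=480):
    """Largest planned utilisation whose overtime risk stays within the accepted level."""
    for n_cases in range(8, 0, -1):              # search from a full schedule downward
        if overtime_risk(n_cases, mean_min, sigma, session_min) <= accepted_risk:
            return n_cases * mean_min / session_min
    return 0.0

for sigma in (0.3, 0.6):                         # simpler vs more complex patient mix
    for risk in (0.05, 0.30):                    # low vs high accepted overtime risk
        u = achievable_utilization(mean_min=90, sigma=sigma, accepted_risk=risk)
        print(f"sigma={sigma}, accepted overtime risk={risk:.0%}: utilization ~{u:.0%}")

Even with invented numbers, the pattern matches the abstract: higher duration variability and lower tolerance for overtime push the realistic benchmark utilisation well below 100%.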
---
paper_title: Patient mix optimisation in hospital admission planning: a case study
paper_content:
Admissions planning decides on the number of patients admitted for a specialty each day, but also on the mix of patients admitted. Within a specialty different categories of patients can be distinguished on behalf of their requirement of resources. The type of resources required for an admission may involve beds, operating theatre capacity, nursing capacity and intensive care beds. The mix of patients is, therefore, an important decision variable for the hospital to manage the workload of the inflow of patients. In this paper we will consider the following planning problem: how can a hospital generate an admission profile for a specialty, given a target patient throughput and utilization of resources, while satisfying given restrictions? For this planning problem, we will develop an integer linear programming model, that has been tested in a pilot setting in a hospital. The paper includes an analysis of the planning problem, a description of the model developed, an application of a specialty orthopaedics, and a discussion of the results obtained.
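A much-reduced version of such an admission-profile model can be written as an integer linear program. The sketch below uses hypothetical patient categories and resource coefficients and is solved with PuLP (not the solver used in the case study); it chooses how many patients of each category to admit per weekday so as to maximize throughput without exceeding daily bed, OR, and nursing capacity.

# Miniature admission-planning ILP (invented data): choose admissions per
# patient category and weekday to maximize throughput under resource limits.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value, PULP_CBC_CMD

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
categories = {            # per-admission resource use on the admission day
    "minor":  (1, 1.0, 2.0),    # (beds, OR hours, nursing hours); LOS spillover ignored here
    "medium": (3, 2.5, 6.0),
    "major":  (6, 4.0, 12.0),
}
capacity = {"beds": 20, "or_hours": 14.0, "nurse_hours": 60.0}   # per day (invented)

prob = LpProblem("admission_profile", LpMaximize)
admit = {(c, d): LpVariable(f"admit_{c}_{d}", lowBound=0, cat="Integer")
         for c in categories for d in days}

# Objective: total weekly patient throughput.
prob += lpSum(admit[c, d] for c in categories for d in days)

for d in days:
    prob += lpSum(categories[c][0] * admit[c, d] for c in categories) <= capacity["beds"]
    prob += lpSum(categories[c][1] * admit[c, d] for c in categories) <= capacity["or_hours"]
    prob += lpSum(categories[c][2] * admit[c, d] for c in categories) <= capacity["nurse_hours"]

# Simple case-mix restriction: at least two major cases per week.
prob += lpSum(admit["major", d] for d in days) >= 2

prob.solve(PULP_CBC_CMD(msg=0))
for d in days:
    print(d, {c: int(admit[c, d].varValue) for c in categories})
print("weekly throughput:", int(value(prob.objective)))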
---
paper_title: Closing Emergency Operating Rooms Improves Efficiency
paper_content:
Long waiting times for emergency operations increase a patient's risk of postoperative complications and morbidity. Reserving Operating Room (OR) capacity is a common technique to maximize the responsiveness of an OR when an emergency patient arrives. This study determines the best way to reserve OR time for emergency surgery. Two approaches to reserving capacity were compared: (1) concentrating all reserved OR capacity in dedicated emergency ORs, and (2) evenly reserving capacity in all elective ORs. The real situation was modelled with a discrete event simulation model. Three main outcome measures were evaluated for the two approaches: (1) waiting time, (2) staff overtime, and (3) OR utilisation. Results indicated that the policy of reserving capacity for emergency surgery in all elective ORs led to an improvement in waiting times for emergency surgery from 74 (±4.4) minutes to 8 (±0.5) minutes. Working in overtime was reduced by 20%, and overall OR utilisation can increase by around 3%. Emergency patients are operated upon more efficiently in elective ORs than in a dedicated emergency OR. The results of this study led to the closing of the emergency OR in the Erasmus MC (Rotterdam, The Netherlands).
---
paper_title: Analyzing incentives and scheduling in a major metropolitan hospital operating room through simulation
paper_content:
This paper discusses the application of simulation to analyze the value proposition and construction of an incentive program in an operating room (OR) environment. The model was further used to evaluate operational changes including scheduling processes within the OR and utilization rates in areas such as post anesthesia care unit (PACU) and the ambulatory surgery department (ASD). Lessons learned are presented on developing multiple simulation models from one application as well as issues regarding model transition to a client.
---
paper_title: Sampling Error Can Significantly Affect Measured Hospital Financial Performance of Surgeons and Resulting Operating Room Time Allocations
paper_content:
Hospitals with limited operating room (OR) hours, those with intensive care unit or ward beds that are always full, or those that have no incremental revenue for many patients need to choose which surgeons get the resources. Although such decisions are based on internal financial reports, whether the reports are statistically valid is not known. Random error may affect surgeons' measured financial performance and, thus, what cases the anesthesiologists get to do and which patients get to receive care. We tested whether one fiscal year of surgeon-specific financial data is sufficient for accurate financial accounting. We obtained accounting data for all outpatient or same-day-admit surgery cases during one fiscal year at an academic medical center. Linear programming was used to find the mix of surgeons' OR time allocations that would maximize the contribution margin or minimize variable costs. Confidence intervals were calculated on these end points by using Fieller's theorem and Monte-Carlo simulation. The 95% confidence intervals for increases in contribution margins or reductions in variable costs were 4.3% to 10.8% and 6.0% to 8.9%, respectively. As many as 22% of surgeons would have had OR time reduced because of sampling error. We recommend that physicians ask for and OR managers get confidence intervals of end points of financial analyses when making decisions based on them.
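The paper's point about sampling error can be illustrated with a simple bootstrap, which serves the same purpose as the Fieller and Monte Carlo intervals it describes. The sketch below uses invented case-level data for a single surgeon and resamples cases with replacement to obtain a confidence interval for contribution margin per OR hour.

# Illustration of sampling error in surgeon-level financial metrics:
# bootstrap confidence interval for contribution margin per OR hour
# (invented case-level data; the paper used Fieller's theorem and
# Monte Carlo simulation on a full fiscal year of accounting data).
import random
random.seed(7)

# (contribution margin in $, OR hours) for one surgeon's cases over a year.
cases = [(random.gauss(1800, 900), random.uniform(1.0, 4.0)) for _ in range(120)]

def margin_per_hour(sample):
    return sum(m for m, _ in sample) / sum(h for _, h in sample)

point = margin_per_hour(cases)
boot = sorted(
    margin_per_hour(random.choices(cases, k=len(cases)))   # resample cases with replacement
    for _ in range(5000)
)
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"contribution margin per OR hour: ${point:,.0f} (95% CI ${lo:,.0f} to ${hi:,.0f})")
# A wide interval warns against reallocating OR time on the point estimate alone.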
---
paper_title: Creating an optimal operating room schedule.
paper_content:
Scheduling in the OR suite is a particularly daunting task for many surgical services managers. Creating an optimal OR schedule requires looking at constraining factors as well as historical data. This article presents one solution to creating an OR schedule that results in increased use of the OR suite and better profitability. (AORN J 81, January 2005, 580–588.)
---
paper_title: Calculating a Potential Increase in Hospital Margin for Elective Surgery by Changing Operating Room Time Allocations or Increasing Nursing Staffing to Permit Completion of More Cases: A Case Study
paper_content:
Administrators routinely seek to increase contribution margin (revenue minus variable costs) to better cover fixed costs, provide indigent care, and meet other community service responsibilities. Hospitals with high operating room (OR) utilizations can allocate OR time for elective surgery to surgeons based partly on their contribution margins per hour of OR time. This applies particularly when OR caseload is limited by nursing recruitment. From a hospital's annual accounting data for elective cases, we calculated the following for each surgeon's patients: variable costs for the entire hospitalization or outpatient visit, revenues, hours of OR time, hours of regular ward time, and hours of intensive care unit (ICU) time. The contribution margin per hour of OR time varied more than 1000% among surgeons. Linear programming showed that reallocating OR time among surgeons could increase the overall hospital contribution margin for elective surgery by 7.1%. This was not achieved simply by taking OR time from surgeons with the smallest contribution margins per OR hour and giving it to the surgeons with the largest contribution margins per OR hour because different surgeons used differing amounts of hospital ward and ICU time. We conclude that to achieve substantive improvement in a hospital's perioperative financial performance despite restrictions on available OR, hospital ward, or ICU time, contribution margin per OR hour should be considered (perhaps along with OR utilization) when OR time is allocated. IMPLICATIONS: For hospitals where elective surgery caseload is limited by nursing recruitment, to increase one surgeon's operating room time either another surgeon's time must be decreased, nurses need to be paid a premium for working longer hours, or higher-priced "traveling" nurses can be contracted. Linear programming was performed using Microsoft Excel to estimate the effect of each of these interventions on hospital contribution margin.
---
paper_title: Consequences of running more operating theatres than anaesthetists to staff them: a stochastic simulation study
paper_content:
Background: Numerous hospitals implement a ratio of one anaesthetist supervising non-medically-qualified anaesthetist practitioners in two or more operating theatres. However, the risk of requiring anaesthetists simultaneously in several theatres due to concurrent critical periods has not been evaluated. It was examined in this simulation study. Methods: Using a Monte Carlo stochastic simulation model, we calculated the risk of a staffing failure (no anaesthetist available when one is needed) in different scenarios of scheduling, staffing ratio, and number of theatres. Results: With a staffing ratio of 0.5 for a two-theatre suite, the simulated risk of at least one failure occurring during a working day varied from 87% if only short operations were performed to 40% if only long operations were performed (65% for a 50:50 mixture of short and long operations). Staffing-failure risk was particularly high during the first hour of the workday, and decreased as the number of theatres increased. The decrease was greater for simulations with only long operations than for those with only short operations (the risk for 10 theatres declined to 12% and 74%, respectively). With a staffing ratio of 0.33, the staffing-failure risk was markedly higher than for a 0.5 ratio. The availability of a floater for the whole suite to intervene during failure strongly lowered this risk. Conclusions: Scheduling one anaesthetist for two or three theatres exposes patients and staff to a high risk of failure. Adequate planning of long and short operations and the presence of a floating anaesthetist are efficient means to optimize site activity and assure safety.
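The kind of stochastic simulation used in this study can be sketched briefly: treat induction and emergence as critical periods that require the anaesthetist, and count how often two theatres supervised by one anaesthetist need that anaesthetist at the same moment. All durations below are invented and the model is far simpler than the published one.

# Toy Monte Carlo of a 1 anaesthetist : 2 theatres staffing ratio (invented
# durations; much simpler than the published simulation). A "failure" is any
# overlap of critical periods (induction/emergence) between the two theatres.
import random
random.seed(3)

def theatre_day(critical_min=15, day_min=600):
    """Critical intervals (induction, emergence) for back-to-back cases in one theatre."""
    t, intervals = 0.0, []
    while t < day_min:
        dur = random.lognormvariate(4.3, 0.5)                 # case length, median ~74 min
        intervals.append((t, t + critical_min))               # induction period
        intervals.append((t + dur - critical_min, t + dur))   # emergence period
        t += dur + 15                                         # plus 15 min turnover
    return intervals

def overlaps(a, b):
    """True if any critical interval in theatre a overlaps one in theatre b."""
    return any(s1 < e2 and s2 < e1 for s1, e1 in a for s2, e2 in b)

reps = 2000
failures = sum(overlaps(theatre_day(), theatre_day()) for _ in range(reps))
print(f"days with at least one simultaneous critical period: {failures / reps:.0%}")

Extending the sketch to more theatres, a floating anaesthetist, or mixes of short and long cases only requires generating more interval lists and relaxing the failure condition accordingly.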
---
paper_title: Tactical Decision Making for Selective Expansion of Operating Room Resources Incorporating Financial Criteria and Uncertainty in Subspecialties' Future Workloads
paper_content:
We considered the allocation of operating room (OR) time at facilities where the strategic decision had been made to increase the number of ORs. Allocation occurs in two stages: a long-term tactical stage followed by a short-term operational stage. Tactical decisions, approximately 1 yr in advance, determine what specialized equipment and expertise will be needed. Tactical decisions are based on estimates of future OR workload for each subspecialty or surgeon. We show that groups of surgeons can be excluded from consideration at this tactical stage (e.g., surgeons who need intensive care beds or those with below-average contribution margins per OR hour). Lower and upper limits are estimated for the future demand of OR time by the remaining surgeons. Thus, initial OR allocations can be accomplished with only partial information on future OR workload. Once the new ORs open, operational decision-making based on OR efficiency is used to fill the OR time and adjust staffing. Surgeons who were not allocated additional time at the tactical stage are provided increased OR time through operational adjustments based on their actual workload. In a case study from a tertiary hospital, future demand estimates were needed for only 15% of surgeons, illustrating the practicality of these methods for use in tactical OR allocation decisions. (Anesth Analg 2005;100:1425–32)
---
paper_title: Changing Allocations of Operating Room Time From a System Based on Historical Utilization to One Where the Aim is to Schedule as Many Surgical Cases as Possible
paper_content:
Many facilities allocate operating room (OR) time based on historical utilization of OR time. This assumes that there is a fixed amount of regularly scheduled OR time, called “block time”. This “Fixed Hours” system does not apply to many surgical suites in the US. Most facilities make OR time available for all its surgeons’ patients, even if cases are expected to finish after the end of block time. In this setting, OR time should be allocated to maximize OR efficiency, not historical utilization. Then, cases are scheduled either on “Any Workday” (i.e., date chosen by patient and surgeon) or within a reasonable time (e.g., “Four Weeks”). In this study, we used anesthesia billing data from two facilities to study statistical challenges in converting from a Fixed Hours to an Any Workday or Four Weeks patient scheduling system. We report relationships among the number of staffed ORs (i.e., first case of the day starts), length of the regularly scheduled OR workday, OR efficiency, OR staffing cost, and changes in services’ OR allocations. These relationships determine the expected changes in each service’s OR allocation, when a facility using Fixed Hours considers converting to the Any Workday or Four Weeks systems.
---
paper_title: Surgical case scheduling as a generalized job shop scheduling problem
paper_content:
Surgical case scheduling allocates hospital resources to individual surgical cases and decides on the time to perform the surgeries. This task plays a decisive role in utilizing hospital resources efficiently while ensuring quality of care for patients. This paper proposes a new surgical case scheduling approach which uses a novel extension of the Job Shop scheduling problem called multi-mode blocking job shop (MMBJS). It formulates the MMBJS as a mixed integer linear programming (MILP) problem and discusses the use of the MMBJS model for scheduling elective and add-on cases. The model is illustrated by a detailed example, and preliminary computational experiments with the CPLEX solver on practical-sized instances are reported.
---
paper_title: Booked inpatient admissions and hospital capacity: mathematical modelling study.
paper_content:
Objectives: To investigate the variability of patients' length of stay in intensive care after cardiac surgery, and to investigate potential interactions between such variability, booked admissions, and capacity requirements. Design: Mathematical modelling study using routinely collected data. Setting: A cardiac surgery department. Source of data: Hospital records of 7014 people entering intensive care after cardiac surgery. Main outcome measures: Length of stay in intensive care; capacity requirements of an intensive care unit for a hypothetical booked admission system. Results: Although the vast majority of patients (89.5%) had a length of stay in intensive care of ≤48 hours, there was considerable overall variability and the distribution of stays has a lengthy tail. A mathematical model of the operation of a hypothetical booking system indicates that such variability has a considerable impact on intensive care capacity requirements, indicating that a high degree of reserve capacity is required to avoid high rates of operation cancellation because of unavailability of suitable postoperative care. Conclusion: Despite the considerable enthusiasm for booked admissions systems, queuing theory suggests that caution is required when considering such systems for inpatient admissions. Such systems may well result in frequent operational difficulties if there is a high degree of variability in length of stay and where reserve capacity is limited; both are common in the NHS. What is already known on this topic: Booking systems for hospital admissions have considerable potential benefits for patients in terms of peace of mind and planning their lives, but these benefits are dependent on having a low cancellation rate. What this study adds: Variability in length of stay can have a major impact on hospital operation and capacity requirements. Operational research techniques can be used to explore this impact. If variability in length of stay is substantial, as is common, then booked admission systems may require considerable reserve capacity if cancellation rates are to be kept low.
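The queueing effect the authors model can be reproduced with a few lines of simulation. The sketch below uses invented parameters and is far cruder than the paper's model: it books a fixed number of cardiac cases per day into an ICU and counts cancellations whenever no bed is free at the booked admission time, for two levels of length-of-stay variability.

# Toy simulation of a booked-admission ICU (invented parameters): long-tailed
# lengths of stay force cancellations unless substantial reserve capacity is kept.
import math, random
random.seed(11)

def simulate(beds, booked_per_day, mean_los_days, sigma, days=2000):
    mu = math.log(mean_los_days) - 0.5 * sigma ** 2   # lognormal LOS with the given mean
    discharge_day = []            # scheduled discharge day for each occupied bed
    cancelled = booked = 0
    for day in range(days):
        discharge_day = [d for d in discharge_day if d > day]   # free beds at discharge
        for _ in range(booked_per_day):
            booked += 1
            if len(discharge_day) < beds:
                los = max(1, round(random.lognormvariate(mu, sigma)))
                discharge_day.append(day + los)
            else:
                cancelled += 1                                    # no ICU bed: operation cancelled
    return cancelled / booked

for sigma in (0.4, 1.0):          # modest vs highly variable length of stay
    rate = simulate(beds=10, booked_per_day=4, mean_los_days=2.0, sigma=sigma)
    print(f"LOS sigma={sigma}: cancellation rate ~{rate:.1%}")

With the same mean load, only the heavier-tailed LOS distribution produces frequent cancellations, which is the paper's central caution about booked inpatient admissions.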
---
paper_title: ENDOSCOPIES SCHEDULING PROBLEM: A CASE STUDY
paper_content:
The efficient management of an operating theatre involves the problems of planning and scheduling, and the same is true for an endoscopy center. This research aims at building a feasible and efficient operating program over one week for an endoscopy unit composed of two specialized operating rooms, with the objective of both maximizing the utilization of the operating rooms and minimizing their overtime cost. At the planning stage, a tactical planning model for one week is built and solved by a column-generation-based heuristic (CGBH) procedure. The solution of the planning stage assigns each operating room to a set of surgical cases on each day. Afterwards, a daily scheduling problem is built at the scheduling stage in order to finally schedule the surgical cases assigned at the planning stage. This daily scheduling model is first simplified by group technology into an "open shop" model and then solved by the Gonzalez-Sahni algorithm. As a result, a final operating program is ob
---
paper_title: Operating Theatre Scheduling Using Lagrangian Relaxation
paper_content:
This paper addresses the surgery operation scheduling problem. Two types of resources are considered: operating rooms and recovery beds. Each patient first visits an operating room for the surgery operation and is transferred to a recovery room immediately afterwards. The operating room needs to be cleaned after the surgery operation before another operation can start. The problem consists of assigning patients to operating rooms and recovery beds in order to minimize the sum over all patients of a defined function of their completion times. Accordingly, the problem is NP-hard. A Lagrangian relaxation approach is proposed in this paper to determine a near-optimal schedule and a tight lower bound. Numerical results are presented to show the efficiency of the method.
---
paper_title: Scheduling of cases in an ambulatory center.
paper_content:
Perhaps the most important thing for an anesthesiologist and OR manager to understand is that there are different systems for OR allocation and case scheduling. We referred to them as Fixed Hours, Any Workday, and Reasonable Time. This understanding makes the OR management literature clear and applicable to all staff members. Most ambulatory centers handle cases on the workday chosen by the patient and surgeon but strive to do the work each day as efficiently as possible. Precisely how to make OR allocation and case scheduling decisions to achieve these objectives has been worked out. Studies show that case scheduling decisions to enhance OR efficiency are practiced in many facilities. In contrast, OR allocation decisions tend to be different than what OR managers do in practice. This means that it is important to apply the statistical methods for allocating OR time.
---
paper_title: Surgical block scheduling in a system of hospitals: an application to resource and wait list management in a British Columbia health authority.
paper_content:
Scheduling surgical specialties in a medical facility is a very complex process. The choice of schedules and resource availability impact directly on the number of patients treated by specialty, cancellations, wait times, and the overall performance of the system. In this paper we present a system-wide model developed to allow management to explore tradeoffs between OR availability, bed capacity, surgeons' booking privileges, and wait lists. We developed a mixed integer programming model to schedule surgical blocks for each specialty into ORs and applied it to the hospitals in a British Columbia Health Authority, considering OR time availability and post-surgical resource constraints. The results offer promising insights into resource optimization and wait list management, showing that without increasing post-surgical resources hospitals could handle more cases by scheduling specialties differently.
---
paper_title: Impact of surgical sequencing on post anesthesia care unit staffing
paper_content:
This paper analyzes the impact of sequencing rules on phase I post anesthesia care unit (PACU) staffing and on over-utilized operating room (OR) time resulting from delays in PACU admission. The sequencing rules are applied to each surgeon's list of cases independently. Discrete event simulation shows the importance of having a sufficient number of PACU nurses. Sequencing rules have a large impact on the maximum number of patients receiving care in the PACU (i.e., the peak of activity). Seven sequencing rules are tested over a wide range of scenarios. The largest effect of sequencing was on the percentage of days with at least one delay in PACU admission. The best rules are those that smooth the flow of patients entering the PACU (HIHD (Half Increase in OR time and Half Decrease in OR time) and MIX (MIX OR time)). We advise against using LCF (Longest Cases First) and equivalent sequencing methods. They generate more over-utilized OR time, require more PACU nurses during the workday, and result in more days with at least one delay in PACU admission.
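A stripped-down version of this kind of experiment: sequence each OR's cases by a rule, assume every patient spends a fixed time in the PACU once the case ends, and compare the peak number of concurrent PACU patients. Everything below is invented and it ignores the nurse-level detail of the published simulation; it only illustrates why sequencing rules shift the PACU peak.

# Toy comparison of sequencing rules by peak PACU census (invented data,
# fixed PACU stay, none of the staffing detail in the published simulation).
import random
random.seed(5)

def pacu_peak(or_lists, pacu_min=90):
    """Peak number of concurrent PACU patients given each OR's case sequence."""
    intervals = []
    for cases in or_lists:
        t = 0.0
        for dur in cases:
            t += dur
            intervals.append((t, t + pacu_min))   # PACU stay starts when the case ends
    events = sorted([(s, +1) for s, _ in intervals] + [(e, -1) for _, e in intervals])
    peak = census = 0
    for _, delta in events:
        census += delta
        peak = max(peak, census)
    return peak

ors = [[random.lognormvariate(4.5, 0.5) for _ in range(4)] for _ in range(10)]  # 10 ORs, 4 cases each

rules = {
    "shortest first": lambda c: sorted(c),
    "longest first":  lambda c: sorted(c, reverse=True),
    "as booked":      lambda c: list(c),
}
for name, rule in rules.items():
    print(f"{name:>14}: peak PACU census {pacu_peak([rule(c) for c in ors])}")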
---
paper_title: Evaluation of operating room suite efficiency in the Veterans Health Administration system by using data-envelopment analysis
paper_content:
Background: Operating room (OR) activity transcends single ratios such as cases/room, but weighting multiple inputs and outputs may be arbitrary. Data-envelopment analysis (DEA) is a novel technique by which each facility is analyzed with the weightings that optimize its own score. Methods: We performed a DEA of the annual OR activity of 23 Veterans Health Administration facilities; 87,180 cases were performed, 24 publications generated, and 560 trainee-years of education delivered, in 168 ORs over 166,377 hours by 1,384 full-time equivalents of surgical and anesthesia providers and 523 nonproviders. Results: Varying the analyzed parameters produced similar efficiency rankings, with individual differences suggesting possible inefficiencies. We characterized returns to scale for efficient sites, suggesting whether patient flow might be efficiently increased further through these sites. We matched inefficient sites to similar efficient sites for comparison and suggested resource alterations to increase efficiency. Conclusions: Broader DEA application might characterize OR efficiency more informatively than conventional single-ratio rank ordering.
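The input-oriented CCR form of DEA that underlies this kind of study can be written as one small linear program per facility. The sketch below uses invented inputs (provider FTEs, OR hours) and outputs (cases, trainee-years) for five hypothetical facilities and solves each LP with PuLP; it is the textbook CCR envelopment model under constant returns to scale, not the authors' exact specification.

# Textbook input-oriented CCR DEA (constant returns to scale), one LP per
# facility, with invented inputs and outputs; not the authors' exact model.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value, PULP_CBC_CMD

# facility: (inputs, outputs) = ((provider FTEs, OR hours), (cases, trainee-years))
data = {
    "H1": ((60, 7000), (4000, 25)),
    "H2": ((45, 5200), (3500, 10)),
    "H3": ((80, 9000), (4300, 40)),
    "H4": ((30, 3600), (2600,  5)),
    "H5": ((55, 6400), (3000, 30)),
}

def ccr_efficiency(target):
    x_o, y_o = data[target]
    prob = LpProblem(f"dea_{target}", LpMinimize)
    theta = LpVariable("theta", lowBound=0)
    lam = {j: LpVariable(f"lam_{j}", lowBound=0) for j in data}
    prob += theta                                        # shrink the target's inputs as far as possible
    for i in range(len(x_o)):   # a composite peer must use no more of each (scaled) input...
        prob += lpSum(lam[j] * data[j][0][i] for j in data) <= theta * x_o[i]
    for r in range(len(y_o)):   # ...while producing at least each of the target's outputs
        prob += lpSum(lam[j] * data[j][1][r] for j in data) >= y_o[r]
    prob.solve(PULP_CBC_CMD(msg=0))
    return value(theta)

for h in data:
    print(f"{h}: CCR efficiency {ccr_efficiency(h):.2f}")

A score of 1.00 marks a facility on the efficient frontier; scores below 1 indicate how much its inputs could, in principle, be scaled down while keeping its outputs.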
---
paper_title: The operating theatre planning by the follow-up of the risk of no realization
paper_content:
In the French context of healthcare cost control, the operating theatre, which represents 9% of a hospital's annual budget, is a major priority. The operating theatre plan is the outcome of negotiation among the different actors of the theatre, such as surgeons, anaesthetists, nurses, and managerial staff, whose constraints and interests often differ. In this context, a win-win outcome for all parties involved requires good and constructive negotiation. In this paper, we propose an operating theatre planning procedure that aims at controlling the risk of no realization (RNR) of the tentative plan while stabilizing the operating rooms' utilization time. During execution of the plan, the RNR is monitored and, depending on its evolution, a new plan may be sought in order to reduce the risk level. Finally, we present simulation results that support the interest of implementing these procedures.
---
paper_title: Using Computer Simulation in Operating Room Management: Impacts on Process Engineering and Performance
paper_content:
Operating rooms are regarded as the most costly hospital facilities. Due to rising costs and decreasing reimbursements, it is necessary to optimize the efficiency of the operating room suite. In this context several strategies have been proposed that optimize patient throughput by redesigning perioperative processes. The successful deployment of effective practices for continuous process improvements in operating rooms can require that operating room management sets targets and monitors improvements throughout all phases of process engineering. Simulation can be used to study the effects of process improvements through novel facilities, technologies and/or strategies. In this paper, we propose a conceptual framework to use computer simulations in different stages of business process management (BPM) lifecycle for operating room management. Additionally, we conduct simulation studies in different stages of the BPM lifecycle. The results of our studies provide evidence that simulation can provide effective decision support to drive performance in operating rooms in several phases of the BPM lifecycle
---
paper_title: Use of linear programming to estimate impact of changes in a hospital's operating room time allocation on perioperative variable costs.
paper_content:
BACKGROUND: Administrators at hospitals with a fixed annual budget may want to focus surgical services on priority areas to ensure its community receives the best health services possible. However, many hospitals lack the detailed managerial accounting data needed to ensure that such a change does not increase operating costs. The authors used a detailed hospital cost database to investigate by how much a change in allocations of operating room (OR) time among surgeons can increase perioperative variable costs. METHODS: The authors obtained financial data for all patients who underwent outpatient or same-day admit surgery during a year. Linear programming was used to determine by how much changing the mix of surgeons can increase total variable costs while maintaining the same total hours of OR time for elective cases. RESULTS: Changing OR allocations among surgeons without changing total OR hours allocated will likely increase perioperative variable costs by less than 34%. If, in addition, intensive care unit hours for elective surgical cases are not increased, hospital ward occupancy is capped, and implant use is tracked and capped, perioperative costs will likely increase by less than 10%. These four variables predict 97% of the variance in total variable costs. CONCLUSIONS: The authors showed that changing OR allocations among surgeons without changing total OR hours allocated can increase hospital perioperative variable costs by up to approximately one third. Thus, at hospitals with fixed or nearly fixed annual budgets, allocating OR time based on an OR-based statistic such as utilization can adversely affect the hospital financially. The OR manager can reduce the potential increase in costs by considering not just OR time, but also the resulting use of hospital beds and implants.
---
paper_title: An observational study of surgeons' sequencing of cases and its impact on postanesthesia care unit and holding area staffing requirements at hospitals.
paper_content:
BACKGROUND: Staffing requirements in the operating room (OR) holding area and in the Phase I postanesthesia care unit (PACU) are influenced by the sequencing of each surgeon's list of cases in the same OR on the same day. METHODS: Case sequencing was studied using 201 consecutive workdays of data from a 10 OR hospital surgical suite. RESULTS: The surgeons differed significantly among themselves in their sequencing of cases and were also internally nonsystematic, based on case durations. The functional effect of this uncoordinated sequencing was for the surgical suite to behave overall as if there was random sequencing. The resulting PACU staffing requirements were the same as those of the best sequencing method identified in prior simulation studies. Although sequencing "Longest Cases First" performs poorly when all ORs have close to 8 h of cases, at the studied hospital it performed no worse than the other methods. The reason was that some ORs were much busier than others on the same day. The standard deviation among ORs in the hours of cases, including turnovers, was 3.2 h; large relative to the mean workload. Data from 33 other hospitals confirmed that this situation is commonplace. Additional studies showed that case sequencing also had minimal effects on the peak number of patients in the holding area. CONCLUSIONS: The uncoordinated decision-making of multiple surgeons working in different ORs can result in a sufficiently uniform rate of admission of patients into the PACU and holding that the independent sequencing of each surgeon's list of cases would not reduce the incidence of delays in admission or staffing requirements.
---
paper_title: Scheduling hospital services: the efficacy of elective-surgery quotas
paper_content:
We take advantage of the advance-scheduling property for elective surgeries by exploring whether the use of a daily quota system with a 1-week or 2-week scheduling window would improve the performance of a typical intensive care unit (ICU) that serves patients coming from a number of different sources within the hospital. The exploration is carried out via a simulation model whose parameters are established from actual ICU data that were gathered over a 6-month period. It is shown that formally linking one controllable upstream process, namely the scheduling of elective surgeries through a quota system, to the downstream ICU admission process, can have beneficial effects throughout the hospital.
---
paper_title: A Hierarchical Multiple Criteria Mathematical Programming Approach for Scheduling General Surgery Operations in Large Hospitals
paper_content:
Limited staff and equipment within surgical services require efficient use of these resources among multiple surgeon groups. In this study, a set of hierarchical multiple criteria mathematical programming models are developed to generate weekly operating room schedules. The goals considered in these models are maximum utilization of operating room capacity, balanced distribution of operations among surgeon groups in terms of operation days, lengths of operation times, and minimization of patient waiting times. Because of computational difficulty of this scheduling problem, the overall problem is broken down into manageable hierarchical stages: (1) selection of patients, (2) assignment of operations to surgeon groups, and (3) determination of operation dates and operating rooms. Developed models are tested on the data collected in College of Medicine Research Hospital at Cukurova University as well as on simulated data sets, using MPL optimization package.
---
paper_title: Ambulatory Care and Orthopaedic Capacity Planning
paper_content:
Ambulatory Care facilities (often referred to as diagnosis and treatment centres) separate the routine elective activity from the uncertainty of complex inpatient and emergency treatment. Only routine patients with predictable outcomes should be treated in Ambulatory Care. Hence the centre should be able to plan its activities effectively. This paper considers the consequences for the remaining elective inpatient bed and theatre requirements. Computer models are used to simulate many years of activity in an orthopaedic department at a typical District General hospital.
---
paper_title: A three-phase approach for operating theatre schedules
paper_content:
In this paper we develop a three-phase, hierarchical approach for the weekly scheduling of operating rooms. This approach has been implemented in one of the surgical departments of a public hospital located in Genova (Genoa), Italy. Our aim is to suggest an integrated way of facing surgical activity planning in order to improve overall operating theatre efficiency in terms of overtime and throughput as well as waiting list reduction, while improving department organization. In the first phase we solve a bin packing-like problem in order to select the number of sessions to be weekly scheduled for each ward; the proposed and original selection criterion is based upon an updated priority score taking into proper account both the waiting list of each ward and the reduction of residual ward demand. Then we use a blocked booking method for determining optimal time tables, denoted Master Surgical Schedule (MSS), by defining the assignment between wards and surgery rooms. Lastly, once the MSS has been determined we use the simulation software environment Witness 2004 in order to analyze different sequencings of surgical activities that arise when priority is given on the basis of a) the longest waiting time (LWT), b) the longest processing time (LPT) and c) the shortest processing time (SPT). The resulting simulation models also allow us to outline possible organizational improvements in surgical activity. The results of an extensive computational experimentation pertaining to the studied surgical department are here given and analyzed.
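A rough sketch of the first (session-selection) phase is given below: repeatedly give the next weekly session to the ward with the highest priority score, where the score blends waiting-list pressure and residual demand. The scoring rule and data are invented and the paper's exact bin-packing formulation is not reproduced here; this only illustrates the idea of an updated priority score driving session allocation.

# Greedy illustration of session selection by an updated priority score
# (invented score and data; not the paper's bin-packing model).
SESSION_HOURS = 4.0
sessions_available = 18                      # OR sessions in the week

wards = {                                    # residual demand (h), mean waiting time (weeks)
    "orthopaedics": {"demand": 60.0, "wait": 14},
    "general":      {"demand": 45.0, "wait": 9},
    "urology":      {"demand": 20.0, "wait": 6},
    "vascular":     {"demand": 30.0, "wait": 11},
}

def score(w):
    # Higher waiting time and higher residual demand raise the ward's priority.
    return wards[w]["wait"] * min(wards[w]["demand"], SESSION_HOURS)

allocation = {w: 0 for w in wards}
for _ in range(sessions_available):
    ward = max((w for w in wards if wards[w]["demand"] > 0), key=score, default=None)
    if ward is None:
        break                                # all residual demand already covered
    allocation[ward] += 1
    wards[ward]["demand"] -= SESSION_HOURS   # each session reduces residual demand
print(allocation)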
---
paper_title: Comparison of two methods of operating theatre planning: Application in Belgian Hospital
paper_content:
The operating theatre is the centre of hospital management's efforts. It is one of the most expensive sectors, accounting for more than 10% of a hospital's operating budget. To reduce costs while maintaining a good quality of care, one solution is to improve existing planning and scheduling methods, by improving coordination between services and surgical specialties or by better estimating surgical case durations. Another solution is to construct an effective surgical case plan and schedule. Operating theatre planning and scheduling are the two key steps, which aim to produce a feasible and efficient surgical case programme. This paper focuses on the first step, the operating theatre planning problem. Two planning methods are introduced and compared. Real data from the Belgian university hospital "Tivoli" are used in the experiments.
---
paper_title: A Set Packing Approach for Scheduling Elective Surgical Procedures
paper_content:
The efficient scheduling of surgical procedures to operating rooms in a hospital is a complex problem due to limited resources (e.g. medical staff, equipment) and conflicting objectives (e.g. reduce running costs and increase staff and patient satisfaction). A novel approach for scheduling elective surgeries over a short-term horizon is proposed which takes explicit consideration of these aspects. The problem is formulated as a set packing problem and solved optimally through column generation and constraint branching. Good results were obtained for instances from the literature.
---
paper_title: Tactical Operating Theatre Scheduling: Efficient Appointment Assignment
paper_content:
Finding an appointment for elective surgeries in hospitals is a task that has a direct impact on the optimization potential for offline and online daily surgery scheduling. A novel approach based on bin packing which takes into account limited resource availability (e.g. staff, equipment), its utilization, clinical priority, hospital bed distribution and surgery difficulty is proposed for this planning level. A solution procedure is presented that explores the specific structure of the model to find appointments for elective surgeries in real time. Tests performed with randomly generated data motivated by a mid size hospital suggest that the new approach yields high quality solutions.
---
paper_title: Managing risk and expected financial return from selective expansion of operating room capacity: mean-variance analysis of a hospital's portfolio of surgeons.
paper_content:
Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk.
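Mean-variance analysis of OR expansion can be sketched as a small quadratic optimization: choose what share of the new capacity each surgeon receives so as to minimize the variance of contribution margin per OR hour subject to a target expected margin. The sketch below uses an invented mean vector and covariance matrix and scipy's SLSQP solver; it is the standard Markowitz formulation, not the paper's exact model.

# Standard mean-variance (Markowitz) sketch applied to OR capacity expansion:
# weights = share of the new OR time given to each surgeon (invented data).
import numpy as np
from scipy.optimize import minimize

mean = np.array([1500.0, 2400.0, 1900.0, 1100.0])   # contribution margin, $/OR hour
cov = np.array([                                     # uncertainty in those estimates
    [ 90000.,  20000.,  10000.,   5000.],
    [ 20000., 250000.,  30000.,  10000.],
    [ 10000.,  30000., 160000.,   8000.],
    [  5000.,  10000.,   8000.,  60000.],
])
target = 1800.0                                      # required expected margin, $/OR hour

def variance(w):
    return float(w @ cov @ w)

n = len(mean)
res = minimize(
    variance,
    x0=np.full(n, 1.0 / n),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n,
    constraints=[
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},          # shares sum to 100%
        {"type": "ineq", "fun": lambda w: w @ mean - target},    # meet the margin target
    ],
)
print("shares of expanded OR time:", np.round(res.x, 2))
print(f"expected margin ${res.x @ mean:,.0f}/h, std ${np.sqrt(variance(res.x)):,.0f}/h")

Spreading the expansion over several surgeons lowers the variance of the expected return, which is exactly the diversification argument the paper examines.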
---
paper_title: Tactical increases in operating room block time based on financial data and market growth estimates from data envelopment analysis.
paper_content:
BACKGROUND: Data envelopment analysis (DEA) is an established technique that hospitals and anesthesia groups can use to understand their potential to grow different specialties of inpatient surgery. Often related decisions such as recruitment of new physicians are made promptly. A practical challenge in using DEA in practice for this application has been the time to obtain access to and preprocess discharge data from states. METHODS: A case study is presented to show how results of DEA are linked to financial analysis for purposes of deciding which surgical specialties should be provided more resources and institutional support, including the allocation of additional operating room (OR) block time on a tactical (1 yr) time course. State discharge abstract databases were used to study how to perform and present the DEA using data from websites of the United States' (US) Healthcare Cost and Utilization Project (HCUPNet) and Census Bureau (American FactFinder). RESULTS: DEA was performed without state discharge data by using census data with federal surgical rates adjusted for age and gender. Validity was assessed based on multiple criteria, including: satisfaction of statistical assumptions, face validity of results for hospitals, differentiation between efficient and inefficient hospitals on other measures of how much surgery is done, and correlation of estimates of each hospital's potential to grow the workload of each of eight specialties with estimates obtained using unrelated statistical methods. CONCLUSIONS: A hospital can choose specialties to target for expanded OR capacity based on its financial data, its caseloads for specific specialties, the caseloads from hospitals previously examined, and surgical rates from federal census data.
---
paper_title: Determining Optimum Operating Room Utilization
paper_content:
Economic considerations suggest that it is desirable to keep operating rooms fully used when staffed, but the optimum utilization of an operating room (OR) is not known. We created a simulation of an OR to define optimum utilization. We set operational goals of having cases start within 15 min of the scheduled time and of having the cases end no more than 15 min past the scheduled end of the day. Within these goals, a utilization of 85% to 90% is the highest that can be achieved without delay or running late. Increasing the variability of case duration decreases the utilization that can be achieved within these targets. IMPLICATIONS: Using a simulated operating room (OR), the authors demonstrate that OR utilization higher than 85% to 90% leads to patient delays and staff overtime. Increased efficiency of an OR comes at a cost of patient convenience.
---
paper_title: Enterprise-Wide Patient Scheduling Information Systems to Coordinate Surgical Clinic and Operating Room Scheduling Can Impair Operating Room Efficiency
paper_content:
---
paper_title: Operating room utilization: information management systems.
paper_content:
Purpose of review: Advances during the past year in operational decision making using information management systems data have been predominantly in better understanding of how to allocate operating room time based on operating room efficiency, not just operating room utilization. Recent findings: Each q
---
paper_title: Operating room managers' use of integer programming for assigning block time to surgical groups: a case study.
paper_content:
A common problem at hospitals with fixed amounts of available operating room (OR) time (i.e., "block time") is determining an equitable method of distributing time to surgical groups. Typically, facilities determine a surgical group's share of available block time using formulas based on OR utilization, contribution margin, or some other performance metric. Once each group's share of time has been calculated, a method must be found for fitting each group's allocated OR time into the surgical master schedule. This involves assigning specific ORs on specific days of the week to specific surgical groups, usually with the objective of ensuring that the time assigned to each group is close to its target share. Unfortunately, the target allocated to a group is rarely expressible as a multiple of whole blocks. In this paper, we describe a hospital's experience using the mathematical technique of integer programming to solve the problem of developing a consistent schedule that minimizes the shortfall between each group's target and actual assignment of OR time. Schedule accuracy, the sum over all surgical groups of shortfalls divided by the total time available on the schedule, was 99.7% (SD 0.1%, n = 11). Simulations show the algorithm's accuracy can exceed 97% with ≥ 4 ORs. The method is a systematic and successful way to assign OR blocks to surgeons. IMPLICATIONS: At hospitals with a fixed budget of operating room (OR) time, integer programming can be used by OR managers to decide which surgical group is to be allocated which OR on which day(s) of the week. In this case study, we describe the successful application of integer programming to this task, and discuss the applicability of the results to other hospitals.
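The block-assignment problem described here reduces to a small integer program: decide which surgical group gets each OR on each weekday so that each group's assigned time falls as little as possible below its target. The sketch below uses invented targets, equal-length 8-hour blocks, and PuLP/CBC rather than whatever solver the hospital used; it only illustrates the shortfall-minimizing formulation.

# Miniature block-assignment integer program (invented data): assign each
# (OR, weekday) block of 8 h to at most one surgical group, minimizing the
# total shortfall of assigned hours below each group's target share.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value, PULP_CBC_CMD

BLOCK_H = 8
ors = ["OR1", "OR2", "OR3", "OR4"]
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
target_h = {"ortho": 52, "general": 44, "uro": 28, "ent": 20, "plastics": 16}

prob = LpProblem("block_assignment", LpMinimize)
x = {(g, o, d): LpVariable(f"x_{g}_{o}_{d}", cat="Binary")
     for g in target_h for o in ors for d in days}
short = {g: LpVariable(f"short_{g}", lowBound=0) for g in target_h}

prob += lpSum(short[g] for g in target_h)                 # total shortfall below targets
for o in ors:
    for d in days:
        prob += lpSum(x[g, o, d] for g in target_h) <= 1  # at most one group per block
for g in target_h:
    assigned = BLOCK_H * lpSum(x[g, o, d] for o in ors for d in days)
    prob += short[g] >= target_h[g] - assigned            # shortfall definition

prob.solve(PULP_CBC_CMD(msg=0))
for g in target_h:
    got = BLOCK_H * sum(x[g, o, d].varValue for o in ors for d in days)
    print(f"{g:>9}: target {target_h[g]} h, assigned {got:.0f} h")
print("schedule accuracy:",
      f"{1 - value(prob.objective) / sum(target_h.values()):.1%}")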
---
paper_title: How to release allocated operating room time to increase efficiency: predicting which surgical service will have the most underutilized operating room time.
paper_content:
At many facilities, surgeons and patients choose the day of surgery, cases are not turned away, and staffing is adjusted to maximize operating room (OR) efficiency. If a surgical service has already filled its allocated OR time, but has an additional case to schedule, then OR efficiency is increased by scheduling the new case into the OR time of a different service with much underutilized OR time. The latter service is said to be "releasing" its allocated OR time. In this study, we analyzed 3 years of scheduling data from a medium-sized and a large surgical suite. Theoretically, the service that should have its OR time released is the service expected to have the most underutilized OR time on the day of surgery (i.e., any future cases that may be scheduled into that service's time also need to be factored in). However, we show that OR efficiency is only slightly less when the service whose time is released is the service that has the most allocated but unscheduled (i.e., unfilled) OR time at the moment the new case is scheduled. In contrast, compromising by releasing the OR time of a service other than the one with the most allocated but unscheduled OR time markedly reduces OR efficiency. OR managers can use these results when releasing allocated OR time.
---
paper_title: A stochastic model for operating room planning with elective and emergency demand for surgery
paper_content:
This paper describes a stochastic model for Operating Room (OR) planning with two types of demand for surgery: elective surgery and emergency surgery. Elective cases can be planned ahead and have a patient-related cost depending on the surgery date. Emergency cases arrive randomly and have to be performed on the day of arrival. The planning problem consists in assigning elective cases to different periods over a planning horizon in order to minimize the sum of elective patient related costs and overtime costs of operating rooms. A new stochastic mathematical programming model is first proposed. We then propose a Monte Carlo optimization method combining Monte Carlo simulation and Mixed Integer Programming. The solution of this method is proved to converge to a real optimum as the computation budget increases. Numerical results show that important gains can be realized by using a stochastic OR planning model.
---
paper_title: Operating room efficiency and scheduling
paper_content:
Purpose of review: The review focuses on six papers published in 2004 that pertain to operating room (OR) efficiency. Recent findings: When to release OR time was much less important than was having the correct OR allocations in the first place. If OR time must be released, then this decision should be b
---
paper_title: When to Release Allocated Operating Room Time to Increase Operating Room Efficiency
paper_content:
We studied when allocated, but unfilled, operating room (OR) time of surgical services should be released to maximize OR efficiency. OR time was allocated for two surgical suites based on OR efficiency. Then, we analyzed real OR schedules. We added new hypothetical cases lasting 1, 2, or 3 h into OR time of the service that had the largest difference between allocated and scheduled cases (i.e., the most unfilled OR time) 5 days before the day of surgery. The process was repeated using the updated OR schedule available the day before surgery. The pair-wise difference in resulting overutilized OR time was calculated for n = 754 days of data from each of the two surgical suites. We found that postponing the decision of which service gets the new case until early the day before surgery reduces overutilized OR time by <15 min per OR per day as compared to releasing the allocated OR time 5 days before surgery. These results show that when OR time is released has a negligible effect on OR efficiency. This is especially true for ambulatory surgery centers with brief cases or large surgical suites with specialty-specific OR teams. What matters much more is having the correct OR allocations and, if OR time needs to be released, making that decision based on the scheduled workload. IMPLICATIONS: Provided operating room (OR) time is allocated and cases are scheduled based on maximizing OR efficiency, then whether OR time is released five days or one day before the day of surgery has a negligible effect on OR efficiency.
---
paper_title: Hospital Operating Room Capacity Expansion
paper_content:
A large midwestern hospital is expecting an increase in surgical caseload. New operating room (OR) capacity can be had by building new ORs or extending the working hours in the current ORs. The choice among these options is complicated by the fact that patients, surgeons and surgical staff, and hospital administrators are all important stakeholders in the health service operation, and each has different priorities. This paper investigates the trade-offs among three performance criteria (wait to get on schedule, scheduled procedure start-time reliability, and hospital profits), which are of particular importance to the different constituencies. The objective is to determine how the hospital can best expand its capacity, acknowledging the key role that each constituency plays in that objective. En route, the paper presents supporting analysis for process improvements and suggestions for optimal participation-inducing staff contracts for extending OR hours of operation.
---
paper_title: Operating Theatre Optimization : A Resource-Constrained Based Solving Approach
paper_content:
The operating theatre is considered the bottleneck of the hospital and one of its most resource-consuming units. Its management is therefore of primary interest: efficient planning makes the best possible use of operating theatre availability and reduces staffing and financial costs. This paper deals with operating theatre planning optimization. We present a mathematical model that combines surgery planning and scheduling over a short time horizon while taking into account the availability of renewable and nonrenewable resources. The model is inspired by project management, in particular the resource-constrained project scheduling problem (RCPSP). We also introduce a genetic algorithm to solve the problem heuristically, basing the approach on the multi-mode variant of the RCPSP to define the crossover, mutation, and selection operators that allow the global search to work effectively.
---
paper_title: Mount Sinai Hospital Uses Integer Programming to Allocate Operating Room Time
paper_content:
This paper describes how Mount Sinai Hospital used an integer programming model to allocate operating room block time among surgical divisions, producing a cyclic master surgical schedule that minimizes the shortfall between each division's target hours and the hours actually assigned.
---
paper_title: A Sequential Bounding Approach for Optimal Appointment Scheduling
paper_content:
This study is concerned with the determination of optimal appointment times for a sequence of jobs with uncertain durations. Such appointment systems are used in many customer service applications to increase the utilization of resources, match workload to available capacity, and smooth the flow of customers. We show that the problem can be expressed as a two-stage stochastic linear program that includes the expected cost of customer waiting, server idling, and a cost of tardiness with respect to a chosen session length. We exploit the problem structure to derive upper bounds that are independent of job duration distribution type. These upper bounds are used in a variation of the standard L-shaped algorithm to obtain optimal solutions via successively finer partitions of the support of job durations. We present new analytical insights into the problem as well as a series of numerical experiments that illustrate properties of the optimal solution with respect to distribution type, cost structure, and number of jobs.
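Illustrative sketch (not from the cited paper): a sample-average stand-in for the two-stage stochastic program, estimating the expected waiting, idling and tardiness cost of a given vector of appointment times. The job-duration distribution, cost weights and session length are hypothetical.

```python
import random

def expected_cost(appt, mean_dur, cw=1.0, ci=1.0, ct=2.0,
                  session_end=480.0, n_samples=2000, seed=0):
    """Estimate expected waiting + idling + tardiness cost of a vector of
    appointment times by sampling exponential job durations (an illustrative
    stand-in for the two-stage stochastic linear program)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        durations = [rng.expovariate(1.0 / m) for m in mean_dur]
        t = 0.0                                # time the server becomes free
        wait = idle = 0.0
        for i, a in enumerate(appt):
            start = max(t, a)
            wait += start - a                  # customer waiting
            idle += max(0.0, a - t)            # server idling
            t = start + durations[i]
        tardiness = max(0.0, t - session_end)  # overrun past session length
        total += cw * wait + ci * idle + ct * tardiness
    return total / n_samples

# Example: four jobs with 90-minute mean durations.
means = [90, 90, 90, 90]
print(expected_cost([0, 90, 180, 270], means))   # naive equal spacing
print(expected_cost([0, 100, 205, 315], means))  # a little extra slack
```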
---
paper_title: Optimization of surgery sequencing and scheduling decisions under uncertainty
paper_content:
Operating rooms (ORs) are simultaneously the largest cost center and greatest source of revenues for most hospitals. Due to significant uncertainty in surgery durations, scheduling of ORs can be very challenging. Longer than average surgery durations result in late starts not only for the next surgery in the schedule, but potentially for the rest of the surgeries in the day as well. Late starts also result in direct costs associated with overtime staffing when the last surgery of the day finishes later than the scheduled shift end time. In this article we describe a stochastic optimization model and some practical heuristics for computing OR schedules that hedge against the uncertainty in surgery durations. We focus on the simultaneous effects of sequencing surgeries and scheduling start times. We show that a simple sequencing rule based on surgery duration variance can be used to generate substantial reductions in total surgeon and OR team waiting, OR idling, and overtime costs. We illustrate this with results of a case study that uses real data to compare actual schedules at a particular hospital to those recommended by our model.
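Illustrative sketch (not the authors' model): a Monte Carlo comparison of a variance-based case sequence against its reverse, with scheduled start times set to cumulative mean durations. The means, standard deviations and normal-duration assumption are hypothetical.

```python
import random
from statistics import mean

def simulate(order, means, sds, n_rep=5000, day_len=480, seed=1):
    """Monte Carlo estimate of total waiting past scheduled starts and of
    overtime for a given case sequence; scheduled starts are cumulative means."""
    rng = random.Random(seed)
    sched, t = [], 0.0
    for i in order:
        sched.append(t)
        t += means[i]
    waits, overtimes = [], []
    for _ in range(n_rep):
        free, wait = 0.0, 0.0
        for slot, i in enumerate(order):
            dur = max(5.0, rng.gauss(means[i], sds[i]))
            start = max(free, sched[slot])
            wait += start - sched[slot]
            free = start + dur
        waits.append(wait)
        overtimes.append(max(0.0, free - day_len))
    return mean(waits), mean(overtimes)

means = [60, 120, 90, 150]
sds   = [10, 60, 20, 45]                                   # hypothetical SDs
low_var_first  = sorted(range(4), key=lambda i: sds[i])
high_var_first = sorted(range(4), key=lambda i: -sds[i])
print("variance-increasing:", simulate(low_var_first, means, sds))
print("variance-decreasing:", simulate(high_var_first, means, sds))
```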
---
paper_title: The use of Simulation to Determine Maximum Capacity in the Surgical Suite Operating Room
paper_content:
Utilizing ambulatory care units at optimal levels has become increasingly important to hospitals from both service and business perspectives. With the inherent variation in hospitals due to unique procedures and patients, performing capacity analysis through analytical models is difficult without making simplifying assumptions. Many hospitals calculate efficiency by comparing total operating room minutes available to total operating minutes used. This metric fails to account both for the required non-value-added tasks between surgeries and for the delicate balance between having patients ready for surgery when an operating room becomes available (which can result in increased waiting times) and maximizing patient satisfaction. We present a general methodology for determining the maximum capacity within a surgical suite through the use of a discrete-event simulation model. This research is based on an actual hospital concerned with doctor/resource acquisition decisions, patient satisfaction improvements, and increased productivity.
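Illustrative sketch (a much-simplified stand-in for a discrete-event simulation model): estimate the daily case throughput of a suite as a function of the number of ORs, with exponential case durations and a fixed turnover time between cases. All parameter values are hypothetical.

```python
import random

def daily_throughput(n_rooms, mean_case=90.0, turnover=30.0,
                     day_len=600.0, n_days=500, seed=2):
    """Each OR runs cases back to back with a turnover between them; count the
    cases that finish within the staffed day, averaged over many simulated days."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_days):
        for _ in range(n_rooms):
            t = 0.0
            while True:
                dur = rng.expovariate(1.0 / mean_case)
                if t + dur > day_len:          # next case would overrun staffed hours
                    break
                total += 1
                t += dur + turnover
    return total / n_days

for rooms in (4, 6, 8):
    print(rooms, "ORs ->", round(daily_throughput(rooms), 1), "cases/day")
```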
---
paper_title: Allocation of Surgeries to Operating Rooms by Goal Programing
paper_content:
High usage rate in a surgical suite is extremely important in meeting the increasing demand for health care services and reducing costs to improve quality of care. In this paper a goal programming model which can produce schedules that best serve the needs of the hospital, i.e., by minimizing idle time and overtime, and increasing satisfaction of surgeons, patients, and staff, is described. The approach involves sorting the requests for a particular day on the basis of block restrictions, room utilization, surgeon preferences and intensive care capabilities. The model is tested using the data obtained during field studies at Dokuz Eylul University Hospital. The model is also tested for alternative achievement functions to examine the model's ability to satisfy abstract goals.
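Illustrative sketch (not the paper's model): a tiny weighted goal program, solved by brute-force enumeration, that assigns cases to two rooms while penalising deviations from the "fill each room exactly" goal. Case durations, capacities and weights are hypothetical.

```python
from itertools import product

# Hypothetical data: minutes requested per case, two rooms with 480 min each.
cases = [150, 120, 200, 90, 180]
capacity = [480, 480]
w_over, w_idle = 3.0, 1.0     # weights on overtime and idle-time deviations

def goal_value(assignment):
    """Weighted sum of deviations from the capacity goal in each room:
    positive deviation = overtime, negative deviation = idle time."""
    load = [0] * len(capacity)
    for case, room in zip(cases, assignment):
        load[room] += case
    over = sum(max(0, l - c) for l, c in zip(load, capacity))
    idle = sum(max(0, c - l) for l, c in zip(load, capacity))
    return w_over * over + w_idle * idle

best = min(product(range(len(capacity)), repeat=len(cases)), key=goal_value)
print("room per case:", best, "objective:", goal_value(best))
```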
---
paper_title: The Impact on Revenue of Increasing Patient Volume at Surgical Suites with Relatively High Operating Room Utilization
paper_content:
We previously studied hospitals in the United States of America that are losing money despite limiting the hours that operating room (OR) staff are available to care for patients undergoing elective surgery. These hospitals routinely keep utilization relatively high to maximize revenue. We tested, using discrete-event computer simulation, whether increasing patient volume while being reimbursed less for each additional patient can reliably achieve an increase in revenue when initial adjusted OR utilization is 90%. We found that increasing the volume of referred patients by the amount expected to fill the surgical suite (100%/90%) would increase utilization by <1% for a hospital surgical suite (with longer duration cases) and 4% for an ambulatory surgery suite (with short cases). The increase in patient volume would result in longer patient waiting times for surgery and more patients leaving the surgical queue. With a 15% reduction in payment for the new patients, the increase in volume may not increase revenue and can even decrease the contribution margin for the hospital surgical suite. The implication is that for hospitals with a relatively high OR utilization, signing discounted contracts to increase patient volume by the amount expected to "fill" the OR can have the net effect of decreasing the contribution margin (i.e., profitability). (Anesth Analg 2001;92:1215-21)
---
paper_title: Building cyclic master surgery schedules with leveled resulting bed occupancy
paper_content:
This paper proposes and evaluates a number of models for building surgery schedules with leveled resulting bed occupancy. The developed models involve two types of constraints. Demand constraints ensure that each surgeon (or surgical group) obtains a specific number of operating room blocks. Capacity constraints limit the available blocks on each day. Furthermore, the number of operated patients per block and the length of stay of each operated patient are dependent on the type of surgery. Both are considered stochastic, following a multinomial distribution. We develop a number of mixed integer programming based heuristics and a metaheuristic to minimize the expected total bed shortage and present computational results.
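Illustrative sketch (not the authors' MIP heuristics): computing the expected bed-occupancy profile implied by a cyclic block schedule, given expected patients per block and a discrete length-of-stay distribution. The schedule and distributions are hypothetical.

```python
# Hypothetical cyclic schedule: blocks[day] lists the surgical groups operating
# that weekday; each group has an expected number of admitted patients per block
# and a discrete length-of-stay distribution P(LOS = d nights).
blocks = {0: ["ortho", "ortho"], 1: ["cardio"], 2: ["ortho"],
          3: ["cardio", "gen"], 4: ["gen"]}
patients_per_block = {"ortho": 4.0, "cardio": 2.5, "gen": 5.0}
los_dist = {"ortho": {1: 0.2, 2: 0.5, 3: 0.3},
            "cardio": {3: 0.4, 5: 0.6},
            "gen": {1: 0.7, 2: 0.3}}

def expected_occupancy(cycle_len=7):
    """Expected number of occupied beds on each day of the repeating cycle."""
    occ = [0.0] * cycle_len
    for day, groups in blocks.items():
        for g in groups:
            for los, p in los_dist[g].items():
                expected = patients_per_block[g] * p
                for d in range(los):        # a patient occupies a bed for `los` nights
                    occ[(day + d) % cycle_len] += expected
    return occ

occ = expected_occupancy()
print([round(x, 1) for x in occ], "peak:", round(max(occ), 1))
```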
---
paper_title: Scheduling Surgical Cases into Overflow Block Time— Computer Simulation of the Effects of Scheduling Strategies on Operating Room Labor Costs
paper_content:
“Overflow” block time is operating room (OR) time for a surgical group’s cases that cannot be completed in the regular block time allocated to each surgeon in the surgical group. Having such overflow block time increases OR utilization. The optimal way to schedule patients into a surgical group’s ov
---
paper_title: Determining the Number of Beds in the Postanesthesia Care Unit: A Computer Simulation Flow Approach
paper_content:
UNLABELLED ::: Designing a new operating room (OR) suite is a difficult process owing to the number of caregivers involved and because decision-making managers try to minimize the direct and indirect costs of operating the OR suite. In this study, we devised a computer simulation flow model to calculate, first, the minimum number of beds required in the postanesthesia care unit (PACU). In a second step, we evaluated the relationship between the global performance of the OR suite in terms of OR scheduling and number of staffed PACU beds and porters. We designed a mathematical model of OR scheduling. We then developed a computer simulation flow model of the OR suite. Both models were connected; the first one performed the input flows, and the second simulated the OR suite running. The simulations performed examined the number of beds in the PACU in an ideal situation or in the case of reduction in the number of porters. We then analyzed the variation of number of beds occupied per hour in the PACU when the time spent by patients in the PACU or the number of porters varied. The results highlighted the strong impact of the number of porters on the OR suite performance and particularly on PACU performances. ::: ::: ::: IMPLICATIONS ::: Designing new operating room (OR) facilities implies many decisions on the number of ORs, postanesthesia care unit (PACU) beds, and on the staff of nurses and porters. To make these decisions, managers can use rules of thumb or recommendations. Our study highlights the interest of using flow simulation to validate these choices. In this case study we determine the number of PACU beds and porter staff and assess the impact of decreasing the number of porters on PACU bed requirements.
---
paper_title: A branch-and-price approach for integrating nurse and surgery scheduling
paper_content:
A common problem at hospitals is the extreme variation in daily (even hourly) workload pressure for nurses. The operating room is considered to be the main engine and hence the main generator of variance in the hospital. The purpose of this paper is threefold. First of all, we present a concrete model that integrates both the nurse and the operating room scheduling process. Second, we show how the column generation technique approach, one of the most employed exact methods for solving nurse scheduling problems, can easily cope with this model extension. Third, by means of a large number of computational experiments we provide an idea of the cost saving opportunities and required solution times.
---
paper_title: Visualizing the Demand for Various Resources as a Function of the Master Surgery Schedule: A Case Study
paper_content:
This paper presents a software system that visualizes the impact of the master surgery schedule on the demand for various resources throughout the rest of the hospital. The master surgery schedule can be seen as the engine that drives the hospital. Therefore, it is very important for decision makers to have a clear image on how the demand for resources is linked to the surgery schedule. The software presented in this paper enables schedulers to instantaneously view the impact of, e.g., an exchange of two block assignments in the master surgery schedule on the expected resource consumption pattern. A case study entailing a large Belgian surgery unit illustrates how the software can be used to assist in building better surgery schedules.
---
paper_title: How to Schedule Elective Surgical Cases into Specific Operating Rooms to Maximize the Efficiency of Use of Operating Room Time
paper_content:
We considered elective case scheduling at hospitals and surgical centers at which surgeons and patients choose the day of surgery, cases are not turned away, and anesthesia and nursing staffing are adjusted to maximize the efficiency of use of operating room (OR) time. We investigated scheduling a new case into an OR by using two patient-scheduling rules: Earliest Start Time or Latest Start Time. By using several scenarios, we showed that the use of Earliest Start Time is rational economically at such facilities. Specifically, it maximizes OR efficiency when a service has nearly filled its regularly scheduled hours of OR time. However, Latest Start Time will perform better at balancing workload among services’ OR time. We then used historical case duration data from two facilities in computer simulations to investigate the effect of errors in predicting case durations on the performance of these two heuristics. The achievable incremental reduction in overtime by having perfect information on case duration versus using historical case durations was only a few minutes per OR. The differences between Earliest Start Time and Latest Start Time were also only a few minutes per OR. We conclude that for facilities at which the goals are, in order of importance, safety, patient and surgeon access to OR time, and then efficiency, few restrictions need to be placed on patient scheduling to achieve an efficient use of OR time. (Anesth Analg 2002;94:933–42)
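Illustrative sketch, based on one plausible reading of the two rules: book a new case either where it would start earliest (the least-booked OR) or where it would start latest while still fitting within allocated time. The booked and allocated minutes are hypothetical.

```python
# Hypothetical state: minutes already scheduled in each OR allocated to the
# service, and the staffed (allocated) minutes per OR.
scheduled = {"OR1": 300, "OR2": 420, "OR3": 150}
allocated = {"OR1": 480, "OR2": 480, "OR3": 480}

def assign(case_min, rule="earliest"):
    """Pick an OR for a new case. 'earliest' books it where it would start
    soonest; 'latest' books it where it would start as late as possible while
    still fitting within the allocated time (falling back to 'earliest')."""
    if rule == "earliest":
        return min(scheduled, key=scheduled.get)
    fitting = [r for r in scheduled if scheduled[r] + case_min <= allocated[r]]
    if fitting:
        return max(fitting, key=scheduled.get)
    return min(scheduled, key=scheduled.get)

print(assign(90, "earliest"))  # least-booked OR
print(assign(90, "latest"))    # fullest OR that still has room
```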
---
paper_title: Schedule the Short Procedure First to Improve OR Efficiency
paper_content:
ABSTRACT • OPERATING ROOM MANAGERS are hampered in their efforts to optimize OR efficiency by surgical procedures that last a longer or shorter time than scheduled. The lack of predictability is a result of inaccuracy in scheduling and variability in the duration of procedures. • SCHEDULING SHORT PROCEDURES before long procedures theoretically limits this variability. • MONTE CARLO SIMULATION of ORs scheduled with various combinations of short and long procedures supports this concept's validity. • RESULTS INDICATE that scheduling short procedures first can improve on-time performance and decrease staff member overtime expense without reducing surgical throughput. AORNJ 78 (October 2003) 651–657.
---
paper_title: Optimisation Modelling of hospital operating room planning : analyzing strategies and problem settings
paper_content:
There is a growing proportion of elderly which increases the demand for health care. As a consequence health care costs are rising and the need for hospital resource planning seems urgent. Different aspects (often conflicting) such as patient demand, clinical need and political ambitions must be considered. In this paper we propose a model for analyzing a hospital surgical suite with focus on operating room planning. An optimization model is developed for patient operation scheduling and for key resource allocation. Medical examinations and treatments of patients are performed using a number of resources, similar to products being refined in a number of processes in a logistics chain. Optimal resource allocation, given different objectives according to patient perspective, staff perspective, costs etc. under different system settings (e.g. principles for operating room allocation and amount of stand-by personnel), is studied. Preliminary results are presented based on case studies from two Swedish hospitals.
---
paper_title: The value of the dedicated orthopaedic trauma operating room.
paper_content:
Background: Trauma centers and orthopaedic surgeons have traditionally been faced with limited operating room (OR) availability for fracture surgery. Orthopaedic trauma cases are often waitlisted and done late at night. We investigated the feasibility of having an unbooked orthopaedic trauma OR to re
---
paper_title: Scheduling patients in an ambulatory surgical center
paper_content:
This paper presents a deterministic approach to schedule patients in an ambulatory surgical center (ASC) such that the number of postanesthesia care unit nurses at the center is minimized. We formulate the patient scheduling problem as new variants of the no-wait, two-stage process shop scheduling problem and present computational complexity results for the new scheduling models. Also, we develop a tabu search-based heuristic algorithm to solve the patient scheduling problem. Our algorithm is shown to be very effective in finding near optimal schedules on a set of real data from a university hospital's ASC. © 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2003
---
paper_title: Analysis via goal programming of the minimum achievable stay in surgical waiting lists
paper_content:
In this paper, a Goal Programming model is developed in order to study the possibility of decreasing the length of stay on the waiting list of a hospital that belongs to the Spanish Health Service. First, a problem is solved to determine the optimal planning for one year, so as to make the maximum waiting time decrease to six months (at present, some operations have a waiting list of more than a year). Afterwards, two other problems are solved in order to determine the impact that a further reduction of the waiting time (four months) would have on the requirements of extra resources for the hospital. The particular problem for the Trauma service is described in detail, but global results are shown and commented.
---
paper_title: A goal programming approach to strategic resource allocation in acute care hospitals
paper_content:
Abstract This paper describes a methodology for allocating resources in hospitals. The methodology uses two linear goal-programming models. One model sets case mix and volume for physicians, while holding service costs fixed; the other translates case mix decisions into a commensurate set of practice changes for physicians. The models allow decision makers to set case mix and case costs in such a way that the institution is able to break even, while preserving physician income and minimizing disturbance to practice. The models also permit investigation of trade-offs between case mix and physician practice parameters. Results are presented from a decision-making scenario facing the surgical division of Toronto's Mount Sinai Hospital after the announcement of a 3-year, 18% reduction in funding.
---
paper_title: Patient mix optimization in tactical cardiothoracic surgery planning: a case study
paper_content:
Cardiothoracic surgery planning involves different resources such as operating theatre (OT) time, medium care beds, intensive care beds and nursing staff. Within cardiothoracic surgery different categories of patients can be distinguished with respect to their requirements of resources. The mix of patients is, therefore, an important aspect of decision making for the hospital to manage the use of these resources. A master OT schedule is used at the tactical level of planning for deriving the weekly OT plan. It defines for each day of a week the number of OT hours available and the number of patients operated from each patient category. We develop a model for this tactical level planning problem, the core of which is a mixed integer linear program. The model is used to evaluate scenarios for surgery planning at tactical as well as strategic levels, demonstrating the potential of integer programming for providing recommendations for change.
---
paper_title: Operating Room Managerial Decision-Making on the Day of Surgery With and Without Computer Recommendations and Status Displays
paper_content:
BACKGROUND: There are three basic types of decision aids to facilitate operating room (OR) management decision-making on the day of surgery. Decision makers can rely on passive status displays (e.g., big screens or whiteboards), active status displays (e.g., text pager notification), and/or command displays (e.g., text recommendations about what to do). METHODS: Anesthesiologists, OR nurses, and housekeepers were given nine simulated scenarios (vignettes) involving multiple ORs to study their decision-making. Participants were randomized to one of four groups, all with an updated paper OR schedule: with/without command display and with/without passive status display. RESULTS: Participants making decisions without command displays performed no better than random chance in terms of increasing the predictability of work hours, reducing over-utilized OR time, and increasing OR efficiency. Status displays had no effect on these end-points, whereas command displays improved the quality of decisions. In the scenarios for which the command displays provided recommendations that adversely affected safety, participants appropriately ignored advice. CONCLUSIONS: Anesthesia providers and nursing staff made decisions that increased clinical work per unit time in each OR, even when doing so resulted in an increase in over-utilized OR time, higher staffing costs, unpredictable work hours, and/or mandatory overtime. Organizational culture and socialization during clinical training may be a cause. Command displays showed promise in mitigating this tendency. Additional investigations are in our companion paper.
---
paper_title: Strategies to reduce delays in admission into a postanesthesia care unit from operating rooms
paper_content:
The authors performed a systematic review of strategies to reduce delays in admission into PACUs from ORs. The purpose of this article was to evaluate for managers how to choose interventions based on effectiveness and practicality. The authors discuss optimization methods that can be used to sequence consecutive cases in the same OR, by the same surgeon, on the same day, based on the objective of reducing delays in PACU admission due to the unavailability of unfilled PACU beds. Although effective, such methods can be impractical because of large organizational change required and limited equipment or personnel availability. When all physical beds are not full, PACU nurse staffing can be adjusted. Statistical methods can be used to ensure that nursing schedules closely match the times that minimize delays in PACU admission. These methods are effective and practical. Explicit criteria can be applied to assist in deciding when to assign other qualified nurses to the PACU, when to ask PACU nurses to work late, and/or when to make a decision on the day before surgery to add more PACU nurses for the next day (if available). The latter would be based on statistical forecasts of the timing of patients' admissions into the PACU. Whether or not all physical beds are full, the risk of delays in PACU admission is relatively insensitive to economically feasible reductions in PACU length of stay. Such interventions should be considered only if statistical analysis, performed by using computer simulation, has established that reducing PACU length of stay will reduce delays in admission at a manager's facility.
---
paper_title: An operating theatre planning and scheduling problem in the case of a "block scheduling" strategy
paper_content:
The operating theatre is the most important and expensive sector of the hospital, and managing its surgical process is regarded as the core problem. In this paper, we focus on one of these surgical process management problems: the block scheduling problem. An efficient weekly operating program is built for an operating theatre in two phases: first, the weekly planning problem is solved with a heuristic procedure based on column generation; then the daily scheduling problem, based on the results of the first phase, is solved with a hybrid genetic algorithm. Finally, the proposed approach is tested and validated on randomly generated data, and numerical results are reported.
---
paper_title: A Hierarchical Multiple Criteria Mathematical Programming Approach for Scheduling General Surgery Operations in Large Hospitals
paper_content:
Limited staff and equipment within surgical services require efficient use of these resources among multiple surgeon groups. In this study, a set of hierarchical multiple criteria mathematical programming models are developed to generate weekly operating room schedules. The goals considered in these models are maximum utilization of operating room capacity, balanced distribution of operations among surgeon groups in terms of operation days, lengths of operation times, and minimization of patient waiting times. Because of computational difficulty of this scheduling problem, the overall problem is broken down into manageable hierarchical stages: (1) selection of patients, (2) assignment of operations to surgeon groups, and (3) determination of operation dates and operating rooms. Developed models are tested on the data collected in College of Medicine Research Hospital at Cukurova University as well as on simulated data sets, using MPL optimization package.
---
paper_title: A three-phase approach for operating theatre schedules
paper_content:
In this paper we develop a three-phase, hierarchical approach for the weekly scheduling of operating rooms. This approach has been implemented in one of the surgical departments of a public hospital located in Genova (Genoa), Italy. Our aim is to suggest an integrated way of facing surgical activity planning in order to improve overall operating theatre efficiency in terms of overtime and throughput as well as waiting list reduction, while improving department organization. In the first phase we solve a bin packing-like problem in order to select the number of sessions to be weekly scheduled for each ward; the proposed and original selection criterion is based upon an updated priority score taking into proper account both the waiting list of each ward and the reduction of residual ward demand. Then we use a blocked booking method for determining optimal time tables, denoted Master Surgical Schedule (MSS), by defining the assignment between wards and surgery rooms. Lastly, once the MSS has been determined we use the simulation software environment Witness 2004 in order to analyze different sequencings of surgical activities that arise when priority is given on the basis of a) the longest waiting time (LWT), b) the longest processing time (LPT) and c) the shortest processing time (SPT). The resulting simulation models also allow us to outline possible organizational improvements in surgical activity. The results of an extensive computational experimentation pertaining to the studied surgical department are here given and analyzed.
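Illustrative sketch of the bin-packing flavour of the first phase (not the paper's priority-score procedure): a first-fit-decreasing heuristic packing each ward's weekly OR-hour demand into fixed-length sessions. The demands and session length are hypothetical.

```python
# Hypothetical weekly demand: requested OR hours per ward, to be packed into
# OR-day sessions of fixed length.
demand = {"general": 14, "ortho": 10, "urology": 6, "ENT": 5, "vascular": 9}
session_hours = 8

def first_fit_decreasing(demand, session_hours):
    # split each ward's demand into chunks no longer than one session
    items = []
    for ward, hours in demand.items():
        while hours > 0:
            chunk = min(hours, session_hours)
            items.append((chunk, ward))
            hours -= chunk
    items.sort(reverse=True)                  # largest chunks first
    bins = []                                 # each bin = [remaining hours, contents]
    for hours, ward in items:
        for b in bins:
            if b[0] >= hours:                 # first session with enough room
                b[0] -= hours
                b[1].append((ward, hours))
                break
        else:
            bins.append([session_hours - hours, [(ward, hours)]])
    return bins

for i, (slack, content) in enumerate(first_fit_decreasing(demand, session_hours), 1):
    print(f"session {i}: {content} (idle {slack} h)")
```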
---
paper_title: Enterprise-Wide Patient Scheduling Information Systems to Coordinate Surgical Clinic and Operating Room Scheduling Can Impair Operating Room Efficiency
paper_content:
---
paper_title: Operating room managers' use of integer programming for assigning block time to surgical groups: a case study.
paper_content:
UNLABELLED ::: A common problem at hospitals with fixed amounts of available operating room (OR) time (i.e., "block time") is determining an equitable method of distributing time to surgical groups. Typically, facilities determine a surgical group's share of available block time using formulas based on OR utilization, contribution margin, or some other performance metric. Once each group's share of time has been calculated, a method must be found for fitting each group's allocated OR time into the surgical master schedule. This involves assigning specific ORs on specific days of the week to specific surgical groups, usually with the objective of ensuring that the time assigned to each group is close to its target share. Unfortunately, the target allocated to a group is rarely expressible as a multiple of whole blocks. In this paper, we describe a hospital's experience using the mathematical technique of integer programming to solve the problem of developing a consistent schedule that minimizes the shortfall between each group's target and actual assignment of OR time. Schedule accuracy, the sum over all surgical groups of shortfalls divided by the total time available on the schedule, was 99.7% (SD 0.1%, n = 11). Simulations show the algorithm's accuracy can exceed 97% with > or =4 ORs. The method is a systematic and successful way to assign OR blocks to surgeons. ::: ::: ::: IMPLICATIONS ::: At hospitals with a fixed budget of operating room (OR) time, integer programming can be used by OR managers to decide which surgical group is to be allocated which OR on which day(s) of the week. In this case study, we describe the successful application of integer programming to this task, and discuss the applicability of the results to other hospitals.
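Illustrative sketch (a brute-force stand-in for the integer program, on a tiny instance): assign five weekly OR-day blocks to surgical groups so as to minimise the total shortfall from fractional target allocations. The slot names and targets are hypothetical.

```python
from itertools import product

# Hypothetical tiny instance: five weekly OR-day blocks to hand out, and target
# block counts per surgical group (targets need not be whole numbers).
slots = ["Mon-OR1", "Tue-OR1", "Wed-OR1", "Thu-OR1", "Fri-OR1"]
targets = {"groupA": 2.4, "groupB": 1.8, "groupC": 0.8}
groups = list(targets)

def shortfall(assignment):
    """Total shortfall between each group's target and its assigned blocks."""
    counts = {g: 0 for g in groups}
    for g in assignment:
        counts[g] += 1
    return sum(max(0.0, targets[g] - counts[g]) for g in groups)

best = min(product(groups, repeat=len(slots)), key=shortfall)
print(dict(zip(slots, best)), "total shortfall:", round(shortfall(best), 1))
```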
---
paper_title: Mount Sinai Hospital Uses Integer Programming to Allocate Operating Room Time
paper_content:
This paper describes how Mount Sinai Hospital used an integer programming model to allocate operating room block time among surgical divisions, producing a cyclic master surgical schedule that minimizes the shortfall between each division's target hours and the hours actually assigned.
---
paper_title: Optimisation Modelling of hospital operating room planning : analyzing strategies and problem settings
paper_content:
There is a growing proportion of elderly which increases the demand for health care. As a consequence health care costs are rising and the need for hospital resource planning seems urgent. Different aspects (often conflicting) such as patient demand, clinical need and political ambitions must be considered. In this paper we propose a model for analyzing a hospital surgical suite with focus on operating room planning. An optimization model is developed for patient operation scheduling and for key resource allocation. Medical examinations and treatments of patients are performed using a number of resources, similar to products being refined in a number of processes in a logistics chain. Optimal resource allocation, given different objectives according to patient perspective, staff perspective, costs etc. under different system settings (e.g. principles for operating room allocation and amount of stand-by personnel), is studied. Preliminary results are presented based on case studies from two Swedish hospitals.
---
paper_title: Surgical case scheduling as a generalized job shop scheduling problem
paper_content:
Surgical case scheduling allocates hospital resources to individual surgical cases and decides on the time to perform the surgeries. This task plays a decisive role in utilizing hospital resources efficiently while ensuring quality of care for patients. This paper proposes a new surgical case scheduling approach which uses a novel extension of the Job Shop scheduling problem called multi-mode blocking job shop (MMBJS). It formulates the MMBJS as a mixed integer linear programming (MILP) problem and discusses the use of the MMBJS model for scheduling elective and add-on cases. The model is illustrated by a detailed example, and preliminary computational experiments with the CPLEX solver on practical-sized instances are reported.
---
paper_title: Data Structures, Algorithms, & Software Principles in C
paper_content:
(All chapters, except Chapter 1, begin with an Introduction and Motivation.) 1. Preparing for the Journey. Where Are We Going? Blending Mathematics, Science, and Engineering. The Search for Enduring Principles in Computer Science. Principles of Software System Structure. Efficiency and Tradeoffs. Software Engineering Principles. Our Approach to Mathematics. Some Notes on Programming Notation. Preview of Coming Attractions. 2. Linked Data Representations. What are Pointers? The Basic Intuition. Pointers in C-The Rudiments. Pointer Diagramming Notation. Linear Linked Lists. Other Linked Data Structures. 3. Introduction to Recursion. Thinking Recursively. Common Pitfall-Infinite Regresses. Quantitative Aspects of Recursive Algorithms. 4. Modularity and Data Abstraction. The Structure of C Modules. Priority Queues-An Abstract Data Type. A Pocket Calculator Interface. How to Hide Data Representations. Modularity and Information Hiding in Program Design. 5. Introduction to Software Engineering Concepts. Top-Down Programming By Stepwise Refinement. Proving Programs Correct. Transforming and Optimizing Programs. Testing Programs. The Philosophy of Measurement and Tuning. Software Reuse and Bottom-up Programming. Program Structuring and Documentation. 6. Introduction to Analysis of Algorithms. What Do We Use for a Yardstick? The Intuition Behind O-Notation. O-Notation-Definition and Manipulation. Analyzing Simple Algorithms. What O-Notation Doesn't Tell You. 7. Linear Data Structures-Stacks and Queues. Some Background on Stacks. ADTs for Stacks and Queues. Using the Stack ADT to Check for Balanced Parentheses. Using the Stack ADT to Evaluate Postfix Expressions. Implementing the Stack ADT. How C Implements Recursive Function Calls Using Stacks. Implementations of the Queue ADT. More Queue Applications. 8. Lists, Strings, and Dynamic Memory Allocation. Lists. Generalized Lists. Applications of Generalized Lists. Strings. Dynamic Memory Allocation. 9. Trees. Basic Concepts and Terminology. Binary Trees. A Sequential Binary Tree Representation. An Application-Heaps and Priority Queues. Traversing Binary Trees. Binary Search Trees. AVL Trees and Their Performance. Two-Three Trees. Tries. An Application-Huffman Codes. 10. Graphs. Basic Concepts and Terminology. Graph Representations. Graph Searching. Topological Ordering. Shortest Paths. Task Networks. Useful Background on Graphs. 11. Hashing and the Table ADT. The Table ADT. Introduction to Hashing by Simple Examples. Collisions, Load Factors, and Clusters. Algorithms for Hashing by Open Addressing. Choosing a Hash Function. Comparison of Searching Methods Using the Table ADT. 12. External Collections of Data. Characteristics of External Storage Devices. Techniques That Don't Work Well. Techniques That Work Well. Information Retrieval and Databases. 13. Sorting. Laying Some Groundwork. Priority Queue Sorting Methods. Divide-and-Conquer Methods. Methods That Insert Keys and Keep Them Sorted. O(n) Methods-Address Calculation Sorting. Other Methods. Comparison and Perspective. 14. Advanced Recursion. Recursion as a Descriptive Method. Using Recursion to Build a Parser. Translating from Infix to Postfix. Recursion and Program Verification. 15. Object-Oriented Programming. Exploring OOP Through Progressive Examples. Building Systems Using Object-Oriented Programming. Advantages and Disadvantages of Object-Oriented Programming. 16. Advanced Software Engineering Concepts. The Software Lifecycle. Software Productivity. Software Process Models. 
Appendix Math Reference and Tutorial.
---
paper_title: Operating Theatre Scheduling Using Lagrangian Relaxation
paper_content:
This paper addresses the surgery scheduling problem. Two types of resources are considered: operating rooms and recovery beds. Each patient first visits an operating room for surgery and is transferred to a recovery room immediately afterwards; the operating room must be cleaned before the next operation can start. The problem consists of assigning patients to operating rooms and recovery beds so as to minimize the sum, over all patients, of a given function of their completion times. The problem is NP-hard. A Lagrangian relaxation approach is proposed to determine a near-optimal schedule together with a tight lower bound, and numerical results are presented to show the efficiency of the method.
---
paper_title: Surgical block scheduling in a system of hospitals: an application to resource and wait list management in a British Columbia health authority.
paper_content:
Scheduling surgical specialties in a medical facility is a very complex process. The choice of schedules and resource availability impact directly on the number of patients treated by specialty, cancellations, wait times, and the overall performance of the system. In this paper we present a system-wide model developed to allow management to explore tradeoffs between OR availability, bed capacity, surgeons' booking privileges, and wait lists. We developed a mixed integer programming model to schedule surgical blocks for each specialty into ORs and applied it to the hospitals in a British Columbia Health Authority, considering OR time availability and post-surgical resource constraints. The results offer promising insights into resource optimization and wait list management, showing that without increasing post-surgical resources hospitals could handle more cases by scheduling specialties differently.
---
paper_title: Evaluation of operating room suite efficiency in the Veterans Health Administration system by using data-envelopment analysis
paper_content:
Abstract Background Operating room (OR) activity transcends single ratios such as cases/room, but weighting multiple inputs and outputs may be arbitrary. Data-envelopment analysis (DEA) is a novel technique by which each facility is analyzed by the weightings that optimize its score. Methods We performed DEA analysis of 23 Veterans Health Administration annual OR activity; 87,180 cases were performed, 24 publications generated, and 560 trainee-years of education delivered, in 168 ORs over 166,377 hours by 1,384 full-time equivalents of surgical and anesthesia providers and 523 nonproviders. Results Varying analyzed parameters produced similar efficiency rankings, with individual differences suggesting possible inefficiencies. We characterized returns to scale for efficient sites, suggesting whether patient flow might be efficiently further increased through these sites. We matched inefficient sites to similar efficient sites for comparison and suggested resource alterations to increase efficiency. Conclusions Broader DEA application might characterize OR efficiency more informatively than conventional single-ratio rank ordering.
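Illustrative sketch (not the study's data): the input-oriented CCR envelopment model of DEA, solved once per decision-making unit (DMU) with scipy.optimize.linprog. The input/output matrices are hypothetical; the efficiency score is the optimal theta.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: one row per surgical suite (DMU).
# inputs: [OR hours, provider FTEs]; outputs: [cases, trainee-years]
X = np.array([[8000., 60.], [12000., 95.], [7000., 50.], [15000., 130.]])
Y = np.array([[4500., 25.], [6000., 30.], [4200., 20.], [7000., 33.]])

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o (envelopment form):
    min theta s.t. sum_j lam_j*x_ij <= theta*x_io, sum_j lam_j*y_rj >= y_ro."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                   # decision vector: [theta, lam_1..lam_n]
    A, b = [], []
    for i in range(m):                            # input constraints
        A.append(np.r_[-X[o, i], X[:, i]])
        b.append(0.0)
    for r in range(s):                            # output constraints
        A.append(np.r_[0.0, -Y[:, r]])
        b.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```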
---
paper_title: A Hierarchical Multiple Criteria Mathematical Programming Approach for Scheduling General Surgery Operations in Large Hospitals
paper_content:
Limited staff and equipment within surgical services require efficient use of these resources among multiple surgeon groups. In this study, a set of hierarchical multiple criteria mathematical programming models are developed to generate weekly operating room schedules. The goals considered in these models are maximum utilization of operating room capacity, balanced distribution of operations among surgeon groups in terms of operation days, lengths of operation times, and minimization of patient waiting times. Because of computational difficulty of this scheduling problem, the overall problem is broken down into manageable hierarchical stages: (1) selection of patients, (2) assignment of operations to surgeon groups, and (3) determination of operation dates and operating rooms. Developed models are tested on the data collected in College of Medicine Research Hospital at Cukurova University as well as on simulated data sets, using MPL optimization package.
---
paper_title: A Set Packing Approach for Scheduling Elective Surgical Procedures
paper_content:
The efficient scheduling of surgical procedures to operating rooms in a hospital is a complex problem due to limited resources (e.g. medical staff, equipment) and conflicting objectives (e.g. reduce running costs and increase staff and patient satisfaction). A novel approach for scheduling elective surgeries over a short-term horizon is proposed which takes explicit consideration of these aspects. The problem is formulated as a set packing problem and solved optimally through column generation and constraint branching. Good results were obtained for instances from the literature.
---
paper_title: Tactical Operating Theatre Scheduling: Efficient Appointment Assignment
paper_content:
Finding an appointment for elective surgeries in hospitals is a task that has a direct impact on the optimization potential for offline and online daily surgery scheduling. A novel approach based on bin packing which takes into account limited resource availability (e.g. staff, equipment), its utilization, clinical priority, hospital bed distribution and surgery difficulty is proposed for this planning level. A solution procedure is presented that explores the specific structure of the model to find appointments for elective surgeries in real time. Tests performed with randomly generated data motivated by a mid size hospital suggest that the new approach yields high quality solutions.
---
paper_title: Tactical increases in operating room block time based on financial data and market growth estimates from data envelopment analysis.
paper_content:
BACKGROUND ::: Data envelopment analysis (DEA) is an established technique that hospitals and anesthesia groups can use to understand their potential to grow different specialties of inpatient surgery. Often related decisions such as recruitment of new physicians are made promptly. A practical challenge in using DEA in practice for this application has been the time to obtain access to and preprocess discharge data from states. ::: ::: ::: METHODS ::: A case study is presented to show how results of DEA are linked to financial analysis for purposes of deciding which surgical specialties should be provided more resources and institutional support, including the allocation of additional operating room (OR) block time on a tactical (1 yr) time course. State discharge abstract databases were used to study how to perform and present the DEA using data from websites of the United States' (US) Healthcare Cost and Utilization Project (HCUPNet) and Census Bureau (American FactFinder). ::: ::: ::: RESULTS ::: DEA was performed without state discharge data by using census data with federal surgical rates adjusted for age and gender. Validity was assessed based on multiple criteria, including: satisfaction of statistical assumptions, face validity of results for hospitals, differentiation between efficient and inefficient hospitals on other measures of how much surgery is done, and correlation of estimates of each hospital's potential to grow the workload of each of eight specialties with estimates obtained using unrelated statistical methods. ::: ::: ::: CONCLUSIONS ::: A hospital can choose specialties to target for expanded OR capacity based on its financial data, its caseloads for specific specialties, the caseloads from hospitals previously examined, and surgical rates from federal census data.
---
paper_title: A stochastic model for operating room planning with elective and emergency demand for surgery
paper_content:
This paper describes a stochastic model for Operating Room (OR) planning with two types of demand for surgery: elective surgery and emergency surgery. Elective cases can be planned ahead and have a patient-related cost depending on the surgery date. Emergency cases arrive randomly and have to be performed on the day of arrival. The planning problem consists in assigning elective cases to different periods over a planning horizon in order to minimize the sum of elective patient related costs and overtime costs of operating rooms. A new stochastic mathematical programming model is first proposed. We then propose a Monte Carlo optimization method combining Monte Carlo simulation and Mixed Integer Programming. The solution of this method is proved to converge to a real optimum as the computation budget increases. Numerical results show that important gains can be realized by using a stochastic OR planning model.
---
paper_title: Operating Theatre Optimization : A Resource-Constrained Based Solving Approach
paper_content:
The operating theatre is considered the bottleneck of the hospital and one of its most resource-consuming units, so its management is of primary interest: efficient planning makes the best possible use of operating theatre availability and reduces human and financial costs. This paper deals with operating theatre planning optimization. We present a mathematical model that combines surgery planning and scheduling over a short time horizon and takes into account the availability of renewable and non-renewable resources. The model draws its inspiration from project management, and in particular from the resource-constrained project scheduling problem (RCPSP). We also introduce a genetic algorithm approach to solve the problem heuristically, basing it on the multi-mode variant of the RCPSP to define crossover, mutation and selection operators that allow the global search to work effectively.
---
paper_title: A Sequential Bounding Approach for Optimal Appointment Scheduling
paper_content:
This study is concerned with the determination of optimal appointment times for a sequence of jobs with uncertain durations. Such appointment systems are used in many customer service applications to increase the utilization of resources, match workload to available capacity, and smooth the flow of customers. We show that the problem can be expressed as a two-stage stochastic linear program that includes the expected cost of customer waiting, server idling, and a cost of tardiness with respect to a chosen session length. We exploit the problem structure to derive upper bounds that are independent of job duration distribution type. These upper bounds are used in a variation of the standard L-shaped algorithm to obtain optimal solutions via successively finer partitions of the support of job durations. We present new analytical insights into the problem as well as a series of numerical experiments that illustrate properties of the optimal solution with respect to distribution type, cost structure, and number of jobs.
---
paper_title: OPERATING THEATRE PLANNING
paper_content:
N patients must be planned in an operating theatre over a medium-term horizon (one or two weeks). The operating theatre is composed of several operating rooms and one recovery room with several beds. Each patient requires a particular surgical procedure, which defines the human (surgeon) and material (equipment) resources to use and the intervention duration. Additional characteristics must be taken into account: hospitalisation date, intervention deadline, etc. Patient satisfaction and resource efficiency are sought; these two criteria are modelled, respectively, by hospitalisation costs (the patient's length of stay) and overtime costs (resource overloads). We propose to solve this problem in two steps. First, an operating theatre plan is defined, which assigns patients to operating rooms over the horizon. Second, each loaded operating room is scheduled individually in order to synchronise the various human and material resources used. This paper focuses on the first step, the operating theatre planning, which is a generalized assignment problem and therefore NP-hard. To solve it heuristically, an assignment model with additional resource-capacity and time-window constraints is proposed. By integrating most of the constraints into the cost objective function, an extension of the Hungarian method is developed to compute the operating theatre plan. This primal-dual heuristic has been successfully tested on a wide range of problem instances.
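Illustrative sketch (a simplified cousin of the paper's extended Hungarian method): patients are assigned to (room, day) slots by scipy.optimize.linear_sum_assignment on a cost matrix that folds the deadline constraint into a large penalty. The instance data are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical instance: 4 patients to place into (room, day) slots over a
# one-week horizon; cost = days hospitalised before surgery, with a large
# penalty for slots before admission or after the patient's deadline.
slots = [(room, day) for room in ("OR1", "OR2") for day in range(1, 6)]
admission = [1, 2, 1, 4]          # day the patient is hospitalised
deadline  = [3, 5, 2, 5]          # latest acceptable surgery day
BIG = 1000.0

cost = np.zeros((len(admission), len(slots)))
for p in range(len(admission)):
    for s, (_, day) in enumerate(slots):
        if day < admission[p] or day > deadline[p]:
            cost[p, s] = BIG
        else:
            cost[p, s] = day - admission[p]     # waiting days in hospital

rows, cols = linear_sum_assignment(cost)        # minimum-cost assignment
for p, s in zip(rows, cols):
    print(f"patient {p} -> {slots[s]} (cost {cost[p, s]:.0f})")
```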
---
paper_title: Robust surgery loading
paper_content:
We consider the robust surgery loading problem for a hospital's operating theatre department, which concerns assigning surgeries and sufficient planned slack to operating room days. The objective is to maximize capacity utilization and minimize the risk of overtime, and thus of cancelled patients. This research was performed in collaboration with the Erasmus MC, a large academic hospital in the Netherlands, which also provided historical data for the experiments. We propose various constructive heuristics and local search methods that use statistical information on surgery durations to exploit the portfolio effect, and thereby to minimize the required slack. We demonstrate that our approach frees a substantial amount of operating room capacity, which may be used to perform additional surgeries. Furthermore, we show that combining advanced optimization techniques with extensive historical records of surgery durations can significantly improve operating room department utilization.
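Illustrative sketch of the portfolio effect these heuristics exploit (assuming independent surgery durations): pooling slack over an OR day via the square root of summed variances frees capacity relative to reserving slack case by case. The case data and safety factor are hypothetical.

```python
from math import sqrt

# Hypothetical OR day: mean duration and standard deviation (minutes) of the
# cases assigned to one room, and a one-sided safety factor z (~84th percentile).
cases = [(120, 30), (90, 25), (150, 40), (60, 15)]
z = 1.0

per_case_slack = sum(z * sd for _, sd in cases)             # slack reserved case by case
pooled_slack   = z * sqrt(sum(sd * sd for _, sd in cases))  # portfolio effect: pool the risk

planned = sum(mu for mu, _ in cases)
print(f"planned work      : {planned} min")
print(f"case-by-case slack: {per_case_slack:.0f} min")
print(f"pooled slack      : {pooled_slack:.0f} min "
      f"(frees {per_case_slack - pooled_slack:.0f} min of capacity)")
```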
---
paper_title: Determining the Number of Beds in the Postanesthesia Care Unit: A Computer Simulation Flow Approach
paper_content:
UNLABELLED ::: Designing a new operating room (OR) suite is a difficult process owing to the number of caregivers involved and because decision-making managers try to minimize the direct and indirect costs of operating the OR suite. In this study, we devised a computer simulation flow model to calculate, first, the minimum number of beds required in the postanesthesia care unit (PACU). In a second step, we evaluated the relationship between the global performance of the OR suite in terms of OR scheduling and number of staffed PACU beds and porters. We designed a mathematical model of OR scheduling. We then developed a computer simulation flow model of the OR suite. Both models were connected; the first one performed the input flows, and the second simulated the OR suite running. The simulations performed examined the number of beds in the PACU in an ideal situation or in the case of reduction in the number of porters. We then analyzed the variation of number of beds occupied per hour in the PACU when the time spent by patients in the PACU or the number of porters varied. The results highlighted the strong impact of the number of porters on the OR suite performance and particularly on PACU performances. ::: ::: ::: IMPLICATIONS ::: Designing new operating room (OR) facilities implies many decisions on the number of ORs, postanesthesia care unit (PACU) beds, and on the staff of nurses and porters. To make these decisions, managers can use rules of thumb or recommendations. Our study highlights the interest of using flow simulation to validate these choices. In this case study we determine the number of PACU beds and porter staff and assess the impact of decreasing the number of porters on PACU bed requirements.
---
paper_title: A branch-and-price approach for integrating nurse and surgery scheduling
paper_content:
A common problem at hospitals is the extreme variation in daily (even hourly) workload pressure for nurses. The operating room is considered to be the main engine and hence the main generator of variance in the hospital. The purpose of this paper is threefold. First of all, we present a concrete model that integrates both the nurse and the operating room scheduling process. Second, we show how the column generation technique approach, one of the most employed exact methods for solving nurse scheduling problems, can easily cope with this model extension. Third, by means of a large number of computational experiments we provide an idea of the cost saving opportunities and required solution times.
---
paper_title: Optimisation Modelling of hospital operating room planning : analyzing strategies and problem settings
paper_content:
There is a growing proportion of elderly which increases the demand for health care. As a consequence health care costs are rising and the need for hospital resource planning seems urgent. Different aspects (often conflicting) such as patient demand, clinical need and political ambitions must be considered. In this paper we propose a model for analyzing a hospital surgical suite with focus on operating room planning. An optimization model is developed for patient operation scheduling and for key resource allocation. Medical examinations and treatments of patients are performed using a number of resources, similar to products being refined in a number of processes in a logistics chain. Optimal resource allocation, given different objectives according to patient perspective, staff perspective, costs etc. under different system settings (e.g. principles for operating room allocation and amount of stand-by personnel), is studied. Preliminary results are presented based on case studies from two Swedish hospitals.
---
paper_title: The value of the dedicated orthopaedic trauma operating room.
paper_content:
Background: Trauma centers and orthopaedic surgeons have traditionally been faced with limited operating room (OR) availability for fracture surgery. Orthopaedic trauma cases are often waitlisted and done late at night. We investigated the feasibility of having an unbooked orthopaedic trauma OR to re
---
paper_title: Scheduling patients in an ambulatory surgical center
paper_content:
This paper presents a deterministic approach to schedule patients in an ambulatory surgical center (ASC) such that the number of postanesthesia care unit nurses at the center is minimized. We formulate the patient scheduling problem as new variants of the no-wait, two-stage process shop scheduling problem and present computational complexity results for the new scheduling models. Also, we develop a tabu search-based heuristic algorithm to solve the patient scheduling problem. Our algorithm is shown to be very effective in finding near optimal schedules on a set of real data from a university hospital's ASC. © 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2003
---
paper_title: Sampling Error Can Significantly Affect Measured Hospital Financial Performance of Surgeons and Resulting Operating Room Time Allocations
paper_content:
Hospitals with limited operating room (OR) hours, those with intensive care unit or ward beds that are always full, or those that have no incremental revenue for many patients need to choose which surgeons get the resources. Although such decisions are based on internal financial reports, whether the reports are statistically valid is not known. Random error may affect surgeons' measured financial performance and, thus, what cases the anesthesiologists get to do and which patients get to receive care. We tested whether one fiscal year of surgeon-specific financial data is sufficient for accurate financial accounting. We obtained accounting data for all outpatient or same-day-admit surgery cases during one fiscal year at an academic medical center. Linear programming was used to find the mix of surgeons' OR time allocations that would maximize the contribution margin or minimize variable costs. Confidence intervals were calculated on these end points by using Fieller's theorem and Monte-Carlo simulation. The 95% confidence intervals for increases in contribution margins or reductions in variable costs were 4.3% to 10.8% and 6.0% to 8.9%, respectively. As many as 22% of surgeons would have had OR time reduced because of sampling error. We recommend that physicians ask for and OR managers get confidence intervals of end points of financial analyses when making decisions based on them.
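Illustrative sketch (a bootstrap stand-in for the Fieller/Monte Carlo intervals used in the paper): a confidence interval for one surgeon's contribution margin per OR hour computed from resampled case-level data. The case data are randomly generated, not real financial figures.

```python
import random

# Hypothetical case-level data for one surgeon over a fiscal year:
# (contribution margin in dollars, OR hours) per case.
rng = random.Random(3)
cases = [(rng.gauss(1500, 900), max(0.5, rng.gauss(2.5, 1.0))) for _ in range(120)]

def margin_per_hour(sample):
    return sum(m for m, _ in sample) / sum(h for _, h in sample)

point = margin_per_hour(cases)
# bootstrap: resample cases with replacement and recompute the ratio
boot = [margin_per_hour(rng.choices(cases, k=len(cases))) for _ in range(5000)]
boot.sort()
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"contribution margin per OR hour: {point:.0f}  (95% CI {lo:.0f} to {hi:.0f})")
```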
---
paper_title: Management of surgical waiting lists through a Possibilistic Linear Multiobjective Programming problem
paper_content:
This study applies a management science technique to improve the efficiency of hospital administration. We aim to model the performance of the surgical services at a public hospital so that the decision-maker can plan surgical scheduling over one year in order to reduce waiting lists. Real decision problems usually involve several objectives whose parameters are often given by the decision-maker in an imprecise way; such problems can be handled with multiple criteria models based on possibility theory. Here we apply a Possibilistic Linear Multiobjective Programming method, developed by the authors, to solve a hospital management problem using a Fuzzy Compromise Programming approach.
---
paper_title: Consequences of running more operating theatres than anaesthetists to staff them: a stochastic simulation study
paper_content:
Background Numerous hospitals implement a ratio of one anaesthetist supervising non-medically-qualified anaesthetist practitioners in two or more operating theatres. However, the risk of requiring anaesthetists simultaneously in several theatres due to concurrent critical periods has not been evaluated. It was examined in this simulation study. Methods Using a Monte Carlo stochastic simulation model, we calculated the risk of a staffing failure (no anaesthetist available when one is needed) in different scenarios of scheduling, staffing ratio, and number of theatres. Results With a staffing ratio of 0.5 for a two-theatre suite, the simulated risk of at least one failure occurring during a working day varied from 87% if only short operations were performed to 40% if only long operations were performed (65% for a 50:50 mixture of short and long operations). The staffing-failure risk was particularly high during the first hour of the workday, and decreased as the number of theatres increased. The decrease was greater for simulations with only long operations than for those with only short operations (the risk for 10 theatres declined to 12% and 74%, respectively). With a staffing ratio of 0.33, the staffing-failure risk was markedly higher than for a 0.5 ratio. The availability of a floater for the whole suite to intervene during failures strongly lowered this risk. Conclusions Scheduling one anaesthetist for two or three theatres exposes patients and staff to a high risk of failure. Adequate planning of long and short operations and the presence of a floating anaesthetist are efficient means to optimize site activity and assure safety.
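A minimal Monte Carlo sketch of this kind of staffing-failure estimate is shown below; the 15-minute critical windows, the short/long case lengths and the random stagger of first-case starts are illustrative assumptions, not the study's actual model.

```python
import random

def simulate_day(n_theatres=2, n_anaesthetists=1, day_min=480, critical_min=15,
                 case_lengths=(60, 180), stagger_max=30):
    """One simulated day: return True if, at any time, more theatres are in a
    critical period (induction/emergence) than there are anaesthetists."""
    intervals = []
    for _ in range(n_theatres):
        t = random.uniform(0, stagger_max)     # assumed stagger of first starts
        while t < day_min:
            dur = random.choice(case_lengths)
            end = min(t + dur, day_min)
            intervals.append((t, min(t + critical_min, end)))        # induction
            intervals.append((max(end - critical_min, t), end))      # emergence
            t = end
    events = sorted([(s, 1) for s, _ in intervals] + [(e, -1) for _, e in intervals])
    load = 0
    for _, d in events:
        load += d
        if load > n_anaesthetists:
            return True
    return False

def failure_risk(n_runs=5000, **kw):
    random.seed(0)
    return sum(simulate_day(**kw) for _ in range(n_runs)) / n_runs

if __name__ == "__main__":
    print("2 theatres, 1 anaesthetist:", failure_risk())
    print("with a floater available :", failure_risk(n_anaesthetists=2))
```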
---
paper_title: Tactical Decision Making for Selective Expansion of Operating Room Resources Incorporating Financial Criteria and Uncertainty in Subspecialties' Future Workloads
paper_content:
We considered the allocation of operating room (OR) time at facilities where the strategic decision had been made to increase the number of ORs. Allocation occurs in two stages: a long-term tactical stage followed by a short-term operational stage. Tactical decisions, approximately 1 yr in advance, determine what specialized equipment and expertise will be needed. Tactical decisions are based on estimates of future OR workload for each subspecialty or surgeon. We show that groups of surgeons can be excluded from consideration at this tactical stage (e.g., surgeons who need intensive care beds or those with below average contribution margins per OR hour). Lower and upper limits are estimated for the future demand of OR time by the remaining surgeons. Thus, initial OR allocations can be accomplished with only partial information on future OR workload. Once the new ORs open, operational decision-making based on OR efficiency is used to fill the OR time and adjust staffing. Surgeons who were not allocated additional time at the tactical stage are provided increased OR time through operational adjustments based on their actual workload. In a case study from a tertiary hospital, future demand estimates were needed for only 15% of surgeons, illustrating the practicality of these methods for use in tactical OR allocation decisions. (Anesth Analg 2005;100:1425–32)
---
paper_title: Managing uncertainty in orthopaedic trauma theatres
paper_content:
The management of acute healthcare involves coping with a large uncertainty in demand. This uncertainty is a prevailing feature of orthopaedic care and many scarce resources are devoted to providing the contingent theatre time for orthopaedic trauma patients. However, given the variability and uncertainty in the demand, much of the theatre time is not used. Simulation was used to explore the balance between maximising the utilisation of the theatre sessions, avoiding too many overruns and ensuring a reasonable quality of care in a typical hospital in the United Kingdom. The simulation was developed to examine a policy of including planned, elective patients within the trauma session: it appears that if patients are willing to accept a possibility of their treatment being cancelled, substantially greater throughputs can be achieved. A number of approximations were examined as an alternative to the full simulation: the simpler model offers reasonable accuracy and easier implementation.
---
paper_title: ENDOSCOPIES SCHEDULING PROBLEM: A CASE STUDY
paper_content:
The efficient management of an operating theatre involves planning and scheduling problems, and the same holds for an endoscopy centre. This research aims at building a feasible and efficient weekly operating program for an endoscopy unit composed of two specialized operating rooms, with the objective of both maximizing the utilization of the operating rooms and minimizing their overtime cost. At the planning stage, a tactical planning model for one week is built and solved by a column-generation-based heuristic (CGBH) procedure. The solution of the planning stage assigns each operating room to a set of surgical cases on each day. Afterwards, a daily scheduling problem is built at the scheduling stage in order to finally schedule the surgical cases assigned at the planning stage. This daily scheduling model is first simplified by group technology into an "open shop" model and then solved by the Gonzalez-Sahni algorithm. As a result, a final operating program is obtained.
---
paper_title: Operating Theatre Scheduling Using Lagrangian Relaxation
paper_content:
This paper addresses the surgery operation scheduling problem. Two types of resources are considered: operating rooms and recovery beds. Each patient first visits an operating room for a surgical operation and is transferred to a recovery room immediately afterwards. The operating room needs to be cleaned after the surgical operation before starting another one. The problem consists in assigning patients to operating rooms and recovery beds in order to minimize the sum, over all patients, of a function of their completion times. The problem is therefore NP-hard. A Lagrangian relaxation approach is proposed in this paper to determine a near-optimal schedule and a tight lower bound. Numerical results are presented to show the efficiency of the method.
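The flavour of a Lagrangian relaxation with subgradient updates can be conveyed with the toy assignment problem below, in which room-capacity constraints are relaxed; the costs, durations and step size are made up, and the paper's actual decomposition (which also covers recovery beds) is richer.

```python
# Toy Lagrangian relaxation: assign each case to one operating room, relax the
# room-capacity constraints with multipliers, update multipliers by subgradient.
DUR  = [120, 90, 60, 150, 45, 75]                          # case durations (min)
COST = [[1, 3], [2, 2], [3, 1], [1, 4], [2, 2], [4, 1]]    # cost of case j in room i
CAP  = [240, 240]                                          # regular capacity (min)

def solve_relaxed(lam):
    """For fixed multipliers, the relaxed problem decomposes per case."""
    assign, lb = [], -sum(l * c for l, c in zip(lam, CAP))
    for j, d in enumerate(DUR):
        i = min(range(len(CAP)), key=lambda r: COST[j][r] + lam[r] * d)
        assign.append(i)
        lb += COST[j][i] + lam[i] * d
    return assign, lb

def lagrangian(iterations=100, step=0.01):
    lam, best_lb = [0.0] * len(CAP), float("-inf")
    for _ in range(iterations):
        assign, lb = solve_relaxed(lam)
        best_lb = max(best_lb, lb)                 # lower bound on the optimum
        # Subgradient = relative capacity violation of each room.
        load = [sum(DUR[j] for j, i in enumerate(assign) if i == r)
                for r in range(len(CAP))]
        lam = [max(0.0, l + step * (load[r] - CAP[r]) / CAP[r])
               for r, l in enumerate(lam)]
    return assign, best_lb, lam

if __name__ == "__main__":
    assign, lb, lam = lagrangian()
    print("assignment:", assign, "lower bound:", round(lb, 2), "multipliers:", lam)
```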
---
paper_title: A Norm Utilisation for Scarce Hospital Resources: Evidence from Operating Rooms in a Dutch University Hospital
paper_content:
BACKGROUND: Utilisation of operating rooms is high on the agenda of hospital managers and researchers. Many efforts in the area of maximising utilisation have focussed on finding the holy grail of 100% utilisation. The utilisation that can be realised, however, depends on the patient mix and the willingness to accept the risk of working in overtime. MATERIALS AND METHODS: This is a mathematical modelling study that investigates the association between utilisation, the patient mix that is served, and the risk of working in overtime. Prospectively, consecutively, and routinely collected data of an operating room department in a Dutch university hospital are used. Basic statistical principles are used to establish the relation between realistic utilisation rates, patient mixes, and the accepted risk of overtime. RESULTS: Accepting a low risk of overtime combined with a complex patient mix results in a low utilisation rate. If the accepted risk of overtime is higher and the patient mix is less complex, the utilisation rate that can be reached is closer to 100%. CONCLUSION: Because of the inherent variability of healthcare processes, the holy grail of 100% utilisation is unlikely to be found. The method proposed in this paper calculates a realistic benchmark utilisation that incorporates the patient mix characteristics and the willingness to accept a risk of overtime.
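One way to make the relation between patient mix, accepted overtime risk and achievable utilisation concrete is the normal-approximation calculation sketched below; the case-mix figures are invented and the paper's exact method may differ.

```python
from statistics import NormalDist
from math import sqrt

def benchmark_utilisation(case_sds, capacity_min, accepted_overtime_risk):
    """Fraction of capacity that can be filled with planned (mean) workload
    while keeping P(total duration > capacity) at or below the accepted risk,
    assuming the total duration is approximately normal."""
    total_sd = sqrt(sum(s * s for s in case_sds))          # portfolio effect
    z = NormalDist().inv_cdf(1 - accepted_overtime_risk)
    return max(0.0, capacity_min - z * total_sd) / capacity_min

if __name__ == "__main__":
    # Complex mix (few, highly variable cases) and a low accepted risk ...
    print(round(benchmark_utilisation([60, 50, 40], 480, 0.05), 2))   # ~0.70
    # ... versus a simple mix (many predictable cases) and a higher accepted risk.
    print(round(benchmark_utilisation([10] * 7, 480, 0.30), 2))       # ~0.97
```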
---
paper_title: The operating theatre planning by the follow-up of the risk of no realization
paper_content:
In the French context of healthcare expense control, the operating theatre, which accounts for about 9% of a hospital's annual budget, is a major priority. The operating theatre plan is the fruit of negotiation between the different actors of the theatre, such as surgeons, anaesthetists, nurses and managerial staff, whose constraints and interests often differ. In this context, a win-win situation for all parties involved requires a good and constructive negotiation. In this paper, we propose an operating theatre planning procedure that aims at mastering the risk of no realization (RNR) of the tentative plan while stabilizing the operating rooms' utilization time. During the execution of this plan, the RNR is monitored and, depending on its evolution, another plan may be sought in order to reduce the risk level. Finally, we present simulation results that support the interest of implementing these procedures.
---
paper_title: Using Computer Simulation in Operating Room Management: Impacts on Process Engineering and Performance
paper_content:
Operating rooms are regarded as the most costly hospital facilities. Due to rising costs and decreasing reimbursements, it is necessary to optimize the efficiency of the operating room suite. In this context several strategies have been proposed that optimize patient throughput by redesigning perioperative processes. The successful deployment of effective practices for continuous process improvement in operating rooms can require that operating room management sets targets and monitors improvements throughout all phases of process engineering. Simulation can be used to study the effects of process improvements through novel facilities, technologies and/or strategies. In this paper, we propose a conceptual framework for using computer simulation in different stages of the business process management (BPM) lifecycle for operating room management. Additionally, we conduct simulation studies in different stages of the BPM lifecycle. The results of our studies provide evidence that simulation can provide effective decision support to drive performance in operating rooms in several phases of the BPM lifecycle.
---
paper_title: A Hierarchical Multiple Criteria Mathematical Programming Approach for Scheduling General Surgery Operations in Large Hospitals
paper_content:
Limited staff and equipment within surgical services require efficient use of these resources among multiple surgeon groups. In this study, a set of hierarchical multiple criteria mathematical programming models are developed to generate weekly operating room schedules. The goals considered in these models are maximum utilization of operating room capacity, balanced distribution of operations among surgeon groups in terms of operation days, lengths of operation times, and minimization of patient waiting times. Because of computational difficulty of this scheduling problem, the overall problem is broken down into manageable hierarchical stages: (1) selection of patients, (2) assignment of operations to surgeon groups, and (3) determination of operation dates and operating rooms. Developed models are tested on the data collected in College of Medicine Research Hospital at Cukurova University as well as on simulated data sets, using MPL optimization package.
---
paper_title: Ambulatory Care and Orthopaedic Capacity Planning
paper_content:
Ambulatory Care facilities (often referred to as diagnosis and treatment centres) separate the routine elective activity from the uncertainty of complex inpatient and emergency treatment. Only routine patients with predictable outcomes should be treated in Ambulatory Care. Hence the centre should be able to plan its activities effectively. This paper considers the consequences for the remaining elective inpatient bed and theatre requirements. Computer models are used to simulate many years of activity in an orthopaedic department at a typical District General hospital.
---
paper_title: A three-phase approach for operating theatre schedules
paper_content:
In this paper we develop a three-phase, hierarchical approach for the weekly scheduling of operating rooms. This approach has been implemented in one of the surgical departments of a public hospital located in Genova (Genoa), Italy. Our aim is to suggest an integrated way of facing surgical activity planning in order to improve overall operating theatre efficiency in terms of overtime and throughput as well as waiting list reduction, while improving department organization. In the first phase we solve a bin packing-like problem in order to select the number of sessions to be weekly scheduled for each ward; the proposed and original selection criterion is based upon an updated priority score taking into proper account both the waiting list of each ward and the reduction of residual ward demand. Then we use a blocked booking method for determining optimal time tables, denoted Master Surgical Schedule (MSS), by defining the assignment between wards and surgery rooms. Lastly, once the MSS has been determined we use the simulation software environment Witness 2004 in order to analyze different sequencings of surgical activities that arise when priority is given on the basis of a) the longest waiting time (LWT), b) the longest processing time (LPT) and c) the shortest processing time (SPT). The resulting simulation models also allow us to outline possible organizational improvements in surgical activity. The results of an extensive computational experimentation pertaining to the studied surgical department are here given and analyzed.
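A toy version of the first, bin-packing-like phase (deciding which ward sessions go on which operating-room day) might look as follows; the wards, priority scores and capacities are invented, and the paper's selection criterion, which updates priorities from waiting lists and residual demand, is richer.

```python
def pack_sessions(session_requests, day_capacity_hours, n_days):
    """Assign ward session requests (ward, hours, priority) to OR days.
    Requests are taken in decreasing priority and placed on the first day with
    enough remaining capacity: a simple first-fit bin-packing heuristic
    (ordered by priority rather than by size)."""
    days = [{"left": day_capacity_hours, "sessions": []} for _ in range(n_days)]
    unplaced = []
    for ward, hours, prio in sorted(session_requests, key=lambda r: -r[2]):
        for day in days:
            if day["left"] >= hours:
                day["sessions"].append((ward, hours))
                day["left"] -= hours
                break
        else:
            unplaced.append((ward, hours))
    return days, unplaced

if __name__ == "__main__":
    requests = [("orthopaedics", 4, 9), ("general", 8, 7), ("urology", 4, 5),
                ("ENT", 4, 8), ("vascular", 8, 6), ("gynaecology", 4, 4)]
    days, unplaced = pack_sessions(requests, day_capacity_hours=8, n_days=3)
    for d, day in enumerate(days):
        print(f"day {d + 1}: {day['sessions']} (idle {day['left']} h)")
    print("unplaced:", unplaced)
```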
---
paper_title: Comparison of two methods of operating theatre planning: Application in Belgian Hospital
paper_content:
The operating theatre is the centre of the hospital management's efforts. It constitutes the most expensive sector, accounting for more than 10% of the hospital's operating budget. To reduce costs while maintaining a good quality of care, one solution is to improve the existing planning and scheduling methods by improving the coordination between services and surgical specialties or by finding better estimates of surgical case durations. The other solution is to construct an effective surgical case plan and schedule. Operating theatre planning and scheduling are two important steps, which aim to produce a surgical case programme with the objective of obtaining a realizable and efficient surgical case schedule. This paper focuses on the first step, the operating theatre planning problem. Two planning methods are introduced and compared. Real data from a Belgian university hospital, "Tivoli", are used for the experiments.
---
paper_title: A Set Packing Approach for Scheduling Elective Surgical Procedures
paper_content:
The efficient scheduling of surgical procedures to operating rooms in a hospital is a complex problem due to limited resources (e.g. medical staff, equipment) and conflicting objectives (e.g. reduce running costs and increase staff and patient satisfaction). A novel approach for scheduling elective surgeries over a short-term horizon is proposed which takes explicit consideration of these aspects. The problem is formulated as a set packing problem and solved optimally through column generation and constraint branching. Good results were obtained for instances from the literature.
---
paper_title: Simulation of a Multiple Operating Room Surgical Suite
paper_content:
Outpatient surgery scheduling involves the coordination of several activities in an uncertain environment. Due to the very customized nature of surgical procedures there is significant uncertainty in the duration of activities related to the intake process, surgical procedure, and recovery process. Furthermore, there are multiple criteria which must be traded off when considering how to schedule surgical procedures including patient waiting, operating room (OR) team waiting, OR idling, and overtime for the surgical suite. Uncertainty combined with the need to tradeoff many criteria makes scheduling a complex task for OR managers. In this article we present a simulation model for a multiple OR surgical suite, describe some of the scheduling challenges, and illustrate how the model can be used as a decisions aid to improve strategic and operational decision making relating to the delivery of surgical services. All results presented are based on real data collected at Mayo Clinic in Rochester, MN.
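The trade-offs mentioned above (patient waiting, OR idling, overtime) can be illustrated with a small Monte Carlo sketch of a two-OR day; the schedule and the truncated-normal durations are hypothetical, and the clinic's actual simulation model is far more detailed.

```python
import random

def simulate_suite(schedule, shift_min=480, n_reps=1000, seed=1):
    """Monte Carlo simulation of a multi-OR day.  `schedule` maps an OR to a
    list of (scheduled_start, mean_dur, sd_dur).  Durations are drawn from a
    truncated normal distribution; waiting, idling and overtime are the
    competing criteria an OR manager has to trade off."""
    random.seed(seed)
    stats = {"patient_wait": 0.0, "or_idle": 0.0, "overtime": 0.0}
    for _ in range(n_reps):
        for cases in schedule.values():
            t = 0.0
            for sched_start, mean, sd in cases:
                dur = max(10.0, random.gauss(mean, sd))
                start = max(t, sched_start)
                stats["patient_wait"] += max(0.0, start - sched_start)
                stats["or_idle"] += max(0.0, sched_start - t)
                t = start + dur
            stats["overtime"] += max(0.0, t - shift_min)
    return {k: v / n_reps for k, v in stats.items()}

if __name__ == "__main__":
    schedule = {"OR1": [(0, 120, 30), (130, 90, 25), (230, 150, 40)],
                "OR2": [(0, 60, 15), (70, 60, 15), (140, 200, 60)]}
    for k, v in simulate_suite(schedule).items():
        print(f"{k:>12}: {v:6.1f} min/day")
```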
---
paper_title: Operating room managers' use of integer programming for assigning block time to surgical groups: a case study.
paper_content:
A common problem at hospitals with fixed amounts of available operating room (OR) time (i.e., "block time") is determining an equitable method of distributing time to surgical groups. Typically, facilities determine a surgical group's share of available block time using formulas based on OR utilization, contribution margin, or some other performance metric. Once each group's share of time has been calculated, a method must be found for fitting each group's allocated OR time into the surgical master schedule. This involves assigning specific ORs on specific days of the week to specific surgical groups, usually with the objective of ensuring that the time assigned to each group is close to its target share. Unfortunately, the target allocated to a group is rarely expressible as a multiple of whole blocks. In this paper, we describe a hospital's experience using the mathematical technique of integer programming to solve the problem of developing a consistent schedule that minimizes the shortfall between each group's target and actual assignment of OR time. Schedule accuracy, the sum over all surgical groups of shortfalls divided by the total time available on the schedule, was 99.7% (SD 0.1%, n = 11). Simulations show the algorithm's accuracy can exceed 97% with ≥ 4 ORs. The method is a systematic and successful way to assign OR blocks to surgeons. IMPLICATIONS: At hospitals with a fixed budget of operating room (OR) time, integer programming can be used by OR managers to decide which surgical group is to be allocated which OR on which day(s) of the week. In this case study, we describe the successful application of integer programming to this task, and discuss the applicability of the results to other hospitals.
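A compact sketch of such a block-assignment integer program, written with the open-source PuLP modeller (which must be installed), is given below; the groups, targets, block length and room counts are hypothetical, and the hospital's actual model differs in detail.

```python
import pulp

GROUPS  = {"ortho": 28, "general": 20, "uro": 10, "ent": 6}   # target h/week
DAYS    = ["Mon", "Tue", "Wed", "Thu", "Fri"]
ROOMS   = 2          # ORs available to these groups each day
BLOCK_H = 8          # one block = one OR for one day (8 h)

prob = pulp.LpProblem("block_assignment", pulp.LpMinimize)
blocks = {(g, d): pulp.LpVariable(f"b_{g}_{d}", lowBound=0, upBound=ROOMS,
                                  cat="Integer") for g in GROUPS for d in DAYS}
under = {g: pulp.LpVariable(f"under_{g}", lowBound=0) for g in GROUPS}
over  = {g: pulp.LpVariable(f"over_{g}",  lowBound=0) for g in GROUPS}

# Objective: total deviation (shortfall + excess) from each group's target.
prob += pulp.lpSum(under[g] + over[g] for g in GROUPS)

for g, target in GROUPS.items():
    assigned = BLOCK_H * pulp.lpSum(blocks[g, d] for d in DAYS)
    prob += assigned + under[g] - over[g] == target
for d in DAYS:
    prob += pulp.lpSum(blocks[g, d] for g in GROUPS) <= ROOMS

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for g in GROUPS:
    plan = {d: int(blocks[g, d].value()) for d in DAYS if blocks[g, d].value()}
    print(g, plan, "deviation:", under[g].value() + over[g].value())
```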
---
paper_title: How to release allocated operating room time to increase efficiency: predicting which surgical service will have the most underutilized operating room time.
paper_content:
At many facilities, surgeons and patients choose the day of surgery, cases are not turned away, and staffing is adjusted to maximize operating room (OR) efficiency. If a surgical service has already filled its allocated OR time, but has an additional case to schedule, then OR efficiency is increased by scheduling the new case into the OR time of a different service with much underutilized OR time. The latter service is said to be "releasing" its allocated OR time. In this study, we analyzed 3 years of scheduling data from a medium-sized and a large surgical suite. Theoretically, the service that should have its OR time released is the service expected to have the most underutilized OR time on the day of surgery (i.e., any future cases that may be scheduled into that service's time also need to be factored in). However, we show that OR efficiency is only slightly less when the service whose time is released is the service that has the most allocated but unscheduled (i.e., unfilled) OR time at the moment the new case is scheduled. In contrast, compromising by releasing the OR time of a service other than the one with the most allocated but unscheduled OR time markedly reduces OR efficiency. OR managers can use these results when releasing allocated OR time.
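The operational rule studied here reduces, in code, to something as simple as the sketch below; the service names and hours are illustrative.

```python
def service_to_release(allocated_hours, scheduled_hours):
    """Pick the service whose allocated-but-unscheduled OR time is largest at
    the moment a new case needs a home (the practical proxy studied above)."""
    unfilled = {s: allocated_hours[s] - scheduled_hours.get(s, 0.0)
                for s in allocated_hours}
    return max(unfilled, key=unfilled.get), unfilled

if __name__ == "__main__":
    allocated = {"ortho": 16.0, "general": 24.0, "uro": 8.0}
    scheduled = {"ortho": 15.0, "general": 19.5, "uro": 7.0}
    service, unfilled = service_to_release(allocated, scheduled)
    print("release time of:", service, unfilled)
```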
---
paper_title: A stochastic model for operating room planning with elective and emergency demand for surgery
paper_content:
This paper describes a stochastic model for Operating Room (OR) planning with two types of demand for surgery: elective surgery and emergency surgery. Elective cases can be planned ahead and have a patient-related cost depending on the surgery date. Emergency cases arrive randomly and have to be performed on the day of arrival. The planning problem consists in assigning elective cases to different periods over a planning horizon in order to minimize the sum of elective patient related costs and overtime costs of operating rooms. A new stochastic mathematical programming model is first proposed. We then propose a Monte Carlo optimization method combining Monte Carlo simulation and Mixed Integer Programming. The solution of this method is proved to converge to a real optimum as the computation budget increases. Numerical results show that important gains can be realized by using a stochastic OR planning model.
---
paper_title: When to Release Allocated Operating Room Time to Increase Operating Room Efficiency
paper_content:
We studied when allocated, but unfilled, operating room (OR) time of surgical services should be released to maximize OR efficiency. OR time was allocated for two surgical suites based on OR efficiency. Then, we analyzed real OR schedules. We added new hypothetical cases lasting 1, 2, or 3 h into OR time of the service that had the largest difference between allocated and scheduled cases (i.e., the most unfilled OR time) 5 days before the day of surgery. The process was repeated using the updated OR schedule available the day before surgery. The pair-wise difference in resulting overutilized OR time was calculated for n = 754 days of data from each of the two surgical suites. We found that postponing the decision of which service gets the new case until early the day before surgery reduces overutilized OR time by <15 min per OR per day as compared to releasing the allocated OR time 5 days before surgery. These results show that when OR time is released has a negligible effect on OR efficiency. This is especially true for ambulatory surgery centers with brief cases or large surgical suites with specialty-specific OR teams. What matters much more is having the correct OR allocations and, if OR time needs to be released, making that decision based on the scheduled workload. IMPLICATIONS: Provided operating room (OR) time is allocated and cases are scheduled based on maximizing OR efficiency, then whether OR time is released five days or one day before the day of surgery has a negligible effect on OR efficiency.
---
paper_title: Hospital Operating Room Capacity Expansion
paper_content:
A large midwestern hospital is expecting an increase in surgical caseload. New operating room (OR) capacity can be had by building new ORs or extending the working hours in the current ORs. The choice among these options is complicated by the fact that patients, surgeons and surgical staff, and hospital administrators are all important stakeholders in the health service operation, and each has different priorities. This paper investigates the trade-offs among three performance criteria (wait to get on schedule, scheduled procedure start-time reliability, and hospital profits), which are of particular importance to the different constituencies. The objective is to determine how the hospital can best expand its capacity, acknowledging the key role that each constituency plays in that objective. En route, the paper presents supporting analysis for process improvements and suggestions for optimal participation-inducing staff contracts for extending OR hours of operation.
---
paper_title: Operating Theatre Optimization : A Resource-Constrained Based Solving Approach
paper_content:
The operating theatre is considered the bottleneck of the hospital and one of its most resource-consuming units. Its management is therefore of primary interest: efficient planning makes the best possible use of operating theatre availability and reduces human and financial operating costs. This paper deals with operating theatre planning optimization. We present a mathematical model combining surgery planning and scheduling over a short time horizon, taking into account the availability of renewable and non-renewable resources. This model is inspired by project management and especially by the resource-constrained project scheduling problem (RCPSP). We also introduce a genetic algorithm approach to solve the problem heuristically. We base our approach on the multi-mode variant of the RCPSP to define the related crossover, mutation and selection operators, allowing the global search to work effectively.
---
paper_title: Mount Sinai Hospital Uses Integer Programming to Allocate Operating Room Time
paper_content:
---
paper_title: A Sequential Bounding Approach for Optimal Appointment Scheduling
paper_content:
This study is concerned with the determination of optimal appointment times for a sequence of jobs with uncertain durations. Such appointment systems are used in many customer service applications to increase the utilization of resources, match workload to available capacity, and smooth the flow of customers. We show that the problem can be expressed as a two-stage stochastic linear program that includes the expected cost of customer waiting, server idling, and a cost of tardiness with respect to a chosen session length. We exploit the problem structure to derive upper bounds that are independent of job duration distribution type. These upper bounds are used in a variation of the standard L-shaped algorithm to obtain optimal solutions via successively finer partitions of the support of job durations. We present new analytical insights into the problem as well as a series of numerical experiments that illustrate properties of the optimal solution with respect to distribution type, cost structure, and number of jobs.
---
paper_title: Optimization of surgery sequencing and scheduling decisions under uncertainty
paper_content:
Operating rooms (ORs) are simultaneously the largest cost center and greatest source of revenues for most hospitals. Due to significant uncertainty in surgery durations, scheduling of ORs can be very challenging. Longer than average surgery durations result in late starts not only for the next surgery in the schedule, but potentially for the rest of the surgeries in the day as well. Late starts also result in direct costs associated with overtime staffing when the last surgery of the day finishes later than the scheduled shift end time. In this article we describe a stochastic optimization model and some practical heuristics for computing OR schedules that hedge against the uncertainty in surgery durations. We focus on the simultaneous effects of sequencing surgeries and scheduling start times. We show that a simple sequencing rule based on surgery duration variance can be used to generate substantial reductions in total surgeon and OR team waiting, OR idling, and overtime costs. We illustrate this with results of a case study that uses real data to compare actual schedules at a particular hospital to those recommended by our model.
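A small Monte Carlo sketch of the effect of variance-based sequencing on one OR is shown below; the (mean, standard deviation) pairs and the back-to-back booking rule are stand-ins for the paper's data and heuristics.

```python
import random

def evaluate(sequence, cases, shift_min=480, n_reps=2000, seed=7):
    """Estimate OR-team waiting, OR idling and overtime for one OR when cases
    are booked back-to-back at their mean durations in the given order."""
    random.seed(seed)
    wait = idle = over = 0.0
    for _ in range(n_reps):
        sched = 0.0                        # booked start of the current case
        t = 0.0                            # time the OR actually becomes free
        for j in sequence:
            mean, sd = cases[j]
            start = max(t, sched)
            wait += start - sched          # team/patient wait if the OR runs late
            idle += start - t              # OR idles if it runs early
            t = start + max(5.0, random.gauss(mean, sd))
            sched += mean
        over += max(0.0, t - shift_min)
    return {"team_wait": round(wait / n_reps, 1),
            "or_idle": round(idle / n_reps, 1),
            "overtime": round(over / n_reps, 1)}

if __name__ == "__main__":
    cases = [(90, 40), (60, 5), (120, 60), (75, 10), (45, 20)]   # (mean, sd) min
    by_variance = sorted(range(len(cases)), key=lambda j: cases[j][1])
    print("booked order    :", evaluate(range(len(cases)), cases))
    print("low variance 1st:", evaluate(by_variance, cases))
```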
---
paper_title: The use of Simulation to Determine Maximum Capacity in the Surgical Suite Operating Room
paper_content:
Utilizing ambulatory care units at optimal levels has become increasingly important to hospitals from both service and business perspectives. With the inherent variation in hospitals due to unique procedures and patients, performing capacity analysis through analytical models is difficult without making simplifying assumptions. Many hospitals calculate efficiency by comparing total operating room minutes available to total operating minutes used. This metric fails to account for the required non-value-added tasks between surgeries and for the delicate balance between having patients ready for surgery when an operating room becomes available, which can result in increased waiting times, and maximizing patient satisfaction. We present a general methodology for determining the maximum capacity within a surgical suite through the use of a discrete-event simulation model. This research is based on an actual hospital concerned with doctor/resource acquisition decisions, patient satisfaction improvements, and increased productivity.
---
paper_title: OPERATING THEATRE PLANNING
paper_content:
N patients must be planned in an operating theatre over a medium-term horizon (one or two weeks). This operating theatre is composed of several operating rooms and of one recovery room where several beds are available. Each patient needs a particular surgical procedure, which defines the human (surgeon) and material (equipment) resources to use and the intervention duration. Additional characteristics must be taken into account: hospitalisation date, intervention deadline, etc. Patient satisfaction and resource efficiency are sought. These two criteria are modelled, respectively, by hospitalisation costs, i.e. the patient stay duration, and by overtime costs, i.e. the resource overloads. We propose to solve this problem in two steps. First, an operating theatre plan is defined, which consists in assigning patients to operating rooms over the horizon. Second, each loaded operating room is scheduled individually in order to synchronise the various human and material resources used. This paper focuses on the first step, i.e. the operating theatre planning, which defines a general assignment problem and is thus NP-hard. In order to solve this problem heuristically, an assignment model with additional resource-capacity and time-window constraints is proposed. Integrating most of the constraints in the objective cost function, an extension of the Hungarian method has been developed to calculate the operating theatre plan. This primal-dual heuristic has been successfully tested on a wide range of problem data.
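A toy version of the planning step, using SciPy's Hungarian-algorithm implementation, is sketched below; real OR days hold several cases and the paper extends the method with capacity and time-window constraints, so the one-case-per-slot model and the invented waiting costs are purely illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

patients = [  # (ready_day, deadline_day) for each patient
    (0, 3), (0, 5), (1, 4), (2, 6), (0, 2), (3, 6),
]
slots = [(room, day) for room in ("OR1", "OR2") for day in range(7)]

BIG = 1_000.0
cost = np.zeros((len(patients), len(slots)))
for p, (ready, deadline) in enumerate(patients):
    for s, (_room, day) in enumerate(slots):
        if ready <= day <= deadline:
            cost[p, s] = day - ready          # hospitalisation-like waiting cost
        else:
            cost[p, s] = BIG                  # outside the patient's time window

rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm
for p, s in zip(rows, cols):
    print(f"patient {p} -> {slots[s]} (cost {cost[p, s]:.0f})")
```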
---
paper_title: Allocation of Surgeries to Operating Rooms by Goal Programing
paper_content:
High usage rate in a surgical suite is extremely important in meeting the increasing demand for health care services and reducing costs to improve quality of care. In this paper a goal programming model which can produce schedules that best serve the needs of the hospital, i.e., by minimizing idle time and overtime, and increasing satisfaction of surgeons, patients, and staff, is described. The approach involves sorting the requests for a particular day on the basis of block restrictions, room utilization, surgeon preferences and intensive care capabilities. The model is tested using the data obtained during field studies at Dokuz Eylul University Hospital. The model is also tested for alternative achievement functions to examine the model's ability to satisfy abstract goals.
---
paper_title: Building cyclic master surgery schedules with leveled resulting bed occupancy
paper_content:
This paper proposes and evaluates a number of models for building surgery schedules with leveled resulting bed occupancy. The developed models involve two types of constraints. Demand constraints ensure that each surgeon (or surgical group) obtains a specific number of operating room blocks. Capacity constraints limit the available blocks on each day. Furthermore, the number of operated patients per block and the length of stay of each operated patient are dependent on the type of surgery. Both are considered stochastic, following a multinomial distribution. We develop a number of mixed integer programming based heuristics and a metaheuristic to minimize the expected total bed shortage and present computational results.
---
paper_title: Scheduling Surgical Cases into Overflow Block Time— Computer Simulation of the Effects of Scheduling Strategies on Operating Room Labor Costs
paper_content:
“Overflow” block time is operating room (OR) time for a surgical group’s cases that cannot be completed in the regular block time allocated to each surgeon in the surgical group. Having such overflow block time increases OR utilization. The optimal way to schedule patients into a surgical group’s ov
---
paper_title: Robust surgery loading
paper_content:
We consider the robust surgery loading problem for a hospital's operating theatre department, which concerns assigning surgeries and sufficient planned slack to operating room days. The objective is to maximize capacity utilization and minimize the risk of overtime, and thus of cancelled patients. This research was performed in collaboration with the Erasmus MC, a large academic hospital in the Netherlands, which also provided historical data for the experiments. We propose various constructive heuristics and local search methods that use statistical information on surgery durations to exploit the portfolio effect, and thereby to minimize the required slack. We demonstrate that our approach frees a lot of operating room capacity, which may be used to perform additional surgeries. Furthermore, we show that combining advanced optimization techniques with extensive historical statistical records on surgery durations can significantly improve operating room department utilization.
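The portfolio effect exploited here can be illustrated in a few lines: the standard deviation of a sum of independent case durations grows only with the square root of the number of cases, so joint slack is much smaller than the sum of per-case slacks. The per-case figures and the one-standard-deviation slack rule below are illustrative assumptions.

```python
from math import sqrt

def planned_slack(case_sds, z=1.0):
    """Slack to reserve for a set of cases loaded on one OR-day: z standard
    deviations of the summed durations, assuming independent cases."""
    return z * sqrt(sum(s * s for s in case_sds))

if __name__ == "__main__":
    sds = [30, 25, 20, 15, 15]               # per-case duration sd, minutes
    print("per-case slack summed  :", sum(sds), "min")
    print("joint (portfolio) slack:", round(planned_slack(sds), 1), "min")
```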
---
paper_title: Determining the Number of Beds in the Postanesthesia Care Unit: A Computer Simulation Flow Approach
paper_content:
Designing a new operating room (OR) suite is a difficult process owing to the number of caregivers involved and because decision-making managers try to minimize the direct and indirect costs of operating the OR suite. In this study, we devised a computer simulation flow model to calculate, first, the minimum number of beds required in the postanesthesia care unit (PACU). In a second step, we evaluated the relationship between the global performance of the OR suite in terms of OR scheduling and number of staffed PACU beds and porters. We designed a mathematical model of OR scheduling. We then developed a computer simulation flow model of the OR suite. Both models were connected; the first one performed the input flows, and the second simulated the OR suite running. The simulations performed examined the number of beds in the PACU in an ideal situation or in the case of reduction in the number of porters. We then analyzed the variation of number of beds occupied per hour in the PACU when the time spent by patients in the PACU or the number of porters varied. The results highlighted the strong impact of the number of porters on the OR suite performance and particularly on PACU performances. IMPLICATIONS: Designing new operating room (OR) facilities implies many decisions on the number of ORs, postanesthesia care unit (PACU) beds, and on the staff of nurses and porters. To make these decisions, managers can use rules of thumb or recommendations. Our study highlights the interest of using flow simulation to validate these choices. In this case study we determine the number of PACU beds and porter staff and assess the impact of decreasing the number of porters on PACU bed requirements.
---
paper_title: A branch-and-price approach for integrating nurse and surgery scheduling
paper_content:
A common problem at hospitals is the extreme variation in daily (even hourly) workload pressure for nurses. The operating room is considered to be the main engine and hence the main generator of variance in the hospital. The purpose of this paper is threefold. First of all, we present a concrete model that integrates both the nurse and the operating room scheduling process. Second, we show how the column generation technique approach, one of the most employed exact methods for solving nurse scheduling problems, can easily cope with this model extension. Third, by means of a large number of computational experiments we provide an idea of the cost saving opportunities and required solution times.
---
paper_title: How to Schedule Elective Surgical Cases into Specific Operating Rooms to Maximize the Efficiency of Use of Operating Room Time
paper_content:
We considered elective case scheduling at hospitals and surgical centers at which surgeons and patients choose the day of surgery, cases are not turned away, and anesthesia and nursing staffing are adjusted to maximize the efficiency of use of operating room (OR) time. We investigated scheduling a new case into an OR by using two patient-scheduling rules: Earliest Start Time or Latest Start Time. By using several scenarios, we showed that the use of Earliest Start Time is rational economically at such facilities. Specifically, it maximizes OR efficiency when a service has nearly filled its regularly scheduled hours of OR time. However, Latest Start Time will perform better at balancing workload among services’ OR time. We then used historical case duration data from two facilities in computer simulations to investigate the effect of errors in predicting case durations on the performance of these two heuristics. The achievable incremental reduction in overtime by having perfect information on case duration versus using historical case durations was only a few minutes per OR. The differences between Earliest Start Time and Latest Start Time were also only a few minutes per OR. We conclude that for facilities at which the goals are, in order of importance, safety, patient and surgeon access to OR time, and then efficiency, few restrictions need to be placed on patient scheduling to achieve an efficient use of OR time. (Anesth Analg 2002;94:933–42)
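Mechanically, the two patient-scheduling rules compared above can be sketched as follows; the loads and allocated hours are invented, and the fallback when no OR can fit the case is an assumption.

```python
def place_case(or_loads, case_hours, allocated_hours, rule="earliest"):
    """Return the OR index for a new case.  The case starts after the cases
    already scheduled in an OR, so its start time in OR k is or_loads[k].
    'earliest' -> OR where it would start soonest; 'latest' -> OR where it
    would start latest while still finishing inside the allocated hours."""
    fits = [k for k, load in enumerate(or_loads)
            if load + case_hours <= allocated_hours]
    candidates = fits if fits else range(len(or_loads))   # overtime fallback
    pick = min if rule == "earliest" else max
    return pick(candidates, key=lambda k: or_loads[k])

if __name__ == "__main__":
    loads = [6.5, 3.0, 5.0]          # hours already scheduled in OR 0..2
    print("earliest-start OR:", place_case(loads, 2.0, 8.0, "earliest"))
    print("latest-start OR  :", place_case(loads, 2.0, 8.0, "latest"))
```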
---
paper_title: Schedule the Short Procedure First to Improve OR Efficiency
paper_content:
Operating room managers are hampered in their efforts to optimize OR efficiency by surgical procedures that last a longer or shorter time than scheduled. The lack of predictability is a result of inaccuracy in scheduling and variability in the duration of procedures. Scheduling short procedures before long procedures theoretically limits this variability. Monte Carlo simulation of ORs scheduled with various combinations of short and long procedures supports this concept's validity. Results indicate that scheduling short procedures first can improve on-time performance and decrease staff member overtime expense without reducing surgical throughput. AORN J 78 (October 2003) 651–657.
---
paper_title: Encyclopedia of Operations Research and Management Science
paper_content:
The goal of the Encyclopedia of Operations Research and Management Science is to provide decision makers and problem solvers in business, industry, government and academia with a comprehensive overview of the wide range of ideas, methodologies, and synergistic forces that combine to form the pre-eminent decision-aiding fields of operations research and management science (OR/MS). The Second Edition is a further extension of this goal: through addressing and solving a wide range of problems, OR/MS methodologies continue to flourish and grow. This is a field that is used extensively throughout the applied sciences, and, because of this, the new edition has added topics in the following areas: Analytic Network Process; Call Centers; Certainty Equivalence; Comb. Optimization by Simulated CE; Computational Organization; Constraint Programming; Data Mining; Degeneracy Graphs; Economic Order Q Extensions; Educational Issues in B-Schools; Electronic Commerce; Financial Markets; Global Climate Change; Hidden Markov Models; History of Early British OR; Implementation for Public Sector; Info Tech Benefits; Interactive Multi-Objective Math. Programming; Knapsacks with Nonlinearities; Little's Law in Distribution Form; Military Ops Other than War; Multivariate Quality Control; Perturbation Analysis; Simulation Metamodeling; Simulation Optimization; Supply Chain Management; Theory of Constraints; Timetabling. The intended audience of the Encyclopedia of Operations Research and Management Science is technically diverse and wide; it includes anyone concerned with the science, techniques, and ideas of how one makes decisions. As this audience encompasses many professions, educational backgrounds and skills, we were attentive to the form, format and scope of the articles. Thus, the articles are designed to serve as initial sources of information for all such readers, with special emphasis on the needs of students. Each article provides a background or history of the topic, describes relevant applications, overviews present and future trends, and lists seminal and current references. To allow for variety in exposition, the authors were instructed to present their material from both research and applied perspectives. The Encyclopedia has been organized into specific topics that collectively encompass the foundations, applications, and emerging elements of this ever-changing field. We also wanted to establish the close associations that OR/MS has maintained with other scientific endeavors, with special emphasis on its symbiotic relationships to computer science, information processing, and mathematics. Based on our broad view of OR/MS, we commissioned 228 major expository articles and complemented them by numerous entries: descriptions, discussions, definitions, and abbreviations. The connections between topics are highlighted by an entry's final `See' statement, as appropriate. Of significant importance is that each contributed topic has been authored by a leading authoritative researcher on that particular topic.
---
paper_title: The impact of service-specific staffing, case scheduling, turnovers, and first-case starts on anesthesia group and operating room productivity: a tutorial using data from an Australian hospital.
paper_content:
BACKGROUND: In this tutorial, we consider the impact of operating room (OR) management on anesthesia group and OR labor productivity and costs. Most of the tutorial focuses on the steps required for each facility to refine its OR allocations using its own data collected during patient care. METHODS: Data from a hospital in Australia are used throughout to illustrate the methods. OR allocation is a two-stage process. During the initial tactical stage of allocating OR time, OR capacity ("block time") is adjusted. For operational decision-making on a shorter-term basis, the existing workload can be considered fixed. Staffing is matched to that workload based on maximizing the efficiency of use of OR time. RESULTS: Scheduling cases and making decisions on the day of surgery to increase OR efficiency are worthwhile interventions to increase anesthesia group productivity. However, by far, the most important step is the appropriate refinement of OR allocations (i.e., planning service-specific staffing) 2-3 mo before the day of surgery. CONCLUSIONS: Reducing surgical and/or turnover times and delays in first-case-of-the-day starts generally provides small reductions in OR labor costs. Results vary widely because they are highly sensitive both to the OR allocations (i.e., staffing) and to the appropriateness of those OR allocations.
---
paper_title: Strategies to reduce delays in admission into a postanesthesia care unit from operating rooms
paper_content:
The authors performed a systematic review of strategies to reduce delays in admission into PACUs from ORs. The purpose of this article was to evaluate for managers how to choose interventions based on effectiveness and practicality. The authors discuss optimization methods that can be used to sequence consecutive cases in the same OR, by the same surgeon, on the same day, based on the objective of reducing delays in PACU admission due to the unavailability of unfilled PACU beds. Although effective, such methods can be impractical because of large organizational change required and limited equipment or personnel availability. When all physical beds are not full, PACU nurse staffing can be adjusted. Statistical methods can be used to ensure that nursing schedules closely match the times that minimize delays in PACU admission. These methods are effective and practical. Explicit criteria can be applied to assist in deciding when to assign other qualified nurses to the PACU, when to ask PACU nurses to work late, and/or when to make a decision on the day before surgery to add more PACU nurses for the next day (if available). The latter would be based on statistical forecasts of the timing of patients' admissions into the PACU. Whether or not all physical beds are full, the risk of delays in PACU admission is relatively insensitive to economically feasible reductions in PACU length of stay. Such interventions should be considered only if statistical analysis, performed by using computer simulation, has established that reducing PACU length of stay will reduce delays in admission at a manager's facility.
---
paper_title: Solving surgical cases assignment problem by a branch-and-price approach ☆
paper_content:
In this paper, we study a surgical cases assignment problem (SCAP) of assigning a set of surgical cases to several multifunctional operating rooms with the objective of minimizing total operating cost. We first formulate this problem as an integer program and then reformulate it, using Dantzig-Wolfe decomposition, as a set partitioning problem. Based on this set partitioning formulation, a branch-and-price exact solution algorithm, combining a branch-and-bound procedure with a column generation (CG) method, is designed for the proposed problem, where each node is the linear relaxation of a set partitioning problem. This linear relaxation is solved by a CG approach in which each column represents a plan for one operating room and is generated by solving a sub-problem (SP) of planning a single operating room. The computational results indicate that the decomposition approach is promising and capable of solving large problems.
---
paper_title: An operating theatre planning and scheduling problem in the case of a "block scheduling" strategy
paper_content:
The operating theatre is the most important and expensive sector of the hospital, and managing its surgical process is regarded as the core problem. In this paper, we focus on one of the surgical process management problems: the block scheduling problem. An efficient weekly operating program is built for an operating theatre in two phases: first, the operating theatre weekly planning problem is solved with a heuristic procedure based on column generation; then the operating theatre daily scheduling problem, based on the results from the first phase, is solved with a hybrid genetic algorithm. Finally, the proposed approach is tested and validated with randomly generated data, and numerical results are provided.
---
paper_title: Booked inpatient admissions and hospital capacity: mathematical modelling study.
paper_content:
Objectives: To investigate the variability of patients' length of stay in intensive care after cardiac surgery, and to investigate potential interactions between such variability, booked admissions, and capacity requirements. Design: Mathematical modelling study using routinely collected data. Setting: A cardiac surgery department. Source of data: Hospital records of 7014 people entering intensive care after cardiac surgery. Main outcome measures: Length of stay in intensive care; capacity requirements of an intensive care unit for a hypothetical booked admission system. Results: Although the vast majority of patients (89.5%) had a length of stay in intensive care of ≤48 hours, there was considerable overall variability and the distribution of stays has a lengthy tail. A mathematical model of the operation of a hypothetical booking system indicates that such variability has a considerable impact on intensive care capacity requirements, indicating that a high degree of reserve capacity is required to avoid high rates of operation cancellation because of unavailability of suitable postoperative care. Conclusion: Despite the considerable enthusiasm for booked admissions systems, queuing theory suggests that caution is required when considering such systems for inpatient admissions. Such systems may well result in frequent operational difficulties if there is a high degree of variability in length of stay and where reserve capacity is limited; both are common in the NHS. What is already known on this topic: Booking systems for hospital admissions have considerable potential benefits for patients in terms of peace of mind and planning their lives, but these benefits are dependent on having a low cancellation rate. What this study adds: Variability in length of stay can have a major impact on hospital operation and capacity requirements, and operational research techniques can be used to explore this impact. If variability in length of stay is substantial, as is common, then booked admission systems may require considerable reserve capacity if cancellation rates are to be kept low.
---
paper_title: Proactive and reactive strategies for resource-constrained project scheduling with uncertain resource availabilities
paper_content:
Research concerning project planning under uncertainty has primarily focused on the stochastic resource-constrained project scheduling problem (stochastic RCPSP), an extension of the basic RCPSP, in which the assumption of deterministic activity durations is dropped. In this paper, we introduce a new variant of the RCPSP, for which the uncertainty is modeled by means of resource availabilities that are subject to unforeseen breakdowns. Our objective is to build a robust schedule that meets the project deadline and minimizes the schedule instability cost, defined as the expected weighted sum of the absolute deviations between the planned and the actually realized activity starting times during project execution. We describe how stochastic resource breakdowns can be modeled, which reaction is recommended, when a resource infeasibility occurs due to a breakdown, and how one can protect the initial schedule from the adverse effects of potential breakdowns. An extensive computational experiment is used to show the relative performance of the proposed proactive and reactive strategies. It is shown that protection of the baseline schedule, coupled with intelligent schedule recovery, yields significant performance gains over the use of deterministic scheduling approaches in a stochastic setting.
---
paper_title: Impact of surgical sequencing on post anesthesia care unit staffing
paper_content:
This paper analyzes the impact of sequencing rules on the phase I post anesthesia care unit (PACU) staffing and over-utilized operating room (OR) time resulting from delays in PACU admission. The sequencing rules are applied to each surgeon's list of cases independently. Discrete event simulation shows the importance of having a sufficient number of PACU nurses. Sequencing rules have a large impact on the maximum number of patients receiving care in the PACU (i.e., peak of activity). Seven sequencing rules are tested, over a wide range of scenarios. The largest effect of sequencing was on the percentage of days with at least one delay in PACU admission. The best rules are those that smooth the flow of patients entering in the PACU (HIHD (Half Increase in OR time and Half Decrease in OR time) and MIX (MIX OR time)). We advise against using the LCF (Longest Cases First) and equivalent sequencing methods. They generate more over-utilized OR time, require more PACU nurses during the workday, and result in more days with at least one delay in PACU admission.
---
paper_title: The operating theatre planning by the follow-up of the risk of no realization
paper_content:
Abstract In the French context of healthcare expenses control, the operating theatre, that represents 9% of hospital's annual budget, presents a stake of major priority. The realization of the operating theatre planning is the fruit of negotiation of different actors of the block such as surgeons, anesthetists, nurses, managerial staff, etc. whose constraints and interests are often different. In this context, a win–win situation for this partnership (all parties involved) requires a good and constructive negotiation. In this paper, we propose an operating theatre planning procedure that aims at mastering the risk of no realization (RNR) of the tentative plan while stabilizing the operating rooms’ utilization time. During the application of this planning, we achieve the follow-up of the RNR and according to its evolution the research of another planning will be proposed in order to reduce the risk level. Finally, we present results obtained by simulations that support the interest for implementing these procedures.
---
paper_title: Using Computer Simulation in Operating Room Management: Impacts on Process Engineering and Performance
paper_content:
Operating rooms are regarded as the most costly hospital facilities. Due to rising costs and decreasing reimbursements, it is necessary to optimize the efficiency of the operating room suite. In this context several strategies have been proposed that optimize patient throughput by redesigning perioperative processes. The successful deployment of effective practices for continuous process improvements in operating rooms can require that operating room management sets targets and monitors improvements throughout all phases of process engineering. Simulation can be used to study the effects of process improvements through novel facilities, technologies and/or strategies. In this paper, we propose a conceptual framework to use computer simulations in different stages of business process management (BPM) lifecycle for operating room management. Additionally, we conduct simulation studies in different stages of the BPM lifecycle. The results of our studies provide evidence that simulation can provide effective decision support to drive performance in operating rooms in several phases of the BPM lifecycle
---
paper_title: An observational study of surgeons' sequencing of cases and its impact on postanesthesia care unit and holding area staffing requirements at hospitals.
paper_content:
BACKGROUND: Staffing requirements in the operating room (OR) holding area and in the Phase I postanesthesia care unit (PACU) are influenced by the sequencing of each surgeon's list of cases in the same OR on the same day. METHODS: Case sequencing was studied using 201 consecutive workdays of data from a 10 OR hospital surgical suite. RESULTS: The surgeons differed significantly among themselves in their sequencing of cases and were also internally nonsystematic, based on case durations. The functional effect of this uncoordinated sequencing was for the surgical suite to behave overall as if there was random sequencing. The resulting PACU staffing requirements were the same as those of the best sequencing method identified in prior simulation studies. Although sequencing "Longest Cases First" performs poorly when all ORs have close to 8 h of cases, at the studied hospital it performed no worse than the other methods. The reason was that some ORs were much busier than others on the same day. The standard deviation among ORs in the hours of cases, including turnovers, was 3.2 h; large relative to the mean workload. Data from 33 other hospitals confirmed that this situation is commonplace. Additional studies showed that case sequencing also had minimal effects on the peak number of patients in the holding area. CONCLUSIONS: The uncoordinated decision-making of multiple surgeons working in different ORs can result in a sufficiently uniform rate of admission of patients into the PACU and holding that the independent sequencing of each surgeon's list of cases would not reduce the incidence of delays in admission or staffing requirements.
---
paper_title: Scheduling hospital services: the efficacy of elective-surgery quotas
paper_content:
We take advantage of the advance-scheduling property for elective surgeries by exploring whether the use of a daily quota system with a 1-week or 2-week scheduling window would improve the performance of a typical intensive care unit (ICU) that serves patients coming from a number of different sources within the hospital. The exploration is carried out via a simulation model whose parameters are established from actual ICU data that were gathered over a 6-month period. It is shown that formally linking one controllable upstream process, namely the scheduling of elective surgeries through a quota system, to the downstream ICU admission process, can have beneficial effects throughout the hospital.
---
paper_title: A three-phase approach for operating theatre schedules
paper_content:
In this paper we develop a three-phase, hierarchical approach for the weekly scheduling of operating rooms. This approach has been implemented in one of the surgical departments of a public hospital located in Genova (Genoa), Italy. Our aim is to suggest an integrated way of facing surgical activity planning in order to improve overall operating theatre efficiency in terms of overtime and throughput as well as waiting list reduction, while improving department organization. In the first phase we solve a bin packing-like problem in order to select the number of sessions to be weekly scheduled for each ward; the proposed and original selection criterion is based upon an updated priority score taking into proper account both the waiting list of each ward and the reduction of residual ward demand. Then we use a blocked booking method for determining optimal time tables, denoted Master Surgical Schedule (MSS), by defining the assignment between wards and surgery rooms. Lastly, once the MSS has been determined we use the simulation software environment Witness 2004 in order to analyze different sequencings of surgical activities that arise when priority is given on the basis of a) the longest waiting time (LWT), b) the longest processing time (LPT) and c) the shortest processing time (SPT). The resulting simulation models also allow us to outline possible organizational improvements in surgical activity. The results of an extensive computational experimentation pertaining to the studied surgical department are here given and analyzed.
---
paper_title: Simulation of a Multiple Operating Room Surgical Suite
paper_content:
Outpatient surgery scheduling involves the coordination of several activities in an uncertain environment. Due to the very customized nature of surgical procedures there is significant uncertainty in the duration of activities related to the intake process, surgical procedure, and recovery process. Furthermore, there are multiple criteria which must be traded off when considering how to schedule surgical procedures including patient waiting, operating room (OR) team waiting, OR idling, and overtime for the surgical suite. Uncertainty combined with the need to tradeoff many criteria makes scheduling a complex task for OR managers. In this article we present a simulation model for a multiple OR surgical suite, describe some of the scheduling challenges, and illustrate how the model can be used as a decisions aid to improve strategic and operational decision making relating to the delivery of surgical services. All results presented are based on real data collected at Mayo Clinic in Rochester, MN.
---
paper_title: Managing risk and expected financial return from selective expansion of operating room capacity: mean-variance analysis of a hospital's portfolio of surgeons.
paper_content:
Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk.
---
paper_title: Determining Optimum Operating Room Utilization
paper_content:
UNLABELLED ::: Economic considerations suggest that it is desirable to keep operating rooms fully used when staffed, but the optimum utilization of an operating room (OR) is not known. We created a simulation of an OR to define optimum utilization. We set operational goals of having cases start within 15 min of the scheduled time and of having the cases end no more than 15 min past the scheduled end of the day. Within these goals, a utilization of 85% to 90% is the highest that can be achieved without delay or running late. Increasing the variability of case duration decreases the utilization that can be achieved within these targets. ::: ::: ::: IMPLICATIONS ::: Using a simulated operating room (OR), the authors demonstrate that OR utilization higher than 85% to 90% leads to patient delays and staff overtime. Increased efficiency of an OR comes at a cost of patient convenience.
---
paper_title: A stochastic model for operating room planning with elective and emergency demand for surgery
paper_content:
This paper describes a stochastic model for Operating Room (OR) planning with two types of demand for surgery: elective surgery and emergency surgery. Elective cases can be planned ahead and have a patient-related cost depending on the surgery date. Emergency cases arrive randomly and have to be performed on the day of arrival. The planning problem consists in assigning elective cases to different periods over a planning horizon in order to minimize the sum of elective patient related costs and overtime costs of operating rooms. A new stochastic mathematical programming model is first proposed. We then propose a Monte Carlo optimization method combining Monte Carlo simulation and Mixed Integer Programming. The solution of this method is proved to converge to a real optimum as the computation budget increases. Numerical results show that important gains can be realized by using a stochastic OR planning model.
---
paper_title: Hospital Operating Room Capacity Expansion
paper_content:
A large midwestern hospital is expecting an increase in surgical caseload. New operating room (OR) capacity can be had by building new ORs or extending the working hours in the current ORs. The choice among these options is complicated by the fact that patients, surgeons and surgical staff, and hospital administrators are all important stakeholders in the health service operation, and each has different priorities. This paper investigates the trade-offs among three performance criteria (wait to get on schedule, scheduled procedure start-time reliability, and hospital profits), which are of particular importance to the different constituencies. The objective is to determine how the hospital can best expand its capacity, acknowledging the key role that each constituency plays in that objective. En route, the paper presents supporting analysis for process improvements and suggestions for optimal participation-inducing staff contracts for extending OR hours of operation.
---
paper_title: A Sequential Bounding Approach for Optimal Appointment Scheduling
paper_content:
This study is concerned with the determination of optimal appointment times for a sequence of jobs with uncertain durations. Such appointment systems are used in many customer service applications to increase the utilization of resources, match workload to available capacity, and smooth the flow of customers. We show that the problem can be expressed as a two-stage stochastic linear program that includes the expected cost of customer waiting, server idling, and a cost of tardiness with respect to a chosen session length. We exploit the problem structure to derive upper bounds that are independent of job duration distribution type. These upper bounds are used in a variation of the standard L-shaped algorithm to obtain optimal solutions via successively finer partitions of the support of job durations. We present new analytical insights into the problem as well as a series of numerical experiments that illustrate properties of the optimal solution with respect to distribution type, cost structure, and number of jobs.
---
paper_title: Optimization of surgery sequencing and scheduling decisions under uncertainty
paper_content:
Operating rooms (ORs) are simultaneously the largest cost center and greatest source of revenues for most hospitals. Due to significant uncertainty in surgery durations, scheduling of ORs can be very challenging. Longer than average surgery durations result in late starts not only for the next surgery in the schedule, but potentially for the rest of the surgeries in the day as well. Late starts also result in direct costs associated with overtime staffing when the last surgery of the day finishes later than the scheduled shift end time. In this article we describe a stochastic optimization model and some practical heuristics for computing OR schedules that hedge against the uncertainty in surgery durations. We focus on the simultaneous effects of sequencing surgeries and scheduling start times. We show that a simple sequencing rule based on surgery duration variance can be used to generate substantial reductions in total surgeon and OR team waiting, OR idling, and overtime costs. We illustrate this with results of a case study that uses real data to compare actual schedules at a particular hospital to those recommended by our model.
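As a hedged illustration of the variance-based sequencing idea (hypothetical case list and simple cost accounting, not the authors' stochastic optimization model), the sketch below compares two orderings of the same surgeries when planned start times are set from mean durations:

```python
# Compare sequencing rules by simulating team waiting, OR idling and overtime.
# Case durations are drawn from normal distributions (truncated at 1 minute);
# all numbers are made up for illustration.
import random

CASES = [(60, 10), (90, 45), (120, 15), (45, 30), (150, 20)]  # (mean, std) min

def simulate(order, shift_end=480, reps=5000):
    planned, t = [], 0
    for mean, _ in order:              # planned starts from mean durations
        planned.append(t)
        t += mean
    wait = idle = over = 0.0
    for _ in range(reps):
        clock = 0.0
        for (mean, std), start in zip(order, planned):
            if clock < start:
                idle += start - clock
                clock = start          # OR sits idle until the planned start
            else:
                wait += clock - start  # team/patient wait for a late start
            clock += max(1.0, random.gauss(mean, std))
        over += max(0.0, clock - shift_end)
    return wait / reps, idle / reps, over / reps

random.seed(1)
for name, key in [("low-variance first", lambda c: c[1]),
                  ("high-variance first", lambda c: -c[1])]:
    w, i, o = simulate(sorted(CASES, key=key))
    print(f"{name:>20}: wait={w:5.1f}  idle={i:5.1f}  overtime={o:5.1f} min/day")
```

A simulation of this kind lets one weigh the waiting, idling and overtime components against each other before committing to a sequencing rule.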
---
paper_title: A strategy to decide whether to move the last case of the day in an operating room to another empty operating room to decrease overtime labor costs.
paper_content:
We examined how to program an operating room (OR) information system to assist the OR manager in deciding whether to move the last case of the day in one OR to another OR that is empty to decrease overtime labor costs. We first developed a statistical strategy to predict whether moving the case would decrease overtime labor costs for first shift nurses and anesthesia providers. The strategy was based on using historical case duration data stored in a surgical services information system. Second, we estimated the incremental overtime labor costs achieved if our strategy was used for moving cases versus movement of cases by an OR manager who knew in advance exactly how long each case would last. We found that if our strategy was used to decide whether to move cases, then depending on parameter values, only 2.0 to 4.3 more min of overtime would be required per case than if the OR manager had perfect retrospective knowledge of case durations. The use of other information technologies to assist in the decision of whether to move a case, such as real-time patient tracking information systems, closed-circuit cameras, or graphical airport-style displays can, on average, reduce overtime by no more than only 2 to 4 min per case that can be moved.
---
paper_title: The use of Simulation to Determine Maximum Capacity in the Surgical Suite Operating Room
paper_content:
Utilizing ambulatory care units at optimal levels has become increasingly important to hospitals from both service and business perspectives. With the inherent variation in hospitals due to unique procedures and patients, performing capacity analysis through analytical models is difficult without making simplifying assumptions. Many hospitals calculate efficiency by comparing total operating room minutes available to total operating minutes used. This metric both fails to account for the required non-value added tasks between surgeries and the delicate balance necessary between having patients ready for surgery when an operating room becomes available, which can result in increased waiting times, and maximizing patient satisfaction. We present a general methodology for determining the maximum capacity within a surgical suite through the use of a discrete-event simulation model. This research is based on an actual hospital concerned with doctor/resource acquisition decisions, patient satisfaction improvements, and increased productivity.
---
paper_title: Building cyclic master surgery schedules with leveled resulting bed occupancy
paper_content:
This paper proposes and evaluates a number of models for building surgery schedules with leveled resulting bed occupancy. The developed models involve two types of constraints. Demand constraints ensure that each surgeon (or surgical group) obtains a specific number of operating room blocks. Capacity constraints limit the available blocks on each day. Furthermore, the number of operated patients per block and the length of stay of each operated patient are dependent on the type of surgery. Both are considered stochastic, following a multinomial distribution. We develop a number of mixed integer programming based heuristics and a metaheuristic to minimize the expected total bed shortage and present computational results.
---
paper_title: Closing Emergency Operating Rooms Improves Efficiency
paper_content:
Long waiting times for emergency operations increase a patient's risk of postoperative complications and morbidity. Reserving Operating Room (OR) capacity is a common technique to maximize the responsiveness of an OR in case of arrival of an emergency patient. This study determines the best way to reserve OR time for emergency surgery. In this study two approaches of reserving capacity were compared: (1) concentrating all reserved OR capacity in dedicated emergency ORs, and (2) evenly reserving capacity in all elective ORs. By using a discrete event simulation model the real situation was modelled. Main outcome measures were: (1) waiting time, (2) staff overtime, and (3) OR utilisation were evaluated for the two approaches. Results indicated that the policy of reserving capacity for emergency surgery in all elective ORs led to an improvement in waiting times for emergency surgery from 74 (±4.4) minutes to 8 (±0.5) min. Working in overtime was reduced by 20%, and overall OR utilisation can increase by around 3%. Emergency patients are operated upon more efficiently on elective Operating Rooms instead of a dedicated Emergency OR. The results of this study led to closing of the Emergency OR in the Erasmus MC (Rotterdam, The Netherlands).
---
paper_title: Scheduling Surgical Cases into Overflow Block Time— Computer Simulation of the Effects of Scheduling Strategies on Operating Room Labor Costs
paper_content:
“Overflow” block time is operating room (OR) time for a surgical group’s cases that cannot be completed in the regular block time allocated to each surgeon in the surgical group. Having such overflow block time increases OR utilization. The optimal way to schedule patients into a surgical group’s ov
---
paper_title: Robust surgery loading
paper_content:
We consider the robust surgery loading problem for a hospital's operating theatre department, which concerns assigning surgeries and sufficient planned slack to operating room days. The objective is to maximize capacity utilization and minimize the risk of overtime, and thus cancelled patients. This research was performed in collaboration with the Erasmus MC, a large academic hospital in the Netherlands, which has also provided historical data for the experiments. We propose various constructive heuristics and local search methods that use statistical information on surgery durations to exploit the portfolio effect, and thereby to minimize the required slack. We demonstrate that our approach frees a lot of operating room capacity, which may be used to perform additional surgeries. Furthermore, we show that combining advanced optimization techniques with extensive historical statistical records on surgery durations can significantly improve the operating room department utilization.
---
paper_title: Operating Room Utilization Alone Is Not an Accurate Metric for the Allocation of Operating Room Block Time to Individual Surgeons with Low Caseloads
paper_content:
Introduction Many surgical suites allocate operating room (OR) block time to individual surgeons. If block time is allocated to services/groups and yet the same surgeon invariably operates on the same weekday, for all practical purposes block time is being allocated to individual surgeons. Organizational conflict occurs when a surgeon with a relatively low OR utilization has his or her allocated block time reduced. The authors studied potential limitations affecting whether a facility can accurately estimate the average block time utilizations of individual surgeons performing low volumes of cases. Methods Discrete-event computer simulation. Results Neither 3 months nor 1 yr of historical data were enough to be able to identify surgeons who had persistently low average OR utilizations. For example, with 3 months of data, the widths of the 95% CIs for average OR utilization exceeded 10% for surgeons who had average raw utilizations of 83% or less. If during a 3-month period a surgeon's measured adjusted utilization is 65%, there is a 95% chance that the surgeon's average adjusted utilization is as low as 38% or as high as 83%. If two surgeons have measured adjusted utilizations of 65% and 80%, respectively, there is a 16% chance that they have the same average adjusted utilization. Average OR utilization can be estimated more precisely for surgeons performing more cases each week. Conclusions Average OR utilization probably cannot be estimated precisely for low-volume surgeons based on 3 months or 1 yr of historical OR utilization data. The authors recommend that at surgical suites trying to allocate OR time to individual low-volume surgeons, OR allocations be based on criteria other than only OR utilization (e.g., based on OR efficiency).
---
paper_title: How to Schedule Elective Surgical Cases into Specific Operating Rooms to Maximize the Efficiency of Use of Operating Room Time
paper_content:
We considered elective case scheduling at hospitals and surgical centers at which surgeons and patients choose the day of surgery, cases are not turned away, and anesthesia and nursing staffing are adjusted to maximize the efficiency of use of operating room (OR) time. We investigated scheduling a new case into an OR by using two patient-scheduling rules: Earliest Start Time or Latest Start Time. By using several scenarios, we showed that the use of Earliest Start Time is rational economically at such facilities. Specifically, it maximizes OR efficiency when a service has nearly filled its regularly scheduled hours of OR time. However, Latest Start Time will perform better at balancing workload among services’ OR time. We then used historical case duration data from two facilities in computer simulations to investigate the effect of errors in predicting case durations on the performance of these two heuristics. The achievable incremental reduction in overtime by having perfect information on case duration versus using historical case durations was only a few minutes per OR. The differences between Earliest Start Time and Latest Start Time were also only a few minutes per OR. We conclude that for facilities at which the goals are, in order of importance, safety, patient and surgeon access to OR time, and then efficiency, few restrictions need to be placed on patient scheduling to achieve an efficient use of OR time. (Anesth Analg 2002;94:933–42)
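In their simplest form, the two patient-scheduling rules can be read as the small helpers below (a sketch with hypothetical OR data; the paper's analysis additionally models case-duration prediction error and OR efficiency costs):

```python
# Minimal reading of the two rules: each OR has an allocated (staffed) number
# of hours and a list of already booked case durations; a new case would be
# appended after the booked cases.
def earliest_start_time(ors, dur):
    """Index of the OR in which the new case would start soonest and still fit."""
    feasible = [i for i, o in enumerate(ors)
                if sum(o["booked"]) + dur <= o["allocated"]]
    return min(feasible, key=lambda i: sum(ors[i]["booked"])) if feasible else None

def latest_start_time(ors, dur):
    """Index of the OR in which the new case would start latest while fitting."""
    feasible = [i for i, o in enumerate(ors)
                if sum(o["booked"]) + dur <= o["allocated"]]
    return max(feasible, key=lambda i: sum(ors[i]["booked"])) if feasible else None

ors = [{"allocated": 8, "booked": [3.0, 2.5]},
       {"allocated": 8, "booked": [1.5]},
       {"allocated": 10, "booked": [4.0, 4.0]}]
print("Earliest Start Time picks OR", earliest_start_time(ors, 2.0))
print("Latest Start Time picks OR", latest_start_time(ors, 2.0))
```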
---
paper_title: Schedule the Short Procedure First to Improve OR Efficiency
paper_content:
ABSTRACT • OPERATING ROOM MANAGERS are hampered in their efforts to optimize OR efficiency by surgical procedures that last a longer or shorter time than scheduled. The lack of predictability is a result of inaccuracy in scheduling and variability in the duration of procedures. • SCHEDULING SHORT PROCEDURES before long procedures theoretically limits this variability. • MONTE CARLO SIMULATION of ORs scheduled with various combinations of short and long procedures supports this concept's validity. • RESULTS INDICATE that scheduling short procedures first can improve on-time performance and decrease staff member overtime expense without reducing surgical throughput. AORNJ 78 (October 2003) 651–657.
---
paper_title: The value of the dedicated orthopaedic trauma operating room.
paper_content:
Background:Trauma centers and orthopaedic surgeons have traditionally been faced with limited operating room (OR) availability for fracture surgery. Orthopaedic trauma cases are often waitlisted and done late at night. We investigated the feasibility of having an unbooked orthopaedic trauma OR to re
---
paper_title: The impact of service-specific staffing, case scheduling, turnovers, and first-case starts on anesthesia group and operating room productivity: a tutorial using data from an Australian hospital.
paper_content:
BACKGROUND ::: In this tutorial, we consider the impact of operating room (OR) management on anesthesia group and OR labor productivity and costs. Most of the tutorial focuses on the steps required for each facility to refine its OR allocations using its own data collected during patient care. ::: ::: ::: METHODS ::: Data from a hospital in Australia are used throughout to illustrate the methods. OR allocation is a two-stage process. During the initial tactical stage of allocating OR time, OR capacity ("block time") is adjusted. For operational decision-making on a shorter-term basis, the existing workload can be considered fixed. Staffing is matched to that workload based on maximizing the efficiency of use of OR time. ::: ::: ::: RESULTS ::: Scheduling cases and making decisions on the day of surgery to increase OR efficiency are worthwhile interventions to increase anesthesia group productivity. However, by far, the most important step is the appropriate refinement of OR allocations (i.e., planning service-specific staffing) 2-3 mo before the day of surgery. ::: ::: ::: CONCLUSIONS ::: Reducing surgical and/or turnover times and delays in first-case-of-the-day starts generally provides small reductions in OR labor costs. Results vary widely because they are highly sensitive both to the OR allocations (i.e., staffing) and to the appropriateness of those OR allocations.
---
paper_title: Improving operating room efficiency by applying bin-packing and portfolio techniques to surgical case scheduling.
paper_content:
BACKGROUND ::: An operating room (OR) department has adopted an efficient business model and subsequently investigated how efficiency could be further improved. The aim of this study is to show the efficiency improvement of lowering organizational barriers and applying advanced mathematical techniques. ::: ::: ::: METHODS ::: We applied advanced mathematical algorithms in combination with scenarios that model relaxation of various organizational barriers using prospectively collected data. The setting is the main inpatient OR department of a university hospital, which sets its surgical case schedules 2 wk in advance using a block planning method. The main outcome measures are the number of freed OR blocks and OR utilization. ::: ::: ::: RESULTS ::: Lowering organizational barriers and applying mathematical algorithms can yield a 4.5% point increase in OR utilization (95% confidence interval 4.0%-5.0%). This is obtained by reducing the total required OR time. ::: ::: ::: CONCLUSIONS ::: Efficient OR departments can further improve their efficiency. The paper shows that a radical cultural change that comprises the use of mathematical algorithms and lowering organizational barriers improves OR utilization.
---
paper_title: Analyzing incentives and scheduling in a major metropolitan hospital operating room through simulation
paper_content:
This paper discusses the application of simulation to analyze the value proposition and construction of an incentive program in an operating room (OR) environment. The model was further used to evaluate operational changes including scheduling processes within the OR and utilization rates in areas such as post anesthesia care unit (PACU) and the ambulatory surgery department (ASD). Lessons learned are presented on developing multiple simulation models from one application as well as issues regarding model transition to a client.
---
paper_title: Operating room management and strategies in Switzerland: results of a survey.
paper_content:
Background and objective: Operating room management structures and interrelationships both within the operating suite and with other departments in the hospital can be very complex. Several different professional and support groups are represented that often have infrastructures of their own that may compete or conflict with the management hierarchy in the operating room. Today, there is often little actual management of the operating suite as an entity. We surveyed current operating room management in Switzerland. Methods: A questionnaire was sent to the chief anaesthesiologists of all public hospitals in Switzerland. It asked for information about the structure, organization and management of operating rooms as well as respondents' opinions and expectations about management. Derived from both the literature and the results of the survey, a 'stages of excellence' model of best practice was developed. Results: The overall response rate was 70%. Most anaesthesiologists were unsatisfied with current management policies and structures in their operating rooms. Of the hospitals questioned, 40% did not have an information system at all for the operating rooms. The remaining 60% had an information system that allowed rough scheduling in 71%, but only a few had more sophisticated systems that enabled dynamic scheduling (19%), user-defined conflict checking (5%), administration of a subsequent patient transfer station (postanaesthesia care units, intensive medical care, intensive care units) (10%) or other more advanced uses. All hospitals questioned offered some type of ambulatory surgery in a 'hospital-integrated' manner (i.e. use of the same operating room for both in- and outpatient surgery), but none had implemented a more efficient system where outpatient surgery was performed in separate facilities. Conclusions: Current management of the operating room in Switzerland is far from best-practice standards.
---
paper_title: Operating room managers' use of integer programming for assigning block time to surgical groups: a case study.
paper_content:
UNLABELLED ::: A common problem at hospitals with fixed amounts of available operating room (OR) time (i.e., "block time") is determining an equitable method of distributing time to surgical groups. Typically, facilities determine a surgical group's share of available block time using formulas based on OR utilization, contribution margin, or some other performance metric. Once each group's share of time has been calculated, a method must be found for fitting each group's allocated OR time into the surgical master schedule. This involves assigning specific ORs on specific days of the week to specific surgical groups, usually with the objective of ensuring that the time assigned to each group is close to its target share. Unfortunately, the target allocated to a group is rarely expressible as a multiple of whole blocks. In this paper, we describe a hospital's experience using the mathematical technique of integer programming to solve the problem of developing a consistent schedule that minimizes the shortfall between each group's target and actual assignment of OR time. Schedule accuracy, the sum over all surgical groups of shortfalls divided by the total time available on the schedule, was 99.7% (SD 0.1%, n = 11). Simulations show the algorithm's accuracy can exceed 97% with > or =4 ORs. The method is a systematic and successful way to assign OR blocks to surgeons. ::: ::: ::: IMPLICATIONS ::: At hospitals with a fixed budget of operating room (OR) time, integer programming can be used by OR managers to decide which surgical group is to be allocated which OR on which day(s) of the week. In this case study, we describe the successful application of integer programming to this task, and discuss the applicability of the results to other hospitals.
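A toy version of the assignment problem (made-up targets and block counts, solved by exhaustive search rather than the integer program used in the case study) illustrates the shortfall objective:

```python
# Assign 5 weekly OR-day blocks (8 h each) to surgical groups so that the total
# shortfall from each group's target hours is minimised. Real instances are
# solved with integer programming; brute force is enough at this toy size.
from itertools import product

BLOCK_HOURS = 8
TARGETS = {"ortho": 20, "general": 12, "uro": 6}   # hypothetical weekly targets
N_BLOCKS = 5

def total_shortfall(assignment):
    hours = dict.fromkeys(TARGETS, 0)
    for group in assignment:
        hours[group] += BLOCK_HOURS
    return sum(max(0, TARGETS[g] - hours[g]) for g in TARGETS)

best = min(product(TARGETS, repeat=N_BLOCKS), key=total_shortfall)
print("block owners:", best)
print("total shortfall:", total_shortfall(best), "hours")
```

In this toy instance the targets cannot all be met with five blocks, so the search simply returns the assignment with the smallest total shortfall, which is the spirit of the objective described above.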
---
paper_title: Mount Sinai Hospital Uses Integer Programming to Allocate Operating Room Time
paper_content:
In concentrating polymer solutions up to a desired specification level of residual solvents, encrustations can be prevented and the yield and the degree of purity can be increased, when the product is heated up under pressure, expanded through a restrictor element (3) with vapor formation into a first, preferably coiled flow pipe (7) and concentrated therein as far as possible, the mixture of vapors and polymer solution is whirled at an angle into a second flow pipe (9) in a sloping arrangement and fitted with self-cleaning elements (11, 12) and concentrated therein up to the desired level, and vapors and concentrate are separately discharged only downstream thereof.
---
paper_title: What is the role and contribution of models to management and research in the health services? A view from Europe
paper_content:
Abstract Modelling in different forms has long been regarded as a cornerstone of health OR as in other fields of application. Models are now tending to become a standard tool in health services management and research. What are the lessons we as operational researchers have learnt during this development? How can health care managers and health service researchers benefit from modelling — with or without OR-analysts to guide them between the pitfalls? After an introductory overview concerning the nature and objectives of modelling, examples will be given of modelling applications from different health service areas in order to illustrate the versatility of the method. Further, the choice of methods and models is discussed with special attention to the problems of interpretation and implementation of results. From this overview some conclusions are drawn with regards to advantages and disadvantages of modelling as tools for policy planning and decision-making in the health area. Finally some observations are made concerning the conditions for the future development in the field.
---
paper_title: A goal programming approach to strategic resource allocation in acute care hospitals
paper_content:
Abstract This paper describes a methodology for allocating resources in hospitals. The methodology uses two linear goal-programming models. One model sets case mix and volume for physicians, while holding service costs fixed; the other translates case mix decisions into a commensurate set of practice changes for physicians. The models allow decision makers to set case mix and case costs in such a way that the institution is able to break even, while preserving physician income and minimizing disturbance to practice. The models also permit investigation of trade-offs between case mix and physician practice parameters. Results are presented from a decision-making scenario facing the surgical division of Toronto's Mount Sinai Hospital after the announcement of a 3-year, 18% reduction in funding.
---
| Title: Operating Room Planning and Scheduling: A Literature Review
Section 1: Introduction
Description 1: Introduce the importance and challenges of managing operating rooms in hospitals, and outline the goals and structure of the review.
Section 2: Patient Characteristics
Description 2: Describe the classification of patients (elective and non-elective) and discuss different approaches to planning and scheduling for these categories.
Section 3: Performance Measures
Description 3: Summarize the various criteria used to evaluate the performance of planning and scheduling methods, such as waiting time, resource utilization, and financial criteria.
Section 4: Decision Level
Description 4: Discuss the different levels at which planning and scheduling decisions are made, and summarize the types of decisions considered in the literature.
Section 5: Type of Analysis
Description 5: Outline the various types of analytical methods used, including optimization, simulation, and scenario analysis, and describe how each method is applied to operating room planning and scheduling.
Section 6: Solution Technique
Description 6: Review the specific solution techniques that are applied in the literature, including mathematical programming, heuristics, and meta-heuristics.
Section 7: Uncertainty
Description 7: Examine how uncertainty is incorporated into planning and scheduling models, highlighting types of uncertainty such as arrival and duration variability.
Section 8: Applicability of Research
Description 8: Discuss the extent to which research findings have been tested and implemented in practice, and identify gaps between theory and practice.
Section 9: Conclusion
Description 9: Summarize the key findings of the review, highlight major trends, and suggest areas for future research. |
Current Perspective in Task Scheduling Techniques in Cloud Computing: A Review | 7 | ---
paper_title: A Particle Swarm Optimization-Based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments
paper_content:
Cloud computing environments facilitate applications by providing virtualized resources that can be provisioned dynamically. However, users are charged on a pay-per-use basis. User applications may incur large data retrieval and execution costs when they are scheduled taking into account only the ‘execution time’. In addition to optimizing execution time, the cost arising from data transfers between resources as well as execution costs must also be taken into account. In this paper, we present a particle swarm optimization (PSO) based heuristic to schedule applications to cloud resources that takes into account both computation cost and data transmission cost. We experiment with a workflow application by varying its computation and communication costs. We compare the cost savings when using PSO and existing ‘Best Resource Selection’ (BRS) algorithm. Our results show that PSO can achieve: a) as much as 3 times cost savings as compared to BRS, and b) good distribution of workload onto resources.
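The following is a minimal sketch of the general approach described above (not the authors' implementation): particles encode a task-to-resource mapping, and the fitness adds a computation cost to a data-transfer cost charged whenever two dependent tasks land on different resources. All matrices and constants are hypothetical.

```python
import random

random.seed(0)
N_TASKS, N_RES = 6, 3
COMP = [[random.uniform(1, 10) for _ in range(N_RES)] for _ in range(N_TASKS)]
EDGES = [(0, 2, 5.0), (1, 2, 3.0), (2, 4, 4.0), (3, 5, 2.0)]   # (src, dst, data)
TRANSFER = 1.5                             # cost per data unit between resources

def cost(mapping):
    c = sum(COMP[t][mapping[t]] for t in range(N_TASKS))
    c += sum(TRANSFER * d for u, v, d in EDGES if mapping[u] != mapping[v])
    return c

def decode(position):
    """Round a continuous particle position to a discrete resource index."""
    return [min(N_RES - 1, max(0, int(round(x)))) for x in position]

def pso(n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(0, N_RES - 1) for _ in range(N_TASKS)]
         for _ in range(n_particles)]
    V = [[0.0] * N_TASKS for _ in range(n_particles)]
    P = [x[:] for x in X]                                     # personal bests
    pbest = [cost(decode(x)) for x in X]
    g = P[min(range(n_particles), key=lambda i: pbest[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_TASKS):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = cost(decode(X[i]))
            if f < pbest[i]:
                pbest[i], P[i] = f, X[i][:]
                if f < cost(decode(g)):
                    g = X[i][:]
    return decode(g), cost(decode(g))

mapping, total = pso()
print("task -> resource:", mapping, " total cost:", round(total, 2))
```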
---
paper_title: Cost-Aware Scheduling Algorithm Based on PSO in Cloud Computing Environment
paper_content:
Traditional scheduling algorithms typically aim to minimize the total time cost for processing all tasks. However, in cloud computing environments, computing capability differs for different resources, and so does the cost of resource usage. Therefore, it is vital to take into consideration the usage cost of resources. Along this line, in this paper, we proposed a modified algorithm based on PSO to solve the task scheduling problem in cloud computing environments. Specifically, by adding a cost-aware fitness function to quantify the cost of resource usage, along with the fitness function for time cost, our method can achieve the goal of minimizing both the processing time and resource usage, and therefore reach a global optimal solution. Besides, our experiment on a simulated cloud computing environment proves the efficiency of our proposed algorithm.
---
paper_title: A modified particle swarm optimizer
paper_content:
Evolutionary computation techniques, genetic algorithms, evolutionary strategies and genetic programming are motivated by the evolution of nature. A population of individuals, which encode the problem solutions are manipulated according to the rule of survival of the fittest through "genetic" operations, such as mutation, crossover and reproduction. A best solution is evolved through the generations. In contrast to evolutionary computation techniques, Eberhart and Kennedy developed a different algorithm through simulating social behavior (R.C. Eberhart et al., 1996; R.C. Eberhart and J. Kennedy, 1996; J. Kennedy and R.C. Eberhart, 1995; J. Kennedy, 1997). As in other algorithms, a population of individuals exists. This algorithm is called particle swarm optimization (PSO) since it resembles a school of flying birds. In a particle swarm optimizer, instead of using genetic operators, these individuals are "evolved" by cooperation and competition among the individuals themselves through generations. Each particle adjusts its flying according to its own flying experience and its companions' flying experience. We introduce a new parameter, called inertia weight, into the original particle swarm optimizer. Simulations have been done to illustrate the significant and effective impact of this new parameter on the particle swarm optimizer.
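The modification described here, the inertia weight in the velocity update, amounts to the following (generic PSO notation; the linear decay schedule is a common practical choice, not something prescribed by this abstract):

```python
import random

def update_velocity(v, x, pbest, gbest, w, c1=2.0, c2=2.0):
    """Per-dimension update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    r1, r2 = random.random(), random.random()
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def inertia(iteration, max_iterations, w_start=0.9, w_end=0.4):
    """Linearly decaying inertia weight: wide exploration early, finer
    exploitation late."""
    return w_start - (w_start - w_end) * iteration / max_iterations

print(update_velocity(v=0.5, x=1.0, pbest=1.2, gbest=2.0, w=inertia(10, 100)))
```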
---
paper_title: Enhanced Particle Swarm Optimization for Task Scheduling in Cloud Computing Environments
paper_content:
Abstract The most important requirement in a cloud computing environment is task scheduling, which plays the key role in the efficiency of the whole cloud computing facility. Task scheduling in cloud computing means allocating the best suitable resources for the task to be executed, with consideration of different parameters such as time, cost, scalability, makespan, reliability, availability, throughput, resource utilization and so on. The proposed algorithm considers reliability and availability. Most scheduling algorithms do not consider reliability and availability of the cloud computing environment because of the complexity of achieving these parameters. We propose a mathematical model using Load Balancing Mutation (balancing) particle swarm optimization (LBMPSO) based scheduling and allocation for cloud computing that takes into account reliability, execution time, transmission time, makespan, round trip time, transmission cost and load balancing between tasks and virtual machines. LBMPSO can play a role in achieving reliability of the cloud computing environment by considering the resources available and rescheduling tasks that fail to be allocated. Our approach LBMPSO is compared with standard PSO, a random algorithm and the Longest Cloudlet to Fastest Processor (LCFP) algorithm to show that LBMPSO can achieve savings in makespan, execution time, round trip time and transmission cost.
---
paper_title: Multi Objective Task Scheduling in Cloud Environment Using Nested PSO Framework
paper_content:
Abstract Cloud computing is an emerging computing paradigm with a large collection of heterogeneous autonomous systems with flexible computational architecture. Task scheduling is an important step to improve the overall performance of cloud computing. Task scheduling is also essential to reduce power consumption and improve the profit of service providers by reducing processing time. This paper focuses on task scheduling using a multi-objective nested Particle Swarm Optimization (TSPSO) to optimize energy and processing time. The result obtained by TSPSO was simulated on an open source cloud platform (CloudSim). Finally, the results were compared to existing scheduling algorithms, and it was found that the proposed algorithm (TSPSO) provides well-balanced results across multiple objectives.
---
paper_title: A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems
paper_content:
Cloud workflow system is a kind of platform service based on cloud computing. It facilitates the automation of workflow applications. Between cloud workflow system and its counterparts, market-oriented business model is one of the most prominent factors. The optimization of task-level scheduling in cloud workflow system is a hot topic. As the scheduling is a NP problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they have the characteristic of premature convergence in optimization process and therefore cannot effectively reduce the cost. To solve these problems, Chaotic Particle Swarm Optimization (CPSO) algorithm with chaotic sequence and adaptive inertia weight factor is applied to present the task-level scheduling. Chaotic sequence with high randomness improves the diversity of solutions, and its regularity assures a good global convergence. Adaptive inertia weight factor depends on the estimate value of cost. It makes the scheduling avoid premature convergence by properly balancing between global and local exploration. The experimental simulation shows that the cost obtained by our scheduling is always lower than the other two representative counterparts.
---
paper_title: Reliability-driven scheduling of time/cost-constrained grid workflows
paper_content:
Abstract Workflow scheduling in Grids and Clouds is a NP-Hard problem. Constrained workflow scheduling, arisen in recent years, provides the description of the user requirements through defining constraints on factors like makespan and cost. This paper proposes a scheduling algorithm to maximize the workflow execution reliability while respecting the user-defined deadline and budget. We have used ant colony system to minimize an aggregation of reliability and constraints violation. Three novel heuristics have been proposed which are adaptively selected by ants. Two of them are employed to find feasible schedules and the other is used to enhance the reliability. Two methods have been investigated for time and cost considerations in the resource selection. One of them assigns equal importance to the time and cost factors, and the other weighs them according to the tightness of satisfaction of the corresponding constraints. Simulation results demonstrate the effectiveness of the proposed algorithm in finding feasible schedules with high reliability. As it is shown, as an additional achievement, the Grid profit loss has been decreased.
---
paper_title: An ACO-LB Algorithm for Task Scheduling in the Cloud Environment
paper_content:
In the face of the large number of task requests submitted by users, cloud data centers need not only to finish these massive tasks but also to satisfy the users' service demands. How to allocate virtual machines reasonably and schedule the tasks efficiently becomes a key problem to be solved in the cloud environment. This paper proposes an ACO-LB (load balancing optimization algorithm based on the ant colony algorithm) to solve the load imbalance of virtual machines in the process of task scheduling. The ACO-LB algorithm can adapt to the dynamic cloud environment. It will not only shorten the makespan of task scheduling, but also maintain the load balance of virtual machines in the data center. In this paper, workflow scheduling is simulated in CloudSim. The results show that the proposed ACO-LB algorithm has better performance and load balancing ability.
---
paper_title: Cloud task scheduling based on ant colony optimization
paper_content:
Cloud computing is the development of distributed computing, parallel computing and grid computing, or can be defined as the commercial implementation of these computer science concepts. One of the fundamental issues in this environment is task scheduling. Cloud task scheduling is an NP-hard optimization problem, and many meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling strategy to the changing environment and the types of tasks. In this paper a cloud task scheduling policy based on the ant colony optimization algorithm is presented and compared with the FCFS and round-robin scheduling algorithms. The main goal of these algorithms is minimizing the makespan of a given task set. Ant colony optimization is a random optimization search approach used for allocating the incoming jobs to the virtual machines. The algorithms have been simulated using the CloudSim toolkit package. Experimental results showed that ant colony optimization outperformed the FCFS and round-robin algorithms.
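To make the comparison tangible, here is a toy experiment (synthetic task lengths and VM speeds rather than CloudSim) contrasting the makespan of FCFS, round-robin and a very small ant-colony search over task-to-VM assignments; the pheromone and heuristic details are simplified assumptions, not the paper's exact design:

```python
import random

random.seed(3)
LENGTHS = [random.randint(200, 2000) for _ in range(12)]   # task lengths (MI)
SPEEDS = [500, 1000, 1500]                                  # VM speeds (MIPS)

def makespan(assign):
    load = [0.0] * len(SPEEDS)
    for t, v in enumerate(assign):
        load[v] += LENGTHS[t] / SPEEDS[v]
    return max(load)

def fcfs():
    """One simple reading of FCFS: tasks in submission order, each to the VM
    that becomes free first."""
    load, out = [0.0] * len(SPEEDS), []
    for length in LENGTHS:
        v = min(range(len(SPEEDS)), key=lambda j: load[j])
        load[v] += length / SPEEDS[v]
        out.append(v)
    return out

def round_robin():
    return [t % len(SPEEDS) for t in range(len(LENGTHS))]

def aco(ants=10, iters=100, rho=0.1, alpha=1.0, beta=2.0):
    tau = [[1.0] * len(SPEEDS) for _ in LENGTHS]            # pheromone trails
    best, best_ms = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            assign = []
            for t in range(len(LENGTHS)):
                weights = [tau[t][v] ** alpha * (SPEEDS[v] / LENGTHS[t]) ** beta
                           for v in range(len(SPEEDS))]
                assign.append(random.choices(range(len(SPEEDS)), weights)[0])
            ms = makespan(assign)
            if ms < best_ms:
                best, best_ms = assign, ms
        for t, v in enumerate(best):          # evaporate, then reinforce best
            for j in range(len(SPEEDS)):
                tau[t][j] *= (1 - rho)
            tau[t][v] += 1.0 / best_ms
    return best

for name, assign in (("FCFS", fcfs()), ("round-robin", round_robin()),
                     ("ACO", aco())):
    print(f"{name:>11}: makespan = {makespan(assign):.2f} s")
```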
---
paper_title: Improving Task Scheduling in Large Scale Cloud Computing Environment using Artificial Bee Colony Algorithm
paper_content:
In scheduling, tasks can be scheduled using different scheduling algorithms, each of which has its own particularities and complexity. To obtain the minimum execution time for a task, the scheduling algorithm must perform well; when the performance of the scheduling algorithm is good, the results it produces are the ones considered. Since a huge number of tasks are scheduled in cloud computing, the scheduling algorithm is an important factor in achieving minimum time and maximum throughput. Here the algorithm used for scheduling the tasks is the artificial bee colony algorithm, and the scheduling process is carried out in a cloud computing environment. In this paper, time is considered the main QoS factor; the minimum total task finishing time, mean task finishing time and load balancing time are obtained using a cloud simulation environment.
---
paper_title: A two-stage artificial bee colony algorithm scheduling flexible job-shop scheduling problem with new job insertion
paper_content:
A heuristic is proposed for initializing the ABC population. An ensemble local search method is proposed to improve the convergence of TABC. Three re-scheduling strategies are proposed and evaluated. TABC is tested using benchmark instances and real cases from re-manufacturing. TABC is compared against several state-of-the-art algorithms. This study addresses the scheduling problem in remanufacturing engineering. The purpose of this paper is to model the remanufacturing scheduling problem effectively and solve it. The problem is modeled as a flexible job-shop scheduling problem (FJSP) and is divided into two stages: scheduling and re-scheduling when a new job arrives. The uncertainty in the timing of returns in remanufacturing is modeled as a new-job insertion constraint in the FJSP. A two-stage artificial bee colony (TABC) algorithm is proposed for scheduling and re-scheduling with new job(s) inserted. The objective is to minimize makespan (maximum completion time). A new rule is proposed to initialize the bee colony population. An ensemble local search is proposed to improve algorithm performance. Three re-scheduling strategies are proposed and compared. Extensive computational experiments are carried out using fifteen well-known benchmark instances with eight instances from remanufacturing. For scheduling performance, TABC is compared to five existing algorithms. For re-scheduling performance, TABC is compared to six simple heuristics and proposed hybrid heuristics. The results and comparisons show that TABC is effective in both the scheduling and re-scheduling stages.
---
paper_title: Solving the large-scale hybrid flow shop scheduling problem with limited buffers by a hybrid artificial bee colony algorithm
paper_content:
This paper presents a novel hybrid algorithm (TABC) that combines the artificial bee colony (ABC) and tabu search (TS) to solve the hybrid flow shop (HFS) scheduling problem with limited buffers. The objective is to minimize the maximum completion time. Unlike the original ABC algorithm, in TABC, each food source is represented by a string of job numbers. A novel decoding method is embedded to tackle the limited buffer constraints in the schedules generated. Four neighborhood structures are embedded to balance the exploitation and exploration abilities of the algorithm. A TS-based self-adaptive neighborhood strategy is adopted to impart to the TABC algorithm a learning ability for producing neighboring solutions in different promising regions. Furthermore, a well-designed TS-based local search is developed to enhance the search ability of the employed bees and onlookers. Moreover, the effect of parameter setting is investigated by using the Taguchi method of design of experiment (DOE) to determine the suitable values for key parameters. The proposed TABC algorithm is tested on sets of instances with large scales that are generated based on realistic production. Through a detailed analysis of the experimental results, the highly effective and efficient performance of the proposed TABC algorithm is contrasted with the performance of several algorithms reported in the literature.
---
paper_title: Task scheduling in the Cloud Environments based on an Artificial Bee Colony Algorithm
paper_content:
Nowadays, Cloud computing has gained vast attention due to its technological advancement, cost reduction and availability. In Cloud environments, the suitable scheduling of the received tasks over service providers has become an important and vital problem. The scheduling problem in Cloud environments is an NP-hard problem. Therefore, many heuristics have been proposed to solve this problem up to now. In this paper, we propose a new bee colony algorithm to schedule the tasks on service providers in Cloud environments. The results demonstrate that the proposed algorithm performs better in terms of task execution time, waiting time and missed tasks.
---
paper_title: Chaotic bat algorithm
paper_content:
Abstract Bat algorithm (BA) is a recent metaheuristic optimization algorithm proposed by Yang. In the present study, we have introduced chaos into BA so as to increase its global search mobility for robust global optimization. Detailed studies have been carried out on benchmark problems with different chaotic maps. Here, four different variants of chaotic BA are introduced and thirteen different chaotic maps are utilized for validating each of these four variants. The results show that some variants of chaotic BAs can clearly outperform the standard BA for these benchmarks.
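As a rough illustration of the mechanism described above, the sketch below replaces the uniform random draw that drives a bat's frequency update with a logistic chaotic map (one of the thirteen maps mentioned); the benchmark function, bounds and parameters are assumptions for demonstration only.

```python
import random

def logistic_map(x, mu=4.0):
    """One step of the logistic chaotic map; with mu=4 it stays chaotic in (0, 1)."""
    return mu * x * (1.0 - x)

def sphere(x):
    return sum(v * v for v in x)

def chaotic_bat(dim=5, n_bats=15, iters=300, fmin=0.0, fmax=2.0):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    chaos = random.uniform(0.01, 0.99)            # seed of the chaotic sequence
    best = min(pos, key=sphere)
    for _ in range(iters):
        for i in range(n_bats):
            chaos = logistic_map(chaos)           # chaotic draw instead of random.random()
            freq = fmin + (fmax - fmin) * chaos   # frequency driven by the chaotic map
            vel[i] = [v + (p - b) * freq for v, p, b in zip(vel[i], pos[i], best)]
            cand = [p + v for p, v in zip(pos[i], vel[i])]
            if sphere(cand) < sphere(pos[i]):     # greedy acceptance, kept simple
                pos[i] = cand
        best = min(pos + [best], key=sphere)
    return best, sphere(best)

if __name__ == "__main__":
    sol, val = chaotic_bat()
    print("best value found:", round(val, 6))
```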
---
paper_title: A Bee Colony based optimization approach for simultaneous job scheduling and data replication in grid environments
paper_content:
This paper presents a novel Bee Colony based optimization algorithm, named Job Data Scheduling using Bee Colony (JDS-BC). JDS-BC consists of two collaborating mechanisms to efficiently schedule jobs onto computational nodes and replicate datafiles on storage nodes in a system so that the two independent, and in many cases conflicting, objectives (i.e., makespan and total datafile transfer time) of such heterogeneous systems are concurrently minimized. Three benchmarks - varying from small- to large-sized instances - are used to test the performance of JDS-BC. Results are compared against other algorithms to show JDS-BC's superiority under different operating scenarios. These results also provide invaluable insights into data-centric job scheduling for grid environments.
---
paper_title: A Genetic-Algorithm-Based Approach for Task Migration in Pervasive Clouds
paper_content:
Pervasive computing is converging with cloud computing into pervasive cloud computing, an emerging computing paradigm. Users can run their applications or tasks in a pervasive cloud environment to gain better execution efficiency and performance, leveraging the powerful computing and storage capacities of pervasive clouds through task migration. During task migration, a number of possibly conflicting objectives must be considered when making migration decisions, such as lower energy consumption and quick response, in order to find an optimal migration path. In this paper, we propose a genetic algorithm (GA)-based approach, which is effective in addressing multiobjective optimization problems. We have performed preliminary evaluations of the proposed approach using one of the classical genetic algorithms, and these show quite promising results. The conclusion is that GAs can be used for decision making in task migration in pervasive clouds.
---
paper_title: Multi-level hierarchic genetic-based scheduling of independent jobs in dynamic heterogeneous grid environment
paper_content:
Task scheduling and resource allocation are the key rationale behind the computational grid. Distributed resource clusters usually work in different autonomous domains with their own access and security policies that have a great impact on the successful task execution across the domain boundaries. Heuristics and metaheuristics are the effective technologies for scheduling in grids due to their ability to deliver high quality solutions in reasonable time. In this paper, we develop a Hierarchic Genetic Scheduler (HGS-Sched) for improving the effectiveness of the single-population genetic-based schedulers in the dynamic grid environment. The HGS-Sched enables a concurrent exploration of the solution space by many small dependent populations. We consider a bi-objective independent batch job scheduling problem with makespan and flowtime minimized in hierarchical mode (makespan is a dominant criterion). The empirical results show the high effectiveness of the proposed method in comparison with the mono-population and hybrid genetic-based schedulers.
---
paper_title: Independent Task Scheduling in Cloud Computing by Improved Genetic Algorithm
paper_content:
Scheduling is a critical problem in Cloud computing, because a cloud provider has to serve many users in a Cloud computing system. So scheduling is a major issue in establishing Cloud computing systems. A good scheduling technique also helps in proper and efficient utilization of the resources. Many scheduling techniques have been developed by researchers, such as GA (Genetic Algorithm), PSO (Particle Swarm Optimization), Min-Min, Max-Min, X-Sufferage, etc. This paper proposes a new scheduling algorithm which is an improved version of the Genetic Algorithm. In the proposed scheduling algorithm, the Min-Min and Max-Min scheduling methods are merged into the standard Genetic Algorithm. Min-Min, Max-Min and Genetic scheduling techniques are discussed, and finally the performance of the standard Genetic Algorithm and the proposed improved Genetic Algorithm is compared and shown by graphs.
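A hedged sketch of the seeding idea: the GA's initial population is built from a Min-Min-like and a Max-Min-like schedule plus random individuals. The ETC matrix, the simplified greedy seeds and the GA operators are illustrative assumptions, not the paper's exact procedures.

```python
import random

# Illustrative data (assumption): ETC[t][v] is the expected execution time of task t on VM v.
ETC = [[14, 9, 22], [7, 11, 5], [30, 18, 25], [4, 6, 3], [16, 12, 20], [9, 8, 15]]
N_TASKS, N_VMS = len(ETC), len(ETC[0])

def makespan(assign):
    loads = [0.0] * N_VMS
    for t, v in enumerate(assign):
        loads[v] += ETC[t][v]
    return max(loads)

def greedy_seed(pick_large_first):
    """Min-Min-like (False) or Max-Min-like (True) seed: order tasks by size and
    map each to the VM giving the earliest completion time (a simplification)."""
    order = sorted(range(N_TASKS), key=lambda t: min(ETC[t]), reverse=pick_large_first)
    loads, assign = [0.0] * N_VMS, [0] * N_TASKS
    for t in order:
        v = min(range(N_VMS), key=lambda v: loads[v] + ETC[t][v])
        assign[t] = v
        loads[v] += ETC[t][v]
    return assign

def improved_ga(pop_size=20, gens=100, pmut=0.2):
    pop = [greedy_seed(False), greedy_seed(True)]            # seeded individuals
    pop += [[random.randrange(N_VMS) for _ in range(N_TASKS)]
            for _ in range(pop_size - len(pop))]
    for _ in range(gens):
        pop.sort(key=makespan)
        next_pop = pop[:2]                                    # elitism
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)                 # pick parents from the fitter half
            cut = random.randrange(1, N_TASKS)
            child = a[:cut] + b[cut:]                         # one-point crossover
            if random.random() < pmut:
                child[random.randrange(N_TASKS)] = random.randrange(N_VMS)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=makespan)

if __name__ == "__main__":
    best = improved_ga()
    print("best schedule:", best, "makespan:", makespan(best))
```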
---
paper_title: Scheduling scientific workflow applications with deadline and budget constraints using genetic algorithms
paper_content:
Grid technologies have progressed towards a service-oriented paradigm that enables a new way of service provisioning based on utility computing models, which are capable of supporting diverse computing services. This enables scientific applications to take advantage of computing resources distributed worldwide to enhance capability and performance. Many scientific applications in areas such as bioinformatics and astronomy require workflow processing in which tasks are executed based on their control or data dependencies. Scheduling such interdependent tasks in utility Grid environments needs to consider users' QoS requirements. In this paper, we present a genetic algorithm approach to address scheduling optimization problems in workflow applications, based on two QoS constraints: deadline and budget.
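One way to read the deadline/budget handling is as a penalised fitness function, sketched below under assumed per-task service options and a serial-workflow simplification; the figures and the penalty weight are hypothetical.

```python
# Illustrative per-task choices (assumptions): each entry is (execution_time, cost)
# for running the task on a given service; a schedule picks one service per task.
SERVICES = [
    [(10, 5.0), (6, 9.0), (3, 14.0)],
    [(8, 4.0), (5, 7.5), (2, 12.0)],
    [(12, 6.0), (7, 10.0), (4, 16.0)],
]
DEADLINE, BUDGET = 20.0, 30.0

def time_and_cost(schedule):
    """Serial-workflow simplification: total time is the sum of task times."""
    total_t = sum(SERVICES[task][svc][0] for task, svc in enumerate(schedule))
    total_c = sum(SERVICES[task][svc][1] for task, svc in enumerate(schedule))
    return total_t, total_c

def fitness(schedule, alpha=10.0):
    """Lower is better: base objective plus penalties for constraint violation."""
    t, c = time_and_cost(schedule)
    penalty = alpha * max(0.0, t - DEADLINE) + alpha * max(0.0, c - BUDGET)
    return t + c + penalty

if __name__ == "__main__":
    print(fitness([1, 1, 1]))   # feasible under the illustrative limits
    print(fitness([0, 0, 0]))   # misses the deadline, so penalised
```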
---
paper_title: A genetic algorithm for task scheduling on heterogeneous computing systems using multiple priority queues
paper_content:
On parallel and distributed heterogeneous computing systems, a heuristic-based task scheduling algorithm typically consists of two phases: task prioritization and processor selection. In a heuristic-based task scheduling algorithm, different prioritization will produce different makespan on a heterogeneous computing system. Therefore, a good scheduling algorithm should be able to efficiently assign a priority to each subtask depending on the resources needed to minimize makespan. In this paper, a task scheduling scheme on heterogeneous computing systems using a multiple priority queues genetic algorithm (MPQGA) is proposed. The basic idea of our approach is to exploit the advantages of both evolutionary-based and heuristic-based algorithms while avoiding their drawbacks. The proposed algorithm incorporates a genetic algorithm (GA) approach to assign a priority to each subtask while using a heuristic-based earliest finish time (EFT) approach to search for a solution for the task-to-processor mapping. The MPQGA method also designs crossover, mutation, and fitness function suitable for the scenario of directed acyclic graph (DAG) scheduling. The experimental results for large-sized problems from a large set of randomly generated graphs as well as graphs of real-world problems with various characteristics show that the proposed MPQGA algorithm outperforms two non-evolutionary heuristics and a random search method in terms of schedule quality.
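The processor-selection half of the scheme can be illustrated as follows: given a task priority order (which the GA would evolve), each task is mapped to the processor yielding the earliest finish time. Dependencies are omitted here for brevity, so this is a simplification of the DAG setting; the ETC matrix is assumed.

```python
# Illustrative expected-time-to-compute matrix (assumption): ETC[t][p] is the
# runtime of task t on processor p.
ETC = [[11, 16, 9], [5, 8, 12], [20, 14, 17], [7, 7, 10], [13, 9, 18]]

def schedule_by_priority(priority_order):
    """Map tasks (in the given priority order) to the processor that yields the
    earliest finish time; returns (assignment, makespan). Independent tasks only."""
    n_procs = len(ETC[0])
    ready = [0.0] * n_procs              # time at which each processor becomes free
    assignment = {}
    for t in priority_order:
        finish = [ready[p] + ETC[t][p] for p in range(n_procs)]
        p = min(range(n_procs), key=lambda p: finish[p])
        assignment[t] = p
        ready[p] = finish[p]
    return assignment, max(ready)

if __name__ == "__main__":
    # Two candidate priority orders, e.g. produced by different GA chromosomes.
    for order in ([2, 0, 4, 1, 3], [0, 1, 2, 3, 4]):
        mapping, span = schedule_by_priority(order)
        print(order, "->", mapping, "makespan:", span)
```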
---
paper_title: Task Scheduling Algorithm based on Greedy Strategy in Cloud Computing
paper_content:
Given that the Min-Min algorithm prefers scheduling small tasks and the Max-Min algorithm prefers scheduling big tasks, which leads to load imbalance in cloud computing, a new algorithm named Min-Max is proposed. Min-Max makes good use of the greedy strategy: small tasks and big tasks are put together for scheduling in order to solve the load imbalance problem. Experimental results show that Min-Max improves the utilization rate of the entire system and saves 9% of the overall execution time compared with Min-Min. Compared to Max-Min, Min-Max improves the utilization rate of the entire system, and the total completion time and average response time are reduced by 7% and 9%, respectively.
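A minimal sketch contrasting the three orderings on identical-speed VMs, where Min-Min and Max-Min reduce to scheduling by ascending or descending task size; the interleaving rule shown for Min-Max is a plausible simplification of the paper's procedure, and the task sizes are made up.

```python
# Illustrative task sizes and identical-speed VMs (assumptions).
TASKS = [2, 3, 4, 25, 30, 5, 6, 28]   # execution times
N_VMS = 3

def greedy(tasks_order):
    """Assign tasks in the given order to the currently least-loaded VM."""
    loads = [0.0] * N_VMS
    for t in tasks_order:
        v = min(range(N_VMS), key=lambda v: loads[v])
        loads[v] += t
    return max(loads)

min_min_order = sorted(TASKS)                  # small tasks first
max_min_order = sorted(TASKS, reverse=True)    # big tasks first

# Combined "Min-Max"-style order: interleave the largest and smallest tasks so
# small and big tasks are scheduled together (a simplification of the paper's rule).
asc, combined = sorted(TASKS), []
while asc:
    combined.append(asc.pop())                 # take the biggest remaining
    if asc:
        combined.append(asc.pop(0))            # then the smallest remaining

for name, order in [("Min-Min", min_min_order),
                    ("Max-Min", max_min_order),
                    ("Min-Max", combined)]:
    print(name, "makespan:", greedy(order))
```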
---
paper_title: A New Resource Scheduling Strategy Based on Genetic Algorithm in Cloud Computing Environment
paper_content:
In view of the load balancing problem in VM resource scheduling, this paper presents a scheduling strategy for load balancing of VM resources based on a genetic algorithm. Using historical data and the current state of the system, and through the genetic algorithm, this strategy computes in advance the influence that deploying the needed VM resources will have on the system and then chooses the solution with the least impact, through which it achieves the best load balancing and reduces or avoids dynamic migration. At the same time, this paper introduces a variation rate to describe the load variation of the system's virtual machines, and it also introduces the average load distance to measure the overall load balancing effect of the algorithm. The experiments show that this strategy has fairly good global convergence and efficiency, and the algorithm is, to a great extent, able to solve the problems of load imbalance and high migration cost after system VMs are scheduled. What is more, the average load distance does not grow with the increase of the VM load variation rate, and the scheduling algorithm achieves quite good resource utilization.
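The balance metrics mentioned above might be computed roughly as below; the exact formulas in the paper may differ, so treat this as one plausible reading with illustrative load vectors.

```python
def average_load_distance(vm_loads):
    """Average absolute deviation of VM loads from the mean load; lower means
    the placement is better balanced (one plausible reading of the metric)."""
    mean = sum(vm_loads) / len(vm_loads)
    return sum(abs(load - mean) for load in vm_loads) / len(vm_loads)

def load_variation_rate(previous_loads, current_loads):
    """Relative change of each VM's load between two observations."""
    return [abs(cur - prev) / prev if prev else float("inf")
            for prev, cur in zip(previous_loads, current_loads)]

if __name__ == "__main__":
    balanced, skewed = [40, 42, 38, 41], [10, 80, 25, 46]
    print(average_load_distance(balanced))   # small distance: well balanced
    print(average_load_distance(skewed))     # large distance: poorly balanced
    print(load_variation_rate(balanced, skewed))
```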
---
paper_title: An Greedy-Based Job Scheduling Algorithm in Cloud Computing
paper_content:
Nowadays, cloud computing has become a popular platform for scientific applications. Cloud computing intends to share a large number of resources, such as equipment for storage and computation, as well as information and knowledge for scientific research. The job scheduling algorithm is one of the most challenging theoretical issues in the cloud computing area. Using cloud computing resources efficiently and increasing user satisfaction with the job scheduling system is one of the cloud computing service providers' important goals. Intensive research has been done in the area of job scheduling in cloud computing. In this paper we propose a Greedy-Based Algorithm for cloud computing. To support this approach, the article proceeds in the following steps. First, we classify tasks based on QoS. Then, according to the task categories, we select the appropriate branch of the function and compute the fairness evaluation. This also reflects the greedy algorithm's selection of a local optimum. Compared to other methods, it can decrease the completion time of submitted jobs and increase user satisfaction.
---
paper_title: Credit Based Scheduling Algorithm in Cloud Computing Environment
paper_content:
Abstract Cloud computing in today's world has become synonymous with good service policies. In order to achieve good services from a cloud, a number of resources are needed. But cloud providers are limited by the amount of resources they have, and are thus compelled to strive for maximum utilization. The Min-Min algorithm is used to reduce the makespan of tasks by considering the task length. Keeping this in mind, cloud providers should also achieve user satisfaction. Thus research favors scheduling algorithms that consider both user satisfaction and resource availability. In this paper an improved scheduling algorithm is introduced after analyzing the traditional algorithms, which are based on user priority and task length. Highly prioritized tasks are not given any special importance when they arrive. The proposed approach considers all of these factors. The experimental results show a considerable improvement in the utilization of resources.
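A hedged sketch of a credit-based ordering that combines a task-length credit with a user-priority credit; the normalisation and the equal weights are assumptions rather than the paper's formula.

```python
# Illustrative task list (assumptions): (task_id, length, user_priority with 1 = highest)
TASKS = [("t1", 400, 3), ("t2", 150, 1), ("t3", 900, 2), ("t4", 300, 1)]

def credits(tasks, w_length=0.5, w_priority=0.5):
    """Give each task a length credit (shorter = higher) and a priority credit
    (higher user priority = higher), then combine them with fixed weights."""
    max_len = max(length for _, length, _ in tasks)
    max_pri = max(pri for _, _, pri in tasks)
    scored = []
    for task_id, length, pri in tasks:
        length_credit = 1.0 - length / max_len            # in [0, 1)
        priority_credit = (max_pri - pri + 1) / max_pri   # in (0, 1]
        scored.append((task_id, w_length * length_credit + w_priority * priority_credit))
    # Tasks with higher combined credit are scheduled first.
    return sorted(scored, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    for task_id, credit in credits(TASKS):
        print(task_id, round(credit, 3))
```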
---
paper_title: Cost performance of QoS Driven task scheduling in cloud computing
paper_content:
Abstract Until now, many parameters have been considered in QoS-driven scheduling, such as makespan, latency and load balancing, but the allocation cost parameter has not been considered in the QoS-driven scheduling algorithm. Minimizing the total allocation cost is an important issue in cloud computing. In this paper, the cost of the QoS-driven task scheduling algorithm is calculated and compared with that of a traditional task scheduling algorithm in a cloud computing environment. The experimental results, based on the CloudSim 3.0 toolkit with NetBeans IDE 8.0, show that the QoS-driven algorithm achieves good performance on the cost parameter.
---
paper_title: Resource-aware hybrid scheduling algorithm in heterogeneous distributed computing
paper_content:
Today, almost everyone is connected to the Internet and uses different Cloud solutions to store, deliver and process data. Cloud computing assembles large networks of virtualized services such as hardware and software resources. The new era in which ICT has penetrated almost all domains (healthcare, aged care, social assistance, surveillance, education, etc.) creates the need for new multimedia content-driven applications. These applications generate huge amounts of data and require gathering, processing and then aggregation in a fault-tolerant, reliable and secure heterogeneous distributed system created by a mixture of Cloud systems (public/private), mobile device networks, desktop-based clusters, etc. In this context, dynamic resource provisioning for Big Data application scheduling has become a challenge in modern systems. We propose a resource-aware hybrid scheduling algorithm for different types of applications: batch jobs and workflows. The proposed algorithm considers hierarchical clustering of the available resources into groups in the allocation phase. Task execution is performed in two phases: in the first, tasks are assigned to groups of resources, and in the second phase, a classical scheduling algorithm is used for each group of resources. The proposed algorithm is suitable for Heterogeneous Distributed Computing, especially for modern High-Performance Computing (HPC) systems in which applications are modeled with various requirements (both IO- and computationally intensive), with an accent on data from multimedia applications. We evaluate its performance in a realistic setting of the CloudSim tool with respect to load balancing, cost savings, dependency assurance for workflows and computational efficiency, and investigate the computing methods of these performance metrics at runtime. We propose a hybrid approach for task scheduling in Heterogeneous Distributed Computing. The proposed algorithm considers hierarchical clustering of the available resources into groups. We consider different scheduling strategies for independent tasks and for DAG scheduling. We analyze the performance of our proposed algorithm through simulation by using and extending CloudSim.
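The two-phase idea can be sketched as follows: resources are first clustered into groups (here simply by MIPS capacity), each task is routed to a group by size, and a classical least-loaded rule stands in for the per-group scheduler. Thresholds and data are illustrative assumptions.

```python
# Illustrative resources and tasks (assumptions): (name, MIPS) and (job, size).
RESOURCES = [("vm1", 250), ("vm2", 300), ("vm3", 900), ("vm4", 1000), ("vm5", 2400)]
TASKS = [(f"job{i}", size) for i, size in enumerate([100, 4000, 250, 9000, 600, 12000])]

def group_resources(resources, thresholds=(500, 1500)):
    """Phase 1: hierarchical-style grouping of resources into small/medium/large."""
    groups = {"small": [], "medium": [], "large": []}
    for name, mips in resources:
        if mips < thresholds[0]:
            groups["small"].append((name, mips))
        elif mips < thresholds[1]:
            groups["medium"].append((name, mips))
        else:
            groups["large"].append((name, mips))
    return groups

def schedule(tasks, groups):
    """Phase 2: send each task to a group by size, then to the least-loaded
    resource of that group (a stand-in for the classical per-group scheduler)."""
    loads = {name: 0.0 for g in groups.values() for name, _ in g}
    plan = {}
    for task, size in tasks:
        key = "small" if size < 500 else "medium" if size < 5000 else "large"
        name, mips = min(groups[key], key=lambda r: loads[r[0]])
        loads[name] += size / mips
        plan[task] = name
    return plan, loads

if __name__ == "__main__":
    plan, loads = schedule(TASKS, group_resources(RESOURCES))
    print(plan)
    print({k: round(v, 2) for k, v in loads.items()})
```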
---
paper_title: Cloud-aware data intensive workflow scheduling on volunteer computing systems
paper_content:
Abstract Volunteer computing systems offer high computing power to the scientific communities to run large data-intensive scientific workflows. However, these computing environments provide a best-effort infrastructure to execute high performance jobs. This work aims to schedule scientific and data-intensive workflows on a hybrid of volunteer computing systems and Cloud resources to enhance the utilization of these environments and increase the percentage of workflows that meet the deadline. The proposed workflow scheduling system partitions a workflow into sub-workflows to minimize data dependencies among the sub-workflows. These sub-workflows are then scheduled for distribution on volunteer resources according to the proximity of resources and the load balancing policy. The execution time of each sub-workflow on the selected volunteer resources is estimated in this phase. If any of the sub-workflows misses its sub-deadline due to a large waiting time, we consider re-scheduling this sub-workflow onto public Cloud resources. This re-scheduling improves the system performance by increasing the percentage of workflows that meet the deadline. The proposed Cloud-aware data-intensive scheduling algorithm increases the percentage of workflows that meet the deadline by a factor of 75% on average with respect to the execution of workflows on the volunteer resources.
---
paper_title: Exploring decentralized dynamic scheduling for grids and clouds using the community-aware scheduling algorithm
paper_content:
Job scheduling strategies have been studied for decades in a variety of scenarios. Due to the new characteristics of emerging computational systems, such as the grid and cloud, metascheduling turns out to be an important scheduling pattern because it is responsible for orchestrating resources managed by independent local schedulers and bridges the gap between participating nodes. Equally, to overcome issues such as bottlenecks, single points of failure, and impractical unique administrative management, which are normally caused by conventional centralized or hierarchical schemes, the decentralized scheduling scheme is emerging as a promising approach because of its capability with regard to scalability and flexibility. In this work, we introduce a decentralized dynamic scheduling approach entitled the community-aware scheduling algorithm (CASA). The CASA is a two-phase scheduling solution comprised of a set of heuristic sub-algorithms to achieve optimized scheduling performance over the scope of the overall grid or cloud, instead of individual participating nodes. The extensive experimental evaluation with a real grid workload trace dataset shows that, when compared to the centralized scheduling scheme with BestFit as the metascheduling policy, the use of CASA can lead to a 30%-61% better average job slowdown, and a 68%-86% shorter average job waiting time in a decentralized scheduling manner without requiring detailed real-time processing information from participating nodes. Highlights: We introduce a decentralized scheduling algorithm without requiring detailed node information. Our algorithm is able to adapt to the changes in grids through time by rescheduling. Comparisons with the known BestFit algorithm within a centralized scheduling scheme are made. Our algorithm leads to a 30%-61% better average job slowdown. Our algorithm leads to a 68%-86% shorter average job waiting time.
---
paper_title: Multi-objective energy-efficient workflow scheduling using list-based heuristics
paper_content:
Abstract Workflow applications are a popular paradigm used by scientists for modelling applications to be run on heterogeneous high-performance parallel and distributed computing systems. Today, the increase in the number and heterogeneity of multi-core parallel systems facilitates the access to high-performance computing to almost every scientist, yet entailing additional challenges to be addressed. One of the critical problems today is the power required for operating these systems for both environmental and financial reasons. To decrease the energy consumption in heterogeneous systems, different methods such as energy-efficient scheduling are receiving increasing attention. Current schedulers are, however, based on simplistic energy models not matching the reality, use techniques like DVFS not available on all types of systems, or do not approach the problem as a multi-objective optimisation considering both performance and energy as simultaneous objectives. In this paper, we present a new Pareto-based multi-objective workflow scheduling algorithm as an extension to an existing state-of-the-art heuristic capable of computing a set of tradeoff optimal solutions in terms of makespan and energy efficiency. Our approach is based on empirical models which capture the real behaviour of energy consumption in heterogeneous parallel systems. We compare our new approach with a classical mono-objective scheduling heuristic and state-of-the-art multi-objective optimisation algorithm and demonstrate that it computes better or similar results in different scenarios. We analyse the different tradeoff solutions computed by our algorithm under different experimental configurations and we observe that in some cases it finds solutions which reduce the energy consumption by up to 34.5% with a slight increase of 2% in the makespan.
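The Pareto-based selection can be illustrated with a small dominance filter over candidate schedules evaluated on makespan and energy; the candidate values below are invented for demonstration.

```python
def dominates(a, b):
    """Schedule a dominates b if it is no worse in both objectives and strictly
    better in at least one (minimising makespan and energy)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated (makespan, energy) points."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

if __name__ == "__main__":
    # Illustrative (makespan in s, energy in kJ) values for candidate schedules.
    candidates = [(120, 80), (100, 95), (150, 60), (130, 62), (100, 90), (160, 59)]
    print(sorted(pareto_front(candidates)))
```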
---
paper_title: Heuristics for periodical batch job scheduling in a MapReduce computing framework
paper_content:
Task scheduling has a significant impact on the performance of the MapReduce computing framework. In this paper, a scheduling problem of periodical batch jobs with makespan minimization is considered. The problem is modeled as a general two-stage hybrid flow shop scheduling problem with schedule-dependent setup times. The new model incorporates the data locality of tasks and is formulated as an integer program. Three heuristics are developed to solve the problem and an improvement policy based on data locality is presented to enhance the methods. A lower bound of the makespan is derived. 150 instances are randomly generated from data distributions drawn from a real cluster. The parameters involved in the methods are set according to different cluster setups. The proposed heuristics are compared over different numbers of jobs and cluster setups. Computational results show that the performance of the methods is highly dependent on both the number of jobs and the cluster setups. The proposed improvement policy is effective and the impact of the input data distribution on the policy is analyzed and tested.
---
paper_title: Reliability-driven scheduling of time/cost-constrained grid workflows
paper_content:
Abstract Workflow scheduling in Grids and Clouds is an NP-hard problem. Constrained workflow scheduling, which has arisen in recent years, describes user requirements by defining constraints on factors like makespan and cost. This paper proposes a scheduling algorithm to maximize the workflow execution reliability while respecting the user-defined deadline and budget. We have used an ant colony system to minimize an aggregation of reliability and constraint violation. Three novel heuristics have been proposed which are adaptively selected by ants. Two of them are employed to find feasible schedules and the other is used to enhance the reliability. Two methods have been investigated for time and cost considerations in the resource selection. One of them assigns equal importance to the time and cost factors, and the other weighs them according to the tightness of satisfaction of the corresponding constraints. Simulation results demonstrate the effectiveness of the proposed algorithm in finding feasible schedules with high reliability. As shown, as an additional achievement, the Grid profit loss has also been decreased.
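A simplified sketch of the ant-colony mechanics described above: pheromone and a reliability heuristic guide each task's resource choice, and the best schedule found deposits pheromone. Deadline/budget feasibility checking from the paper is omitted, and the reliability matrix and parameters are assumptions.

```python
import random

# Illustrative data (assumption): RELIABILITY[t][r] is the reliability of task t on resource r.
RELIABILITY = [[0.99, 0.95, 0.90], [0.97, 0.99, 0.93], [0.92, 0.96, 0.99]]
N_TASKS, N_RES = len(RELIABILITY), len(RELIABILITY[0])

def schedule_reliability(assign):
    """Overall reliability as the product of per-task reliabilities."""
    rel = 1.0
    for t, r in enumerate(assign):
        rel *= RELIABILITY[t][r]
    return rel

def ant_colony(ants=10, iters=100, alpha=1.0, beta=1.0, rho=0.1):
    pheromone = [[1.0] * N_RES for _ in range(N_TASKS)]
    best, best_rel = None, -1.0
    for _ in range(iters):
        for _ in range(ants):
            assign = []
            for t in range(N_TASKS):
                weights = [(pheromone[t][r] ** alpha) * (RELIABILITY[t][r] ** beta)
                           for r in range(N_RES)]
                assign.append(random.choices(range(N_RES), weights=weights)[0])
            rel = schedule_reliability(assign)
            if rel > best_rel:
                best, best_rel = assign, rel
        # Evaporate, then let the best-so-far schedule deposit pheromone.
        for t in range(N_TASKS):
            for r in range(N_RES):
                pheromone[t][r] *= (1.0 - rho)
            pheromone[t][best[t]] += best_rel
    return best, best_rel

if __name__ == "__main__":
    assign, rel = ant_colony()
    print("assignment:", assign, "reliability:", round(rel, 4))
```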
---
paper_title: An incentive-based heuristic job scheduling algorithm for utility grids
paper_content:
Abstract Job scheduling in utility grids should take into account the incentives for both grid users and resource providers. However, most of existing studies on job scheduling in utility grids only address the incentive for one party, i.e., either the users or the resource providers. Very few studies on job scheduling in utility grids consider incentives for both parties, in which the cost, one of the most attractive incentives for users, is not addressed. In this paper, we study the job scheduling in utility grid by optimizing the incentives for both parties. We propose a multi-objective optimization approach, i.e., maximizing the successful execution rate of jobs and minimizing the combined cost (incentives for grid users), and minimizing the fairness deviation of profits (incentive for resource providers). The proposed multi-objective optimization approach could offer sufficient incentives for the two parties to stay and play in the utility grid. A heuristic scheduling algorithm called Cost-Greedy Price-Adjusting (CGPA) algorithm is developed to optimize the incentives for both parties. Simulation results show that the CGPA algorithm is effective and could lead to higher successful execution rate, lower combined cost and lower fairness deviation compared with some popular algorithms in most cases.
---
| Title: Current Perspective in Task Scheduling Techniques in Cloud Computing: A Review
Section 1: INTRODUCTION
Description 1: This section provides an overview of cloud computing models and emphasizes the importance of efficient task scheduling, resource allocation, and resource sharing. It also introduces various scheduling approaches and outlines the paper's structure.
Section 2: TASK SCHEDULING TAXONOMY
Description 2: This section presents a novel taxonomy for classifying task scheduling approaches in cloud computing, based on Goal Oriented Task Scheduling (GOTS) and Constraint Oriented Task Scheduling (COTS).
Section 3: METAHEURISTIC BASED SCHEDULING
Description 3: This section reviews metaheuristic-based solutions for task scheduling, including Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), and BAT Optimization.
Section 4: GENETIC ALGORITHM (GA) BASED SCHEDULING
Description 4: This section discusses various proposals that use Genetic Algorithms (GA) for task scheduling, including hybrid approaches that combine GA with other techniques.
Section 5: GREEDY APPROACH BASED SCHEDULING SCHEMES
Description 5: This section covers scheduling schemes that employ greedy algorithms to optimize resource utilization and task completion times in cloud environments.
Section 6: HEURISTIC BASED SCHEDULING
Description 6: This section explores heuristic-based scheduling methods and their application in optimizing different parameters like cost, time, and resource utilization in cloud computing.
Section 7: CONCLUSION
Description 7: This section summarizes the review of task scheduling schemes, highlights the proposed taxonomy, and discusses the implications of Goal Oriented Task Scheduling (GOTS) and Constraint Oriented Task Scheduling (COTS). It also suggests future research directions in the field of task scheduling in cloud computing. |
Review Paper Medication-related Clinical Decision Support in Computerized Provider Order Entry Systems: A Review | 13 | ---
paper_title: Some Unintended Consequences of Information Technology in Health Care: The Nature of Patient Care Information System-related Errors
paper_content:
Medical error reduction is an international issue, as is the implementation of patient care information systems (PCISs) as a potential means to achieving it. As researchers conducting separate studies in the United States, The Netherlands, and Australia, using similar qualitative methods to investigate implementing PCISs, the authors have encountered many instances in which PCIS applications seem to foster errors rather than reduce their likelihood. The authors describe the kinds of silent errors they have witnessed and, from their different social science perspectives (information science, sociology, and cognitive science), they interpret the nature of these errors. The errors fall into two main categories: those in the process of entering and retrieving information, and those in the communication and coordination process that the PCIS is supposed to support. The authors believe that with a heightened awareness of these issues, informaticians can educate, design systems, implement, and conduct research in such a way that they might be able to avoid the unintended consequences of these subtle silent errors.
---
paper_title: Improving Safety with Information Technology
paper_content:
Information technology can improve patient safety by preventing errors and facilitating rapid response to adverse events. Computerized prescribing by physicians reduces the rate of medication-related errors. Systems that automatically page clinicians about serious laboratory abnormalities and remote monitoring of patients in intensive care units also appear promising.
---
paper_title: Patient safety and computerized medication ordering at Brigham and Women's Hospital.
paper_content:
Article-at-a-Glance Background Medications are important therapeutic tools in health care, yet creating safe medication processes is challenging for many reasons. Computerized physician order entry (CPOE), one important way that technology can be used to improve the medication process, has been in place at Brigham and Women's Hospital (BWH; Boston) since 1993. CPOE at BWH The CPOE application, designed and developed internally by the BWH information systems team, allows physicians and other clinicians to enter all patient orders into the computer. Physicians enter 85% of orders, with the remainder entered electronically by other clinicians. CPOE and safe medication use The CPOE application at BWH includes several features designed to improve medication safety—structural features (for example, required fields, use of pick lists), enhanced workflow features (order sets, standard scales for insulin and potassium), alerts and reminders (drug–drug and drug–allergy interaction checking), and adjunct features (the pharmacy system, access to online reference information). Results at BWH Studies of the impact of CPOE on physician decision making and patient safety at BWH include assessment of CPOE's impact on the serious medication error and the preventable adverse drug event rate, the impact of computer guidelines on the use of vancomycin, the impact of guidelines on the use of heparin in patients at bed rest, and the impact of dosing suggestions on excessive dosing. Conclusion CPOE and several forms of clinical decision support targeted at increasing patient safety have substantially decreased the frequency of serious medication errors and have had an even bigger impact on the overall medication error rate.
---
paper_title: Adverse Drug Events in Ambulatory Care
paper_content:
Background Adverse events related to drugs occur frequently among inpatients, and many of these events are preventable. However, few data are available on adverse drug events among outpatients. We conducted a study to determine the rates, types, severity, and preventability of such events among outpatients and to identify preventive strategies. Methods We performed a prospective cohort study, including a survey of patients and a chart review, at four adult primary care practices in Boston (two hospital-based and two community-based), involving a total of 1202 outpatients who received at least one prescription during a four-week period. Prescriptions were computerized at two of the practices and handwritten at the other two. Results Of the 661 patients who responded to the survey (response rate, 55 percent), 162 had adverse drug events (25 percent; 95 percent confidence interval, 20 to 29 percent), with a total of 181 events (27 per 100 patients). Twenty-four of the events (13 percent) were serious, 51 (28...
---
paper_title: Incidence of adverse drug events and potential adverse drug events. Implications for prevention. ADE Prevention Study Group.
paper_content:
OBJECTIVES ::: To assess incidence and preventability of adverse drug events (ADEs) and potential ADEs. To analyze preventable events to develop prevention strategies. ::: ::: ::: DESIGN ::: Prospective cohort study. ::: ::: ::: PARTICIPANTS ::: All 4031 adult admissions to a stratified random sample of 11 medical and surgical units in two tertiary care hospitals over a 6-month period. Units included two medical and three surgical intensive care units and four medical and two surgical general care units. ::: ::: ::: MAIN OUTCOME MEASURES ::: Adverse drug events and potential ADEs. ::: ::: ::: METHODS ::: Incidents were detected by stimulated self-report by nurses and pharmacists and by daily review of all charts by nurse investigators. Incidents were subsequently classified by two independent reviewers as to whether they represented ADEs or potential ADEs and as to severity and preventability. ::: ::: ::: RESULTS ::: Over 6 months, 247 ADEs and 194 potential ADEs were identified. Extrapolated event rates were 6.5 ADEs and 5.5 potential ADEs per 100 nonobstetrical admissions, for mean numbers per hospital per year of approximately 1900 ADEs and 1600 potential ADEs. Of all ADEs, 1% were fatal (none preventable), 12% life-threatening, 30% serious, and 57% significant. Twenty-eight percent were judged preventable. Of the life-threatening and serious ADEs, 42% were preventable, compared with 18% of significant ADEs. Errors resulting in preventable ADEs occurred most often at the stages of ordering (56%) and administration (34%); transcription (6%) and dispensing errors (4%) were less common. Errors were much more likely to be intercepted if the error occurred earlier in the process: 48% at the ordering stage vs 0% at the administration stage. ::: ::: ::: CONCLUSION ::: Adverse drug events were common and often preventable; serious ADEs were more likely to be preventable. Most resulted from errors at the ordering stage, but many also occurred at the administration stage. Prevention strategies should target both stages of the drug delivery process.
---
paper_title: Risk factors for adverse drug events among nursing home residents.
paper_content:
Background: In a prospective study of nursing home residents, we found adverse drug events (ADEs) to be common, serious, and often preventable. To direct prevention efforts at high-risk residents, information is needed on resident-level risk factors. Methods: Case-control study nested within a prospective study of ADEs among residents in 18 nursing homes. For each ADE, we randomly selected a control from the same home. Data were abstracted from medical records on functional status, medical conditions, and medication use.
---
paper_title: A research agenda for bridging the 'quality chasm.'.
paper_content:
Realizing the vision of the IOM's landmark report, Crossing the Quality Chasm, will require new knowledge to support new policy and management. This paper lays out a research agenda that must be pursued if the health care system is to bridge the quality chasm. Based on a consensus process involving leading health care researchers and authorities, the paper highlights knowledge gaps and research directions in five areas identified by the Quality Chasm report as critical to its goals of building organizational supports for change; applying evidence to health care delivery; developing information technology; aligning payment policies with quality improvement; and preparing the workforce.
---
paper_title: Characteristics and Consequences of Drug Allergy Alert Overrides in a Computerized Physician Order Entry System
paper_content:
OBJECTIVE ::: The aim of this study was to determine characteristics of drug allergy alert overrides, assess how often they lead to preventable adverse drug events (ADEs), and suggest methods for improving the allergy-alerting system. ::: ::: ::: DESIGN ::: Chart review was performed on a stratified random subset of all allergy alerts occurring during a 3-month period (August through October 2002) at a large academic hospital. ::: ::: ::: MEASUREMENTS ::: Factors that were measured were drug/allergy combinations that triggered alerts, frequency of specific override reasons, characteristics of ADEs, and completeness of allergy documentation. ::: ::: ::: RESULTS ::: A total of 6,182 (80%) of 7,761 alerts were overridden in 1,150 patients. In this sample, only 10% of alerts were triggered by an exact match between the drug ordered and allergy listed. Physicians' most common reasons for overriding alerts were "Aware/Will monitor" (55%), "Patient does not have this allergy/tolerates" (33%), and "Patient taking already" (10%). In a stratified random subset of 320 patients (28% of 1,150) on chart review, 19 (6%) experienced ADEs attributed to the overridden drug; of these, 9 (47%) were serious. None of the ADEs was considered preventable, because the overrides were deemed clinically justifiable. The degree of completeness of patients' allergy lists was highly variable and generally low in both paper charts and the CPOE system. ::: ::: ::: CONCLUSION ::: Overrides of drug-allergy alerts were common and about 1 in 20 resulted in ADEs, but all of the overrides resulting in ADEs appeared clinically justifiable. The high rate of alert overrides was attributable to frequent nonexact match alerts and infrequent updating of allergy lists. Based on these findings, we have made specific recommendations for increasing the specificity of alerting and thereby improving the clinical utility of the drug allergy alerting system.
---
paper_title: Effective drug-allergy checking: methodological and operational issues
paper_content:
Adverse drug events cause a large number of injuries, and adverse events caused by medications administered in the face of known allergies represent an important preventable cause of patient harm. Computerized systems can effectively prevent reactions due to known allergies, but building an effective allergy prevention feature is challenging and presents many interesting informatics issues that have both methodological and operational implications. In this paper, we present the experiences from one large delivery system in delivering allergy-related decision support, discuss some of the different approaches that we have used, and then propose a future approach. We also discuss the methodological, behavioral, and operational issues that have arisen which have a major impact on success. Key factors in drug-allergy checking include storing patient allergy data in a single common repository, representing allergy data using suitable terminologies and creating groups of allergies for inferencing purposes, being judicious about which allergy warnings to display, conveying the reaction that the patient has experienced when exposed to the drug to inform the provider of the importance of the warning, and perhaps most important, implementing strategies to optimize the likelihood that allergy information will be entered.
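Purely as an illustration of the grouping idea discussed above (not the authors' system, and not clinical guidance), the sketch below matches an ordered drug against documented allergies via simplified cross-sensitivity classes and echoes the documented reaction back to the prescriber; all class assignments here are hypothetical examples.

```python
# Simplified cross-sensitivity groups (illustrative only, not clinical guidance).
DRUG_CLASS = {
    "amoxicillin": "penicillins",
    "piperacillin": "penicillins",
    "cephalexin": "cephalosporins",
    "ibuprofen": "nsaids",
}
ALLERGEN_CLASS = {
    "penicillin": "penicillins",
    "aspirin": "nsaids",
}

def allergy_alerts(ordered_drug, patient_allergies):
    """Return warnings when the ordered drug's class matches the class of a
    documented allergy; the documented reaction is echoed back so the
    prescriber can judge how important the warning is."""
    drug_class = DRUG_CLASS.get(ordered_drug)
    alerts = []
    for allergen, reaction in patient_allergies:
        if drug_class and ALLERGEN_CLASS.get(allergen) == drug_class:
            alerts.append(f"{ordered_drug}: patient reported {reaction} with "
                          f"{allergen} (class: {drug_class})")
    return alerts

if __name__ == "__main__":
    allergies = [("penicillin", "hives"), ("aspirin", "wheezing")]
    print(allergy_alerts("amoxicillin", allergies))
    print(allergy_alerts("cephalexin", allergies))   # no class match in this toy table
```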
---
paper_title: Improving allergy alerting in a computerized physician order entry system.
paper_content:
Abstract ::: Computerized physician order entry has been shown to reduce the frequency of serious medication errors. Decision support tools such as alerting functions for patient medication allergy are a key part of these applications. However, optimal performance requires iterative refinement. As systems become increasingly complex, mechanisms to monitor their performance become increasingly critical. We analyzed trend data obtained over a five-year period that showed decreasing compliance to allergy alert functions within computerized order entry. Many medication-allergy pairs were being consistently overridden. Renewal policies affecting reordering narcotics also contributed heavily to this trend. Each factor revealed a system-wide trend that could result in suggestions for policy or software change. Monitoring trends such as these is very important to maintain software correctness and ensure user trust in alerting systems, so users remain responsive to computerized alerts.
---
paper_title: Creating an enterprise-wide allergy repository at Partners HealthCare System.
paper_content:
A significant fraction of medication errors and preventable adverse drug events are related to drug-allergy interactions (DAIs). Computerized prescribing can help prevent DAIs, but an accurate record of the patient's allergies is required. At Partners HealthCare System in Boston, the patient's allergy list is distributed across several applications including computer physician order entry (CPOE), the outpatient medical record, pharmacy applications, and nurse charting applications. Currently, each application has access only to its own allergy data. This paper presents details of a project designed to integrate the various allergy repositories at Partners. We present data documenting that patients have allergy data stored in multiple repositories. We give detail about issues we are encountering such as which applications should participate in the repository, whether "NKA" or "NKDA" should be used to document known absence of allergies, and which personnel should be allowed to enter allergies. The issues described in this paper may well be faced by other initiatives intended to create comprehensive allergy repositories.
---
paper_title: Using Commercial Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations
paper_content:
The quality and safety of health care leaves much to be desired.1,2 Automated clinical decision support (CDS) tools embedded in clinical information systems (CISs) such as computer provider order entry (CPOE) and electronic health records (EHR) applications have the potential to improve care and should be part of any comprehensive approach to improve quality.3,4,5,6 Medication prescribing is a component of health care with well documented quality and safety problems that can be improved by CDS.7,8,9 ::: ::: Medication-related CDS requires that pharmaceutical knowledge be represented in a computable, explicit and unambiguous form. Creating an automated representation of medical knowledge often is the most time consuming step in the development of a CDS system and is known as the “knowledge acquisition bottleneck.”10 For a time, it was hoped that the move toward explicit guidelines in medicine would decrease the knowledge acquisition effort,11 but that has not happened.12 Experiments on data sharing from over a decade ago have not progressed.13 As a result, just a few organizations, primarily academic medical centers, are creating rules and benefiting from CDS,14 but most health care organizations do not have the expertise or resources to create such knowledge bases themselves. ::: ::: One potential solution to the problem of access to automated medication-related knowledge is the set of commercial vendors that supply medication-related knowledge bases for pharmacy and prescribing applications. These vendors' products contain such knowledge as drug-drug and drug-disease interactions, minimum and maximum dosing suggestions, drug-allergy cross-sensitivity groupings, and groupings of medications by therapeutic class. Developers of CISs (either vendor-based or “homegrown”), with appropriate licensing, can incorporate commercial knowledge bases into their products. The knowledge base vendor receives a licensing fee for each CIS implementation and can amortize the …
---
paper_title: Effects of computerized physician order entry on prescribing practices.
paper_content:
BACKGROUND ::: Computerized order entry systems have the potential to prevent errors, to improve quality of care, and to reduce costs by providing feedback and suggestions to the physician as each order is entered. This study assesses the impact of an inpatient computerized physician order entry system on prescribing practices. ::: ::: ::: METHODS ::: A time series analysis was performed at an urban academic medical center at which all adult inpatient orders are entered through a computerized system. When physicians enter drug orders, the computer displays drug use guidelines, offers relevant alternatives, and suggests appropriate doses and frequencies. ::: ::: ::: RESULT ::: For medication selection, use of a computerized guideline resulted in a change in use of the recommended drug (nizatidine) from 15.6% of all histamine(2)-blocker orders to 81.3% (P<.001). Implementation of dose selection menus resulted in a decrease in the SD of drug doses by 11% (P<.001). The proportion of doses that exceeded the recommended maximum decreased from 2.1% before order entry to 0.6% afterward (P<.001). Display of a recommended frequency for ondansetron hydrochloride administration resulted in an increase in the use of the approved frequency from 6% of all ondansetron orders to 75% (P<.001). The use of subcutaneous heparin sodium to prevent thrombosis in patients at bed rest increased from 24% to 47% when the computer suggested this option (P<.001). All these changes persisted at 1- and 2-year follow-up analyses. ::: ::: ::: CONCLUSION ::: Computerized physician order entry is a powerful and effective tool for improving physician prescribing practices.
---
paper_title: The Epidemiology of Prescribing Errors The Potential Impact of Computerized Prescriber Order Entry
paper_content:
BACKGROUND ::: Adverse drug events (ADEs) are the most common cause of injury to hospitalized patients and are often preventable. Medication errors resulting in preventable ADEs most commonly occur at the prescribing stage. ::: ::: ::: OBJECTIVES ::: To describe the epidemiology of medication prescribing errors averted by pharmacists and to assess the likelihood that these errors would be prevented by implementing computerized prescriber order entry (CPOE). ::: ::: ::: METHODS ::: At a 700-bed academic medical center in Chicago, Ill, clinical staff pharmacists saved all orders that contained a prescribing error for a week in early 2002. Pharmacist investigators subsequently classified drug class, error type, proximal cause, phase of hospitalization, and potential for patient harm and rated the likelihood that CPOE would have prevented the prescribing error. ::: ::: ::: RESULTS ::: A total of 1111 prescribing errors were identified (62.4 errors per 1000 medication orders), most occurring on admission (64%). Of these, 30.8% were rated clinically significant and were most frequently related to anti-infective medication orders, incorrect dose, and medication knowledge deficiency. Of all verified prescribing errors, 64.4% were rated as likely to be prevented with CPOE (including 43% of the potentially harmful errors), 13.2% unlikely to be prevented with CPOE, and 22.4% possibly prevented with CPOE depending on specific CPOE system characteristics. ::: ::: ::: CONCLUSIONS ::: Prescribing errors are common in the hospital setting. While CPOE systems could improve practitioner prescribing, design and implementation of a CPOE system should focus on errors with the greatest potential for patient harm. Pharmacist involvement, in addition to a CPOE system with advanced clinical decision support, is vital for achieving maximum medication safety.
---
paper_title: Guided Prescription of Psychotropic Medications for Geriatric Inpatients
paper_content:
Background Inappropriate use or excessive dosing of psychotropic medications in the elderly is common and can lead to a variety of adverse drug events including falls, oversedation, and cognitive impairment. Methods We developed a database of psychotropic medication dosing and selection guidelines for elderly inpatients. We displayed these recommendations to physicians through a computerized order entry system at a tertiary care academic hospital. The system was activated for 2 of 4 six-week study periods in an off-on-off-on pattern. Main outcome measures were agreement with the recommended daily dose for the initial drug order, incidence of dosing at least 10-fold greater than the recommended daily dose, prescription of nonrecommended drugs, inpatient falls, altered mental status as measured by a brief nursing assessment, and hospital length of stay. Results A total of 7456 initial orders for psychotropic medications were prescribed for 3718 hospitalized elderly patients with a mean ± SD age of 74.7 ± 6.7 years. The intervention increased the prescription of the recommended daily dose (29% vs 19%; P P P P = .001). No effect on hospital length of stay or days of altered mental status was found. Conclusion A geriatric decision support system for psychotropic medications increased the prescription of recommended doses, reduced the prescription of nonrecommended drugs, and was associated with fewer inpatient falls.
---
paper_title: Factors related to errors in medication prescribing.
paper_content:
Objective. —To quantify the type and frequency of identifiable factors associated with medication prescribing errors. Design and Setting. —Systematic evaluation of every third prescribing error detected and averted by pharmacists in a 631-bed tertiary care teaching hospital between July 1,1994, and June 30, 1995. Each error was concurrently evaluated for the potential to result in adverse patient consequences. Each error was retrospectively evaluated by a physician and 2 pharmacists and a factor likely related to the error was identified. Participants. —All physicians prescribing medications during the study period and all staff pharmacists involved in the routine review of medication orders. Main Outcome Measures. —Frequency of association of factors likely related to medication errors in general and specific to medication classes and prescribing services (needed for medical, pediatric, obstetric-gynecologic, surgical, or emergency department patients); and potential consequences of errors for negative patient outcomes. Results. —A total of 2103 errors thought to have potential clinical importance were detected during the 1-year study period. The overall rate of errors was 3.99 errors per 1000 medication orders, and the error rate varied among medication classes and prescribing services. A total of 696 errors met study criteria (ie, errors with the potential for adverse patient effects) and were evaluated for a likely related factor. The most common specific factors associated with errors were decline in renal or hepatic function requiring alteration of drug therapy (97 errors, 13.9%), patient history of allergy to the same medication class (84 errors, 12.1%), using the wrong drug name, dosage form, or abbreviation (total of 79 errors, 11.4%, for both brand name and generic name orders), incorrect dosage calculations (77 errors, 11.1%), and atypical or unusual and critical dosage frequency considerations (75 errors, 10.8%). The most common groups of factors associated with errors were those related to knowledge and the application of knowledge regarding drug therapy (209 errors, 30%); knowledge and use of knowledge regarding patient factors that affect drug therapy (203 errors, 29.2%); use of calculations, decimal points, or unit and rate expression factors (122 errors, 17.5%); and nomenclature factors (incorrect drug name, dosage form, or abbreviation) (93 errors, 13.4%). Conclusions. —Several easily identified factors are associated with a large proportion of medication prescribing errors. By improving the focus of organizational, technological, and risk management educational and training efforts using the factors commonly associated with prescribing errors, risk to patients from adverse drug events should be reduced.
---
paper_title: Outpatient prescribing errors and the impact of computerized prescribing
paper_content:
BACKGROUND ::: Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. ::: ::: ::: OBJECTIVE ::: To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. ::: ::: ::: DESIGN ::: Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. ::: ::: ::: PARTICIPANTS ::: Outpatients over age 18 who received a prescription from 24 participating physicians. ::: ::: ::: RESULTS ::: We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. ::: ::: ::: CONCLUSIONS ::: Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors.
---
paper_title: Large Errors in the Dosing of Medications for Children
paper_content:
To the Editor: Dosing errors are among the most common types of medication errors.1–3 Errors by a factor of 10 (the administration of a dose 10 times or 1/10 as high as appropriate) are of particul...
---
paper_title: Impact of a computerized alert during physician order entry on medication dosing in patients with renal impairment.
paper_content:
Computerized assistance to clinicians during physician order entry can provide protection against medical errors. However, computer systems that provide too much assistance may adversely affect training of medical students and residents. Trainees may rely on the computer to automatically perform complex calculations and create appropriate orders and are thereby deprived of an important educational exercise. An alternative strategy is to provide a critique at the completion of an order, requiring the trainee to enter the entire order but displaying an alert if an error is made. While this approach preserves the educational components of order-writing, the potential for errors exists if the computerized critique does not induce clinicians to correct the order. The goal of this study was to determine (a) the frequency with which errors are made by trainees in an environment in which renal dosing adjustment calculations for antimicrobials are done by the system after the user has entered an order, and (b) the frequency with which prompts to clinicians regarding these errors lead to correction of those orders.
---
paper_title: Medication error prevention by clinical pharmacists in two children's hospitals.
paper_content:
The purpose of this study was to record prospectively the frequency of and potential harm caused by errant medication orders at two large pediatric hospitals. The objective of the study was to assess the impact of pharmacist intervention in preventing potential harm. The study was conducted during a 6-month period. A total of 281 and 198 errors were detected at the institutions. The overall error rates for the two hospitals were 1.35 and 1.77 per 100-patient days, and 4.9 and 4.5 per 1,000 medication orders, respectively. Pediatric patients aged 2 years and less and pediatric intensive care unit patients received the greatest proportion of errant orders. Neonatal patients received the lowest rate of errant orders. The most common type of error was incorrect dosage, and the most prevalent type of error was overdosage. Antibiotics was the class of drugs for which errant orders were most common. Orders for theophylline, analgesics, and fluid and electrolytes, including hyperalimentation, were also frequently in error. In general, the error rate was greatest among physicians with the least training, but no physician group was error free. Involving pharmacists in reviewing drug orders significantly reduced the potential harm resulting from errant medication orders.
---
paper_title: Effects of computerized physician order entry on prescribing practices.
paper_content:
BACKGROUND: Computerized order entry systems have the potential to prevent errors, to improve quality of care, and to reduce costs by providing feedback and suggestions to the physician as each order is entered. This study assesses the impact of an inpatient computerized physician order entry system on prescribing practices. METHODS: A time series analysis was performed at an urban academic medical center at which all adult inpatient orders are entered through a computerized system. When physicians enter drug orders, the computer displays drug use guidelines, offers relevant alternatives, and suggests appropriate doses and frequencies. RESULTS: For medication selection, use of a computerized guideline resulted in a change in use of the recommended drug (nizatidine) from 15.6% of all histamine(2)-blocker orders to 81.3% (P<.001). Implementation of dose selection menus resulted in a decrease in the SD of drug doses by 11% (P<.001). The proportion of doses that exceeded the recommended maximum decreased from 2.1% before order entry to 0.6% afterward (P<.001). Display of a recommended frequency for ondansetron hydrochloride administration resulted in an increase in the use of the approved frequency from 6% of all ondansetron orders to 75% (P<.001). The use of subcutaneous heparin sodium to prevent thrombosis in patients at bed rest increased from 24% to 47% when the computer suggested this option (P<.001). All these changes persisted at 1- and 2-year follow-up analyses. CONCLUSION: Computerized physician order entry is a powerful and effective tool for improving physician prescribing practices.
---
paper_title: Guided Prescription of Psychotropic Medications for Geriatric Inpatients
paper_content:
Background: Inappropriate use or excessive dosing of psychotropic medications in the elderly is common and can lead to a variety of adverse drug events including falls, oversedation, and cognitive impairment. Methods: We developed a database of psychotropic medication dosing and selection guidelines for elderly inpatients. We displayed these recommendations to physicians through a computerized order entry system at a tertiary care academic hospital. The system was activated for 2 of 4 six-week study periods in an off-on-off-on pattern. Main outcome measures were agreement with the recommended daily dose for the initial drug order, incidence of dosing at least 10-fold greater than the recommended daily dose, prescription of nonrecommended drugs, inpatient falls, altered mental status as measured by a brief nursing assessment, and hospital length of stay. Results: A total of 7456 initial orders for psychotropic medications were prescribed for 3718 hospitalized elderly patients with a mean ± SD age of 74.7 ± 6.7 years. The intervention increased the prescription of the recommended daily dose (29% vs 19%), reduced the prescription of nonrecommended drugs, and was associated with fewer inpatient falls (P = .001). No effect on hospital length of stay or days of altered mental status was found. Conclusion: A geriatric decision support system for psychotropic medications increased the prescription of recommended doses, reduced the prescription of nonrecommended drugs, and was associated with fewer inpatient falls.
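A minimal sketch of the kind of dose screening described (compare the ordered daily dose with a geriatric recommended dose and flag 10-fold excesses); the recommended doses below are invented placeholders, not the study's guideline database.

```python
# Illustrative geriatric psychotropic dose screen; recommended doses are placeholders.
GERIATRIC_RECOMMENDED_DAILY_MG = {
    "lorazepam": 1.0,
    "haloperidol": 1.0,
    "trazodone": 50.0,
}

def screen_geriatric_dose(drug, ordered_daily_mg):
    """Classify an initial order against a geriatric recommended daily dose."""
    recommended = GERIATRIC_RECOMMENDED_DAILY_MG.get(drug)
    if recommended is None:
        return "no guideline"
    if ordered_daily_mg >= 10 * recommended:
        return f"10-fold overdose alert (recommended {recommended} mg/day)"
    if ordered_daily_mg > recommended:
        return f"above recommended {recommended} mg/day"
    return "within recommended dose"

print(screen_geriatric_dose("lorazepam", 2.0))    # above recommended
print(screen_geriatric_dose("haloperidol", 10.0)) # 10-fold overdose alert
```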
---
paper_title: Conversion from intravenous to oral medications: assessment of a computerized intervention for hospitalized patients.
paper_content:
BACKGROUND: Many hospitalized patients continue to receive intravenous medications longer than necessary. Earlier conversion from the intravenous to the oral route could increase patient safety and comfort, reduce costs, and facilitate earlier discharge from the hospital without compromising clinical care. We examined the effect of a computer-based intervention to prompt physicians to switch appropriate patients from intravenous to oral medications. METHODS: This study was performed at Brigham and Women's Hospital, an academic tertiary care hospital at which all medications are ordered online. We targeted 5 medications with equal oral and intravenous bioavailability: fluconazole, levofloxacin, metronidazole, ranitidine, and amiodarone. We used the hospital's computerized order entry system to prompt physicians to convert appropriate intravenous medications to the oral route. We measured the total use of the targeted medications via each route in the 4 months before and after the implementation of the intervention. We also measured the rate at which physicians responded to the intervention when prompted. RESULTS: The average intravenous defined daily dose declined by 11.1% (P=.002) from the preintervention to the postintervention period, while the average oral defined daily dose increased by 3.7% (P=.002). Length of stay, case-mix index, and total drug use at the hospital increased during the study period. The average total monthly use of the intravenous preparation of all of the targeted medications declined in the 4 months after the intervention began, compared with the 4 months before. In 35.6% of 1045 orders for which a prompt was generated, the physician either made a conversion from the intravenous to the oral version or canceled the order altogether. CONCLUSIONS: Computer-generated reminders can produce a substantial reduction in excessive use of targeted intravenous medications. As online prescribing becomes more common, this approach can be used to reduce excess use of intravenous medications, with potential benefits in patient comfort, safety, and cost.
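The prompting logic amounts to a simple screen over active intravenous orders for the five targeted agents; the eligibility criteria in this sketch (tolerating oral intake, not NPO) are assumptions for illustration, not the study's exact rules.

```python
# Sketch of an IV-to-oral conversion prompt for drugs with equal oral bioavailability.
TARGETED_DRUGS = {"fluconazole", "levofloxacin", "metronidazole", "ranitidine", "amiodarone"}

def iv_to_po_prompts(active_orders, patient):
    """Yield a reminder for each active IV order of a targeted drug in an eligible patient.

    active_orders: iterable of dicts like {"drug": "levofloxacin", "route": "IV"}.
    patient: dict with assumed eligibility flags, e.g. {"taking_oral_meds": True, "npo": False}.
    """
    eligible = patient.get("taking_oral_meds", False) and not patient.get("npo", True)
    for order in active_orders:
        if order["route"] == "IV" and order["drug"] in TARGETED_DRUGS and eligible:
            yield f"Consider converting {order['drug']} from IV to the oral route."

orders = [{"drug": "levofloxacin", "route": "IV"}, {"drug": "vancomycin", "route": "IV"}]
print(list(iv_to_po_prompts(orders, {"taking_oral_meds": True, "npo": False})))
```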
---
paper_title: How to Design Computerized Alerts to Ensure Safe Prescribing Practices
paper_content:
Background: Medication errors and preventable adverse drug events are common, and about half of medication errors occur during medication ordering. This study was designed to develop and evaluate medication safety alerts and processes for educating prescribers about the alerts. Methods: At Kaiser Permanente Northwest, a group-model health maintenance organization where prescribers have used computerized order entry since 1996, qualitative interviews were conducted with 20 primary care prescribers. Results: Prescribers considered alerts helpful for providing prescribing and preventive health information. More than half the interviewees stated that it would be unwise to let clinicians control or avoid safety alerts. Common frustrations were (1) being delayed by the alert, (2) having difficulty interpreting the alert, and (3) receiving the same alert repeatedly. Most prescribers preferred small-group educational sessions tied to existing meetings and having local physicians conduct education sessions. Discussion: The findings were used to design a strategy for introducing and promoting the interventions, modifying the alert text and tools, and focusing the education on how clinicians could use the alerts effectively.
---
paper_title: Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors
paper_content:
Context: Hospital computerized physician order entry (CPOE) systems are widely regarded as the technical solution to medication ordering errors, the largest identified source of preventable hospital medical error. Published studies report that CPOE reduces medication errors up to 81%. Few researchers, however, have focused on the existence or types of medication errors facilitated by CPOE. Objective: To identify and quantify the role of CPOE in facilitating prescription error risks. Design, Setting, and Participants: We performed a qualitative and quantitative study of house staff interaction with a CPOE system at a tertiary-care teaching hospital (2002-2004). We surveyed house staff (N = 261; 88% of CPOE users); conducted 5 focus groups and 32 intensive one-on-one interviews with house staff, information technology leaders, pharmacy leaders, attending physicians, and nurses; shadowed house staff and nurses; and observed them using CPOE. Participants included house staff, nurses, and hospital leaders. Main Outcome Measure: Examples of medication errors caused or exacerbated by the CPOE system. Results: We found that a widely used CPOE system facilitated 22 types of medication error risks. Examples include fragmented CPOE displays that prevent a coherent view of patients' medications, pharmacy inventory displays mistaken for dosage guidelines, ignored antibiotic renewal notices placed on paper charts rather than in the CPOE system, separation of functions that facilitate double dosing and incompatible orders, and inflexible ordering formats generating wrong orders. Three quarters of the house staff reported observing each of these error risks, indicating that they occur weekly or more often. Use of multiple qualitative and survey methods identified and quantified error risks not previously considered, offering many opportunities for error reduction. Conclusions: In this study, we found that a leading CPOE system often facilitated medication error risks, with many reported to occur frequently. As CPOE systems are implemented, clinicians and hospitals must attend to errors that these systems cause in addition to errors that they prevent.
---
paper_title: Reconciliation of discrepancies in medication histories and admission orders of newly hospitalized patients
paper_content:
A 1999 Institute of Medicine report received national attention by highlighting system vulnerabilities within health care and indicating that medication errors are a leading cause of morbidity and mortality. One area of concern was the increased number of errors occurring in the prescribing phase of the medication-use process due to prescribers' lack of essential drug knowledge and patient information at the time of ordering. Pharmacists' participation in medical rounds has demonstrated a reduction in medication errors in the ordering stage. However, at most hospitals, pharmacists are not directly involved in obtaining medication histories, despite the findings of one study showing that over 70% of drug-related problems were recognized only through a patient interview and another study reporting a 51% reduction in medication errors when pharmacists were involved in obtaining medication histories. Medication errors and patient harm can result from inaccurate or incomplete histories that are subsequently used to generate medication orders.
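Reconciliation of the kind described (comparing a medication history obtained at admission against admission orders and surfacing discrepancies) can be sketched as a set comparison; the drug-name normalization here is deliberately simplistic and the example lists are hypothetical.

```python
# Sketch of admission medication reconciliation: flag omissions, additions, and dose changes.
def reconcile(home_meds, admission_orders):
    """home_meds and admission_orders: dicts mapping drug name -> daily dose string."""
    home = {name.lower(): dose for name, dose in home_meds.items()}
    admitted = {name.lower(): dose for name, dose in admission_orders.items()}
    discrepancies = []
    for drug, dose in home.items():
        if drug not in admitted:
            discrepancies.append(("omission", drug, dose, None))
        elif admitted[drug] != dose:
            discrepancies.append(("dose change", drug, dose, admitted[drug]))
    for drug, dose in admitted.items():
        if drug not in home:
            discrepancies.append(("new medication", drug, None, dose))
    return discrepancies

home = {"Metoprolol": "50 mg twice daily", "Warfarin": "5 mg daily"}
orders = {"Metoprolol": "25 mg twice daily", "Pantoprazole": "40 mg daily"}
for kind, drug, before, after in reconcile(home, orders):
    print(kind, drug, before, "->", after)
```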
---
paper_title: Improving Acceptance of Computerized Prescribing Alerts in Ambulatory Care
paper_content:
Computerized drug prescribing alerts can improve patient safety, but are often overridden because of poor specificity and alert overload. Our objective was to improve clinician acceptance of drug alerts by designing a selective set of drug alerts for the ambulatory care setting and minimizing workflow disruptions by designating only critical to high-severity alerts to be interruptive to clinician workflow. The alerts were presented to clinicians using computerized prescribing within an electronic medical record in 31 Boston-area practices. There were 18,115 drug alerts generated during our six-month study period. Of these, 12,933 (71%) were noninterruptive and 5,182 (29%) interruptive. Of the 5,182 interruptive alerts, 67% were accepted. Reasons for overrides varied for each drug alert category and provided potentially useful information for future alert improvement. These data suggest that it is possible to design computerized prescribing decision support with high rates of alert recommendation acceptance by clinicians.
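The design described, routing only critical- and high-severity warnings through interruptive pop-ups and presenting the rest non-interruptively, can be sketched as a severity-based dispatch; the severity labels and example alerts below are assumptions for illustration.

```python
# Sketch of severity-tiered alert delivery: interrupt only for the most severe warnings.
INTERRUPTIVE_SEVERITIES = {"critical", "high"}

def route_alert(alert):
    """alert: dict with 'severity' and 'message'. Returns the delivery mode."""
    if alert["severity"] in INTERRUPTIVE_SEVERITIES:
        return "interruptive"   # modal dialog the prescriber must acknowledge or override
    return "noninterruptive"    # passive display alongside the prescription screen

alerts = [
    {"severity": "critical", "message": "drug-allergy interaction"},
    {"severity": "moderate", "message": "duplicate therapeutic class"},
]
for a in alerts:
    print(a["message"], "->", route_alert(a))
```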
---
paper_title: Medication-prescribing errors in a teaching hospital. A 9-year experience.
paper_content:
BACKGROUND: Improved understanding of medication prescribing errors should be useful in the design of error prevention strategies. OBJECTIVE: To report analysis of a 9-year experience with a systematic program of detecting, recording, and evaluating medication-prescribing errors in a teaching hospital. METHODS: All medication-prescribing errors with potential for adverse patient outcome detected and averted by staff pharmacists from January 1, 1987, through December 31, 1995, were systematically recorded and analyzed. Errors were evaluated by type of error, medication class involved, prescribing service, potential severity, time of day, and month. Data were analyzed to determine changes in medication-prescribing error frequency and characteristics occurring during the 9-year study period. RESULTS: A total of 11,186 confirmed medication-prescribing errors with potential for adverse patient consequences were detected and averted during the study period. The annual number of errors detected increased from 522 in the index year 1987 to 2115 in 1995. The rates of errors occurring per order written, per admission, and per patient-day all increased significantly during the study duration (P < .001). Increased error rates were correlated with the number of admissions (P < .001). Antimicrobials, cardiovascular agents, gastrointestinal agents, and narcotics were the most common medication classes involved in errors. The most common types of errors were dosing errors, prescribing medications to which the patient was allergic, and prescribing inappropriate dosage forms. CONCLUSIONS: The results of this study suggest there may exist a progressively increasing risk of adverse drug events for hospitalized patients. The increased rate of errors is possibly associated with increases in the intensity of medical care and use of drug therapy. Limited changes in the characteristics of prescribing errors occurred, as similar types of errors were found to be repeated with increasing frequency. New errors were encountered as new drug therapies were introduced. Health care practitioners and health care systems must incorporate adequate error reduction, prevention, and detection mechanisms into the routine provision of care.
---
paper_title: Medication Prescribing Errors in a Teaching Hospital
paper_content:
A study of prescribing errors committed by physicians that occurred in a tertiary-care teaching hospital is reported. From a total of 289,411 medication orders written during the 1-year study period, 905 prescribing errors were detected and averted, of which 522 (57.7%) were rated as having potential for adverse consequences. The overall detected error rate was 3.13 errors for each 1000 orders written and a rate of 1.81 significant errors per 1000 orders. The error rate (4.01 per 1000 orders) was greatest between 12 PM and 3:59 PM. First-year postgraduate residents were found to have a higher error rate (4.25 per 1000 orders) than other prescriber classes, and obstetrics/gynecology services (3.54 per 1000 orders) and surgery/anesthesia services (3.42 per 1000 orders) had greater error rates than other services. The study results demonstrate the significant risk to patients from medication prescribing errors. Educational, operational, and risk-management activities should include efforts directed at reducing the risk to patients from prescribing errors. (JAMA. 1990;263:2329-2334)
---
paper_title: Impact of Computerized Prescriber Order Entry on Medication Errors at an Acute Tertiary Care Hospital
paper_content:
The authors analyzed medication errors documented in a hospital's database of clinical interventions as a continuous quality improvement activity. They compared the number of errors reported prior to and after computerized prescriber order entry (CPOE) was implemented in the hospital. Results indicated that in the first 12 months of CPOE, overall medication errors were reduced by more than 40%, incomplete orders declined by more than 70%, and incorrect orders decreased by at least 45%. Illegible orders were virtually eliminated but the level of medication errors categorized by drug therapy problems remained significantly unchanged. The study underscores the positive impact of CPOE on medication safety and reemphasizes the need for proactive clinical interventions by pharmacists.
---
paper_title: Incidence of adverse drug events and potential adverse drug events. Implications for prevention. ADE Prevention Study Group.
paper_content:
OBJECTIVES: To assess incidence and preventability of adverse drug events (ADEs) and potential ADEs, and to analyze preventable events to develop prevention strategies. DESIGN: Prospective cohort study. PARTICIPANTS: All 4031 adult admissions to a stratified random sample of 11 medical and surgical units in two tertiary care hospitals over a 6-month period. Units included two medical and three surgical intensive care units and four medical and two surgical general care units. MAIN OUTCOME MEASURES: Adverse drug events and potential ADEs. METHODS: Incidents were detected by stimulated self-report by nurses and pharmacists and by daily review of all charts by nurse investigators. Incidents were subsequently classified by two independent reviewers as to whether they represented ADEs or potential ADEs and as to severity and preventability. RESULTS: Over 6 months, 247 ADEs and 194 potential ADEs were identified. Extrapolated event rates were 6.5 ADEs and 5.5 potential ADEs per 100 nonobstetrical admissions, for mean numbers per hospital per year of approximately 1900 ADEs and 1600 potential ADEs. Of all ADEs, 1% were fatal (none preventable), 12% life-threatening, 30% serious, and 57% significant. Twenty-eight percent were judged preventable. Of the life-threatening and serious ADEs, 42% were preventable, compared with 18% of significant ADEs. Errors resulting in preventable ADEs occurred most often at the stages of ordering (56%) and administration (34%); transcription (6%) and dispensing errors (4%) were less common. Errors were much more likely to be intercepted if the error occurred earlier in the process: 48% at the ordering stage vs 0% at the administration stage. CONCLUSION: Adverse drug events were common and often preventable; serious ADEs were more likely to be preventable. Most resulted from errors at the ordering stage, but many also occurred at the administration stage. Prevention strategies should target both stages of the drug delivery process.
---
paper_title: Accuracy of Data in Computer-based Patient Records
paper_content:
Data in computer-based patient records (CPRs) have many uses beyond their primary role in patient care, including research and health-system management. Although the accuracy of CPR data directly affects these applications, there has been only sporadic interest in, and no previous review of, data accuracy in CPRs. This paper reviews the published studies of data accuracy in CPRs. These studies report highly variable levels of accuracy. This variability stems from differences in study design, in types of data studied, and in the CPRs themselves. These differences confound interpretation of this literature. We conclude that our knowledge of data accuracy in CPRs is not commensurate with its importance and further studies are needed. We propose methodological guidelines for studying accuracy that address shortcomings of the current literature. As CPR data are used increasingly for research, methods used in research databases to continuously monitor and improve accuracy should be applied to CPRs.
---
paper_title: Clinical Relevance of Automated Drug Alerts From the Perspective of Medical Providers
paper_content:
The authors used a real-time survey instrument and subsequent focus group among primary care clinicians at a large healthcare system to assess usefulness of automated drug alerts. Of 108 alerts encountered, 0.9% (n = 1) represented critical alerts, and 16% (n = 17) were significant drug interaction alerts. Sixty-one percent (n = 66) involved duplication of a medication or medication class. The rest (n = 24) involved topical medications, inhalers, or vaccines. Of the 84 potentially relevant alerts, providers classified 11% (9/84), or about 1 in 9, as useful. Drug interaction alerts were more often deemed useful than drug duplication alerts (44.4% versus 1.5%, P< .001). Focus group participants generally echoed these results when ranking the relevance of 15 selected alerts, although there was wide variance in ratings for individual alerts. Hence, a "smarter" system that utilizes a set of mandatory alerts while allowing providers to tailor use of other automated warnings may improve clinical relevance of drug alerts.
---
paper_title: Physicians' decisions to override computerized drug alerts in primary care.
paper_content:
BACKGROUND: Although computerized physician order entry reduces medication errors among inpatients, little is known about the use of this system in primary care. METHODS: We calculated the override rate among 3481 consecutive alerts generated at 5 adult primary care practices that use a common computerized physician order entry system for prescription writing. For detailed review, we selected a random sample of 67 alerts in which physicians did not prescribe an alerted medication and 122 alerts that resulted in a written prescription. We identified factors associated with the physicians' decisions to override a medication alert, and determined whether an adverse drug event (ADE) occurred. RESULTS: Physicians overrode 91.2% of drug allergy and 89.4% of high-severity drug interaction alerts. In the multivariable analysis using the medical chart review sample (n = 189), physicians were less likely to prescribe an alerted medication if the prescriber was a house officer (odds ratio [OR], 0.26; 95% confidence interval [CI], 0.08-0.84) and if the patient had many drug allergies (OR, 0.70; 95% CI, 0.53-0.93). They were more likely to override alerts for renewals compared with new prescriptions (OR, 17.74; 95% CI, 5.60-56.18). We found no ADEs in cases where physicians observed the alert and 3 ADEs among patients with alert overrides, a nonsignificant difference (P=.55). Physician reviewers judged that 36.5% of the alerts were inappropriate. CONCLUSIONS: Few physicians changed their prescription in response to a drug allergy or interaction alert, and there were few ADEs, suggesting that the threshold for alerting was set too low. Computerized physician order entry systems should suppress alerts for renewals of medication combinations that patients currently tolerate.
---
paper_title: Characteristics and override rates of order checks in a practitioner order entry system.
paper_content:
Order checks are important error prevention tools when used in conjunction with practitioner order entry systems. We studied characteristics of order checks generated in a sample of consecutively entered orders during a 4 week period in an electronic medical record at VA Puget Sound. We found that in the 42,641 orders where an order check could potentially be generated, 11% generated at least one order check and many generated more than one order check. The rates at which the ordering practitioner overrode 'Critical drug interaction' and 'Allergy-drug interaction' alerts in this sample were 88% and 69% respectively. This was in part due to the presence of alerts for interactions between systemic and topical medications and for alerts generated during medication renewals. Refinement in order check logic could lead to lower override rates and increase practitioner acceptance and effectiveness of order checks.
---
paper_title: Using Commercial Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations
paper_content:
The quality and safety of health care leaves much to be desired.1,2 Automated clinical decision support (CDS) tools embedded in clinical information systems (CISs) such as computer provider order entry (CPOE) and electronic health records (EHR) applications have the potential to improve care and should be part of any comprehensive approach to improve quality.3,4,5,6 Medication prescribing is a component of health care with well documented quality and safety problems that can be improved by CDS.7,8,9 ::: ::: Medication-related CDS requires that pharmaceutical knowledge be represented in a computable, explicit and unambiguous form. Creating an automated representation of medical knowledge often is the most time consuming step in the development of a CDS system and is known as the “knowledge acquisition bottleneck.”10 For a time, it was hoped that the move toward explicit guidelines in medicine would decrease the knowledge acquisition effort,11 but that has not happened.12 Experiments on data sharing from over a decade ago have not progressed.13 As a result, just a few organizations, primarily academic medical centers, are creating rules and benefiting from CDS,14 but most health care organizations do not have the expertise or resources to create such knowledge bases themselves. ::: ::: One potential solution to the problem of access to automated medication-related knowledge is the set of commercial vendors that supply medication-related knowledge bases for pharmacy and prescribing applications. These vendors' products contain such knowledge as drug-drug and drug-disease interactions, minimum and maximum dosing suggestions, drug-allergy cross-sensitivity groupings, and groupings of medications by therapeutic class. Developers of CISs (either vendor-based or “homegrown”), with appropriate licensing, can incorporate commercial knowledge bases into their products. The knowledge base vendor receives a licensing fee for each CIS implementation and can amortize the …
---
paper_title: Customizing a commercial rule base for detecting drug-drug interactions.
paper_content:
We developed and implemented an adverse drug event system (PharmADE) that detects potentially dangerous drug combinations using a commercial rule base. While commercial rule bases can be useful for rapid deployment of a safety net to screen for drug-drug interactions, they sometimes do not provide the desired rule sensitivity. We implemented methods for enhancing commercial drug-drug interaction rules while preserving the original rule base architecture for easy and low cost maintenance.
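The approach described, layering local adjustments over a commercial drug-drug interaction rule base without modifying it, might look roughly like the following; the vendor rule set and the overrides shown are hypothetical, not the cited system's content.

```python
# Sketch of locally customizing a commercial drug-drug interaction rule base:
# the vendor rules stay untouched; a local layer adjusts severity or adds pairs.
VENDOR_RULES = {  # hypothetical vendor-supplied pairs -> severity
    frozenset({"warfarin", "fluconazole"}): "moderate",
    frozenset({"methotrexate", "trimethoprim"}): "major",
}
LOCAL_OVERRIDES = {  # hospital-specific adjustments layered on top
    frozenset({"warfarin", "fluconazole"}): "major",       # escalate severity locally
    frozenset({"warfarin", "levofloxacin"}): "moderate",   # pair the vendor set lacks
}

def effective_severity(drug_a, drug_b):
    """Return the interaction severity after applying local overrides, or None."""
    pair = frozenset({drug_a, drug_b})
    return LOCAL_OVERRIDES.get(pair, VENDOR_RULES.get(pair))

print(effective_severity("fluconazole", "warfarin"))   # major (locally escalated)
print(effective_severity("warfarin", "levofloxacin"))  # moderate (local addition)
```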
---
paper_title: A computer-assisted management program for antibiotics and other antiinfective agents.
paper_content:
BACKGROUND AND METHODS: Optimal decisions about the use of antibiotics and other antiinfective agents in critically ill patients require access to a large amount of complex information. We have developed a computerized decision-support program linked to computer-based patient records that can assist physicians in the use of antiinfective agents and improve the quality of care. This program presents epidemiologic information, along with detailed recommendations and warnings. The program recommends antiinfective regimens and courses of therapy for particular patients and provides immediate feedback. We prospectively studied the use of the computerized antiinfectives-management program for one year in a 12-bed intensive care unit. RESULTS: During the intervention period, all 545 patients admitted were cared for with the aid of the antiinfectives-management program. Measures of processes and outcomes were compared with those for the 1136 patients admitted to the same unit during the two years before the intervention period. The use of the program led to significant reductions in orders for drugs to which the patients had reported allergies (35, vs. 146 during the preintervention period; P<0.01), excess drug dosages (87 vs. 405, P<0.01), and antibiotic-susceptibility mismatches (12 vs. 206, P<0.01). There were also marked reductions in the mean number of days of excessive drug dosage (2.7 vs. 5.9, P<0.002) and in adverse events caused by antiinfective agents (4 vs. 28, P<0.02). In analyses of patients who received antiinfective agents, those treated during the intervention period who always received the regimens recommended by the computer program (n=203) had significant reductions, as compared with those who did not always receive the recommended regimens (n=195) and those in the preintervention cohort (n=766), in the cost of antiinfective agents (adjusted mean, $102 vs. $427 and $340, respectively; P<0.001), in total hospital costs (adjusted mean, $26,315 vs. $44,865 and $35,283; P<0.001), and in the length of the hospital stay (adjusted mean, 10.0 vs. 16.7 and 12.9 days; P<0.001). CONCLUSIONS: A computerized antiinfectives-management program can improve the quality of patient care and reduce costs.
---
paper_title: Guided Medication Dosing for Inpatients With Renal Insufficiency
paper_content:
Context: Usual drug-prescribing practices may not consider the effects of renal insufficiency on the disposition of certain drugs. Decision aids may help optimize prescribing behavior and reduce medical error. Objective: To determine if a system application for adjusting drug dose and frequency in patients with renal insufficiency, when merged with a computerized order entry system, improves drug prescribing and patient outcomes. Design, Setting, and Patients: Four consecutive 2-month intervals consisting of control (usual computerized order entry) alternating with intervention (computerized order entry plus decision support system), conducted in September 1997 to April 1998 with outcomes assessed among a consecutive sample of 17,828 adults admitted to an urban tertiary care teaching hospital. Intervention: Real-time computerized decision support system for prescribing drugs in patients with renal insufficiency. During intervention periods, the adjusted dose list, default dose amount, and default frequency were displayed to the order-entry user and a notation was provided that adjustments had been made based on renal insufficiency. During control periods, these recommended adjustments were not revealed to the order-entry user, and the unadjusted parameters were displayed. Main Outcome Measures: Rates of appropriate prescription by dose and frequency, length of stay, hospital and pharmacy costs, and changes in renal function, compared among patients with renal insufficiency who were hospitalized during the intervention vs control periods. Results: A total of 7490 patients were found to have some degree of renal insufficiency. In this group, 97,151 orders were written on renally cleared or nephrotoxic medications, of which 14,440 (15%) had at least 1 dosing parameter modified by the computer based on renal function. The fraction of prescriptions deemed appropriate during the intervention vs control periods by dose was 67% vs 54% (P<.001) and by frequency was 59% vs 35% (P<.001). Mean (SD) length of stay was 4.3 (4.5) days vs 4.5 (4.8) days in the intervention vs control periods, respectively (P = .009). There were no significant differences in estimated hospital and pharmacy costs or in the proportion of patients who experienced a decline in renal function during hospitalization. Conclusions: Guided medication dosing for inpatients with renal insufficiency appears to result in improved dose and frequency choices. This intervention demonstrates a way in which computer-based decision support systems can improve care.
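A rough sketch of the intervention-period behavior (look up renally adjusted default dose and frequency and display them, with a notation, at order entry); the adjustment table below is a made-up placeholder rather than the study's actual knowledge base.

```python
# Sketch of renal dose/frequency defaults surfaced at order entry; table values are illustrative.
RENAL_ADJUSTMENTS = {
    # drug -> list of (min CrCl mL/min, default dose, default frequency), highest band first
    "ranitidine": [(50, "150 mg", "twice daily"), (0, "150 mg", "once daily")],
    "gabapentin": [(60, "300 mg", "three times daily"), (30, "300 mg", "twice daily"), (0, "300 mg", "once daily")],
}

def default_order_parameters(drug, crcl_ml_min):
    """Return (dose, frequency, notation) to prefill in the order-entry screen."""
    bands = RENAL_ADJUSTMENTS.get(drug, [])
    for min_crcl, dose, frequency in bands:
        if crcl_ml_min >= min_crcl:
            note = None if min_crcl == bands[0][0] else "Defaults adjusted for renal insufficiency"
            return dose, frequency, note
    return None, None, None

print(default_order_parameters("gabapentin", crcl_ml_min=40))
```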
---
paper_title: The Impact of Computerized Physician Order Entry on Medication Error Prevention
paper_content:
BACKGROUND: Medication errors are common, and while most such errors have little potential for harm they cause substantial extra work in hospitals. A small proportion do have the potential to cause injury, and some cause preventable adverse drug events. OBJECTIVE: To evaluate the impact of computerized physician order entry (POE) with decision support in reducing the number of medication errors. DESIGN: Prospective time series analysis, with four periods. SETTING AND PARTICIPANTS: All patients admitted to three medical units were studied for seven to ten-week periods in four different years. The baseline period was before implementation of POE, and the remaining three were after. Sophistication of POE increased with each successive period. INTERVENTION: Physician order entry with decision support features such as drug allergy and drug-drug interaction warnings. MAIN OUTCOME MEASURE: Medication errors, excluding missed dose errors. RESULTS: During the study, the non-missed-dose medication error rate fell 81 percent, from 142 per 1,000 patient-days in the baseline period to 26.6 per 1,000 patient-days in the final period (P < 0.0001). Non-intercepted serious medication errors (those with the potential to cause injury) fell 86 percent from baseline to period 3, the final period (P = 0.0003). Large differences were seen for all main types of medication errors: dose errors, frequency errors, route errors, substitution errors, and allergies. For example, in the baseline period there were ten allergy errors, but only two in the following three periods combined (P < 0.0001). CONCLUSIONS: Computerized POE substantially decreased the rate of non-missed-dose medication errors. A major reduction in errors was achieved with the initial version of the system, and further reductions were found with addition of decision support features.
---
paper_title: Implementing renal impairment and geriatric decision support in ambulatory e-prescribing.
paper_content:
An advanced decision support system for prescribing to patients with renal impairment and geriatric patients was successfully implemented in an ambulatory electronic medical record (EMR) system.
---
paper_title: Drug-age alerting for outpatient geriatric prescriptions: a joint study using interoperable drug standards.
paper_content:
For more than a decade, the Beers criteria have identified specific medications that should generally be avoided in the geriatric population. Studies that have shown high prevalence rates of these potentially inappropriate medications have used disparate methodologies to identify these medications and hence are difficult to replicate and generalize. In an effort to improve prescribing behavior, we are building a drug-age alerting system utilizing standard drug coding systems for use in our Electronic Health Record (EHR) systems.
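Drug-age alerting of the kind described reduces to checking patient age and a coded drug against a Beers-style list; the codes and list entries below are illustrative stand-ins, not actual standard drug codes or the real Beers criteria tables.

```python
# Sketch of a drug-age alert: flag potentially inappropriate medications for patients >= 65.
# Codes and entries are illustrative stand-ins for standard drug codes and Beers criteria.
BEERS_LIKE_LIST = {
    "197589": "diazepam",        # hypothetical code -> ingredient
    "313406": "diphenhydramine",
}

def drug_age_alert(patient_age_years, drug_code):
    """Return an alert message for a geriatric patient prescribed a listed drug, else None."""
    if patient_age_years >= 65 and drug_code in BEERS_LIKE_LIST:
        ingredient = BEERS_LIKE_LIST[drug_code]
        return f"{ingredient} is potentially inappropriate in patients 65 and older"
    return None

print(drug_age_alert(82, "197589"))
print(drug_age_alert(40, "197589"))
```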
---
paper_title: High Rates of Adverse Drug Events in a Highly Computerized Hospital
paper_content:
Background: Numerous studies have shown that specific computerized interventions may reduce medication errors, but few have examined adverse drug events (ADEs) across all stages of the computerized medication process. We describe the frequency and type of inpatient ADEs that occurred following the adoption of multiple computerized medication ordering and administration systems, including computerized physician order entry (CPOE). Methods: Using explicit standardized criteria, pharmacists classified inpatient ADEs from prospective daily reviews of electronic medical records from a random sample of all admissions during a 20-week period at a Veterans Administration hospital. We analyzed ADEs that necessitated a changed treatment plan. Results: Among 937 hospital admissions, 483 clinically significant inpatient ADEs were identified, accounting for 52 ADEs per 100 admissions and an incidence density of 70 ADEs per 1000 patient-days. One quarter of the hospitalizations had at least 1 ADE. Of all ADEs, 9% resulted in serious harm, 22% in additional monitoring and interventions, 32% in interventions alone, and 11% in monitoring alone; 27% should have resulted in additional interventions or monitoring. Medication errors contributed to 27% of these ADEs. Errors associated with ADEs occurred in the following stages: 61% ordering, 25% monitoring, 13% administration, 1% dispensing, and 0% transcription. The medical record reflected recognition of 76% of the ADEs. Conclusions: High rates of ADEs may continue to occur after implementation of CPOE and related computerized medication systems that lack decision support for drug selection, dosing, and monitoring.
---
paper_title: Monitoring in chronic disease: a rational approach
paper_content:
"Know which abnormality you are going to follow during treatment. Pick something you can measure." (Meador C. A Little Book of Doctors' Rules. Lyons: IARC Press, 1999) The ritual of routine visits for most chronic diseases usually includes monitoring to check on the progress or regress of the disease and the development of complications. Such checks require that we choose what to monitor, when to monitor, and how to adjust treatment. Poor choices in each can lead to poor control, poor use of time, and dangerous adjustments to treatment. For example, an audit of serum digoxin monitoring in a UK teaching hospital more than 20 years ago showed that the logic behind more than 80% of the tests requested could not be established, the timing of tests reflected poor understanding of the clinical pharmacokinetics, and about one result in four was followed by an inappropriate clinical decision.1 Improvements are possible. For example, a computerised reminder of inappropriate testing reduced the volume of testing for the concentration of antiepileptic drugs by 20%2; a decision support system for anticoagulation with warfarin led to an improvement from 45% to 63% of patients being within target range3; and quality control charts for peak flow measurements for people with asthma could detect exacerbations four days earlier than conventional methods.4 Given the extent of monitoring, even modest improvements are likely to improve benefits for patients and may reduce costs. Monitoring is periodic measurement that guides the management of a chronic or recurrent condition. It can be done by clinicians, patients, or both. In Australia, monitoring comprises between a third and half of all tests ordered in general practice and outpatients (Pirozzo, personal communication, 2002). Despite the considerable staff time and resources involved, monitoring is a surprisingly understudied area. We review the current literature (based on a Medline search using the terms "monitor*", ...
---
paper_title: A Randomized Trial of "Corollary Orders" to Prevent Errors of Omission
paper_content:
Objective: Errors of omission are a common cause of systems failures. Physicians often fail to order tests or treatments needed to monitor/ameliorate the effects of other tests or treatments. The authors hypothesized that automated, guideline-based reminders to physicians, provided as they wrote orders, could reduce these omissions. ::: ::: Design: The study was performed on the inpatient general medicine ward of a public teaching hospital. Faculty and housestaff from the Indiana University School of Medicine, who used computer workstations to write orders, were randomized to intervention and control groups. As intervention physicians wrote orders for 1 of 87 selected tests or treatments, the computer suggested corollary orders needed to detect or ameliorate adverse reactions to the trigger orders. The physicians could accept or reject these suggestions. ::: ::: Results: During the 6-month trial, reminders about corollary orders were presented to 48 intervention physicians and withheld from 41 control physicians. Intervention physicians ordered the suggested corollary orders in 46.3% of instances when they received a reminder, compared with 21.9% compliance by control physicians (p < 0.0001). Physicians discriminated in their acceptance of suggested orders, readily accepting some while rejecting others. There were one third fewer interventions initiated by pharmacists with physicians in the intervention than control groups. ::: ::: Conclusion: This study demonstrates that physician workstations, linked to a comprehensive electronic medical record, can be an efficient means for decreasing errors of omissions and improving adherence to practice guidelines.
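The reminder logic described, mapping a trigger order to suggested corollary orders the physician can accept or reject, is essentially a lookup table; the trigger-to-corollary pairs shown here are illustrative, not the 87 rules used in the trial.

```python
# Sketch of corollary-order reminders: a trigger order suggests monitoring/ameliorating orders.
COROLLARY_RULES = {
    "gentamicin": ["serum creatinine every 3 days", "gentamicin trough level"],
    "heparin infusion": ["aPTT 6 hours after start", "platelet count every 3 days"],
    "amphotericin b": ["serum potassium daily", "serum magnesium daily"],
}

def suggest_corollaries(trigger_order):
    """Return corollary orders to offer when the trigger order is written."""
    return COROLLARY_RULES.get(trigger_order.lower(), [])

for suggestion in suggest_corollaries("Heparin infusion"):
    print("Suggested corollary order:", suggestion)
```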
---
paper_title: Comparison of an anticoagulation clinic with usual medical care: anticoagulation control, patient outcomes, and health care costs.
paper_content:
BACKGROUND: The outcomes of an inception cohort of patients seen at an anticoagulation clinic (AC) were published previously. The temporary closure of this clinic allowed the evaluation of 2 more inception cohorts: usual medical care and an AC. OBJECTIVE: To compare newly anticoagulated patients who were treated with usual medical care with those treated at an AC for patient characteristics, anticoagulation control, bleeding and thromboembolic events, and differences in costs for hospitalizations and emergency department visits. RESULTS: Rates are expressed as percentage per patient-year. Patients treated at an AC who received lower-range anticoagulation had fewer international normalized ratios greater than 5.0 (7.0% vs 14.7%), spent more time in range (40.0% vs 37.0%), and spent less time at an international normalized ratio greater than 5 (3.5% vs 9.8%). Patients treated at an AC who received higher-range anticoagulation had more international normalized ratios within range (50.4% vs 35.0%), had fewer international normalized ratios less than 2.0 (13.0% vs 23.8%), and spent more time within range (64.0% vs 51.0%). The AC group had lower rates (expressed as percentage per patient-year) of significant bleeding (8.1% vs 35.0%), major to fatal bleeding (1.6% vs 3.9%), and thromboembolic events (3.3% vs 11.8%); the AC group also demonstrated a trend toward a lower mortality rate (0% vs 2.9%; P=.09). Significantly lower annual rates of warfarin sodium-related hospitalizations (5% vs 19%) and emergency department visits (6% vs 22%) reduced annual health care costs by $132,086 per 100 patients. Additionally, a lower rate of warfarin-unrelated emergency department visits (46.8% vs 168.0%) produced an additional annual savings in health care costs of $29,972 per 100 patients. CONCLUSIONS: A clinical pharmacist-run AC improved anticoagulation control, reduced bleeding and thromboembolic event rates, and saved $162,058 per 100 patients annually in reduced hospitalizations and emergency department visits.
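Time in therapeutic range, the kind of control measure reported above, is often computed by linear interpolation between INR measurements (the Rosendaal method); the sketch below applies that idea to made-up INR values and is not necessarily how the cited study computed its percentages.

```python
# Sketch of percent time in therapeutic INR range by linear interpolation (Rosendaal-style).
def time_in_range(days, inrs, low=2.0, high=3.0):
    """days: measurement days (sorted); inrs: INR values. Returns % of time in [low, high]."""
    in_range_days = 0.0
    total_days = 0.0
    for (d0, v0), (d1, v1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total_days += span
        steps = max(int(span), 1)          # sample the interpolated INR once per day
        for i in range(steps):
            frac = (i + 0.5) / steps
            inr = v0 + (v1 - v0) * frac
            if low <= inr <= high:
                in_range_days += span / steps
    return 100.0 * in_range_days / total_days if total_days else 0.0

print(round(time_in_range([0, 7, 14, 28], [1.8, 2.4, 3.4, 2.6]), 1))
```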
---
paper_title: Appropriateness of antiepileptic drug level monitoring
paper_content:
Objectives: To develop explicit, reliable appropriateness criteria for antiepileptic drug level monitoring and to assess the appropriateness of monitoring in one tertiary care institution. Design: Appropriateness criteria derived from the literature and through expert opinion were used to evaluate a stratified random sample of antiepileptic drug level determinations obtained from chart review. Setting: Tertiary care center performing more than 10,000 antiepileptic drug level determinations per year. Patients: A total of 330 inpatients in whom antiepileptic drug levels were measured a total of 855 times. Methods: Drug levels were assessed at least 200 times for each of four antiepileptic drugs (phenytoin, carbamazepine, phenobarbital, and valproic acid). Main Outcome Measures: The proportion of antiepileptic drug levels with an appropriate indication and, of those, the proportion sampled appropriately. Results: Overall, 27% (95% confidence interval, 24% to 30%) of levels had an appropriate indication. Interrater agreement for appropriateness was substantial (κ=0.61). There was no significant difference in the appropriateness rate among the four drugs (range, 25% to 29%). Of the 624 antiepileptic drug level determinations considered inappropriate (73%), only four (0.6%) were more than 20% higher than the upper limit of normal, and none of the four patients had clinical signs of drug toxicity. A median of six levels (range, one through 69) was determined per patient, and the median interval between level determinations was 24 hours. Of the 27% of level determinations with an appropriate indication, 51% were sampled correctly, resulting in an overall appropriateness rate of 14%. Conclusions: Only 27% of antiepileptic drug level determinations had an appropriate indication, and half of these were not sampled correctly. Routine daily monitoring without pharmacological justification accounted for most of the inappropriate drug level determinations. Efforts to decrease inappropriate monitoring may result in substantial cost reductions without missing important clinical results. (JAMA. 1995;274:1622-1626)
---
paper_title: Adherence to black box warnings for prescription medications in outpatients.
paper_content:
BACKGROUND: Few data are available regarding the prevalence of potentially dangerous drug-drug, drug-laboratory, and drug-disease interactions among outpatients. Our objectives were to determine how frequently clinicians prescribe drugs in violation of black box warnings for these issues and to determine how frequently such prescribing results in harm. METHODS: In an observational study of 51 outpatient practices using an electronic health record, we measured the frequency with which patients received prescriptions in violation of black box warnings for drug-drug, drug-laboratory, and/or drug-disease interactions. We performed medical record reviews in a sample of patients to detect adverse drug events. Multivariate analysis was conducted to assess the relationship of prescribing in violation of black box warnings to patient and clinician characteristics, adjusting for potential confounders and clustering. RESULTS: Of 324,548 outpatients who received a medication in 2002, 2354 (0.7%) received a prescription in violation of a black box warning. After adjustment, receipt of medication in violation of a black box warning was more likely when patients were 75 years or older or female. The number of medications taken, the number of medical problems, and the site of care were also associated with violations. Less than 1% of patients who received a drug in violation of a black box warning had an adverse drug event as a result. CONCLUSIONS: About 7 in 1000 outpatients received a prescription violating a black box warning. Few incidents resulted in detectable harm.
---
paper_title: A Computer-Based Intervention for Improving the Appropriateness of Antiepileptic Drug Level Monitoring
paper_content:
We designed and implemented 2 automated, computerized screens for use at the time of antiepileptic drug (AED) test order entry to improve appropriateness by reminding physicians when a potentially redundant test was ordered and providing common indications for monitoring and pharmacokinetics of the specific AED. All computerized orders for inpatient serum AED levels during two 3-month periods were included in the study. During the 3-month period after implementation of the automated intervention, 13% of all AED tests ordered were canceled following computerized reminders. For orders appearing redundant, the cancellation rate was 27%. For nonredundant orders, 4% were canceled when
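The redundancy screen described amounts to checking whether a level for the same antiepileptic drug was already ordered within a recent window; the 24-hour window in this sketch is an assumption for illustration, not necessarily the interval the cited system used.

```python
# Sketch of a redundant-test reminder for antiepileptic drug (AED) levels.
from datetime import datetime, timedelta

REDUNDANCY_WINDOW = timedelta(hours=24)  # assumed window; real systems may differ

def is_potentially_redundant(new_order_time, prior_order_times):
    """True if another level for the same AED was ordered within the redundancy window."""
    return any(new_order_time - t < REDUNDANCY_WINDOW
               for t in prior_order_times if t <= new_order_time)

prior = [datetime(2024, 3, 1, 8, 0)]
print(is_potentially_redundant(datetime(2024, 3, 1, 20, 0), prior))  # True -> show reminder
print(is_potentially_redundant(datetime(2024, 3, 3, 8, 0), prior))   # False
```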
---
paper_title: Incidence and possible causes of prescribing potentially hazardous/contraindicated drug combinations in general practice.
paper_content:
BACKGROUND: Preventing the use of medications where there is the potential for serious drug-drug interactions or drug-disease interactions (contraindications) is essential to ensure patient safety. Previous studies have looked at the incidence of prescribing contraindicated drug combinations, but little is known about the underlying reasons for the co-prescribing events. The objectives of this study were to estimate the incidence of prescribing contraindicated drug combinations in general practice and to explore the clinical context, possible causes and potential systems failures leading to their occurrence. METHODS: A list of contraindicated drug combinations was compiled according to established references. A search of computerised patient medication records was performed, followed by detailed chart review and assessment. The patient records from four general practices in an area of England were searched for a period of 1 year (1 June 1999-31 May 2000) to identify contraindicated drug combinations. All patients registered with the four participating practices during the study period were included (estimated n = 37 940). Medical records of the cases identified by the computer search were reviewed in detail and relevant information was extracted. Each case was then independently assessed by a pharmacist and a physician who judged whether the co-prescribing was justified and whether it was associated with an adverse drug event. Proximal causes and potential systems failures were suggested for each co-prescribing event. MAIN OUTCOME MEASURES AND RESULTS: Fourteen patients with potential drug-drug interactions and 50 patients with potential drug-disease interactions were identified. Overall, these represent an incidence of 1.9 per 1000 patient-years (95% CI 1.5, 2.3) or 4.3 per 1000 patients being concurrently prescribed ≥2 drugs per year (95% CI 3.2, 5.4). 62 cases involving 63 co-prescribing events were reviewed. Two-thirds of these events involved medications that were initiated by hospital doctors. Awareness of the potential drug-drug or drug-disease interactions was documented in one-third of the events at the time of initial co-prescribing. Within the study period, the co-prescribing was judged to be not justified in 44 events (70%). Potential drug-drug interactions possibly resulted in two adverse drug events. The majority of contraindicated co-prescribing related to drug-disease interactions involved the use of propranolol or timolol eye drops for patients receiving bronchodilators and the use of amiodarone for patients receiving levothyroxine sodium. CONCLUSION: The prescribing of contraindicated drug combinations was relatively rare in this study. Multiple possible causes and systems failures were identified and could be used to develop strategies for the prevention of prescribing errors involving contraindicated drug combinations in primary care.
---
paper_title: Effects of Computerized Clinical Decision Support Systems on Practitioner Performance and Patient Outcomes: A Systematic Review
paper_content:
Context: Developers of health care software have attributed improvements in patient care to these applications. As with any health care intervention, such claims require confirmation in clinical trials. Objectives: To review controlled trials assessing the effects of computerized clinical decision support systems (CDSSs) and to identify study characteristics predicting benefit. Data Sources: We updated our earlier reviews by searching the MEDLINE, EMBASE, Cochrane Library, Inspec, and ISI databases and consulting reference lists through September 2004. Authors of 64 primary studies confirmed data or provided additional information. Study Selection: We included randomized and nonrandomized controlled trials that evaluated the effect of a CDSS compared with care provided without a CDSS on practitioner performance or patient outcomes. Data Extraction: Teams of 2 reviewers independently abstracted data on methods, setting, CDSS and patient characteristics, and outcomes. Data Synthesis: One hundred studies met our inclusion criteria. The number and methodologic quality of studies improved over time. The CDSS improved practitioner performance in 62 (64%) of the 97 studies assessing this outcome, including 4 (40%) of 10 diagnostic systems, 16 (76%) of 21 reminder systems, 23 (62%) of 37 disease management systems, and 19 (66%) of 29 drug-dosing or prescribing systems. Fifty-two trials assessed 1 or more patient outcomes, of which 7 trials (13%) reported improvements. Improved practitioner performance was associated with CDSSs that automatically prompted users compared with requiring users to activate the system (success in 73% of trials vs 47%; P = .02) and studies in which the authors also developed the CDSS software compared with studies in which the authors were not the developers (74% success vs 28%, respectively; P = .001). Conclusions: Many CDSSs improve practitioner performance. To date, the effects on patient outcomes remain understudied and, when studied, inconsistent.
---
paper_title: Evaluating the Appropriateness of Digoxin Level Monitoring
paper_content:
BACKGROUND: Digoxin level determinations can be useful clinically in patients receiving digoxin therapy but are sometimes misused. METHODS: Explicit appropriateness criteria were adapted from previously published criteria and revised using local expert opinion. They were then used to evaluate the appropriateness of random samples of inpatient and outpatient serum digoxin levels. Overall agreement between reviewers regarding appropriateness was good (κ = 0.65). Patients in the study included 162 inpatients in whom 224 digoxin levels were measured and 117 outpatients in whom 130 digoxin levels were measured during a 6-month period. The main outcome measure was the proportion of digoxin levels with an appropriate indication. RESULTS: Among inpatient levels, only 16% (95% confidence intervals [CI], 11%-20%) were appropriate. Of the 189 digoxin levels considered inappropriate, only 26 (14%) had a result of 2.3 nmol/L or more (≥1.8 ng/mL). None of these levels resulted in an important change in therapy, and no patient had a toxic reaction to the therapy. Among inappropriate levels, daily routine monitoring accounted for 78%. Of the 130 outpatient levels, 52% (95% CI, 44%-61%) were appropriate. Of 62 inappropriate levels, only 4 (6%) had a result of 2.3 nmol/L or more (≥1.8 ng/mL). One result led to a change in therapy, but none of the patients were believed to experience a toxic reaction. Among the inappropriate levels, 87% of patients underwent early routine monitoring before a steady state was achieved. CONCLUSIONS: A high proportion of digoxin levels were inappropriate, particularly among inpatients. In both groups, the primary reason tests were judged inappropriate was early routine monitoring. Few inappropriate tests resulted in important data. Interventions to improve the use of digoxin levels could potentially save substantial resources without missing important clinical results.
---
paper_title: Improving Acceptance of Computerized Prescribing Alerts in Ambulatory Care
paper_content:
Computerized drug prescribing alerts can improve patient safety, but are often overridden because of poor specificity and alert overload. Our objective was to improve clinician acceptance of drug alerts by designing a selective set of drug alerts for the ambulatory care setting and minimizing workflow disruptions by designating only critical to high-severity alerts to be interruptive to clinician workflow. The alerts were presented to clinicians using computerized prescribing within an electronic medical record in 31 Boston-area practices. There were 18,115 drug alerts generated during our six-month study period. Of these, 12,933 (71%) were noninterruptive and 5,182 (29%) interruptive. Of the 5,182 interruptive alerts, 67% were accepted. Reasons for overrides varied for each drug alert category and provided potentially useful information for future alert improvement. These data suggest that it is possible to design computerized prescribing decision support with high rates of alert recommendation acceptance by clinicians.
---
paper_title: Empirical Derivation of an Electronic Clinically Useful Problem Statement System
paper_content:
Problem lists are tools to improve patient management. In the medical record, they connect diagnoses to therapy, prognosis, and psychosocial issues. Computer-based problem lists enhance paper-based approaches by enabling cost-containment and quality assurance applications, but they require clinically expressive controlled vocabularies. Because existing controlled vocabularies do not represent problem statements at a clinically useful level, we derived a new canonical problem statement vocabulary through semi-automated analysis and distillation of provider-entered problem lists collected over 6 years from 74 696 patients. We combined automated and manual methods to condense 891 770 problem statements entered by 1961 care providers at Grady Memorial Hospital in Atlanta, Georgia, to 15 534 Canonical Clinical Problem Statement System (CCPSS) terms. The nature and frequency of problem statements were characterized, interrelations among them were enumerated, and a database capturing the epidemiology of problems was created. The authors identified 23 503 problem relations (co-occurrences, sign-symptom complexes, and differential diagnoses) and 22 690 modifier words that further categorized "canonical" problems. To assess completeness, CCPSS content was compared with that of the 1997 Unified Medical Language System Metathesaurus (containing terms from 44 clinical vocabularies). Unified Medical Language System terms expressed 25% of individual CCPSS terms exactly (71% of problems by frequency), 27% partially, and 48% poorly or not at all. Clinicians judged that CCPSS terms completely captured their clinical intent for 84% of 686 randomly selected free-text problem statements. The CCPSS represents clinical concepts at a level exceeding that of previous approaches. A similar national approach could create a standardized, useful, shared resource for clinical practice.
---
paper_title: Automated coded ambulatory problem lists: evaluation of a vocabulary and a data entry tool
paper_content:
Abstract Background: Problem lists are fundamental to electronic medical records (EMRs). However, obtaining an appropriate problem list dictionary is difficult, and getting users to code their problems at the time of data entry can be challenging. Objective: To develop a problem list dictionary and search algorithm for an EMR system and evaluate its use. Methods: We developed a problem list dictionary and lookup tool and implemented it in several EMR systems. A sample of 10,000 problem entries was reviewed from each system to assess overall coding rates. We also performed a manual review of a subset of entries to determine the appropriateness of coded entries, and to assess the reasons other entries were left uncoded. Results: The overall coding rate varied significantly between different EMR implementations (63–79%). Coded entries were virtually always appropriate (99%). The most frequent reasons for uncoded entries were due to user interface failures (44–45%), insufficient dictionary coverage (20–32%), and non-problem entries (10–12%). Conclusion: The problem list dictionary and search algorithm has achieved a good coding rate, but the rate is dependent on the specific user interface implementation. Problem coding is essential for providing clinical decision support, and improving usability should result in better coding rates.
---
paper_title: Using Commercial Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations
paper_content:
The quality and safety of health care leaves much to be desired.1,2 Automated clinical decision support (CDS) tools embedded in clinical information systems (CISs) such as computer provider order entry (CPOE) and electronic health records (EHR) applications have the potential to improve care and should be part of any comprehensive approach to improve quality.3,4,5,6 Medication prescribing is a component of health care with well documented quality and safety problems that can be improved by CDS.7,8,9 ::: ::: Medication-related CDS requires that pharmaceutical knowledge be represented in a computable, explicit and unambiguous form. Creating an automated representation of medical knowledge often is the most time consuming step in the development of a CDS system and is known as the “knowledge acquisition bottleneck.”10 For a time, it was hoped that the move toward explicit guidelines in medicine would decrease the knowledge acquisition effort,11 but that has not happened.12 Experiments on data sharing from over a decade ago have not progressed.13 As a result, just a few organizations, primarily academic medical centers, are creating rules and benefiting from CDS,14 but most health care organizations do not have the expertise or resources to create such knowledge bases themselves. ::: ::: One potential solution to the problem of access to automated medication-related knowledge is the set of commercial vendors that supply medication-related knowledge bases for pharmacy and prescribing applications. These vendors' products contain such knowledge as drug-drug and drug-disease interactions, minimum and maximum dosing suggestions, drug-allergy cross-sensitivity groupings, and groupings of medications by therapeutic class. Developers of CISs (either vendor-based or “homegrown”), with appropriate licensing, can incorporate commercial knowledge bases into their products. The knowledge base vendor receives a licensing fee for each CIS implementation and can amortize the …
---
paper_title: Managing the alert process at NewYork-Presbyterian Hospital.
paper_content:
Clinical decision support can improve the quality of care, but requires substantial knowledge management activities. At NewYork-Presbyterian Hospital in New York City, we have implemented a formal alert management process whereby only hospital committees and departments can request alerts. An explicit requestor, who will help resolve the details of the alert logic and the alert message must be identified. Alerts must be requested in writing using a structured alert request form. Alert requests are reviewed by the Alert Committee and then forwarded to the Information Systems department for a software development estimate. The model required that clinical committees and departments become more actively involved in the development of alerts than had previously been necessary. In the 12 months following implementation, 10 alert requests were received. The model has been well received. A lot of the knowledge engineering work has been distributed and burden has been removed from scarce medical informatics resources.
---
| Title: Review Paper Medication-related Clinical Decision Support in Computerized Provider Order Entry Systems: A Review
Section 1: Introduction
Description 1: Introduce the paper, its purpose, and provide background information on medication use, clinical decision support (CDS), and computerized provider order entry (CPOE) systems.
Section 2: Drug-Allergy Checking
Description 2: Discuss the drug-allergy checking capabilities in CPOE systems, including features, challenges, and recommendations.
Section 3: Basic Dosing Guidance for Medications in CPOE
Description 3: Explore the importance of dosing guidance, common errors, and the role of CPOE systems in improving dosing accuracy.
Section 4: Formulary Decision Support
Description 4: Examine how CPOE systems provide formulary decision support, including methods, success factors, and recommendations.
Section 5: Duplicate Therapy Checking
Description 5: Describe duplicate therapy checking features, challenges, and the impact of CPOE systems on reducing medication duplication errors.
Section 6: Drug-Drug Interaction Checking
Description 6: Discuss the functionalities and limitations of drug-drug interaction checking in CPOE systems and their implications.
Section 7: Categories of Advanced Medication-related Decision Support
Description 7: Outline advanced medication-related decision support categories that should be considered after basic CDS is in place.
Section 8: Advanced Dosing Guidance in CPOE
Description 8: Detail advanced dosing guidance features specific to various patient populations and their impact on dosing accuracy.
Section 9: Advanced Guidance for Medication-associated Laboratory Testing
Description 9: Discuss the role of CPOE systems in facilitating medication-associated laboratory testing and monitoring.
Section 10: Advanced Checking of Drug-Disease Interactions and Contraindications
Description 10: Examine advanced drug-disease interaction checking features and the importance of capturing accurate patient conditions.
Section 11: Advanced Drug-Pregnancy Alerting
Description 11: Explore the implementation of drug-pregnancy alerting in CPOE systems and associated challenges.
Section 12: Recommendations for Future Work
Description 12: Provide detailed recommendations for various stakeholders on improving medication-related CDS.
Section 13: Summary
Description 13: Summarize the overall findings, importance, and future directions for medication-related CDS in CPOE systems. |
A Review of the Combination of Experimental Measurements and Fibril-Reinforced Modeling for Investigation of Articular Cartilage and Chondrocyte Response to Loading | 13 | ---
paper_title: Normal and pathological adaptations of articular cartilage to joint loading
paper_content:
Joints are functional units that transmit mechanical loads between contacting bones during normal daily or specialized activities, e.g., sports. All components of the joint, i.e. articular cartilage, bone, muscles, ligaments/tendons and nerves, participate in load transmission. Failure in any of these components can cause joint malfunction, which, in turn, may lead to accumulation of damage in other joint components. Mechanical forces have great influence on the synthesis and rate of turnover of articular cartilage molecules, such as proteoglycans (PGs). Regular cyclic loading of the joint enhances PG synthesis and makes cartilage stiff. On the other hand, loading appears to have less evident effects on the articular cartilage collagen fibril network. Continuous compression of the cartilage diminishes PG synthesis and causes damage of the tissue through necrosis. The prevailing view is that osteoarthrosis (OA) starts from the cartilage surface through PG depletion and fibrillation of the superficial collagen network. It has also been suggested that the initial structural changes take place in the subchondral bone, especially when the joint is exposed to an impact type of loading. This in turn would create an altered stress pattern on joint surfaces, which leads to structural damage and mechanical failure of articular cartilage. The importance of the neuromuscular system to the initiation and progression of OA is still poorly understood. Many surgical extra- and intra-articular procedures have been used for the treatment of OA. Although some of the new methods, such as autologous chondrocyte transplantation and mosaicplasty, have given good clinical results, it is reasonable to emphasize that the methods still are experimental and more controlled studies are needed.
---
paper_title: Further insights into the structural principles governing the function of articular cartilage.
paper_content:
Abstract ::: A new experimental technique involving the observation of an artificial notch propagating through articular cartilage has been used to examine the biomechanical properties of this tissue. By predetermining both the orientation of the notch and its location with respect to the primary functional zones a more rigorous description of the structure/function relationships in cartilage has been achieved. The principal findings are: A primary 'strain-locking' role for the superficial zone has been demonstrated experimentally in articular cartilage. Comparison of the behaviour of radial and transverse notches has revealed a primary structural anisotropy in the general matrix. This is strong evidence in support of the morphological model proposed in a recent paper by the present author. A range of mechanical responses is shown to be reflected consistently in structural features considered to arise principally from variations in the degree of crosslinking between the overall radial configuration of collagen fibres. It is possible to separate mechanically the collagen fibres from the general matrix and the bonding relationship between them is time-dependent. Measurement of loads required to propagate a radial notch suggest (a) that the strength of the fibres and/or that of the crosslinks between fibres increases with depth through the cartilage thickness, and (b) that the radial columns of chondrocytes typical of the deep zone do not represent planes of significantly reduced strength relative to the adjacent matrix. A major structural discontinuity exists in normal articular cartilage in a plane parallel to and below the articular surface. It is argued that this plane represents a major change in overall orientation of the collagen fibres. Finally, by applying the experimental techniques described in this paper both to degenerative articular cartilage and to healthy articular cartilage in which the primary components have been selectively degraded enzymatically it should be possible to gain a more precise picture of the structural origin of malfunction in this tissue.
---
paper_title: Cell deformation behavior in mechanically loaded rabbit articular cartilage 4 weeks after anterior cruciate ligament transection
paper_content:
OBJECTIVE: Chondrocyte stresses and strains in articular cartilage are known to modulate tissue mechanobiology. Cell deformation behavior in cartilage under mechanical loading is not known at the earliest stages of osteoarthritis. Thus, the aim of this study was to investigate the effect of mechanical loading on volume and morphology of chondrocytes in the superficial tissue of osteoarthritic cartilage obtained from anterior cruciate ligament transected (ACLT) rabbit knee joints, 4 weeks after intervention. METHODS: A unique custom-made microscopy indentation system with dual-photon microscope was used to apply controlled 2 MPa force-relaxation loading on patellar cartilage surfaces. Volume and morphology of chondrocytes were analyzed before and after loading. Also global and local tissue strains were calculated. Collagen content, collagen orientation and proteoglycan content were quantified with Fourier transform infrared microspectroscopy, polarized light microscopy and digital densitometry, respectively. RESULTS: Following the mechanical loading, the volume of chondrocytes in the superficial tissue increased significantly in ACLT cartilage by 24% (95% confidence interval (CI) 17.2-31.5, P < 0.001), while it reduced significantly in contralateral group tissue by -5.3% (95% CI -8.1 to -2.5, P = 0.003). Collagen content in ACLT and contralateral cartilage were similar. PG content was reduced and collagen orientation angle was increased in the superficial tissue of ACLT cartilage compared to the contralateral cartilage. CONCLUSIONS: We found the novel result that chondrocyte deformation behavior in the superficial tissue of rabbit articular cartilage is altered already at 4 weeks after ACLT, likely because of changes in collagen fibril orientation and a reduction in PG content.
---
paper_title: A biphasic viscohyperelastic fibril-reinforced model for articular cartilage: formulation and comparison with experimental data.
paper_content:
Experiments in articular cartilage have shown highly nonlinear stress-strain curves under finite deformations, nonlinear tension-compression response as well as intrinsic viscous effects of the proteoglycan matrix and the collagen fibers. A biphasic viscohyperelastic fibril-reinforced model is proposed here, which is able to describe the intrinsic viscoelasticity of the fibrillar and nonfibrillar components of the solid phase, the nonlinear tension-compression response and the nonlinear stress-strain curves under tension and compression. A viscohyperelastic constitutive equation was used for the matrix and the fibers encompassing, respectively, a hyperelastic function used previously for the matrix and a hyperelastic law used before to represent biological connective tissues. This model, implemented in an updated Lagrangian finite element code, displayed good ability to follow experimental stress-strain equilibrium curves under tension and compression for human humeral cartilage. In addition, curve fitting of experimental reaction force and lateral displacement unconfined compression curves showed that the inclusion of viscous effects in the matrix allows the description of experimental data with material properties for the fibers consistent with experimental tensile tests, suggesting that intrinsic viscous effects in the matrix of articular cartilage plays an important role in the mechanical response of the tissue.
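For orientation, the biphasic fibril-reinforced models that recur throughout this reference list share a common stress split; a minimal sketch (illustrative notation only, not the specific viscohyperelastic strain-energy functions of this paper) is
\sigma_{tot} = -p\,\mathbf{I} + \sigma_{nf}(\varepsilon) + \sum_i \sigma_{f,i}(\varepsilon_{f,i})\,\vec{e}_{f,i}\otimes\vec{e}_{f,i}, \qquad \mathbf{q} = -k\,\nabla p,
where p is the interstitial fluid pressure, \sigma_{nf} is the stress carried by the nonfibrillar (proteoglycan) matrix, each fibril direction \vec{e}_{f,i} carries stress only along its axis (and typically only in tension), and Darcy's law relates the fluid flux \mathbf{q} to the pressure gradient through the permeability k.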
---
paper_title: Gd-DTPA2− as a measure of cartilage degradation
paper_content:
Glycosaminoglycans (GAGs) are the main source of tissue fixed charge density (FCD) in cartilage, and are lost early in arthritic diseases. We tested the hypothesis that, like Na+, the charged contrast agent Gd-DTPA2− (and hence proton T1) could be used to measure tissue FCD and hence GAG concentration. NMR spectroscopy studies of cartilage explants demonstrated that there was a strong correlation (r > 0.96) between proton T1 in the presence of Gd-DTPA2− and tissue sodium and GAG concentrations. An ideal one-compartment electrochemical (Donnan) equilibrium model was examined as a means of quantifying FCD from Gd-DTPA2− concentration, yielding a value 50% less but linearly correlated with the validated method of quantifying FCD from Na+. These data could be used as the basis of an empirical model with which to quantify FCD from Gd-DTPA2− concentration, or a more sophisticated physical model could be developed. Spatial distributions of FCD were easily observed in T1-weighted MRI studies of trypsin and interleukin-1 induced cartilage degradation, with good histological correlation. Therefore, equilibration of the tissue in Gd-DTPA2− gives us the opportunity to directly image (through T1 weighting) the concentration of GAG, a major and critically important macromolecule in cartilage. Pilot clinical studies demonstrated Gd-DTPA2− penetration into cartilage, suggesting that this technique is clinically feasible.
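For context, later dGEMRIC work commonly converts the equilibrated contrast-agent concentration into an FCD estimate with an ideal-Donnan expression of the following form; this is an illustrative sketch, not necessarily the exact one-compartment model examined in this paper:
FCD \approx [\mathrm{Na}^{+}]_{bath}\left(\sqrt{[\mathrm{Gd}]_{tissue}/[\mathrm{Gd}]_{bath}} - \sqrt{[\mathrm{Gd}]_{bath}/[\mathrm{Gd}]_{tissue}}\right),
where [\mathrm{Gd}]_{tissue} is estimated from the change in 1/T1 between pre- and post-contrast scans divided by the relaxivity of the agent.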
---
paper_title: Stress–relaxation of human patellar articular cartilage in unconfined compression: Prediction of mechanical response by tissue composition and structure
paper_content:
Abstract Mechanical properties of articular cartilage are controlled by tissue composition and structure. Cartilage function is sensitively altered during tissue degeneration, in osteoarthritis (OA). However, mechanical properties of the tissue cannot be determined non-invasively. In the present study, we evaluate the feasibility to predict, without mechanical testing, the stress–relaxation response of human articular cartilage under unconfined compression. This is carried out by combining microscopic and biochemical analyses with composition-based mathematical modeling. Cartilage samples from five cadaver patellae were mechanically tested under unconfined compression. Depth-dependent collagen content and fibril orientation, as well as proteoglycan and water content were derived by combining Fourier transform infrared imaging, biochemical analyses and polarized light microscopy. Finite element models were constructed for each sample in unconfined compression geometry. First, composition-based fibril-reinforced poroviscoelastic swelling models, including composition and structure obtained from microscopical and biochemical analyses were fitted to experimental stress–relaxation responses of three samples. Subsequently, optimized values of model constants, as well as compositional and structural parameters were implemented in the models of two additional samples to validate the optimization. Theoretical stress–relaxation curves agreed with the experimental tests ( R =0.95–0.99). Using the optimized values of mechanical parameters, as well as composition and structure of additional samples, we were able to predict their mechanical behavior in unconfined compression, without mechanical testing ( R =0.98). Our results suggest that specific information on tissue composition and structure might enable assessment of cartilage mechanics without mechanical testing.
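The curve-fitting step described above (optimizing model constants against measured stress-relaxation responses) can be illustrated with a far simpler surrogate. The sketch below fits a two-term relaxation function with SciPy instead of the authors' fibril-reinforced poroviscoelastic finite element model; the data file name and initial guesses are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def relaxation_stress(t, s_eq, s1, tau1, s2, tau2):
    # Two-term exponential surrogate for the post-ramp relaxation phase
    return s_eq + s1 * np.exp(-t / tau1) + s2 * np.exp(-t / tau2)

# Hypothetical measured data: time [s] and axial stress [MPa] after the compression ramp
t_meas, sigma_meas = np.loadtxt("stress_relaxation.txt", unpack=True)

p0 = [0.1, 0.3, 5.0, 0.2, 100.0]  # initial guesses: equilibrium stress, amplitudes, time constants
popt, pcov = curve_fit(relaxation_stress, t_meas, sigma_meas, p0=p0)
print("Fitted parameters (s_eq, s1, tau1, s2, tau2):", popt)

In the study itself the forward model is a finite element simulation, so the fit is typically driven by repeated simulations inside an optimization loop rather than by a closed-form function as above.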
---
paper_title: The effects of selective matrix degradation on the short-term compressive properties of adult human articular cartilage.
paper_content:
The effects of proteoglycan and collagen digestion on the transient response of human articular cartilage when tested in unconfined compression were determined. Small cylindrical specimens of cartilage, isolated from the femoral head of the hip joint and from the femoral condyles of the knee joint, were subjected to a suddenly applied compressive load using a test apparatus designed to yield a transient oscillatory response. From this response values of the elastic stiffness and the viscous damping coefficient were determined. Cathepsin D and cathepsin B1 were used to digest the proteoglycan in some specimens, while in other specimens leukocyte elastase was used to attack the non-helical terminal regions of the Type II tropocollagen molecules and possibly the Type IX collagen molecule and thereby disturb the integrity of the collagen mesh. The results showed that proteoglycan digestion alone reduced the viscous damping coefficient but it did not significantly alter the elastic stiffness as determined from the oscillatory response. In contrast, the action of elastase reduced both the damping coefficient and the elastic stiffness of the cartilage. The results demonstrated the role of proteoglycans in regulating fluid transport in cartilage and hence controlling the time-dependent viscous properties. The elastic stiffness was shown to be dependent on the integrity of the collagen fibre network and not on the proteoglycans.
---
paper_title: A composition-based cartilage model for the assessment of compositional changes during cartilage damage and adaptation
paper_content:
Summary. Objective: The composition of articular cartilage changes with progression of osteoarthritis. Since compositional changes are associated with changes in the mechanical properties of the tissue, they are relevant for understanding how mechanical loading induces progression. The objective of this study is to present a computational model of articular cartilage which enables study of the interaction between composition and mechanics. Methods: Our previously developed fibril-reinforced poroviscoelastic swelling model for articular cartilage was combined with our tissue composition-based model. In the combined model both the depth- and strain-dependencies of the permeability are governed by tissue composition. All local mechanical properties in the combined model are directly related to the local composition of the tissue, i.e., to the local amounts of proteoglycans and collagens and to tissue anisotropy. Results: Solely based on the composition of the cartilage, we were able to predict the equilibrium and transient response of articular cartilage during confined compression, unconfined compression, indentation and two different 1D-swelling tests, simultaneously. Conclusion: Since both the static and the time-dependent mechanical properties have now become fully dependent on tissue composition, the model allows assessing the mechanical consequences of compositional changes seen during osteoarthritis without further assumptions. This is a major step forward in quantitative evaluations of osteoarthritis progression.
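A hedged sketch of the central idea of such composition-based formulations: rather than fitting separate material constants at each depth, every local parameter is written as a function of the local constituent fractions, for example (illustrative forms only, not the authors' exact equations)
G_m(z) = G_0\,\rho_{PG}(z), \qquad k(z,e) = k_0\big(n_{fl}(z)\big)\left(\frac{1+e}{1+e_0}\right)^{M},
where \rho_{PG}(z) is the local proteoglycan fraction, n_{fl}(z) the local fluid fraction, e the current void ratio and G_0, k_0, M constants shared across depths, so that depth-dependent stiffness and permeability emerge from composition alone.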
---
paper_title: Confocal microscopy indentation system for studying in situ chondrocyte mechanics
paper_content:
Abstract Chondrocytes synthesize extracellular matrix molecules, thus they are essential for the development, adaptation and maintenance of articular cartilage. Furthermore, it is well accepted that the biosynthetic activity of chondrocytes is influenced by the mechanical environment. Therefore, their response to mechanical stimuli has been studied extensively. Much of the knowledge in this area of research has been derived from testing of isolated cells, cartilage explants, and fixed cartilage specimens: systems that differ in important aspects from chondrocytes embedded in articular cartilage and observed during loading conditions. In this study, current model systems have been improved by working with the intact cartilage in real time. An indentation system was designed on a confocal microscope that allows for simultaneous loading and observation of chondrocytes in their native environment. Cell mechanics were then measured under precisely controlled loading conditions. The indentation system is based on a light transmissible cylindrical glass indentor of 0.17 mm thickness and 1.64 mm diameter that is aligned along the focal axis of the microscope and allows for real time observation of live cells in their native environment. The system can be used to study cell deformation and biological responses, such as calcium sparks, while applying prescribed loads on the cartilage surface. It can also provide novel information on the relationship between cell loading and cartilage adaptive/degenerative processes in the intact tissue.
---
paper_title: Importance of collagen orientation and depth-dependent fixed charge densities of cartilage on mechanical behavior of chondrocytes.
paper_content:
The collagen network and proteoglycan matrix of articular cartilage are thought to play an important role in controlling the stresses and strains in and around chondrocytes, in regulating the biosynthesis of the solid matrix, and consequently in maintaining the health of diarthrodial joints. Understanding the detailed effects of the mechanical environment of chondrocytes on cell behavior is therefore essential for the study of the development, adaptation, and degeneration of articular cartilage. Recent progress in macroscopic models has improved our understanding of depth-dependent properties of cartilage. However, none of the previous works considered the effect of realistic collagen orientation or depth-dependent negative charges in microscopic models of chondrocyte mechanics. The aim of this study was to investigate the effects of the collagen network and fixed charge densities of cartilage on the mechanical environment of the chondrocytes in a depth-dependent manner. We developed an anisotropic, inhomogeneous, microstructural fibril-reinforced finite element model of articular cartilage for application in unconfined compression. The model consisted of the extracellular matrix and chondrocytes located in the superficial, middle, and deep zones. Chondrocytes were surrounded by a pericellular matrix and were assumed spherical prior to tissue swelling and load application. Material properties of the chondrocytes, pericellular matrix, and extracellular matrix were obtained from the literature. The loading protocol included a free swelling step followed by a stress-relaxation step. Results from traditional isotropic and transversely isotropic biphasic models were used for comparison with predictions from the current model. In the superficial zone, cell shapes changed from rounded to elliptic after free swelling. The stresses and strains as well as fluid flow in cells were greatly affected by the modulus of the collagen network. The fixed charge density of the chondrocytes, pericellular matrix, and extracellular matrix primarily affected the aspect ratios (height/ width) and the solid matrix stresses of cells. The mechanical responses of the cells were strongly location and time dependent. The current model highlights that the collagen orientation and the depth-dependent negative fixed charge densities of articular cartilage have a great effect in modulating the mechanical environment in the vicinity of chondrocytes, and it provides an important improvement over earlier models in describing the possible pathways from loading of articular cartilage to the mechanical and biological responses of chondrocytes.
---
paper_title: Regulatory volume decrease (RVD) by isolated and in situ bovine articular chondrocytes
paper_content:
Articular chondrocytes in vivo are exposed to a changing osmotic environment under both physiological (static load) and pathological (osteoarthritis) conditions. Such changes to matrix hydration could alter cell volume in situ and influence matrix metabolism. However, the ability of chondrocytes to regulate their volume in the face of osmotic perturbations has not been studied in detail. We have investigated the regulatory volume decrease (RVD) capacity of bovine articular chondrocytes within, and isolated from, the matrix, before and following acute hypotonic challenge. Cell volumes were determined by visualising fluorescently-labelled chondrocytes using confocal laser scanning microscopy (CLSM) at 21°C. Chondrocytes in situ were grouped into superficial (SZ), mid (MZ), and deep zones (DZ). When exposed to 180 mOsm or 250 mOsm hypotonic challenge, cells in situ swelled rapidly (within ∼90 sec). Chondrocytes then exhibited rapid RVD (t1/2 ∼ 8 min), with cells from all zones returning to within ∼3% of their initial volume after 20 min. There was no significant difference in the rates of RVD between chondrocytes in the three zones. Similarly, no difference in the rate of RVD was observed for an osmotic shock from 280 to 250 or 180 mOsm. Chondrocytes isolated from the matrix into medium of 380 mOsm and then exposed to 280 mOsm showed an identical RVD response to that of in situ cells. The RVD response of in situ cells was inhibited by REV 5901. The results suggested that the signalling pathways involved in RVD remained intact after chondrocyte isolation from cartilage and thus it was likely that there was no role for cell-matrix interactions in mediating RVD.
---
paper_title: Biomechanical properties of knee articular cartilage
paper_content:
Structure and properties of knee articular cartilage are adapted to the stresses imposed on it during physiological activities. In this study, we describe site- and depth-dependence of the biomechanical properties of bovine knee articular cartilage. We also investigate the effects of tissue structure and composition on the biomechanical parameters as well as characterize experimentally and numerically the compression-tension nonlinearity of the cartilage matrix. In vitro mechano-optical measurements of articular cartilage in unconfined compression geometry are conducted to obtain material parameters, such as thickness, Young's and aggregate modulus, or Poisson's ratio of the tissue. The experimental results revealed significant site- and depth-dependent variations in recorded parameters. After enzymatic modification of matrix collagen or proteoglycans, our results show that collagen primarily controls the dynamic tissue response while proteoglycans have a greater effect on the static properties. Experimental measurements in compression and tension suggest a nonlinear compression-tension behavior of articular cartilage in the direction perpendicular to the articular surface. A fibril-reinforced poroelastic finite element model was used to capture the experimentally found compression-tension nonlinearity of articular cartilage.
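The fibril-reinforced poroelastic model referred to here belongs to a family in which the fibril network stiffens with tensile strain and carries no load in compression; a commonly used form of that assumption (a sketch, not necessarily the exact law of this study) is
E_f(\varepsilon_f) = E_f^{0} + E_f^{\varepsilon}\,\varepsilon_f \;\; \text{for} \;\; \varepsilon_f > 0, \qquad E_f = 0 \;\; \text{for} \;\; \varepsilon_f \le 0,
so that compression is governed mainly by the nonfibrillar matrix and fluid pressurization while tension is dominated by the strain-stiffening fibrils, which is how such models reproduce the compression-tension nonlinearity described above.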
---
paper_title: Is classical consolidation theory applicable to articular cartilage deformation?
paper_content:
In this paper, classical consolidation theory has been used to investigate the time-dependent response of articular cartilage to static loading. An experimental technique was developed to measure simultaneously the matrix internal pressure and creep strain under conditions of one-dimensional consolidation. This is the first measurement of the internal stress state of loaded cartilage. It is demonstrated that under static compression the applied load is shared by the components of the matrix (i.e. water, the proteoglycans, and the collagen fibrillar meshwork), during which time a maximum hydrostatic excess pore pressure is developed as initial water exudation occurs. This pressure decays as water is further exuded from the matrix and effective consolidation begins with a progressive transfer of the applied stress from water to the collagen fibrils and proteoglycan gel. Consolidation is completed when the hydrostatic excess pore pressure is reduced to zero and the solid components sustain in full the applied load.
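For reference, the classical one-dimensional consolidation theory invoked here reduces to the Terzaghi diffusion equation for the hydrostatic excess pore pressure u(z,t) (standard form, with symbols as conventionally defined rather than taken from this paper):
\frac{\partial u}{\partial t} = c_v\,\frac{\partial^{2} u}{\partial z^{2}}, \qquad c_v = \frac{k}{m_v\,\gamma_w},
where c_v is the coefficient of consolidation, k the hydraulic conductivity, m_v the compressibility of the solid skeleton and \gamma_w the unit weight of the fluid; consolidation is complete when u has decayed to zero and the applied load is carried entirely by the solid constituents, which is the end point described in this study.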
---
paper_title: Physical properties of rabbit articular cartilage after transection of the anterior cruciate ligament
paper_content:
The effect of unilateral transection of the anterior cruciate ligament on the confined compression and swelling properties of the distal femoral articular cartilage of skeletally mature rabbits at 9 weeks after surgery was determined. Gross morphological grading of the transected and contralateral control distal femora stained with India ink confirmed that cartilage degeneration had been induced by ligament transection. Osteochondral cores, 1.8 mm in diameter, were harvested from the medial femoral condyles. The modulus, permeability, and electrokinetic (streaming potential) coefficient of the articular cartilage of the osteochondral cores were assessed by confined compression creep experiments. The properties (mean ± SD) of control cartilage were: confined compression modulus, 0.75 ± 0.28 MPa; hydraulic permeability, 0.63 ± 0.28 × 10⁻¹⁵ m²/(Pa·s); and electrokinetic coefficient, 0.16 ± 0.31 × 10⁻⁹ V/Pa. In transected knees, the modulus was reduced by 18% (p = 0.04), while the permeability and electrokinetic coefficient were not detectably altered. The change in modulus was accompanied by a trend (p = 0.07) toward a decrease (-11%) in the glycosaminoglycan density within the tissue, a significant increase (p < 0.001) in the water content of the cartilage after equilibration in 1 × phosphate buffered saline from 70.3 ± 4.1% in control knees to 75.2 ± 4.0% in transected knees, and little further swelling after tissue equilibration in hypotonic saline. The alterations in the physical properties of the articular cartilage after transection of the anterior cruciate ligament in the rabbit show trends similar to those observed in human and other animal models of osteoarthritis and provide further support for the use of this model in the study of cartilage degeneration.
---
paper_title: Modeling the Matrix of Articular Cartilage Using a Continuous Fiber Angular Distribution Predicts Many Observed Phenomena
paper_content:
A number of theoretical frameworks embodying the disparity between tensile and compressive properties of cartilage have been proposed, accounting for the collagen fibers implicitly [1,2] or explicitly [3–5]. These models generally propose discrete fiber families to describe the collagen matrix. They are able to capture the most salient features of the cartilage mechanical response, namely, the tension-compression nonlinearity of the stress-strain curve [6].
---
paper_title: Depth-dependent Compressive Equilibrium Properties of Articular Cartilage Explained by its Composition
paper_content:
For this study, we hypothesized that the depth-dependent compressive equilibrium properties of articular cartilage are the inherent consequence of its depth-dependent composition, and not the result of depth-dependent material properties. To test this hypothesis, our recently developed fibril-reinforced poroviscoelastic swelling model was expanded to include the influence of intra- and extra-fibrillar water content, and the influence of the solid fraction on the compressive properties of the tissue. With this model, the depth-dependent compressive equilibrium properties of articular cartilage were determined, and compared with experimental data from the literature. The typical depth-dependent behavior of articular cartilage was predicted by this model. The effective aggregate modulus was highly strain-dependent. It decreased with increasing strain at low strains and increased with increasing strain at high strains. This effect was more pronounced with increasing distance from the articular surface. The main insight from this study is that the depth-dependent material behavior of articular cartilage can be obtained from its depth-dependent composition only. This eliminates the need for the assumption that the material properties of the different constituents themselves vary with depth. Such insights are important for understanding cartilage mechanical behavior, cartilage damage mechanisms and tissue engineering studies.
---
paper_title: Implementation of subject‐specific collagen architecture of cartilage into a 2D computational model of a knee joint—data from the osteoarthritis initiative (OAI)
paper_content:
A subject-specific collagen architecture of cartilage, obtained from T2 mapping of 3.0 T magnetic resonance imaging (MRI; data from the Osteoarthritis Initiative), was implemented into a 2D finite element model of a knee joint with fibril-reinforced poroviscoelastic cartilage properties. For comparison, we created two models with alternative collagen architectures, addressing the potential inaccuracies caused by the nonoptimal estimation of the collagen architecture from MRI. Also, two models with constant depth-dependent zone thicknesses obtained from the literature were created. The mechanical behavior of the models was analyzed and compared under axial impact loading of 846 N. Compared to the model with patient-specific collagen architecture, the cartilage model without tangentially oriented collagen fibrils in the superficial zone showed up to 69% decrease in maximum principal stress and fibril strain and 35% and 13% increase in maximum principal strain and pore pressure, respectively, in the superficial layers of the cartilage. The model with increased thickness for the superficial and middle zones, as obtained from the literature, demonstrated at most 73% increase in stress, 143% increase in fibril strain, and 26% and 23% decrease in strain and pore pressure, respectively, in the intermediate cartilage. The present results demonstrate that the computational model of a knee joint with the collagen architecture of cartilage estimated from patient-specific MRI or literature leads to different stress and strain distributions. The findings also suggest that minor errors in the analysis of collagen architecture from MRI, for example due to the analysis method or MRI resolution, can lead to alterations in knee joint stresses and strains.
---
paper_title: Complex nature of stress inside loaded articular cartilage
paper_content:
Abstract We show that in the early stages of loading of the cartilage matrix extensive water exudation and related physicochemical and structural changes give rise to a distinctly consolidatable system. By enzymatically modifying the pre-existing osmotic condition of the normal matrix and measuring its hydrostatic excess pore pressure, we have studied the exact influence of physicochemistry on the consolidation of cartilage. We argue that the attainment of a certain minimum level of swelling stiffness of the solid skeleton, which is developed at the maximum hydrostatic excess pore pressure of the fluid, controls the effective consolidation of articular cartilage. Three related but distinct stresses are developed during cartilage deformation, namely (1) the swelling stress in the coupled proteoglycan/collagen skeleton in the early stages of deformation, (2) the hydrostatic excess pore pressure carried by the fluid component, and (3) the effective stress generated on top of the minimum value of the swelling stress in the consolidation stages following the attainment of the fluid's maximum pore pressure. The minimum value of the swelling pressure is in turn generated over and above the intrinsic osmotic pressure in the unloaded matrix. The response of the hyaluronidase-digested matrix relative to its intact state again highlights the important influence of the osmotic pressure and the coefficient of permeability, both of which are related to the volume fraction of proteoglycans on cartilage deformation, and therefore its ability to function as an effective stress-redistributing layer above the subchondral bone.
---
paper_title: The collagenous architecture of articular cartilage--a synthesis of ultrastructure and mechanical function.
paper_content:
The fibrillar ultrastructure within the general matrix of articular cartilage has been examined in the stressed root region of predetermined notches propagating under tension in directions both perpendicular and parallel to the articular surface. From the different ultrastructural responses induced by the 2 notch geometries, it has been possible to further clarify the relationship between structure and load bearing function in normal articular cartilage, and identify features of the collagenous architecture that seem directly related to a loss of load bearing function associated with both osteoarthritis and nonprogressive degeneration.
---
paper_title: A hyperelastic biphasic fibre-reinforced model of articular cartilage considering distributed collagen fibre orientations: continuum basis, computational aspects and applications
paper_content:
Cartilage is a multi-phase material composed of fluid and electrolytes (68-85% by wet weight), proteoglycans (5-10% by wet weight), chondrocytes, collagen fibres and other glycoproteins. The solid ...
---
paper_title: Traversing the intact/fibrillated joint surface: a biomechanical interpretation
paper_content:
Cartilage taken from the osteoarthritic bovine patellae was used to investigate the progression of change in the collagenous architecture associated with the development of fibrillated lesions. Differential interference contrast optical microscopy using fully hydrated radial sections revealed a continuity in the alteration of the fibrillar architecture in the general matrix consistent with the progressive destructuring of a native radial arrangement of fibrils repeatedly interconnected in the transverse direction via a non-entwinement-based linking mechanism. This destructuring is shown to occur in the still intact regions adjacent to the disrupted lesion thus rendering them more vulnerable to radial rupture. Two contrasting modes of surface rupture were observed and these are explained in terms of the absence or presence of a skewed structural weakening of the intermediate zone. A mechanism of surface rupture initiation based on simple bi-layer theory is proposed to account for the intensification of surface ruptures observed in the intact regions on advancing towards the fibrillation front. Focusing specifically on the primary collagen architecture in the cartilage matrix, this study proposes a pathway of change from intact to overt disruption within a unified structural framework.
---
paper_title: Stress-sharing between the fluid and solid components of articular cartilage under varying rates of compression
paper_content:
This paper investigates the factors affecting the mechanical behavior of the articular matrix with special emphasis on the effect of compressive strain-rate on the short- and long-term responses of the fluid and the solid components. The relationships expressed in the general theory of one-dimensional consolidation are generalized to account for strain-rate in the deformation process, with the result that the stiffness due to the fluid and the solid components, and a parameter representing the degree of drag, can be calculated explicitly.
---
paper_title: Chondrocyte deformation and local tissue strain in articular cartilage: A confocal microscopy study
paper_content:
It is well accepted that mechanical forces can modulate the metabolic activity of chondrocytes, although the specific mechanisms of mechanical signal transduction in articular cartilage are still unknown. One proposed pathway through which chondrocytes may perceive changes in their mechanical environment is directly through cellular deformation. An important step toward understanding the role of chondrocyte deformation in signal transduction is to determine the changes in the shape and volume of chondrocytes during applied compression of the tissue. Recently, a technique was developed for quantitative morphometry of viable chondrocytes within the extracellular matrix using three-dimensional confocal scanning laser microscopy. In the present study, this method was used to quantify changes in chondrocyte morphology and local tissue deformation in the surface, middle, and deep zones in explants of canine articular cartilage subjected to physiological levels of matrix deformation. The results indicated that at 15% surface-to-surface equilibrium strain in the tissue, a similar magnitude of local tissue strain occurs in the middle and deep zones. In the surface zone, local strains of 19% were observed, indicating that the compressive stiffness of the surface zone is significantly less than that of the middle and deep zones. With this degree of tissue deformation, significant decreases in cellular height of 26, 19, and 20% and in cell volume of 22, 16, and 17% were observed in the surface, middle, and deep zones, respectively. The deformation of chondrocytes in the surface zone was anisotropic, with significant lateral expansion occurring in the direction perpendicular to the local split-line pattern. When compression was removed, there was complete recovery of cellular morphology in all cases. These observations support the hypothesis that deformation of chondrocytes or a change in their volume may occur during in vivo joint loading and may have a role in the mechanical signal transduction pathway of articular cartilage.
---
paper_title: Maturation of collagen fibril network structure in tibial and femoral cartilage of rabbits.
paper_content:
OBJECTIVE: The structure and composition of articular cartilage change during development and growth, as well as in response to varying loading conditions. These changes modulate the functional properties of cartilage. We studied maturation-related changes in the collagen network organization of cartilage as a function of tissue depth. DESIGN: Articular cartilage from the tibial medial plateaus and femoral medial condyles of female New Zealand white rabbits was collected from six age-groups: 4 weeks (n=30), 6 weeks (n=30), 3 months (n=24), 6 months (n=24), 9 months (n=27) and 18 months (n=19). Collagen fibril orientation, parallelism (anisotropy) and optical retardation were analyzed with polarized light microscopy. Differences in the development of depth-wise collagen organization in consecutive age-groups and the two joint locations were compared statistically. RESULTS: The collagen fibril network of articular cartilage undergoes significant changes during maturation. The most prominent changes in collagen architecture, as assessed by orientation, parallelism and retardation, were noticed between the ages of 4 and 6 weeks in tibial cartilage and between 6 weeks and 3 months in femoral cartilage, i.e., orientation became more perpendicular-to-surface, and parallelism and retardation increased, with changes being most prominent in the deep zone. At the age of 6 weeks, tibial cartilage had a more perpendicular-to-surface orientation in the middle and deep zones than femoral cartilage (P<0.001) and higher parallelism throughout the tissue depth (P<0.001), while femoral cartilage exhibited more parallel-to-surface orientation (P<0.01) above the deep zone after maturation. Optical retardation of collagen was higher in tibial than in femoral cartilage at the ages of 4 and 6 weeks (P<0.001), while at older ages, retardation below the superficial zone in the femoral cartilage became higher than in the tibial cartilage. CONCLUSIONS: During maturation, there is a significant modulation of collagen organization in articular cartilage which occurs earlier in tibial than in femoral cartilage, and is most pronounced in the deep zone.
---
paper_title: A Study of the Structural Response of Wet Hyaline Cartilage to Various Loading Situations
paper_content:
A direct view has been obtained of the manner in which the fibrous components and chondrocytes in hyaline cartilage respond to the application of uniaxial tensile loading and plane-strain compressive loading. A micro-mechanical testing device has been developed which inserts directly into the stage of a high-resolution optical microscope fitted with Nomarski interference contrast and this has permitted simultaneous morphological and mechanical observations to be conducted on articular cartilage maintained in its wet functional condition. Aligned and crimped fibrous arrays surround the deeper chondrocytes and can be observed to undergo well-defined geometric changes with applied stress. It is thought that these arrays may act as displacement or strain sensors transmitting mechanical information from the bulk matrix to their associated cells thus inducing a specific metabolic response. The process of tissue recovery following sustained high levels of compressive loading can also be observed with this experimen...
---
paper_title: Comparison of single-phase isotropic elastic and fibril-reinforced poroelastic models for indentation of rabbit articular cartilage.
paper_content:
Classically, single-phase isotropic elastic (IE) model has been used for in situ or in vivo indentation analysis of articular cartilage. The model significantly simplifies cartilage structure and properties. In this study, we apply a fibril-reinforced poroelastic (FRPE) model for indentation to extract more detailed information on cartilage properties. Specifically, we compare the information from short-term (instantaneous) and long-term (equilibrium) indentations, as described here by IE and FRPE models. Femoral and tibial cartilage from rabbit (age 0-18 months) knees (n=14) were tested using a plane-ended indenter (diameter=0.544 mm). Stepwise creep tests were conducted to equilibrium. Single-phase IE solution for indentation was used to derive instantaneous modulus and equilibrium (Young's) modulus for the samples. The classical and modified Hayes' solutions were used to derive values for the indentation moduli. In the FRPE model, the indentation behavior was sample-specifically described with three material parameters, i.e. fibril network modulus, non-fibrillar matrix modulus and permeability. The instantaneous and fibril network modulus, and the equilibrium Young's modulus and non-fibrillar matrix modulus showed significant (p<0.01) linear correlations of R(2)=0.516 and 0.940, respectively (Hayes' solution) and R(2)=0.531 and 0.960, respectively (the modified Hayes' solution). No significant correlations were found between the non-fibrillar matrix modulus and instantaneous moduli or between the fibril network modulus and the equilibrium moduli. These results indicate that the instantaneous indentation modulus (IE model) provides information on tensile stiffness of collagen fibrils in cartilage while the equilibrium modulus (IE model) is a significant measure for stiffness of PG matrix. Thereby, this study highlights the feasibility of a simple indentation analysis.
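A minimal sketch of the single-phase isotropic elastic analysis referred to above, using Hayes' solution for a rigid plane-ended indenter. The scaling factor kappa(a/h, nu) must be read from Hayes' tables for the measured aspect ratio; the load, depth, Poisson's ratio and kappa below are illustrative assumptions, and only the indenter radius matches the 0.544 mm diameter quoted in the abstract.

def hayes_young_modulus(load_N, indent_m, radius_m, poisson, kappa):
    # Hayes' solution for a rigid, plane-ended cylindrical indenter on an
    # elastic layer bonded to a rigid substrate: E = P (1 - nu^2) / (2 a kappa w),
    # where kappa = kappa(a/h, nu) is the tabulated finite-thickness correction.
    return load_N * (1.0 - poisson ** 2) / (2.0 * radius_m * kappa * indent_m)

# Illustrative numbers: 10 mN equilibrium load, 20 um indentation depth,
# 0.272 mm indenter radius, nu = 0.2, kappa = 1.8  ->  roughly 0.5 MPa.
E_equilibrium_Pa = hayes_young_modulus(10e-3, 20e-6, 0.272e-3, 0.2, 1.8)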
---
paper_title: Fourier transform infrared imaging spectroscopy investigations in the pathogenesis and repair of cartilage.
paper_content:
Significant complications in the management of osteoarthritis (OA) are the inability to identify early cartilage changes during the development of the disease, and the lack of techniques to evaluate the tissue response to therapeutic and tissue engineering interventions. In recent studies several spectroscopic parameters have been elucidated by Fourier transform infrared imaging spectroscopy (FT-IRIS) that enable evaluation of molecular and compositional changes in human cartilage with progressively severe OA, and in repair cartilage from animal models. FT-IRIS permits evaluation of early-stage matrix changes in the primary components of cartilage, collagen and proteoglycan on histological sections at a spatial resolution of approximately 6.25 µm. In osteoarthritic cartilage, the collagen integrity, monitored by the ratio of peak areas at 1338 cm^-1/Amide II, was found to correspond to the histological Mankin grade, the gold standard scale utilized to evaluate cartilage degeneration. Apparent matrix degradation was observable in the deep zone of cartilage even in the early stages of OA. FT-IRIS studies also found that within the territorial matrix of the cartilage cells (chondrocytes), proteoglycan content increased with progression of cartilage degeneration while the collagen content remained the same, but the collagen integrity decreased. Regenerative (repair) tissue from microfracture treatment of an equine cartilage defect showed significant changes in collagen distribution and loss in proteoglycan content compared to the adjacent normal cartilage, with collagen fibrils demonstrating a random orientation in most of the repair tissue. These studies demonstrate that FT-IRIS is a powerful technique that can provide detailed ultrastructural information on heterogeneous tissues such as diseased cartilage and thus has great potential as a diagnostic modality for cartilage degradation and repair.
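A sketch of how an integrated peak-area ratio such as the 1338 cm^-1/Amide II collagen-integrity parameter can be computed for one pixel spectrum. The integration windows below are common literature choices and are assumptions here, not limits prescribed by the paper.

import numpy as np

def band_area(wavenumber_cm1, absorbance, lo, hi):
    # Integrated absorbance between two wavenumbers (trapezoidal rule).
    mask = (wavenumber_cm1 >= lo) & (wavenumber_cm1 <= hi)
    return abs(np.trapz(absorbance[mask], wavenumber_cm1[mask]))

def collagen_integrity_ratio(wavenumber_cm1, absorbance):
    # Assumed windows: 1338 cm^-1 band ~1300-1356 cm^-1, Amide II ~1490-1590 cm^-1.
    band_1338 = band_area(wavenumber_cm1, absorbance, 1300.0, 1356.0)
    amide_ii = band_area(wavenumber_cm1, absorbance, 1490.0, 1590.0)
    return band_1338 / amide_ii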
---
paper_title: Mechanical characterization of articular cartilage by combining magnetic resonance imaging and finite-element analysis—a potential functional imaging technique
paper_content:
Magnetic resonance imaging (MRI) provides a method for non-invasive characterization of cartilage composition and structure. We aimed to see whether T1 and T2 relaxation times are related to proteoglycan (PG) and collagen-specific mechanical properties of articular cartilage. Specifically, we analyzed whether variations in the depthwise collagen orientation, as assessed by the laminae obtained from T2 profiles, affect the mechanical characteristics of cartilage. After MRI and unconfined compression tests of human and bovine patellar cartilage samples, fibril-reinforced poroviscoelastic finite-element models (FEM), with depthwise collagen orientations implemented from quantitative T2 maps (3 laminae for human, 3–7 laminae for bovine), were constructed to analyze the non-fibrillar matrix modulus (PG specific), fibril modulus (collagen specific) and permeability of the samples. In bovine cartilage, the non-fibrillar matrix modulus (R = −0.64, p < 0.05) as well as the initial permeability (R = 0.70, p < 0.05) correlated with T1. In bovine cartilage, T2 correlated positively with the initial fibril modulus (R = 0.62, p = 0.05). In human cartilage, the initial fibril modulus correlated negatively (R = −0.61, p < 0.05) with T2. Based on the simulations, cartilage with a complex collagen architecture (5 or 7 laminae), leading to high bulk T2 due to magic angle effects, provided higher compressive stiffness than tissue with a simple collagen architecture (3 laminae). Our results suggest that T1 reflects PG-specific mechanical properties of cartilage. High T2 is characteristic of soft cartilage with a classical collagen architecture. Contradictorily, high bulk T2 can also be found in stiff cartilage with a multilaminar collagen fibril network. By merging MRI and FEM, the present study establishes a step toward functional imaging of articular cartilage.
---
paper_title: MRI assessment of cartilage ultrastructure
paper_content:
In T2-weighted MRI images joint cartilage can appear laminated. The multilaminar appearance is visualized as zones of different intensity. This appearance is based on the dipolar interaction of water molecules within cartilage zones of different collageneous network structures. Therefore, the MR visualization of zones of anisotropic arrangement of the collagen fibers depends upon their orientation to the static magnetic field (magic-angle effect). The aim of this article is to demonstrate the potential of high-resolution MRI for characterizing cartilage network structuring and biomechanical properties. Information equivalent to that from polarization light microscopy can be derived noninvasively. Based on NMR microscopic (microMRI) data, potential new possibilities of MRI for quantitative assessment of collagen structuring and intracartilagenous load distribution are presented. These methods use MR intensity angle dependence and load influence on cartilage visualization. Alternatively to the determination of mechanical parameters from cartilage deformation, it is demonstrated that stress distribution and biomechanical properties can be derived in principle from the local intensity variation of anisotropic fiber orientation zones. The limitations with respect to a clinical application of the proposed methods are discussed.
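The magic-angle effect mentioned above stems from the orientation dependence of the residual dipolar coupling of water protons associated with ordered collagen; schematically (a standard NMR relation, not a formula given in this article):

\frac{1}{T_{2,\mathrm{dipolar}}} \;\propto\; \left(3\cos^2\theta - 1\right)^2

where \theta is the angle between the local collagen fibril direction and the static field B_0; the dipolar contribution vanishes near \theta \approx 54.7^\circ, so T_2 lengthens and the corresponding zone appears brighter in T2-weighted images.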
---
paper_title: Biochemical (and Functional) Imaging of Articular Cartilage
paper_content:
Over the coming decades nondestructive biochemical imaging by magnetic resonance imaging (MRI) will provide an adjunct or surrogate for the destructive histologic and biochemical assays used today. A number of MRI methods demonstrate image contrast that, although influenced by the biochemical composition, is not normally specific to a particular measure of the biochemical state. The most widely used of these is T2-weighted imaging, which variably reveals collagen ultrastructure, hydration (or collagen content), and, to a lesser extent, glycosaminoglycan (GAG) concentration (each of these biochemical metrics is an important determinant of the functional integrity of cartilage). The lack of specificity of this technique (and others discussed herein) confounds efforts to improve strategies for evaluating cartilage. However, three methods permit a very specific measure of the cartilage biochemical state. Each of these three methods, explored in detail in this article, is rooted in a biophysical theory that relates the image signal intensity to a specific biochemical feature. Proton-density imaging directly measures water content (hydration), a parameter that might increase approximately 5% with significant degeneration. Magic-angle imaging, in which the angle dependence of T2 is measured, can provide a specific measure of collagen (or macromolecular) ultrastructure. The difficulty in getting the angle dependence presently precludes its use clinically. Delayed gadolinium-enhanced MRI of cartilage provides a specific measure of the distribution of GAGs. This method measures the distribution of a charged contrast agent, which in turn reflects the distribution of charge associated with GAG. This technique can be used in a clinical setting, and ongoing studies will explore its utility in monitoring therapeutic efficacy and disease progression. Although none of these techniques are presently in routine clinical use, emerging data provide promise that the future will see patient-specific biochemical analysis of cartilage, an outcome almost unimaginable 20 years ago.
---
paper_title: Articular cartilage superficial zone collagen birefringence reduced and cartilage thickness increased before surface fibrillation in experimental osteoarthritis
paper_content:
Objectives: To investigate articular cartilage collagen network, thickness of birefringent cartilage zones, and glycosaminoglycan concentration in macroscopically normal looking knee joint cartilage of young beagles subjected to experimental slowly progressive osteoarthritis (OA). Methods: OA was induced by a tibial 30° valgus osteotomy in 15 female beagles at the age of 3 months. Fifteen sisters were controls. Cartilage specimens were collected seven (Group 1) and 18 months (Group 2) postoperatively. Collagen induced optical path difference and cartilage zone thickness measurements were determined from histological sections of articular cartilage with smooth and intact surface by computer assisted quantitative polarised light microscopy. Volume density of cartilage collagen fibrils was determined by image analysis from transmission electron micrographs and content of glycosaminoglycans by quantitative digital densitometry from histological sections. Results: In the superficial zone of the lateral tibial and femoral cartilage, the collagen induced optical path difference (birefringence) decreased by 19 to 71% (p < 0.05) seven months postoperatively. This suggests that severe superficial collagen fibril network deterioration took place, as 18 months postoperatively, macroscopic and microscopic OA was present in many cartilage areas. Thickness of the uncalcified cartilage increased while the superficial zone became thinner in the same sites. In operated dogs, glycosaminoglycan content first increased (Group 1) in the lateral tibial condyle and then decreased (Group 2) (p < 0.05). Conclusion: In this OA model, derangement of the superficial zone collagen network was the probable reason for birefringence reduction. This change occurred well before macroscopic OA.
---
paper_title: Effect of superficial collagen patterns and fibrillation of femoral articular cartilage on knee joint mechanics-a 3D finite element analysis.
paper_content:
Collagen fibrils of articular cartilage have specific depth-dependent orientations and the fibrils bend in the cartilage surface to exhibit split-lines. Fibrillation of superficial collagen takes place in osteoarthritis. We aimed to investigate the effect of superficial collagen fibril patterns and collagen fibrillation of cartilage on stresses and strains within a knee joint. A 3D finite element model of a knee joint with cartilage and menisci was constructed based on magnetic resonance imaging. The fibril-reinforced poroviscoelastic material properties with depth-dependent collagen orientations and split-line patterns were included in the model. The effects of joint loading on stresses and strains in cartilage with various split-line patterns and medial collagen fibrillation were simulated under axial impact loading of 1000 N. In the model, the collagen fibrils resisted strains along the split-line directions. This increased also stresses along the split-lines. On the contrary, contact and pore pressures were not affected by split-line patterns. Simulated medial osteoarthritis increased tissue strains in both medial and lateral femoral condyles, and contact and pore pressures in the lateral femoral condyle. This study highlights the importance of the collagen fibril organization, especially that indicated by split-line patterns, for the weight-bearing properties of articular cartilage. Osteoarthritic changes of cartilage in the medial femoral condyle created a possible failure point in the lateral femoral condyle. This study provides further evidence on the importance of the collagen fibril organization for the optimal function of articular cartilage.
---
paper_title: Indentation diagnostics of cartilage degeneration.
paper_content:
OBJECTIVE: Mechanical indentation and ultrasound (US) indentation instruments have been introduced for quantitative assessment of cartilage properties in vivo. In this study, we compared capabilities of these instruments to determine properties of healthy and spontaneously degenerated human patellar cartilage in situ and to diagnose the early stages of osteoarthritis (OA). DESIGN: Six anatomical sites were localized from human patellae (N=14). By determining the force by which the tissue resists constant deformation (F(IND)), a mechanical indentation instrument was used to measure the compressive dynamic stiffness of cartilage. Further, the dynamic modulus (E(US)) and the US reflection coefficient of cartilage surface (R(US)) were measured with a US indentation instrument. For reference, Young's modulus and dynamic modulus were determined from cartilage disks using unconfined compression geometry. Proteoglycan and collagen contents of samples were analyzed microscopically. The samples were divided into three categories (healthy, early degeneration, and advanced degeneration) based on the Osteoarthritis Research Society International (OARSI) OA-grading. RESULTS: Parameters R(US), E(US) and F(IND) were significantly associated with the histological, compositional and mechanical properties of cartilage (|r|=0.28-0.72, n=73-75, P<0.05). Particularly, R(US) was able to discern degeneration of the samples with high sensitivity (0.77) and specificity (0.98). All parameters, except R(US), showed statistically significant site-dependent variation in healthy cartilage. CONCLUSIONS: US reflection measurement shows potential for diagnostics of early OA as no site-matched reference values are needed. In addition, the high linear correlations between indentation and reference measurements suggest that these arthroscopic indentation instruments can be used for quantitative evaluation of cartilage mechanical properties, e.g., after cartilage repair surgery.
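For orientation, the surface reflection parameter R(US) used above corresponds, for normal plane-wave incidence on a smooth interface, to the standard acoustic-impedance mismatch (this expression is textbook acoustics, not a formula stated in the abstract):

R \;=\; \frac{Z_{\mathrm{cartilage}} - Z_{\mathrm{fluid}}}{Z_{\mathrm{cartilage}} + Z_{\mathrm{fluid}}}

Degeneration and fibrillation of the superficial zone reduce the measured surface reflection, which is why R(US) can discern degenerated samples without needing site-matched reference values.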
---
paper_title: In situ compressive stiffness, biochemical composition, and structural integrity of articular cartilage of the human knee joint
paper_content:
Objective: Reduction of compressive stiffness of articular cartilage has been reported as one of the first signs of cartilage degeneration. For the measurement of in situ compressive stiffness, a hand-held indentation probe has recently been developed and baseline data for macroscopically normal knee joint cartilage were provided. However, the histological stage of degeneration of the measured cartilage was not known. The purpose of this study was to investigate whether there is a relationship between the in situ measured compressive stiffness, the histological stage of degeneration, and the biochemical composition of articular cartilage. Design: Instantaneous compressive stiffness was measured for the articular cartilage of 24 human cadaver knees. Additionally, biochemical composition (total proteoglycan and collagen content) and histological appearance (according to the Mankin score) were assessed for each measurement location. Results: Despite visually normal surfaces, various histological signs of degeneration were present. A high correlation between Mankin score and cartilage stiffness was observed for the lateral patellar groove (R^2=0.81), the medial (R^2=0.83) and the lateral femoral condyle (R^2=0.71), whereas a moderate correlation was found for the medial patellar groove (R^2=0.44). No correlation was observed between biochemical composition and cartilage compressive stiffness. Conclusions: Our results are in agreement with others and show that the instantaneous compressive stiffness is primarily dependent on the integrity of the extracellular matrix, and not on the content of the major cartilage constituents. The high correlation between stiffness and Mankin score in mild osteoarthrosis suggests that the stage of cartilage degeneration can be assessed quantitatively with the hand-held indentation probe. Moderate and severe cases of osteoarthrosis remain to be investigated.
---
paper_title: A nonlinear biphasic fiber-reinforced porohyperviscoelastic model of articular cartilage incorporating fiber reorientation and dispersion.
paper_content:
A nonlinear biphasic fiber-reinforced porohyperviscoelastic (BFPHVE) model of articular cartilage incorporating fiber reorientation effects during applied load was used to predict the response of ovine articular cartilage at relatively high strains (20%). The constitutive material parameters were determined using a coupled finite element-optimization algorithm that utilized stress relaxation indentation tests at relatively high strains. The proposed model incorporates the strain-hardening, tension-compression, permeability, and finite deformation nonlinearities that inherently exist in cartilage, and accounts for effects associated with fiber dispersion and reorientation and intrinsic viscoelasticity at relatively high strains. A new optimization cost function was used to overcome problems associated with large peak-to-peak differences between the predicted finite element and experimental loads that were due to the large strain levels utilized in the experiments. The optimized material parameters were found to be insensitive to the initial guesses. Using experimental data from the literature, the model was also able to predict both the lateral displacement and reaction force in unconfined compression, and the reaction force in an indentation test with a single set of material parameters. Finally, it was demonstrated that neglecting the effects of fiber reorientation and dispersion resulted in poorer agreement with experiments than when they were considered. There was an indication that the proposed BFPHVE model, which includes the intrinsic viscoelasticity of the nonfibrillar matrix (proteoglycan), might be used to model the behavior of cartilage up to relatively high strains (20%). The maximum percentage error between the indentation force predicted by the FE model using the optimized material parameters and that measured experimentally was 3%.
---
paper_title: The Effect of Matrix Tension-Compression Nonlinearity and Fixed Negative Charges on Chondrocyte Responses in Cartilage
paper_content:
Thorough analyses of the mechano-electrochemical interaction between articular cartilage matrix and the chondrocytes are crucial to understanding the signal transduction mechanisms that modulate the cell metabolic activities and biosynthesis. Attempts have been made to model the chondrocytes embedded in the collagen-proteoglycan extracellular matrix to determine the distribution of local stress-strain field, fluid pressure and the time-dependent deformation of the cell. To date, these models still have not taken into account a remarkable characteristic of the cartilage extracellular matrix arising from the organization of the collagen fiber architecture, now known as the tension-compression nonlinearity (TCN) of the tissue, as well as the effect of negative charges attached to the proteoglycan molecules, and the cell cytoskeleton that interacts with mobile ions in the interstitial fluid to create osmotic and electro-kinetic events in and around the cells. In this study, we proposed a triphasic, multi-scale, finite element model incorporating the Conewise Linear Elasticity that can describe the various known coupled mechanical, electrical and chemical events, while at the same time representing the TCN of the extracellular matrix. The model was employed to perform a detailed analysis of the chondrocytes' deformational and volume responses, and to quantitatively describe the mechano-electrochemical environment of these cells. Such a model describes contributions of the known detailed microstructure and composition of articular cartilage. Expectedly, results from model simulations showed substantial effects of the matrix TCN on the cell deformational and volume change response. A low compressive Poisson's ratio of the cartilage matrix exhibiting TCN resulted in dramatic recoiling behavior of the tissue under unconfined compression and induced significant volume change in the cell. The fixed charge density of the chondrocyte and the pericellular matrix were also found to play an important role in both the time-dependent and equilibrium deformation of the cell. The pericellular matrix tended to create a uniform osmolarity around the cell and overall amplified the cell volume change. It is concluded that the proposed model can be a useful tool that allows detailed analysis of the mechano-electrochemical interactions between the chondrocytes and their surrounding extracellular matrix, which leads to more quantitative insights into cell mechano-transduction.
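As a schematic of the tension-compression nonlinearity captured by the Conewise Linear Elasticity description, the fibrillar direction can be given different moduli in tension and compression; the bilinear form below illustrates the idea and is not the constitutive law used in the paper:

\sigma_f(\varepsilon_f) \;=\;
\begin{cases}
E_{+}\,\varepsilon_f, & \varepsilon_f \ge 0 \quad (\text{tension}, \; E_{+} \gg E_{-}) \\
E_{-}\,\varepsilon_f, & \varepsilon_f < 0 \quad (\text{compression})
\end{cases}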
---
paper_title: Depth-wise progression of osteoarthritis in human articular cartilage: investigation of composition, structure and biomechanics
paper_content:
Objective: Osteoarthritis (OA) is characterized by the changes in structure and composition of articular cartilage. However, it is not fully known how the two major components of the cartilage solid matrix, i.e., collagen and proteoglycans (PGs), change with depth during OA progression. Further, it is unknown how the depth-wise changes affect local tissue strains during compression. Our aim was to address these issues. Methods: Data from the previous microscopic and biochemical measurements of the collagen content, distribution and orientation, PG content and distribution, water content and histological grade of normal and degenerated human patellar articular cartilage (n=73) were reanalyzed in a depth-wise manner. Using this information, a composition-based finite element (FE) model was used to estimate tissue function solely based on its composition and structure. Results: The orientation angle of collagen fibrils in the superficial zone of cartilage was significantly less parallel to the surface (P ...). Conclusion: For the first time, depth-wise point-by-point statistical comparisons of structure and composition of human articular cartilage were conducted. The present results indicated that early OA is primarily characterized by the changes in collagen orientation and PG content in the superficial zone, while collagen content does not change until OA has progressed to its late stage. Our simulation results suggest that impact loads in the OA joint could create a risk for tissue failure and cell death.
---
paper_title: Fourier transform infrared imaging and MR microscopy studies detect compositional and structural changes in cartilage in a rabbit model of osteoarthritis.
paper_content:
Assessment of subtle changes in proteoglycan (PG) and collagen, the primary macromolecular components of cartilage, which is critical for diagnosis of the early stages of osteoarthritis (OA), has so far remained a challenge. In this study we induced osteoarthritic cartilage changes in a rabbit model by ligament transection and medial meniscectomy and monitored disease progression by infrared fiber optic probe (IFOP) spectroscopy, Fourier transform infrared imaging spectroscopy (FT-IRIS), and magnetic resonance imaging (MRI) microscopy. IFOP studies combined with chemometric partial least-squares analysis enabled us to monitor progressive cartilage surface changes from two to twelve weeks post-surgery. FT-IRIS studies of histological sections of femoral condyle cartilage revealed that compared with control cartilage the OA cartilage had significantly reduced PG content 2 and 4 weeks post-surgery, collagen fibril orientation changes 2 and 4 weeks post-surgery, and changes in collagen integrity 2 and 10 weeks post-surgery, but no significant changes in collagen content at any time. MR microscopy studies revealed reduced fixed charge density (FCD), indicative of reduced PG content, in the OA cartilage, compared with controls, 4 weeks post-surgery. A non-significant trend toward higher apparent MT exchange rate, k_m, was also found in the OA cartilage at this time point, suggesting changes in collagen structural features. These two MR findings for FCD and k_m parallel the FT-IRIS findings of reduced PG content and altered collagen integrity, respectively. MR microscopy studies of the cartilage at the 12-week time point also found a trend toward longer T2 values and reduced anisotropy in the deep zone of the OA cartilage, consistent with increased hydration and less ordered collagen. These studies reveal that FT-IRIS and MR microscopy provide complementary data on compositional changes in articular cartilage in the early stages of osteoarthritic degradation.
---
paper_title: A novel method for determination of collagen orientation in cartilage by Fourier transform infrared imaging spectroscopy (FT-IRIS).
paper_content:
OBJECTIVE: The orientation of collagen molecules is an important determinant of their functionality in connective tissues. The objective of the current study is to establish a method to determine the alignment of collagen molecules in histological sections of cartilage by polarized Fourier transform infrared imaging spectroscopy (FT-IRIS), a method based on molecular vibrations. METHODS: Polarized FT-IRIS data obtained from highly oriented tendon collagen were utilized to calibrate the derived spectral parameters. The ratio of the integrated areas of the collagen amide I/II absorbances was used as an indicator of collagen orientation. These data were then applied to FT-IRIS analysis of the orientation of collagen molecules in equine articular cartilage, in equine repair cartilage after microfracture treatment, and in human osteoarthritic cartilage. Polarized light microscopy (PLM), the most frequently utilized technique to evaluate collagen fibril orientation in histological sections, was performed on picrosirius red-stained sections for comparison. RESULTS AND CONCLUSION: Thicknesses of each zone of normal equine cartilage (calculated based on differences in collagen orientation) were equivalent as determined by PLM and FT-IRIS. Comparable outcomes were obtained from the PLM and FT-IRIS analyses of repair and osteoarthritis tissues, whereby similar zonal variations in collagen orientation were apparent for the two methods. However, the PLM images of human osteoarthritic cartilage showed less obvious zonal discrimination and orientation compared to the FT-IRIS images, possibly attributable to the FT-IRIS method detecting molecular orientation changes prior to their manifestation at the microscopic level.
---
paper_title: Characterization of articular cartilage by combining microscopic analysis with a fibril-reinforced finite-element model.
paper_content:
Load-bearing characteristics of articular cartilage are impaired during tissue degeneration. Quantitative microscopy enables in vitro investigation of cartilage structure but determination of tissue functional properties necessitates experimental mechanical testing. The fibril-reinforced poroviscoelastic (FRPVE) model has been used successfully for estimation of cartilage mechanical properties. The model includes realistic collagen network architecture, as shown by microscopic imaging techniques. The aim of the present study was to investigate the relationships between the cartilage proteoglycan (PG) and collagen content as assessed by quantitative microscopic findings, and model-based mechanical parameters of the tissue. Site-specific variation of the collagen network moduli, PG matrix modulus and permeability was analyzed. Cylindrical cartilage samples (n=22) were harvested from various sites of the bovine knee and shoulder joints. Collagen orientation, as quantitated by polarized light microscopy, was incorporated into the finite-element model. Stepwise stress-relaxation experiments in unconfined compression were conducted for the samples, and sample-specific models were fitted to the experimental data in order to determine values of the model parameters. For comparison, Fourier transform infrared imaging and digital densitometry were used for the determination of collagen and PG content in the same samples, respectively. The initial and strain-dependent fibril network moduli as well as the initial permeability correlated significantly with the tissue collagen content. The equilibrium Young's modulus of the nonfibrillar matrix and the strain dependency of permeability were significantly associated with the tissue PG content. The present study demonstrates that modern quantitative microscopic methods in combination with the FRPVE model are feasible methods to characterize the structure-function relationships of articular cartilage.
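The sample-specific fitting described above amounts to a nonlinear least-squares problem: adjust the FRPVE material parameters until the simulated reaction force matches the measured stress-relaxation curve. The sketch below uses a single-exponential toy surrogate in place of the finite-element simulation purely to show the fitting loop; the numbers are synthetic.

import numpy as np
from scipy.optimize import least_squares

def toy_relaxation_force(params, t_s):
    # Toy surrogate for the simulated reaction force (single-exponential
    # relaxation); in the real workflow this would be the FRPVE finite-element
    # model evaluated with a fibril network modulus, matrix modulus and permeability.
    f_eq, f_peak, tau = params
    return f_eq + (f_peak - f_eq) * np.exp(-t_s / tau)

def fit_parameters(t_s, measured_force_N, initial_guess):
    residuals = lambda p: toy_relaxation_force(p, t_s) - measured_force_N
    return least_squares(residuals, initial_guess, bounds=(0.0, np.inf))

t = np.linspace(0.0, 300.0, 200)                       # time, s
measured = toy_relaxation_force((0.5, 2.0, 40.0), t)   # synthetic "data", N
measured += 0.02 * np.random.default_rng(0).normal(size=t.size)
fit = fit_parameters(t, measured, initial_guess=(1.0, 3.0, 10.0))
print(fit.x)  # recovered (equilibrium force, peak force, time constant)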
---
paper_title: Mechanically induced calcium signaling in chondrocytes in situ.
paper_content:
Changes in intracellular calcium (Ca(2+)) concentration, also known as Ca(2+) signaling, have been widely studied in articular cartilage chondrocytes to investigate pathways of mechanotransduction. Various physical stimuli can generate an influx of Ca(2+) into the cell, which in turn is thought to trigger a range of metabolic and signaling processes. In contrast to most studies, the approach used in this study allows for continuous real time recording of calcium signals in chondrocytes in their native environment. Therefore, interactions of cells with the extracellular matrix (ECM) are fully accounted for. Calcium signaling was quantified for dynamic loading conditions and at different temperatures. Peak magnitudes of calcium signals were greater and of shorter duration at 37°C than at 21°C. Furthermore, Ca(2+) signals were involved in a greater percentage of cells in the dynamic compared to the relaxation phases of loading. In contrast to the time-delayed signaling observed in isolated chondrocytes seeded in agarose gel, Ca(2+) signaling in situ is virtually instantaneous in response to dynamic loading. These differences between in situ and in vitro cell signaling responses might provide crucial insight into the role of the ECM in providing pathways of mechanotransduction in the intact cartilage that are absent in isolated cells seeded in gel constructs.
---
paper_title: Changes in spatial collagen content and collagen network architecture in porcine articular cartilage during growth and maturation
paper_content:
Objectives: The present study was designed to reveal changes in the collagen network architecture and collagen content in cartilage during growth and maturation of pigs. Methods: Femoral groove articular cartilage specimens were collected from 4-, 11- and 21-month-old domestic pigs (n=12 in each group). The animal care conditions were kept constant throughout the study. Polarized light microscopy was used to determine the collagen fibril network birefringence, fibril orientation and parallelism. Infrared spectroscopy was used to monitor changes in the spatial collagen content in cartilage tissue. Results: During growth, gradual alterations were recorded in the collagen network properties. At 4 months of age, a major part of the collagen fibrils was oriented parallel to the cartilage surface throughout the tissue. However, the fibril orientation changed considerably as skeletal maturation progressed. At 21 months of age, the fibrils of the deep zone cartilage ran predominantly at right angles to the cartilage surface. The collagen content increased and its depthwise distribution changed during growth and maturation. A significant increase of the collagen network birefringence was observed in the deep tissue at the age of 21 months. Conclusions: The present study revealed dynamic changes of the collagen network during growth and maturation of the pigs. The structure of the collagen network of young pigs gradually approached a network with the classical Benninghoff architecture. The probable explanation for the alterations is growth of the bone epiphysis with simultaneous adaptation of the cartilage to increased joint loading. The maturation of articular cartilage advances gradually with age and offers, in principle, the possibility to influence the quality of the tissue, especially by habitual joint loading. These observations in porcine cartilage may be of significance with respect to the maturation of human articular cartilage.
---
paper_title: Uncertainties in indentation testing of articular cartilage: A fibril-reinforced poroviscoelastic study
paper_content:
Indentation testing provides a quantitative technique to evaluate mechanical characteristics of articular cartilage in situ and in vivo. Traditionally, analytical solutions proposed by Hayes et al. [Hayes WC, Keer LM, Herrmann G, Mockros LF. A mathematical analysis for indentation tests of articular cartilage. J Biomech 1972;5(5):541-51] have been applied for the analysis of indentation measurements, and due to their practicality, they have been used for clinical diagnostics. Using this approach, the elastic modulus is derived based on scaling factors which depend on cartilage thickness, indenter radius and Poisson's ratio, and the cartilage model is assumed isotropic and homogeneous, thereby greatly simplifying the true tissue characteristics. The aim was to investigate the validity of previous model assumptions for indentation testing. Fibril-reinforced poroviscoelastic cartilage (FRPVE) model including realistic tissue characteristics was used to simulate indentation tests. The effects of cartilage inhomogeneity, anisotropy, and indentation velocity on the indentation response were evaluated, and scaling factors from the FRPVE analysis were derived. Subsequently, the validity of scaling factors obtained using the traditional and the FRPVE analyses was studied by calculating indentation moduli for bovine cartilage samples, and comparing these values to those obtained experimentally in unconfined compression testing. Collagen architecture and compression velocity had significant effects on the indentation response. Isotropic elastic analysis gave significantly higher (30-107%) Young's moduli for indentation compared to unconfined compression testing. Modification of Hayes' scaling factors by accounting for cartilage inhomogeneity and anisotropy improved the agreement of Young's moduli obtained for the two test configurations by 14-28%. These results emphasize the importance of realistic cartilage structure and mechanical properties in the indentation analysis. Although it is not possible to fully describe tissue inhomogeneity and anisotropy with just the Young's modulus and Poisson's ratio, accounting for inhomogeneity and anisotropy in these two parameters may help to improve the in vivo characterization of tissue using arthroscopic indentation testing.
---
paper_title: Nondestructive imaging of human cartilage glycosaminoglycan concentration by MRI
paper_content:
Despite the compelling need mandated by the prevalence and morbidity of degenerative cartilage diseases, it is extremely difficult to study disease progression and therapeutic efficacy, either in vitro or in vivo (clinically). This is partly because no techniques have been available for nondestructively visualizing the distribution of functionally important macromolecules in living cartilage. Here we describe and validate a technique to image the glycosaminoglycan concentration ([GAG]) of human cartilage nondestructively by magnetic resonance imaging (MRI). The technique is based on the premise that the negatively charged contrast agent gadolinium diethylene triamine pentaacetic acid (Gd(DTPA)2-) will distribute in cartilage in inverse relation to the negatively charged GAG concentration. Nuclear magnetic resonance spectroscopy studies of cartilage explants demonstrated that there was an approximately linear relationship between T1 (in the presence of Gd(DTPA)2-) and [GAG] over a large range of [GAG]. Furthermore, there was a strong agreement between the [GAG] calculated from [Gd(DTPA)2-] and the actual [GAG] determined from the validated methods of calculations from [Na+] and the biochemical DMMB assay. Spatial distributions of GAG were easily observed in T1-weighted and T1-calculated MRI studies of intact human joints, with good histological correlation. Furthermore, in vivo clinical images of T1 in the presence of Gd(DTPA)2- (i.e., GAG distribution) correlated well with the validated ex vivo results after total knee replacement surgery, showing that it is feasible to monitor GAG distribution in vivo. This approach gives us the opportunity to image directly the concentration of GAG, a major and critically important macromolecule in human cartilage.
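A sketch of the post-processing chain implied above: the tissue contrast-agent concentration is obtained from T1 measured before and after Gd(DTPA)2- equilibration via the usual relaxivity relation, and the fixed charge density (and hence GAG) then follows from ideal Donnan equilibrium. The relaxivity, bath concentrations and GAG conversion factor are typical literature values used here as assumptions, not numbers from this paper.

def gd_concentration_mM(t1_pre_s, t1_post_s, r1_per_mM_per_s=4.5):
    # [Gd] in tissue from the change in longitudinal relaxation rate:
    # 1/T1_post = 1/T1_pre + r1 * [Gd]
    return (1.0 / t1_post_s - 1.0 / t1_pre_s) / r1_per_mM_per_s

def fixed_charge_density_mM(gd_tissue_mM, gd_bath_mM, na_bath_mM=150.0):
    # Ideal Donnan equilibrium for the divalent anion Gd(DTPA)2-;
    # the result is negative in GAG-rich cartilage.
    ratio = gd_tissue_mM / gd_bath_mM
    return na_bath_mM * (ratio ** 0.5 - ratio ** -0.5)

def gag_mg_per_ml(fcd_mM):
    # Common conversion: ~502.5 g/mol per disaccharide carrying two charges.
    return -fcd_mM * 502.5 / 2.0 / 1000.0

# Illustrative numbers: T1 drops from 1.0 s to 0.45 s after equilibration in a
# 1 mM Gd(DTPA)2- bath -> roughly 50 mg/ml GAG.
gag = gag_mg_per_ml(fixed_charge_density_mM(gd_concentration_mM(1.0, 0.45), 1.0))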
---
paper_title: A fibril-reinforced poroviscoelastic swelling model for articular cartilage.
paper_content:
From a mechanical point of view, the most relevant components of articular cartilage are the tight and highly organized collagen network together with the charged proteoglycans. Due to the fixed charges of the proteoglycans, the cation concentration inside the tissue is higher than in the surrounding synovial fluid. This excess of ion particles leads to an osmotic pressure difference, which causes swelling of the tissue. The fibrillar collagen network resists straining and swelling pressures. This combination makes cartilage a unique, highly hydrated and pressurized tissue, enforced with a strained collagen network. Many theories to explain articular cartilage behavior under loading, expressed in computational models that either include the swelling behavior or the properties of the anisotropic collagen structure, can be found in the literature. The most common tests used to determine the mechanical quality of articular cartilage are those of confined compression, unconfined compression, indentation and swelling. All theories currently available in the literature can explain the cartilage response occurring in some of the above tests, but none of them can explain these for all of the tests. We hypothesized that a model including simultaneous mathematical descriptions of (1) the swelling properties due to the fixed-charge densities of the proteoglycans and (2) the anisotropic viscoelastic collagen structure can explain all these tests simultaneously. To study this hypothesis we extended our fibril-reinforced poroviscoelastic finite element model with our biphasic swelling model. We have shown that the newly developed fibril-reinforced poroviscoelastic swelling (FPVES) model for articular cartilage can simultaneously account for the reaction force during swelling, confined compression, indentation and unconfined compression as well as the lateral deformation during unconfined compression. Using this theory it is possible to analyze the link between the collagen network and the swelling properties of articular cartilage.
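The swelling part of the model rests on the osmotic pressure difference generated by the fixed charges. In its ideal Donnan form for a monovalent bathing salt (shown only as a reference expression; the implemented model may include activity and osmotic coefficients):

\Delta\pi \;=\; RT\left(\sqrt{c_F^2 + 4\,c_{\mathrm{ext}}^2} \;-\; 2\,c_{\mathrm{ext}}\right)

where c_F is the fixed charge density of the proteoglycans, c_ext the external salt concentration, R the gas constant and T the absolute temperature.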
---
paper_title: The Pericellular Matrix as a Transducer of Biomechanical and Biochemical Signals in Articular Cartilage
paper_content:
The pericellular matrix (PCM) is a narrow tissue region surrounding chondrocytes in articular cartilage, which together with the enclosed cell(s) has been termed the “chondron.” While the function of this region is not fully understood, it is hypothesized to have important biological and biomechanical functions. In this article, we review a number of studies that have investigated the structure, composition, mechanical properties, and biomechanical role of the chondrocyte PCM. This region has been shown to be rich in proteoglycans (e.g., aggrecan, hyaluronan, and decorin), collagen (types II, VI, and IX), and fibronectin, but is defined primarily by the presence of type VI collagen as compared to the extracellular matrix (ECM). Direct measures of PCM properties via micropipette aspiration of isolated chondrons have shown that the PCM has distinct mechanical properties as compared to the cell or ECM. A number of theoretical and experimental studies suggest that the PCM plays an important role in regulating the microenvironment of the chondrocyte. Parametric studies of cell–matrix interactions suggest that the presence of the PCM significantly affects the micromechanical environment of the chondrocyte in a zone-dependent manner. These findings provide support for a potential biomechanical function of the chondrocyte PCM, and furthermore, suggest that changes in the PCM and ECM properties that occur with osteoarthritis may significantly alter the stress-strain and fluid environments of the chondrocytes. An improved understanding of the structure and function of the PCM may provide new insights into the mechanisms that regulate chondrocyte physiology in health and disease.
---
paper_title: Contribution of postnatal collagen reorientation to depth-dependent mechanical properties of articular cartilage
paper_content:
The collagen fibril network is an important factor for the depth-dependent mechanical behaviour of adult articular cartilage (AC). Recent studies show that collagen orientation is parallel to the articular surface throughout the tissue depth in perinatal animals, and that the collagen orientations transform to a depth-dependent arcade-like structure in adult animals. Current understanding on the mechanobiology of postnatal AC development is incomplete. In the current paper, we investigate the contribution of collagen fibril orientation changes to the depth-dependent mechanical properties of AC. We use a composition-based finite element model to simulate in a 1-D confined compression geometry the effects of ten different collagen orientation patterns that were measured in developing sheep. In initial postnatal life, AC is mostly subject to growth and we observe only small changes in depth-dependent mechanical behaviour. Functional adaptation of depth-dependent mechanical behaviour of AC takes place in the second half of life before puberty. Changes in fibril orientation alone increase cartilage stiffness during development through the modulation of swelling strains and osmotic pressures. Changes in stiffness are most pronounced for small stresses and for cartilage adjacent to the bone. We hypothesize that postnatal changes in collagen fibril orientation induce mechanical effects that in turn promote these changes. We further hypothesize that a part of the depth-dependent postnatal increase in collagen content in literature is initiated by the depth-dependent postnatal increase in fibril strain due to collagen fibril reorientation.
---
paper_title: CHONDROCYTES IN AGAROSE CULTURE SYNTHESIZE A MECHANICALLY FUNCTIONAL EXTRACELLULAR MATRIX
paper_content:
The ability of chondrocytes from calf articular cartilage to synthesize and assemble a mechanically functional cartilage-like extracellular matrix was quantified in high cell density (∼10^7 cells/ml) agarose gel culture. The time evolution of chondrocyte proliferation, proteoglycan synthesis and loss to the media, and total deposition of glycosaminoglycan (GAG)-containing matrix within agarose gels was characterized during 10 weeks in culture. To assess whether the matrix deposited within the agarose gel was mechanically and electromechanically functional, we measured in parallel cultures the time evolution of dynamic mechanical stiffness and oscillatory streaming potential in uniaxial confined compression, and determined the intrinsic equilibrium modulus, hydraulic permeability, and electrokinetic coupling coefficient of the developing cultures. Biosynthetic rates were initially high, but by 1 month had fallen to a level similar to that found in the parent calf articular cartilage from which the cells were extracted. The majority of the newly synthesized proteoglycans remained in the gel. Histological sections showed matrix rich in proteoglycans and collagen fibrils developing around individual cells. The equilibrium modulus, dynamic stiffness, and oscillatory streaming potential rose to many times (>5X) their initial values at the start of the culture; the hydraulic permeability decreased to a fraction (∼1/10) that of the cell-laden porous agarose at the beginning of the culture. By day 35 of culture, DNA concentration (cell density), GAG concentration, stiffness, and streaming potential were all ∼25% that of calf articular cartilage. The frequency dependence of the dynamic stiffness and potential was similar to that of calf articular cartilage. Together, these results suggested the formation of mechanically functional matrix.
---
paper_title: T2 relaxation reveals spatial collagen architecture in articular cartilage: A comparative quantitative MRI and polarized light microscopic study
paper_content:
It has been suggested that orientational changes in the collagen network of articular cartilage account for the depthwise T2 anisotropy of MRI through the magic angle effect. To investigate the relationship between laminar T2 appearance and collagen organization (anisotropy), bovine osteochondral plugs (N = 9) were T2 mapped at 9.4T with cartilage surface normal to the static magnetic field. Collagen fibril arrangement of the same samples was studied with polarized light microscopy, a quantitative technique for probing collagen organization by analyzing its ability to rotate plane polarized light, i.e., birefringence (BF). Depthwise variation of safranin O-stained proteoglycans was monitored with digital densitometry. The spatially varying cartilage T2 followed the architectural arrangement of the collagen fibril network: a linear positive correlation between T2 and the reciprocal of BF was established in each sample, with r = 0.91 +/- 0.02 (mean +/- SEM, N = 9). The current results reveal the close connection between the laminar T2 structure and the collagen architecture in histologic zones.
---
paper_title: Mechanical loading of in situ chondrocytes in lapine retropatellar cartilage after anterior cruciate ligament transection
paper_content:
The aims of this study were (i) to quantify chondrocyte mechanics in fully intact articular cartilage attached to its native bone and (ii) to compare the chondrocyte mechanics for cells in healthy and early osteoarthritis (OA) tissue. We hypothesized that cells in the healthy tissue would deform less for given articular surface pressures than cells in the early OA tissue because of a loss of matrix integrity in early OA and the associated loss of structural integrity that is thought to protect chondrocytes. Chondrocyte dynamics were quantified by measuring the deformation response of the cells to controlled loading of fully intact cartilage using a custom-designed confocal indentation system. Early OA was achieved nine weeks following transection of the anterior cruciate ligament (ACL) in rabbit knees. Experiments were performed on the retropatellar cartilage of early OA rabbit knees (four joints and 48 cells), the corresponding intact contralateral control knees (four joints and 48 cells) and knees from normal control rabbits (four joints and 48 cells). Nine weeks following ACL transection, articular cartilage of the experimental joints showed substantial increases in thickness, and progression towards OA as assessed using histological grading. Local matrix strains in the superficial zone were greater for the experimental (38 ± 4%) compared with the contralateral (27 ± 5%) and normal (28 ± 4%) joints (p = 0.04). Chondrocyte deformations in the axial and depth directions were similar during indentation loading for all experimental groups. However, cell width increased more for the experimental cartilage chondrocytes (12 ± 1%) than the contralateral (6 ± 1%) and normal control chondrocytes (6 ± 1%; p < 0.001). On average, chondrocyte volume increased with indentation loading in the early OA cartilage (8 ± 3%, p = 0.001), while it decreased for the two control groups (−8 ± 2%, p = 0.002 for contralateral and −8 ± 1%, p = 0.004 for normal controls). We conclude from these results that our hypothesis of cell deformations in the early OA tissue was only partially supported: specifically, changes in chondrocyte mechanics in early OA were direction-specific with the primary axial deformations remaining unaffected despite vastly increased average axial matrix deformations. Surprisingly, chondrocyte deformations increased in early OA in specific transverse directions which have received little attention to date but might be crucial to chondrocyte signalling in early OA.
---
paper_title: MR Imaging of Normal and Matrix-depleted Cartilage: Correlation with Biomechanical Function and Biochemical Composition
paper_content:
PURPOSE: To correlate articular cartilage function, as reflected in biomechanical properties and biochemical composition, with magnetic resonance (MR) imaging parameters of normal articular cartilage and cartilage partially depleted of matrix components. MATERIALS AND METHODS: Normal articular cartilage from 12 porcine patellae was evaluated biomechanically, biochemically, and with MR imaging (with and without gadolinium enhancement). The patellae were then enzymatically treated to deplete the matrix of either collagen or proteoglycan and then reevaluated biomechanically, biochemically, and with MR imaging. Correlations between cartilaginous tissue function and MR imaging parameters were made. Analysis of variance was performed to assess the effect of enzymatic treatment on measured parameters. Linear correlations among the MR imaging, biochemical, and biomechanical parameters were performed to determine the strengths of the relationships. P < .05 indicated statistically significant differences. RESULTS: ...
---
paper_title: Detecting structural changes in early experimental osteoarthritis of tibial cartilage by microscopic magnetic resonance imaging and polarised light microscopy
paper_content:
Objectives: To detect changes in the collagen fibril network in articular cartilage in a canine experimental model of early osteoarthritis (OA) using microscopic magnetic resonance imaging (µMRI) and polarised light microscopy (PLM). Methods: Eighteen specimens from three pairs of the medial tibia of an anterior cruciate ligament transection canine model were subjected to µMRI and PLM study 12 weeks after surgery. For each specimen, the following experiments were carried out: (a) two dimensional µMRI images of T2 relaxation at four orientations; (b) the tangent Young's modulus; and (c) two dimensional PLM images of optical retardance and fibril angle. Disease induced changes in tissue were examined across the depth of the cartilage at a µMRI resolution of 13.7–23.1 µm. Results: Several distinct changes from T2 weighted images of cartilage in OA tibia were seen. For the specimens that were covered at least in part by the meniscus, the significant changes in µMRI included a clear shift in the depth of maximum T2 (21–36%), a decrease in the superficial zone thickness (37–38%), and an increase in cartilage total thickness (15–27%). These µMRI changes varied topographically in the tibia surface because they were not significant in completely exposed locations in medial tibia. The µMRI results were confirmed by the PLM measurements and correlated well with the mechanical measurements. Conclusion: Both µMRI and PLM can detect quantitatively changes in collagen fibre architecture in early OA and resolve topographical variations in cartilage microstructure of canine tibia.
---
paper_title: Contribution of tissue composition and structure to mechanical response of articular cartilage under different loading geometries and strain rates
paper_content:
Mechanical function of articular cartilage in joints between articulating bones is dependent on the composition and structure of the tissue. The mechanical properties of articular cartilage are traditionally tested in compression using one of the three loading geometries, i.e., confined compression, unconfined compression or indentation. The aim of this study was to utilize a composition-based finite element model in combination with a fractional factorial design to determine the importance of different cartilage constituents in the mechanical response of the tissue, and to compare the importance of the tissue constituents with different loading geometries and loading rates. The evaluated parameters included water and collagen fraction as well as fixed charge density on cartilage surface and their slope over the tissue thickness. The thicknesses of superficial and middle zones, as based on the collagen orientation, were also included in the evaluated parameters. A three-level resolution V fractional factorial design was used. The model results showed that inhomogeneous composition plays only a minor role in indentation, though that role becomes more significant in confined compression and unconfined compression. In contrast, the collagen architecture and content had a more profound role in indentation than with two other loading geometries. These differences in the mechanical role of composition and structure between the loading geometries were emphasized at higher loading rates. These findings highlight how the results from mechanical tests of articular cartilage under different loading conditions are dependent upon tissue composition and structure.
---
paper_title: Mechanical behavior of articular cartilage quantitative changes with enzymatic alteration of the proteoglycan fraction.
paper_content:
The in-vitro viscoelastic mechanical response of normal rabbit articular cartilage is strongly dependent on the quantity and integrity of the proteoglycan fraction of the tissue matrix. Experimental results demonstrate that specific functional relationships exist between shear moduli, retardation time spectra, and proteoglycan content. Quantitative enzymolysis of the proteoglycan fraction of the tissue alters the form of these relationships in a fashion consistent with the altered physiochemical make-up of the tissue. The observed changes in mechanical behavior with controlled enzymolysis are similar to those associated with the early stages of osteoarthritis, rheumatoid arthritis, joint sepsis, and synovitis in animal models.
---
paper_title: Hypotonic challenge modulates cell volumes differently in the superficial zone of intact articular cartilage and cartilage explant
paper_content:
The objective of this study was to evaluate the effect of sample preparation on the biomechanical behaviour of chondrocytes. We compared the volumetric and dimensional changes of chondrocytes in the superficial zone (SZ) of intact articular cartilage and cartilage explant before and after a hypotonic challenge. Calcein-AM labelled SZ chondrocytes were imaged with confocal laser scanning microscopy through intact cartilage surfaces and through cut surfaces of cartilage explants. In order to clarify the effect of tissue composition on cell volume changes, Fourier Transform Infrared microspectroscopy was used for estimating the proteoglycan and collagen contents of the samples. In the isotonic medium (300 mOsm), there was a significant difference (p < 0.05) in the SZ cell volumes and aspect ratios between intact cartilage samples and cartilage explants. Changes in cell volumes at both short-term (2 min) and long-term (2 h) time points after the hypotonic challenge (180 mOsm) were significantly different (p < 0.05) between the groups. Further, proteoglycan content was found to correlate significantly (r² = 0.63, p < 0.05) with the cell volume changes in cartilage samples with intact surfaces. Collagen content did not correlate with cell volume changes. The results suggest that the biomechanical behaviour of chondrocytes following osmotic challenge is different in intact cartilage and in cartilage explant. This indicates that the mechanobiological responses of cartilage and cell signalling may be significantly dependent on the integrity of the mechanical environment of chondrocytes.
---
paper_title: Fibril reinforced poroelastic model predicts specifically mechanical behavior of normal, proteoglycan depleted and collagen degraded articular cartilage
paper_content:
Abstract Degradation of collagen network and proteoglycan (PG) macromolecules are signs of articular cartilage degeneration. These changes impair cartilage mechanical function. Effects of collagen degradation and PG depletion on the time-dependent mechanical behavior of cartilage are different. In this study, numerical analyses, which take the compression-tension nonlinearity of the tissue into account, were carried out using a fibril reinforced poroelastic finite element model. The study aimed at improving our understanding of the stress-relaxation behavior of normal and degenerated cartilage in unconfined compression. PG and collagen degradations were simulated by decreasing the Young's modulus of the drained porous (nonfibrillar) matrix and the fibril network, respectively. Numerical analyses were compared to results from experimental tests with chondroitinase ABC (PG depletion) or collagenase (collagen degradation) digested samples. Fibril reinforced poroelastic model predicted the experimental behavior of cartilage after chondroitinase ABC digestion by a major decrease of the drained porous matrix modulus (−64±28%) and a minor decrease of the fibril network modulus (−11±9%). After collagenase digestion, in contrast, the numerical analyses predicted the experimental behavior of cartilage by a major decrease of the fibril network modulus (−69±5%) and a decrease of the drained porous matrix modulus (−44±18%). The reduction of the drained porous matrix modulus after collagenase digestion was consistent with the microscopically observed secondary PG loss from the tissue. The present results indicate that the fibril reinforced poroelastic model is able to predict specifically characteristic alterations in the stress-relaxation behavior of cartilage after enzymatic modifications of the tissue. We conclude that the compression-tension nonlinearity of the tissue is needed to capture realistically the mechanical behavior of normal and degenerated articular cartilage.
---
paper_title: Cartilage failure in osteoarthritis: Relevance of normal structure and function. A review
paper_content:
Osteoarthritis is a multifactorial condition of diverse etiology affecting synovial joints, in which the final common pathway is destruction and loss of articular cartilage (and changes in other joint structures), which normally protects the bone against compressive and shearing forces. Known causes of secondary osteoarthritis can be grouped into those that cause abnormal stresses and those that produce an abnormal cartilage. The resulting imbalance between tissue texture and mechanical environment may eventually lead to cartilage destruction. The histopathology of cartilage fibrillation is described and related to its normal structure, particularly collagen fibril orientation. Load carriage in normal cartilage involves an interaction between the fibril mesh, proteoglycan, and water. Possible defects in normal matrix organization may lead to failure of this mechanism and to structural breakdown of the cartilage. Biochemical changes in the matrix in early osteoarthritis, particularly in experimental models, are described. The initial change, before loss of proteoglycan or fibril breakage, is an increased water content. A similar change is seen in normal cartilage after brief exercise. There may be a mechanism common to both the normal reversible change (thus providing a means for adaption to changed stresses) and the pathological prolonged change. Chondrocyte metabolism and its modification in early degenerative change are described. Factors regulating metabolism include nutrition, growth factors and cytokines, ionic constitution of the matrix, and mechanical stimuli. Known causes of secondary osteoarthritis may act via these factors to cause changes in cell metabolism that play a part in the initiation and ongoing pathogenesis of cartilage failure.
---
paper_title: Computer simulation of damage on distal femoral articular cartilage after meniscectomies
paper_content:
It is commonly accepted that total or partial meniscectomies cause wear of articular cartilages that leads to severe damage in a period of few years. This also produces alteration of the biomechanical environment and increases articular instability, with a progressive and degenerative arthrosic pathology. Due to these negative consequences, total meniscectomy technique has been avoided, with a clear preference for partial meniscectomies. Despite the better results obtained with this latter technique, it has been demonstrated that the knee still suffers progressive long-term wear, which alters the properties of the surface of articular cartilage. In this paper, a phenomenological isotropic damage model of articular cartilage is presented and implemented in a finite element code. We hypothesized that there is a relation between the increase of shear stress and cartilage degeneration. To confirm the hypothesis, the obtained results were compared to experimental ones. It is used to investigate the effect of meniscectomies on articular damage in the human knee joint. Two different situations were compared for the tibio-femoral joint: healthy and after meniscectomy. The distribution of damaged regions and the damage level distribution resulted qualitatively similar to experimental results, showing, for instance that, after meniscectomy, significant degeneration occurs in the lateral compartment. A noteworthy result was that patterns of damage in a total meniscectomy model give better agreement to clinical results when using relative increases in shear stress, rather than an absolute shear stress criterion. The predictions for partial meniscectomies indicated the relative severity of the procedures.
---
paper_title: Experimental Verification of the Roles of Intrinsic Matrix Viscoelasticity and Tension-Compression Nonlinearity in the Biphasic Response of Cartilage
paper_content:
A biphasic-CLE-QLV model proposed in our recent study [2001, J. Biomech. Eng., 123, pp. 410-417] extended the biphasic theory of Mow et al. [1980, J. Biomech. Eng., 102, pp. 73-84] to include both tension-compression nonlinearity and intrinsic viscoelasticity of the cartilage solid matrix by incorporating it with the conewise linear elasticity (CLE) model [1995, J. Elasticity, 37, pp. 1-38] and the quasi-linear viscoelasticity (QLV) model [Biomechanics: Its foundations and objectives, Prentice Hall, Englewood Cliffs, 1972]. This model demonstrates that a simultaneous prediction of compression and tension experiments of articular cartilage, under stress-relaxation and dynamic loading, can be achieved when properly taking into account both flow-dependent and flow-independent viscoelastic effects, as well as tension-compression nonlinearity. The objective of this study is to directly test this biphasic-CLE-QLV model against experimental data from unconfined compression stress-relaxation tests at slow and fast strain rates as well as dynamic loading. Twelve full-thickness cartilage cylindrical plugs were harvested from six bovine glenohumeral joints and multiple confined and unconfined compression stress-relaxation tests were performed on each specimen. The material properties of specimens were determined by curve-fitting the experimental results from the confined and unconfined compression stress relaxation tests. The findings of this study demonstrate that the biphasic-CLE-QLV model is able to describe the strain-rate-dependent mechanical behaviors of articular cartilage in unconfined compression as attested by good agreements between experimental and theoretical curvefits (r² = 0.966 ± 0.032 for testing at slow strain rate; r² = 0.998 ± 0.002 for testing at fast strain rate) and predictions of the dynamic response (r² = 0.91 ± 0.06). This experimental study also provides supporting evidence for the hypothesis that both tension-compression nonlinearity and intrinsic viscoelasticity of the solid matrix of cartilage are necessary for modeling the transient and equilibrium responses of this tissue in tension and compression. Furthermore, the biphasic-CLE-QLV model can produce better predictions of the dynamic modulus of cartilage in unconfined dynamic compression than the biphasic-CLE and biphasic poroviscoelastic models, indicating that intrinsic viscoelasticity and tension-compression nonlinearity of articular cartilage may play important roles in the load-support mechanism of cartilage under physiologic loading.
---
paper_title: ON THE FUNDAMENTAL FLUID TRANSPORT MECHANISMS THROUGH NORMAL AND PATHOLOGICAL ARTICULAR CARTILAGE DURING FUNCTION-II. THE ANALYSIS, SOLUTION AND CONCLUSIONS*?
paper_content:
Abstract The articulating process of synovial joints is represented by a spatially fixed cyclically time varying normal surface traction applied over a layered model of the cartilage-subchondral bone system. A two-phase mechanically interacting mixture is used to represent the solid matrix and the interstitial fluid of the cartilage continuum. The resulting equations of motion may be reduced to the currently employed physical laws used to describe the transport of fluid through the tissue, i.e. Darcy's law and Biot's consolidation equations. These equations were simplified by an order of magnitude analysis, and the resulting equations were solved by a double Fourier-Laplace transform procedure. The analytical solution shows that the fluid transport mechanism is strongly dependent upon a nondimensional parameter defined by ε₁₂ = Nk/(h²ω). This parameter is the ratio of the force required to deform the tissue as a whole to the force of frictional resistance due to the rate of movement of the interstitial fluid relative to the solid matrix. It is found that during normal function of healthy articular cartilage the consolidation effects will dominate the movement of interstitial fluid. In degenerative cartilage, as characterized by increasing the surface porosity and permeability and a decrease in tissue stiffness, the direct pressure effects, i.e. Darcy's law, become comparable to those of consolidation.
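To make the reconstructed parameter concrete, here is a small Python sketch that evaluates ε₁₂ = Nk/(h²ω) for assumed, order-of-magnitude cartilage values (the numbers are illustrative assumptions, not data from the paper):

# Illustrative evaluation of the nondimensional transport parameter
# epsilon_12 = N*k / (h**2 * omega). All values below are assumed,
# order-of-magnitude inputs, not measurements from the paper.
import math

def epsilon_12(N, k, h, omega):
    """Ratio of the matrix deformation force to the frictional drag force."""
    return (N * k) / (h ** 2 * omega)

N = 0.5e6              # equilibrium stiffness of the solid matrix, Pa (assumed)
k = 3.0e-15            # hydraulic permeability, m^4/(N*s) (assumed)
h = 1.5e-3             # cartilage layer thickness, m (assumed)
omega = 2.0 * math.pi  # loading frequency of ~1 Hz expressed in rad/s

print(f"epsilon_12 = {epsilon_12(N, k, h, omega):.2e}")
# The paper uses the magnitude of this parameter to separate
# consolidation-dominated transport from direct-pressure (Darcy) transport.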
---
paper_title: A Transversely Isotropic Biphasic Model for Unconfined Compression of Growth Plate and Chondroepiphysis
paper_content:
Using the biphasic theory for hydrated soft tissues (Mow et al., 1980) and a transversely isotropic elastic model for the solid matrix, an analytical solution is presented for the unconfined compression of cylindrical disks of growth plate tissues compressed between two rigid platens with a frictionless interface. The axisymmetric case where the plane of transverse isotropy is perpendicular to the cylindrical axis is studied, and the stress-relaxation response to imposed step and ramp displacements is solved. This solution is then used to analyze experimental data from unconfined compression stress-relaxation tests performed on specimens from bovine distal ulnar growth plate and chondroepiphysis to determine the biphasic material parameters. The transversely isotropic biphasic model provides an excellent agreement between theory and experimental results, better than was previously achieved with an isotropic model, and can explain the observed experimental behavior in unconfined compression of these tissues.
---
paper_title: Cartilage is poroelastic, not viscoelastic (including an exact theorem about strain energy and viscous loss, and an order of magnitude relation for equilibration time)
paper_content:
Abstract Cartilage is often called viscoelastic, yet when strain lags stress in cartilage it is not primarily because of effects within the material of the cartilage skeleton itself. It is because the cartilage skeleton is bathed in fluid. Except in pure shear deformation, attaining equilibrium strain requires that pore fluid flow within the cartilage. Viscous forces retard this flow. This behavior is known as poroelastic. The equilibrium time is of the order L²/(Yσ), where Y is the Young's modulus, σ the permeability of the cartilage, and L is the length of the path along which liquid flows during equilibration. I show that this is true for any consolidation experiment, whatever the direction of consolidation and the direction of liquid flow. In the course of this demonstration I prove that if load is applied abruptly to a Hookean material and is thereafter held constant, the strain energy at equilibrium equals the energy dissipated in the material during equilibration.
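As a concrete illustration of the order-of-magnitude relation, the following Python sketch evaluates t ≈ L²/(Yσ) with assumed representative values (not the author's data):

# Order-of-magnitude poroelastic equilibration time, t ~ L**2 / (Y * sigma).
# The inputs are assumed representative values, chosen only for illustration.
L = 1.5e-3       # drainage path length, m (roughly a cartilage thickness)
Y = 0.5e6        # Young's modulus of the cartilage skeleton, Pa
sigma = 3.0e-15  # permeability, m^4/(N*s)

t_eq = L ** 2 / (Y * sigma)
print(f"equilibration time ~ {t_eq:.0f} s (~{t_eq / 60:.0f} min)")
# A result on the order of tens of minutes is consistent with the slow creep
# and stress-relaxation times reported for cartilage elsewhere in this
# reference list.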
---
paper_title: Partial Meniscectomy Changes Fluid Pressurization in Articular Cartilage in Human Knees
paper_content:
Partial meniscectomy is believed to change the biomechanics of the knee joint through alterations in the contact of articular cartilages and menisci. Although fluid pressure plays an important role in the load support mechanism of the knee, the fluid pressurization in the cartilages and menisci has been ignored in the finite element studies of the mechanics of meniscectomy. In the present study, a 3D fibril-reinforced poromechanical model of the knee joint was used to explore the fluid flow dependent changes in articular cartilage following partial medial and lateral meniscectomies. Six partial longitudinal meniscectomies were considered under relaxation, simple creep, and combined creep loading conditions. In comparison to the intact knee, partial meniscectomy not only caused a substantial increase in the maximum fluid pressure but also shifted the location of this pressure in the femoral cartilage. Furthermore, these changes were positively correlated to the size of meniscal resection. While in the intact joint, the location of the maximum fluid pressure was dependent on the loading conditions, in the meniscectomized joint the location was predominantly determined by the site of meniscal resection. The partial meniscectomy also reduced the rate of the pressure dissipation, resulting in even larger difference between creep and relaxation times as compared to the case of the intact knee. The knee joint became stiffer after meniscectomy because of higher fluid pressure at knee compression followed by slower pressure dissipation. The present study indicated the role of fluid pressurization in the altered mechanics of meniscectomized knees.
---
paper_title: The role of viscoelasticity of collagen fibers in articular cartilage: theory and numerical formulation.
paper_content:
The relative importance of fluid-dependent and fluid-independent transient mechanical behavior in articular cartilage was examined for tensile and unconfined compression testing using a fibril reinforced model. The collagen matrix of articular cartilage was modeled as viscoelastic using a quasi-linear viscoelastic formulation with strain-dependent elastic modulus, while the proteoglycan matrix was considered as linearly elastic. The collagen viscoelastic properties were obtained by fitting experimental data from a tensile test. These properties were used to investigate unconfined compression testing, and the sensitivity of the properties was also explored. It was predicted that the stress relaxation observed in tensile tests was not caused by fluid pressurization at the macroscopic level. A multi-step tensile stress relaxation test could be approximated using a hereditary integral in which the elastic fibrillar modulus was taken to be a linear function of the fibrillar strain. Applying the same formulation to the radial fibers in unconfined compression, stress relaxation could not be simulated if fluid pressurization were absent. Collagen viscoelasticity was found to slightly weaken fluid pressurization in unconfined compression, and this effect was relatively more significant at moderate strain rates. Therefore, collagen viscoelasticity appears to play an important role in articular cartilage in tensile testing, while fluid pressurization dominates the transient mechanical behavior in compression. Collagen viscoelasticity plays a minor role in the mechanical response of cartilage in unconfined compression if significant fluid flow is present.
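Because the abstract hinges on a hereditary-integral (quasi-linear viscoelastic) description of the collagen fibrils, a minimal discrete sketch of such an integral is given below; the reduced relaxation function, the elastic fibril law and all parameter values are assumptions made only so the sketch runs, not the constitutive law fitted in the paper:

import math

# Minimal discrete sketch of a quasi-linear viscoelastic (QLV) hereditary
# integral, sigma(t) = integral_0^t G(t - s) d(sigma_e(eps(s)))/ds ds.
# The relaxation function, elastic law and parameters are assumed.

def reduced_relaxation(t, g1=0.4, tau=5.0):
    """Single-exponential reduced relaxation function with G(0) = 1."""
    return (1.0 - g1) + g1 * math.exp(-t / tau)

def elastic_stress(eps, A=2.0e6, B=50.0):
    """Instantaneous (elastic) fibril response; tension-only, nonlinear toe."""
    return A * (math.exp(B * eps) - 1.0) if eps > 0.0 else 0.0

def qlv_stress(times, strains):
    """Discretized hereditary integral over a prescribed strain history."""
    sigma = []
    for i, t in enumerate(times):
        s = 0.0
        for j in range(1, i + 1):
            d_sigma_e = elastic_stress(strains[j]) - elastic_stress(strains[j - 1])
            s += reduced_relaxation(t - times[j]) * d_sigma_e
        sigma.append(s)
    return sigma

# Ramp-and-hold strain history: ramp to 2% strain in 1 s, then hold to 20 s.
times = [0.1 * i for i in range(201)]
strains = [min(0.02, 0.02 * t) for t in times]
stress = qlv_stress(times, strains)
print(f"peak stress {max(stress) / 1e3:.1f} kPa, "
      f"stress at t = 20 s: {stress[-1] / 1e3:.1f} kPa")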
---
paper_title: On the fundamental fluid transport mechanisms through normal and pathological articular cartilage during function—I the formulation
paper_content:
Abstract The fundamental fluid transport mechanisms associated with articular cartilage are important toward the understanding of the biomechanical processes involved in the physiology of normal and pathological synovial joints. Phenomenologically, articular cartilage is viewed as a mixture of two mechanically interacting continua, i.e. a solid matrix phase and a liquid phase composed of water. In synovial joints the liquid phase may be transported through the solid matrix by a direct pressure gradient as a result of the squeeze film action of synovial fluid during articulation and as a result of the consolidation of the solid matrix. The mechanically interacting mixture is composed of a solid, defined by an elastic internal energy function, and an incompressible liquid. The accompanying frictional resistance of relative motion is considered by a linear diffusive dissipation term. The equations of motion for each phase and the total mixture were derived from the extended Hamilton's Principle, where the Rayleigh dissipative resistance is considered as a generalized body force field. This procedure yields, as special cases, the classical Darcy's Law for the liquid transport due to direct pressure gradients, as well as Biot's consolidation equations for the liquid transport due to the dilatation of the solid phase.
---
paper_title: A composition-based cartilage model for the assessment of compositional changes during cartilage damage and adaptation
paper_content:
Objective: The composition of articular cartilage changes with progression of osteoarthritis. Since compositional changes are associated with changes in the mechanical properties of the tissue, they are relevant for understanding how mechanical loading induces progression. The objective of this study is to present a computational model of articular cartilage which enables the study of the interaction between composition and mechanics. Methods: Our previously developed fibril-reinforced poroviscoelastic swelling model for articular cartilage was combined with our tissue composition-based model. In the combined model both the depth- and strain-dependencies of the permeability are governed by tissue composition. All local mechanical properties in the combined model are directly related to the local composition of the tissue, i.e., to the local amounts of proteoglycans and collagens and to tissue anisotropy. Results: Solely based on the composition of the cartilage, we were able to predict the equilibrium and transient response of articular cartilage during confined compression, unconfined compression, indentation and two different 1D-swelling tests, simultaneously. Conclusion: Since both the static and the time-dependent mechanical properties have now become fully dependent on tissue composition, the model allows assessing the mechanical consequences of compositional changes seen during osteoarthritis without further assumptions. This is a major step forward in quantitative evaluations of osteoarthritis progression.
---
paper_title: A Conewise Linear Elasticity Mixture Model for the Analysis of Tension-Compression Nonlinearity in Articular Cartilage
paper_content:
A biphasic mixture model is developed which can account for the observed tension-compression nonlinearity of cartilage by employing the continuum-based Conewise Linear Elasticity (CLE) model of Curnier et al. (J Elasticity 37:1–38, 1995) to describe the solid phase of the mixture. In this first investigation, the orthotropic octantwise linear elasticity model was reduced to the more specialized case of cubic symmetry, to reduce the number of elastic constants from twelve to four. Confined and unconfined compression stress-relaxation, and torsional shear testing were performed on each of nine bovine humeral head articular cartilage cylindrical plugs from 6 month old calves. Using the CLE model with cubic symmetry, the aggregate modulus in compression and axial permeability were obtained from confined compression (H−A = 0.64 ± 0.22 MPa, k_z = 3.62 ± 0.97 × 10⁻¹⁶ m⁴/(N·s), r² = 0.95 ± 0.03), the tensile modulus, compressive Poisson ratio and radial permeability were obtained from unconfined compression (E+Y = 12.75 ± 1.56 MPa, ν− = 0.03 ± 0.01, k_r = 6.06 ± 2.10 × 10⁻¹⁶ m⁴/(N·s), r² = 0.99 ± 0.00), and the shear modulus was obtained from torsional shear (µ = 0.17 ± 0.06 MPa). The model was also employed to successfully predict the interstitial fluid pressure at the center of the cartilage plug in unconfined compression (r² = 0.98 ± 0.01). The results of this study demonstrate that the integration of the CLE model with the biphasic mixture theory can provide a model of cartilage which can successfully curvefit three distinct testing configurations while producing material parameters consistent with previous reports in the literature.
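As a deliberately simplified, one-dimensional illustration of the tension-compression nonlinearity the CLE model captures (the real model is a three-dimensional, cubic-symmetry formulation), the sketch below switches between the tensile and compressive moduli reported in the abstract depending on the sign of the strain:

# 1D bimodular sketch of tension-compression nonlinearity. The moduli are
# the values reported in the abstract (E+Y and H-A); the piecewise 1D form
# itself is an editorial simplification of the full CLE formulation.
def conewise_stress(strain, E_tension=12.75e6, H_compression=0.64e6):
    """Normal stress (Pa) with a different modulus in tension and compression."""
    modulus = E_tension if strain >= 0.0 else H_compression
    return modulus * strain

for eps in (-0.10, -0.05, 0.05, 0.10):
    print(f"strain {eps:+.2f} -> stress {conewise_stress(eps) / 1e3:+8.1f} kPa")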
---
paper_title: Biomechanical properties of knee articular cartilage
paper_content:
Structure and properties of knee articular cartilage are adapted to stresses exposed on it during physiological activities. In this study, we describe site- and depth-dependence of the biomechanical properties of bovine knee articular cartilage. We also investigate the effects of tissue structure and composition on the biomechanical parameters as well as characterize experimentally and numerically the compression-tension nonlinearity of the cartilage matrix. In vitro mechano-optical measurements of articular cartilage in unconfined compression geometry are conducted to obtain material parameters, such as thickness, Young's and aggregate modulus or Poisson's ratio of the tissue. The experimental results revealed significant site- and depth-dependent variations in recorded parameters. After enzymatic modification of matrix collagen or proteoglycans our results show that collagen primarily controls the dynamic tissue response while proteoglycans affect more the static properties. Experimental measurements in compression and tension suggest a nonlinear compression-tension behavior of articular cartilage in the direction perpendicular to articular surface. A fibril reinforced poroelastic finite element model was used to capture the experimentally found compression-tension nonlinearity of articular cartilage.
---
paper_title: Is classical consolidation theory applicable to articular cartilage deformation?
paper_content:
In this paper, classical consolidation theory has been used to investigate the time-dependent response of articular cartilage to static loading. An experimental technique was developed to measure simultaneously the matrix internal pressure and creep strain under conditions of one-dimensional consolidation. This is the first measurement of the internal stress state of loaded cartilage. It is demonstrated that under static compression the applied load is shared by the components of the matrix (i.e. water, the proteoglycans, and the collagen fibrillar meshwork), during which time a maximum hydrostatic excess pore pressure is developed as initial water exudation occurs. This pressure decays as water is further exuded from the matrix and effective consolidation begins with a progressive transfer of the applied stress from water to the collagen fibrils and proteoglycan gel. Consolidation is completed when the hydrostatic excess pore pressure is reduced to zero and the solid components sustain in full the applied load.
---
paper_title: The Apparent Viscoelastic Behavior of Articular Cartilage—The Contributions From the Intrinsic Matrix Viscoelasticity and Interstitial Fluid Flows
paper_content:
Articular cartilage was modeled rheologically as a biphasic poroviscoelastic material. A specific integral-type linear viscoelastic model was used to describe the constitutive relation of the collagen-proteoglycan matrix in shear. For bulk deformation, the matrix was assumed either to be linearly elastic, or viscoelastic with an identical reduced relaxation spectrum as in shear. The interstitial fluid was considered to be incompressible and inviscid. The creep and the rate-controlled stress-relaxation experiments on articular cartilage under confined compression were analyzed using this model. Using the material data available in the literature, it was concluded that both the interstitial fluid flow and the intrinsic matrix viscoelasticity contribute significantly to the apparent viscoelastic behavior of this tissue under confined compression.
---
paper_title: Biomechanics of the knee joint in flexion under various quadriceps forces.
paper_content:
Biomechanics of the entire knee joint including tibiofemoral and patellofemoral joints were investigated at different flexion angles (0 degrees to 90 degrees) and quadriceps forces (3, 137, and 411 N). In particular, the effect of changes in location and magnitude of restraining force that counterbalances the isometric extensor moment on predictions was investigated. The model consisted of three bony structures and their articular cartilage layers, menisci, principal ligaments, patellar tendon, and quadriceps muscle. Quadriceps forces significantly increased the anterior cruciate ligament, patellar tendon, and contact forces/areas as well as the joint resistant moment. Joint flexion, however, substantially diminished them all with the exception of the patellofemoral contact force/area that markedly increased in flexion. When resisting extensor moment by a force applied on the tibia, the force in cruciate ligaments and tibial translation significantly altered as a function of magnitude and location of the restraining force. Quadriceps activation generated large ACL forces at full extension suggesting that post ACL reconstruction exercises should avoid large quadriceps exertions at near full extension angles. In isometric extension exercises against a force on the tibia, larger restraining force and its more proximal location to the joint substantially decreased forces in the anterior cruciate ligament at small flexion angles whereas they significantly increased forces in the posterior cruciate ligament at larger flexion angles.
---
paper_title: Time-Dependent Nanomechanics of Cartilage
paper_content:
In this study, atomic force microscopy-based dynamic oscillatory and force-relaxation indentation was employed to quantify the time-dependent nanomechanics of native (untreated) and proteoglycan (PG)-depleted cartilage disks, including indentation modulus E_ind, force-relaxation time constant τ, magnitude of dynamic complex modulus |E*|, phase angle δ between force and indentation depth, storage modulus E′, and loss modulus E″. At ~2 nm dynamic deformation amplitude, |E*| increased significantly with frequency from 0.22 ± 0.02 MPa (1 Hz) to 0.77 ± 0.10 MPa (316 Hz), accompanied by an increase in δ (energy dissipation). At this length scale, the energy dissipation mechanisms were deconvoluted: the dynamic frequency dependence was primarily governed by the fluid-flow-induced poroelasticity, whereas the long-time force relaxation reflected flow-independent viscoelasticity. After PG depletion, the change in the frequency response of |E*| and δ was consistent with an increase in cartilage local hydraulic permeability. Although untreated disks showed only slight dynamic amplitude-dependent behavior, PG-depleted disks showed great amplitude-enhanced energy dissipation, possibly due to additional viscoelastic mechanisms. Hence, in addition to functioning as a primary determinant of cartilage compressive stiffness and hydraulic permeability, the presence of aggrecan minimized the amplitude dependence of |E*| at nanometer-scale deformation.
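The storage and loss moduli follow from |E*| and δ through the standard relations E′ = |E*|·cos δ and E″ = |E*|·sin δ; the short sketch below applies them to the |E*| values quoted in the abstract, with phase angles that are purely assumed (the abstract only states that δ increases with frequency):

import math

# Standard relations between the complex modulus magnitude, phase angle and
# the storage/loss moduli. |E*| values are from the abstract; the phase
# angles are assumed for illustration only.
def storage_loss(E_mag_MPa, delta_deg):
    d = math.radians(delta_deg)
    return E_mag_MPa * math.cos(d), E_mag_MPa * math.sin(d)

for E_mag, delta in ((0.22, 10.0), (0.77, 25.0)):  # (MPa, assumed degrees)
    E_store, E_loss = storage_loss(E_mag, delta)
    print(f"|E*| = {E_mag:.2f} MPa, delta = {delta:.0f} deg -> "
          f"E' = {E_store:.2f} MPa, E'' = {E_loss:.2f} MPa")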
---
paper_title: Tensile and compressive properties of healthy and osteoarthritic human articular cartilage
paper_content:
Osteoarthritis (OA) is a disease affecting articular cartilage and the underlying bone, resulting from many biological and mechanical interacting factors which change the extracellular matrix (ECM) and cells and lead to increasing levels of cartilage degeneration, like softening, fibrillation, ulceration and cartilage loss. The early diagnosis of the disease is fundamental to prevent pain, further tissue degeneration and reduce hospital costs. Although morphological modifications can be detected by modern non-invasive diagnostic techniques, they may not be evident in the early stages of OA. The mechanical properties of articular cartilage are related to its composition and structure and are sensitive to even small changes in the ECM that could occur in early OA. The aim of the present study was to compare the mechanical properties of healthy and OA cartilage using a combined experimental-numerical approach. Experimental assessments consisted of step wise confined and unconfined compression and tension stress relaxation tests on disks (for compression) or strips (for tension) of cartilage obtained from human femoral heads discarded from the operating room after total hip replacement. The numerical model was based on the biphasic theory and included the tension-compression non-linearity. Considering OA samples vs normal samples, the static compressive modulus was 55-68% lower, the permeability was 60-80% higher, the dynamic compressive modulus was 59-64% lower, the static tension modulus was 72-83% lower. The model successfully simulated the experimental tests performed on healthy and OA cartilage and was used in combination with the experimental tests to evaluate the role of different ECM components in the mechanical response of normal and OA cartilage.
---
paper_title: A cross-validation of the biphasic poroviscoelastic model of articular cartilage in unconfined compression, indentation, and confined compression.
paper_content:
The biphasic poroviscoelastic (BPVE) model was curve fit to the simultaneous relaxation of reaction force and lateral displacement exhibited by articular cartilage in unconfined compression (n=18). Model predictions were also made for the relaxation observed in reaction force during indentation with a porous plane-ended metal indenter (n=4), indentation with a nonporous plane ended metal indenter (n=4), and during confined compression (n=4). Each prediction was made using material parameters resulting from curve fits of the unconfined compression response of the same tissue. The BPVE model was able to account for both the reaction force and the lateral displacement during unconfined compression very well. Furthermore, model predictions for both indentation and confined compression also followed the experimental data well. These results provide substantial evidence for the efficacy of the biphasic poroviscoelastic model for articular cartilage, as no successful cross-validation of a model simulation has been demonstrated using other mathematical models.
---
paper_title: The viscoelastic shear behavior of normal rabbit articular cartilage.
paper_content:
Abstract A theoretical solution for the indentation of a layered medium by an axisymmetric plane-ended ram has been applied in the in vitro study of the mechanical properties of the articular surface of the distal femur of the rabbit. Experimental results directly yield shear moduli and retardation spectra which are invariant with respect to cartilage thickness and applied stress within the stress range used. The success of this theory, coupled with the simplicity and reproducibility of the test, suggests that this method has wide applicability in the study of experimentally or pathologically altered cartilage.
---
paper_title: Equivalence Between Short-Time Biphasic and Incompressible Elastic Material Responses
paper_content:
Porous-permeable tissues have often been modeled using porous media theories such as the biphasic theory. This study examines the equivalence of the short-time biphasic and incompressible elastic responses for arbitrary deformations and constitutive relations from first principles. This equivalence is illustrated in problems of unconfined compression of a disk, and of articular contact under finite deformation, using two different constitutive relations for the solid matrix of cartilage, one of which accounts for the large disparity observed between the tensile and compressive moduli in this tissue. Demonstrating this equivalence under general conditions provides a rationale for using available finite element codes for incompressible elastic materials as a practical substitute for biphasic analyses, so long as only the short-time biphasic response is sought. In practice, an incompressible elastic analysis is representative of a biphasic analysis over the short-term response Δt ≪ Δ²/‖C₄ · K‖, where Δ is a characteristic dimension, C₄ is the elasticity tensor, and K is the hydraulic permeability tensor of the solid matrix. Certain notes of caution are provided with regard to implementation issues, particularly when finite element formulations of incompressible elasticity employ an uncoupled strain energy function consisting of additive deviatoric and volumetric components.
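A scalar back-of-the-envelope check of the reconstructed short-time criterion, using assumed magnitudes in place of the tensor norms, looks like this (illustration only, not the paper's tensorial statement):

# Scalar check of the short-time criterion dt << Delta**2 / ||C4 . K||,
# with assumed representative magnitudes standing in for the tensor norms.
Delta = 2.0e-3  # characteristic contact dimension, m (assumed)
C4 = 0.5e6      # representative elasticity magnitude, Pa (assumed)
K = 3.0e-15     # representative permeability, m^4/(N*s) (assumed)

t_limit = Delta ** 2 / (C4 * K)
print(f"short-time regime: dt << {t_limit:.0f} s")
# Loading events lasting a fraction of a second (e.g., impact or a gait
# cycle) fall well inside this regime, which is why an incompressible
# elastic analysis can stand in for the biphasic one at short times.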
---
paper_title: The permeability of articular cartilage under compressive strain and at high pressures
paper_content:
The permeability of bovine articular cartilage was measured in an apparatus designed to permit this measurement while the fluid pressure gradient across the cartilage and the axial compressive strain applied to the cartilage were varied independently. For all of the pressure gradients tested the permeability of the cartilage decreased as the compressive strain increased. From previous work, it was postulated that joint lubrication is accomplished first by fluid exudation into the joint space, both at the leading edge of the moving contact area and between portions of the opposing cartilaginous surfaces, and second by imbibition of the expelled fluid back into the cartilage toward the trailing edge of the contact area caused by the "elastic" recovery of the tissue. The present work extends this model to include the condition that the permeability of cartilage is dependent on the extent to which it is deformed.
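The qualitative finding (permeability falls as compressive strain rises) is often summarized with an exponential strain-dependent permeability law of the form k(e) = k₀·exp(M·e); the sketch below uses that common form with assumed coefficients, which may differ from the authors' own fit:

import math

# Commonly used empirical strain-dependent permeability, k(e) = k0*exp(M*e),
# where e is the dilatation (negative in compression). Coefficients are
# assumed for illustration; the exact fit in the paper may differ.
k0 = 3.0e-15  # zero-strain permeability, m^4/(N*s) (assumed)
M = 4.5       # nonlinearity coefficient (assumed)

for e in (0.0, -0.05, -0.10, -0.20):
    k = k0 * math.exp(M * e)
    print(f"dilatation {e:+.2f} -> k = {k:.2e} m^4/(N*s)")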
---
paper_title: Viscoelastic shear properties of articular cartilage and the effects of glycosidase treatments
paper_content:
The objectives of this study were to determine the viscoelastic shear properties of articular cartilage and to investigate the effects of the alteration of proteoglycan structure on these shear properties. Glycosidase treatments (chondroitinase ABC and Streptomyces hyaluronidase) were used to alter the proteoglycan structure and content of the tissue. The dynamic viscoelastic shear properties of control and treated tissues were measured and statistically compared. Specifically, cylindrical bovine cartilage specimens were subjected to oscillatory shear deformation of small amplitude (γ₀ = 0.001 radian) over a physiological range of frequencies (0.01–20 Hz) and at various compressive strains (5, 9, 12, and 16%). The dynamic complex shear modulus was calculated from the measurements. The experimental results show that the solid matrix of normal articular cartilage exhibits intrinsic viscoelastic properties in shear over the range of frequencies tested. These viscoelastic shear properties were found to be dependent on compressive strains. Our data also provide significant insights into the structure-function relationships for articular cartilage. Significant correlations were found between the material properties (the magnitude of dynamic shear modulus, the phase shift angle, and the equilibrium compressive modulus), and the biochemical compositions of the cartilage (collagen, proteoglycan, and water contents). The shear modulus was greatly reduced when the proteoglycans were degraded by either chondroitinase ABC or Streptomyces hyaluronidase. The results suggest that the ability of collagen to resist tension elastically provides the stiffness of the cartilage matrix in shear and its elastic energy storage capability. Proteoglycans enmeshed in the collagen matrix inflate the collagen network and induce a tensile prestress in the collagen fibrils. This interaction of the collagen and proteoglycan within the cartilage matrix provides the complex mechanism that allows the tissue to resist shear deformation.
---
paper_title: Stress-sharing between the fluid and solid components of articular cartilage under varying rates of compression
paper_content:
This paper investigates the factors affecting the mechanical behavior of the articular matrix with special emphasis on the effect of compressive strain-rate on the short and long term responses of the fluid and the solid components. The relationships expressed in the general theory of one-dimensional consolidation are generalized to account for strain-rate in the deformation process with the result that the stiffness due to the fluid and the solid components, and a parameter representing the degree of drag, can be calculated explicitly.
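For readers who want the stress-sharing statement in compact form, the standard mixture decomposition (generic notation, not necessarily the author's symbols) reads:

\[
\boldsymbol{\sigma}_{\mathrm{total}} = -p\,\mathbf{I} + \boldsymbol{\sigma}^{E},
\qquad
\text{fluid load fraction}(t) \approx \frac{p(t)}{\sigma_{\mathrm{applied}}},
\]

so consolidation is complete when the hydrostatic excess pore pressure p has decayed to zero and the solid constituents carry the applied load in full, which is the end state described in the abstract.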
---
paper_title: Technical Note: Modelling Soft Tissue Using Biphasic Theory - A Word of Caution.
paper_content:
In recent years the biphasic approach initiated by Mow and coworkers has been very popular in modelling soft, hydrated, cartilage tissues as well as other soft tissues, such as the brain. This work points out that due to the inherent inability of biphasic models in their present form to account for stress-strain rate dependence resulting from the viscoelasticity of the solid phase, the applicability of these models is limited to the loading conditions producing large relative velocities of phases.
---
paper_title: Analysis of partial meniscectomy and ACL reconstruction in knee joint biomechanics under a combined loading
paper_content:
Background: Despite partial meniscectomies and ligament reconstructions as treatments of choice for meniscal and ligament injuries, respectively, knee joint osteoarthritis persists. Methods: A detailed nonlinear finite element model of the knee joint was developed to evaluate biomechanics of the tibiofemoral joint under 200 N drawer load with and without 1500 N compression preload. The model incorporated the composite structure of cartilage and meniscus. The effects on joint response and articular contact pressure of unilateral partial meniscectomy, of changes in prestrain or material properties of the anterior cruciate ligament and of their combination were investigated. Findings: Compressive preload further increases anterior cruciate ligament strains/forces in drawer loading. Partial meniscectomy and perturbations in anterior cruciate ligament prestrain/material properties, alone or combined, substantially alter the load transfer via covered and uncovered areas of cartilage as well as the contact pressure distribution on cartilage. Partial meniscectomy, especially when combined with a slacker anterior cruciate ligament, diminishes the load via the affected meniscus, generating unloaded regions on the cartilage. Interpretation: Partial meniscectomy concurrent with a slack anterior cruciate ligament substantially alters cartilage contact pressures. These alterations further intensify in the event of greater external forces, larger meniscal resections and total anterior cruciate ligament rupture, thus suggesting a higher risk of joint degeneration.
---
paper_title: Effects of proteoglycan extraction on the tensile behavior of articular cartilage
paper_content:
We undertook an interdisciplinary biomechanical and biochemical study to explore the extent and manner in which the total pool of proteoglycans influences the kinetic and static behavior of bovine articular cartilage in tension. Two biomechanical tests were used: (a) the viscoelastic creep test and (b) a slow constant-rate uniaxial tension test; and two enzymatic proteoglycan extraction procedures were used: (a) chondroitinase ABC treatment and (b) a sequential enzymatic treatment with chondroitinase ABC, trypsin, and Streptomyces hyaluronidase. We found that the viscoelastic creep response of all cartilage specimens may be divided into two distinct phases: an initial phase (t < 15 s), characterized by a rapid increase in strain following load application, and a late phase (15 s ≤ t < 25,000 s), characterized by a more gradual increase in strain. A major finding of this study is that the kinetics of the creep response is greatly influenced by the glycosaminoglycan content of the tissue. For untreated and control specimens, the initial response comprises about 50% of the total strain, while for chondroitinase ABC and sequentially extracted specimens, the initial response comprises up to 83% of the total strain. Furthermore, most untreated and control specimens did not reach equilibrium within the 25,000 s test period, while enzymatically digested specimens often reached equilibrium in less than 100 s. Thus, we conclude that through their physical restraints on collagen, the bulk of proteoglycan present in the tissue acts to retard fibrillar reorganization and alignment under tensile loading, thereby effectively preventing sudden extension of the collagen network. In contrast, the results of our slow constant-rate uniaxial tension experiment show that essentially complete extraction of proteoglycan glycosaminoglycans does not affect the intrinsic tensile stiffness and strength of cartilage specimens or the collagen network in a significant manner. Hence, an important function of the bulk proteoglycans (i.e., the large aggregating type) in cartilage is to retard the rate of stretch and alignment when a tensile load is suddenly applied. This mechanism may be useful in protecting the cartilage collagen network during physiological situations, where sudden impact forces are imposed on a joint.
---
paper_title: Biphasic Poroviscoelastic Simulation of the Unconfined Compression of Articular Cartilage: I—Simultaneous Prediction of Reaction Force and Lateral Displacement
paper_content:
This study investigated the ability of the linear biphasic poroelastic (BPE) model and the linear biphasic poroviscoelastic (BPVE) model to simultaneously predict the reaction force and lateral displacement exhibited by articular cartilage during stress relaxation in unconfined compression. Both models consider articular cartilage as a binary mixture of a porous incompressible solid phase and an incompressible inviscid fluid phase. The BPE model assumes the solid phase is elastic, while the BPVE model assumes the solid phase is viscoelastic. In addition, the efficacy of two additional models was also examined, i.e., the transversely isotropic BPE (TIBPE) model, which considers transverse isotropy of the solid matrix within the framework of the linear BPE model assumptions, and a linear viscoelastic solid (LVE) model, which assumes that the viscoelastic behavior of articular cartilage is solely governed by the intrinsic viscoelastic nature of the solid matrix, independent of the interstitial fluid flow. It was found that the BPE model was able to accurately account for the lateral displacement, but unable to fit the short-term reaction force data of all specimens tested. The TIBPE model was able to account for either the lateral displacement or the reaction force, but not both simultaneously. The LVE model was able to account for the complete reaction force, but unable to fit the lateral displacement measured experimentally. The BPVE model was able to completely account for both lateral displacement and reaction force for all specimens tested. These results suggest that both the fluid flow-dependent and fluid flow-independent viscoelastic mechanisms are essential for a complete simulation of the viscoelastic phenomena of articular cartilage.
---
paper_title: A VISCOELASTIC MODEL FOR COLLAGEN FIBRES
paper_content:
The viscoelastic behaviour of collagen fibres of different lengths was studied by developing simulation models. These models were found to explain the creep and stress-relaxation behaviour of the collagen fibres.
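Since the abstract refers to simulation models reproducing both creep and stress relaxation, a generic standard-linear-solid (Zener) sketch of those two responses is included below; it is not the specific model developed in the paper, and all parameter values are assumptions:

import math

# Generic Zener (standard linear solid) model: spring E1 in parallel with a
# Maxwell arm (spring E2 in series with a dashpot of viscosity eta).
# Parameter values are illustrative assumptions, not fitted fibre data.
E1, E2, eta = 1.0e9, 0.5e9, 5.0e9   # Pa, Pa, Pa*s (assumed)

def relaxation_modulus(t):
    """Stress-relaxation modulus E(t) = E1 + E2*exp(-t/tau_relax)."""
    tau_relax = eta / E2
    return E1 + E2 * math.exp(-t / tau_relax)

def creep_compliance(t):
    """Creep compliance J(t) rising from 1/(E1+E2) toward 1/E1."""
    tau_creep = eta * (E1 + E2) / (E1 * E2)
    J_inf = 1.0 / E1
    J_0 = 1.0 / (E1 + E2)
    return J_inf + (J_0 - J_inf) * math.exp(-t / tau_creep)

for t in (0.0, 1.0, 10.0, 100.0):   # seconds
    print(f"t={t:6.1f} s  E(t)={relaxation_modulus(t) / 1e9:.3f} GPa  "
          f"J(t)={creep_compliance(t) * 1e9:.3f} 1/GPa")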
---
paper_title: A Phenomenological Approach Toward Patient-Specific Computational Modeling of Articular Cartilage Including Collagen Fiber Tracking
paper_content:
To model the cartilage morphology and the material response, a phenomenological and patient-specific simulation approach incorporating the collagen fiber fabric is proposed. Cartilage tissue respon ...
---
paper_title: Cartilage stress-relaxation proceeds slower at higher compressive strains.
paper_content:
Articular cartilage is the connective tissue which covers bone surfaces and deforms during in vivo activity. Previous research has investigated flow-dependent cartilage viscoelasticity, but relatively few studies have investigated flow-independent mechanisms. This study investigated polymer dynamics as an explanation for the molecular basis of flow-independent cartilage viscoelasticity. Polymer dynamics predicts that stress-relaxation will proceed more slowly at higher volumetric concentrations of polymer. Stress-relaxation tests were performed on cartilage samples after precompression to different strain levels. Precompression increases the volumetric concentration of cartilage biopolymers, and polymer dynamics predicts an increase in relaxation time constant. Stress-relaxation was slower for greater precompression. There was a significant correlation between the stress-relaxation time constant and cartilage volumetric concentration. Estimates of the flow-dependent timescale suggest that flow-dependent relaxation occurs on a longer timescale than presently observed. These results are consistent with polymer dynamics as a mechanism of cartilage viscoelasticity.
---
paper_title: Cartilage Stresses in the Human Hip Joint
paper_content:
The total surface stress measured in vitro on acetabular cartilage when step-loaded by an instrumented hemiprosthesis is partitioned into fluid and cartilage network stresses using a finite element model of the cartilage layer and measurements of the layer consolidation. The finite element model is based on in situ measurements of cartilage geometry and constitutive properties. Unique instrumentation was employed to collect the geometry and constitutive properties and pressure and consolidation data. When loaded, cartilage consolidates and exudes its interstitial fluid through and from its solid network into the interarticular gap. The finite element solutions include the spatial distributions of fluid and network stresses, the normal flow velocities into the gap, and the contact network stresses at the cartilage surface, all versus time. Even after long-duration application of physiological-level force, fluid pressure supports 90 percent of the load with the cartilage network stresses remaining well below the drained modulus of cartilage. The results support the "weeping" mechanism of joint lubrication proposed by McCutchen.
---
paper_title: The nonlinear interaction between cartilage deformation and interstitial fluid flow
paper_content:
Abstract The movement of interstitial fluid through articular cartilage and its influence on the creep behavior of the tissue due to a unit step load function has been investigated. Experimental results are presented to show that the hydraulic permeability of articular cartilage depends on both the axial compressive strain on the tissue sample, as well as the driving pressure difference maintained across the specimen. These experimental results have been utilized in a known theory of a deformable and permeable material used to describe the behavior of articular cartilage. The creep-like behavior of the cartilage in compression has been analytically treated. It has been determined that the nonlinear interaction between the hydraulic permeability of the tissue and the compressive strain on the tissue retards the progress of the consolidation of the cartilage during uniaxial compression. However, the equilibrium displacement of the articular surface as t → ∞ depends only on the elastic constant of the parallel spring and dashpot viscoelastic behavior of the solid component of the tissue.
---
paper_title: Structure-Function Relationships in Enzymatically Modified Articular Cartilage
paper_content:
The present study is aimed at revealing structure-function relationships of bovine patellar articular cartilage. Collagenase, chondroitinase ABC and elastase were used for controlled and selective enzymatic modifications of cartilage structure, composition and functional properties. The effects of the enzymatic degradations were quantitatively evaluated using quantitative polarized light microscopy, digital densitometry of safranin O-stained sections as well as with biochemical and biomechanical techniques. The parameters related to tissue composition and structure were correlated with the indentation stiffness of cartilage. In general, tissue alterations after enzymatic digestions were restricted to the superficial cartilage. All enzymatic degradations induced superficial proteoglycan (PG) depletion. Collagenase also induced detectable superficial collagen damage, though without causing cartilage fibrillation or tissue swelling. Quantitative microscopic techniques were more sensitive than biochemical methods in detecting these changes. The Young's modulus of cartilage decreased after enzymatic treatments indicating significant softening of the tissue. The PG concentration of the superficial zone proved to be the major determinant of the Young's modulus (r² = 0.767, n = 72, p < 0.001). Results of the present study indicate that specific enzymatic degradations of the tissue PGs and collagen can provide reproducible experimental models to clarify the structure-function relationships of cartilage. Effects of these models mimic the changes observed in early osteoarthrosis. Biomechanical testing and quantitative microscopic techniques proved to be powerful tools for detecting the superficial structural and compositional changes while the biochemical measurements on the whole uncalcified cartilage were less sensitive.
---
paper_title: Effect of superficial collagen patterns and fibrillation of femoral articular cartilage on knee joint mechanics-a 3D finite element analysis.
paper_content:
Collagen fibrils of articular cartilage have specific depth-dependent orientations and the fibrils bend in the cartilage surface to exhibit split-lines. Fibrillation of superficial collagen takes place in osteoarthritis. We aimed to investigate the effect of superficial collagen fibril patterns and collagen fibrillation of cartilage on stresses and strains within a knee joint. A 3D finite element model of a knee joint with cartilage and menisci was constructed based on magnetic resonance imaging. The fibril-reinforced poroviscoelastic material properties with depth-dependent collagen orientations and split-line patterns were included in the model. The effects of joint loading on stresses and strains in cartilage with various split-line patterns and medial collagen fibrillation were simulated under axial impact loading of 1000 N. In the model, the collagen fibrils resisted strains along the split-line directions. This increased also stresses along the split-lines. On the contrary, contact and pore pressures were not affected by split-line patterns. Simulated medial osteoarthritis increased tissue strains in both medial and lateral femoral condyles, and contact and pore pressures in the lateral femoral condyle. This study highlights the importance of the collagen fibril organization, especially that indicated by split-line patterns, for the weight-bearing properties of articular cartilage. Osteoarthritic changes of cartilage in the medial femoral condyle created a possible failure point in the lateral femoral condyle. This study provides further evidence on the importance of the collagen fibril organization for the optimal function of articular cartilage.
---
paper_title: Biomechanics: Mechanical Properties of Living Tissues
paper_content:
Prefaces. 1. Introduction: A Sketch of the History and Scope of the Field. 2. The Meaning of the Constitutive Equation. 3. The Flow Properties of Blood. 4. Mechanics of Erythrocytes, Leukocytes, and Other Cells. 5. Interaction of Red Blood Cells with Vessel Wall, and Wall Shear with Endothelium. 6. Bioviscoelastic Fluids. 7. Bioviscoelastic Solids. 8. Mechanical Properties and Active Remodeling of Blood Vessels. 9. Skeletal Muscle. 10. Heart Muscle. 11. Smooth Muscles. 12. Bone and Cartilage. Indices
---
paper_title: Confined and unconfined stress relaxation of cartilage: appropriateness of a transversely isotropic analysis
paper_content:
Previous studies have shown that the stress relaxation behavior of calf ulnar growth plate and chondroepiphysis cartilage can be described by a linear transversely isotropic biphasic model. The model provides a good fit to the observed unconfined compression transients when the out-of-plane Poisson's ratio is set to zero. This assumption is based on the observation that the equilibrium stress in the axial direction (σ_z) is the same in confined and unconfined compression, which implies that the radial stress σ_r = 0 in confined compression. In our study, we further investigated the ability of the transversely isotropic model to describe confined and unconfined stress relaxation behavior of calf cartilage. A series of confined and unconfined stress relaxation tests were performed on calf articular cartilage (4.5 mm diameter, ∼3.3 mm height) in a displacement-controlled compression apparatus capable of measuring σ_z and σ_r. In equilibrium, σ_r > 0 and σ_z in confined compression was greater than in unconfined compression. Transient data at each strain were fitted by the linear transversely isotropic biphasic model and the material parameters were estimated. Although the model could provide good fits to the unconfined transients, the estimated parameters overpredicted the measured σ_r. Conversely, if the model was constrained to match the equilibrium σ_r, the fits were poor. These findings suggest that the linear transversely isotropic biphasic model could not simultaneously describe the observed stress relaxation and equilibrium behavior of calf cartilage.
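As a brief aside (standard transversely isotropic elasticity, not taken from the paper), the reasoning behind the zero out-of-plane Poisson's ratio can be sketched with the stiffness components C_ij of the solid matrix, taking axis 3 as the loading axis:

\[
\text{confined } (\varepsilon_1=\varepsilon_2=0):\quad \sigma_3 = C_{33}\,\varepsilon_3,\qquad \sigma_r = C_{13}\,\varepsilon_3; \qquad
\text{unconfined equilibrium } (\sigma_r=0):\quad \sigma_3 = E_3\,\varepsilon_3 .
\]

Since E_3 = C_{33} - 2C_{13}^2/(C_{11}+C_{12}), equal equilibrium axial stresses in the two tests require C_{13} = 0, i.e. a vanishing out-of-plane Poisson's ratio, which in turn gives σ_r = 0 in confined compression.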
---
paper_title: Determination of collagen-proteoglycan interactions in vitro.
paper_content:
The objective of this study was to characterize the physical interactions of the molecular networks formed by mixtures of collagen and proteoglycan in vitro. Pure proteoglycan aggrecan solutions, collagen (type II) suspensions and mixtures of these molecules in varying proportions and concentrations were subjected to viscometric flow measurements using a cone-on-plate viscometer. Linear viscoelastic and non-Newtonian flow properties of these solutions and suspensions were described using a second-order statistical network theory for polymeric fluids (Zhu et al., 1991, J. Biomechanics 24, 1007–1018). This theory provides a set of material coefficients which relate the macroscopic flow behavior of the fluid to an idealized molecular network structure. The results indicated distinct differences between the flow properties of pure collagen suspensions and those of pure proteoglycan solutions. The collagen network showed much greater shear stiffness and more effective energy storage capability than the proteoglycan network. The relative proportion of collagen to proteoglycan is the dominant factor in determining the flow behavior of the mixtures. Analysis of the statistical network theory indicated that the collagen in a collagen-proteoglycan mixture enhances molecular interactions by increasing the amount of entanglement interactions and/or the strength of interaction, while aggrecan acts to reduce the number and/or strength of molecular interactions. These results characterize the physical interactions between type II collagen and aggrecan and provide some insight into their potential roles in giving articular cartilage its mechanical behavior.
---
paper_title: A nonlinear biphasic fiber-reinforced porohyperviscoelastic model of articular cartilage incorporating fiber reorientation and dispersion.
paper_content:
A nonlinear biphasic fiber-reinforced porohyperviscoelastic (BFPHVE) model of articular cartilage incorporating fiber reorientation effects during applied load was used to predict the response of ovine articular cartilage at relatively high strains (20%). The constitutive material parameters were determined using a coupled finite element-optimization algorithm that utilized stress relaxation indentation tests at relatively high strains. The proposed model incorporates the strain-hardening, tension-compression, permeability, and finite deformation nonlinearities that inherently exist in cartilage, and accounts for effects associated with fiber dispersion and reorientation and intrinsic viscoelasticity at relatively high strains. A new optimization cost function was used to overcome problems associated with large peak-to-peak differences between the predicted finite element and experimental loads that were due to the large strain levels utilized in the experiments. The optimized material parameters were found to be insensitive to the initial guesses. Using experimental data from the literature, the model was also able to predict both the lateral displacement and reaction force in unconfined compression, and the reaction force in an indentation test with a single set of material parameters. Finally, it was demonstrated that neglecting the effects of fiber reorientation and dispersion resulted in poorer agreement with experiments than when they were considered. There was an indication that the proposed BFPHVE model, which includes the intrinsic viscoelasticity of the nonfibrillar matrix (proteoglycan), might be used to model the behavior of cartilage up to relatively high strains (20%). The maximum percentage error between the indentation force predicted by the FE model using the optimized material parameters and that measured experimentally was 3%.
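Purely as an illustration of the coupled finite element-optimization idea described above (not the authors' code; the FE solver is replaced by a simple exponential stress-relaxation surrogate), a normalized least-squares fit of predicted to measured indentation loads could be set up as follows:

import numpy as np
from scipy.optimize import least_squares

def predicted_load(params, t):
    # Stand-in for the FE indentation model: peak load, equilibrium load and
    # relaxation time of a single-exponential stress-relaxation response.
    peak, equilibrium, tau = params
    return equilibrium + (peak - equilibrium) * np.exp(-t / tau)

def residuals(params, t, measured):
    # Normalizing by the measured load keeps large relaxation peaks from
    # dominating the cost (one way to handle peak-to-peak differences).
    return (predicted_load(params, t) - measured) / np.abs(measured)

t = np.linspace(0.0, 100.0, 200)
measured = predicted_load([12.0, 4.0, 15.0], t) + 0.05 * np.random.randn(t.size)
fit = least_squares(residuals, x0=[10.0, 5.0, 10.0], args=(t, measured), bounds=(0.0, np.inf))
print(fit.x)  # recovered surrogate parameters

In the actual coupled scheme, predicted_load would launch the finite element stress-relaxation simulation with the trial material parameters instead of evaluating a closed-form surrogate.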
---
paper_title: Experimental Verification and Theoretical Prediction of Cartilage Interstitial Fluid Pressurization At an Impermeable Contact Interface in Confined Compression
paper_content:
Interstitial fluid pressurization has long been hypothesized to play a fundamental role in the load support mechanism and frictional response of articular cartilage. However, to date, few experimental studies have been performed to verify this hypothesis from direct measurements. The first objective of this study was to investigate experimentally the hypothesis that cartilage interstitial fluid pressurization does support the great majority of the applied load, in the testing configurations of confined compression creep and stress relaxation. The second objective was to investigate the hypothesis that the experimentally observed interstitial fluid pressurization could also be predicted using the linear biphasic theory of Mow et al. (J. Biomech. Engng ASME, 102, 73-84, 1980). Fourteen bovine cartilage samples were tested in a confined compression chamber fitted with a microchip piezoresistive transducer to measure interstitial fluid pressure, while simultaneously measuring (during stress relaxation) or prescribing (during creep) the total stress. It was found that interstitial fluid pressure supported more than 90% of the total stress for durations as long as 725 ± 248 s during stress relaxation (mean ± S.D., n = 7), and 404 ± 229 s during creep (n = 7). When comparing experimental measurements of the time-varying interstitial fluid pressure against predictions from the linear biphasic theory, nonlinear coefficients of determination r² = 0.871 ± 0.086 (stress relaxation) and r² = 0.941 ± 0.061 (creep) were found. The results of this study provide some of the most direct evidence to date that interstitial fluid pressurization plays a fundamental role in cartilage mechanics; they also indicate that the mechanism of fluid load support in cartilage can be properly predicted from theory.
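For orientation only (the standard linear biphasic confined-compression problem, not a result specific to this paper), the governing relations used for such predictions reduce to a one-dimensional consolidation equation:

\[
\frac{\partial u}{\partial t} = H_A k\,\frac{\partial^2 u}{\partial z^2}, \qquad \tau \sim \frac{h^2}{H_A k},
\]

where u is the solid displacement, H_A the aggregate modulus, k the hydraulic permeability, h the sample thickness and τ the characteristic consolidation time; the fluid load fraction reported above corresponds to the ratio of the measured interstitial pressure to the total applied stress at the impermeable interface.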
---
paper_title: Computational biomechanics of articular cartilage of human knee joint: effect of osteochondral defects.
paper_content:
The functional conditions of articular cartilage and its supporting bone are tightly coupled, as injury to either adversely affects the joint mechanical environment. The objective of this study was to quantitatively investigate the extent of alterations in the mechanical environment of cartilage and knee joint in the presence of commonly observed osteochondral defects. An existing validated finite element model of a knee joint was used to construct a refined model of the tibial lateral compartment including proximal tibial bony structures. The response was computed under compression forces up to 2000 N while simulating localized bone damage, cartilage-bone horizontal split, bone overgrowth and absence of deep vertical collagen fibrils. Localized tibial bone damage increased overall joint compliance and substantially altered the pattern and magnitude of contact pressures and cartilage strains in both tibia and femur. These alterations were further exacerbated when bone damage was combined with a base cartilage split and absence of deep vertical collagen fibrils. A local bone boss markedly changed contact pressures and strain patterns in the neighbouring cartilage. Bone bruise/fracture and overgrowth adversely perturbed the homeostatic balance in the mechanical environment of the articular cartilage surrounding and opposing the lesion as well as the joint compliance. As such, they potentially contribute to the initiation and development of post-traumatic osteoarthritis.
---
paper_title: The biphasic poroviscoelastic behavior of articular cartilage: role of the surface zone in governing the compressive behavior.
paper_content:
Surface fibrillation of articular cartilage is an early sign of degenerative changes in the development of osteoarthritis. To assess the influence of the surface zone on the viscoelastic properties of cartilage under compressive loading, we prepared osteochondral plugs from skeletally mature steers, with and without the surface zone of articular cartilage, for study in the confined compression creep experiment. The relative contributions of two viscoelastic mechanisms, i.e. a flow-independent mechanism [Hayes and Bodine, J. Biomechanics 11, 407-419 (1978)], and a flow-dependent mechanism [Mow et al. J. biomech. Engng 102, 73-84 (1980)], to the compressive creep response of these two types of specimens were determined using the biphasic poroviscoelastic theory proposed by Mak [J. Biomechanics 20, 703-714 (1986)]. From the experimental results and the biphasic poroviscoelastic theory, we found that frictional drag associated with interstitial fluid flow and fluid pressurization are the dominant mechanisms of load support in the intact specimens, i.e. the flow-dependent mechanisms alone were sufficient to describe normal articular cartilage compressive creep behavior. For specimens with the surface removed, we found an increased creep rate which was derived from an increased tissue permeability, as well as significant changes in the flow-independent parameters of the viscoelastic solid matrix. From these tissue properties and the biphasic poroviscoelastic theory, we determined that the flow-dependent mechanisms of load support, i.e. frictional drag and fluid pressurization, were greatly diminished in cartilage without the articular surface. Calculations based upon these material parameters show that for specimens with the surface zone removed, the cartilage solid matrix became more highly loaded during the early stages of creep. This suggests that an important function of the articular surface is to provide for a low fluid permeability, and thereby serve to restrict fluid exudation and increase interstitial fluid pressurization. Thus, it is likely that with increasing severity of damage to the articular surface, load support in cartilage under compression shifts from the flow-dependent modes of fluid drag and pressurization to increased solid matrix stress. This suggests that it is important to maintain the integrity of the articular surface in preserving normal compressive behavior of the tissue and normal load carriage in the joint.
---
paper_title: Failure locus of the anterior cruciate ligament: 3D finite element analysis
paper_content:
Anterior cruciate ligament (ACL) disruption is a common injury that is detrimental to an athlete's quality of life. Determining the mechanisms that cause ACL injury is important in order to develop proper interventions. A failure locus, defined as the combinations of loadings and movements (internal/external rotation of the femur and valgus and varus moments) at a 25° knee flexion angle that lead to ACL failure, was obtained. The results indicated that varus and valgus movements contributed more to ACL injury than femoral rotation. Also, the von Mises stress in the lateral tibial cartilage during the valgus ACL injury mechanism was 83% greater than that in the medial cartilage during the varus mechanism of ACL injury. The results of this study could be used to develop training programmes focused on the avoidance of the described combination of movements which may lead to ACL injury.
---
paper_title: Computational biodynamics of human knee joint in gait: from muscle forces to cartilage stresses.
paper_content:
Using a validated finite element model of the intact knee joint, we aim to compute muscle forces and the joint response in the stance phase of gait. The model is driven by reported in vivo kinematics-kinetics data and ground reaction forces in asymptomatic subjects. Cartilage layers and menisci are simulated as depth-dependent tissues with collagen fibril networks. A simplified model with a less refined mesh and isotropic, depth-independent cartilage is also considered to investigate the effect of model accuracy on results. Muscle forces and the detailed joint response are computed following an iterative procedure that yields results satisfying the kinematics/kinetics constraints while accounting for muscle forces and passive properties at the deformed configurations. Predictions confirm that muscle forces and the joint response alter substantially during the stance phase and that a simplified joint model may accurately be used to estimate muscle forces but not necessarily contact forces/areas, tissue stresses/strains, and ligament forces. Predictions are in general agreement with results of earlier studies. Performing the analyses at six periods from the beginning to the end of the stance phase (0%, 5%, 25%, 50%, 75% and 100%), hamstrings forces peaked at 5%, quadriceps forces at 25%, and gastrocnemius forces at 75%. The ACL force reached its maximum of 343 N at 25% and decreased thereafter. Contact forces reached their maxima at the 5%, 25% and 75% periods, with the medial compartment carrying a major portion of the load and experiencing larger relative movements and cartilage strains. Much smaller contact stresses were computed at the patellofemoral joint. This novel iterative kinematics-driven model is promising for joint analysis in altered conditions.
---
paper_title: A Fibril-Network-Reinforced Biphasic Model of Cartilage in Unconfined Compression
paper_content:
Cartilage mechanical function relies on a composite structure of a collagen fibrillar network entrapping a proteoglycan matrix. Previous biphasic or poroelastic models of this tissue, which have approximated its composite structure using a homogeneous solid phase, have experienced difficulties in describing measured material responses. Progress to date in resolving these difficulties has demonstrated that a constitutive law that is successful for one test geometry (confined compression) is not necessarily successful for another (unconfined compression). In this study, we hypothesize that an alternative fibril-reinforced composite biphasic representation of cartilage can predict measured material responses and explore this hypothesis by developing and solving analytically a fibril-reinforced biphasic model for the case of uniaxial unconfined compression with frictionless compressing platens. The fibrils were considered to provide stiffness in tension only. The lateral stiffening provided by the fibril network dramatically increased the frequency dependence of disk rigidity in dynamic sinusoidal compression and the magnitude of the stress relaxation transient, in qualitative agreement with previously published data. Fitting newly obtained experimental stress relaxation data to the composite model allowed extraction of mechanical parameters from these tests, such as the rigidity of the fibril network, in addition to the elastic constants and the hydraulic permeability of the remaining matrix. Model calculations further highlight a potentially important difference between homogeneous and fibril-reinforced composite models. In the latter type of model, the stresses carried by different constituents can be dissimilar, even in sign (compression versus tension), even though strains can be identical. Such behavior, resulting only from a structurally physiological description, could have consequences in the efforts to understand the mechanical signals that determine cellular and extracellular biological responses to mechanical loads in cartilage.
---
paper_title: Flow-independent viscoelastic properties of articular cartilage matrix.
paper_content:
A sinusoidal shear generator developed by Miles (1962) is used to measure viscoelastic complex shear moduli for bovine articular cartilage. The shear loading mode uncouples the measurement of matrix properties from the flow of interstitial fluid and thus provides flow-independent measurements which are highly sensitive to biochemical alteration of the matrix constituents. Increased cross-linking causes significant increases in the matrix storage modulus while proteoglycan depletion and collagenase digestion cause significant decreases. Highly significant differences in complex shear moduli are also observed between the proteoglycan depletion group and the collagenase digestion group.
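For reference (standard linear viscoelasticity, added for clarity), the quantity measured in sinusoidal shear at angular frequency ω is the complex shear modulus

\[
G^*(\omega) = G'(\omega) + i\,G''(\omega), \qquad |G^*| = \sqrt{G'^2 + G''^2}, \qquad \tan\delta = \frac{G''}{G'},
\]

where G' is the storage (elastic) modulus, G'' the loss modulus and δ the phase lag between stress and strain.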
---
paper_title: An Analysis of the Unconfined Compression of Articular Cartilage
paper_content:
Analytical solutions have been obtained for the internal deformation and fluid-flow fields and the externally observable creep, stress relaxation, and constant strain-rate behaviors which occur during the unconfined compression of a cylindrical specimen of a fluid-filled, porous, elastic solid, such as articular cartilage, between smooth, impermeable plates. Instantaneously, the "biphasic" continuum deforms without change in volume and behaves like an incompressible elastic solid of the same shear modulus. Radial fluid flow then allows the internal fluid pressure to equilibrate with the external environment. The equilibrium response is controlled by the Young's modulus and Poisson's ratio of the solid matrix.
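The two limiting responses described above can be summarized compactly (standard small-strain results for this geometry, stated here for clarity): with E_s and ν_s the Young's modulus and Poisson's ratio of the solid matrix and μ_s its shear modulus,

\[
E_{t=0^+} = 3\mu_s = \frac{3E_s}{2(1+\nu_s)} \quad \text{(instantaneous, incompressible response)}, \qquad
E_{t\to\infty} = E_s \quad \text{(drained equilibrium response)}.
\]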
---
paper_title: Failure criterion of collagen fiber: Viscoelastic behavior simulated by using load control data
paper_content:
A nonlinear Zener model is developed to model the viscoelastic behavior of collagen fibers, a building block of the biological soft tissues in the skeletal system. The effects of the strain rate dependency, the loading history, rest, and recovery on the stress-strain relationship of collagen fibers were investigated using the Zener model. The following loading conditions were simulated: (1) the stress relaxation after cyclic loading, (2) the constant strain rate loading before and after cyclic loading (stabilization) and post recovery, and (3) the constant strain rate loading over a wide range of loading rates. In addition, we explored the critical values of stress and strain using different failure criteria at different strain rates. Four major findings were derived from these simulations. First, the stress relaxation is larger with a smaller number of cycles of preloading. Second, the strain rate sensitivity diminishes after stabilization and recovery from resting. Third, the stress-strain curve is dependent on the strain rate except for extreme loading conditions (very fast or slow rates of loading). Finally, the strain energy density (SED) criterion may be a more practical failure criterion for collagen fibers than the ultimate stress or strain criterion. These results provide a basis for interpreting the viscoelastic and failure behaviors of complex structures such as spinal functional units at a lower computational cost than full finite element modeling of the whole structure would require.
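For context, the linear Zener (standard linear solid) form on which such nonlinear variants are built can be written as (generic notation, not the paper's exact formulation; E_1 is the equilibrium spring, E_2 and η the Maxwell arm):

\[
\sigma + \frac{\eta}{E_2}\,\dot{\sigma} = E_1\,\varepsilon + \frac{\eta\,(E_1+E_2)}{E_2}\,\dot{\varepsilon}, \qquad
E(t) = E_1 + E_2\,e^{-t/\tau}, \quad \tau = \frac{\eta}{E_2},
\]

so that the relaxation modulus E(t) decays from the instantaneous value E_1 + E_2 to the equilibrium value E_1.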
---
paper_title: Stresses in the local collagen network of articular cartilage: a poroviscoelastic fibril-reinforced finite element study.
paper_content:
Osteoarthritis (OA) is a multifactorial disease, resulting in diarthrodial joint wear and eventually destruction. Swelling of cartilage, which is proportional to the amount of collagen damage, is an initial event of cartilage degeneration, so damage to the collagen fibril network is likely to be one of the earliest signs of OA cartilage degeneration. We propose that the local stresses and strains in the collagen fibrils, which cause the damage, cannot be determined dependably without taking the local arcade-like collagen-fibril structure into account. We investigate this using a poroviscoelastic fibril-reinforced FEA model. The constitutive fibril properties were determined by fitting numerical data to experimental results of unconfined compression and indentation tests on samples of bovine patellar articular cartilage. It was demonstrated that with this model the stresses and strains in the collagen fibrils can be calculated. It was also shown that fibrils with different orientations at the same location can be loaded differently, depending on the local architecture of the collagen network. To the best of our knowledge, the present model is the first that can account for these features. We conclude that the local stresses and strains in the articular cartilage are highly influenced by the local morphology of the collagen-fibril network.
---
paper_title: A physical model for the time-dependent deformation of articular cartilage.
paper_content:
A physical analogue was developed to simulate the time-dependent deformation of articular cartilage. The analogue was constructed from a matrix of water-saturated sponge material whose permeability could be varied, and was constrained so as to allow one-dimensional deformation under both static and dynamic compressive loading. Simultaneous measurements were made of the applied stress, matrix excess pore pressure and matrix strain. The results obtained reinforce the view that under static and low strain-rate loading conditions, a consolidatable system like cartilage sustains the applied stress through a stress-sharing mechanism between matrix water and the solid skeleton. However, at high strain-rates load-bearing is dominated by a mechanism in which the matrix water is immobilized and the excess pore pressure rises to almost that of the applied stress, thus suggesting that the constituents of the matrix act as a single functional entity to support the applied load. The model supports the description of cartilage as a poro-visco-hyperelastic material.
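The stress-sharing mechanism described above is the familiar effective-stress (consolidation) decomposition; as a reminder (not a formula taken from the paper),

\[
\sigma_{applied} = \sigma' + p_{excess},
\]

where σ' is the stress carried by the solid skeleton and p_excess the excess pore pressure of the matrix water: under static or slow loading p_excess dissipates and σ' approaches σ_applied, whereas at high strain rates p_excess rises toward σ_applied and the fluid dominates load support.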
---
paper_title: A fibril-reinforced poroviscoelastic swelling model for articular cartilage.
paper_content:
From a mechanical point of view, the most relevant components of articular cartilage are the tight and highly organized collagen network together with the charged proteoglycans. Due to the fixed charges of the proteoglycans, the cation concentration inside the tissue is higher than in the surrounding synovial fluid. This excess of ion particles leads to an osmotic pressure difference, which causes swelling of the tissue. The fibrillar collagen network resists straining and swelling pressures. This combination makes cartilage a unique, highly hydrated and pressurized tissue, enforced with a strained collagen network. Many theories to explain articular cartilage behavior under loading, expressed in computational models that either include the swelling behavior or the properties of the anisotropic collagen structure, can be found in the literature. The most common tests used to determine the mechanical quality of articular cartilage are those of confined compression, unconfined compression, indentation and swelling. All theories currently available in the literature can explain the cartilage response occurring in some of the above tests, but none of them can explain these for all of the tests. We hypothesized that a model including simultaneous mathematical descriptions of (1) the swelling properties due to the fixed-charge densities of the proteoglycans and (2) the anisotropic viscoelastic collagen structure can explain all these tests simultaneously. To study this hypothesis we extended our fibril-reinforced poroviscoelastic finite element model with our biphasic swelling model. We have shown that the newly developed fibril-reinforced poroviscoelastic swelling (FPVES) model for articular cartilage can simultaneously account for the reaction force during swelling, confined compression, indentation and unconfined compression as well as the lateral deformation during unconfined compression. Using this theory it is possible to analyze the link between the collagen network and the swelling properties of articular cartilage.
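As background to the swelling part of the model (a standard ideal Donnan expression, given only for orientation; the paper's implementation may differ in detail), the osmotic pressure difference generated by the fixed charges can be written as

\[
\Delta\pi = RT\left(\sqrt{c_F^2 + 4\,c_{ext}^2} - 2\,c_{ext}\right),
\]

where c_F is the fixed-charge density of the proteoglycans, c_ext the external (synovial) salt concentration, R the gas constant and T the absolute temperature.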
---
paper_title: The Role of Flow-Independent Viscoelasticity in the Biphasic Tensile and Compressive Responses of Articular Cartilage
paper_content:
A long-standing challenge in the biomechanics of connective tissues (e.g., articular cartilage, ligament, tendon) has been the reported disparities between their tensile and compressive properties. In general, the intrinsic tensile properties of the solid matrices of these tissues are dictated by the collagen content and microstructural architecture, and the intrinsic compressive properties are dictated by their proteoglycan content and molecular organization as well as water content. These distinct materials give rise to pronounced and experimentally well-documented nonlinear tension-compression stress-strain responses, as well as biphasic or intrinsic extracellular matrix viscoelastic responses. While many constitutive models of articular cartilage have captured one or more of these experimental responses, no single constitutive law has successfully described the uniaxial tensile and compressive responses of cartilage within the same framework. The objective of this study was to combine two previously proposed extensions of the biphasic theory of Mow et al. [1980, ASME J. Biomech. Eng., 102, pp. 73-84] to incorporate tension-compression nonlinearity as well as intrinsic viscoelasticity of the solid matrix of cartilage. The biphasic-conewise linear elastic model proposed by Soltz and Ateshian [2000, ASME J. Biomech. Eng., 122, pp. 576-586] and based on the bimodular stress-strain constitutive law introduced by Curnier et al. [1995, J. Elasticity, 37, pp. 1-38], as well as the biphasic poroviscoelastic model of Mak [1986, ASME J. Biomech. Eng., 108, pp. 123-130], which employs the quasi-linear viscoelastic model of Fung [1981, Biomechanics: Mechanical Properties of Living Tissues, Springer-Verlag, New York], were combined in a single model to analyze the response of cartilage to standard testing configurations. Results were compared to experimental data from the literature and it was found that a simultaneous prediction of compression and tension experiments of articular cartilage, under stress-relaxation and dynamic loading, can be achieved when properly taking into account both flow-dependent and flow-independent viscoelasticity effects, as well as tension-compression nonlinearity. [DOI: 10.1115/1.1392316]
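The two ingredients being combined can be summarized as follows (standard generic forms, added for clarity rather than the authors' exact notation): a bimodular (conewise linear) elastic law for the solid matrix and Fung's quasi-linear viscoelastic hereditary integral,

\[
\sigma^e(\varepsilon) = \begin{cases} H_{+A}\,\varepsilon, & \varepsilon > 0 \ (\text{tension}) \\ H_{-A}\,\varepsilon, & \varepsilon \le 0 \ (\text{compression}) \end{cases}
\qquad
\sigma(t) = \int_0^t G(t-\tau)\,\frac{\partial \sigma^e}{\partial \varepsilon}\,\dot{\varepsilon}(\tau)\,\mathrm{d}\tau,
\]

with H_{+A} much larger than H_{-A} capturing the tension-compression nonlinearity and G(t) the reduced relaxation function of the solid matrix.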
---
paper_title: Mechanical behavior of articular cartilage: quantitative changes with alteration of ionic environment.
paper_content:
The in vitro viscoelastic mechanical response of normal rabbit articular cartilage is quantitatively altered by changes in the ionic concentration of the test environment. Experimental results indicate that specific functional relationships exist between shear moduli, retardation time spectra and ionic concentration. The forms of these relationships are consistent with the structure and physico-chemical composition of the tissue.
---
paper_title: Knee joint biomechanics in closed-kinetic-chain exercises.
paper_content:
Effective management of knee joint disorders demands appropriate rehabilitation programs to restore function while strengthening muscles. Excessive stresses in cartilage/menisci and forces in ligaments should be avoided so as not to exacerbate the joint condition after an injury or reconstruction. Using a validated 3D nonlinear finite element model, detailed biomechanics of the entire joint in closed-kinetic-chain squat exercises are investigated at different flexion angles, weights in hands, femur-tibia orientations and coactivity in hamstrings. Predictions are in agreement with results of earlier studies. Estimation of small forces in the cruciate ligaments advocates the use of squat exercises at all joint angles and external loads. In contrast, large contact stresses, especially at the patellofemoral joint, that approach the cartilage failure threshold in compression suggest avoiding squatting at greater flexion angles, joint moments and weights in hands. Current results are helpful in the comprehensive evaluation and design of effective exercise therapies and training programs with minimal risk to various components.
---
paper_title: Experimental determination of the linear biphasic constitutive coefficients of human fetal proximal femoral chondroepiphysis.
paper_content:
The mechanical properties of the cartilaginous regions of the proximal femoral epiphysis are an important factor in load transmission through the hip joint of young children. Cylindrical test specimens excised from the chondroepiphysis of human stillborn femoral heads were subjected to uniaxial loading in peripherally-unconfined compression, using a ramp/plateau input strain history. The corresponding load vs time curves were analyzed in terms of a recent analytical solution for a linear biphasic material (the well-known KLM model), allowing calculation of that model's three fundamental constitutive coefficients (permeability, equilibrium modulus and solid-phase Poisson ratio) for this material. The numerical algorithm developed to evaluate the biphasic solution yielded very precise replication of previously published KLM parametric plots. When fitted to experimental load histories, however, the model provided only a rather loose approximation of specimen behavior, due apparently to a substantial underestimation of the transient response component associated with interstitial fluid transport. Averaged over the series, the best-fit values for permeability (2.51 × 10⁻¹⁵ m⁴ N⁻¹ s⁻¹) and equilibrium modulus (0.699 MPa) were in the range of values accepted for human adult articular cartilage. A consequence of the coarseness of the analytical curve fits was that a solid-phase Poisson ratio of 0.0 was inferred for all specimens. The permeability vs equilibrium modulus exhibited a nearly linear (r = 0.74) inverse relationship similar to that reported for adult articular cartilage.
---
paper_title: Influence of patellofemoral articular geometry and material on mechanics of the unresurfaced patella.
paper_content:
Patellar resurfacing during knee replacement is still under debate, with several studies reporting higher incidence of anterior knee pain in unresurfaced patellae. Congruency between patella and femur impacts the mechanics of the patellar cartilage and strain in the underlying bone, with higher stresses and strains potentially contributing to cartilage wear and anterior knee pain. The material properties of the articulating surfaces will also affect load transfer between femur and patella. The purpose of this study was to evaluate the mechanics of the unresurfaced patella and compare with natural and resurfaced conditions in a series of finite element models of the patellofemoral joint. In the unresurfaced analyses, three commercially available implants were compared, in addition to an 'ideal' femoral component which replicated the geometry, but not the material properties, of the natural femur. Hence, the contribution of femoral component material properties could be assessed independently from geometry changes. The ideal component tracked the kinematics and patellar bone strain of the natural knee, but had consistently inferior contact mechanics. In later flexion, compressive patellar bone strain in unresurfaced conditions was substantially higher than in resurfaced conditions. Understanding how femoral component geometry and material properties in unresurfaced knee replacement alters cartilage contact mechanics and bone strain may aid in explaining why the incidence of anterior knee pain is higher in the unresurfaced population, and ultimately contribute to identifying criteria to pre-operatively predict which patients are suited to an unresurfaced procedure and reducing the incidence of anterior knee pain in the unresurfaced patient population.
---
paper_title: Importance of the superficial tissue layer for the indentation stiffness of articular cartilage
paper_content:
Indentation testing is a widely used technique for nondestructive mechanical analysis of articular cartilage. Although cartilage shows an inhomogeneous, layered structure with anisotropic mechanical properties, most theoretical indentation models assume material homogeneity and isotropy. In the present study, quantitative polarized light microscopy (PLM) measurements from canine cartilage were utilized to characterize the thickness and structure of the superficial, collagenous tissue layer as well as to reveal its relation to experimental indentation measurements. In addition to experimental analyses, a layered, transversely isotropic finite element (FE) model was developed and the effect of a superficial (tangential) tissue layer with a high elastic modulus in the direction parallel to the articular surface on the indentation response was studied. The experimental indentation stiffness was positively correlated with the relative thickness of the superficial cartilage layer. Also the optical retardation, which reflects the degree of parallel organization of collagen fibrils as well as collagen content, was related to indentation stiffness. FE results indicated effective stiffening of articular cartilage under indentation due to the high transverse modulus of the superficial layer. The present results suggest that indentation testing is an efficient technique for the characterization of superficial degeneration of articular cartilage.
---
paper_title: Nonlinear analysis of cartilage in unconfined ramp compression using a fibril reinforced poroelastic model.
paper_content:
OBJECTIVE: To develop a biomechanical model for cartilage which is capable of capturing experimentally observed nonlinear behaviours of cartilage and to investigate effects of collagen fibril reinforcement in cartilage. DESIGN: A sequence of 10 or 20 steps of ramp compression/relaxation applied to cartilage disks in uniaxial unconfined geometry is simulated for comparison with experimental data. BACKGROUND: Mechanical behaviours of cartilage, such as the compression-offset dependent stiffening of the transient response and the strong relaxation component, have been previously difficult to describe using the biphasic model in unconfined compression. METHODS: Cartilage is modelled as a fluid-saturated solid reinforced by an elastic fibrillar network. The latter, mainly representing collagen fibrils, is considered as a distinct constituent embedded in a biphasic component made up mainly of proteoglycan macromolecules and a fluid carrying mobile ions. The Young's modulus of the fibrillar network is taken to vary linearly with its tensile strain but to be zero for compression. Numerical computations are carried out using a finite element procedure, for which the fibrillar network is discretized into a system of spring elements. RESULTS: The nonlinear fibril reinforced poroelastic model is capable of describing the strong relaxation behaviour and compression-offset dependent stiffening of cartilage in unconfined compression. Computational results are also presented to demonstrate unique features of the model, e.g. the matrix stress in the radial direction is changed from tensile to compressive due to presence of distinct fibrils in the model. RELEVANCE: Experimentally observed nonlinear behaviours of cartilage are successfully simulated, and the roles of collagen fibrils are distinguished by using the proposed model. Thus this study may lead to a better understanding of physiological responses of individual constituents of cartilage to external loads, and of the roles of mechanical loading in cartilage remodelling and pathology.
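The fibril law described in the Methods (linear stiffening in tension, no compressive stiffness) can be written compactly as (generic notation, added for clarity)

\[
E_f(\varepsilon_f) = \begin{cases} E_f^{0} + E_f^{\varepsilon}\,\varepsilon_f, & \varepsilon_f > 0 \\ 0, & \varepsilon_f \le 0 \end{cases}
\qquad
\boldsymbol{\sigma}_{total} = \boldsymbol{\sigma}_{nf} + \boldsymbol{\sigma}_{f} - p\,\mathbf{I},
\]

where ε_f is the fibril strain, σ_nf the effective stress of the nonfibrillar (drained) matrix, σ_f the fibril network stress and p the interstitial fluid pressure.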
---
paper_title: Fibril reinforced poroelastic model predicts specifically mechanical behavior of normal, proteoglycan depleted and collagen degraded articular cartilage
paper_content:
Degradation of collagen network and proteoglycan (PG) macromolecules are signs of articular cartilage degeneration. These changes impair cartilage mechanical function. Effects of collagen degradation and PG depletion on the time-dependent mechanical behavior of cartilage are different. In this study, numerical analyses, which take the compression-tension nonlinearity of the tissue into account, were carried out using a fibril reinforced poroelastic finite element model. The study aimed at improving our understanding of the stress-relaxation behavior of normal and degenerated cartilage in unconfined compression. PG and collagen degradations were simulated by decreasing the Young's modulus of the drained porous (nonfibrillar) matrix and the fibril network, respectively. Numerical analyses were compared to results from experimental tests with chondroitinase ABC (PG depletion) or collagenase (collagen degradation) digested samples. Fibril reinforced poroelastic model predicted the experimental behavior of cartilage after chondroitinase ABC digestion by a major decrease of the drained porous matrix modulus (−64±28%) and a minor decrease of the fibril network modulus (−11±9%). After collagenase digestion, in contrast, the numerical analyses predicted the experimental behavior of cartilage by a major decrease of the fibril network modulus (−69±5%) and a decrease of the drained porous matrix modulus (−44±18%). The reduction of the drained porous matrix modulus after collagenase digestion was consistent with the microscopically observed secondary PG loss from the tissue. The present results indicate that the fibril reinforced poroelastic model is able to predict specifically characteristic alterations in the stress-relaxation behavior of cartilage after enzymatic modifications of the tissue. We conclude that the compression-tension nonlinearity of the tissue is needed to capture realistically the mechanical behavior of normal and degenerated articular cartilage.
---
paper_title: A fibril reinforced nonhomogeneous poroelastic model for articular cartilage: inhomogeneous response in unconfined compression.
paper_content:
The depth dependence of material properties of articular cartilage, known as the zonal differences, is incorporated into a nonlinear fibril-reinforced poroelastic model developed previously in order to explore the significance of material heterogeneity in the mechanical behavior of cartilage. The material variations proposed are based on extensive observations. The collagen fibrils are modeled as a distinct constituent which reinforces the other two constituents representing proteoglycans and water. The Young's modulus and Poisson's ratio of the drained nonfibrillar matrix are so determined that the aggregate compressive modulus for confined geometry fits the experimental data. Three nonlinear factors are considered, i.e. the effect of finite deformation, the dependence of permeability on dilatation and the fibril stiffening with its tensile strain. Solutions are extracted using a finite element procedure to simulate unconfined compression tests. The features of the model are then demonstrated with an emphasis on the results obtainable only with a nonhomogeneous model, showing reasonable agreement with experiments. The model suggests mechanical behaviors significantly different from those revealed by homogeneous models: not only the depth variations of the strains which are expected by qualitative analyses, but also, for instance, the relaxation-time dependence of the axial strain which is normally not expected in a relaxation test. Therefore, such a nonhomogeneous model is necessary for better understanding of the mechanical behavior of cartilage.
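Of the three nonlinear factors listed, the dependence of permeability on dilatation is commonly modelled with an exponential law of the Lai-Mow type (shown here as a generic example; the paper's exact form may differ):

\[
k(e) = k_0\,e^{M e},
\]

where e = tr(ε) is the dilatation, k_0 the permeability of the undeformed tissue and M a material constant, so that compaction (e < 0) reduces permeability and slows fluid exudation.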
---
paper_title: The Generalized Consolidation of Articular Cartilage: An Investigation of Its Near-Physiological Response to Static Load
paper_content:
This paper presents a study of the response of articular cartilage to compression whilst measuring simultaneously its strain and fluid excess pore pressure using a newly developed experimental apparatus for testing the tissue in its unconfined state. This has provided a comparison of the load-induced responses of the cartilage matrix under axial, radial and 3-D consolidation regimes. Our results demonstrate that the patterns of the hydrostatic excess pore pressure for axial and 3-D consolidation are similar, but differ significantly from that obtained under the more physiologically relevant condition in which the matrix exhibits radial consolidation when loaded either through a non-porous polished stainless steel indenter or an opposing cartilage disc. Based on the transient strain characteristics obtained under axial and unconfined compression we argue that consolidation is indeed the controlling mechanism of cartilage biomechanical function.
---
paper_title: Stress–relaxation of human patellar articular cartilage in unconfined compression: Prediction of mechanical response by tissue composition and structure
paper_content:
Mechanical properties of articular cartilage are controlled by tissue composition and structure. Cartilage function is sensitively altered during tissue degeneration, in osteoarthritis (OA). However, mechanical properties of the tissue cannot be determined non-invasively. In the present study, we evaluate the feasibility to predict, without mechanical testing, the stress–relaxation response of human articular cartilage under unconfined compression. This is carried out by combining microscopic and biochemical analyses with composition-based mathematical modeling. Cartilage samples from five cadaver patellae were mechanically tested under unconfined compression. Depth-dependent collagen content and fibril orientation, as well as proteoglycan and water content were derived by combining Fourier transform infrared imaging, biochemical analyses and polarized light microscopy. Finite element models were constructed for each sample in unconfined compression geometry. First, composition-based fibril-reinforced poroviscoelastic swelling models, including composition and structure obtained from microscopical and biochemical analyses were fitted to experimental stress–relaxation responses of three samples. Subsequently, optimized values of model constants, as well as compositional and structural parameters were implemented in the models of two additional samples to validate the optimization. Theoretical stress–relaxation curves agreed with the experimental tests (R = 0.95–0.99). Using the optimized values of mechanical parameters, as well as composition and structure of additional samples, we were able to predict their mechanical behavior in unconfined compression, without mechanical testing (R = 0.98). Our results suggest that specific information on tissue composition and structure might enable assessment of cartilage mechanics without mechanical testing.
---
paper_title: High-resolution diffusion tensor imaging of human patellar cartilage: Feasibility and preliminary findings
paper_content:
MR diffusion tensor imaging (DTI) was used to analyze the microstructural properties of articular cartilage. Human patellar cartilage-on-bone samples were imaged at 9.4 T using a diffusion-weighted SE sequence (12 gradient directions, resolution = 39 × 78 × 1500 μm³). Voxel-based maps of the mean diffusivity, fractional anisotropy (FA), and eigenvectors were calculated. The mean diffusivity decreased from the surface (1.45 × 10⁻³ mm²/s) to the tide mark (0.68 × 10⁻³ mm²/s). The FA was low (0.04-0.28) and had local maxima near the surface and in the portion of the cartilage corresponding to the radial layer. The eigenvector corresponding to the largest eigenvalue showed a distinct zonal pattern, being oriented tangentially and radially in the upper and lower portions of the cartilage, respectively. The findings correspond to current scanning electron microscopy (SEM) data on the zonal architecture of cartilage. The eigenvector maps appear to reflect the alignment of the collagenous fibers in cartilage. In view of current efforts to develop and evaluate structure-modifying therapeutic approaches in osteoarthritis (OA), DTI may offer a tool to assess the structural properties of cartilage.
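For reference, the fractional anisotropy reported above is the standard DTI measure computed from the eigenvalues λ_1, λ_2, λ_3 of the diffusion tensor:

\[
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\;\frac{\sqrt{(\lambda_1-\bar{\lambda})^2+(\lambda_2-\bar{\lambda})^2+(\lambda_3-\bar{\lambda})^2}}{\sqrt{\lambda_1^2+\lambda_2^2+\lambda_3^2}}, \qquad \bar{\lambda} = \frac{\lambda_1+\lambda_2+\lambda_3}{3},
\]

where \bar{\lambda} is the mean diffusivity and the eigenvector belonging to the largest eigenvalue defines the preferred diffusion (and, by inference, fibre) direction.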
---
paper_title: Depth-dependent Compressive Equilibrium Properties of Articular Cartilage Explained by its Composition
paper_content:
For this study, we hypothesized that the depth-dependent compressive equilibrium properties of articular cartilage are the inherent consequence of its depth-dependent composition, and not the result of depth-dependent material properties. To test this hypothesis, our recently developed fibril-reinforced poroviscoelastic swelling model was expanded to include the influence of intra- and extra-fibrillar water content, and the influence of the solid fraction on the compressive properties of the tissue. With this model, the depth-dependent compressive equilibrium properties of articular cartilage were determined, and compared with experimental data from the literature. The typical depth-dependent behavior of articular cartilage was predicted by this model. The effective aggregate modulus was highly strain-dependent. It decreased with increasing strain for low strains, and increases with increasing strain for high strains. This effect was more pronounced with increasing distance from the articular surface. The main insight from this study is that the depth-dependent material behavior of articular cartilage can be obtained from its depth-dependent composition only. This eliminates the need for the assumption that the material properties of the different constituents themselves vary with depth. Such insights are important for understanding cartilage mechanical behavior, cartilage damage mechanisms and tissue engineering studies.
---
paper_title: A hyperelastic biphasic fibre-reinforced model of articular cartilage considering distributed collagen fibre orientations: continuum basis, computational aspects and applications
paper_content:
Cartilage is a multi-phase material composed of fluid and electrolytes (68-85% by wet weight), proteoglycans (5-10% by wet weight), chondrocytes, collagen fibres and other glycoproteins. The solid ...
---
paper_title: Structural Analysis of Articular Cartilage Using Multiphoton Microscopy: Input for Biomechanical Modeling
paper_content:
The 3-D morphology of chicken articular cartilage was quantified using multiphoton microscopy (MPM) for use in continuum-mechanical modeling. To motivate this morphological study we propose aspects of a new, 3-D finite strain constitutive model for articular cartilage focusing on the essential load-bearing morphology: an inhomogeneous, poro-(visco)elastic solid matrix reinforced by an anisotropic, (visco)elastic dispersed fiber fabric which is saturated by an incompressible fluid residing in strain-dependent pores. Samples of fresh chicken cartilage were sectioned in three orthogonal planes and imaged using MPM, specifically imaging the collagen fibers using second harmonic generation. Employing image analysis techniques based on Fourier analysis, we derived the principal directionality and dispersion of the collagen fiber fabric in the superficial layer. In the middle layer, objective thresholding techniques were used to extract the volume fraction occupied by extracellular collagen matrix. In conjunction with information available in the literature, or additional experimental testing, we show how this data can be used to derive a 3-D map of the initial solid volume fraction and Darcy permeability.
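As a rough sketch of the Fourier-based orientation analysis mentioned above (illustrative only; the authors' actual image-processing pipeline is not reproduced here), the principal in-plane fibre direction and a dispersion index can be estimated from the power spectrum of a 2-D image:

import numpy as np

def fiber_orientation_fft(image):
    # Power spectrum of the zero-mean image; line-like fibres produce a spectrum
    # elongated perpendicular to the fibre direction.
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    ny, nx = image.shape
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    theta = np.arctan2(ky, kx)  # orientation of each spatial-frequency component
    # Power-weighted circular mean of the doubled angle (orientations are pi-periodic).
    c = np.sum(power * np.cos(2.0 * theta))
    s = np.sum(power * np.sin(2.0 * theta))
    principal = 0.5 * np.arctan2(s, c) + np.pi / 2.0  # rotate back to the fibre axis
    dispersion = 1.0 - np.hypot(c, s) / np.sum(power)  # 0 = aligned, 1 = isotropic
    return principal, dispersion

Applied to a second-harmonic-generation image of the superficial zone, the principal angle would approximate the dominant fibre (split-line) direction and the dispersion index would quantify how widely the fibre fabric is spread around it.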
---
paper_title: On the anisotropy and inhomogeneity of permeability in articular cartilage
paper_content:
Articular cartilage is known to be anisotropic and inhomogeneous because of its microstructure. In particular, its elastic properties are influenced by the arrangement of the collagen fibres, which are orthogonal to the bone-cartilage interface in the deep zone, randomly oriented in the middle zone, and parallel to the surface in the superficial zone. In past studies, cartilage permeability has been related directly to the orientation of the glycosaminoglycan chains attached to the proteoglycans which constitute the tissue matrix. These studies predicted permeability to be isotropic in the undeformed configuration, and anisotropic under compression. They neglected tissue anisotropy caused by the collagen network. However, magnetic resonance studies suggest that fluid flow is “directed” by collagen fibres in biological tissues. Therefore, the aim of this study was to express the permeability of cartilage accounting for the microstructural anisotropy and inhomogeneity caused by the collagen fibres. Permeability is predicted to be anisotropic and inhomogeneous, independent of the state of strain, which is consistent with the morphology of the tissue. Looking at the local anisotropy of permeability, we may infer that the arrangement of the collagen fibre network plays an important role in directing fluid flow to optimise tissue functioning.
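A transversely isotropic permeability of the kind proposed here can be written generically as (illustrative notation, not necessarily the authors' exact expression)

\[
\mathbf{k} = k_{\parallel}\,\mathbf{m}\otimes\mathbf{m} + k_{\perp}\left(\mathbf{I} - \mathbf{m}\otimes\mathbf{m}\right),
\]

where m is the local unit vector along the collagen fibres, and k_∥ and k_⊥ are the permeabilities along and transverse to them; because m rotates from tangential at the surface to radial in the deep zone, the permeability tensor is inhomogeneous through the depth even if k_∥ and k_⊥ are constant.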
---
paper_title: A human knee joint model considering fluid pressure and fiber orientation in cartilages and menisci
paper_content:
Articular cartilages and menisci are generally considered to be elastic in the published human knee models, and thus the fluid-flow dependent response of the knee has not been explored using finite element analysis. In the present study, the fluid pressure and site-specific collagen fiber orientation in the cartilages and menisci were implemented into a finite element model of the knee using fibril-reinforced modeling previously proposed for articular cartilage. The geometry of the knee was obtained from magnetic resonance imaging of a healthy young male. The bones were considered to be elastic due to their greater stiffness compared to that of the cartilages and menisci. The displacements obtained for fast ramp compression were essentially the same as those for instantaneous compression of equal magnitude with the fluid being trapped in the tissues, which was expected. However, a clearly different pattern of displacements was predicted by an elastic model using a greater Young's modulus and a Poisson's ratio for nearly incompressible material. The results indicated the influence of fluid pressure and fiber orientation on the deformation of articular cartilage in the knee. The fluid pressurization in the femoral cartilage was somehow affected by the site-specific fiber directions. The peak fluid pressure in the femoral condyles was reduced by three quarters when no fibril reinforcement was assumed. The present study indicates the necessity of implementing the fluid pressure and anisotropic fibril reinforcement in articular cartilage for a more accurate understanding of the mechanics of the knee.
---
paper_title: Characterization of articular cartilage by combining microscopic analysis with a fibril-reinforced finite-element model.
paper_content:
Load-bearing characteristics of articular cartilage are impaired during tissue degeneration. Quantitative microscopy enables in vitro investigation of cartilage structure but determination of tissue functional properties necessitates experimental mechanical testing. The fibril-reinforced poroviscoelastic (FRPVE) model has been used successfully for estimation of cartilage mechanical properties. The model includes realistic collagen network architecture, as shown by microscopic imaging techniques. The aim of the present study was to investigate the relationships between the cartilage proteoglycan (PG) and collagen content as assessed by quantitative microscopic findings, and model-based mechanical parameters of the tissue. Site-specific variation of the collagen network moduli, PG matrix modulus and permeability was analyzed. Cylindrical cartilage samples (n=22) were harvested from various sites of the bovine knee and shoulder joints. Collagen orientation, as quantitated by polarized light microscopy, was incorporated into the finite-element model. Stepwise stress-relaxation experiments in unconfined compression were conducted for the samples, and sample-specific models were fitted to the experimental data in order to determine values of the model parameters. For comparison, Fourier transform infrared imaging and digital densitometry were used for the determination of collagen and PG content in the same samples, respectively. The initial and strain-dependent fibril network moduli as well as the initial permeability correlated significantly with the tissue collagen content. The equilibrium Young's modulus of the nonfibrillar matrix and the strain dependency of permeability were significantly associated with the tissue PG content. The present study demonstrates that modern quantitative microscopic methods in combination with the FRPVE model are feasible methods to characterize the structure-function relationships of articular cartilage.
---
paper_title: Deep vertical collagen fibrils play a significant role in mechanics of articular cartilage.
paper_content:
The primary orientation of collagen fibrils varies with cartilage depth, being horizontal in the superficial zone, random in the transitional zone, and vertical in the deep zone. Commonly used confined and unconfined (with no underlying bone) testing configurations cannot capture the mechanical role of the deep vertical fibril network. To determine this role in cartilage mechanics, an axisymmetric nonlinear fibril-reinforced poroelastic model of the tibial cartilage plateaus was developed, accounting for depth-dependent properties and distinct fibril networks with physical material properties. Both creep and relaxation indentation models were analyzed; their results were equivalent during the transient period but diverged in post-transient periods. Vertical fibrils played a significant role during the transient period in dramatically increasing the stiffness of the tissue and in protecting the solid matrix against large distortions and strains at the subchondral junction. This role, however, disappeared both with time and at loading rates slower than those expected in physiological activities such as walking. The vertical fibrils demonstrated a chevron-type deformation pattern that was further accentuated with time in creep loading. Damage to the deep vertical collagen fibril network or to its firm anchorage to the bone, associated with bone bruises, for example, would weaken the transient stiffness and place the tissue at higher risk of failure, particularly in the deep zone.
---
paper_title: Strain-rate Dependent Stiffness of Articular Cartilage in Unconfined Compression
paper_content:
The stiffness of articular cartilage is a nonlinear function of the strain amplitude and strain rate as well as the loading history, as a consequence of the flow of interstitial water and the stiffening of the collagen fibril network. This paper presents a full investigation of the interplay between the fluid kinetics and fibril stiffening of unconfined cartilage disks by analyzing over 200 cases with diverse material properties. The lower and upper elastic limits of the stress (under a given strain) are uniquely established by the instantaneous and equilibrium stiffness (obtained numerically for finite deformations and analytically for small deformations). These limits could be used to determine safe loading protocols in order that the stress in each solid constituent remains within its own elastic limit. For a given compressive strain applied at a low rate, the loading is close to the lower limit and is mostly borne directly by the solid constituents (with little contribution from the fluid). In the case of faster compression, in contrast, the extra loading is predominantly transferred to the fibrillar matrix via rising fluid pressure, with little increase of stress in the nonfibrillar matrix. The fibrillar matrix absorbs the loading increment by self-stiffening: the quicker the loading, the faster the fibril stiffening, until the upper elastic loading limit is reached. This self-protective mechanism protects cartilage from damage since the fibrils are strong in tension. The present work demonstrates the ability of fibril-reinforced poroelastic models to describe the strain-rate-dependent behavior of articular cartilage in unconfined compression using a mechanism of fibril stiffening mainly induced by the fluid flow.
---
paper_title: A Transversely Isotropic Biphasic Model for Unconfined Compression of Growth Plate and Chondroepiphysis
paper_content:
Using the biphasic theory for hydrated soft tissues (Mow et al., 1980) and a transversely isotropic elastic model for the solid matrix, an analytical solution is presented for the unconfined compression of cylindrical disks of growth plate tissues compressed between two rigid platens with a frictionless interface. The axisymmetric case where the plane of transverse isotropy is perpendicular to the cylindrical axis is studied, and the stress-relaxation response to imposed step and ramp displacements is solved. This solution is then used to analyze experimental data from unconfined compression stress-relaxation tests performed on specimens from bovine distal ulnar growth plate and chondroepiphysis to determine the biphasic material parameters. The transversely isotropic biphasic model provides an excellent agreement between theory and experimental results, better than was previously achieved with an isotropic model, and can explain the observed experimental behavior in unconfined compression of these tissues.
---
paper_title: A composition-based cartilage model for the assessment of compositional changes during cartilage damage and adaptation
paper_content:
Objective: The composition of articular cartilage changes with progression of osteoarthritis. Since compositional changes are associated with changes in the mechanical properties of the tissue, they are relevant for understanding how mechanical loading induces progression. The objective of this study is to present a computational model of articular cartilage which enables the study of the interaction between composition and mechanics. Methods: Our previously developed fibril-reinforced poroviscoelastic swelling model for articular cartilage was combined with our tissue composition-based model. In the combined model both the depth- and strain-dependencies of the permeability are governed by tissue composition. All local mechanical properties in the combined model are directly related to the local composition of the tissue, i.e., to the local amounts of proteoglycans and collagens and to tissue anisotropy. Results: Solely based on the composition of the cartilage, we were able to predict the equilibrium and transient response of articular cartilage during confined compression, unconfined compression, indentation and two different 1D-swelling tests, simultaneously. Conclusion: Since both the static and the time-dependent mechanical properties have now become fully dependent on tissue composition, the model allows assessing the mechanical consequences of compositional changes seen during osteoarthritis without further assumptions. This is a major step forward in quantitative evaluations of osteoarthritis progression.
---
paper_title: Deformation of Chondrocytes in Articular Cartilage under Compressive Load: A Morphological Study
paper_content:
The main function of articular cartilage is to transmit load. The objective of this study was to describe the deformation of chondrocytes under static loading and its relation to collagen matrix deformation. Whole intact rabbit knee joints were loaded statically with either high or low magnitude and long or short duration. Specimens were cryopreserved while under load and prepared for morphological evaluation by field emission scanning electron microscopy. With this method an immediate preservation of the chondrocyte in its loaded state was possible. Static compression of articular cartilage produced a zone-specific deformation of chondrocyte shape, depending on the magnitude and duration of load. Under high-force and long-duration loading, the chondrocytes showed considerable deformation concomitant with the highly deformed collagen fibres. Chondrocyte deformation occurred mostly in the transitional and upper radial zones and less in the lower layers. There was no significant change of the chondrocyte shape in the tangential zone under high- or low-force short-duration loading. These results show that the chondrocytes undergo significant changes in shape ex vivo and that they are sensitive to differences in the magnitude and duration of loads being applied. Chondrocyte deformation is strongly linked to the deformation of the surrounding cartilage collagen matrix.
---
paper_title: A Conewise Linear Elasticity Mixture Model for the Analysis of Tension-Compression Nonlinearity in Articular Cartilage
paper_content:
A biphasic mixture model is developed which can account for the observed tension-compression nonlinearity of cartilage by employing the continuum-based Conewise Linear Elasticity (CLE) model of Curnier et al. (J Elasticity 37:1–38, 1995) to describe the solid phase of the mixture. In this first investigation, the orthotropic octantwise linear elasticity model was reduced to the more specialized case of cubic symmetry, to reduce the number of elastic constants from twelve to four. Confined and unconfined compression stress-relaxation, and torsional shear testing were performed on each of nine bovine humeral head articular cartilage cylindrical plugs from 6 month old calves. Using the CLE model with cubic symmetry, the aggregate modulus in compression and axial permeability were obtained from confined compression (H_A^- = 0.64 ± 0.22 MPa, k_z = 3.62 ± 0.97 × 10^-16 m^4/(N·s), r^2 = 0.95 ± 0.03), the tensile modulus, compressive Poisson ratio and radial permeability were obtained from unconfined compression (E_Y^+ = 12.75 ± 1.56 MPa, ν^- = 0.03 ± 0.01, k_r = 6.06 ± 2.10 × 10^-16 m^4/(N·s), r^2 = 0.99 ± 0.00), and the shear modulus was obtained from torsional shear (μ = 0.17 ± 0.06 MPa). The model was also employed to successfully predict the interstitial fluid pressure at the center of the cartilage plug in unconfined compression (r^2 = 0.98 ± 0.01). The results of this study demonstrate that the integration of the CLE model with the biphasic mixture theory can provide a model of cartilage which can successfully curvefit three distinct testing configurations while producing material parameters consistent with previous reports in the literature.
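As a quick illustration of what such parameters imply for experimental time scales, the sketch below computes the characteristic consolidation time of a confined-compression test, tau ~ h^2/(H_A·k), from the mean aggregate modulus and axial permeability quoted above; the sample thickness h is an assumed value for illustration only, not taken from the cited study.
```python
# Sketch: characteristic consolidation time tau ~ h^2 / (H_A * k) for confined
# compression. H_A and k_z are the mean values quoted above; the thickness h is
# an assumed, illustrative value.
H_A = 0.64e6        # aggregate modulus in compression [Pa]
k_z = 3.62e-16      # axial permeability [m^4/(N*s)]
h = 1.0e-3          # assumed sample thickness [m]

tau = h**2 / (H_A * k_z)   # characteristic stress-relaxation time [s]
print(f"characteristic relaxation time ~ {tau:.0f} s ({tau/60:.0f} min)")
```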
---
paper_title: A viscoelastic model for fiber-reinforced composites at finite strains: Continuum basis, computational aspects and applications
paper_content:
This paper presents a viscoelastic model for the fully three-dimensional stress and deformation response of fiber-reinforced composites that experience finite strains. The composites are thought to be (soft) matrix materials which are reinforced by two families of fibers so that the mechanical properties of the composites depend on two fiber directions. The relaxation and/or creep response of each compound of the composite is modeled separately and the global response is obtained by an assembly of all contributions. We develop novel closed-form expressions for the fourth-order elasticity tensor (tangent moduli) in full generality. Constitutive models for orthotropic, transversely isotropic and isotropic hyperelastic materials at finite strains with or without dissipation are included as special cases. In order to clearly show the good performance of the constitutive model, we present 3D and 2D numerical simulations of a pressurized laminated circular tube which shows an interesting 'stretch inversion phenomenon' in the low pressure domain. Numerical results are in good qualitative agreement with experimental data and approximate the observed strongly anisotropic physical response with satisfying accuracy. A third numerical example is designed to illustrate the anisotropic stretching process of a fiber-reinforced rubber bar and the subsequent relaxation behavior at finite strains. The material parameters are chosen so that thermodynamic equilibrium is associated with the known homogeneous deformation state.
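To make the role of the fiber directions concrete, the sketch below evaluates a generic transversely isotropic hyperelastic energy: a neo-Hookean matrix term plus an exponential, tension-only fiber term in the fourth invariant I4 = a0·C·a0. The functional form and the constants mu, k1, k2 are illustrative assumptions, not the constitutive law or parameters of the cited paper.
```python
# Sketch: generic fiber-reinforced hyperelastic energy (illustrative only).
import numpy as np

def strain_energy(F, a0, mu=0.5e6, k1=1.0e6, k2=10.0):
    """F: deformation gradient (3x3); a0: unit fiber direction in the reference
    configuration. mu, k1, k2 [Pa, Pa, -] are assumed illustrative constants."""
    C = F.T @ F                       # right Cauchy-Green tensor
    I1 = np.trace(C)
    I4 = a0 @ C @ a0                  # squared stretch along the fiber
    W_matrix = 0.5 * mu * (I1 - 3.0)  # isotropic neo-Hookean matrix term
    W_fiber = 0.0
    if I4 > 1.0:                      # fibers assumed to contribute only in tension
        W_fiber = (k1 / (2.0 * k2)) * (np.exp(k2 * (I4 - 1.0) ** 2) - 1.0)
    return W_matrix + W_fiber

# 5% stretch along the fiber direction (isochoric deformation for illustration)
lam = 1.05
F = np.diag([lam, 1.0 / np.sqrt(lam), 1.0 / np.sqrt(lam)])
print(strain_energy(F, np.array([1.0, 0.0, 0.0])))
```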
---
paper_title: The collagenous architecture of articular cartilage--a synthesis of ultrastructure and mechanical function.
paper_content:
The fibrillar ultrastructure within the general matrix of articular cartilage has been examined in the stressed root region of predetermined notches propagating under tension in directions both perpendicular and parallel to the articular surface. From the different ultrastructural responses induced by the 2 notch geometries, it has been possible to further clarify the relationship between structure and load bearing function in normal articular cartilage, and identify features of the collagenous architecture that seem directly related to a loss of load bearing function associated with both osteoarthritis and nonprogressive degeneration.
---
paper_title: Mechanical characterization of articular cartilage by combining magnetic resonance imaging and finite-element analysis—a potential functional imaging technique
paper_content:
Magnetic resonance imaging (MRI) provides a method for non-invasive characterization of cartilage composition and structure. We aimed to see whether T1 and T2 relaxation times are related to proteoglycan (PG) and collagen-specific mechanical properties of articular cartilage. Specifically, we analyzed whether variations in the depthwise collagen orientation, as assessed by the laminae obtained from T2 profiles, affect the mechanical characteristics of cartilage. After MRI and unconfined compression tests of human and bovine patellar cartilage samples, fibril-reinforced poroviscoelastic finite-element models (FEM), with depthwise collagen orientations implemented from quantitative T2 maps (3 laminae for human, 3–7 laminae for bovine), were constructed to analyze the non-fibrillar matrix modulus (PG specific), fibril modulus (collagen specific) and permeability of the samples. In bovine cartilage, the non-fibrillar matrix modulus (R = −0.64, p < 0.05) as well as the initial permeability (R = 0.70, p < 0.05) correlated with T1. In bovine cartilage, T2 correlated positively with the initial fibril modulus (R = 0.62, p = 0.05). In human cartilage, the initial fibril modulus correlated negatively (R = −0.61, p < 0.05) with T2. Based on the simulations, cartilage with a complex collagen architecture (5 or 7 laminae), leading to high bulk T2 due to magic angle effects, provided higher compressive stiffness than tissue with a simple collagen architecture (3 laminae). Our results suggest that T1 reflects PG-specific mechanical properties of cartilage. High T2 is characteristic of soft cartilage with a classical collagen architecture. In contrast, high bulk T2 can also be found in stiff cartilage with a multilaminar collagen fibril network. By combining MRI and FEM, the present study establishes a step toward functional imaging of articular cartilage.
---
paper_title: A VISCOELASTIC MODEL FOR COLLAGEN FIBRES
paper_content:
The viscoelastic behaviour of collagen fibres of different lengths was studied by developing simulation models. These models were found to explain the creep and stress-relaxation behaviour of the collagen fibres.
---
paper_title: A Phenomenological Approach Toward Patient-Specific Computational Modeling of Articular Cartilage Including Collagen Fiber Tracking
paper_content:
To model the cartilage morphology and the material response, a phenomenological and patient-specific simulation approach incorporating the collagen fiber fabric is proposed. Cartilage tissue respon ...
---
paper_title: Decreased birefringence of the superficial zone collagen network in the canine knee (stifle) articular cartilage after long distance running training, detected by quantitative polarised light microscopy.
paper_content:
OBJECTIVE: To investigate the effects of a one year programme of running training (up to 40 km/day for 15 weeks) on the spatial orientation pattern of collagen and glycosaminoglycans in articular cartilage in different parts of the knee (stifle) and shoulder joints of young beagle dogs. METHODS: Area specific measurements of the optical path difference (= retardation, gamma) and the cartilage zone thickness were performed using conventional procedures and a new computer based quantitative polarised light microscopy method. Transmission electron microscopy was used to determine the zonal volume density of collagen fibrils. The concentrations of collagen and hydroxypyridinium crosslinks were investigated biochemically. RESULTS: Running training decreased gamma by 24-34% (p < 0.05) in the superficial zone of the lateral femoral condyle articular cartilage and at the centre of the tibial condyles. Gamma of glycosaminoglycans decreased by 26% (p < 0.05) in the superficial zone of the lateral condyle of the femur, but at the same site the volume density of collagen fibrils was unchanged. Neither the collagen concentration nor the concentration of hydroxypyridinium crosslinks was altered as a result of running. In both control and runner dogs, the thickness and gamma values of the superficial zone were greater in the humerus and the femur than in the tibia. CONCLUSION: Endurance type running exercise in beagles caused a reduction in the superficial zone birefringence of the articular cartilage, which indicates either a disorganisation or a reorientation of the superficial zone collagen network. Articular cartilage showed marked variability of collagen network organisation over the different knee (stifle) joint articular surfaces.
---
paper_title: Effect of superficial collagen patterns and fibrillation of femoral articular cartilage on knee joint mechanics-a 3D finite element analysis.
paper_content:
Collagen fibrils of articular cartilage have specific depth-dependent orientations and the fibrils bend in the cartilage surface to exhibit split-lines. Fibrillation of superficial collagen takes place in osteoarthritis. We aimed to investigate the effect of superficial collagen fibril patterns and collagen fibrillation of cartilage on stresses and strains within a knee joint. A 3D finite element model of a knee joint with cartilage and menisci was constructed based on magnetic resonance imaging. The fibril-reinforced poroviscoelastic material properties with depth-dependent collagen orientations and split-line patterns were included in the model. The effects of joint loading on stresses and strains in cartilage with various split-line patterns and medial collagen fibrillation were simulated under axial impact loading of 1000 N. In the model, the collagen fibrils resisted strains along the split-line directions. This increased also stresses along the split-lines. On the contrary, contact and pore pressures were not affected by split-line patterns. Simulated medial osteoarthritis increased tissue strains in both medial and lateral femoral condyles, and contact and pore pressures in the lateral femoral condyle. This study highlights the importance of the collagen fibril organization, especially that indicated by split-line patterns, for the weight-bearing properties of articular cartilage. Osteoarthritic changes of cartilage in the medial femoral condyle created a possible failure point in the lateral femoral condyle. This study provides further evidence on the importance of the collagen fibril organization for the optimal function of articular cartilage.
---
paper_title: Analysis of articular cartilage as a composite using nonlinear membrane elements for collagen fibrils.
paper_content:
To develop a composite fibre-reinforced model of the cartilage, membrane shell elements were introduced to represent collagen fibrils reinforcing the isotropic porous solid matrix filled with fluid. Nonlinear stress-strain curve of pure collagen fibres and collagen volume fraction were explicitly presented in the formulation of these membrane elements. In this composite model, in accordance with tissue structure, the matrix and fibril membrane network experienced dissimilar stresses despite identical strains in the fibre directions. Different unconfined compression and indentation case studies were performed to determine the distinct role of membrane collagen fibrils in nonlinear poroelastic mechanics of articular cartilage. The importance of nonlinear fibril membrane elements in the tissue relaxation response as well as in temporal and spatial variations of pore pressure and solid matrix stresses was demonstrated. By individual adjustments of the collagen volume fraction and collagen mechanical properties, the model allows for the simulation of alterations in the fibril network structure of the tissue towards modelling damage processes or repair attempts. The current model, which is based on a physiological description of the tissue structure, is promising in improvement of our understanding of the cartilage pathomechanics.
---
paper_title: Nonlinear analysis of cartilage in unconfined ramp compression using a fibril reinforced poroelastic model.
paper_content:
OBJECTIVE: To develop a biomechanical model for cartilage which is capable of capturing experimentally observed nonlinear behaviours of cartilage and to investigate effects of collagen fibril reinforcement in cartilage. DESIGN: A sequence of 10 or 20 steps of ramp compression/relaxation applied to cartilage disks in uniaxial unconfined geometry is simulated for comparison with experimental data. BACKGROUND: Mechanical behaviours of cartilage, such as the compression-offset dependent stiffening of the transient response and the strong relaxation component, have been previously difficult to describe using the biphasic model in unconfined compression. METHODS: Cartilage is modelled as a fluid-saturated solid reinforced by an elastic fibrillar network. The latter, mainly representing collagen fibrils, is considered as a distinct constituent embedded in a biphasic component made up mainly of proteoglycan macromolecules and a fluid carrying mobile ions. The Young's modulus of the fibrillar network is taken to vary linearly with its tensile strain but to be zero for compression. Numerical computations are carried out using a finite element procedure, for which the fibrillar network is discretized into a system of spring elements. RESULTS: The nonlinear fibril reinforced poroelastic model is capable of describing the strong relaxation behaviour and compression-offset dependent stiffening of cartilage in unconfined compression. Computational results are also presented to demonstrate unique features of the model, e.g. the matrix stress in the radial direction is changed from tensile to compressive due to presence of distinct fibrils in the model. RELEVANCE: Experimentally observed nonlinear behaviours of cartilage are successfully simulated, and the roles of collagen fibrils are distinguished by using the proposed model. Thus this study may lead to a better understanding of physiological responses of individual constituents of cartilage to external loads, and of the roles of mechanical loading in cartilage remodelling and pathology.
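The fibril law described above can be written as E_f(eps) = E_0 + E_eps·eps for tensile strain and E_f = 0 in compression. A minimal sketch of this tension-only spring behavior is given below; the constants E_0 and E_eps are placeholders, not values fitted in the cited work.
```python
# Sketch: tension-only, linearly strain-dependent fibril modulus.
# E0 and E_eps are placeholder values, not fitted constants.

def fibril_stress(eps, E0=3.0e6, E_eps=1.0e9):
    """Fibril stress for strain eps; fibrils carry no compressive load."""
    if eps <= 0.0:
        return 0.0
    return (E0 + E_eps * eps) * eps   # stress from the strain-dependent modulus

for eps in (-0.02, 0.0, 0.01, 0.02, 0.05):
    print(f"strain {eps:+.2f} -> fibril stress {fibril_stress(eps)/1e6:.3f} MPa")
```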
---
paper_title: A nonlinear biphasic fiber-reinforced porohyperviscoelastic model of articular cartilage incorporating fiber reorientation and dispersion.
paper_content:
A nonlinear biphasic fiber-reinforced porohyperviscoelastic (BFPHVE) model of articular cartilage incorporating fiber reorientation effects during applied load was used to predict the response of ovine articular cartilage at relatively high strains (20%). The constitutive material parameters were determined using a coupled finite element-optimization algorithm that utilized stress relaxation indentation tests at relatively high strains. The proposed model incorporates the strain-hardening, tension-compression, permeability, and finite deformation nonlinearities that inherently exist in cartilage, and accounts for effects associated with fiber dispersion and reorientation and intrinsic viscoelasticity at relatively high strains. A new optimization cost function was used to overcome problems associated with large peak-to-peak differences between the predicted finite element and experimental loads that were due to the large strain levels utilized in the experiments. The optimized material parameters were found to be insensitive to the initial guesses. Using experimental data from the literature, the model was also able to predict both the lateral displacement and reaction force in unconfined compression, and the reaction force in an indentation test with a single set of material parameters. Finally, it was demonstrated that neglecting the effects of fiber reorientation and dispersion resulted in poorer agreement with experiments than when they were considered. There was an indication that the proposed BFPHVE model, which includes the intrinsic viscoelasticity of the nonfibrillar matrix (proteoglycan), might be used to model the behavior of cartilage up to relatively high strains (20%). The maximum percentage error between the indentation force predicted by the FE model using the optimized material parameters and that measured experimentally was 3%.
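Coupled finite element-optimization fits of this kind typically minimize a scalar cost between measured and simulated reaction loads. The sketch below shows one hedged possibility, a point-wise normalized least-squares cost that de-emphasizes large peak-to-peak differences; the forward model is a simple stand-in function, not the BFPHVE finite element model, and all parameter values are invented for illustration.
```python
# Sketch: parameter identification by minimizing a normalized least-squares
# cost between "measured" and simulated load histories. The forward model is
# a stand-in (decaying exponential), not a finite element simulation.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 100.0, 101)
measured = 8.0 * np.exp(-t / 25.0) + 2.0      # synthetic "experimental" load [N]

def forward_model(params, t):
    peak, tau, offset = params
    return peak * np.exp(-t / tau) + offset

def cost(params):
    predicted = forward_model(params, t)
    # normalizing by the measurement reduces the dominance of large peak values
    return np.sum(((predicted - measured) / measured) ** 2)

result = minimize(cost, x0=[5.0, 10.0, 1.0], method="Nelder-Mead")
print(result.x)   # recovered parameters, ideally close to (8, 25, 2)
```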
---
paper_title: Effects of static axial strain on the tensile properties and failure mechanisms of self‐assembled collagen fibers
paper_content:
Collagen fibers form the structural units of connective tissue throughout the body, transmitting force, maintaining shape, and providing a scaffold for cells. Our laboratory has studied collagen self-assembly since the 1970s. In this study, collagen fibers were self-assembled from molecular collagen solutions and then stretched to enhance alignment. Fibers were tested in uniaxial tension to study the mechanical properties and failure mechanisms. Results reported suggest that axial orientation of collagen fibrils can be achieved by stretching uncrosslinked collagen fibers. Stretching by about 30% not only results in decreased diameter and increased tensile strength but also leads to unusual failure mechanisms that inhibit crack propagation across the fiber. It is proposed that stretching serves to generate oriented fibrillar substructure in self-assembled collagen fibers. © 1997 John Wiley & Sons, Inc. J Appl Polym Sci 63: 1429–1440, 1997
---
paper_title: Failure criterion of collagen fiber: Viscoelastic behavior simulated by using load control data
paper_content:
A nonlinear Zener model is developed to model the viscoelastic behavior of collagen fibers, a building block of the biological soft tissues in the skeletal system. The effects of the strain rate dependency, the loading history, rest, and recovery on the stress-strain relationship of collagen fibers were investigated using the Zener model. The following loading conditions were simulated: (1) the stress relaxation after cyclic loading, (2) the constant strain rate loading before and after cyclic loading (stabilization) and post recovery, and (3) the constant strain rate loading over a wide range of loading rates. In addition, we explored the critical values of stress and strain using different failure criteria at different strain rates. Four major findings were derived from these simulations. First of all, the stress relaxation is larger with a smaller number of preloading cycles. Second, the strain rate sensitivity diminishes after stabilization and recovery from resting. Third, the stress-strain curve depends on the strain rate except for extreme loading conditions (very fast or slow rates of loading). Finally, the strain energy density (SED) criterion may be a more practical failure criterion than the ultimate stress or strain criterion for collagen fibers. These results provide a basis for interpreting the viscoelastic and failure behaviors of complex structures such as spinal functional units at lower computational cost than full finite element modeling of the whole structure would require.
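For orientation, the linear limit of a Zener (standard linear solid) element already reproduces the qualitative strain-rate dependence discussed above. The sketch below integrates the Maxwell-arm stress for constant strain-rate loading at several rates; the moduli and viscosity are assumed placeholder values, and the nonlinear, strain-dependent version used in the cited paper is not reproduced here.
```python
# Sketch: linear standard-linear-solid (Zener) element under constant
# strain-rate loading, showing rate-dependent stress at a fixed strain.
# E1, E2, eta are placeholder values for illustration only.

E1, E2, eta = 50.0e6, 200.0e6, 2.0e8     # [Pa, Pa, Pa*s]

def stress_at(strain_target, strain_rate, dt=1e-3):
    """Explicit Euler integration of the Maxwell-arm stress:
    d(sigma_M)/dt = E2*(d eps/dt) - (E2/eta)*sigma_M."""
    eps, sigma_M = 0.0, 0.0
    while eps < strain_target:
        sigma_M += dt * (E2 * strain_rate - (E2 / eta) * sigma_M)
        eps += strain_rate * dt
    return E1 * eps + sigma_M             # parallel spring + Maxwell arm

for rate in (0.001, 0.01, 0.1, 1.0):      # strain rates [1/s]
    print(f"rate {rate:>5} 1/s -> stress at 5% strain: "
          f"{stress_at(0.05, rate)/1e6:.2f} MPa")
```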
---
paper_title: Stresses in the local collagen network of articular cartilage: a poroviscoelastic fibril-reinforced finite element study.
paper_content:
Osteoarthritis (OA) is a multifactorial disease, resulting in diarthrodial joint wear and eventual destruction. Swelling of cartilage, which is proportional to the amount of collagen damage, is an initial event of cartilage degeneration, so damage to the collagen fibril network is likely to be one of the earliest signs of OA cartilage degeneration. We propose that the local stresses and strains in the collagen fibrils, which cause the damage, cannot be determined dependably without taking the local arcade-like collagen-fibril structure into account. We investigate this using a poroviscoelastic fibril-reinforced FEA model. The constitutive fibril properties were determined by fitting numerical data to experimental results of unconfined compression and indentation tests on samples of bovine patellar articular cartilage. It was demonstrated that with this model the stresses and strains in the collagen fibrils can be calculated. It was also shown that fibrils with different orientations at the same location can be loaded differently, depending on the local architecture of the collagen network. To the best of our knowledge, the present model is the first that can account for these features. We conclude that the local stresses and strains in the articular cartilage are highly influenced by the local morphology of the collagen-fibril network.
---
paper_title: Characterization of articular cartilage by combining microscopic analysis with a fibril-reinforced finite-element model.
paper_content:
Load-bearing characteristics of articular cartilage are impaired during tissue degeneration. Quantitative microscopy enables in vitro investigation of cartilage structure but determination of tissue functional properties necessitates experimental mechanical testing. The fibril-reinforced poroviscoelastic (FRPVE) model has been used successfully for estimation of cartilage mechanical properties. The model includes realistic collagen network architecture, as shown by microscopic imaging techniques. The aim of the present study was to investigate the relationships between the cartilage proteoglycan (PG) and collagen content as assessed by quantitative microscopic findings, and model-based mechanical parameters of the tissue. Site-specific variation of the collagen network moduli, PG matrix modulus and permeability was analyzed. Cylindrical cartilage samples (n=22) were harvested from various sites of the bovine knee and shoulder joints. Collagen orientation, as quantitated by polarized light microscopy, was incorporated into the finite-element model. Stepwise stress-relaxation experiments in unconfined compression were conducted for the samples, and sample-specific models were fitted to the experimental data in order to determine values of the model parameters. For comparison, Fourier transform infrared imaging and digital densitometry were used for the determination of collagen and PG content in the same samples, respectively. The initial and strain-dependent fibril network moduli as well as the initial permeability correlated significantly with the tissue collagen content. The equilibrium Young's modulus of the nonfibrillar matrix and the strain dependency of permeability were significantly associated with the tissue PG content. The present study demonstrates that modern quantitative microscopic methods in combination with the FRPVE model are feasible methods to characterize the structure-function relationships of articular cartilage.
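Structure-function relationships of this kind are commonly summarized by linear correlation between the sample-specific fitted model parameters and the microscopy-derived composition. The sketch below illustrates the pattern with scipy.stats.pearsonr on synthetic arrays; the data are invented placeholders, not measurements from the cited study.
```python
# Sketch: correlating fitted model parameters with measured tissue composition.
# The arrays are synthetic placeholders, not data from the cited study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
collagen_content = rng.uniform(0.10, 0.25, size=22)                      # e.g. mass fraction
fibril_modulus = 40e6 * collagen_content + rng.normal(0, 1e6, size=22)   # [Pa]

r, p = pearsonr(collagen_content, fibril_modulus)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```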
---
paper_title: A fibril-reinforced poroviscoelastic swelling model for articular cartilage.
paper_content:
From a mechanical point of view, the most relevant components of articular cartilage are the tight and highly organized collagen network together with the charged proteoglycans. Due to the fixed charges of the proteoglycans, the cation concentration inside the tissue is higher than in the surrounding synovial fluid. This excess of ion particles leads to an osmotic pressure difference, which causes swelling of the tissue. The fibrillar collagen network resists straining and swelling pressures. This combination makes cartilage a unique, highly hydrated and pressurized tissue, reinforced with a strained collagen network. Many theories to explain articular cartilage behavior under loading, expressed in computational models that either include the swelling behavior or the properties of the anisotropic collagen structure, can be found in the literature. The most common tests used to determine the mechanical quality of articular cartilage are those of confined compression, unconfined compression, indentation and swelling. All theories currently available in the literature can explain the cartilage response occurring in some of the above tests, but none of them can explain the response in all of the tests. We hypothesized that a model including simultaneous mathematical descriptions of (1) the swelling properties due to the fixed-charge densities of the proteoglycans and (2) the anisotropic viscoelastic collagen structure can explain all these tests simultaneously. To study this hypothesis, we extended our fibril-reinforced poroviscoelastic finite element model with our biphasic swelling model. We have shown that the newly developed fibril-reinforced poroviscoelastic swelling (FPVES) model for articular cartilage can simultaneously account for the reaction force during swelling, confined compression, indentation and unconfined compression as well as the lateral deformation during unconfined compression. Using this theory it is possible to analyze the link between the collagen network and the swelling properties of articular cartilage.
---
paper_title: Contact Analysis of Biphasic Transversely Isotropic Cartilage Layers and Correlations With Tissue Failure
paper_content:
Failure of articular cartilage has been investigated experimentally and theoretically, but there is only partial agreement between observed failure and predicted regions of peak stresses. Since trauma and repetitive stress are implicated in the etiopathogenesis of osteoarthritis, it is important to develop cartilage models which correctly predict sites of high stresses. Cartilage is anisotropic and inhomogeneous, though it has been difficult to incorporate these complexities into engineering analyses. The objectives of this study are to demonstrate that a transversely isotropic, biphasic model of cartilage can provide agreement between predicted regions of high stresses and observed regions of cartilage failure and that with transverse isotropy cartilage stresses are more sensitive to convexity and concavity of the surfaces than with isotropy. These objectives are achieved by solving problems of diarthrodial joint contact by the finite-element method. Results demonstrate that transversely isotropic models predict peak stresses at the cartilage surface and the cartilage-bone interface, in agreement with sites of fissures following impact loading; isotropic models predict peak stresses only at the cartilage-bone interface. Also, when convex cartilage layers contacted concave layers in this study, the highest tensile stresses occur in the convex layer for transversely isotropic models; no such differences are found with isotropic models. The significance of this study is that it establishes a threshold of modeling complexity for articular cartilage that provides good agreement with experimental observations under impact loading and that surface curvatures significantly affect stress and strain within cartilage when using a biphasic transversely isotropic model.
---
paper_title: Ultrastructural evidence for fibril-to-fibril associations in articular cartilage and their functional implication.
paper_content:
This study presents ultrastructural evidence for the presence of a variety of fibril-to-fibril interactions or associations in the architecture of the general matrix of articular cartilage. These interactions are believed to serve a higher purpose of repeatedly constraining an overall radial arrangement of fibrils into an array of oblique interconnecting segments thus creating a three dimensional meshwork within which the hydrated ground substance is constrained. It is argued that any reduction in these interfibrillar interactions will allow the oblique fibril segments to revert to a low energy radial configuration, thus explaining the presence of such arrays prominent in various degenerate forms of articular cartilage.
---
paper_title: Fibril reinforced poroelastic model predicts specifically mechanical behavior of normal, proteoglycan depleted and collagen degraded articular cartilage
paper_content:
Degradation of the collagen network and of the proteoglycan (PG) macromolecules are signs of articular cartilage degeneration. These changes impair cartilage mechanical function. Effects of collagen degradation and PG depletion on the time-dependent mechanical behavior of cartilage are different. In this study, numerical analyses, which take the compression-tension nonlinearity of the tissue into account, were carried out using a fibril reinforced poroelastic finite element model. The study aimed at improving our understanding of the stress-relaxation behavior of normal and degenerated cartilage in unconfined compression. PG and collagen degradations were simulated by decreasing the Young's modulus of the drained porous (nonfibrillar) matrix and the fibril network, respectively. Numerical analyses were compared to results from experimental tests with chondroitinase ABC (PG depletion) or collagenase (collagen degradation) digested samples. The fibril reinforced poroelastic model predicted the experimental behavior of cartilage after chondroitinase ABC digestion by a major decrease of the drained porous matrix modulus (−64±28%) and a minor decrease of the fibril network modulus (−11±9%). After collagenase digestion, in contrast, the numerical analyses predicted the experimental behavior of cartilage by a major decrease of the fibril network modulus (−69±5%) and a decrease of the drained porous matrix modulus (−44±18%). The reduction of the drained porous matrix modulus after collagenase digestion was consistent with the microscopically observed secondary PG loss from the tissue. The present results indicate that the fibril reinforced poroelastic model is able to predict specifically characteristic alterations in the stress-relaxation behavior of cartilage after enzymatic modifications of the tissue. We conclude that the compression-tension nonlinearity of the tissue is needed to capture realistically the mechanical behavior of normal and degenerated articular cartilage.
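In the model, the two enzymatic treatments reduce to scaling two moduli. A minimal bookkeeping sketch using the mean percentage changes quoted above is shown below; the baseline moduli are assumed placeholder values, not fitted constants from the study.
```python
# Sketch: representing enzymatic degradation by scaling the nonfibrillar matrix
# and fibril network moduli with the mean percentage changes reported above.
# Baseline moduli are placeholders, not fitted values.

baseline = {"matrix_modulus": 0.30e6, "fibril_modulus": 10.0e6}   # [Pa], assumed

scenarios = {
    "chondroitinase ABC (PG depletion)":  {"matrix_modulus": -0.64, "fibril_modulus": -0.11},
    "collagenase (collagen degradation)": {"matrix_modulus": -0.44, "fibril_modulus": -0.69},
}

for name, changes in scenarios.items():
    degraded = {key: baseline[key] * (1.0 + change) for key, change in changes.items()}
    print(name, {k: f"{v/1e6:.2f} MPa" for k, v in degraded.items()})
```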
---
paper_title: Importance of the superficial tissue layer for the indentation stiffness of articular cartilage
paper_content:
Indentation testing is a widely used technique for nondestructive mechanical analysis of articular cartilage. Although cartilage shows an inhomogeneous, layered structure with anisotropic mechanical properties, most theoretical indentation models assume material homogeneity and isotropy. In the present study, quantitative polarized light microscopy (PLM) measurements from canine cartilage were utilized to characterize thickness and structure of the superficial, collageneous tissue layer as well as to reveal its relation to experimental indentation measurements. In addition to experimental analyses, a layered, transversely isotropic finite element (FE) model was developed and the effect of superficial (tangential) tissue layer with high elastic modulus in the direction parallel to articular surface on the indentation response was studied. The experimental indentation stiffness was positively correlated with the relative thickness of the superficial cartilage layer. Also the optical retardation, which reflects the degree of parallel organization of collagen fibrils as well as collagen content, was related to indentation stiffness. FE results indicated effective stiffening of articular cartilage under indentation due to high transverse modulus of the superficial layer. The present results suggest that indentation testing is an efficient technique for the characterization of the superficial degeneration of articular cartilage.
---
paper_title: A Triphasic Theory for the Swelling and Deformation Behaviors of Articular Cartilage
paper_content:
Swelling of articular cartilage depends on its fixed charge density and distribution, the stiffness of its collagen-proteoglycan matrix, and the ion concentrations in the interstitium. A theory for a tertiary mixture has been developed, including the two fluid-solid phases (biphasic), and an ion phase, representing cation and anion of a single salt, to describe the deformation and stress fields for cartilage under chemical and/or mechanical loads. This triphasic theory combines the physico-chemical theory for ionic and polyionic (proteoglycan) solutions with the biphasic theory for cartilage. The present model assumes the fixed charge groups to remain unchanged, and that the counter-ions are the cations of a single salt of the bathing solution. The momentum equations for the neutral salt and for the interstitial water are expressed in terms of their chemical potentials whose gradients are the driving forces for their movements. These chemical potentials depend on fluid pressure p, salt concentration c, solid matrix dilatation e and fixed charge density c^F. For a uni-uni valent salt such as NaCl, they are given by μ^i = μ_0^i + (RT/M_i) ln[γ_±^2 c (c + c^F)] and μ^w = μ_0^w + [p − RTφ(2c + c^F) + B_w e]/ρ_T^w, where R, T, M_i, γ_±, φ, ρ_T^w and B_w are the universal gas constant, absolute temperature, molecular weight, mean activity coefficient of the salt, osmotic coefficient, true density of water, and a coupling material coefficient, respectively. For infinitesimal strains and material isotropy, the stress-strain relationship for the total mixture stress is σ = −p I − T_c I + λ_s (tr E) I + 2μ_s E, where E is the strain tensor and (λ_s, μ_s) are the Lamé constants of the elastic solid matrix. The chemical-expansion stress (−T_c) derives from the charge-to-charge repulsive forces within the solid matrix. This theory can be applied to both equilibrium and non-equilibrium problems. For equilibrium free swelling problems, the theory yields the well known Donnan equilibrium ion distribution and osmotic pressure equations, along with an analytical expression for the “pre-stress” in the solid matrix. For the confined-compression swelling problem, it predicts that the applied compressive stress is shared by three load support mechanisms: 1) the Donnan osmotic pressure; 2) the chemical-expansion stress; and 3) the solid matrix elastic stress. Numerical calculations have been made, based on a set of equilibrium free-swelling and confined-compression data, to assess the relative contribution of each mechanism to load support. Our results show that all three mechanisms are important in determining the overall compressive stiffness of cartilage.
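At equilibrium the theory recovers the ideal Donnan relations, which can be evaluated directly. The sketch below computes the internal ion concentrations and the osmotic (swelling) pressure difference for a uni-univalent bath under the simplifying assumption of ideal activity and osmotic coefficients (γ_± = φ = 1); the fixed charge density and bath concentration are illustrative values, not data from the cited work.
```python
# Sketch: ideal Donnan equilibrium for a uni-univalent salt (activity and
# osmotic coefficients set to 1). cF and c_bath are illustrative values.
import math

R = 8.314          # gas constant [J/(mol*K)]
T = 310.0          # temperature [K]
cF = 0.2e3         # fixed charge density [mol/m^3]  (0.2 mol/L)
c_bath = 0.15e3    # bath NaCl concentration [mol/m^3]  (0.15 mol/L)

# electroneutrality: c_plus = c_minus + cF ; ideal Donnan: c_plus * c_minus = c_bath^2
c_minus = (-cF + math.sqrt(cF**2 + 4.0 * c_bath**2)) / 2.0
c_plus = c_minus + cF

delta_pi = R * T * (c_plus + c_minus - 2.0 * c_bath)   # osmotic pressure difference [Pa]
print(f"internal cations {c_plus/1e3:.3f} M, anions {c_minus/1e3:.3f} M, "
      f"swelling pressure {delta_pi/1e3:.1f} kPa")
```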
---
paper_title: A finite element formulation and program to study transient swelling and load-carriage in healthy and degenerate articular cartilage
paper_content:
The theory of poroelasticity is extended to include physico-chemical swelling and used to predict the transient responses of normal and degenerate articular cartilage to both chemical and mechanical loading, with emphasis on isolating the influence of the major parameters which govern its deformation. Using a new hybrid element, our mathematical relationships were implemented in a purpose-built poroelastic finite element analysis algorithm (u-pi-c program) which was used to resolve the nature of the coupling between the mechanical and chemical responses of cartilage when subjected to ionic transport across its membranous skeleton. Our results demonstrate that one of the roles of the strain-dependent matrix permeability is to limit the rate of transmission of stresses from the fluid to the collagen-proteoglycan solid skeleton in the incipient stages of loading, and that the major contribution of the swelling pressure is that of preventing any excessive deformation of the matrix.
---
paper_title: Structural and Compositional Changes in Peri- and Extracellular Matrix of Osteoarthritic Cartilage Modulate Chondrocyte Morphology
paper_content:
It has not been shown how specific changes in composition and structure in the peri- and extracellular matrix (PCM and ECM) of human articular cartilage modulate cell morphology in different stages of osteoarthritis (OA). In the present study, cell morphology in the superficial tissue of normal, early OA and advanced OA human cartilage samples was measured from histological sections. Collagen and proteoglycan contents in the vicinity of chondrocytes were analyzed using Fourier transform infrared spectroscopy and digital densitometry. Determinants of the experimentally observed morphological changes of cells were studied using finite element analysis (FEA). From normal tissue to early OA, cell aspect ratio (height/width) remained constant (0.69 ± 0.11 and 0.69 ± 0.09, respectively). In advanced OA, cells became significantly (p < 0.05) more rounded (aspect ratio of 0.83 ± 0.13). Normalized collagen content in the PCM, i.e. collagen content in the PCM with respect to that in the ECM, was reduced significantly (p < 0.05) only in advanced OA. FEA indicated that reduced proteoglycan content and increased collagen fibrillation in the PCM and ECM, as well as reduced collagen content only in the PCM, primarily explained experimentally found changes in cell aspect ratio. Our results suggest that changes in composition and structure of the PCM and ECM in the superficial tissue of human articular cartilage modulate cell morphology differently in early and advanced OA.
---
paper_title: A validation of the quadriphasic mixture theory for intervertebral disc tissue
paper_content:
The swelling and shrinking behaviour of soft biological tissues is described by a quadriphasic mixture model. In this model four phases are distinguished: a charged solid, a fluid, cations and anions. A description of the set of coupled differential equations of this quadriphasic mixture model is given. These equations are solved by the finite element method using a weighted residual approach. The resulting non-linear integral equations are linearized and solved by the Newton-Raphson iteration procedure. We performed some confined swelling and compression experiments on intervertebral disc tissue. These experiments are simulated by a one-dimensional finite element implementation of this quadriphasic mixture model. In contrast to a triphasic mixture model, physically realistic diffusion coefficients can be used to fit the experiments when the fixed charge density is relatively large, because in the quadriphasic mixture model electrical phenomena are not neglected.
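The solution strategy named above, linearizing the weighted-residual equations and iterating with Newton-Raphson, can be illustrated on a small coupled nonlinear system. The two residual equations below are arbitrary stand-ins for a coupled displacement-pressure problem, not the quadriphasic balance laws, and the Jacobian is formed numerically for brevity.
```python
# Sketch: Newton-Raphson iteration on a small coupled nonlinear system.
# The residuals are arbitrary stand-ins, not the quadriphasic balance laws.
import numpy as np

def residual(x):
    u, p = x                                   # stand-in "displacement" and "pressure" dofs
    return np.array([u + 0.1 * p**2 - 1.0,
                     p + 0.1 * u**2 - 1.0])

def numerical_jacobian(f, x, h=1e-7):
    J = np.zeros((len(x), len(x)))
    fx = f(x)
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h                             # forward-difference column j
        J[:, j] = (f(xp) - fx) / h
    return J

x = np.array([0.0, 0.0])
for iteration in range(25):
    r = residual(x)
    if np.linalg.norm(r) < 1e-10:              # converged
        break
    x -= np.linalg.solve(numerical_jacobian(residual, x), r)

print(iteration, x, residual(x))
```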
---
paper_title: Stress–relaxation of human patellar articular cartilage in unconfined compression: Prediction of mechanical response by tissue composition and structure
paper_content:
Mechanical properties of articular cartilage are controlled by tissue composition and structure. Cartilage function is sensitively altered during tissue degeneration, in osteoarthritis (OA). However, mechanical properties of the tissue cannot be determined non-invasively. In the present study, we evaluate the feasibility of predicting, without mechanical testing, the stress–relaxation response of human articular cartilage under unconfined compression. This is carried out by combining microscopic and biochemical analyses with composition-based mathematical modeling. Cartilage samples from five cadaver patellae were mechanically tested under unconfined compression. Depth-dependent collagen content and fibril orientation, as well as proteoglycan and water content, were derived by combining Fourier transform infrared imaging, biochemical analyses and polarized light microscopy. Finite element models were constructed for each sample in unconfined compression geometry. First, composition-based fibril-reinforced poroviscoelastic swelling models, including composition and structure obtained from microscopical and biochemical analyses, were fitted to experimental stress–relaxation responses of three samples. Subsequently, optimized values of model constants, as well as compositional and structural parameters, were implemented in the models of two additional samples to validate the optimization. Theoretical stress–relaxation curves agreed with the experimental tests (R = 0.95–0.99). Using the optimized values of mechanical parameters, as well as composition and structure of additional samples, we were able to predict their mechanical behavior in unconfined compression, without mechanical testing (R = 0.98). Our results suggest that specific information on tissue composition and structure might enable assessment of cartilage mechanics without mechanical testing.
---
paper_title: Importance of collagen orientation and depth-dependent fixed charge densities of cartilage on mechanical behavior of chondrocytes.
paper_content:
The collagen network and proteoglycan matrix of articular cartilage are thought to play an important role in controlling the stresses and strains in and around chondrocytes, in regulating the biosynthesis of the solid matrix, and consequently in maintaining the health of diarthrodial joints. Understanding the detailed effects of the mechanical environment of chondrocytes on cell behavior is therefore essential for the study of the development, adaptation, and degeneration of articular cartilage. Recent progress in macroscopic models has improved our understanding of depth-dependent properties of cartilage. However, none of the previous works considered the effect of realistic collagen orientation or depth-dependent negative charges in microscopic models of chondrocyte mechanics. The aim of this study was to investigate the effects of the collagen network and fixed charge densities of cartilage on the mechanical environment of the chondrocytes in a depth-dependent manner. We developed an anisotropic, inhomogeneous, microstructural fibril-reinforced finite element model of articular cartilage for application in unconfined compression. The model consisted of the extracellular matrix and chondrocytes located in the superficial, middle, and deep zones. Chondrocytes were surrounded by a pericellular matrix and were assumed spherical prior to tissue swelling and load application. Material properties of the chondrocytes, pericellular matrix, and extracellular matrix were obtained from the literature. The loading protocol included a free swelling step followed by a stress-relaxation step. Results from traditional isotropic and transversely isotropic biphasic models were used for comparison with predictions from the current model. In the superficial zone, cell shapes changed from rounded to elliptic after free swelling. The stresses and strains as well as fluid flow in cells were greatly affected by the modulus of the collagen network. The fixed charge density of the chondrocytes, pericellular matrix, and extracellular matrix primarily affected the aspect ratios (height/ width) and the solid matrix stresses of cells. The mechanical responses of the cells were strongly location and time dependent. The current model highlights that the collagen orientation and the depth-dependent negative fixed charge densities of articular cartilage have a great effect in modulating the mechanical environment in the vicinity of chondrocytes, and it provides an important improvement over earlier models in describing the possible pathways from loading of articular cartilage to the mechanical and biological responses of chondrocytes.
---
paper_title: Composition of the pericellular matrix modulates the deformation behaviour of chondrocytes in articular cartilage under static loading
paper_content:
The aim was to assess the role of the composition changes in the pericellular matrix (PCM) for the chondrocyte deformation. For that, a three-dimensional finite element model with depth-dependent collagen density, fluid fraction, fixed charge density and collagen architecture, including parallel planes representing the split-lines, was created to model the extracellular matrix (ECM). The PCM was constructed similarly as the ECM, but the collagen fibrils were oriented parallel to the chondrocyte surfaces. The chondrocytes were modelled as poroelastic with swelling properties. Deformation behaviour of the cells was studied under 15% static compression. Due to the depth-dependent structure and composition of cartilage, axial cell strains were highly depth-dependent. An increase in the collagen content and fluid fraction in the PCMs increased the lateral cell strains, while an increase in the fixed charge density induced an inverse behaviour. Axial cell strains were only slightly affected by the changes in PCM composition. We conclude that the PCM composition plays a significant role in the deformation behaviour of chondrocytes, possibly modulating cartilage development, adaptation and degeneration. The development of cartilage repair materials could benefit from this information.
---
paper_title: On the Thermodynamical Admissibility of the Triphasic Theory of Charged Hydrated Tissues
paper_content:
The triphasic theory on soft charged hydrated tissues (Lai, W. M., Hou, J. S., and Mow, V. C., 1991, "A Triphasic Theory for the Swelling and Deformation Behaviors of Articular Cartilage," ASME J. Biomech. Eng., 113, pp. 245-258) attributes the swelling propensity of articular cartilage to three different mechanisms: Donnan osmosis, excluded volume effect, and chemical expansion stress. The aim of this study is to evaluate the thermodynamic plausibility of the triphasic theory. The free energy of a sample of articular cartilage subjected to a closed cycle of mechanical and chemical loading is calculated using the triphasic theory. It is shown that the chemical expansion stress term induces an unphysiological generation of free energy during each closed cycle of loading and unloading. As the cycle of loading and unloading can be repeated an indefinite number of times, any amount of free energy can be drawn from a sample of articular cartilage, if the triphasic theory were true. The formulation for the chemical expansion stress as used in the triphasic theory conflicts with the second law of thermodynamics.
---
paper_title: Quadriphasic mechanics of swelling incompressible porous media
paper_content:
A chemo-electro-mechanical formulation of quasi-static finite deformation of swelling incompressible porous media is derived from mixture theory. The model consists of an electrically charged porous solid saturated with a monovalent ionic solution. Incompressible and isothermal deformation is assumed. Hydration forces are neglected. The mixture as a whole is assumed locally electroneutral. Four phases following different kinematic paths are defined: solid, fluid, anions and cations. Balance laws are derived for each phase and for the mixture as a whole. A Lagrangian form of the second law of thermodynamics is derived for incompressible porous media and is used to derive the constitutive relationships of the medium. It is shown that the theory is consistent with Biot's theory for the limiting case without ionic effects and with Staverman's results for the limiting case without deformation.
---
paper_title: Complex nature of stress inside loaded articular cartilage
paper_content:
We show that in the early stages of loading of the cartilage matrix, extensive water exudation and related physicochemical and structural changes give rise to a distinctly consolidatable system. By enzymatically modifying the pre-existing osmotic condition of the normal matrix and measuring its hydrostatic excess pore pressure, we have studied the exact influence of physicochemistry on the consolidation of cartilage. We argue that the attainment of a certain minimum level of swelling stiffness of the solid skeleton, which is developed at the maximum hydrostatic excess pore pressure of the fluid, controls the effective consolidation of articular cartilage. Three related but distinct stresses are developed during cartilage deformation, namely (1) the swelling stress in the coupled proteoglycan/collagen skeleton in the early stages of deformation, (2) the hydrostatic excess pore pressure carried by the fluid component, and (3) the effective stress generated on top of the minimum value of the swelling stress in the consolidation stages following the attainment of the fluid's maximum pore pressure. The minimum value of the swelling pressure is in turn generated over and above the intrinsic osmotic pressure in the unloaded matrix. The response of the hyaluronidase-digested matrix relative to its intact state again highlights the important influence of the osmotic pressure and the coefficient of permeability, both of which are related to the volume fraction of proteoglycans, on cartilage deformation, and therefore its ability to function as an effective stress-redistributing layer above the subchondral bone.
---
paper_title: Depth- and strain-dependent mechanical and electromechanical properties of full-thickness bovine articular cartilage in confined compression.
paper_content:
Compression tests have often been performed to assess the biomechanical properties of full-thickness articular cartilage. We tested whether the apparent homogeneous strain-dependent properties, deduced from such tests, reflect both strain- and depth-dependent material properties. Full-thickness bovine articular cartilage was tested by oscillatory confined compression superimposed on a static offset of up to 45%, and the data were fit to estimate modulus, permeability, and electrokinetic coefficient assuming homogeneity. Additional tests on partial-thickness cartilage were then performed to assess depth- and strain-dependent properties in an inhomogeneous model, assuming three discrete layers (i = 1 starting from the articular surface, to i = 3 up to the subchondral bone). Estimates of the zero-strain equilibrium confined compression modulus (H_A0), the zero-strain permeability (k_p0) and deformation dependence constant (M), and the deformation-dependent electrokinetic coefficient (k_e) differed among individual layers of cartilage and full-thickness cartilage. H_A0^i increased from layer 1 to 3 (0.27 to 0.71 MPa) and bracketed the apparent homogeneous value (0.47 MPa). k_p0^i decreased from layer 1 to 3 (4.6 × 10^-15 to 0.50 × 10^-15 m^2/(Pa·s)) and was less than the homogeneous value (7.3 × 10^-15 m^2/(Pa·s)), while M^i increased from layer 1 to 3 (5.5 to 7.4) and became similar to the homogeneous value (8.4). The amplitude of k_e^i increased markedly with compressive strain, as did the homogeneous value: at low strain, it was lowest near the articular surface and increased to a peak in the middle-deep region. These results help to interpret the biomechanical assessment of full-thickness articular cartilage.
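A deformation-dependent permeability characterized by (k_p0, M) is commonly written in the exponential form k(e) = k_p0·exp(M·e), with the dilatation e negative in compression. The sketch below evaluates this law, as an assumption for illustration, using the layer-wise mean values quoted above for the surface and deep layers.
```python
# Sketch: exponential strain-dependent permeability k = k_p0 * exp(M * e),
# evaluated layer by layer with e < 0 in compression. Layer constants are the
# mean values quoted above; the strain levels are illustrative.
import math

layers = {                              # (k_p0 [m^2/(Pa*s)], M [-]), surface and deep layers
    "layer 1 (surface)": (4.6e-15, 5.5),
    "layer 3 (deep)":    (0.5e-15, 7.4),
}

for name, (k0, M) in layers.items():
    for strain in (0.0, -0.15, -0.30):  # dilatation, negative in compression
        k = k0 * math.exp(M * strain)
        print(f"{name}: e = {strain:+.2f} -> k = {k:.2e} m^2/(Pa*s)")
```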
---
paper_title: A Poroelastic Finite Element Formulation Including Transport and Swelling in Soft Tissue Structures
paper_content:
A field theory is presented for the study of swelling in soft tissue structures that are modeled as poroelastic materials. As a first approximation, soft tissues are assumed to be linear isotropic materials undergoing infinitesimal strains. Material properties are identified that are necessary for the solution of initial boundary value problems where swelling and convection are significant. A finite element model is developed that includes the solid displacements, the relative fluid displacements, and a representative concentration as the primary unknowns. A numerical example is presented based on a triphasic model. The finite element model simulates a typical experimental protocol for soft tissue testing and demonstrates the interaction and coupling associated with relative fluid motion and swelling in a deforming poroelastic material. The theory and finite element model provide a starting point for nonlinear porohyperelastic transport-swelling analyses of soft tissue structures that include finite strains in anisotropic materials.
---
paper_title: The kinetics of chemically induced nonequilibrium swelling of articular cartilage and corneal stroma.
paper_content:
An electromechanical model for charged, hydrated tissues is developed to predict the kinetics of changes in swelling and isometric compressive stress induced by changes in bath salt concentration. The model focuses on ionic transport as the rate limiting step in chemically modulating electrical interactions between the charged macromolecules of the extracellular matrix. The swelling response to such changes in local interaction forces is determined by the relative rates of chemical diffusion and fluid redistribution in the tissue sample. We have tested the model by comparing the experimentally observed salt-induced stress relaxation response in bovine articular cartilage and corneal stroma to the response predicted by the model using constitutive relations for the concentration dependent material properties of the tissues reported in a related study. The qualitatively good agreement between our experimental measurements and the predictions of the model supports the physical basis of the model and demonstrates the model's ability to discriminate between the two soft connective tissues that were examined.
---
paper_title: A finite element analysis methodology for representing the articular cartilage functional structure.
paper_content:
Recognising that the unique biomechanical properties of articular cartilage are a consequence of its structure, this paper describes a finite element methodology which explicitly represents this structure using a modified overlay element model. The validity of this novel concept was then tested by using it to predict the axial curling forces generated by cartilage matrices subjected to saline solutions of known molality and concentration in a novel experimental protocol. Our results show that the finite element modelling methodology accurately represents the intrinsic biomechanical state of the cartilage matrix and can be used to predict its transient load-carriage behaviour. We conclude that this ability to represent the intrinsic swollen condition of a given cartilage matrix offers a viable avenue for numerical analysis of degenerate articular cartilage and also those matrices affected by disease.
---
paper_title: Physicochemical Properties of Cartilage in the Light of Ion Exchange Theory
paper_content:
Ion exchange theory has been applied to articular cartilage. Relationships were derived between permeability, diffusivity, electrical conductivity, and streaming potential. Systematic measurements were undertaken on these properties. Experimental techniques are described and data tabulated. Theoretical correlations were found to hold within the experimental error. The concentration of fixed negatively-charged groups in cartilage was shown to be the most important parameter. Fixed charge density was found to increase with distance from the articular surface and this variation was reflected in the other properties.
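The central quantity in this treatment is the fixed charge density. As a minimal illustration of how it sets ion partitioning and swelling pressure, the following Python sketch evaluates ideal Donnan equilibrium for a monovalent bath; ideal activity and osmotic coefficients are assumed, which is simpler than the ion exchange treatment used in the paper.

```python
# A minimal sketch of ideal Donnan equilibrium for a tissue with fixed charge
# density cF bathed in a monovalent salt solution of concentration c_bath.
# Ideal behaviour (activity/osmotic coefficients = 1) is assumed.
import math

R = 8.314      # J/(mol K)
T = 310.0      # K

def donnan(cF, c_bath):
    """Return (cation, anion, osmotic pressure difference) for ideal Donnan."""
    c_plus = 0.5 * (cF + math.sqrt(cF**2 + 4.0 * c_bath**2))  # mol/m^3
    c_minus = c_plus - cF                                     # electroneutrality
    d_pi = R * T * (c_plus + c_minus - 2.0 * c_bath)          # Pa
    return c_plus, c_minus, d_pi

# Example: cF = 200 mol/m^3 (0.2 mEq/ml), physiological saline 150 mol/m^3
cp, cm, dpi = donnan(200.0, 150.0)
print(f"c+ = {cp:.1f}, c- = {cm:.1f} mol/m^3, Delta_pi = {dpi/1e3:.1f} kPa")
```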
---
paper_title: A fibril-reinforced poroviscoelastic swelling model for articular cartilage.
paper_content:
From a mechanical point of view, the most relevant components of articular cartilage are the tight and highly organized collagen network together with the charged proteoglycans. Due to the fixed charges of the proteoglycans, the cation concentration inside the tissue is higher than in the surrounding synovial fluid. This excess of ion particles leads to an osmotic pressure difference, which causes swelling of the tissue. The fibrillar collagen network resists straining and swelling pressures. This combination makes cartilage a unique, highly hydrated and pressurized tissue, reinforced with a strained collagen network. Many theories to explain articular cartilage behavior under loading, expressed in computational models that either include the swelling behavior or the properties of the anisotropic collagen structure, can be found in the literature. The most common tests used to determine the mechanical quality of articular cartilage are those of confined compression, unconfined compression, indentation and swelling. All theories currently available in the literature can explain the cartilage response occurring in some of the above tests, but none of them can explain these for all of the tests. We hypothesized that a model including simultaneous mathematical descriptions of (1) the swelling properties due to the fixed-charge densities of the proteoglycans and (2) the anisotropic viscoelastic collagen structure, can explain all these tests simultaneously. To study this hypothesis we extended our fibril-reinforced poroviscoelastic finite element model with our biphasic swelling model. We have shown that the newly developed fibril-reinforced poroviscoelastic swelling (FPVES) model for articular cartilage can simultaneously account for the reaction force during swelling, confined compression, indentation and unconfined compression as well as the lateral deformation during unconfined compression. Using this theory it is possible to analyze the link between the collagen network and the swelling properties of articular cartilage.
---
paper_title: General Theory of Three‐Dimensional Consolidation
paper_content:
The settlement of soils under load is caused by a phenomenon called consolidation, whose mechanism is known to be in many cases identical with the process of squeezing water out of an elastic porous medium. The mathematical physical consequences of this viewpoint are established in the present paper. The number of physical constants necessary to determine the properties of the soil is derived along with the general equations for the prediction of settlements and stresses in three-dimensional problems. Simple applications are treated as examples. The operational calculus is shown to be a powerful method of solution of consolidation problems.
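As a worked illustration of the theory's best-known special case, the sketch below evaluates the one-dimensional (Terzaghi) consolidation series for the excess pore pressure in a uniformly loaded, singly drained layer. The material values are illustrative only.

```python
# One-dimensional special case (Terzaghi consolidation) of the general theory:
# excess pore pressure decay in a layer of thickness H drained at z = 0 and
# impermeable at z = H, expressed as a Fourier series. Values are illustrative.
import numpy as np

def excess_pore_pressure(z, t, u0, cv, H, n_terms=200):
    """u(z, t) for a layer drained at z = 0 and impermeable at z = H."""
    u = np.zeros_like(np.asarray(z, dtype=float))
    for m in range(n_terms):
        M = (2 * m + 1) * np.pi / 2.0
        u += (2.0 * u0 / M) * np.sin(M * z / H) * np.exp(-M**2 * cv * t / H**2)
    return u

z = np.linspace(0.0, 1.0e-3, 11)      # depth through a 1 mm layer [m]
for t in (1.0, 10.0, 100.0):          # seconds
    u = excess_pore_pressure(z, t, u0=100e3, cv=1e-8, H=1.0e-3)
    print(f"t = {t:5.0f} s, u(mid-depth) = {u[5]/1e3:6.1f} kPa")
```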
---
paper_title: Contribution of postnatal collagen reorientation to depth-dependent mechanical properties of articular cartilage
paper_content:
The collagen fibril network is an important factor for the depth-dependent mechanical behaviour of adult articular cartilage (AC). Recent studies show that collagen orientation is parallel to the articular surface throughout the tissue depth in perinatal animals, and that the collagen orientations transform to a depth-dependent arcade-like structure in adult animals. Current understanding on the mechanobiology of postnatal AC development is incomplete. In the current paper, we investigate the contribution of collagen fibril orientation changes to the depth-dependent mechanical properties of AC. We use a composition-based finite element model to simulate in a 1-D confined compression geometry the effects of ten different collagen orientation patterns that were measured in developing sheep. In initial postnatal life, AC is mostly subject to growth and we observe only small changes in depth-dependent mechanical behaviour. Functional adaptation of depth-dependent mechanical behaviour of AC takes place in the second half of life before puberty. Changes in fibril orientation alone increase cartilage stiffness during development through the modulation of swelling strains and osmotic pressures. Changes in stiffness are most pronounced for small stresses and for cartilage adjacent to the bone. We hypothesize that postnatal changes in collagen fibril orientation induce mechanical effects that in turn promote these changes. We further hypothesize that a part of the depth-dependent postnatal increase in collagen content in literature is initiated by the depth-dependent postnatal increase in fibril strain due to collagen fibril reorientation.
---
paper_title: A Comparison Between Mechano-Electrochemical and Biphasic Swelling Theories for Soft Hydrated Tissues
paper_content:
Biological tissues like intervertebral discs and articular cartilage primarily consist of interstitial fluid, collagen fibrils and negatively charged proteoglycans. Due to the fixed charges of the proteoglycans, the total ion concentration inside the tissue is higher than in the surrounding synovial fluid (the cation concentration is higher and the anion concentration is lower). This excess of ion particles leads to an osmotic pressure difference, which causes swelling of the tissue. In the last decade several mechano-electrochemical models, which include this mechanism, have been developed. As these models are complex and computationally expensive, it is only possible to analyze geometrically relatively small problems. Furthermore, there is still no commercial finite element tool that includes such a mechano-electrochemical theory. Lanir (Biorheology, 24, pp. 173–187, 1987) hypothesized that electrolyte flux in articular cartilage can be neglected in mechanical studies. Lanir's hypothesis implies that the swelling behavior of cartilage is only determined by deformation of the solid and by fluid flow. Hence, the response could be described by adding a deformation-dependent pressure term to the standard biphasic equations. Based on this theory we developed a biphasic swelling model. The goal of the study was to test Lanir's hypothesis for a range of material properties. We compared the deformation behavior predicted by the biphasic swelling model and a full mechano-electrochemical model for confined compression and 1D swelling. It was shown that, depending on the material properties, the biphasic swelling model behaves largely the same as the mechano-electrochemical model, with regard to stresses and strains in the tissue following either mechanical or chemical perturbations. Hence, the biphasic swelling model could be an alternative for the more complex mechano-electrochemical model, in those cases where the ion flux itself is not the subject of the study. We propose rules of thumb to estimate the correlation between the two models for specific problems.
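A minimal sketch of the idea, assuming an ideal Donnan pressure and a fixed charge density that scales with tissue volume through the initial fluid fraction (both assumptions for illustration, not the exact model equations), is given below: the osmotic pressure term depends on deformation only, so it can be appended to standard biphasic equations.

```python
# Sketch of the key ingredient of a biphasic swelling description: a Donnan
# osmotic pressure that depends on deformation only through the change in
# fixed charge density with tissue volume. The specific relations (ideal
# Donnan; cF scaling with the volume ratio J via the initial fluid fraction
# nf0) are illustrative assumptions.
import math

R, T = 8.314, 310.0          # J/(mol K), K

def fixed_charge_density(cF0, nf0, J):
    """Current FCD when the tissue volume changes by a factor J."""
    return cF0 * nf0 / (nf0 - 1.0 + J)

def osmotic_pressure(cF, c_bath):
    """Ideal Donnan osmotic pressure difference [Pa]."""
    return R * T * (math.sqrt(cF**2 + 4.0 * c_bath**2) - 2.0 * c_bath)

cF0, nf0, c_bath = 200.0, 0.8, 150.0     # mol/m^3, -, mol/m^3
for J in (1.0, 0.9, 0.8):                # compression reduces volume
    cF = fixed_charge_density(cF0, nf0, J)
    print(f"J = {J:.1f}: cF = {cF:6.1f} mol/m^3, "
          f"delta_pi = {osmotic_pressure(cF, c_bath)/1e3:6.1f} kPa")
```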
---
paper_title: Contribution of tissue composition and structure to mechanical response of articular cartilage under different loading geometries and strain rates
paper_content:
Mechanical function of articular cartilage in joints between articulating bones is dependent on the composition and structure of the tissue. The mechanical properties of articular cartilage are traditionally tested in compression using one of the three loading geometries, i.e., confined compression, unconfined compression or indentation. The aim of this study was to utilize a composition-based finite element model in combination with a fractional factorial design to determine the importance of different cartilage constituents in the mechanical response of the tissue, and to compare the importance of the tissue constituents with different loading geometries and loading rates. The evaluated parameters included water and collagen fraction as well as fixed charge density on cartilage surface and their slope over the tissue thickness. The thicknesses of superficial and middle zones, as based on the collagen orientation, were also included in the evaluated parameters. A three-level resolution V fractional factorial design was used. The model results showed that inhomogeneous composition plays only a minor role in indentation, though that role becomes more significant in confined compression and unconfined compression. In contrast, the collagen architecture and content had a more profound role in indentation than with two other loading geometries. These differences in the mechanical role of composition and structure between the loading geometries were emphasized at higher loading rates. These findings highlight how the results from mechanical tests of articular cartilage under different loading conditions are dependent upon tissue composition and structure.
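The sketch below illustrates the screening idea with a two-level full factorial over a toy stiffness surrogate and main effects computed by contrasts; the authors' actual study used a three-level resolution V fractional factorial driving a composition-based finite element model, so the design, the factor ranges and the response function here are purely hypothetical.

```python
# Illustrative screening sketch: two-level full factorial over a few
# composition parameters of a toy stiffness surrogate, with main effects
# computed by contrasts. Not the authors' design or model.
import itertools
import numpy as np

factors = ["water_frac", "collagen_frac", "FCD_surface"]
levels = {"water_frac": (0.70, 0.85),
          "collagen_frac": (0.10, 0.25),
          "FCD_surface": (0.10, 0.25)}      # arbitrary units

def toy_stiffness(water, collagen, fcd):
    """Hypothetical surrogate: stiffer with more collagen/FCD, softer with water."""
    return 2.0 * collagen + 1.5 * fcd - 1.0 * (water - 0.7) + 0.5 * collagen * fcd

runs, responses = [], []
for combo in itertools.product(*(levels[f] for f in factors)):
    runs.append(combo)
    responses.append(toy_stiffness(*combo))

responses = np.array(responses)
for i, f in enumerate(factors):
    hi = np.array([r[i] == levels[f][1] for r in runs])
    effect = responses[hi].mean() - responses[~hi].mean()
    print(f"main effect of {f}: {effect:+.3f}")
```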
---
paper_title: A finite element formulation and program to study transient swelling and load-carriage in healthy and degenerate articular cartilage
paper_content:
The theory of poroelasticity is extended to include physico-chemical swelling and used to predict the transient responses of normal and degenerate articular cartilage to both chemical and mechanical loading; with emphasis on isolating the influence of the major parameters which govern its deformation. Using a new hybrid element, our mathematical relationships were implemented in a purpose-built poroelastic finite element analysis algorithm (u-pi-c program) which was used to resolve the nature of the coupling between the mechanical and chemical responses of cartilage when subjected to ionic transport across its membranous skeleton. Our results demonstrate that one of the roles of the strain-dependent matrix permeability is to limit the rate of transmission of stresses from the fluid to the collagen-proteoglycan solid skeleton in the incipient stages of loading, and that the major contribution of the swelling pressure is that of preventing any excessive deformation of the matrix.
---
paper_title: Hyperelastic modelling of arterial layers with distributed collagen fibre orientations
paper_content:
Constitutive relations are fundamental to the solution of problems in continuum mechanics, and are required in the study of, for example, mechanically dominated clinical interventions involving soft biological tissues. Structural continuum constitutive models of arterial layers integrate information about the tissue morphology and therefore allow investigation of the interrelation between structure and function in response to mechanical loading. Collagen fibres are key ingredients in the structure of arteries. In the media (the middle layer of the artery wall) they are arranged in two helically distributed families with a small pitch and very little dispersion in their orientation (i.e. they are aligned quite close to the circumferential direction). By contrast, in the adventitial and intimal layers, the orientation of the collagen fibres is dispersed, as shown by polarized light microscopy of stained arterial tissue. As a result, continuum models that do not account for the dispersion are not able to capture accurately the stress–strain response of these layers. The purpose of this paper, therefore, is to develop a structural continuum framework that is able to represent the dispersion of the collagen fibre orientation. This then allows the development of a new hyperelastic free-energy function that is particularly suited for representing the anisotropic elastic properties of adventitial and intimal layers of arterial walls, and is a generalization of the fibre-reinforced structural model introduced by Holzapfel & Gasser (Holzapfel & Gasser 2001 Comput. Meth. Appl. Mech. Eng. 190, 4379–4403) and Holzapfel et al. (Holzapfel et al. 2000 J. Elast. 61, 1–48). The model incorporates an additional scalar structure parameter that characterizes the dispersed collagen orientation. An efficient finite element implementation of the model is then presented and numerical examples show that the dispersion of the orientation of collagen fibres in the adventitia of human iliac arteries has a significant effect on their mechanical response.
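A minimal numerical sketch of a dispersed-fibre strain-energy term of the kind associated with this model family is given below, with E = kappa*(I1 - 3) + (1 - 3*kappa)*(I4 - 1) and an exponential fibre energy; the parameter values, the single fibre family and the tension-only switch are illustration choices, and the paper should be consulted for the complete model (isotropic matrix term, two fibre families).

```python
# Sketch of a dispersed-fibre strain-energy term: one fibre family with mean
# direction a0 and dispersion parameter kappa,
#   E = kappa*(I1 - 3) + (1 - 3*kappa)*(I4 - 1),
#   Psi_f = k1/(2*k2) * (exp(k2 * E**2) - 1).
# Parameter values are illustrative only.
import numpy as np

def fibre_energy(F, a0, k1, k2, kappa):
    C = F.T @ F                       # right Cauchy-Green tensor
    I1 = np.trace(C)
    I4 = a0 @ (C @ a0)                # squared fibre stretch
    E = kappa * (I1 - 3.0) + (1.0 - 3.0 * kappa) * (I4 - 1.0)
    E = max(E, 0.0)                   # fibres assumed not to support compression
    return k1 / (2.0 * k2) * (np.exp(k2 * E**2) - 1.0)

a0 = np.array([np.cos(np.deg2rad(40.0)), np.sin(np.deg2rad(40.0)), 0.0])
for lam in (1.00, 1.05, 1.10):        # uniaxial stretch along x (incompressible)
    F = np.diag([lam, 1.0 / np.sqrt(lam), 1.0 / np.sqrt(lam)])
    psi = fibre_energy(F, a0, k1=1.0e3, k2=10.0, kappa=0.1)
    print(f"stretch {lam:.2f}: fibre energy = {psi:8.3f} (per unit ref. volume)")
```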
---
paper_title: Microstructural modeling of collagen network mechanics and interactions with the proteoglycan gel in articular cartilage
paper_content:
Cartilage matrix mechanical function is largely determined by interactions between the collagen fibrillar network and the proteoglycan gel. Although the molecular physics of these matrix constituents have been characterized and modern imaging methods are capable of localized measurement of molecular densities and orientation distributions, theoretical tools for using this information for prediction of cartilage mechanical behavior are lacking. We introduce a means to model collagen network contributions to cartilage mechanics based upon accessible microstructural information (fibril density and orientation distributions) and which self-consistently follows changes in microstructural geometry with matrix deformations. The interplay between the molecular physics of the collagen network and the proteoglycan gel is scaled up to determine matrix material properties, with features such as collagen fibril pre-stress in free-swelling cartilage emerging naturally and without introduction of ad hoc parameters. Methods are developed for theoretical treatment of the collagen network as a continuum-like distribution of fibrils, such that mechanical analysis of the network may be simplified by consideration of the spherical harmonic components of functions of the fibril orientation, strain, and stress distributions. Expressions for the collagen network contributions to matrix stress and stiffness tensors are derived, illustrating that only spherical harmonic components of orders 0 and 2 contribute to the stress, while orders 0, 2, and 4 contribute to the stiffness. Depth- and compression-dependent equilibrium mechanical properties of cartilage matrix are modeled, and advantages of the approach are illustrated by exploration of orientation and strain distributions of collagen fibrils in compressed cartilage. Results highlight collagen-proteoglycan interactions, especially for very small physiological strains where experimental data are relatively sparse. These methods for determining matrix mechanical properties from measurable quantities at the microscale (composition, structure, and molecular physics) may be useful for investigating cartilage structure-function relationships relevant to load-bearing, injury, and repair.
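For an axisymmetric orientation distribution, the relevant spherical harmonic components reduce to Legendre order parameters. The sketch below computes <P_l(cos theta)> for l = 0, 2, 4 by numerical quadrature for a hypothetical concentrated distribution; it is meant only to make the "orders 0, 2 and 4" statement tangible, not to reproduce the paper's formulation.

```python
# Low-order Legendre "order parameters" <P_l(cos theta)> of an axisymmetric
# fibril orientation distribution, computed by quadrature. The concentrated
# test distribution is hypothetical.
import numpy as np

def P(l, x):
    return {0: np.ones_like(x),
            2: 0.5 * (3 * x**2 - 1),
            4: 0.125 * (35 * x**4 - 30 * x**2 + 3)}[l]

def trap(y, x):
    # simple trapezoidal rule, avoiding version-specific numpy helpers
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def order_parameters(f_theta, n=2000):
    """<P_l> for an axisymmetric orientation density f(theta) on the sphere."""
    theta = np.linspace(0.0, np.pi, n)
    w = f_theta(theta) * np.sin(theta)
    norm = trap(w, theta)
    return {l: trap(w * P(l, np.cos(theta)), theta) / norm for l in (0, 2, 4)}

kappa = 4.0                                        # concentration about theta = 0
odf = lambda th: np.exp(kappa * np.cos(th)**2)     # hypothetical fibril ODF
for l, val in order_parameters(odf).items():
    print(f"<P{l}> = {val:.3f}")
```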
---
paper_title: A viscoelastic model for fiber-reinforced composites at finite strains: Continuum basis, computational aspects and applications
paper_content:
This paper presents a viscoelastic model for the fully three-dimensional stress and deformation response of fiber-reinforced composites that experience finite strains. The composites are thought to be (soft) matrix materials which are reinforced by two families of fibers so that the mechanical properties of the composites depend on two fiber directions. The relaxation and/or creep response of each compound of the composite is modeled separately and the global response is obtained by an assembly of all contributions. We develop novel closed-form expressions for the fourth-order elasticity tensor (tangent moduli) in full generality. Constitutive models for orthotropic, transversely isotropic and isotropic hyperelastic materials at finite strains with or without dissipation are included as special cases. In order to clearly show the good performance of the constitutive model, we present 3D and 2D numerical simulations of a pressurized laminated circular tube which shows an interesting 'stretch inversion phenomenon' in the low pressure domain. Numerical results are in good qualitative agreement with experimental data and approximate the observed strongly anisotropic physical response with satisfying accuracy. A third numerical example is designed to illustrate the anisotropic stretching process of a fiber-reinforced rubber bar and the subsequent relaxation behavior at finite strains. The material parameters are chosen so that thermodynamic equilibrium is associated with the known homogeneous deformation state.
---
paper_title: Modeling the Matrix of Articular Cartilage Using a Continuous Fiber Angular Distribution Predicts Many Observed Phenomena
paper_content:
A number of theoretical frameworks embodying the disparity between tensile and compressive properties of cartilage have been proposed, accounting for the collagen fibers implicitly [1,2] or explicitly [3–5]. These models generally propose discrete fiber families to describe the collagen matrix. They are able to capture the most salient features of the cartilage mechanical response, namely, the tension-compression nonlinearity of the stress-strain curve [6].
---
paper_title: A hyperelastic biphasic fibre-reinforced model of articular cartilage considering distributed collagen fibre orientations: continuum basis, computational aspects and applications
paper_content:
Cartilage is a multi-phase material composed of fluid and electrolytes (68-85% by wet weight), proteoglycans (5-10% by wet weight), chondrocytes, collagen fibres and other glycoproteins. The solid ...
---
paper_title: Analysis of articular cartilage as a composite using nonlinear membrane elements for collagen fibrils.
paper_content:
To develop a composite fibre-reinforced model of the cartilage, membrane shell elements were introduced to represent collagen fibrils reinforcing the isotropic porous solid matrix filled with fluid. Nonlinear stress-strain curve of pure collagen fibres and collagen volume fraction were explicitly presented in the formulation of these membrane elements. In this composite model, in accordance with tissue structure, the matrix and fibril membrane network experienced dissimilar stresses despite identical strains in the fibre directions. Different unconfined compression and indentation case studies were performed to determine the distinct role of membrane collagen fibrils in nonlinear poroelastic mechanics of articular cartilage. The importance of nonlinear fibril membrane elements in the tissue relaxation response as well as in temporal and spatial variations of pore pressure and solid matrix stresses was demonstrated. By individual adjustments of the collagen volume fraction and collagen mechanical properties, the model allows for the simulation of alterations in the fibril network structure of the tissue towards modelling damage processes or repair attempts. The current model, which is based on a physiological description of the tissue structure, is promising in improvement of our understanding of the cartilage pathomechanics.
---
paper_title: Nonlinear analysis of cartilage in unconfined ramp compression using a fibril reinforced poroelastic model.
paper_content:
OBJECTIVE: To develop a biomechanical model for cartilage which is capable of capturing experimentally observed nonlinear behaviours of cartilage and to investigate effects of collagen fibril reinforcement in cartilage. DESIGN: A sequence of 10 or 20 steps of ramp compression/relaxation applied to cartilage disks in uniaxial unconfined geometry is simulated for comparison with experimental data. BACKGROUND: Mechanical behaviours of cartilage, such as the compression-offset dependent stiffening of the transient response and the strong relaxation component, have been previously difficult to describe using the biphasic model in unconfined compression. METHODS: Cartilage is modelled as a fluid-saturated solid reinforced by an elastic fibrillar network. The latter, mainly representing collagen fibrils, is considered as a distinct constituent embedded in a biphasic component made up mainly of proteoglycan macromolecules and a fluid carrying mobile ions. The Young's modulus of the fibrillar network is taken to vary linearly with its tensile strain but to be zero for compression. Numerical computations are carried out using a finite element procedure, for which the fibrillar network is discretized into a system of spring elements. RESULTS: The nonlinear fibril reinforced poroelastic model is capable of describing the strong relaxation behaviour and compression-offset dependent stiffening of cartilage in unconfined compression. Computational results are also presented to demonstrate unique features of the model, e.g. the matrix stress in the radial direction is changed from tensile to compressive due to presence of distinct fibrils in the model. RELEVANCE: Experimentally observed nonlinear behaviours of cartilage are successfully simulated, and the roles of collagen fibrils are distinguished by using the proposed model. Thus this study may lead to a better understanding of physiological responses of individual constituents of cartilage to external loads, and of the roles of mechanical loading in cartilage remodelling and pathology.
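The fibril law stated above (network Young's modulus growing linearly with tensile strain, zero in compression) can be sketched in a few lines; whether the modulus is applied as a secant or tangent quantity, and the parameter values, are illustration choices here rather than values taken from the paper.

```python
# Sketch of a tension-only fibril law whose modulus grows linearly with
# tensile strain and vanishes in compression. Parameter values and the secant
# stress interpretation are hypothetical.
E0, E1 = 3.0e6, 1.6e9      # Pa, and Pa per unit strain (illustrative values)

def fibril_modulus(strain):
    return E0 + E1 * strain if strain > 0.0 else 0.0

def fibril_stress(strain):
    return fibril_modulus(strain) * strain          # secant interpretation

for eps in (-0.05, 0.0, 0.02, 0.05):
    print(f"strain {eps:+.2f}: E_f = {fibril_modulus(eps)/1e6:7.1f} MPa, "
          f"stress = {fibril_stress(eps)/1e6:6.3f} MPa")
```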
---
paper_title: The role of interstitial fluid pressurization in articular cartilage lubrication
paper_content:
Over the last two decades, considerable progress has been reported in the field of cartilage mechanics that impacts our understanding of the role of interstitial fluid pressurization on cartilage lubrication. Theoretical and experimental studies have demonstrated that the interstitial fluid of cartilage pressurizes considerably under loading, potentially supporting most of the applied load under various transient or steady-state conditions. The fraction of the total load supported by fluid pressurization has been called the fluid load support. Experimental studies have demonstrated that the friction coefficient of cartilage correlates negatively with this variable, achieving remarkably low values when the fluid load support is greatest. A theoretical framework that embodies this relationship has been validated against experiments, predicting and explaining various outcomes, and demonstrating that a low friction coefficient can be maintained for prolonged loading durations under normal physiological function. This paper reviews salient aspects of this topic, as well as its implications for improving our understanding of boundary lubrication by molecular species in synovial fluid and the cartilage superficial zone. Effects of cartilage degeneration on its frictional response are also reviewed.
---
paper_title: A Microstructural Model of Elastostatic Properties of Articular Cartilage in Confined Compression
paper_content:
A microstructural model of cartilage was developed to investigate the relative contribution of tissue matrix components to its elastostatic properties. Cartilage was depicted as a tensed collagen lattice pressurized by the Donnan osmotic swelling pressure of proteoglycans. As a first step in modeling the collagen lattice, two-dimensional networks of tensed, elastic, interconnected cables were studied as conceptual models. The models were subjected to the boundary conditions of confined compression and stress-strain curves and elastic moduli were obtained as a function of a two-dimensional equivalent of swelling pressure. Model predictions were compared to equilibrium confined compression moduli of calf cartilage obtained at different bath concentrations ranging from 0.01 to 0.50 M NaCl. It was found that a triangular cable network provided the most consistent correspondence to the experimental data. The model showed that the cartilage collagen network remained tensed under large confined compression strains and could therefore support shear stress. The model also predicted that the elastic moduli increased with increasing swelling pressure in a manner qualitatively similar to experimental observations. Although the model did not preclude potential contributions of other tissue components and mechanisms, the consistency of model predictions with experimental observations suggests that the cartilage collagen network, prestressed by proteoglycan swelling pressure, plays an important role in supporting compression.
---
paper_title: Concerning the ultrastructural origin of large-scale swelling in articular cartilage
paper_content:
The swelling behaviour of the general matrix of both normal and abnormally softened articular cartilage was investigated in the context of its relationship to the underlying subchondral bone, the articular surface, and with respect to the primary structural directions represented in its strongly anisotropic collagenous architecture. Swelling behaviours were compared by subjecting tissue specimens under different modes of constraint to a high swelling bathing solution of distilled water and comparing structural changes imaged at the macroscopic, microscopic and ultrastructural levels of resolution. Near zero swelling was observed in the isolated normal general matrix with minimal structural change. By contrast the similarly isolated softened general matrix exhibited large-scale swelling in both the transverse and radial directions. This difference in dimensional stability was attributed to fundamentally different levels of fibril interconnectivity between the 2 matrices. A model of structural transformation is proposed to accommodate fibrillar rearrangements associated with the large-scale swelling in the radial and transverse directions in the softened general matrix.
---
paper_title: A Conewise Linear Elasticity Mixture Model for the Analysis of Tension-Compression Nonlinearity in Articular Cartilage
paper_content:
A biphasic mixture model is developed which can account for the observed tension-compression nonlinearity of cartilage by employing the continuum-based Conewise Linear Elasticity (CLE) model of Curnier et al. (J Elasticity 37:1–38, 1995) to describe the solid phase of the mixture. In this first investigation, the orthotropic octantwise linear elasticity model was reduced to the more specialized case of cubic symmetry, to reduce the number of elastic constants from twelve to four. Confined and unconfined compression stress-relaxation, and torsional shear testing were performed on each of nine bovine humeral head articular cartilage cylindrical plugs from 6 month old calves. Using the CLE model with cubic symmetry, the aggregate modulus in compression and axial permeability were obtained from confined compression (H_-A = 0.64 ± 0.22 MPa, k_z = 3.62 ± 0.97 × 10^-16 m^4/(N s), r^2 = 0.95 ± 0.03), the tensile modulus, compressive Poisson ratio and radial permeability were obtained from unconfined compression (E_+Y = 12.75 ± 1.56 MPa, ν_- = 0.03 ± 0.01, k_r = 6.06 ± 2.10 × 10^-16 m^4/(N s), r^2 = 0.99 ± 0.00), and the shear modulus was obtained from torsional shear (μ = 0.17 ± 0.06 MPa). The model was also employed to successfully predict the interstitial fluid pressure at the center of the cartilage plug in unconfined compression (r^2 = 0.98 ± 0.01). The results of this study demonstrate that the integration of the CLE model with the biphasic mixture theory can provide a model of cartilage which can successfully curve-fit three distinct testing configurations while producing material parameters consistent with previous reports in the literature.
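A one-dimensional cartoon of the tension-compression nonlinearity that the CLE description captures is sketched below using the reported moduli; the full model is three-dimensional with cubic symmetry and four constants, and pairing the tensile Young's modulus with the compressive aggregate modulus is only meant to convey the orders of magnitude involved.

```python
# 1D cartoon of tension-compression nonlinearity: different moduli in tension
# and compression. The pairing of E_+Y with H_-A is for illustration only.
E_tension = 12.75e6      # Pa, tensile modulus from unconfined compression (E_+Y)
H_compression = 0.64e6   # Pa, compressive aggregate modulus (H_-A)

def conewise_stress_1d(strain):
    return (E_tension if strain >= 0.0 else H_compression) * strain

for eps in (-0.10, -0.05, 0.05, 0.10):
    print(f"strain {eps:+.2f}: stress = {conewise_stress_1d(eps)/1e3:8.1f} kPa")
```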
---
paper_title: A degeneration‐based hypothesis for interpreting fibrillar changes in the osteoarthritic cartilage matrix
paper_content:
The collagen fibrillar architectures in the general matrix of cartilage slices removed from both normal and osteoarthritic femoral heads were examined by both differential interference light microscopy and scanning electron microscopy. Whereas the normal general matrix contained a finely differentiated pseudo-random weave of fibrils developed from an interconnected array of radial elements, the osteoarthritic general matrix was characterised by the presence of structurally distinct regions consisting of strongly aligned radial bundles of fibrils and associated intense tangles or ‘knotted’ features. Simple structural models were developed to explore possible transformation structures based on two different types of interconnectivity in the three-dimensional fibrillar network. These models support the hypothesis that the distinctive ultrastructural features of the osteoarthritic general matrix can develop as a consequence of largely passive degradative changes occurring in the fibrillar weave originally present in the normal matrix. This could, in principle, occur independently of any new structure that might develop as a consequence of any upregulation of collagen associated with the osteoarthritic process.
---
paper_title: Is classical consolidation theory applicable to articular cartilage deformation?
paper_content:
In this paper, classical consolidation theory has been used to investigate the time-dependent response of articular cartilage to static loading. An experimental technique was developed to measure simultaneously the matrix internal pressure and creep strain under conditions of one-dimensional consolidation. This is the first measurement of the internal stress state of loaded cartilage. It is demonstrated that under static compression the applied load is shared by the components of the matrix (i.e. water, the proteoglycans, and the collagen fibrillar meshwork), during which time a maximum hydrostatic excess pore pressure is developed as initial water exudation occurs. This pressure decays as water is further exuded from the matrix and effective consolidation begins with a progressive transfer of the applied stress from water to the collagen fibrils and proteoglycan gel. Consolidation is completed when the hydrostatic excess pore pressure is reduced to zero and the solid components sustain in full the applied load.
---
paper_title: A comparison of the size distribution of collagen fibrils in connective tissues as a function of age and a possible relation between fibril size distribution and mechanical properties
paper_content:
Data on the distribution of collagen fibril diameters in various connective tissues have been collected and analysed for common features. The diameter distributions of the collagen fibrils at birth and in the foetal stages of development are unimodal, whereas at maturity the mass-average diameter of the collagen fibrils is generally larger than at birth and the distributions of fibril sizes may be either unimodal or bimodal depending on the tissue. At senescence, few data are available but in most instances both the mean and mass-average diameters of the collagen fibrils are smaller than those at maturity and the fibril distributions are mainly bimodal. The division between tissues showing unimodal or bimodal fibril distributions at maturity does not simply relate to the type I collagen/type II collagen classification, to the distinction between orientated and unorientated material or indeed directly to the levels of stress and strain encountered by the tissue. However, there may prove to be a relation between a bimodal fibril diameter distribution at maturity and the maintenance over long periods of time of either high stress in stretched tissues or low stress in compressed tissues. It has also been noted that the width of the collagen fibril diameter distribution at birth differs between altricious and precocious animals. The ultimate tensile strength of a connective tissue and the mass-average diameter of the constituent collagen fibrils have been shown to have a positive correlation. Further, the form of the collagen fibril diameter distribution can be directly related to the mechanical properties of the tissue. In particular, it is postulated that the size distribution of the collagen fibrils is largely determined by two factors. First, if the tissue is primarily designed to have high tensile strength, then an increase in the diameter of the collagen fibrils will parallel an increase in the potential density of intrafibrillar covalent crosslinks. Consequently large collagen fibrils are predicted to have a greater tensile strength than small fibrils. Secondly, if the tissue is designed to be elastic and hence withstand creep, then a reduction in the diameter of the collagen fibrils will effectively increase the surface area per unit mass of the fibrils thus enhancing the probability of interfibrillar non-covalent crosslinks between the collagen fibrils and the components of the matrix. The idealized description given may indicate how the mechanical properties of a tissue may be interpreted in terms of the collagen fibril diameter distribution.
---
paper_title: Nanomechanics of collagen fibrils under varying cross-link densities: Atomistic and continuum studies
paper_content:
Collagen is a protein material with intriguing mechanical properties — it is highly elastic, shows large fracture strength and plays a crucial role in making Nature's structural materials tough. Collagen based tissues consist of collagen fibrils, each of which is composed out of a staggered array of ultra-long tropocollagen molecules extending to several hundred nanometers. Albeit the macroscopic properties of collagen based tissues have been studied extensively, less is known about the nanomechanical properties of tropocollagen molecules and collagen fibrils, their elementary building blocks. In particular, the relationship between molecular properties and tissue properties remains a scarcely explored aspect of the science of collagen materials. Results of molecular multi-scale modeling of the nanomechanical properties of the large-strain deformation regime of collagen fibrils under varying cross-link densities are reported in this paper. The results confirm the significance of cross-links in collagen fibrils in improving its mechanical strength. Further, it is found that cross-links influence the nature of its large-deformation and fracture behavior. Cross-link deficient collagen fibrils show a highly dissipative deformation behavior with large yield regimes. Increasing cross-link densities lead to stronger fibrils that display an increasingly brittle deformation character. The simulation results are compared with recent nanomechanical experiments at the scale of tropocollagen molecules and collagen fibrils.
---
paper_title: Depth-dependent confined compression modulus of full-thickness bovine articular cartilage.
paper_content:
The objective of this study was to determine the equilibrium confined compression modulus of bovine articular cartilage as it varies with depth from the articular surface. Osteochondral samples were compressed by 8, 16, 24, and 32% of the cartilage thickness and allowed to equilibrate. Intratissue displacement within the cartilage was measured with use of fluorescently labeled chondrocyte nuclei as intrinsic, fiducial markers. Axial strain was then calculated in nine sequential 125 μm thick cartilage layers comprising the superficial 1,125 μm and in a 250 μm thick layer of cartilage adjacent to the cartilage-bone interface. Adjacent osteochondral cores were also tested in confined compression to determine the equilibrium stresses required to achieve the same levels of compression. Stress-strain data for each layer of each sample were fit to a finite deformation stress-strain relation to determine the equilibrium confined compression modulus in each tissue layer. The compressive modulus increased significantly with depth from the articular surface and ranged from 0.079 ± 0.039 MPa in the superficial layer to 1.14 ± 0.44 MPa in the ninth layer. The deepest layer, 250 μm thick, had a modulus of 2.10 ± 2.69 MPa. These moduli were markedly different from the apparent “homogeneous” modulus for full-thickness cartilage (0.38 ± 0.12 MPa) and ranged from 21 to 560% of that value. The relatively low moduli and the compression-induced stiffening of the superficial layers suggest that these layers greatly affect the biomechanical behavior of cartilage, such as during confined compression testing. The delineation of the depth-dependent modulus provides a basis for detailed study of the relationship between the composition, structure, and function of cartilage in such processes as aging, repair, and degeneration.
---
paper_title: The three-dimensional 'knit' of collagen fibrils in articular cartilage.
paper_content:
TEM stereoscopy of thick sections has been used to reconstruct the 3-dimensional relationships of collagen fibrils in the general matrix of articular cartilage in its relaxed and deformed states. As well as identifying a variety of fibril interactions involving direct physical entwinement which are assumed to provide matrix cohesion the study also highlights the functional importance of the repeatedly kinked morphology exhibited by the radial fibrils. It is suggested that these fibril kinks, in accommodating local compressive strains that approach 100%, function as macro-molecular hinges and permit the collagen elements to undergo large spatial rearrangement without risk to their structural integrity.
---
paper_title: Stress-sharing between the fluid and solid components of articular cartilage under varying rates of compression
paper_content:
This paper investigates the factors affecting the mechanical behavior of the articular matrix with special emphasis on the effect of compressive strain-rate on the short- and long-term responses of the fluid and the solid components. The relationships expressed in the general theory of one-dimensional consolidation are generalized to account for strain-rate in the deformation process, with the result that the stiffness due to the fluid and the solid components, and a parameter representing the degree of drag, can be calculated explicitly.
---
paper_title: Collagen-collagen versus collagen-proteoglycan interactions in the determination of cartilage strength
paper_content:
For articular cartilage to function as a stress-reducing layer in the joint, it must both deform to an appropriate level to achieve load-spreading as well as remain structurally coherent. Combined micromechanical and enzymatic studies of cartilage have demonstrated that the bulk of the extractable proteoglycans, while essential to the maintenance of compressive stiffness, contribute little to its cohesive strength. The study reported here clarifies fundamental aspects of the relationship between matrix components and the biomechanical function of cartilage.
---
paper_title: Deformation of loaded articular cartilage prepared for scanning electron microscopy with rapid freezing and freeze-substitution fixation
paper_content:
To investigate the effect of joint loading on collagen fibers in articular cartilage, 45 knees of adult rabbits were examined by scanning electron microscopy. The knees were loaded at the patella with a simulated “quadriceps force” of 0.5-4 times body weight for 0.5 or 25 minutes, plunge-frozen, and fixed by freeze-substitution with aldehydes. Six knees were loaded for 3 hours and then fixed conventionally. Fixed tibial plateaus were examined and then freeze-fractured through the area of tibiofemoral contact, dried, coated, and examined by scanning electron microscopy to assess the overall deformation of the tibial articular surface and matrix collagen fibers. With tissue prepared by conventional fixation used as a standard, the quality of fixation was graded by light and transmission electron microscopy of patellar cartilage taken from half of the freeze-fixed knees. In loaded specimens, an indentation was present where the femur contacted the tibial plateau. The diameter and apparent depth of the dent were proportional to the magnitude and duration of the load; no dent was seen in the controls. The thickness of the cartilage at the center of the indentation was reduced 15-80%. Meniscectomy always produced larger deformations in otherwise equivalent conditions. Ice-crystal damage to cells was evident by transmission electron microscopy and scanning electron microscopy, but at magnifications as high as ×30,000 the collagen fibrils prepared by freeze-substitution and conventional aqueous methods were identical. In loaded regions, the collagen matrix of the tibial cartilage was deformed in two ways: (a) radial collagen fibers exhibited a periodic crimp, and (b) in regions where an indentation was created by the femoral condyle, the radial fibers were bent, in effect creating a tangential zone where none had existed before. The radial fibers apparently are loaded axially and buckle under normal loads.
---
paper_title: Structure-Function Relationships in Enzymatically Modified Articular Cartilage
paper_content:
The present study is aimed at revealing structure-function relationships of bovine patellar articular cartilage. Collagenase, chondroitinase ABC and elastase were used for controlled and selective enzymatic modifications of cartilage structure, composition and functional properties. The effects of the enzymatic degradations were quantitatively evaluated using quantitative polarized light microscopy, digital densitometry of safranin O-stained sections as well as with biochemical and biomechanical techniques. The parameters related to tissue composition and structure were correlated with the indentation stiffness of cartilage. In general, tissue alterations after enzymatic digestions were restricted to the superficial cartilage. All enzymatic degradations induced superficial proteoglycan (PG) depletion. Collagenase also induced detectable superficial collagen damage, though without causing cartilage fibrillation or tissue swelling. Quantitative microscopic techniques were more sensitive than biochemical methods in detecting these changes. The Young's modulus of cartilage decreased after enzymatic treatments indicating significant softening of the tissue. The PG concentration of the superficial zone proved to be the major determinant of the Young's modulus (r(2) = 0.767, n = 72, p < 0.001). Results of the present study indicate that specific enzymatic degradations of the tissue PGs and collagen can provide reproducible experimental models to clarify the structure-function relationships of cartilage. Effects of these models mimic the changes observed in early osteoarthrosis. Biomechanical testing and quantitative microscopic techniques proved to be powerful tools for detecting the superficial structural and compositional changes while the biochemical measurements on the whole uncalcified cartilage were less sensitive.
---
paper_title: Glycosaminoglycan network geometry may contribute to anisotropic hydraulic permeability in cartilage under compression.
paper_content:
Resistance to fluid flow within cartilage extracellular matrix is provided primarily by a dense network of rod-like glycosaminoglycans (GAGs). If the geometrical organization of this network is random, the hydraulic permeability tensor of cartilage is expected to be isotropic. However, experimental data have suggested that hydraulic permeability may become anisotropic when the matrix is mechanically compressed, contributing to cartilage biomechanical functions such as lubrication. We hypothesized that this may be due to preferred GAG rod orientations and directionally-dependent reduction of inter-GAG spacings which reflect molecular responses to tissue deformations. To examine this hypothesis, we developed a model for effects of compression which allows the GAG rod network to deform consistently with tissue-scale deformations but while still respecting limitations imposed by molecular structure. This network deformation model was combined with a perturbation analysis of a classical analytical model for hydraulic permeability based on molecular structure. Finite element analyses were undertaken to ensure that this approach exhibited results similar to those emerging from more exact calculations. Model predictions for effects of uniaxial confined compression on the hydraulic permeability tensor were consistent with previous experimental results. Permeability decreased more rapidly in the direction perpendicular to compression than in the parallel direction, for matrix solid volume fractions associated with fluid transport in articular cartilage. GAG network deformations may therefore introduce anisotropy to the permeability (and other GAG-associated matrix properties) as physiological compression is applied, and play an important role in cartilage lubrication and other biomechanical functions.
---
paper_title: Experimental Verification and Theoretical Prediction of Cartilage Interstitial Fluid Pressurization At an Impermeable Contact Interface in Confined Compression
paper_content:
Interstitial fluid pressurization has long been hypothesized to play a fundamental role in the load support mechanism and frictional response of articular cartilage. However, to date, few experimental studies have been performed to verify this hypothesis from direct measurements. The first objective of this study was to investigate experimentally the hypothesis that cartilage interstitial fluid pressurization does support the great majority of the applied load, in the testing configurations of confined compression creep and stress relaxation. The second objective was to investigate the hypothesis that the experimentally observed interstitial fluid pressurization could also be predicted using the linear biphasic theory of Mow et al. (J. Biomech. Engng ASME, 102, 73-84, 1980). Fourteen bovine cartilage samples were tested in a confined compression chamber fitted with a microchip piezoresistive transducer to measure interstitial fluid pressure, while simultaneously measuring (during stress relaxation) or prescribing (during creep) the total stress. It was found that interstitial fluid pressure supported more than 90% of the total stress for durations as long as 725 +/- 248 s during stress relaxation (mean +/- S.D., n = 7), and 404 +/- 229 s during creep (n = 7). When comparing experimental measurements of the time-varying interstitial fluid pressure against predictions from the linear biphasic theory, nonlinear coefficients of determination r2 = 0.871 +/- 0.086 (stress relaxation) and r2 = 0.941 +/- 0.061 (creep) were found. The results of this study provide some of the most direct evidence to date that interstitial fluid pressurization plays a fundamental role in cartilage mechanics; they also indicate that the mechanism of fluid load support in cartilage can be properly predicted from theory.
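A useful back-of-the-envelope check, under assumed but typical property values, is the characteristic consolidation (gel diffusion) time tau = h^2 / (HA * k) of a biphasic layer in confined compression, which comes out at several hundred seconds and is consistent with the reported durations of high fluid load support.

```python
# Back-of-the-envelope sketch: characteristic consolidation time of a biphasic
# layer in confined compression, tau = h^2 / (HA * k). Values are illustrative,
# not taken from the paper.
h = 1.0e-3        # cartilage thickness [m]
HA = 0.5e6        # aggregate modulus [Pa]
k = 3.0e-15       # hydraulic permeability [m^2/(Pa s)]
tau = h**2 / (HA * k)
print(f"characteristic gel-diffusion time tau ~ {tau:.0f} s ({tau/60:.1f} min)")
```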
---
paper_title: Effects of static axial strain on the tensile properties and failure mechanisms of self‐assembled collagen fibers
paper_content:
Collagen fibers form the structural units of connective tissue throughout the body, transmitting force, maintaining shape, and providing a scaffold for cells. Our laboratory has studied collagen self-assembly since the 1970s. In this study, collagen fibers were self-assembled from molecular collagen solutions and then stretched to enhance alignment. Fibers were tested in uniaxial tension to study the mechanical properties and failure mechanisms. Results reported suggest that axial orientation of collagen fibrils can be achieved by stretching uncrosslinked collagen fibers. Stretching by about 30% not only results in decreased diameter and increased tensile strength but also leads to unusual failure mechanisms that inhibit crack propagation across the fiber. It is proposed that stretching serves to generate oriented fibrillar substructure in self-assembled collagen fibers. © 1997 John Wiley & Sons, Inc. J Appl Polym Sci 63: 1429–1440, 1997
---
paper_title: Direct Measurement of Glycosaminoglycan Intermolecular Interactions via High-Resolution Force Spectroscopy
paper_content:
Intermolecular repulsion forces between negatively charged glycosaminoglycan (CS−GAG) macromolecules are a major determinant of cartilage biomechanical properties. It is thought that the electrostatic component of the total intermolecular interaction is responsible for 50−75% of the equilibrium elastic modulus of cartilage in compression, while other forces (e.g., steric, hydration, van der Waals, etc.) may also play a role. To investigate these forces, radiolabeled CS−GAG polymer chains, with a fully extended contour length of 35 nm, were chemically end-grafted to a planar surface to form model biomimetic polyelectrolyte “brush” layers whose environment (e.g., ionic strength, pH) was varied to mimic physiological conditions. The total intersurface force (≤nN) between the CS−GAG brushes and chemically modified probe tips (SO3- and OH) was measured as a function of tip−substrate separation distance in aqueous solution using the technique of high-resolution force spectroscopy (HRFS). These experiments showe...
---
paper_title: The Role of Interstitial Fluid Pressurization and Surface Porosities on the Boundary Friction of Articular Cartilage
paper_content:
Articular cartilage is the remarkable bearing material of diarthrodial joints. Experimental measurements of its friction coefficient under various configurations have demonstrated that it is load-dependent, velocity-dependent, and time-dependent, and it can vary from values as low as 0.002 to as high as 0.3 or greater. Yet, many studies have suggested that these frictional properties are not dependent upon the viscosity of synovial fluid. In this paper, a theoretical formulation of a boundary friction model for articular cartilage is described and verified directly against experimental results in the configuration of confined compression stress-relaxation. The mathematical formulation of the friction model can potentially explain many of the experimentally observed frictional responses in relation to the pressurization of the interstitial fluid inside cartilage during joint loading, and the equilibrium friction coefficient which prevails in the absence of such pressurization. In this proposed model, it is also hypothesized that surface porosities play a role in the regulation of the frictional response of cartilage. The good agreement between theoretical predictions and experimental results of this study provides support for the proposed boundary friction formulation.
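The relation at the core of this class of boundary-friction models is often quoted in the reduced form mu_eff = mu_eq * (1 - (1 - phi) * Wp/W), with phi the fraction of the contact area carrying solid-to-solid contact and Wp/W the interstitial fluid load support; the sketch below evaluates this form with illustrative parameters, and the exact expression should be checked against the paper.

```python
# Reduced boundary-friction relation (commonly cited form, to be checked
# against the paper): effective friction drops in proportion to the
# interstitial fluid load support. Parameter values are illustrative.
def mu_eff(mu_eq, phi, fluid_load_support):
    return mu_eq * (1.0 - (1.0 - phi) * fluid_load_support)

mu_eq, phi = 0.28, 0.02            # illustrative values
for Wp_over_W in (0.9, 0.5, 0.1, 0.0):
    print(f"fluid load support {Wp_over_W:.1f} -> "
          f"mu_eff = {mu_eff(mu_eq, phi, Wp_over_W):.3f}")
```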
---
paper_title: Effect of dynamic loading on the frictional response of bovine articular cartilage.
paper_content:
The objective of this study was to test the hypotheses that (1) the steady-state friction coefficient of articular cartilage is significantly smaller under cyclical compressive loading than the equilibrium friction coefficient under static loading, and decreases as a function of loading frequency; (2) the steady-state cartilage interstitial fluid load support remains significantly greater than zero under cyclical compressive loading and increases as a function of loading frequency. Unconfined compression tests with sliding of bovine shoulder cartilage against glass in saline were carried out on fresh cylindrical plugs (n=12), under three sinusoidal loading frequencies (0.05, 0.5 and 1 Hz) and under static loading; the time-dependent friction coefficient mu(eff) was measured. The interstitial fluid load support was also predicted theoretically. Under static loading mu(eff) increased from a minimum value (mu(min)=0.005+/-0.003) to an equilibrium value (mu(eq)=0.153+/-0.032). In cyclical compressive loading tests mu(eff) similarly rose from a minimum value (mu(min)=0.004+/-0.002, 0.003+/-0.001 and 0.003+/-0.001 at 0.05, 0.5 and 1 Hz) and reached a steady-state response oscillating between a lower-bound (mu(lb)=0.092+/-0.016, 0.083+/-0.019 and 0.084+/-0.020) and upper bound (mu(ub)=0.382+/-0.057, 0.358+/-0.059, and 0.298+/-0.061). For all frequencies it was found that mu(ub)>mu(eq) and mu(lb)<mu(eq)(p<0.05). Under cyclical compressive loading the interstitial fluid load support was found to oscillate above and below the static loading response, with suction occurring over a portion of the loading cycle at steady-state conditions. All theoretical predictions and most experimental results demonstrated little sensitivity to loading frequency. On the basis of these results, both hypotheses were rejected. Cyclical compressive loading is not found to promote lower frictional coefficients or higher interstitial fluid load support than static loading.
---
paper_title: Relationship Between Mechanical Properties and Collagen Structure of Closed and Open Wounds
paper_content:
Mechanical properties and collagen structure of excisional wounds left open are compared with wounds closed by clips. In both wound models, collagen fiber diameter increases with time post-wounding and is related to tensile strength. Clipped wounds show a higher ultimate tensile strength and tangent modulus compared with open wounds. In clipped wounds, newly deposited collagen appears as a biaxially oriented network as observed in normal skin. In open wounds a delay in the organization of the collagen network is observed and parallel wavy-shaped ribbons of collagen fibers are deposited. At long term, the high extensibility observed in open wounds may be due to the sliding of ribbons of collagen fibers past each other.
---
paper_title: Experimental verification of the role of interstitial fluid pressurization in cartilage lubrication
paper_content:
The objective of the current study was to measure the friction coefficient simultaneously with the interstitial fluid load support in bovine articular cartilage, while sliding against glass under a constant load. Ten visually normal 6-mm-diameter cartilage plugs harvested from the humeral head of four bovine shoulder joints (ages 2-4 months) were tested in a custom friction device under reciprocating linear motion (range of translation +/-2 mm; sliding velocity 1 mm/s), subjected to a 4.5 N constant load. The frictional coefficient was found to increase with time from a minimum value of mu min=0.010+/-0.007 (mean+/-SD) to a maximum value of 0.243+/-0.044 over a duration ranging from 920 to 19,870 s (median: 4,560 s). The corresponding interstitial fluid load support decreased from a maximum of 88.8+/-3.8% to 8.7+/-8.6%. A linear correlation was observed between the frictional coefficient and interstitial fluid load support (r2=0.96+/-0.03). These results support the hypothesis that the temporal variation of the frictional coefficient correlates negatively with the interstitial fluid load support and that consequently interstitial fluid load support is a primary mechanism regulating the frictional response in articular cartilage. Fitting the experimental data to a previously proposed biphasic boundary lubrication model for cartilage yielded an equilibrium friction coefficient of mu eq=0.284+/-0.044. The fraction of the apparent contact area over which the solid cartilage matrix was in contact with the glass slide was predicted at phi s=1.7+/-6.3%, significantly smaller than the solid volume fraction of the tissue, phi s=13.8+/-1.8%. The model predictions suggest that mixed lubrication prevailed at the contact interface under the loading conditions employed in this study.
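The biphasic boundary lubrication model that the authors fit to these data is commonly written as mu_eff = mu_eq * [1 - (1 - phi_s) * (W_p / W)], where W_p/W is the interstitial fluid load support and phi_s the fraction of the apparent contact area in solid-to-solid contact; that exact form is assumed here. A minimal Python sketch, reusing the fitted values quoted in the abstract purely as illustration:

```python
def effective_friction(mu_eq, phi_s, fluid_load_support):
    """Effective friction coefficient of a biphasic boundary-lubrication contact.

    mu_eq              : equilibrium friction coefficient (no fluid pressurization)
    phi_s              : fraction of apparent contact area in solid-to-solid contact
    fluid_load_support : W_p / W, fraction of the total load carried by interstitial fluid
    """
    return mu_eq * (1.0 - (1.0 - phi_s) * fluid_load_support)

# Illustrative inputs based on the fitted values above: mu_eq ~ 0.284, phi_s ~ 0.017
print(effective_friction(0.284, 0.017, 0.888))  # high pressurization, early in the test (~0.04)
print(effective_friction(0.284, 0.017, 0.087))  # pressurization nearly dissipated (~0.26)
```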
---
paper_title: Viscoelastic properties of proteoglycan subunits and aggregates in varying solution concentrations.
paper_content:
Using a cone-on-plate mechanical spectrometer, we have measured the linear and non-linear rheological properties of cartilage proteoglycan solutions at concentrations similar to those found in situ. Solutions of bovine nasal cartilage proteoglycan subunits (22S) and aggregates (79S) were studied at concentrations ranging from 10 to 50 mg ml^-1. We determined: (1) the complex viscoelastic shear modulus G*(omega) under small amplitude (0.02 radians) oscillatory excitation at frequencies (omega) ranging from 1.0 to 20.0 Hz, (2) the non-linear shear rate (gamma) dependent apparent viscosity eta_app(gamma) in continuous shear, and (3) the non-linear shear rate dependent primary normal stress difference sigma_1(gamma) in continuous shear. Both the apparent viscosity and normal stress difference were measured over four decades of shear rates ranging from 0.25 to 250 s^-1. Analysis of the experimental results was performed using a variety of materially objective non-linear viscoelastic constitutive laws. We found that the non-linear, four-coefficient Oldroyd rate-type model was most effective for describing the measured flow characteristics of proteoglycan subunit and aggregate solutions. Values of the relaxation time lambda_1, retardation time lambda_2, zero shear viscosity eta_0, and nonlinear viscosity parameter mu_0 were computed for the aggregate and subunit solutions at all of the solute concentrations used. The four independent material coefficients showed marked dependence on the two different molecular conformations, i.e. aggregate or subunit, of proteoglycans in solution.
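In the small-amplitude limit, rate-type models of this family reduce to a linear Jeffreys element characterized by lambda_1, lambda_2 and eta_0 (the nonlinear parameter mu_0 drops out), for which the complex viscosity is eta*(omega) = eta_0 (1 + i omega lambda_2) / (1 + i omega lambda_1). A hedged Python sketch of the resulting storage and loss moduli, with made-up parameter values:

```python
import numpy as np

def jeffreys_moduli(omega, eta0, lam1, lam2):
    """Storage and loss moduli of a linear Jeffreys fluid:
    G*(omega) = i*omega*eta*(omega), with eta*(omega) = eta0*(1 + i*omega*lam2)/(1 + i*omega*lam1)."""
    eta_star = eta0 * (1.0 + 1j * omega * lam2) / (1.0 + 1j * omega * lam1)
    g_star = 1j * omega * eta_star
    return g_star.real, g_star.imag  # G' (storage), G'' (loss)

omega = 2.0 * np.pi * np.linspace(1.0, 20.0, 5)                            # test frequencies, rad/s
G_storage, G_loss = jeffreys_moduli(omega, eta0=5.0, lam1=0.1, lam2=0.02)  # illustrative values only
```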
---
paper_title: Ultrastructural evidence for fibril-to-fibril associations in articular cartilage and their functional implication.
paper_content:
This study presents ultrastructural evidence for the presence of a variety of fibril-to-fibril interactions or associations in the architecture of the general matrix of articular cartilage. These interactions are believed to serve a higher purpose of repeatedly constraining an overall radial arrangement of fibrils into an array of oblique interconnecting segments thus creating a three dimensional meshwork within which the hydrated ground substance is constrained. It is argued that any reduction in these interfibrillar interactions will allow the oblique fibril segments to revert to a low energy radial configuration, thus explaining the presence of such arrays prominent in various degenerate forms of articular cartilage.
---
paper_title: On the ultrastructure of softened cartilage: a possible model for structural transformation
paper_content:
The fibrillar architecture in the general matrix of softened cartilage has been compared with that of the normal matrix using both Nomarski light microscopy and transmission electron microscopy with combined stereoscopic reconstruction. A pseudorandom network developed from an overall radial arrangement of collagen fibrils is the most fundamental ultrastructural characteristic of the normal general matrix. This, in turn, provides an efficient entrapment system for the swelling proteoglycans. Conversely, the most distinctive feature of the softened matrix is the presence of parallel and relatively unentwined fibrils, strongly aligned in the radial direction. The presence of an optically resolvable fibrous texture in the softened cartilage matrix indicates the presence of discrete bundles of closely packed and aligned fibrils at the ultrastructural level of organisation. The general absence of such texture in the normal cartilage general matrix is consistent with the much greater degree of interconnectedness and related short-range obliquity in the fibrillar architecture, hence the importance of the term pseudorandom network. A mechanism of structural transformation is proposed based on the important property of lateral interconnectivity in the fibrils which involves both entwinement and nonentwinement based interactions. The previously reported difference in intrinsic mechanical strength between the normal and softened matrices is consistent with the transformation model proposed in this study.
---
paper_title: Assembly of type I collagen: fusion of fibril subunits and the influence of fibril diameter on mechanical properties.
paper_content:
Abstract Structural stability of the extracellular matrix is primarily a consequence of fibrillar collagen and the extent of cross-linking. The relationship between collagen self-assembly, consequent fibrillar shape and mechanical properties remains unclear. Our laboratory developed a model system for the preparation of self-assembled type I collagen fibers with fibrillar substructure mimicking the hierarchical structures of tendon. The present study evaluates the effects of pH and temperature during self-assembly on fibrillar structure, and relates the structural effects of these treatments on the uniaxial tensile mechanical properties of self-assembled collagen fibers. Results of the analysis of fibril diameter distributions and mechanical properties of the fibers formed under the different incubation conditions indicate that fibril diameters grow via the lateral fusion of discrete ∼4 nm subunits, and that fibril diameter correlates positively with the low strain modulus. Fibril diameter did not correlate with either the ultimate tensile strength or the high strain elastic modulus, which suggests that lateral aggregation and consequently fibril diameter influences mechanical properties during small strain mechanical deformation. We hypothesize that self-assembly is mediated by the formation of fibrillar subunits that laterally and linearly fuse resulting in fibrillar growth. Lateral fusion appears important in generating resistance to deformation at low strain, while linear fusion leading to longer fibrils appears important in the ultimate mechanical properties at high strain.
---
paper_title: Fibril reinforced poroelastic model predicts specifically mechanical behavior of normal, proteoglycan depleted and collagen degraded articular cartilage
paper_content:
Abstract Degradation of collagen network and proteoglycan (PG) macromolecules are signs of articular cartilage degeneration. These changes impair cartilage mechanical function. Effects of collagen degradation and PG depletion on the time-dependent mechanical behavior of cartilage are different. In this study, numerical analyses, which take the compression-tension nonlinearity of the tissue into account, were carried out using a fibril reinforced poroelastic finite element model. The study aimed at improving our understanding of the stress-relaxation behavior of normal and degenerated cartilage in unconfined compression. PG and collagen degradations were simulated by decreasing the Young's modulus of the drained porous (nonfibrillar) matrix and the fibril network, respectively. Numerical analyses were compared to results from experimental tests with chondroitinase ABC (PG depletion) or collagenase (collagen degradation) digested samples. Fibril reinforced poroelastic model predicted the experimental behavior of cartilage after chondroitinase ABC digestion by a major decrease of the drained porous matrix modulus (−64±28%) and a minor decrease of the fibril network modulus (−11±9%). After collagenase digestion, in contrast, the numerical analyses predicted the experimental behavior of cartilage by a major decrease of the fibril network modulus (−69±5%) and a decrease of the drained porous matrix modulus (−44±18%). The reduction of the drained porous matrix modulus after collagenase digestion was consistent with the microscopically observed secondary PG loss from the tissue. The present results indicate that the fibril reinforced poroelastic model is able to predict specifically characteristic alterations in the stress-relaxation behavior of cartilage after enzymatic modifications of the tissue. We conclude that the compression-tension nonlinearity of the tissue is needed to capture realistically the mechanical behavior of normal and degenerated articular cartilage.
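The compression-tension nonlinearity invoked here is usually implemented by letting the fibril network carry stress only in tension, superposed on an isotropic drained matrix and the interstitial fluid pressure. The one-dimensional Python sketch below illustrates that stress split; the strain-dependent fibril modulus and all numbers are illustrative assumptions, not the authors' calibrated values:

```python
def total_axial_stress(strain, fluid_pressure, E_matrix, E_f0, E_f_eps):
    """One-dimensional sketch of a fibril-reinforced poroelastic material point.

    Non-fibrillar (drained) matrix: linear elastic with modulus E_matrix (Pa).
    Fibril network: carries load only in tension, with a strain-dependent modulus
        E_f(eps) = E_f0 + E_f_eps * eps   (a form often used in this model family).
    fluid_pressure: interstitial fluid pressure (Pa), subtracted from the effective stress.
    """
    sigma_matrix = E_matrix * strain
    sigma_fibril = (E_f0 + E_f_eps * strain) * strain if strain > 0.0 else 0.0
    return sigma_matrix + sigma_fibril - fluid_pressure

# PG depletion is mimicked by lowering E_matrix, collagen degradation by lowering the fibril moduli
print(total_axial_stress( 0.05, 0.0, E_matrix=0.5e6, E_f0=5e6, E_f_eps=500e6))  # tension: fibrils engaged
print(total_axial_stress(-0.05, 0.0, E_matrix=0.5e6, E_f0=5e6, E_f_eps=500e6))  # compression: matrix only
```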
---
paper_title: Structural and Compositional Changes in Peri- and Extracellular Matrix of Osteoarthritic Cartilage Modulate Chondrocyte Morphology
paper_content:
It has not been shown how specific changes in composition and structure in the peri- and extracellular matrix (PCM and ECM) of human articular cartilage modulate cell morphology in different stages of osteoarthritis (OA). In the present study, cell morphology in the superficial tissue of normal, early OA and advanced OA human cartilage samples were measured from histological sections. Collagen and proteoglycan contents in the vicinity of chondrocytes were analyzed using Fourier transform infrared spectroscopy and digital densitometry. Determinants of the experimentally observed morphological changes of cells were studied using finite element analysis (FEA). From normal tissue to early OA, cell aspect ratio (height/width) remained constant (0.69 ± 0.11 and 0.69 ± 0.09, respectively). In advanced OA, cells became significantly (p < 0.05) more rounded (aspect ratio of 0.83 ± 0.13). Normalized collagen content in the PCM, i.e. collagen content in the PCM with respect to that in the ECM, was reduced significantly (p < 0.05) only in advanced OA. FEA indicated that reduced proteoglycan content and increased collagen fibrillation in the PCM and ECM, as well as reduced collagen content only in the PCM, primarily explained experimentally found changes in cell aspect ratio. Our results suggest that changes in composition and structure of the PCM and ECM in the superficial tissue of human articular cartilage modulate cell morphology differently in early and advanced OA.
---
paper_title: Importance of collagen orientation and depth-dependent fixed charge densities of cartilage on mechanical behavior of chondrocytes.
paper_content:
The collagen network and proteoglycan matrix of articular cartilage are thought to play an important role in controlling the stresses and strains in and around chondrocytes, in regulating the biosynthesis of the solid matrix, and consequently in maintaining the health of diarthrodial joints. Understanding the detailed effects of the mechanical environment of chondrocytes on cell behavior is therefore essential for the study of the development, adaptation, and degeneration of articular cartilage. Recent progress in macroscopic models has improved our understanding of depth-dependent properties of cartilage. However, none of the previous works considered the effect of realistic collagen orientation or depth-dependent negative charges in microscopic models of chondrocyte mechanics. The aim of this study was to investigate the effects of the collagen network and fixed charge densities of cartilage on the mechanical environment of the chondrocytes in a depth-dependent manner. We developed an anisotropic, inhomogeneous, microstructural fibril-reinforced finite element model of articular cartilage for application in unconfined compression. The model consisted of the extracellular matrix and chondrocytes located in the superficial, middle, and deep zones. Chondrocytes were surrounded by a pericellular matrix and were assumed spherical prior to tissue swelling and load application. Material properties of the chondrocytes, pericellular matrix, and extracellular matrix were obtained from the literature. The loading protocol included a free swelling step followed by a stress-relaxation step. Results from traditional isotropic and transversely isotropic biphasic models were used for comparison with predictions from the current model. In the superficial zone, cell shapes changed from rounded to elliptic after free swelling. The stresses and strains as well as fluid flow in cells were greatly affected by the modulus of the collagen network. The fixed charge density of the chondrocytes, pericellular matrix, and extracellular matrix primarily affected the aspect ratios (height/ width) and the solid matrix stresses of cells. The mechanical responses of the cells were strongly location and time dependent. The current model highlights that the collagen orientation and the depth-dependent negative fixed charge densities of articular cartilage have a great effect in modulating the mechanical environment in the vicinity of chondrocytes, and it provides an important improvement over earlier models in describing the possible pathways from loading of articular cartilage to the mechanical and biological responses of chondrocytes.
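Two of the depth-dependent inputs discussed here — the arcade-like collagen orientation (tangential at the surface, radial in the deep zone) and a fixed charge density that increases with depth — can be parameterized very simply. The functional forms and numbers below are illustrative only, not taken from the paper:

```python
import numpy as np

def collagen_angle(z):
    """Fibril angle (degrees) w.r.t. the articular surface at normalized depth z in [0, 1]:
    ~0 deg (tangential) at the surface, ~90 deg (radial) near the subchondral bone."""
    return 90.0 * np.clip(z, 0.0, 1.0) ** 0.7       # illustrative arcade-like transition

def fixed_charge_density(z, c_surface=0.1, c_deep=0.2):
    """Fixed charge density (mEq/ml) increasing linearly with depth (illustrative)."""
    return c_surface + (c_deep - c_surface) * z

z = np.linspace(0.0, 1.0, 5)
print(collagen_angle(z), fixed_charge_density(z))
```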
---
paper_title: Composition of the pericellular matrix modulates the deformation behaviour of chondrocytes in articular cartilage under static loading
paper_content:
The aim was to assess the role of the composition changes in the pericellular matrix (PCM) for the chondrocyte deformation. For that, a three-dimensional finite element model with depth-dependent collagen density, fluid fraction, fixed charge density and collagen architecture, including parallel planes representing the split-lines, was created to model the extracellular matrix (ECM). The PCM was constructed similarly as the ECM, but the collagen fibrils were oriented parallel to the chondrocyte surfaces. The chondrocytes were modelled as poroelastic with swelling properties. Deformation behaviour of the cells was studied under 15% static compression. Due to the depth-dependent structure and composition of cartilage, axial cell strains were highly depth-dependent. An increase in the collagen content and fluid fraction in the PCMs increased the lateral cell strains, while an increase in the fixed charge density induced an inverse behaviour. Axial cell strains were only slightly affected by the changes in PCM composition. We conclude that the PCM composition plays a significant role in the deformation behaviour of chondrocytes, possibly modulating cartilage development, adaptation and degeneration. The development of cartilage repair materials could benefit from this information.
---
paper_title: The mechanical environment of the chondrocyte: a biphasic finite element model of cell-matrix interactions in articular cartilage.
paper_content:
Abstract Mechanical compression of the cartilage extracellular matrix has a significant effect on the metabolic activity of the chondrocytes. However, the relationship between the stress–strain and fluid-flow fields at the macroscopic “tissue” level and those at the microscopic “cellular” level are not fully understood. Based on the existing experimental data on the deformation behavior and biomechanical properties of articular cartilage and chondrocytes, a multi-scale biphasic finite element model was developed of the chondrocyte as a spheroidal inclusion embedded within the extracellular matrix of a cartilage explant. The mechanical environment at the cellular level was found to be time-varying and inhomogeneous, and the large difference (∼3 orders of magnitude) in the elastic properties of the chondrocyte and those of the extracellular matrix results in stress concentrations at the cell–matrix border and a nearly two-fold increase in strain and dilatation (volume change) at the cellular level, as compared to the macroscopic level. The presence of a narrow “pericellular matrix” with different properties than that of the chondrocyte or extracellular matrix significantly altered the principal stress and strain magnitudes within the chondrocyte, suggesting a functional biomechanical role for the pericellular matrix. These findings suggest that even under simple compressive loading conditions, chondrocytes are subjected to a complex local mechanical environment consisting of tension, compression, shear, and fluid pressure. Knowledge of the local stress and strain fields in the extracellular matrix is an important step in the interpretation of studies of mechanical signal transduction in cartilage explant culture models.
---
paper_title: Depth-dependent analysis of the role of collagen fibrils, fixed charges and fluid in the pericellular matrix of articular cartilage on chondrocyte mechanics.
paper_content:
Abstract The pericellular matrix of articular cartilage has been shown to regulate the mechanical environment of chondrocytes. However, little is known about the mechanical role of collagen fibrils in the pericellular matrix, and how fibrils might help modulate strains acting on chondrocytes when cartilage is loaded. The primary objective was to clarify the effect of pericellular collagen fibrils on cell volume changes and strains during cartilage loading. Secondary objectives were to investigate the effects of pericellular fixed charges and fluid on cell responses. A microstructural model of articular cartilage, in which chondrocytes and pericellular matrices were represented with depth-dependent structural and morphological properties, was created. The extracellular matrix and pericellular matrices were modeled as fibril-reinforced, biphasic materials with swelling capabilities, while chondrocytes were assumed to be isotropic and biphasic with swelling properties. Collagen fibrils in the extracellular matrix were represented with an arcade-like architecture, whereas pericellular fibrils were assumed to run tangential to the cell surface. In the early stages of a stress–relaxation test, pericellular fibrils were found to sensitively affect cell volume changes, even producing a reversal from increasing to decreasing cell volume with increasing fibril stiffness in the superficial zone. Consequently, steady-state volume of the superficial zone cell decreased with increasing pericellular fibril stiffness. Volume changes in the middle and deep zone chondrocytes were smaller and opposite to those observed in the superficial zone chondrocyte. An increase in the pericellular fixed charge density reduced cell volumes substantially in every zone. The sensitivity of cell volume changes to pericellular fibril stiffness suggests that pericellular fibrils play an important, and as of yet largely neglected, role in regulating the mechanical environment of chondrocytes, possibly affecting matrix synthesis during cartilage development and degeneration, and affecting biosynthetic responses associated with articular cartilage loading.
---
paper_title: A Study of the Structural Response of Wet Hyaline Cartilage to Various Loading Situations
paper_content:
A direct view has been obtained of the manner in which the fibrous components and chondrocytes in hyaline cartilage respond to the application of uniaxial tensile loading and plane-strain compressive loading. A micro-mechanical testing device has been developed which inserts directly into the stage of a high-resolution optical microscope fitted with Nomarski interference contrast and this has permitted simultaneous morphological and mechanical observations to be conducted on articular cartilage maintained in its wet functional condition. Aligned and crimped fibrous arrays surround the deeper chondrocytes and can be observed to undergo well-defined geometric changes with applied stress. It is thought that these arrays may act as displacement or strain sensors transmitting mechanical information from the bulk matrix to their associated cells thus inducing a specific metabolic response. The process of tissue recovery following sustained high levels of compressive loading can also be observed with this experimen...
---
paper_title: The Effect of Matrix Tension-Compression Nonlinearity and Fixed Negative Charges on Chondrocyte Responses in Cartilage
paper_content:
Thorough analyses of the mechano-electrochemical interaction between articular cartilage matrix and the chondrocytes are crucial to understanding of the signal transduction mechanisms that modulate the cell metabolic activities and biosynthesis. Attempts have been made to model the chondrocytes embedded in the collagen-proteoglycan extracellular matrix to determine the distribution of local stress-strain field, fluid pressure and the time-dependent deformation of the cell. To date, these models still have not taken into account a remarkable characteristic of the cartilage extracellular matrix given rise from organization of the collagen fiber architecture, now known as the tension-compression nonlinearity (TCN) of the tissue, as well as the effect of negative charges attached to the proteoglycan molecules, and the cell cytoskeleton that interacts with mobile ions in the interstitial fluid to create osmotic and electro-kinetic events in and around the cells. In this study, we proposed a triphasic, multi-scale, finite element model incorporating the Conewise Linear Elasticity that can describe the various known coupled mechanical, electrical and chemical events, while at the same time representing the TCN of the extracellular matrix. The model was employed to perform a detailed analysis of the chondrocytes' deformational and volume responses, and to quantitatively describe the mechano-electrochemical environment of these cells. Such a model describes contributions of the known detailed micro-structural and composition of articular cartilage. Expectedly, results from model simulations showed substantial effects of the matrix TCN on the cell deformational and volume change response. A low compressive Poisson's ratio of the cartilage matrix exhibiting TCN resulted in dramatic recoiling behavior of the tissue under unconfined compression and induced significant volume change in the cell. The fixed charge density of the chondrocyte and the pericellular matrix were also found to play an important role in both the time-dependent and equilibrium deformation of the cell. The pericellular matrix tended to create a uniform osmolarity around the cell and overall amplified the cell volume change. It is concluded that the proposed model can be a useful tool that allows detailed analysis of the mechano-electrochemical interactions between the chondrocytes and its surrounding extracellular matrix, which leads to more quantitative insights in the cell mechano-transduction.
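The swelling effect of the fixed charges enters triphasic models through the Donnan osmotic pressure; a minimal Python sketch of the ideal-Donnan relation (activity and osmotic coefficients taken as one, which the full triphasic theory does not require):

```python
import numpy as np

R = 8.314    # gas constant, J/(mol K)
T = 310.0    # absolute temperature, K

def donnan_pressure(c_fixed, c_bath):
    """Ideal Donnan osmotic pressure (Pa) for fixed charge density c_fixed and external
    NaCl bath concentration c_bath, both in mol/m^3 (= mM):
        delta_pi = R*T*(sqrt(c_F^2 + 4*c_bath^2) - 2*c_bath)"""
    return R * T * (np.sqrt(c_fixed ** 2 + 4.0 * c_bath ** 2) - 2.0 * c_bath)

# e.g. c_F = 0.15 mEq/ml = 150 mol/m^3 in physiological (0.15 M) saline -- illustrative inputs
print(donnan_pressure(150.0, 150.0) / 1e3, "kPa")
```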
---
paper_title: The Mechanical Behaviour of Chondrocytes Predicted with a Micro-structural Model of Articular Cartilage
paper_content:
The integrity of articular cartilage depends on the proper functioning and mechanical stimulation of chondrocytes, the cells that synthesize extracellular matrix and maintain tissue health. The biosynthetic activity of chondrocytes is influenced by genetic factors, environmental influences, extracellular matrix composition, and mechanical factors. The mechanical environment of chondrocytes is believed to be an important determinant for joint health, and chondrocyte deformation in response to mechanical loading is speculated to be an important regulator of metabolic activity. In previous studies of chondrocyte deformation, articular cartilage was described as a biphasic material consisting of a homogeneous, isotropic, linearly elastic solid phase, and an inviscid fluid phase. However, articular cartilage is known to be anisotropic and inhomogeneous across its depth. Therefore, isotropic and homogeneous models cannot make appropriate predictions for tissue and cell stresses and strains. Here, we modelled articular cartilage as a transversely isotropic, inhomogeneous (TI) material in which the anisotropy and inhomogeneity arose naturally from the microstructure of the depth-dependent collagen fibril orientation and volumetric fraction, as well as the chondrocyte shape and volumetric fraction. The purpose of this study was to analyse the deformation behaviour of chondrocytes using the TI model of articular cartilage. In order to evaluate our model against experimental results, we simulated indentation and unconfined compression tests for nominal compressions of 15%. Chondrocyte deformations were analysed as a function of location within the tissue. The TI model predicted a non-uniform behaviour across tissue depth: in indentation testing, cell height decreased by 43% in the superficial zone and between 11 and 29% in the deep zone. In unconfined compression testing, cell height decreased by 32% in the superficial zone, 25% in the middle, and 18% in the deep zones. This predicted non-uniformity is in agreement with experimental studies. The novelty of this study is the use of a cartilage material model accounting for the intrinsic inhomogeneity and anisotropy of cartilage caused by its microstructure.
---
paper_title: Remodeling of fracture callus in mice is consistent with mechanical loading and bone remodeling theory.
paper_content:
During the remodeling phase of fracture healing in mice, the callus gradually transforms into a double cortex, which thereafter merges into one cortex. In large animals, a double cortex normally does not form. We investigated whether these patterns of remodeling of the fracture callus in mice can be explained by mechanical loading. Morphologies of fractures after 21, 28, and 42 days of healing were determined from an in vivo mid-diaphyseal femoral osteotomy healing experiment in mice. Bone density distributions from microCT at 21 days were converted into adaptive finite element models. To assess the effect of loading mode on bone remodeling, a well-established remodeling algorithm was used to examine the effect of axial force or bending moment on bone structure. All simulations predicted that under axial loading, the callus remodeled to form a single cortex. When a bending moment was applied, dual concentric cortices developed in all simulations, corresponding well to the progression of remodeling observed experimentally and resulting in quantitatively comparable callus areas of woven and lamellar bone. Effects of biological differences between species or other reasons cannot be excluded, but this study demonstrates how a difference in loading mode could explain the differences between the remodeling phase in small rodents and larger mammals.
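The abstract calls the algorithm "well-established" without spelling it out; adaptive remodeling simulations of this kind typically use a strain-energy-density rule with a lazy zone (Huiskes-type). The Python sketch below is that generic rule, not necessarily the exact one used in this study, and its constants are illustrative:

```python
def density_rate(strain_energy_density, rho, k_ref=0.004, s=0.5, B=1.0):
    """Huiskes-type remodeling rule with a lazy zone (generic sketch).

    The stimulus U/rho is compared with a reference level k_ref; no adaptation occurs
    inside the lazy zone of half-width s*k_ref. All parameter values are illustrative.
    """
    stimulus = strain_energy_density / rho
    if stimulus > (1.0 + s) * k_ref:
        return B * (stimulus - (1.0 + s) * k_ref)   # bone apposition
    if stimulus < (1.0 - s) * k_ref:
        return B * (stimulus - (1.0 - s) * k_ref)   # bone resorption
    return 0.0                                       # lazy zone: no net remodeling

# per time step: rho = min(rho_max, max(rho_min, rho + density_rate(U, rho) * dt))
```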
---
paper_title: The PTHrP–Ihh Feedback Loop in the Embryonic Growth Plate Allows PTHrP to Control Hypertrophy and Ihh to Regulate Proliferation
paper_content:
Growth plate and long bone development is governed by biochemical signaling pathways of which the PTHrP–Ihh system is the best known. Other factors, such as BMPs, FGFs and mechanical loading, may interact with this system. This study aims at elucidating the relative importance of PTHrP and Ihh for controlling proliferation and hypertrophy in fetal growth plate cartilage. We assessed the question of why reduced Ihh expression leads to more pronounced effects on the number of non-hypertrophic cells and total bone formation, compared to PTHrP down-regulation.
---
paper_title: The composition of engineered cartilage at the time of implantation determines the likelihood of regenerating tissue with a normal collagen architecture.
paper_content:
The biomechanical functionality of articular cartilage is derived from both its biochemical composition and the architecture of the collagen network. Failure to replicate this normal Benninghoff architecture in regenerating articular cartilage may in turn predispose the tissue to failure. In this article, the influence of the maturity (or functionality) of a tissue-engineered construct at the time of implantation into a tibial chondral defect on the likelihood of recapitulating a normal Benninghoff architecture was investigated using a computational model featuring a collagen remodeling algorithm. Such a normal tissue architecture was predicted to form in the intact tibial plateau due to the interplay between the depth-dependent extracellular matrix properties, foremost swelling pressures, and external mechanical loading. In the presence of even small empty defects in the articular surface, the collagen architecture in the surrounding cartilage was predicted to deviate significantly from the native state,...
---
paper_title: Prediction of collagen orientation in articular cartilage by a collagen remodeling algorithm
paper_content:
Objective: Tissue engineering is a promising method to treat damaged cartilage. So far it has not been possible to create tissue-engineered cartilage with an appropriate structural organization. It is envisaged that cartilage tissue engineering will significantly benefit from knowledge of how the collagen fiber orientation is directed by mechanical conditions. The goal of the present study is to evaluate whether a collagen remodeling algorithm based on mechanical loading can be corroborated by the collagen orientation in healthy cartilage. Methods: According to the remodeling algorithm, collagen fibrils align with a preferred fibril direction, situated between the positive principal strain directions. The remodeling algorithm was implemented in an axisymmetric finite element model of the knee joint. Loading as a result of typical daily activities was represented in three different phases: rest, standing and gait. Results: In the center of the tibial plateau the collagen fibrils run perpendicular to the subchondral bone. Just below the articular surface they bend over to merge with the articular surface. Halfway between the center and the periphery, the collagen fibrils bend over earlier, resulting in thicker superficial and transitional zones. Near the periphery, fibrils in the deep zone run perpendicular to the articular surface and slowly bend over to angles of −45° and +45° with the articular surface. Conclusion: The collagen structure as predicted with the collagen remodeling algorithm corresponds very well with the collagen structure in healthy knee joints. This remodeling algorithm is therefore considered to be a valuable tool for developing loading protocols for tissue engineering of articular cartilage.
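The remodeling rule — fibrils rotating toward a preferred direction situated between the positive principal strain directions — can be sketched as a small incremental update. The weighting and rotation-rate constant below are illustrative simplifications, not the authors' calibrated algorithm:

```python
import numpy as np

def preferred_direction(strain_tensor):
    """Unit vector between the positive principal strain directions of a symmetric
    small-strain tensor, weighted by the magnitudes of the positive principal strains."""
    eigvals, eigvecs = np.linalg.eigh(strain_tensor)
    pos = eigvals > 0.0
    if not pos.any():
        return None                                  # no positive strain: no preferred direction
    e_pref = (eigvecs[:, pos] * eigvals[pos]).sum(axis=1)
    return e_pref / np.linalg.norm(e_pref)

def update_fibril(fibril_dir, strain_tensor, kappa=0.1):
    """Rotate the current fibril direction a fraction kappa toward the preferred one."""
    e_pref = preferred_direction(strain_tensor)
    if e_pref is None:
        return fibril_dir
    if np.dot(fibril_dir, e_pref) < 0.0:             # fibril directions are sign-independent
        e_pref = -e_pref
    new_dir = (1.0 - kappa) * fibril_dir + kappa * e_pref
    return new_dir / np.linalg.norm(new_dir)
```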
---
paper_title: Recent advances in mechanobiological modeling of bone regeneration
paper_content:
Abstract Skeletal regeneration and bone fracture repair involves complex cellular and molecular events that result in new bone formation. Many of the critical steps during bone healing are dependent on the local mechanical environment in the healing tissue. Computational models are used together with mechano-regulation algorithms to predict the influence of mechanical stimuli on the tissue differentiation process during bone healing. This paper reviews the field of computational mechanobiology with focus on bone healing. The history of mechanoregulatory modeling is described, as well as the recent advances and current problems. Most recent advances have been focusing on integrating the mechano-regulatory algorithms with more sophisticated description of the cellular and molecular events. Achieving suitable validation for the models is the most significant challenge. Thus far, focus has been on corroborating mechanoregulatory models by comparing existing models with well characterized experimental data, identify shortcomings and further develop improved computational models of bone healing. Ultimately, these models can be used to help unraveling the basic principles of cell and tissue differentiation, optimization of implant design, and potentially to investigate treatments of non-union and other pathologies.
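Many of the mechano-regulation algorithms reviewed here descend from a Prendergast-type biophysical stimulus S = gamma/a + v/b, combining octahedral shear strain and relative fluid velocity. A Python sketch with the commonly quoted constants and approximate phenotype bands; the exact thresholds vary between publications:

```python
def differentiation_stimulus(shear_strain, fluid_velocity, a=0.0375, b=3e-6):
    """Prendergast-type stimulus S = gamma/a + v/b.
    shear_strain   : octahedral shear strain (dimensionless)
    fluid_velocity : relative fluid/solid velocity (m/s)
    a, b           : empirical constants commonly quoted as 3.75% and 3 um/s."""
    return shear_strain / a + fluid_velocity / b

def predicted_tissue(S):
    """Approximate phenotype bands (cut-off values differ between publications)."""
    if S > 3.0:
        return "fibrous tissue"
    if S > 1.0:
        return "cartilage"
    if S > 0.01:
        return "bone"
    return "resorption"

print(predicted_tissue(differentiation_stimulus(0.02, 1e-6)))
```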
---
paper_title: Mechanics of chondrocyte hypertrophy
paper_content:
Chondrocyte hypertrophy is a characteristic of osteoarthritis and dominates bone growth. Intra- and extracellular changes that are known to be induced by metabolically active hypertrophic chondrocytes are known to contribute to hypertrophy. However, it is unknown to which extent these mechanical conditions together can be held responsible for the total magnitude of hypertrophy. The present paper aims to provide a quantitative, mechanically sound answer to that question. To address this aim requires a quantitative tool that captures the mechanical effects of collagen and proteoglycans, allows temporal changes in tissue composition, and can compute cell and tissue deformations. These requirements are met in our numerical model that is validated for articular cartilage mechanics, which we apply to quantitatively explain a range of experimental observations related to hypertrophy. After validating the numerical approach for studying hypertrophy, the model is applied to evaluate the direct mechanical effects of axial tension and compression on hypertrophy (Hueter-Volkmann principle) and to explore why hypertrophy is reduced in case of partially or fully compromised proteoglycan expression. Finally, a mechanical explanation is provided for the observation that chondrocytes do not hypertrophy when enzymatical collagen degradation is prohibited (S1Pcko knock-out mouse model). This paper shows that matrix turnover by metabolically active chondrocytes, together with externally applied mechanical conditions, can explain quantitatively the volumetric change of chondrocytes during hypertrophy. It provides a mechanistic explanation for the observation that collagen degradation results in chondrocyte hypertrophy, both under physiological and pathological conditions.
---
paper_title: Determining the most important cellular characteristics for fracture healing using design of experiments methods.
paper_content:
Computational models are employed as tools to investigate possible mechanoregulation pathways for tissue differentiation and bone healing. However, current models do not account for the uncertainty in input parameters, and often include assumptions about parameter values that are not yet established. The objective of this study was to determine the most important cellular characteristics of a mechanoregulatory model describing both cell phenotype-specific and mechanobiological processes that are active during bone healing using a statistical approach. The computational model included an adaptive two-dimensional finite element model of a fractured long bone. Three different outcome criteria were quantified: (1) ability to predict sequential healing events, (2) amount of bone formation at early, mid and late stages of healing and (3) the total time until complete healing. For the statistical analysis, first a resolution IV fractional factorial design (L(64)) was used to identify the most significant factors. Thereafter, a three-level Taguchi orthogonal array (L(27)) was employed to study the curvature (non-linearity) of the 10 identified most important parameters. The results show that the ability of the model to predict the sequences of normal fracture healing was predominantly influenced by the rate of matrix production of bone, followed by cartilage degradation (replacement). The amount of bone formation at early stages was solely dependent on matrix production of bone and the proliferation rate of osteoblasts. However, the amount of bone formation at mid and late phases had the rate of matrix production of cartilage as the most influential parameter. The time to complete healing was primarily dependent on the rate of cartilage degradation during endochondral ossification, followed by the rate of cartilage formation. The analyses of the curvature revealed a linear response for parameters related to bone, where higher rates of formation were more beneficial to healing. In contrast, parameters related to fibrous tissue and cartilage showed optimum levels. Some fibrous connective tissue- and cartilage formation was beneficial to bone healing, but too much of either tissue delayed bone formation. The identified significant parameters and processes are further confirmed by in vivo animal experiments in the literature. This study illustrates the potential of design of experiments methods for evaluating computational mechanobiological model parameters and suggests that further experiments should preferably focus at establishing values of parameters related to cartilage formation and degradation.
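The screening step described here (a two-level fractional factorial followed by ranking of main effects) boils down to simple arithmetic: a factor's main effect is the mean response at its high level minus the mean response at its low level. A generic Python sketch on a tiny 2^3 full factorial with made-up responses, not the healing model's actual factors or runs:

```python
import itertools
import numpy as np

def main_effects(factor_names, design, responses):
    """Main effect of each factor in a two-level design coded as -1/+1:
    mean response at +1 minus mean response at -1."""
    design = np.asarray(design, dtype=float)
    responses = np.asarray(responses, dtype=float)
    effects = {}
    for j, name in enumerate(factor_names):
        hi = responses[design[:, j] > 0].mean()
        lo = responses[design[:, j] < 0].mean()
        effects[name] = hi - lo
    return effects

# 2^3 full factorial on three hypothetical rates (coded levels); responses are made up
names = ["bone_matrix_production", "cartilage_degradation", "osteoblast_proliferation"]
design = list(itertools.product([-1, 1], repeat=3))
responses = [10, 14, 11, 18, 12, 17, 13, 21]
print(sorted(main_effects(names, design, responses).items(), key=lambda kv: -abs(kv[1])))
```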
---
paper_title: A computational model for collagen fibre remodelling in the arterial wall
paper_content:
As the interaction between tissue adaptation and the mechanical condition within tissues is complex, mathematical models are desired to study this interrelation. In this study, a mathematical model is presented to investigate the interplay between collagen architecture and mechanical loading conditions in the arterial wall. It is assumed that the collagen fibres align along preferred directions, situated in between the principal stretch directions. The predicted fibre directions represent symmetrically arranged helices and agree qualitatively with morphometric data from literature. At the luminal side of the arterial wall, the fibres are oriented more circumferentially than at the outer side. The discrete transition of the fibre orientation at the media-adventitia interface can be explained by accounting for the different reference configurations of both layers. The predicted pressure-radius relations resemble experimentally measured sigma-shaped curves. As there is a strong coupling between the collagen architecture and the mechanical loading condition within the tissue, we expect that the presented model for collagen remodelling is useful to gain further insight into the processes involved in vascular adaptation, such as growth and smooth muscle tone adaptation.
---
paper_title: Stress–relaxation of human patellar articular cartilage in unconfined compression: Prediction of mechanical response by tissue composition and structure
paper_content:
Abstract Mechanical properties of articular cartilage are controlled by tissue composition and structure. Cartilage function is sensitively altered during tissue degeneration, in osteoarthritis (OA). However, mechanical properties of the tissue cannot be determined non-invasively. In the present study, we evaluate the feasibility to predict, without mechanical testing, the stress–relaxation response of human articular cartilage under unconfined compression. This is carried out by combining microscopic and biochemical analyses with composition-based mathematical modeling. Cartilage samples from five cadaver patellae were mechanically tested under unconfined compression. Depth-dependent collagen content and fibril orientation, as well as proteoglycan and water content were derived by combining Fourier transform infrared imaging, biochemical analyses and polarized light microscopy. Finite element models were constructed for each sample in unconfined compression geometry. First, composition-based fibril-reinforced poroviscoelastic swelling models, including composition and structure obtained from microscopical and biochemical analyses were fitted to experimental stress–relaxation responses of three samples. Subsequently, optimized values of model constants, as well as compositional and structural parameters were implemented in the models of two additional samples to validate the optimization. Theoretical stress–relaxation curves agreed with the experimental tests ( R =0.95–0.99). Using the optimized values of mechanical parameters, as well as composition and structure of additional samples, we were able to predict their mechanical behavior in unconfined compression, without mechanical testing ( R =0.98). Our results suggest that specific information on tissue composition and structure might enable assessment of cartilage mechanics without mechanical testing.
---
paper_title: Depth- and strain-dependent mechanical and electromechanical properties of full-thickness bovine articular cartilage in confined compression.
paper_content:
Compression tests have often been performed to assess the biomechanical properties of full-thickness articular cartilage. We tested whether the apparent homogeneous strain-dependent properties, deduced from such tests, reflect both strain- and depth-dependent material properties. Full-thickness bovine articular cartilage was tested by oscillatory confined compression superimposed on a static offset up to 45%, and the data fit to estimate modulus, permeability, and electrokinetic coefficient assuming homogeneity. Additional tests on partial-thickness cartilage were then performed to assess depth- and strain-dependent properties in an inhomogeneous model, assuming three discrete layers (i = 1 starting from the articular surface, to i = 3 up to the subchondral bone). Estimates of the zero-strain equilibrium confined compression modulus (H_A0), the zero-strain permeability (k_p0) and deformation dependence constant (M), and the deformation-dependent electrokinetic coefficient (k_e) differed among individual layers of cartilage and full-thickness cartilage. H_A0^i increased from layer 1 to 3 (0.27 to 0.71 MPa), and bracketed the apparent homogeneous value (0.47 MPa). k_p0^i decreased from layer 1 to 3 (4.6 x 10^-15 to 0.50 x 10^-15 m^2/(Pa s)) and was less than the homogeneous value (7.3 x 10^-15 m^2/(Pa s)), while M^i increased from layer 1 to 3 (5.5 to 7.4) and became similar to the homogeneous value (8.4). The amplitude of k_e^i increased markedly with compressive strain, as did the homogeneous value: at low strain, it was lowest near the articular surface and increased to a peak in the middle-deep region. These results help to interpret the biomechanical assessment of full-thickness articular cartilage.
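The deformation dependence constant M is usually attached to an exponential permeability law of the form k(e) = k0 * exp(M * e); the exact law used in this study is not spelled out in the abstract, so the Python sketch below should be read as the common convention, with the quoted deep-layer values reused only as illustrative inputs:

```python
import numpy as np

def permeability(strain, k0, M):
    """Strain-dependent hydraulic permeability, k(e) = k0 * exp(M * e).
    With M > 0, compressive strain (e < 0) reduces the permeability."""
    return k0 * np.exp(M * strain)

# illustrative inputs based on the deep-layer values above: k0 ~ 0.5e-15 m^2/(Pa s), M ~ 7.4
print(permeability(-0.30, 0.5e-15, 7.4))   # 30% compressive strain
```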
---
paper_title: Mechanical characterization of articular cartilage by combining magnetic resonance imaging and finite-element analysis—a potential functional imaging technique
paper_content:
Magnetic resonance imaging (MRI) provides a method for non-invasive characterization of cartilage composition and structure. We aimed to see whether T1 and T2 relaxation times are related to proteoglycan (PG) and collagen-specific mechanical properties of articular cartilage. Specifically, we analyzed whether variations in the depthwise collagen orientation, as assessed by the laminae obtained from T2 profiles, affect the mechanical characteristics of cartilage. After MRI and unconfined compression tests of human and bovine patellar cartilage samples, fibril-reinforced poroviscoelastic finite-element models (FEM), with depthwise collagen orientations implemented from quantitative T2 maps (3 laminae for human, 3–7 laminae for bovine), were constructed to analyze the non-fibrillar matrix modulus (PG specific), fibril modulus (collagen specific) and permeability of the samples. In bovine cartilage, the non-fibrillar matrix modulus (R = −0.64, p < 0.05) as well as the initial permeability (R = 0.70, p < 0.05) correlated with T1. In bovine cartilage, T2 correlated positively with the initial fibril modulus (R = 0.62, p = 0.05). In human cartilage, the initial fibril modulus correlated negatively (R = −0.61, p < 0.05) with T2. Based on the simulations, cartilage with a complex collagen architecture (5 or 7 laminae), leading to high bulk T2 due to magic angle effects, provided higher compressive stiffness than tissue with a simple collagen architecture (3 laminae). Our results suggest that T1 reflects PG-specific mechanical properties of cartilage. High T2 is characteristic of soft cartilage with a classical collagen architecture. Contradictorily, high bulk T2 can also be found in stiff cartilage with a multilaminar collagen fibril network. By combining MRI and FEM, the present study establishes a step toward functional imaging of articular cartilage.
---
paper_title: Osteoarthritic changes in the biphasic mechanical properties of the chondrocyte pericellular matrix in articular cartilage.
paper_content:
Abstract The pericellular matrix (PCM) is a narrow region of cartilaginous tissue that surrounds chondrocytes in articular cartilage. Previous modeling studies indicate that the mechanical properties of the PCM relative to those of the extracellular matrix (ECM) can significantly affect the stress–strain, fluid flow, and physicochemical environments of the chondrocyte, suggesting that the PCM plays a biomechanical role in articular cartilage. The goals of this study were to measure the mechanical properties of the PCM using micropipette aspiration coupled with a linear biphasic finite element model, and to determine the alterations in the mechanical properties of the PCM with osteoarthritis (OA). Using a recently developed isolation technique, chondrons (the chondrocyte and its PCM) were mechanically extracted from non-degenerate and osteoarthritic human cartilage. The transient mechanical behavior of the PCM was well-described by a biphasic model, suggesting that the viscoelastic response of the PCM is attributable to flow-dependent effects, similar to that of the ECM. With OA, the mean Young's modulus of the PCM was significantly decreased (38.7±16.2 kPa vs. 23.5±12.9 kPa, p 0.6). These findings suggest that the PCM may undergo degenerative processes with OA, similar to those occurring in the ECM. In combination with previous theoretical models of cell–matrix interactions in cartilage, our findings suggest that changes in the properties of the PCM with OA may have an important influence on the biomechanical environment of the chondrocyte.
---
paper_title: Intraspecies and Interspecies Comparison of the Compressive Properties of the Medial Meniscus
paper_content:
Quantification of the compressive material properties of the meniscus is of paramount importance, creating a “gold-standard” reference for future research. The purpose of this study was to determine compressive properties in six animal models (baboon, bovine, canine, human, lapine, and porcine) at six topographical locations. It was hypothesized that topographical variation of the compressive properties would be found in each animal model and that interspecies variations would also be exhibited. To test these hypotheses, creep and recovery indentation experiments were performed on the meniscus using a creep indentation apparatus and analyzed via a finite element optimization method to determine the material properties. Results show significant intraspecies and interspecies variation in the compressive properties among the six topographical locations, with the moduli exhibiting the highest values in the anterior portion. For example, the anterior location of the human meniscus has an aggregate modulus of 160 ± 40 kPa, whereas the central and posterior portions exhibit aggregate moduli of 100 ± 30 kPa. Interspecies comparison of the aggregate moduli identifies the lapine anterior location having the highest value (450 ± 120 kPa) and the human posterior location having the lowest (100 ± 30 kPa). These baseline values of compressive properties will be of help in future meniscal repair efforts.
---
paper_title: Characterization of articular cartilage by combining microscopic analysis with a fibril-reinforced finite-element model.
paper_content:
Load-bearing characteristics of articular cartilage are impaired during tissue degeneration. Quantitative microscopy enables in vitro investigation of cartilage structure but determination of tissue functional properties necessitates experimental mechanical testing. The fibril-reinforced poroviscoelastic (FRPVE) model has been used successfully for estimation of cartilage mechanical properties. The model includes realistic collagen network architecture, as shown by microscopic imaging techniques. The aim of the present study was to investigate the relationships between the cartilage proteoglycan (PG) and collagen content as assessed by quantitative microscopic findings, and model-based mechanical parameters of the tissue. Site-specific variation of the collagen network moduli, PG matrix modulus and permeability was analyzed. Cylindrical cartilage samples (n=22) were harvested from various sites of the bovine knee and shoulder joints. Collagen orientation, as quantitated by polarized light microscopy, was incorporated into the finite-element model. Stepwise stress-relaxation experiments in unconfined compression were conducted for the samples, and sample-specific models were fitted to the experimental data in order to determine values of the model parameters. For comparison, Fourier transform infrared imaging and digital densitometry were used for the determination of collagen and PG content in the same samples, respectively. The initial and strain-dependent fibril network moduli as well as the initial permeability correlated significantly with the tissue collagen content. The equilibrium Young's modulus of the nonfibrillar matrix and the strain dependency of permeability were significantly associated with the tissue PG content. The present study demonstrates that modern quantitative microscopic methods in combination with the FRPVE model are feasible methods to characterize the structure-function relationships of articular cartilage.
---
paper_title: Inverse analysis of constitutive models: biological soft tissues.
paper_content:
The paper describes a procedure for estimating the material parameters of biological soft tissue by fitting model prediction to experimental load-deformation data. This procedure minimizes the error between data and theoretical model prediction through systematically adjusting the parameters in the latter. The procedure uses commercially available software and is not specific to any particular model; nevertheless, for illustration purposes, we employ a six parameter fibril-reinforced poroelastic cartilage model. We are able to estimate any and all of these parameters by the procedure. Convergence of the parameters and convergence of the arbitrary initial stress relaxation to the data was demonstrated in all cases. Though we illustrate the optimization procedure here for unconfined compression only, it can be adapted easily to other experimental configurations such as confined compression, indentation and tensile test. Furthermore, the procedure can be applied in other areas of biomechanics where material parameters need to be extracted from experimental data.
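In practice this fitting procedure is a nonlinear least-squares problem: adjust the model parameters until the predicted response matches the measured one. A generic Python sketch using scipy; the two-exponential forward model is only a stand-in for the fibril-reinforced poroelastic simulation the paper actually drives:

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(t, params):
    """Placeholder forward model: a two-term exponential stress-relaxation curve.
    In the paper this would be a fibril-reinforced poroelastic finite element simulation."""
    s_eq, s1, tau1, s2, tau2 = params
    return s_eq + s1 * np.exp(-t / tau1) + s2 * np.exp(-t / tau2)

def fit_parameters(t, measured_stress, initial_guess):
    """Minimize the residual between model prediction and measured data."""
    residual = lambda p: forward_model(t, p) - measured_stress
    result = least_squares(residual, initial_guess, bounds=(0.0, np.inf))
    return result.x

# usage sketch (data arrays and starting values are hypothetical):
# p_opt = fit_parameters(t_data, sigma_data, [0.1, 0.2, 5.0, 0.1, 50.0])
```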
---
paper_title: Biomechanical, biochemical and structural correlations in immature and mature rabbit articular cartilage.
paper_content:
OBJECTIVE ::: The structure and composition of articular cartilage change during development and growth. These changes lead to alterations in the mechanical properties of cartilage. In the present study, biomechanical, biochemical and structural relationships of articular cartilage during growth and maturation of rabbits are investigated. ::: ::: ::: DESIGN ::: Articular cartilage specimens from the tibial medial plateaus and femoral medial condyles of female New Zealand white rabbits were collected from seven age-groups; 0 days (n=29), 11 days (n=30), 4 weeks (n=30), 6 weeks (n=30), 3 months (n=24), 6 months (n=24) and 18 months (n=19). The samples underwent mechanical testing under creep indentation. From the mechanical response, instantaneous and equilibrium moduli were determined. Biochemical analyses of tissue collagen, hydroxylysylpyridinoline (HP) and pentosidine (PEN) cross-links in full thickness cartilage samples were conducted. Proteoglycans were investigated depth-wise from the tissue sections by measuring the optical density of Safranin-O-stained samples. Furthermore, depth-wise collagen architecture of articular cartilage was analyzed with polarized light microscopy. Finite element analyses of the samples from different age-groups were conducted to reveal tensile and compressive properties of the fibril network and the matrix of articular cartilage, respectively. ::: ::: ::: RESULTS ::: Tissue thickness decreased from approximately 3 to approximately 0.5mm until the age of 3 months, while the instantaneous modulus increased with age prior to peak at 4-6 weeks. A lower equilibrium modulus was observed before 3-month-age, after which the equilibrium modulus continued to increase. Collagen fibril orientation angle and parallelism index were inversely related to the instantaneous modulus, tensile fibril modulus and tissue thickness. Collagen content and cross-linking were positively related to the equilibrium compressive properties of the tissue. ::: ::: ::: CONCLUSIONS ::: During maturation, significant modulation of tissue structure, composition and mechanical properties takes place. Importantly, the present study provides insight into the mechanical, chemical and structural interactions that lead to functional properties of mature articular cartilage.
---
paper_title: Fibril reinforced poroelastic model predicts specifically mechanical behavior of normal, proteoglycan depleted and collagen degraded articular cartilage
paper_content:
Degradation of the collagen network and of proteoglycan (PG) macromolecules is a sign of articular cartilage degeneration. These changes impair cartilage mechanical function. Effects of collagen degradation and PG depletion on the time-dependent mechanical behavior of cartilage are different. In this study, numerical analyses, which take the compression-tension nonlinearity of the tissue into account, were carried out using a fibril reinforced poroelastic finite element model. The study aimed at improving our understanding of the stress-relaxation behavior of normal and degenerated cartilage in unconfined compression. PG and collagen degradations were simulated by decreasing the Young's modulus of the drained porous (nonfibrillar) matrix and the fibril network, respectively. Numerical analyses were compared to results from experimental tests with chondroitinase ABC (PG depletion) or collagenase (collagen degradation) digested samples. The fibril reinforced poroelastic model predicted the experimental behavior of cartilage after chondroitinase ABC digestion by a major decrease of the drained porous matrix modulus (−64±28%) and a minor decrease of the fibril network modulus (−11±9%). After collagenase digestion, in contrast, the numerical analyses predicted the experimental behavior of cartilage by a major decrease of the fibril network modulus (−69±5%) and a decrease of the drained porous matrix modulus (−44±18%). The reduction of the drained porous matrix modulus after collagenase digestion was consistent with the microscopically observed secondary PG loss from the tissue. The present results indicate that the fibril reinforced poroelastic model is able to predict specifically the characteristic alterations in the stress-relaxation behavior of cartilage after enzymatic modifications of the tissue. We conclude that the compression-tension nonlinearity of the tissue is needed to capture realistically the mechanical behavior of normal and degenerated articular cartilage.
---
paper_title: Collagen network primarily controls Poisson's ratio of bovine articular cartilage in compression.
paper_content:
The equilibrium Young's modulus of articular cartilage is known to be primarily determined by proteoglycans (PGs). However, the relation between the Poisson's ratio and the composition and structure of articular cartilage is more unclear. In this study, we determined Young's modulus and Poisson's ratio of bovine articular cartilage in unconfined compression. Subsequently, the same samples, taken from bovine knee (femoral, patellar and tibial cartilage) and shoulder (humeral cartilage) joints, were processed for quantitative microscopic analysis of PGs, collagen content, and collagen architecture. The Young's modulus, Poisson's ratio, PG content (estimated with optical density measurements), collagen content, and birefringence showed significant topographical variation (p < 0.05) among the test sites. Experimentally the Young's modulus was strongly determined by the tissue PG content (r = 0.86, p < 0.05). Poisson's ratio revealed a significant negative linear correlation (r = -0.59, p < 0.05) with the collagen content, as assessed by the Fourier transform infrared imaging. Finite element analyses, conducted using a fibril reinforced biphasic model, indicated that the mechanical properties of the collagen network strongly affected the Poisson's ratio. We conclude that Poisson's ratio of articular cartilage is primarily controlled by the content and organization of the collagen network.
---
paper_title: Composition of the pericellular matrix modulates the deformation behaviour of chondrocytes in articular cartilage under static loading
paper_content:
The aim was to assess the role of the composition changes in the pericellular matrix (PCM) for the chondrocyte deformation. For that, a three-dimensional finite element model with depth-dependent collagen density, fluid fraction, fixed charge density and collagen architecture, including parallel planes representing the split-lines, was created to model the extracellular matrix (ECM). The PCM was constructed similarly as the ECM, but the collagen fibrils were oriented parallel to the chondrocyte surfaces. The chondrocytes were modelled as poroelastic with swelling properties. Deformation behaviour of the cells was studied under 15% static compression. Due to the depth-dependent structure and composition of cartilage, axial cell strains were highly depth-dependent. An increase in the collagen content and fluid fraction in the PCMs increased the lateral cell strains, while an increase in the fixed charge density induced an inverse behaviour. Axial cell strains were only slightly affected by the changes in PCM composition. We conclude that the PCM composition plays a significant role in the deformation behaviour of chondrocytes, possibly modulating cartilage development, adaptation and degeneration. The development of cartilage repair materials could benefit from this information.
---
paper_title: Collagen network of articular cartilage modulates fluid flow and mechanical stresses in chondrocyte.
paper_content:
The extracellular matrix of articular cartilage modulates the mechanical signals sensed by the chondrocytes. In the present study, a finite element model (FEM) of the chondrocyte and its microenvironment was reconstructed using information from Fourier transform infrared imaging spectroscopy. This environment consisted of pericellular, territorial (mainly proteoglycans), and inter-territorial (mainly collagen) matrices. The chondrocyte, pericellular matrix, and territorial matrix were assumed to be mechanically isotropic and poroelastic, whereas the inter-territorial matrix, due to its high collagen content, was assumed to be transversely isotropic and poroelastic. Under instantaneous strain-controlled compression, the FEM indicated that the fluid pressure within the chondrocyte increased nonlinearly as a function of the in-plane Young's modulus of the collagen network. Under instantaneous force-controlled compression, the chondrocyte experienced the highest fluid pressure when the in-plane Young's modulus of the collagen network was ~4 MPa. Based on the present results, the mechanical characteristics of the collagen network of articular cartilage can modify fluid flow and stresses in chondrocytes. Therefore, the integrity of the collagen network may be an important determinant in cell stimulation and in the control of matrix maintenance.
---
paper_title: Physical signals and solute transport in human intervertebral disc during compressive stress relaxation: 3D finite element analysis
paper_content:
A 3D finite element model for charged hydrated soft tissues containing charged/uncharged solutes was developed based on the multi-phasic mechano-electrochemical mixture theory (Lai et al., J. Biomech. Eng. 113 (1991), 245-258; Gu et al., J. Biomech. Eng. 120 (1998), 169-180). This model was applied to analyze the mechanical, chemical and electrical signals within the human intervertebral disc during an unconfined compressive stress relaxation test. The effects of tissue composition (e.g., water content and fixed charge density (FCD)) on the physical signals and the transport rate of fluid, ions and nutrients were investigated. The numerical simulation showed that, during disc compression, the fluid pressurization was more pronounced at the center (nucleus) region of the disc while the effective (von Mises) stress was higher at the outer (annulus) region. Parametric analyses revealed that the decrease in initial tissue water content (0.7-0.8) increased the peak stress and relaxation time due to the reduction of permeability, causing greater fluid pressurization effect. The electrical signals within the disc were more sensitive to FCD than tissue porosity, and mechanical loading affected the large solute (e.g., growth factor) transport significantly, but not for small solute (e.g., glucose). Moreover, this study confirmed that the interstitial fluid pressurization plays an important role in the load support mechanism of IVD by sharing more than 40% of the total load during disc compression. This study is important for understanding disc biomechanics, disc nutrition and disc mechanobiology.
---
paper_title: Biomechanical Influence of Disk Properties on the Load Transfer of Healthy and Degenerated Disks Using a Poroelastic Finite Element Model
paper_content:
Spine degeneration is a pathology that will affect 80% of the population. Since the intervertebral disks play an important role in transmitting loads through the spine, the aim of this study was to evaluate the biomechanical impact of disk properties on the load carried by healthy (Thompson grade I) and degenerated (Thompson grades III and IV) disks. A three-dimensional parametric poroelastic finite element model of the L4/L5 motion segment was developed. Grade I, grade III, and grade IV disks were modeled by altering the biomechanical properties of both the annulus and nucleus. Models were validated using published creep experiments, in which a constant compressive axial stress of 0.35 MPa was applied for 4 h. Pore pressure (PP) and effective stress (SE) were analyzed as a function of time following loading application (1 min, 5 min, 45 min, 125 min, and 245 min) and discal region along the midsagittal profile for each disk grade. A design of experiments was further implemented to analyze the influence of six disk parameters (disk height (H), fiber proportion (%F), drained Young's modulus (Ea, En), and initial permeability (ka, kn) of both the annulus and nucleus) on load-sharing for disk grades I and IV. Simulations of grade I, grade III, and grade IV disks agreed well with the available published experimental data. Disk height (H) had a significant influence (p<0.05) on the PP and SE during the entire loading history for both healthy and degenerated disk models. Young's modulus of the annulus (Ea) significantly affected not only SE in the annular region for both disk grades in the initial creep response but also SE in the nucleus zone for degenerated disks with further creep response. The nucleus and annulus permeabilities had a significant influence on the PP distribution for both disk grades, but this effect occurred at earlier stages of loading for degenerated than for healthy disk models. This is the first study that investigates the biomechanical influence of both geometrical and material disk properties on the load transfer of healthy and degenerated disks. Disk height is a significant parameter for both healthy and degenerated disks during the entire loading. Changes in the annulus stiffness, as well as in the annulus and nucleus permeability, control load-sharing in different ways for healthy and degenerated disks.
---
paper_title: Statistical methods in finite element analysis
paper_content:
Finite element analysis (FEA) is a commonly used tool within many areas of engineering and can provide useful information in structural analysis of mechanical systems. However, most analyses within the field of biomechanics usually take no account either of the wide variation in material properties and geometry that may occur in natural tissues or manufacturing imperfections in synthetic materials. This paper discusses two different methods of incorporating uncertainty in FE models. The first, Taguchi's robust parameter design, uses orthogonal matrices to determine how to vary the parameters in a series of FE models, and provides information on the sensitivity of a model to input parameters. The second, probabilistic analysis, enables the distribution of a response variable to be determined from the distributions of the input variables. The methods are demonstrated using a simple example of an FE model of a beam that is assigned material properties and geometry over a range similar to an orthopaedic fixation plate. In addition to showing how each method may be used on its own, we also show how computational effort may be minimised by first identifying the most important input variables before determining the effects of imprecision.
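A minimal sketch of the Taguchi-style two-level screening idea discussed above is given below, assuming a closed-form cantilever-beam deflection in place of a finite element solve; the factors, levels and beam dimensions are illustrative and not the values of the fixation-plate example.

```python
# Sketch of a Taguchi-style two-level factorial study of a simple beam model,
# in the spirit of the screening approach described above. The cantilever-beam
# deflection formula stands in for a full FE solve; factor levels are illustrative.
import itertools
import numpy as np

def tip_deflection(E, h, P, L=0.1, b=0.01):
    """Cantilever tip deflection delta = P*L^3 / (3*E*I), with I = b*h^3/12."""
    I = b * h**3 / 12.0
    return P * L**3 / (3.0 * E * I)

# Two levels (low, high) per factor: Young's modulus [Pa], section thickness [m], load [N]
levels = {"E": (100e9, 200e9), "h": (0.003, 0.005), "P": (50.0, 100.0)}

design = list(itertools.product((0, 1), repeat=3))      # 2^3 full factorial
responses = np.array([tip_deflection(levels["E"][a], levels["h"][b_], levels["P"][c])
                      for a, b_, c in design])

# Main effect of a factor = mean response at high level - mean response at low level
for idx, name in enumerate(levels):
    col = np.array([row[idx] for row in design])
    effect = responses[col == 1].mean() - responses[col == 0].mean()
    print(f"main effect of {name}: {effect:.3e} m")
```

Ranking the factors by the magnitude of their main effects identifies which inputs deserve the more expensive probabilistic treatment.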
---
paper_title: Sensitivity of tibio-menisco-femoral joint contact behavior to variations in knee kinematics.
paper_content:
Use of computational models with kinematic boundary conditions to study the knee joint contact behavior for normal and pathologic knee joints depends on an understanding of the impacts of kinematic uncertainty. We studied the sensitivities of tibio-menisco-femoral joint contact behavior to variations in knee kinematics using a finite element model (FEM) with geometry and kinematic boundary conditions derived from sequences of magnetic resonance (MR) images. The MR images were taken before and after axial compression was applied to the knee joint of a healthy subject. A design of experiments approach was used to study the impact of the variation in knee kinematics on the contact outputs. We also explored the feasibility of using supplementary hip images to improve the accuracy of knee kinematics. Variations in knee kinematics (0.25mm in medial-lateral, 0.1mm in anterior-posterior and superior-inferior translations, and 0.1 degrees in flexion-extension and varus-valgus, 0.25 degrees in external-internal rotations) caused large variations in joint contact behavior. When kinematic boundary conditions resulted in close approximations of the model-predicted joint contact force to the applied force, variations in predictions of contact parameters were also reduced. The combination of inferior-superior and medial-lateral translations accounted for over 70% of variations for all the contact parameters examined. The inclusion of hip images in kinematic calculations improved knee kinematics by matching the femoral head position. Our findings demonstrate the importance of improving the accuracy and precision of knee kinematic measurements, especially when utilized as an input for finite element models.
---
paper_title: Sensitivities of Medial Meniscal Motion and Deformation to Material Properties of Articular Cartilage, Meniscus and Meniscal Attachments Using Design of Experiments Methods
paper_content:
This study investigated the role of the material properties assumed for articular cartilage, meniscus and meniscal attachments on the fit of a finite element model (FEM) to experimental data for meniscal motion and deformation due to an anterior tibial loading of 45 N in the anterior cruciate ligament-deficient knee. Taguchi style L18 orthogonal arrays were used to identify the most significant factors for further examination. A central composite design was then employed to develop a mathematical model for predicting the fit of the FEM to the experimental data as a function of the material properties and to identify the material property selections that optimize the fit. The cartilage was modeled as isotropic elastic material, the meniscus was modeled as transversely isotropic elastic material, and the meniscal horn and peripheral attachments were modeled as noncompressive, nonlinear-in-tension spring elements. The ability of the FEM to reproduce the experimentally measured meniscal motion and deformation was most strongly dependent on the initial strain of the meniscal horn attachments (ε_1H), the linear modulus of the meniscal peripheral attachments (E_p) and the ratio of meniscal moduli in the circumferential and transverse directions (E_θ/E_R). Our study also successfully identified values for these critical material properties (ε_1H = −5%, E_p = 5.6 MPa, E_θ/E_R = 20) to minimize the error in the FEM analysis of experimental results. This study illustrates the most important material properties for future experimental studies, and suggests that modeling work of the meniscus, while retaining transverse isotropy, should also focus on the potential influence of nonlinear properties and inhomogeneity.
---
paper_title: A finite element model of an idealized diarthrodial joint to investigate the effects of variation in the mechanical properties of the tissues
paper_content:
The stiffness of articular cartilage increases dramatically with increasing rate of loading, and it has been hypothesized that increasing the stiffness of the subchondral bone may result in damaging stresses being generated in the articular cartilage. Despite the interdependence of these tissues in a joint, little is understood of the effect of such changes in one tissue on stresses generated in another. To investigate this, a parametric finite element model of an idealized joint was developed. The model incorporated layers representing articular cartilage, calcified cartilage, the subchondral bone plate and cancellous bone. Taguchi factorial design techniques, employing a two-level full-factorial and a four-level fractional factorial design, were used to vary the material properties and thicknesses of the layers over the wide range of values found in the literature. The effects on the maximum values of von Mises stress in each of the tissues are reported here. The stiffness of the cartilage was t...
---
paper_title: Determining the most important cellular characteristics for fracture healing using design of experiments methods.
paper_content:
Computational models are employed as tools to investigate possible mechanoregulation pathways for tissue differentiation and bone healing. However, current models do not account for the uncertainty in input parameters, and often include assumptions about parameter values that are not yet established. The objective of this study was to determine the most important cellular characteristics of a mechanoregulatory model describing both cell phenotype-specific and mechanobiological processes that are active during bone healing using a statistical approach. The computational model included an adaptive two-dimensional finite element model of a fractured long bone. Three different outcome criteria were quantified: (1) ability to predict sequential healing events, (2) amount of bone formation at early, mid and late stages of healing and (3) the total time until complete healing. For the statistical analysis, first a resolution IV fractional factorial design (L(64)) was used to identify the most significant factors. Thereafter, a three-level Taguchi orthogonal array (L(27)) was employed to study the curvature (non-linearity) of the 10 identified most important parameters. The results show that the ability of the model to predict the sequences of normal fracture healing was predominantly influenced by the rate of matrix production of bone, followed by cartilage degradation (replacement). The amount of bone formation at early stages was solely dependent on matrix production of bone and the proliferation rate of osteoblasts. However, the amount of bone formation at mid and late phases had the rate of matrix production of cartilage as the most influential parameter. The time to complete healing was primarily dependent on the rate of cartilage degradation during endochondral ossification, followed by the rate of cartilage formation. The analyses of the curvature revealed a linear response for parameters related to bone, where higher rates of formation were more beneficial to healing. In contrast, parameters related to fibrous tissue and cartilage showed optimum levels. Some fibrous connective tissue- and cartilage formation was beneficial to bone healing, but too much of either tissue delayed bone formation. The identified significant parameters and processes are further confirmed by in vivo animal experiments in the literature. This study illustrates the potential of design of experiments methods for evaluating computational mechanobiological model parameters and suggests that further experiments should preferably focus at establishing values of parameters related to cartilage formation and degradation.
---
paper_title: Uncertainties in indentation testing of articular cartilage: A fibril-reinforced poroviscoelastic study
paper_content:
Indentation testing provides a quantitative technique to evaluate mechanical characteristics of articular cartilage in situ and in vivo. Traditionally, analytical solutions proposed by Hayes et al. [Hayes WC, Keer LM, Herrmann G, Mockros LF. A mathematical analysis for indentation tests of articular cartilage. J Biomech 1972;5(5):541-51] have been applied for the analysis of indentation measurements, and due to their practicality, they have been used for clinical diagnostics. Using this approach, the elastic modulus is derived based on scaling factors which depend on cartilage thickness, indenter radius and Poisson's ratio, and the cartilage model is assumed isotropic and homogeneous, thereby greatly simplifying the true tissue characteristics. The aim was to investigate the validity of previous model assumptions for indentation testing. Fibril-reinforced poroviscoelastic cartilage (FRPVE) model including realistic tissue characteristics was used to simulate indentation tests. The effects of cartilage inhomogeneity, anisotropy, and indentation velocity on the indentation response were evaluated, and scaling factors from the FRPVE analysis were derived. Subsequently, the validity of scaling factors obtained using the traditional and the FRPVE analyses was studied by calculating indentation moduli for bovine cartilage samples, and comparing these values to those obtained experimentally in unconfined compression testing. Collagen architecture and compression velocity had significant effects on the indentation response. Isotropic elastic analysis gave significantly higher (30-107%) Young's moduli for indentation compared to unconfined compression testing. Modification of Hayes' scaling factors by accounting for cartilage inhomogeneity and anisotropy improved the agreement of Young's moduli obtained for the two test configurations by 14-28%. These results emphasize the importance of realistic cartilage structure and mechanical properties in the indentation analysis. Although it is not possible to fully describe tissue inhomogeneity and anisotropy with just the Young's modulus and Poisson's ratio, accounting for inhomogeneity and anisotropy in these two parameters may help to improve the in vivo characterization of tissue using arthroscopic indentation testing.
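For context, the Hayes et al. (1972) solution referred to above is commonly written in the following form for a plane-ended cylindrical indenter of radius a on a cartilage layer of thickness h; this is a standard statement quoted here for orientation, not an expression taken from this paper.

```latex
% Hayes-type indentation relation for a plane-ended cylindrical indenter
E \;=\; \frac{P\,(1-\nu^{2})}{2\,a\,w\,\kappa\!\left(a/h,\,\nu\right)}
```

Here P is the indenter load, w the indentation depth, ν the Poisson's ratio and κ the thickness- and Poisson-ratio-dependent scaling factor tabulated by Hayes et al.; the study above derives modified scaling factors from the fibril-reinforced poroviscoelastic analysis to replace the isotropic, homogeneous values.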
---
paper_title: Sensitivity of tissue differentiation and bone healing predictions to tissue properties.
paper_content:
Computational models are employed as tools to investigate possible mechano-regulation pathways for tissue differentiation and bone healing. However, current models do not account for the uncertainty in input parameters, and often include assumptions about parameter values that are not yet established. The aim was to clarify the importance of the assumed tissue material properties in a computational model of tissue differentiation during bone healing. An established mechano-biological model was employed together with a statistical approach. The model included an adaptive 2D finite element model of a fractured long bone. Four outcome criteria were quantified: (1) ability to predict sequential healing events, (2) amount of bone formation at specific time points, (3) total time until healing, and (4) mechanical stability at specific time points. Statistical analysis based on fractional factorial designs first involved a screening experiment to identify the most significant tissue material properties. These seven properties were studied further with response surface methodology in a three-level Box-Behnken design. Generally, the sequential events were not significantly influenced by any properties, whereas rate-dependent outcome criteria and mechanical stability were significantly influenced by Young's modulus and permeability. Poisson's ratio and porosity had minor effects. The amount of bone formation at early, mid and late phases of healing, the time until complete healing and the mechanical stability were all mostly dependent on three material properties; permeability of granulation tissue, Young's modulus of cartilage and permeability of immature bone. The consistency between effects of the most influential parameters was high. To increase accuracy and predictive capacity of computational models of bone healing, the most influential tissue mechanical properties should be accurately quantified.
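The response-surface step described above can be pictured with the minimal sketch below: responses collected from a three-level design are fitted with a quadratic polynomial whose coefficients separate linear effects from curvature. The two coded factors and the toy response function are placeholders for the healing-model tissue properties and outcome criteria.

```python
# Minimal response-surface sketch in the spirit of the three-level design step
# above: a quadratic surface is fitted to responses from a 3^2 full factorial
# design and the coefficients indicate linear vs. curvature effects. The two
# "factors" and the toy response function are illustrative placeholders.
import itertools
import numpy as np

levels = (-1.0, 0.0, 1.0)                                     # coded factor levels
design = np.array(list(itertools.product(levels, levels)))    # 3^2 full factorial

def toy_response(x1, x2):
    """Placeholder for a healing-model output (e.g. time to complete healing)."""
    return 10.0 - 2.0 * x1 + 1.5 * x2 + 0.8 * x1 * x2 + 1.2 * x2**2

y = np.array([toy_response(x1, x2) for x1, x2 in design])

# Quadratic model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1], design[:, 0]**2, design[:, 1]**2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients [b0, b1, b2, b12, b11, b22]:", np.round(coeffs, 3))
```

A significant squared-term coefficient corresponds to the kind of optimum-level (non-linear) behaviour reported above for cartilage- and fibrous-tissue-related parameters.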
---
paper_title: Contribution of tissue composition and structure to mechanical response of articular cartilage under different loading geometries and strain rates
paper_content:
Mechanical function of articular cartilage in joints between articulating bones is dependent on the composition and structure of the tissue. The mechanical properties of articular cartilage are traditionally tested in compression using one of the three loading geometries, i.e., confined compression, unconfined compression or indentation. The aim of this study was to utilize a composition-based finite element model in combination with a fractional factorial design to determine the importance of different cartilage constituents in the mechanical response of the tissue, and to compare the importance of the tissue constituents with different loading geometries and loading rates. The evaluated parameters included water and collagen fraction as well as fixed charge density on cartilage surface and their slope over the tissue thickness. The thicknesses of superficial and middle zones, as based on the collagen orientation, were also included in the evaluated parameters. A three-level resolution V fractional factorial design was used. The model results showed that inhomogeneous composition plays only a minor role in indentation, though that role becomes more significant in confined compression and unconfined compression. In contrast, the collagen architecture and content had a more profound role in indentation than with two other loading geometries. These differences in the mechanical role of composition and structure between the loading geometries were emphasized at higher loading rates. These findings highlight how the results from mechanical tests of articular cartilage under different loading conditions are dependent upon tissue composition and structure.
---
paper_title: Determination of nonlinear fibre-reinforced biphasic poroviscoelastic constitutive parameters of articular cartilage using stress relaxation indentation testing and an optimizing finite element analysis
paper_content:
An inverse method was developed to determine the material constitutive parameters of human articular cartilage from stress relaxation indentation tests. The cartilage was modeled as a fibre-reinforced nonlinear biphasic poroviscoelastic material, and a finite element (FE) model was used with a simulated annealing (SA) optimization algorithm to determine the material parameters that minimized the error between the experimental and predicted time-dependent indentation loads. The values of the 15 optimized material parameters were found to be insensitive to the initial guesses, and, when friction between the indenter and the cartilage was considered, resulted in good agreement between the measured stress relaxation response and the FE prediction (R^2=0.99). The optimized material parameters determined from experiments that used two different indenter sizes on the same samples were compared. When assuming frictionless contact between the indenter and the cartilage, all of the optimized parameters except for the Poisson's ratio were found to be relatively insensitive to indenter size. A second set of models that included frictional contact greatly reduced the sensitivity of the optimized Poisson's ratio to indenter size, thus confirming the validity of the model and demonstrating the importance of modeling friction. The results also demonstrate the robustness of the SA optimization algorithm to ensure convergence of a large number of material properties to a global minimum regardless of the quality of the initial guesses.
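A schematic of the simulated-annealing inverse approach is sketched below, assuming a cheap analytical surrogate in place of the finite element indentation model; the surrogate, its two parameters and the synthetic target load history are illustrative only.

```python
# Sketch of a simulated-annealing inverse fit in the spirit of the approach above,
# with a cheap analytical surrogate in place of the finite element solve. The
# surrogate, its two parameters and the synthetic target curve are illustrative.
import numpy as np
from scipy.optimize import dual_annealing

t = np.linspace(0.0, 100.0, 80)

def surrogate_load(params, t):
    """Stand-in for the FE-predicted indentation load history."""
    peak, tau = params
    return peak * (0.3 + 0.7 * np.exp(-t / tau))

target = surrogate_load((5.0, 20.0), t)        # synthetic "experimental" load [N]

def cost(params):
    # Sum-of-squares error between predicted and measured time-dependent loads
    return float(np.sum((surrogate_load(params, t) - target) ** 2))

result = dual_annealing(cost, bounds=[(0.1, 50.0), (1.0, 200.0)], seed=1)
print("optimized parameters (peak load, time constant):", result.x)
```

The stochastic acceptance of uphill moves is what makes this class of optimizer insensitive to the initial guesses, at the price of many more forward-model evaluations than a gradient-based fit.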
---
paper_title: A review of probabilistic analysis in orthopaedic biomechanics
paper_content:
Probabilistic analysis methods are being increasingly applied in the orthopaedics and biomechanics literature to account for uncertainty and variability in subject geometries, properties of various structures, kinematics and joint loading, as well as uncertainty in implant alignment. As a complement to experiments, finite element modelling, and statistical analysis, probabilistic analysis provides a method of characterizing the potential impact of variability in parameters on performance. This paper presents an overview of probabilistic analysis and a review of biomechanics literature utilizing probabilistic methods in structural reliability, kinematics, joint mechanics, musculoskeletal modelling, and patient-specific representations. The aim of this review paper is to demonstrate the wide range of applications of probabilistic methods and to aid researchers and clinicians in better understanding probabilistic analyses.
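The core of the probabilistic approach reviewed above can be illustrated with a simple Monte Carlo sketch: input distributions are sampled and propagated through a model, and percentiles of the response are reported. The cantilever-deflection surrogate and the assumed distributions are illustrative, not taken from the review.

```python
# Minimal Monte Carlo sketch of the probabilistic idea reviewed above: propagate
# distributions of input variables through a model and examine the response
# distribution. The cantilever surrogate and the distributions are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

E = rng.normal(17e9, 1.5e9, n)           # Young's modulus [Pa]
h = rng.normal(0.004, 0.0002, n)         # plate thickness [m]
P = rng.normal(80.0, 8.0, n)             # applied load [N]

I = 0.01 * h**3 / 12.0                   # second moment of area, width b = 0.01 m
deflection = P * 0.1**3 / (3.0 * E * I)  # tip deflection of a 0.1 m cantilever

print(f"mean deflection: {deflection.mean()*1e3:.3f} mm")
print(f"5th-95th percentile: {np.percentile(deflection, 5)*1e3:.3f}"
      f" - {np.percentile(deflection, 95)*1e3:.3f} mm")
```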
---
paper_title: An Axisymmetric Boundary Element Model for Determination of Articular Cartilage Pericellular Matrix Properties In Situ via Inverse Analysis of Chondron Deformation
paper_content:
The pericellular matrix (PCM) is the narrow tissue region surrounding all chondrocytes in articular cartilage and, together, the chondrocyte(s) and surrounding PCM have been termed the chondron. Previous theoretical and experimental studies suggest that the structure and properties of the PCM significantly influence the biomechanical environment at the microscopic scale of the chondrocytes within cartilage. In the present study, an axisymmetric boundary element method (BEM) was developed for linear elastic domains with internal interfaces. The new BEM was employed in a multiscale continuum model to determine linear elastic properties of the PCM in situ, via inverse analysis of previously reported experimental data for the three-dimensional morphological changes of chondrons within a cartilage explant in equilibrium unconfined compression (Choi, et al., 2007, "Zonal Changes in the Three-Dimensional Morphology of the Chondron Under Compression: The Relationship Among Cellular, Pericellular, and Extracellular Deformation in Articular Cartilage," J. Biomech., 40, pp. 2596-2603). The microscale geometry of the chondron (cell and PCM) within the cartilage extracellular matrix (ECM) was represented as a three-zone equilibrated biphasic region comprised of an ellipsoidal chondrocyte with encapsulating PCM that was embedded within a spherical ECM subjected to boundary conditions for unconfined compression at its outer boundary. Accuracy of the three-zone BEM model was evaluated and compared with analytical finite element solutions. The model was then integrated with a nonlinear optimization technique (Nelder-Mead) to determine PCM elastic properties within the cartilage explant by solving an inverse problem associated with the in situ experimental data for chondron deformation. Depending on the assumed material properties of the ECM and the choice of cost function in the optimization, estimates of the PCM Young's modulus ranged from ~24 kPa to 59 kPa, consistent with previous measurements of PCM properties on extracted chondrons using micropipette aspiration. Taken together with previous experimental and theoretical studies of cell-matrix interactions in cartilage, these findings suggest an important role for the PCM in modulating the mechanical environment of the chondrocyte.
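A toy version of the inverse step described above is sketched below: a Nelder-Mead search adjusts the PCM modulus of a crude series-compliance surrogate until a predicted cell-strain ratio matches a target value. The surrogate, the fixed cell and ECM moduli and the target ratio are illustrative assumptions, not the boundary element model or data of the study.

```python
# Sketch of a Nelder-Mead inverse step: adjust the PCM modulus of a simple
# surrogate until the predicted cell strain ratio matches a "measured" value.
# The two-spring-in-series surrogate and the target value are illustrative.
import numpy as np
from scipy.optimize import minimize

E_CELL, E_ECM = 1.0, 250.0               # kPa, fixed cell and matrix moduli (illustrative)
applied_strain = 0.10

def cell_strain_ratio(E_pcm):
    """Surrogate: cell, PCM and ECM compliances in series share the applied strain."""
    compliances = np.array([1.0 / E_CELL, 1.0 / E_pcm, 1.0 / E_ECM])
    strains = applied_strain * compliances / compliances.sum()
    return strains[0] / applied_strain    # cell strain normalised by applied strain

target_ratio = 0.97                       # stand-in for an experimentally observed value

def cost(x):
    return (cell_strain_ratio(abs(x[0])) - target_ratio) ** 2

fit = minimize(cost, x0=[10.0], method="Nelder-Mead")
print("estimated PCM modulus [kPa]:", abs(fit.x[0]))
```

The derivative-free simplex search is convenient here because each cost evaluation would, in the real problem, require a full boundary element solution.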
---
paper_title: Intraspecies and Interspecies Comparison of the Compressive Properties of the Medial Meniscus
paper_content:
Quantification of the compressive material properties of the meniscus is of paramount importance, creating a “gold-standard” reference for future research. The purpose of this study was to determine compressive properties in six animal models (baboon, bovine, canine, human, lapine, and porcine) at six topographical locations. It was hypothesized that topographical variation of the compressive properties would be found in each animal model and that interspecies variations would also be exhibited. To test these hypotheses, creep and recovery indentation experiments were performed on the meniscus using a creep indentation apparatus and analyzed via a finite element optimization method to determine the material properties. Results show significant intraspecies and interspecies variation in the compressive properties among the six topographical locations, with the moduli exhibiting the highest values in the anterior portion. For example, the anterior location of the human meniscus has an aggregate modulus of 160 ± 40 kPa, whereas the central and posterior portions exhibit aggregate moduli of 100 ± 30 kPa. Interspecies comparison of the aggregate moduli identifies the lapine anterior location having the highest value (450 ± 120 kPa) and the human posterior location having the lowest (100 ± 30 kPa). These baseline values of compressive properties will be of help in future meniscal repair efforts.
---
paper_title: A nonlinear biphasic fiber-reinforced porohyperviscoelastic model of articular cartilage incorporating fiber reorientation and dispersion.
paper_content:
A nonlinear biphasic fiber-reinforced porohyperviscoelastic (BFPHVE) model of articular cartilage incorporating fiber reorientation effects during applied load was used to predict the response of ovine articular cartilage at relatively high strains (20%). The constitutive material parameters were determined using a coupled finite element-optimization algorithm that utilized stress relaxation indentation tests at relatively high strains. The proposed model incorporates the strain-hardening, tension-compression, permeability, and finite deformation nonlinearities that inherently exist in cartilage, and accounts for effects associated with fiber dispersion and reorientation and intrinsic viscoelasticity at relatively high strains. A new optimization cost function was used to overcome problems associated with large peak-to-peak differences between the predicted finite element and experimental loads that were due to the large strain levels utilized in the experiments. The optimized material parameters were found to be insensitive to the initial guesses. Using experimental data from the literature, the model was also able to predict both the lateral displacement and reaction force in unconfined compression, and the reaction force in an indentation test with a single set of material parameters. Finally, it was demonstrated that neglecting the effects of fiber reorientation and dispersion resulted in poorer agreement with experiments than when they were considered. There was an indication that the proposed BFPHVE model, which includes the intrinsic viscoelasticity of the nonfibrillar matrix (proteoglycan), might be used to model the behavior of cartilage up to relatively high strains (20%). The maximum percentage error between the indentation force predicted by the FE model using the optimized material parameters and that measured experimentally was 3%.
---
paper_title: Diffusion coefficients of articular cartilage for different CT and MRI contrast agents.
paper_content:
In contrast enhanced magnetic resonance imaging (MRI) and computed tomography (CT), the equilibrium distribution of anionic contrast agent is expected to reflect the fixed charged density (FCD) of articular cartilage. Diffusion is mainly responsible for the transport of contrast agents into cartilage. In osteoarthritis, cartilage composition changes at early stages of disease, and solute diffusion is most likely affected. Thus, investigation of contrast agent diffusion could enable new methods for imaging of cartilage composition. The aim of this study was to determine the diffusion coefficient of four contrast agents (ioxaglate, gadopentetate, iodide, gadodiamide) in bovine articular cartilage. The contrast agents were different in molecular size and charge. In peripheral quantitative CT experiments, penetration of contrast agent into the tissue was allowed either through the articular surface or through deep cartilage. To determine diffusion coefficients, a finite element model based on Fick's law was fitted to experimental data. Diffusion through articular surface was faster than through deep cartilage with every contrast agent. Iodide, being of atomic size, diffused into the cartilage significantly faster (q<0.05) than the other three contrast agents, for either transport direction. The diffusion coefficients of all clinical contrast agents (ioxaglate, gadopentetate and gadodiamide) were relatively low (142.8-253.7 μm(2)/s). In clinical diagnostics, such slow diffusion may not reach equilibrium and this jeopardizes the determination of FCD by standard methods. However, differences between diffusion through articular surface and deep cartilage, that are characterized by different tissue composition, suggest that diffusion coefficients may correlate with cartilage composition. Present method could therefore enable image-based assessment of cartilage composition by determination of diffusion coefficients within cartilage tissue.
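The transport analysis described above can be sketched as a one-dimensional solution of Fick's second law through the cartilage depth; in an inverse setting the diffusion coefficient D would be adjusted until the simulated concentration profile matches the imaging data. The geometry, trial D, boundary conditions and time span below are illustrative.

```python
# Sketch of the diffusion analysis: Fick's second law is solved through the
# cartilage depth with an explicit finite-difference scheme; D would then be
# tuned until the simulated contrast-agent profile matches imaging data.
import numpy as np

L = 1.0e-3          # cartilage thickness [m]
D = 200e-12         # trial diffusion coefficient [m^2/s] (~200 um^2/s)
nx, dt = 51, 0.05   # grid points, time step [s]
dx = L / (nx - 1)
assert D * dt / dx**2 < 0.5, "explicit scheme stability limit"

c = np.zeros(nx)    # concentration through the depth, initially zero
c_bath = 1.0        # normalised contrast-agent concentration in the bath

t_end = 3600.0      # simulate one hour of diffusion
for _ in range(int(t_end / dt)):
    c[0] = c_bath                                    # surface in contact with the bath
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2*c[1:-1] + c[:-2])
    c[-1] = c[-2]                                    # no-flux condition at deep cartilage

print("normalised concentration at mid-depth after 1 h:", round(c[nx // 2], 3))
```

Switching the Dirichlet boundary between the first and last node mimics the two transport directions compared in the study (through the articular surface versus through deep cartilage).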
---
paper_title: Evaluating knee replacement mechanics during ADL with PID-controlled dynamic finite element analysis.
paper_content:
Validated computational knee simulations are valuable tools for design phase development of knee replacement devices. Recently, a dynamic finite element (FE) model of the Kansas knee simulator was kinematically validated during gait and deep flexion cycles. In order to operate the computational simulator in the same manner as the experiment, a proportional-integral-derivative (PID) controller was interfaced with the FE model to control the quadriceps actuator excursion and produce a target flexion profile regardless of implant geometry or alignment conditions. The controller was also expanded to operate multiple actuators simultaneously in order to produce in vivo loading conditions at the joint during dynamic activities. Subsequently, the fidelity of the computational model was improved through additional muscle representation and inclusion of relative hip-ankle anterior-posterior (A-P) motion. The PID-controlled model was able to successfully recreate in vivo loading conditions (flexion angle, compressive joint load, medial-lateral load distribution or varus-valgus torque, internal-external torque, A-P force) for deep knee bend, chair rise, stance-phase gait and step-down activities.
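A stripped-down illustration of the PID idea is given below: a controller drives a first-order surrogate "joint" to track a target flexion profile. The plant model, gains and target profile are illustrative and are not taken from the Kansas knee simulator model.

```python
# Minimal PID sketch: the controller adjusts an actuator command so that a
# first-order surrogate joint tracks a target flexion profile. Plant, gains
# and target are illustrative placeholders for the FE simulator components.
import numpy as np

dt, t_end = 0.001, 2.0
time = np.arange(0.0, t_end, dt)
target = 45.0 * np.sin(np.pi * time / t_end) ** 2       # target flexion angle [deg]

kp, ki, kd = 20.0, 5.0, 0.01                            # illustrative PID gains
angle, integral, prev_err = 0.0, 0.0, 0.0
history = []

for goal in target:
    err = goal - angle
    integral += err * dt
    derivative = (err - prev_err) / dt
    actuator = kp * err + ki * integral + kd * derivative
    prev_err = err
    # First-order surrogate plant: flexion angle lags the actuator command
    angle += dt * (actuator - angle) / 0.05
    history.append(angle)

print("max tracking error [deg]:",
      round(float(np.max(np.abs(target - np.array(history)))), 2))
```

In the full simulation the "plant update" line is replaced by a time step of the dynamic finite element model, with the controller run on the feedback signal between steps.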
---
paper_title: Dynamic finite element knee simulation for evaluation of knee replacement mechanics.
paper_content:
In vitro pre-clinical testing of total knee replacement (TKR) devices is a necessary step in the evaluation of new implant designs. Whole joint knee simulators, like the Kansas knee simulator (KKS), provide a controlled and repeatable loading environment for comparative evaluation of component designs or surgical alignment under dynamic conditions. Experimental testing, however, is time and cost prohibitive for design-phase evaluation of tens or hundreds of design variations. Experimentally-verified computational models provide an efficient platform for analysis of multiple components, sizes, and alignment conditions. The purpose of the current study was to develop and verify a computational model of a dynamic, whole joint knee simulator. Experimental internal-external and valgus-varus laxity tests, followed by dynamic deep knee bend and gait simulations in the KKS were performed on three cadaveric specimens. Specimen-specific finite element (FE) models of posterior-stabilized TKR were created from magnetic resonance images and CAD geometry. The laxity data was used to optimize mechanical properties of tibiofemoral soft-tissue structures on a specimen-specific basis. Each specimen was subsequently analyzed in a computational model of the experimental KKS, simulating both dynamic activities. The computational model represented all joints and actuators in the experimental setup, including a proportional-integral-derivative (PID) controller to drive quadriceps actuation. The computational model was verified against six degree-of-freedom patellofemoral (PF) and tibiofemoral (TF) kinematics and actuator loading during both deep knee bend and gait activities, with good agreement in trends and magnitudes between model predictions and experimental kinematics; differences were less than 1.8 mm and 2.2° for PF and TF translations and rotations. The whole joint FE simulator described in this study can be applied to investigate a wide range of clinical and research questions.
---
paper_title: Stresses in the local collagen network of articular cartilage: a poroviscoelastic fibril-reinforced finite element study.
paper_content:
Osteoarthritis (OA) is a multifactorial disease, resulting in diarthrodial joint wear and eventually destruction. Swelling of cartilage, which is proportional to the amount of collagen damage, is an initial event of cartilage degeneration, so damage to the collagen fibril network is likely to be one of the earliest signs of OA cartilage degeneration. We propose that the local stresses and strains in the collagen fibrils, which cause the damage, cannot be determined dependably without taking the local arcade-like collagen-fibril structure into account. We investigate this using a poroviscoelastic fibril-reinforced FEA model. The constitutive fibril properties were determined by fitting numerical data to experimental results of unconfined compression and indentation tests on samples of bovine patellar articular cartilage. It was demonstrated that with this model the stresses and strains in the collagen fibrils can be calculated. It was also exhibited that fibrils with different orientations at the same location can be loaded differently, depending on the local architecture of the collagen network. To the best of our knowledge, the present model is the first that can account for these features. We conclude that the local stresses and strains in the articular cartilage are highly influenced by the local morphology of the collagen-fibril network.
---
paper_title: Characterization of articular cartilage by combining microscopic analysis with a fibril-reinforced finite-element model.
paper_content:
Load-bearing characteristics of articular cartilage are impaired during tissue degeneration. Quantitative microscopy enables in vitro investigation of cartilage structure but determination of tissue functional properties necessitates experimental mechanical testing. The fibril-reinforced poroviscoelastic (FRPVE) model has been used successfully for estimation of cartilage mechanical properties. The model includes realistic collagen network architecture, as shown by microscopic imaging techniques. The aim of the present study was to investigate the relationships between the cartilage proteoglycan (PG) and collagen content as assessed by quantitative microscopic findings, and model-based mechanical parameters of the tissue. Site-specific variation of the collagen network moduli, PG matrix modulus and permeability was analyzed. Cylindrical cartilage samples (n=22) were harvested from various sites of the bovine knee and shoulder joints. Collagen orientation, as quantitated by polarized light microscopy, was incorporated into the finite-element model. Stepwise stress-relaxation experiments in unconfined compression were conducted for the samples, and sample-specific models were fitted to the experimental data in order to determine values of the model parameters. For comparison, Fourier transform infrared imaging and digital densitometry were used for the determination of collagen and PG content in the same samples, respectively. The initial and strain-dependent fibril network moduli as well as the initial permeability correlated significantly with the tissue collagen content. The equilibrium Young's modulus of the nonfibrillar matrix and the strain dependency of permeability were significantly associated with the tissue PG content. The present study demonstrates that modern quantitative microscopic methods in combination with the FRPVE model are feasible methods to characterize the structure-function relationships of articular cartilage.
---
paper_title: A subject specific multibody model of the knee with menisci.
paper_content:
The menisci of the knee play an important role in joint function and our understanding of knee mechanics and tissue interactions can be enhanced through computational models of the tibio-menisco-femoral structure. Several finite element models of the knee that include meniscus-cartilage contact exist, but these models are typically limited to simplified boundary conditions. Movement simulation and musculoskeletal modeling can predict muscle forces, but are typically performed using the multibody method with simplified representation of joint structures. This study develops a subject specific computational model of the knee with menisci that can be incorporated into neuromusculoskeletal models within a multibody framework. Meniscus geometries from a 78-year-old female right cadaver knee were divided into 61 discrete elements (29 medial and 32 lateral) that were connected through 6x6 stiffness matrices. An optimization and design of experiments approach was used to determine parameters for the 6x6 stiffness matrices such that the force-displacement relationship of the meniscus matched that of a linearly elastic transversely isotropic finite element model for the same cadaver knee. Similarly, parameters for compliant contact models of tibio-menisco-femoral articulations were derived from finite element solutions. As a final step, a multibody knee model was developed and placed within a dynamic knee simulator model and the tibio-femoral and patello-femoral kinematics compared to an identically loaded cadaver knee. RMS errors between finite element displacement and multibody displacement after parameter optimization were 0.017 mm for the lateral meniscus and 0.051 mm for the medial meniscus. RMS errors between model predicted and experimental cadaver kinematics during a walk cycle were less than 11 mm translation and less than 7 degrees orientation. A small improvement in kinematics, compared to experimental measurements, was seen when the menisci were included versus a model without the menisci. With the menisci the predicted tibio-femoral contact force was significantly reduced on the lateral side (937 N peak force versus 633 N peak force), but no significant reduction was seen on the medial side.
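The discrete-element representation described above can be illustrated with the small sketch below, in which a 6x6 stiffness matrix maps the generalised relative displacement between two neighbouring meniscus elements to a generalised force; the diagonal stiffness values are illustrative placeholders rather than the optimized values of the study.

```python
# Sketch of the discrete-element connection: adjacent meniscus elements are
# coupled by a 6x6 stiffness matrix so that a generalised displacement
# (3 translations + 3 rotations) produces a generalised force. The diagonal
# stiffness values below are illustrative, not the study's optimized values.
import numpy as np

# Translational [N/mm] and rotational [N*mm/rad] stiffnesses of one connection
K = np.diag([150.0, 150.0, 300.0, 500.0, 500.0, 800.0])

# Relative displacement of element j with respect to element i:
# [dx, dy, dz, rx, ry, rz] in mm and rad
d = np.array([0.2, -0.1, 0.05, 0.01, 0.0, -0.02])

# Generalised force/moment transmitted through the connection: F = K @ d
F = K @ d
print("forces  [N]   :", F[:3])
print("moments [N*mm]:", F[3:])
```

In the study these matrix entries were tuned so that the assembled chain of elements reproduces the force-displacement response of the transversely isotropic finite element meniscus.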
---
paper_title: A finite element formulation and program to study transient swelling and load-carriage in healthy and degenerate articular cartilage
paper_content:
The theory of poroelasticity is extended to include physico-chemical swelling and used to predict the transient responses of normal and degenerate articular cartilage to both chemical and mechanical loading; with emphasis on isolating the influence of the major parameters which govern its deformation. Using a new hybrid element, our mathematical relationships were implemented in a purpose-built poroelastic finite element analysis algorithm (u-pi-c program) which was used to resolve the nature of the coupling between the mechanical and chemical responses of cartilage when subjected to ionic transport across its membranous skeleton. Our results demonstrate that one of the roles of the strain-dependent matrix permeability is to limit the rate of transmission of stresses from the fluid to the collagen-proteoglycan solid skeleton in the incipient stages of loading, and that the major contribution of the swelling pressure is that of preventing any excessive deformation of the matrix.
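For reference, the physico-chemical swelling pressure invoked above is often represented in such models by an ideal Donnan osmotic pressure of the form below (a standard expression quoted here for context rather than taken from this paper), with R the gas constant, T the absolute temperature, c^F the fixed charge density and c* the external salt concentration:

```latex
% Ideal Donnan osmotic (swelling) pressure commonly used in swelling cartilage models
\Delta\pi \;=\; R\,T\left(\sqrt{\left(c^{F}\right)^{2} + 4\left(c^{*}\right)^{2}} \;-\; 2\,c^{*}\right)
```

Non-ideal behaviour is typically accommodated by osmotic and activity coefficients multiplying the concentration terms.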
---
paper_title: A biphasic viscohyperelastic fibril-reinforced model for articular cartilage: formulation and comparison with experimental data.
paper_content:
Experiments in articular cartilage have shown highly nonlinear stress-strain curves under finite deformations, nonlinear tension-compression response as well as intrinsic viscous effects of the proteoglycan matrix and the collagen fibers. A biphasic viscohyperelastic fibril-reinforced model is proposed here, which is able to describe the intrinsic viscoelasticity of the fibrillar and nonfibrillar components of the solid phase, the nonlinear tension-compression response and the nonlinear stress-strain curves under tension and compression. A viscohyperelastic constitutive equation was used for the matrix and the fibers encompassing, respectively, a hyperelastic function used previously for the matrix and a hyperelastic law used before to represent biological connective tissues. This model, implemented in an updated Lagrangian finite element code, displayed good ability to follow experimental stress-strain equilibrium curves under tension and compression for human humeral cartilage. In addition, curve fitting of experimental reaction force and lateral displacement unconfined compression curves showed that the inclusion of viscous effects in the matrix allows the description of experimental data with material properties for the fibers consistent with experimental tensile tests, suggesting that intrinsic viscous effects in the matrix of articular cartilage plays an important role in the mechanical response of the tissue.
---
paper_title: Stress–relaxation of human patellar articular cartilage in unconfined compression: Prediction of mechanical response by tissue composition and structure
paper_content:
Mechanical properties of articular cartilage are controlled by tissue composition and structure. Cartilage function is sensitively altered during tissue degeneration, in osteoarthritis (OA). However, mechanical properties of the tissue cannot be determined non-invasively. In the present study, we evaluate the feasibility of predicting, without mechanical testing, the stress–relaxation response of human articular cartilage under unconfined compression. This is carried out by combining microscopic and biochemical analyses with composition-based mathematical modeling. Cartilage samples from five cadaver patellae were mechanically tested under unconfined compression. Depth-dependent collagen content and fibril orientation, as well as proteoglycan and water content were derived by combining Fourier transform infrared imaging, biochemical analyses and polarized light microscopy. Finite element models were constructed for each sample in unconfined compression geometry. First, composition-based fibril-reinforced poroviscoelastic swelling models, including composition and structure obtained from microscopical and biochemical analyses, were fitted to experimental stress–relaxation responses of three samples. Subsequently, optimized values of model constants, as well as compositional and structural parameters, were implemented in the models of two additional samples to validate the optimization. Theoretical stress–relaxation curves agreed with the experimental tests (R = 0.95–0.99). Using the optimized values of mechanical parameters, as well as composition and structure of additional samples, we were able to predict their mechanical behavior in unconfined compression, without mechanical testing (R = 0.98). Our results suggest that specific information on tissue composition and structure might enable assessment of cartilage mechanics without mechanical testing.
---
paper_title: A composition-based cartilage model for the assessment of compositional changes during cartilage damage and adaptation
paper_content:
Objective: The composition of articular cartilage changes with progression of osteoarthritis. Since compositional changes are associated with changes in the mechanical properties of the tissue, they are relevant for understanding how mechanical loading induces progression. The objective of this study is to present a computational model of articular cartilage which enables study of the interaction between composition and mechanics. Methods: Our previously developed fibril-reinforced poroviscoelastic swelling model for articular cartilage was combined with our tissue composition-based model. In the combined model both the depth- and strain-dependencies of the permeability are governed by tissue composition. All local mechanical properties in the combined model are directly related to the local composition of the tissue, i.e., to the local amounts of proteoglycans and collagens and to tissue anisotropy. Results: Solely based on the composition of the cartilage, we were able to predict the equilibrium and transient response of articular cartilage during confined compression, unconfined compression, indentation and two different 1D-swelling tests, simultaneously. Conclusion: Since both the static and the time-dependent mechanical properties have now become fully dependent on tissue composition, the model allows assessing the mechanical consequences of compositional changes seen during osteoarthritis without further assumptions. This is a major step forward in quantitative evaluations of osteoarthritis progression.
---
paper_title: A Conewise Linear Elasticity Mixture Model for the Analysis of Tension-Compression Nonlinearity in Articular Cartilage
paper_content:
A biphasic mixture model is developed which can account for the observed tension-compression nonlinearity of cartilage by employing the continuum-based Conewise Linear Elasticity (CLE) model of Curnier et al. (J Elasticity 37:1–38, 1995) to describe the solid phase of the mixture. In this first investigation, the orthotropic octantwise linear elasticity model was reduced to the more specialized case of cubic symmetry, to reduce the number of elastic constants from twelve to four. Confined and unconfined compression stress-relaxation, and torsional shear testing were performed on each of nine bovine humeral head articular cartilage cylindrical plugs from 6-month-old calves. Using the CLE model with cubic symmetry, the aggregate modulus in compression and axial permeability were obtained from confined compression (H_{-A} = 0.64±0.22 MPa, k_z = 3.62±0.97×10^-16 m^4/N·s, r^2 = 0.95±0.03), the tensile modulus, compressive Poisson ratio and radial permeability were obtained from unconfined compression (E_{+Y} = 12.75±1.56 MPa, ν_- = 0.03±0.01, k_r = 6.06±2.10×10^-16 m^4/N·s, r^2 = 0.99±0.00), and the shear modulus was obtained from torsional shear (μ = 0.17±0.06 MPa). The model was also employed to successfully predict the interstitial fluid pressure at the center of the cartilage plug in unconfined compression (r^2 = 0.98±0.01). The results of this study demonstrate that the integration of the CLE model with the biphasic mixture theory can provide a model of cartilage which can successfully curve-fit three distinct testing configurations while producing material parameters consistent with previous reports in the literature.
---
paper_title: Experimental Verification and Theoretical Prediction of Cartilage Interstitial Fluid Pressurization At an Impermeable Contact Interface in Confined Compression
paper_content:
Interstitial fluid pressurization has long been hypothesized to play a fundamental role in the load support mechanism and frictional response of articular cartilage. However, to date, few experimental studies have been performed to verify this hypothesis from direct measurements. The first objective of this study was to investigate experimentally the hypothesis that cartilage interstitial fluid pressurization does support the great majority of the applied load, in the testing configurations of confined compression creep and stress relaxation. The second objective was to investigate the hypothesis that the experimentally observed interstitial fluid pressurization could also be predicted using the linear biphasic theory of Mow et al. (J. Biomech. Engng ASME, 102, 73-84, 1980). Fourteen bovine cartilage samples were tested in a confined compression chamber fitted with a microchip piezoresistive transducer to measure interstitial fluid pressure, while simultaneously measuring (during stress relaxation) or prescribing (during creep) the total stress. It was found that interstitial fluid pressure supported more than 90% of the total stress for durations as long as 725 +/- 248 s during stress relaxation (mean +/- S.D., n = 7), and 404 +/- 229 s during creep (n = 7). When comparing experimental measurements of the time-varying interstitial fluid pressure against predictions from the linear biphasic theory, nonlinear coefficients of determination r2 = 0.871 +/- 0.086 (stress relaxation) and r2 = 0.941 +/- 0.061 (creep) were found. The results of this study provide some of the most direct evidence to date that interstitial fluid pressurization plays a fundamental role in cartilage mechanics; they also indicate that the mechanism of fluid load support in cartilage can be properly predicted from theory.
---
paper_title: Mechanics of chondrocyte hypertrophy
paper_content:
Chondrocyte hypertrophy is a characteristic of osteoarthritis and dominates bone growth. Intra- and extracellular changes induced by metabolically active hypertrophic chondrocytes are known to contribute to hypertrophy. However, it is unknown to what extent these mechanical conditions together can be held responsible for the total magnitude of hypertrophy. The present paper aims to provide a quantitative, mechanically sound answer to that question. Addressing this aim requires a quantitative tool that captures the mechanical effects of collagen and proteoglycans, allows temporal changes in tissue composition, and can compute cell and tissue deformations. These requirements are met in our numerical model that is validated for articular cartilage mechanics, which we apply to quantitatively explain a range of experimental observations related to hypertrophy. After validating the numerical approach for studying hypertrophy, the model is applied to evaluate the direct mechanical effects of axial tension and compression on hypertrophy (Hueter-Volkmann principle) and to explore why hypertrophy is reduced in the case of partially or fully compromised proteoglycan expression. Finally, a mechanical explanation is provided for the observation that chondrocytes do not hypertrophy when enzymatic collagen degradation is prohibited (S1Pcko knock-out mouse model). This paper shows that matrix turnover by metabolically active chondrocytes, together with externally applied mechanical conditions, can explain quantitatively the volumetric change of chondrocytes during hypertrophy. It provides a mechanistic explanation for the observation that collagen degradation results in chondrocyte hypertrophy, both under physiological and pathological conditions.
---
paper_title: Stresses in the local collagen network of articular cartilage: a poroviscoelastic fibril-reinforced finite element study.
paper_content:
Osteoarthritis (OA) is a multifactorial disease, resulting in diarthrodial joint wear and eventually destruction. Swelling of cartilage, which is proportional to the amount of collagen damage, is an initial event of cartilage degeneration, so damage to the collagen fibril network is likely to be one of the earliest signs of OA cartilage degeneration. We propose that the local stresses and strains in the collagen fibrils, which cause the damage, cannot be determined dependably without taking the local arcade-like collagen-fibril structure into account. We investigate this using a poroviscoelastic fibril-reinforced FEA model. The constitutive fibril properties were determined by fitting numerical data to experimental results of unconfined compression and indentation tests on samples of bovine patellar articular cartilage. It was demonstrated that with this model the stresses and strains in the collagen fibrils can be calculated. It was also shown that fibrils with different orientations at the same location can be loaded differently, depending on the local architecture of the collagen network. To the best of our knowledge, the present model is the first that can account for these features. We conclude that the local stresses and strains in the articular cartilage are highly influenced by the local morphology of the collagen-fibril network.
---
paper_title: A fibril-reinforced poroviscoelastic swelling model for articular cartilage.
paper_content:
From a mechanical point of view, the most relevant components of articular cartilage are the tight and highly organized collagen network together with the charged proteoglycans. Due to the fixed charges of the proteoglycans, the cation concentration inside the tissue is higher than in the surrounding synovial fluid. This excess of ion particles leads to an osmotic pressure difference, which causes swelling of the tissue. The fibrillar collagen network resists straining and swelling pressures. This combination makes cartilage a unique, highly hydrated and pressurized tissue, enforced with a strained collagen network. Many theories to explain articular cartilage behavior under loading, expressed in computational models that either include the swelling behavior or the properties of the anisotropic collagen structure, can be found in the literature. The most common tests used to determine the mechanical quality of articular cartilage are those of confined compression, unconfined compression, indentation and swelling. All theories currently available in the literature can explain the cartilage response occurring in some of the above tests, but none of them can explain these for all of the tests. We hypothesized that a model including simultaneous mathematical descriptions of (1) the swelling properties due to the fixed-charge densities of the proteoglycans and (2) the anisotropic viscoelastic collagen structure, can explain all these tests simultaneously. To study this hypothesis we extended our fibril-reinforced poroviscoelastic finite element model with our biphasic swelling model. We have shown that the newly developed fibril-reinforced poroviscoelastic swelling (FPVES) model for articular cartilage can simultaneously account for the reaction force during swelling, confined compression, indentation and unconfined compression as well as the lateral deformation during unconfined compression. Using this theory it is possible to analyze the link between the collagen network and the swelling properties of articular cartilage.
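Biphasic swelling models of this kind are commonly built on an ideal Donnan osmotic pressure generated by the fixed charge density of the proteoglycans. The sketch below shows that standard expression; the temperature, concentrations and osmotic coefficient are assumed values, and this is not necessarily the exact form used by the authors.

```python
# Ideal Donnan osmotic pressure difference for a monovalent salt, the kind of
# swelling-pressure term used in biphasic swelling models of cartilage.
# Illustrative sketch with assumed parameter values only.

import math

R = 8.314   # J/(mol K), gas constant
T = 310.0   # K, body temperature (assumed)

def donnan_pressure(c_fixed: float, c_external: float, phi: float = 1.0) -> float:
    """Osmotic pressure difference (Pa) from ideal Donnan equilibrium.

    c_fixed    : fixed charge density of the proteoglycans (mol/m^3, i.e. mM)
    c_external : external (bath) salt concentration (mol/m^3)
    phi        : osmotic coefficient (1.0 = ideal solution)
    """
    internal_osmolarity = math.sqrt(c_fixed**2 + 4.0 * c_external**2)
    return phi * R * T * (internal_osmolarity - 2.0 * c_external)

if __name__ == "__main__":
    # Hypothetical values: 150 mM fixed charge density, 150 mM external NaCl
    dp = donnan_pressure(c_fixed=150.0, c_external=150.0)
    print(f"swelling pressure ~ {dp / 1000:.0f} kPa")
```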
---
paper_title: A composition-based cartilage model for the assessment of compositional changes during cartilage damage and adaptation
paper_content:
Objective: The composition of articular cartilage changes with progression of osteoarthritis. Since compositional changes are associated with changes in the mechanical properties of the tissue, they are relevant for understanding how mechanical loading induces progression. The objective of this study is to present a computational model of articular cartilage which enables the interaction between composition and mechanics to be studied. Methods: Our previously developed fibril-reinforced poroviscoelastic swelling model for articular cartilage was combined with our tissue composition-based model. In the combined model both the depth- and strain-dependencies of the permeability are governed by tissue composition. All local mechanical properties in the combined model are directly related to the local composition of the tissue, i.e., to the local amounts of proteoglycans and collagens and to tissue anisotropy. Results: Solely based on the composition of the cartilage, we were able to predict the equilibrium and transient response of articular cartilage during confined compression, unconfined compression, indentation and two different 1D-swelling tests, simultaneously. Conclusion: Since both the static and the time-dependent mechanical properties have now become fully dependent on tissue composition, the model allows assessing the mechanical consequences of compositional changes seen during osteoarthritis without further assumptions. This is a major step forward in quantitative evaluations of osteoarthritis progression.
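Composition-based poroelastic models typically make the hydraulic permeability depend on the local hydration, so that both depth (via composition) and compressive strain modulate fluid flow. The sketch below shows one widely used strain-dependent permeability law as an illustration; the functional form and constants are generic assumptions, not necessarily those of this paper.

```python
# Generic strain-dependent permeability law often used in poroelastic cartilage
# models: k = k0 * ((1 + e) / (1 + e0))**M, where e is the current void ratio.
# Constants below are assumed for illustration only.

def permeability(void_ratio: float,
                 k0: float = 2.0e-15,   # m^4/(N s), initial permeability (assumed)
                 e0: float = 4.0,       # initial void ratio (~80% fluid fraction)
                 M: float = 5.0) -> float:
    """Permeability after a change in hydration; compaction (e < e0) lowers k."""
    return k0 * ((1.0 + void_ratio) / (1.0 + e0)) ** M

if __name__ == "__main__":
    for e in (4.0, 3.0, 2.0):   # progressively compacted tissue
        print(f"void ratio {e:.1f} -> k = {permeability(e):.2e} m^4/(N s)")
```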
---
paper_title: Swelling of articular cartilage and other connective tissues: electromechanochemical forces.
paper_content:
We have measured the relationship between tissue swelling stress and consolidation for bovine articular cartilage and corneal stroma in uniaxial confined compression as a function of bath ionic strength. Our experimental protocol and results clearly demonstrate that two concentration-dependent material properties are necessary to describe the chemical dependence of tissue swelling stress in uniaxial compression over the range of deformations and concentrations explored. A general electromechanochemical model for the swelling stress of charged connective tissues is developed. The model focuses on the role of charged matrix macromolecules in determining the mechanical behavior of the tissue. A constitutive relation for the swelling stress in uniaxial confined compression is formulated and the concentration dependence of the material properties of articular cartilage and corneal stroma is determined. The associated free swelling behavior of cartilage and cornea specimens is computed from these results and is found to compare favorably with data from the literature.
---
paper_title: A cross-validation of the biphasic poroviscoelastic model of articular cartilage in unconfined compression, indentation, and confined compression.
paper_content:
The biphasic poroviscoelastic (BPVE) model was curve fit to the simultaneous relaxation of reaction force and lateral displacement exhibited by articular cartilage in unconfined compression (n=18). Model predictions were also made for the relaxation observed in reaction force during indentation with a porous plane-ended metal indenter (n=4), indentation with a nonporous plane ended metal indenter (n=4), and during confined compression (n=4). Each prediction was made using material parameters resulting from curve fits of the unconfined compression response of the same tissue. The BPVE model was able to account for both the reaction force and the lateral displacement during unconfined compression very well. Furthermore, model predictions for both indentation and confined compression also followed the experimental data well. These results provide substantial evidence for the efficacy of the biphasic poroviscoelastic model for articular cartilage, as no successful cross-validation of a model simulation has been demonstrated using other mathematical models.
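The curve-fitting step described above is, in general form, a nonlinear least-squares problem: model parameters are adjusted until the simulated relaxation matches the measurement. The sketch below shows the generic pattern with scipy, using a single-exponential surrogate in place of the biphasic poroviscoelastic solver (which in the study is a full simulation of the tissue); the data and function names are hypothetical.

```python
# Generic parameter-fitting pattern for stress-relaxation data.
# A single-exponential surrogate stands in for the biphasic poroviscoelastic
# model; in practice the forward model would be a numerical tissue simulation.

import numpy as np
from scipy.optimize import curve_fit

def relaxation_surrogate(t, peak_stress, eq_stress, tau):
    """Toy relaxation curve: exponential decay from peak to equilibrium stress."""
    return eq_stress + (peak_stress - eq_stress) * np.exp(-t / tau)

if __name__ == "__main__":
    # Hypothetical "measured" relaxation data (MPa) with a little noise
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 600.0, 120)                      # seconds
    measured = relaxation_surrogate(t, 0.50, 0.12, 90.0)
    measured += rng.normal(0.0, 0.005, size=t.size)

    popt, _ = curve_fit(relaxation_surrogate, t, measured, p0=[0.4, 0.1, 60.0])
    peak, eq, tau = popt
    print(f"fitted peak={peak:.3f} MPa, equilibrium={eq:.3f} MPa, tau={tau:.1f} s")
```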
---
paper_title: Compressive and tensile properties of articular cartilage in axial loading are modulated differently by osmotic environment.
paper_content:
Aims of the present study were to test the hypotheses that (1) the compressive properties of articular cartilage are affected more by changes in the medium ionic concentration than the tensile properties, (2) collagen network controls the compression-tension nonlinearity of articular cartilage, and (3) proteoglycan (PG) and collagen contents are primary determinants of the compressive and tensile properties of cartilage, respectively. These hypotheses were experimentally tested by axial compressive and tensile tests (perpendicular to the cartilage surface) of bovine articular cartilage samples immersed in 0.005 M (n=6), 0.15M (n=12) and 1.0M (n=6) saline solutions. Compressive and tensile behaviour was analyzed by a nonlinear fibril-reinforced poroelastic model. Tissue PG and collagen contents were measured using Fourier transform infrared imaging spectroscopy (FT-IRIS). The compressive modulus of cartilage varied significantly (n=6, p<0.05) as the medium concentration changed. The tensile modulus changed significantly only as the medium concentration was reduced from 0.15 to 0.005 M (n=6, p<0.05). The fibril-reinforced poroelastic model with stiff, nonlinear collagen fibrils predicted the experimentally measured compression-tension nonlinearity of cartilage. Tissue PG and collagen contents accounted for the compressive and tensile properties of cartilage.
---
paper_title: Cell deformation behavior in mechanically loaded rabbit articular cartilage 4 weeks after anterior cruciate ligament transection
paper_content:
OBJECTIVE ::: Chondrocyte stresses and strains in articular cartilage are known to modulate tissue mechanobiology. Cell deformation behavior in cartilage under mechanical loading is not known at the earliest stages of osteoarthritis. Thus, the aim of this study was to investigate the effect of mechanical loading on volume and morphology of chondrocytes in the superficial tissue of osteoarthritic cartilage obtained from anterior cruciate ligament transected (ACLT) rabbit knee joints, 4 weeks after intervention. ::: ::: ::: METHODS ::: A unique custom-made microscopy indentation system with dual-photon microscope was used to apply controlled 2 MPa force-relaxation loading on patellar cartilage surfaces. Volume and morphology of chondrocytes were analyzed before and after loading. Also global and local tissue strains were calculated. Collagen content, collagen orientation and proteoglycan content were quantified with Fourier transform infrared microspectroscopy, polarized light microscopy and digital densitometry, respectively. ::: ::: ::: RESULTS ::: Following the mechanical loading, the volume of chondrocytes in the superficial tissue increased significantly in ACLT cartilage by 24% (95% confidence interval (CI) 17.2-31.5, P < 0.001), while it reduced significantly in contralateral group tissue by -5.3% (95% CI -8.1 to -2.5, P = 0.003). Collagen content in ACLT and contralateral cartilage were similar. PG content was reduced and collagen orientation angle was increased in the superficial tissue of ACLT cartilage compared to the contralateral cartilage. ::: ::: ::: CONCLUSIONS ::: We found the novel result that chondrocyte deformation behavior in the superficial tissue of rabbit articular cartilage is altered already at 4 weeks after ACLT, likely because of changes in collagen fibril orientation and a reduction in PG content.
---
paper_title: Regulatory volume decrease (RVD) by isolated and in situ bovine articular chondrocytes
paper_content:
Articular chondrocytes in vivo are exposed to a changing osmotic environment under both physiological (static load) and pathological (osteoarthritis) conditions. Such changes to matrix hydration could alter cell volume in situ and influence matrix metabolism. However, the ability of chondrocytes to regulate their volume in the face of osmotic perturbations has not been studied in detail. We have investigated the regulatory volume decrease (RVD) capacity of bovine articular chondrocytes within, and isolated from the matrix, before and following acute hypotonic challenge. Cell volumes were determined by visualising fluorescently-labelled chondrocytes using confocal laser scanning microscopy (CLSM) at 21°C. Chondrocytes in situ were grouped into superficial (SZ), mid (MZ), and deep zones (DZ). When exposed to 180mOsm or 250mOsm hypotonic challenge, cells in situ swelled rapidly (within ∼90 sec). Chondrocytes then exhibited rapid RVD (t1/2 ∼ 8 min), with cells from all zones returning to within ∼3% of their initial volume after 20 min. There was no significant difference in the rates of RVD between chondrocytes in the three zones. Similarly, no difference in the rate of RVD was observed for an osmotic shock from 280 to 250 or 180mOsm. Chondrocytes isolated from the matrix into medium of 380mOsm and then exposed to 280mOsm showed an identical RVD response to that of in situ cells. The RVD response of in situ cells was inhibited by REV 5901. The results suggested that the signalling pathways involved in RVD remained intact after chondrocyte isolation from cartilage and thus it was likely that there was no role for cell-matrix interactions in mediating RVD. © 2001 Wiley-Liss, Inc.
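The RVD half-time quoted above (t1/2 of roughly 8 min) is the kind of quantity obtained by fitting an exponential recovery to the normalized cell-volume trace. A minimal sketch with hypothetical data:

```python
# Estimate a regulatory-volume-decrease half-time by fitting an exponential
# recovery of normalized cell volume toward baseline. Data are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def rvd_curve(t_min, v_peak, rate_per_min):
    """Normalized volume: swells to v_peak at t=0, then decays back toward 1.0."""
    return 1.0 + (v_peak - 1.0) * np.exp(-rate_per_min * t_min)

if __name__ == "__main__":
    t = np.array([0, 2, 4, 6, 8, 10, 15, 20], dtype=float)          # minutes
    volume = np.array([1.30, 1.25, 1.20, 1.17, 1.14, 1.11, 1.06, 1.03])

    (v_peak, k), _ = curve_fit(rvd_curve, t, volume, p0=[1.3, 0.1])
    t_half = np.log(2.0) / k
    print(f"RVD half-time ~ {t_half:.1f} min (the paper reports ~8 min)")
```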
---
paper_title: Nanomechanical properties of individual chondrocytes and their developing growth factor-stimulated pericellular matrix
paper_content:
The nanomechanical properties of individual cartilage cells (chondrocytes) and their aggrecan and collagen-rich pericellular matrix (PCM) were measured via atomic force microscope nanoindentation using probe tips of two length scales (nanosized and micron-sized). The properties of cells freshly isolated from cartilage tissue (devoid of PCM) were compared to cells that were cultured for selected times (up to 28 days) in 3-D alginate gels which enabled PCM assembly and accumulation. Cells were immobilized and kept viable in pyramidal wells microfabricated into an array on silicon chips. Hertzian contact mechanics and finite element analyses were employed to estimate apparent moduli from the force versus depth curves. The effects of culture conditions on the resulting PCM properties were studied by comparing 10% fetal bovine serum to medium containing a combination of insulin growth factor-1 (IGF-1)+osteogenic protein-1 (OP-1). While both systems showed increases in stiffness with time in culture between days 7 and 28, the IGF-1+OP-1 combination resulted in a higher stiffness for the cell-PCM composite by day 28 and a higher apparent modulus of the PCM compared to the FBS-cultured cells. These studies give insight into the temporal evolution of the nanomechanical properties of the pericellular matrix relevant to the biomechanics and mechanobiology of tissue-engineered constructs for cartilage repair.
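For a spherical probe, the Hertzian contact model mentioned above reduces to F = (4/3)·E*·sqrt(R)·d^(3/2), which can be inverted to give an apparent modulus from a force-indentation point. The sketch below does this with assumed probe geometry and data; it is an orientation aid, not the study's processing pipeline (which also used finite element analysis).

```python
# Apparent modulus from a spherical-probe Hertz contact model:
#   F = (4/3) * E_star * sqrt(R) * depth**1.5,   E_star = E / (1 - nu**2)
# (rigid probe assumed). All numbers below are illustrative assumptions.

import math

def hertz_apparent_modulus(force_n: float, depth_m: float,
                           probe_radius_m: float, poisson: float = 0.5) -> float:
    """Return Young's modulus E (Pa) from one force-indentation data point."""
    e_star = 3.0 * force_n / (4.0 * math.sqrt(probe_radius_m) * depth_m ** 1.5)
    return e_star * (1.0 - poisson ** 2)

if __name__ == "__main__":
    # Hypothetical AFM point: 2 nN at 500 nm indentation with a 2.5 um probe
    E = hertz_apparent_modulus(force_n=2e-9, depth_m=500e-9,
                               probe_radius_m=2.5e-6, poisson=0.5)
    print(f"apparent modulus ~ {E:.0f} Pa")
```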
---
paper_title: The effect of collagen degradation on chondrocyte volume and morphology in bovine articular cartilage following a hypotonic challenge
paper_content:
Collagen degradation is one of the early signs of osteoarthritis. It is not known how collagen degradation affects chondrocyte volume and morphology. Thus, the aim of this study was to investigate the effect of enzymatically induced collagen degradation on cell volume and shape changes in articular cartilage after a hypotonic challenge. Confocal laser scanning microscopy was used for imaging superficial zone chondrocytes in intact and degraded cartilage exposed to a hypotonic challenge. Fourier transform infrared microspectroscopy, polarized light microscopy, and mechanical testing were used to quantify differences in proteoglycan and collagen content, collagen orientation, and biomechanical properties, respectively, between the intact and degraded cartilage. Collagen content decreased and collagen orientation angle increased significantly (p < 0.05) in the superficial zone cartilage after collagenase treatment, and the instantaneous modulus of the samples was reduced significantly (p < 0.05). Normalized cell volume and height 20 min after the osmotic challenge (with respect to the original volume and height) were significantly (p < 0.001 and p < 0.01, respectively) larger in the intact compared to the degraded cartilage. These findings suggest that the mechanical environment of chondrocytes, specifically collagen content and orientation, affects cell volume and shape changes in the superficial zone articular cartilage when exposed to osmotic loading. This emphasizes the role of collagen in modulating cartilage mechanobiology in diseased tissue.
---
paper_title: Alterations in the Mechanical Properties of the Human Chondrocyte Pericellular Matrix With Osteoarthritis
paper_content:
In articular cartilage, chondrocytes are surrounded by a pericellular matrix (PCM), which together with the chondrocyte has been termed the "chondron." While the precise function of the PCM is not known, there has been considerable speculation that it plays a role in regulating the biomechanical environment of the chondrocyte. In this study, we measured the Young's modulus of the PCM from normal and osteoarthritic cartilage using the micropipette aspiration technique, coupled with a newly developed axisymmetric elastic layered half-space model of the experimental configuration. Viable, intact chondrons were extracted from human articular cartilage using a new microaspiration-based isolation technique. In normal cartilage, the Young's modulus of the PCM was similar in chondrons isolated from the surface zone (68.9 +/- 18.9 kPa) as compared to the middle and deep layers (62.0 +/- 30.5 kPa). However, the mean Young's modulus of the PCM (pooled for the two zones) was significantly decreased in osteoarthritic cartilage (66.5 +/- 23.3 kPa versus 41.3 +/- 21.1 kPa, p < 0.001). In combination with previous theoretical models of cell-matrix interactions in cartilage, these findings suggest that the PCM has an important influence on the stress-strain environment of the chondrocyte that potentially varies with depth from the cartilage surface. Furthermore, the significant loss of PCM stiffness that was observed in osteoarthritic cartilage may affect the magnitude and distribution of biomechanical signals perceived by the chondrocytes.
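For orientation, the classical homogeneous half-space analysis of micropipette aspiration (Theret et al.) gives E ≈ 3·Φ·a·Δp/(2π·L); the study above instead used a layered half-space model, so the sketch below is only the simpler textbook relation, with assumed pipette radius, pressure and aspirated length, and a wall factor Φ taken as roughly 2.1.

```python
# Classical homogeneous half-space estimate for micropipette aspiration:
#   E = 3 * phi * a * dP / (2 * pi * L)
# a: pipette inner radius, dP: aspiration pressure, L: aspirated length,
# phi: wall-geometry factor (~2.1, assumed). The cited study used a layered
# half-space model instead; this sketch with assumed numbers is for orientation.

import math

def halfspace_modulus(radius_m: float, pressure_pa: float,
                      aspirated_length_m: float, phi: float = 2.1) -> float:
    """Young's modulus (Pa) of a homogeneous elastic half-space under aspiration."""
    return 3.0 * phi * radius_m * pressure_pa / (2.0 * math.pi * aspirated_length_m)

if __name__ == "__main__":
    # Hypothetical test: 3 um pipette radius, 20 kPa suction, 2 um aspirated length
    E = halfspace_modulus(3e-6, 2.0e4, 2e-6)
    print(f"estimated modulus ~ {E / 1000:.0f} kPa")
```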
---
paper_title: Chondrocyte deformation and local tissue strain in articular cartilage: A confocal microscopy study
paper_content:
It is well accepted that mechanical forces can modulate the metabolic activity of chondrocytes, although the specific mechanisms of mechanical signal transduction in articular cartilage are still unknown. One proposed pathway through which chondrocytes may perceive changes in their mechanical environment is directly through cellular deformation. An important step toward understanding the role of chondrocyte deformation in signal transduction is to determine the changes in the shape and volume of chondrocytes during applied compression of the tissue. Recently, a technique was developed for quantitative morphometry of viable chondrocytes within the extracellular matrix using three-dimensional confocal scanning laser microscopy. In the present study, this method was used to quantify changes in chondrocyte morphology and local tissue deformation in the surface, middle, and deep zones in explants of canine articular cartilage subjected to physiological levels of matrix deformation. The results indicated that at 15% surface-to-surface equilibrium strain in the tissue, a similar magnitude of local tissue strain occurs in the middle and deep zones. In the surface zone, local strains of 19% were observed, indicating that the compressive stiffness of the surface zone is significantly less than that of the middle and deep zones. With this degree of tissue deformation, significant decreases in cellular height of 26, 19, and 20% and in cell volume of 22, 16, and 17% were observed in the surface, middle, and deep zones, respectively. The deformation of chondrocytes in the surface zone was anisotropic, with significant lateral expansion occurring in the direction perpendicular to the local split-line pattern. When compression was removed, there was complete recovery of cellular morphology in all cases. These observations support the hypothesis that deformation of chondrocytes or a change in their volume may occur during in vivo joint loading and may have a role in the mechanical signal transduction pathway of articular cartilage.
---
paper_title: Osmotic loading of articular cartilage modulates cell deformations along primary collagen fibril directions.
paper_content:
Osmotic loading is known to modulate chondrocyte (cell) height, width and volume in articular cartilage. It is not known how cartilage architecture, especially the collagen fibril orientation, affects cell shape changes as a result of an osmotic challenge. Intact patellae of New Zealand white rabbits (n=6) were prepared for fluorescence imaging. Patellae were exposed to a hypotonic osmotic shock and cells were imaged before loading and 5-60 min after the osmotic challenge. Cell volumes and aspect ratios (height/width) were analyzed. A fibril-reinforced poroelastic swelling model with realistic primary collagen fibril orientations, i.e. horizontal, random and vertical orientation in the superficial, middle and deep zones, respectively and cells in different zones was used to estimate cell aspect ratios theoretically. As the medium osmolarity was reduced, cell aspect ratios decreased and volumes increased in the superficial zone of cartilage both experimentally (p<0.05) and theoretically. Theoretically determined aspect ratios of middle zone cells remained virtually constant, while they increased for deep zone cells as osmolarity was reduced. Findings of this study suggest that osmotic loading modulates chondrocyte shapes in accordance with the primary collagen fibril directions in articular cartilage.
---
paper_title: Mechanical loading of in situ chondrocytes in lapine retropatellar cartilage after anterior cruciate ligament transection
paper_content:
The aims of this study were (i) to quantify chondrocyte mechanics in fully intact articular cartilage attached to its native bone and (ii) to compare the chondrocyte mechanics for cells in healthy and early osteoarthritis (OA) tissue. We hypothesized that cells in the healthy tissue would deform less for given articular surface pressures than cells in the early OA tissue because of a loss of matrix integrity in early OA and the associated loss of structural integrity that is thought to protect chondrocytes. Chondrocyte dynamics were quantified by measuring the deformation response of the cells to controlled loading of fully intact cartilage using a custom-designed confocal indentation system. Early OA was achieved nine weeks following transection of the anterior cruciate ligament (ACL) in rabbit knees. Experiments were performed on the retropatellar cartilage of early OA rabbit knees (four joints and 48 cells), the corresponding intact contralateral control knees (four joints and 48 cells) and knees from normal control rabbits (four joints and 48 cells). Nine weeks following ACL transection, articular cartilage of the experimental joints showed substantial increases in thickness, and progression towards OA as assessed using histological grading. Local matrix strains in the superficial zone were greater for the experimental (38 ± 4%) compared with the contralateral (27 ± 5%) and normal (28 ± 4%) joints (p = 0.04). Chondrocyte deformations in the axial and depth directions were similar during indentation loading for all experimental groups. However, cell width increased more for the experimental cartilage chondrocytes (12 ± 1%) than the contralateral (6 ± 1%) and normal control chondrocytes (6 ± 1%; p < 0.001). On average, chondrocyte volume increased with indentation loading in the early OA cartilage (8 ± 3%, p = 0.001), while it decreased for the two control groups (−8 ± 2%, p = 0.002 for contralateral and −8 ± 1%, p = 0.004 for normal controls). We conclude from these results that our hypothesis of cell deformations in the early OA tissue was only partially supported: specifically, changes in chondrocyte mechanics in early OA were direction-specific with the primary axial deformations remaining unaffected despite vastly increased average axial matrix deformations. Surprisingly, chondrocyte deformations increased in early OA in specific transverse directions which have received little attention to date but might be crucial to chondrocyte signalling in early OA.
---
paper_title: Hypotonic challenge modulates cell volumes differently in the superficial zone of intact articular cartilage and cartilage explant
paper_content:
The objective of this study was to evaluate the effect of sample preparation on the biomechanical behaviour of chondrocytes. We compared the volumetric and dimensional changes of chondrocytes in the superficial zone (SZ) of intact articular cartilage and cartilage explant before and after a hypotonic challenge. Calcein-AM labelled SZ chondrocytes were imaged with confocal laser scanning microscopy through intact cartilage surfaces and through cut surfaces of cartilage explants. In order to clarify the effect of tissue composition on cell volume changes, Fourier Transform Infrared microspectroscopy was used for estimating the proteoglycan and collagen contents of the samples. In the isotonic medium (300 mOsm), there was a significant difference (p < 0.05) in the SZ cell volumes and aspect ratios between intact cartilage samples and cartilage explants. Changes in cell volumes at both short-term (2 min) and long-term (2 h) time points after the hypotonic challenge (180 mOsm) were significantly different (p < 0.05) between the groups. Further, proteoglycan content was found to correlate significantly (r^2 = 0.63, p < 0.05) with the cell volume changes in cartilage samples with intact surfaces. Collagen content did not correlate with cell volume changes. The results suggest that the biomechanical behaviour of chondrocytes following osmotic challenge is different in intact cartilage and in cartilage explant. This indicates that the mechanobiological responses of cartilage and cell signalling may be significantly dependent on the integrity of the mechanical environment of chondrocytes.
---
paper_title: Implementation of subject‐specific collagen architecture of cartilage into a 2D computational model of a knee joint—data from the osteoarthritis initiative (OAI)
paper_content:
A subject-specific collagen architecture of cartilage, obtained from T2 mapping of 3.0 T magnetic resonance imaging (MRI; data from the Osteoarthritis Initiative), was implemented into a 2D finite element model of a knee joint with fibril-reinforced poroviscoelastic cartilage properties. For comparison, we created two models with alternative collagen architectures, addressing the potential inaccuracies caused by the nonoptimal estimation of the collagen architecture from MRI. Also, two models with constant depth-dependent zone thicknesses obtained from the literature were created. The mechanical behavior of the models was analyzed and compared under axial impact loading of 846 N. Compared to the model with patient-specific collagen architecture, the cartilage model without tangentially oriented collagen fibrils in the superficial zone showed up to 69% decrease in maximum principal stress and fibril strain and 35% and 13% increase in maximum principal strain and pore pressure, respectively, in the superficial layers of the cartilage. The model with increased thickness for the superficial and middle zones, as obtained from the literature, demonstrated at most 73% increase in stress, 143% increase in fibril strain, and 26% and 23% decrease in strain and pore pressure, respectively, in the intermediate cartilage. The present results demonstrate that the computational model of a knee joint with the collagen architecture of cartilage estimated from patient-specific MRI or from the literature leads to different stress and strain distributions. The findings also suggest that minor errors in the analysis of collagen architecture from MRI, for example due to the analysis method or MRI resolution, can lead to alterations in knee joint stresses and strains. © 2012 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 31:10–22, 2012
---
paper_title: Mechanical characterization of articular cartilage by combining magnetic resonance imaging and finite-element analysis—a potential functional imaging technique
paper_content:
Magnetic resonance imaging (MRI) provides a method for non-invasive characterization of cartilage composition and structure. We aimed to see whether T1 and T2 relaxation times are related to proteoglycan (PG) and collagen-specific mechanical properties of articular cartilage. Specifically, we analyzed whether variations in the depthwise collagen orientation, as assessed by the laminae obtained from T2 profiles, affect the mechanical characteristics of cartilage. After MRI and unconfined compression tests of human and bovine patellar cartilage samples, fibril-reinforced poroviscoelastic finite-element models (FEM), with depthwise collagen orientations implemented from quantitative T2 maps (3 laminae for human, 3–7 laminae for bovine), were constructed to analyze the non-fibrillar matrix modulus (PG specific), fibril modulus (collagen specific) and permeability of the samples. In bovine cartilage, the non-fibrillar matrix modulus (R = −0.64, p < 0.05) as well as the initial permeability (R = 0.70, p < 0.05) correlated with T1. In bovine cartilage, T2 correlated positively with the initial fibril modulus (R = 0.62, p = 0.05). In human cartilage, the initial fibril modulus correlated negatively (R = −0.61, p < 0.05) with T2. Based on the simulations, cartilage with a complex collagen architecture (5 or 7 laminae), leading to high bulk T2 due to magic angle effects, provided higher compressive stiffness than tissue with a simple collagen architecture (3 laminae). Our results suggest that T1 reflects PG-specific mechanical properties of cartilage. High T2 is characteristic of soft cartilage with a classical collagen architecture. Contradictorily, high bulk T2 can also be found in stiff cartilage with a multilaminar collagen fibril network. By merging MRI and FEM, the present study establishes a step toward functional imaging of articular cartilage.
---
paper_title: Proteoglycan and collagen sensitive MRI evaluation of normal and degenerated articular cartilage
paper_content:
Quantitative magnetic resonance imaging (MRI) techniques have earlier been developed to characterize the structure and composition of articular cartilage. Particularly, Gd-DTPA(2-)-enhanced T1 imaging is sensitive to cartilage proteoglycan content, while T2 relaxation time mapping is indicative of the integrity and arrangement of the collagen network. However, the ability of these techniques to detect early osteoarthrotic changes in cartilage has not been demonstrated. In this study, normal and spontaneously degenerated bovine patellar cartilage samples (n=32) were investigated in vitro using the aforementioned techniques. For reference, mechanical, histological and biochemical properties of the adjacent tissue were determined, and a grading system, the cartilage quality index (CQI), was used to score the structural and functional integrity of each sample. As cartilage degeneration progressed, a statistically significant increase in the superficial T2 (r=0.494, p<0.05) and a decrease in superficial and bulk T1 in the presence of Gd-DTPA(2-) (r=-0.681 and -0.688 (p<0.05), respectively) were observed. Gd-DTPA(2-)-enhanced T1 imaging served as the best predictor of tissue integrity and accounted for about 50% of the variation in CQI. The present results reveal that changes in the quantitative MRI parameters studied are indicative of structural and compositional alterations as well as the mechanical impairment of spontaneously degenerated articular cartilage.
---
paper_title: Relaxation anisotropy in cartilage by NMR microscopy (μMRI) at 14‐μm resolution
paper_content:
To study the structural anisotropy and the magic-angle effect in articular cartilage, T1 and T2 images were constructed at a series of orientations of cartilage specimens in the magnetic field by using NMR microscopy (μMRI). An isotropic T1 and a strong anisotropic T2 were observed across the cartilage tissue thickness. Three distinct regions in the microscopic MR images corresponded approximately to the superficial, transitional, and radial histological zones in the cartilage. The percentage decrease of T2 follows the pattern of the curve of (3cos^2θ - 1)^2 at the radial zone, where the collagen fibrils are perpendicular to the articular surface. In contrast, little orientational dependence of T2 was observed at the transitional zone, where the collagen fibrils are more randomly oriented. The result suggests that the interactions between water molecules and proteoglycans have a directional nature, which is somehow influenced by collagen fibril orientation. Hence, T2 anisotropy could serve as a sensitive and noninvasive marker for molecular-level orientations in articular cartilage.
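The orientation dependence quoted above follows the residual dipolar term (3cos²θ − 1)², which vanishes at the magic angle of about 54.7°. A short worked check (illustrative only):

```python
# Orientation dependence of the residual dipolar interaction that drives the
# T2 anisotropy: proportional to (3*cos(theta)**2 - 1)**2. It vanishes at the
# magic angle, arccos(1/sqrt(3)) ~ 54.7 degrees.

import math

def dipolar_factor(theta_deg: float) -> float:
    """(3 cos^2(theta) - 1)^2 for a fibril at angle theta to the B0 field."""
    c = math.cos(math.radians(theta_deg))
    return (3.0 * c * c - 1.0) ** 2

if __name__ == "__main__":
    magic = math.degrees(math.acos(1.0 / math.sqrt(3.0)))
    print(f"magic angle = {magic:.1f} deg")
    for theta in (0.0, 30.0, magic, 90.0):
        print(f"theta = {theta:5.1f} deg -> factor = {dipolar_factor(theta):.3f}")
```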
---
paper_title: Contrast agent-enhanced computed tomography of articular cartilage: Association with tissue composition and properties
paper_content:
BACKGROUND ::: Contrast agent-enhanced computed tomography may enable the noninvasive quantification of glycosaminoglycan (GAG) content of articular cartilage. It has been reported that penetration of the negatively charged contrast agent ioxaglate (Hexabrix) increases significantly after enzymatic degradation of GAGs. However, it is not known whether spontaneous degradation of articular cartilage can be quantitatively detected with this technique. ::: ::: ::: PURPOSE ::: To investigate the diagnostic potential of contrast agent-enhanced cartilage tomography (CECT) in quantification of GAG concentration in normal and spontaneously degenerated articular cartilage by means of clinical peripheral quantitative computed tomography (pQCT). ::: ::: ::: MATERIAL AND METHODS ::: In this in vitro study, normal and spontaneously degenerated adult bovine cartilage (n=32) was used. Bovine patellar cartilage samples were immersed in 21 mM contrast agent (Hexabrix) solution for 24 hours at room temperature. After immersion, the samples were scanned with a clinical pQCT instrument. From pQCT images, the contrast agent concentration in superficial as well as in full-thickness cartilage was calculated. Histological and functional integrity of the samples was quantified with histochemical and mechanical reference measurements extracted from our earlier study. ::: ::: ::: RESULTS ::: Full diffusion of contrast agent into the deep cartilage was found to take over 8 hours. As compared to normal cartilage, a significant increase (11%, P<0.05) in contrast agent concentration was seen in the superficial layer of spontaneously degenerated samples. Significant negative correlations were revealed between the contrast agent concentration and the superficial or full-thickness GAG content of tissue (|R| > 0.5, P<0.01). Further, pQCT could be used to measure the thickness of patellar cartilage. ::: ::: ::: CONCLUSION ::: The present results suggest that CECT can be used to diagnose proteoglycan depletion in spontaneously degenerated articular cartilage with a clinical pQCT scanner. Possibly, the in vivo use of clinical pQCT for CECT arthrography of human joints is feasible.
---
paper_title: T2relaxation reveals spatial collagen architecture in articular cartilage: A comparative quantitative MRI and polarized light microscopic study
paper_content:
It has been suggested that orientational changes in the collagen network of articular cartilage account for the depthwise T2 anisotropy of MRI through the magic angle effect. To investigate the relationship between laminar T2 appearance and collagen organization (anisotropy), bovine osteochondral plugs (N = 9) were T2 mapped at 9.4T with cartilage surface normal to the static magnetic field. Collagen fibril arrangement of the same samples was studied with polarized light microscopy, a quantitative technique for probing collagen organization by analyzing its ability to rotate plane polarized light, i.e., birefringence (BF). Depthwise variation of safranin O-stained proteoglycans was monitored with digital densitometry. The spatially varying cartilage T2 followed the architectural arrangement of the collagen fibril network: a linear positive correlation between T2 and the reciprocal of BF was established in each sample, with r = 0.91 +/- 0.02 (mean +/- SEM, N = 9). The current results reveal the close connection between the laminar T2 structure and the collagen architecture in histologic zones.
---
paper_title: Prediction of biomechanical properties of articular cartilage with quantitative magnetic resonance imaging.
paper_content:
Quantitative magnetic resonance imaging (MRI) is the most potential non-invasive means for revealing the structure, composition and pathology of articular cartilage. Here we hypothesize that cartilage mechanical properties as determined by the macromolecular framework and their interactions can be accessed by quantitative MRI. To test this, adjacent cartilage disk pairs (n=32) were prepared from bovine proximal humerus and patellofemoral surfaces. For one sample, the tissue Young's modulus, aggregate modulus, dynamic modulus and Poisson's ratio were determined in unconfined compression. The adjacent disk was studied at 9.4T to determine the tissue T(2) relaxation time, sensitive to the integrity of the collagen network, and T(1) relaxation time in the presence of Gd-DTPA, a technique developed for the estimation of cartilage proteoglycan (PG) content. Quantitative MRI parameters were able to explain up to 87% of the variations in certain biomechanical parameters. Correlations were further improved when data from the proximal humerus was assessed separately. MRI parameters revealed a topographical variation similar to that of mechanical parameters. Linear regression analysis revealed that Young's modulus of cartilage may be characterized more completely by combining both collagen- and PG-sensitive MRI parameters. The present results suggest that quantitative MRI can provide important information on the mechanical properties of articular cartilage. The results are encouraging with respect to functional imaging of cartilage, although in vivo applicability may be limited by the inferior resolution of clinical MRI instruments.
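The last sentence refers to multiple linear regression of mechanical properties on a collagen-sensitive parameter (T2) and a PG-sensitive parameter (Gd-DTPA-enhanced T1) together. The sketch below illustrates that statistical idea with hypothetical numbers; it uses none of the study's data.

```python
# Multiple linear regression of Young's modulus on two MRI parameters
# (T2 and Gd-DTPA-enhanced T1). All data below are hypothetical illustrations.

import numpy as np

if __name__ == "__main__":
    t2_ms = np.array([45.0, 50.0, 55.0, 60.0, 65.0, 70.0])          # collagen-sensitive
    t1gd_ms = np.array([480.0, 450.0, 430.0, 400.0, 380.0, 350.0])  # PG-sensitive
    modulus_mpa = np.array([0.85, 0.78, 0.70, 0.60, 0.55, 0.45])

    # Design matrix with an intercept column; solve ordinary least squares.
    X = np.column_stack([np.ones_like(t2_ms), t2_ms, t1gd_ms])
    coef, *_ = np.linalg.lstsq(X, modulus_mpa, rcond=None)
    predicted = X @ coef
    r2 = 1.0 - np.sum((modulus_mpa - predicted) ** 2) / np.sum(
        (modulus_mpa - modulus_mpa.mean()) ** 2)
    print(f"intercept={coef[0]:.3f}, b_T2={coef[1]:.4f}, b_T1Gd={coef[2]:.4f}, R^2={r2:.2f}")
```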
---
paper_title: Partial Meniscectomy Changes Fluid Pressurization in Articular Cartilage in Human Knees
paper_content:
Partial meniscectomy is believed to change the biomechanics of the knee joint through alterations in the contact of articular cartilages and menisci. Although fluid pressure plays an important role in the load support mechanism of the knee, the fluid pressurization in the cartilages and menisci has been ignored in the finite element studies of the mechanics of meniscectomy. In the present study, a 3D fibril-reinforced poromechanical model of the knee joint was used to explore the fluid flow dependent changes in articular cartilage following partial medial and lateral meniscectomies. Six partial longitudinal meniscectomies were considered under relaxation, simple creep, and combined creep loading conditions. In comparison to the intact knee, partial meniscectomy not only caused a substantial increase in the maximum fluid pressure but also shifted the location of this pressure in the femoral cartilage. Furthermore, these changes were positively correlated to the size of meniscal resection. While in the intact joint, the location of the maximum fluid pressure was dependent on the loading conditions, in the meniscectomized joint the location was predominantly determined by the site of meniscal resection. The partial meniscectomy also reduced the rate of the pressure dissipation, resulting in even larger difference between creep and relaxation times as compared to the case of the intact knee. The knee joint became stiffer after meniscectomy because of higher fluid pressure at knee compression followed by slower pressure dissipation. The present study indicated the role of fluid pressurization in the altered mechanics of meniscectomized knees.
---
paper_title: Tissue engineering of functional articular cartilage: the current status
paper_content:
Osteoarthritis is a degenerative joint disease characterized by pain and disability. It affects all age groups, and 70% of people aged >65 have some degree of osteoarthritis. Natural cartilage repair is limited because chondrocyte density and metabolism are low and cartilage has no blood supply. The results of joint-preserving treatment protocols such as debridement, mosaicplasty, perichondrium transplantation and autologous chondrocyte implantation vary widely and the average long-term result is unsatisfactory. One reason for limited clinical success is that most treatments require new cartilage to be formed at the site of a defect. However, the mechanical conditions at such sites are unfavorable for repair of the original damaged cartilage. Therefore, it is unlikely that healthy cartilage would form at these locations. The most promising method to circumvent this problem is to engineer mechanically stable cartilage ex vivo and to implant that into the damaged tissue area. This review outlines the issues related to the composition and functionality of tissue-engineered cartilage. In particular, the focus will be on cell source, signaling molecules, scaffolds and mechanical stimulation. In addition, the current status of tissue engineering of cartilage will be discussed, with the focus on extracellular matrix content, structure and its functionality.
---
paper_title: Finite element modeling following partial meniscectomy: Effect of various size of resection
paper_content:
Introduction: Meniscal tears are a common occurrence in the human knee joint. Orthopaedic surgeons routinely perform surgery to remove a portion of the torn meniscus; this surgery is referred to as a partial meniscectomy. It has been shown that individuals who have a decreased amount of meniscus are likely to develop knee osteoarthritis. This research presents an analysis of the stresses in the knee joint for various amounts of partial meniscectomy. Methods: To analyse the stresses in the knee joint using the finite element method, an axisymmetric model was developed. Articular cartilage was considered as three layers, which were modelled as a poroelastic transversely isotropic superficial layer, poroelastic isotropic middle and deep layers, and an elastic isotropic calcified cartilage layer. Eight cases were modelled, including a knee joint with an intact meniscus and with 10%, 20%, 30%, 40%, 50%, 60% and 65% medial meniscectomy. Findings: Under the axial load of body weight, high contact stresses took place on the femoral articular cartilage surface with 40% removal of the meniscus. Further, with 30%, 40% and 50% meniscectomy, a significant amount of contact area was observed between the femoral and tibial articular cartilage. After 65% meniscectomy the maximal shear stress in the cartilage increased up to 225% compared to the knee with an intact meniscus. It appears that meniscectomies greater than 20% drastically increase the stresses in the knee joint.
---
paper_title: Analysis of partial meniscectomy and ACL reconstruction in knee joint biomechanics under a combined loading
paper_content:
Background: Despite partial meniscectomy and ligament reconstruction being the treatments of choice for meniscal and ligament injuries, respectively, knee joint osteoarthritis persists. Methods: A detailed nonlinear finite element model of the knee joint was developed to evaluate the biomechanics of the tibiofemoral joint under a 200 N drawer load with and without a 1500 N compression preload. The model incorporated the composite structure of cartilage and meniscus. The effects on joint response and articular contact pressure of unilateral partial meniscectomy, of changes in prestrain or material properties of the anterior cruciate ligament and of their combination were investigated. Findings: Compressive preload further increases anterior cruciate ligament strains/forces in drawer loading. Partial meniscectomy and perturbations in anterior cruciate ligament prestrain/material properties, alone or combined, substantially alter the load transfer via covered and uncovered areas of cartilage as well as the contact pressure distribution on cartilage. Partial meniscectomy, especially when combined with a slacker anterior cruciate ligament, diminishes the load via the affected meniscus, generating unloaded regions on the cartilage. Interpretation: Partial meniscectomy concurrent with a slack anterior cruciate ligament substantially alters cartilage contact pressures. These alterations further intensify in the event of greater external forces, larger meniscal resections and total anterior cruciate ligament rupture, thus suggesting a higher risk of joint degeneration.
---
paper_title: Prediction of collagen orientation in articular cartilage by a collagen remodeling algorithm
paper_content:
Objective: Tissue engineering is a promising method to treat damaged cartilage. So far it has not been possible to create tissue-engineered cartilage with an appropriate structural organization. It is envisaged that cartilage tissue engineering will significantly benefit from knowledge of how the collagen fiber orientation is directed by mechanical conditions. The goal of the present study is to evaluate whether a collagen remodeling algorithm based on mechanical loading can be corroborated by the collagen orientation in healthy cartilage. Methods: According to the remodeling algorithm, collagen fibrils align with a preferred fibril direction, situated between the positive principal strain directions. The remodeling algorithm was implemented in an axisymmetric finite element model of the knee joint. Loading as a result of typical daily activities was represented in three different phases: rest, standing and gait. Results: In the center of the tibial plateau the collagen fibrils run perpendicular to the subchondral bone. Just below the articular surface they bend over to merge with the articular surface. Halfway between the center and the periphery, the collagen fibrils bend over earlier, resulting in thicker superficial and transitional zones. Near the periphery fibrils in the deep zone run perpendicular to the articular surface and slowly bend over to angles of −45° and +45° with the articular surface. Conclusion: The collagen structure as predicted with the collagen remodeling algorithm corresponds very well with the collagen structure in healthy knee joints. This remodeling algorithm is therefore considered to be a valuable tool for developing loading protocols for tissue engineering of articular cartilage.
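The remodeling rule summarized under Methods — fibrils gradually reorienting toward a preferred direction lying between the positive principal strain directions — can be sketched as a simple 2D update loop. The code below is an illustrative toy version with an assumed strain state and step size, not the authors' finite element implementation.

```python
# Toy 2D version of a strain-driven collagen remodeling rule: a fibril rotates
# incrementally toward a preferred direction lying between the positive
# principal strain directions (weighted here by the strain magnitudes).
# Illustration only; the paper's algorithm runs inside a finite element model.

import numpy as np

def preferred_direction(strain_2x2: np.ndarray, fibril: np.ndarray) -> np.ndarray:
    """Unit vector between the tensile principal directions of a 2D strain tensor."""
    eigvals, eigvecs = np.linalg.eigh(strain_2x2)          # ascending eigenvalues
    weighted = []
    for i in range(2):
        if eigvals[i] > 0.0:                               # tensile principal strain
            v = eigvecs[:, i]
            if np.dot(v, fibril) < 0.0:                    # eigenvector sign is arbitrary
                v = -v
            weighted.append(eigvals[i] * v)
    if not weighted:                                       # no tension: keep direction
        return fibril
    direction = np.sum(weighted, axis=0)
    return direction / np.linalg.norm(direction)

def remodel(fibril: np.ndarray, strain_2x2: np.ndarray, step: float = 0.2) -> np.ndarray:
    """Rotate the fibril unit vector a fraction 'step' toward the preferred direction."""
    target = preferred_direction(strain_2x2, fibril)
    updated = (1.0 - step) * fibril + step * target
    return updated / np.linalg.norm(updated)

if __name__ == "__main__":
    strain = np.array([[0.05, 0.00],
                       [0.00, 0.02]])                      # assumed biaxial tension
    fibril = np.array([0.0, 1.0])                          # initially vertical fibril
    for _ in range(20):
        fibril = remodel(fibril, strain)
    print("fibril direction after remodeling:", np.round(fibril, 3))
```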
---
paper_title: Mechanical stimulation to stimulate formation of a physiological collagen architecture in tissue-engineered cartilage: a numerical study.
paper_content:
The load-bearing capacity of today's tissue-engineered (TE) cartilage is insufficient. The arcade-like collagen network in native cartilage plays an important role in its load-bearing properties. Inducing the formation of such collagen architecture in engineered cartilage can, therefore, enhance mechanical properties of TE cartilage. Considering the well-defined relationship between tensile strains and collagen alignment in the literature, we assume that cues for inducing this orientation should come from mechanical loading. In this study, strain fields prescribed by loading conditions of unconfined compression, sliding indentation and a novel loading regime of compression-sliding indentation are numerically evaluated to assess the probability that these would trigger a physiological collagen architecture. Results suggest that sliding indentation is likely to stimulate the formation of an appropriate superficial zone with parallel fibres. Adding lateral compression may stimulate the formation of a deep zone with perpendicularly aligned fibres. These insights may be used to improve loading conditions for cartilage tissue engineering.
---
paper_title: Effect of superficial collagen patterns and fibrillation of femoral articular cartilage on knee joint mechanics-a 3D finite element analysis.
paper_content:
Collagen fibrils of articular cartilage have specific depth-dependent orientations and the fibrils bend in the cartilage surface to exhibit split-lines. Fibrillation of superficial collagen takes place in osteoarthritis. We aimed to investigate the effect of superficial collagen fibril patterns and collagen fibrillation of cartilage on stresses and strains within a knee joint. A 3D finite element model of a knee joint with cartilage and menisci was constructed based on magnetic resonance imaging. The fibril-reinforced poroviscoelastic material properties with depth-dependent collagen orientations and split-line patterns were included in the model. The effects of joint loading on stresses and strains in cartilage with various split-line patterns and medial collagen fibrillation were simulated under axial impact loading of 1000 N. In the model, the collagen fibrils resisted strains along the split-line directions. This also increased stresses along the split-lines. On the contrary, contact and pore pressures were not affected by split-line patterns. Simulated medial osteoarthritis increased tissue strains in both medial and lateral femoral condyles, and contact and pore pressures in the lateral femoral condyle. This study highlights the importance of the collagen fibril organization, especially that indicated by split-line patterns, for the weight-bearing properties of articular cartilage. Osteoarthritic changes of cartilage in the medial femoral condyle created a possible failure point in the lateral femoral condyle. This study provides further evidence on the importance of the collagen fibril organization for the optimal function of articular cartilage.
---
paper_title: Structure–function relationships in osteoarthritic human hip joint articular cartilage
paper_content:
Objectives: It is currently poorly known how different structural and compositional components in human articular cartilage are related to their specific functional properties at different stages of osteoarthritis (OA). The objective of this study was to characterize the structure–function relationships of articular cartilage obtained from osteoarthritic human hip joints. Methods: Articular cartilage samples with their subchondral bone (n = 15) were harvested during hip replacement surgeries from human femoral necks. Stress–relaxation tests, Mankin scoring, and spectroscopic and microscopic methods were used to determine the biomechanical properties, OA grade, and the composition and structure of the samples. In order to obtain the mechanical material parameters for the samples, a fibril-reinforced poroviscoelastic model was fitted to the experimental data obtained from the stress–relaxation experiments. Results: The strain-dependent collagen network modulus (Efe) and the collagen orientation angle exhibited a negative linear correlation (r = −0.65), the permeability strain-dependency coefficient (M) and the collagen content exhibited a positive linear correlation (r = 0.56), and the non-fibrillar matrix modulus (Enf) exhibited a positive linear correlation with the proteoglycan content (r = 0.54). Conclusion: The study suggests that the increased collagen orientation angle during OA primarily impairs the collagen network and the tensile stiffness of cartilage in a strain-dependent manner, while the decreased collagen content in OA facilitates fluid flow out of the tissue, especially at high compressive strains. Thus, the results provide interesting and important information on the structure–function relationships of human hip joint cartilage and on the mechanisms at work during the progression of OA.
---
paper_title: Computational biomechanics of articular cartilage of human knee joint: effect of osteochondral defects.
paper_content:
The functional conditions of articular cartilage and its supporting bone are tightly coupled, as injury to either adversely affects the joint mechanical environment. The objective of this study was to quantitatively investigate the extent of alterations in the mechanical environment of the cartilage and knee joint in the presence of commonly observed osteochondral defects. An existing validated finite element model of a knee joint was used to construct a refined model of the lateral tibial compartment including the proximal tibial bony structures. The response was computed under compression forces of up to 2000 N while simulating localized bone damage, a horizontal split at the cartilage–bone interface, bone overgrowth, and the absence of deep vertical collagen fibrils. Localized tibial bone damage increased overall joint compliance and substantially altered the pattern and magnitude of contact pressures and cartilage strains in both the tibia and femur. These alterations were further exacerbated when bone damage was combined with a split at the cartilage base and the absence of deep vertical collagen fibrils. A local bone boss markedly changed contact pressures and strain patterns in the neighbouring cartilage. Bone bruise/fracture and overgrowth adversely perturbed the homeostatic balance in the mechanical environment of the articular cartilage surrounding and opposing the lesion, as well as the joint compliance. As such, they potentially contribute to the initiation and development of post-traumatic osteoarthritis.
---
paper_title: Indentation Stiffness of Repair Tissue after Autologous Chondrocyte Transplantation
paper_content:
Our main hypothesis was that the indentation stiffness of the repair tissue approaches the values of the adjacent cartilage 1 year after autologous chondrocyte transplantation. We also wanted to investigate the differences between osteochondritic lesions and full-thickness lesions. Thirty patients with cartilage lesions underwent autologous chondrocyte transplantation. The repair was evaluated arthroscopically, indentation stiffness was measured, and clinical evaluations were performed. The stiffness of the repair tissue improved to 62% (2.04 ± 0.83 N, mean ± SD) of that of the adjacent cartilage (3.58 ± 1.04 N). Fifty-three percent of the patients graded their knee as excellent or good and 47% graded their knee as fair at follow-up. In six patients the normalized stiffness was at least 80%, suggesting hyaline-like repair. The indentation stiffness of the osteochondritis dissecans lesion repairs (1.45 ± 0.46 N; n = 7) was lower than that of the non-osteochondritis dissecans repair sites (2.37 ± 0.72 N; n = 19). Delayed gadolinium-enhanced magnetic resonance imaging of cartilage (dGEMRIC) during the follow-up of four patients suggested proteoglycan replenishment, although all grafts showed low indentation values. Low stiffness values may indicate incomplete maturation or predominantly fibrous repair. The indentation analysis showed that the repair tissue stiffness could, in some cases, reach the same level as the adjacent cartilage, but there was large variation among the grafts.
---
paper_title: Creep behavior of the intact and meniscectomy knee joints
paper_content:
The mechanical functions of the menisci may be partially performed through fluid pressurization in the articular cartilages and menisci. This creep behavior has not previously been investigated in whole-knee-joint modeling. A three-dimensional finite element knee model was employed in the present study to explore the fluid-flow-dependent creep behavior of normal and meniscectomized knees. The model included the distal femur, tibia, fibula, articular cartilages, menisci, and four major ligaments. Articular cartilage and meniscus were modeled as a fluid-saturated solid matrix reinforced by a nonlinear, orthotropic, and site-specific collagen network. A 300 N compressive force, equal to half of body weight, was applied to the knee in full extension, followed by creep. The results showed that fluid pressurization played a substantial role in joint contact mechanics. The menisci bore more load as creep developed, leading to decreased stresses in the cartilages. The removal of the menisci not only changed the stresses in the cartilages, in agreement with published studies, but also altered the distribution and the rate of dissipation of fluid pressure in the cartilages. The high fluid pressures in the femoral cartilage moved from the anterior to the more central regions of the condyles after total meniscectomy. For both the intact and meniscectomized joints, the fluid pressure remained considerably high for thousands of seconds during creep, and the pressurization lasted even longer after meniscectomy. For the femoral cartilage, the maximum principal stress was generally aligned with the fiber direction, indicating the essential role of the fibers in the load support of the tissue.
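The fluid-flow-dependent creep described above hinges on interstitial fluid pressurization and its gradual dissipation. In biphasic/poroelastic cartilage models this is commonly expressed through Darcy-type flow, often with strain-dependent permeability; the general forms are sketched below, and the specific expressions and parameter values used in the cited study may differ:

\mathbf{w} = -k\,\nabla p, \qquad k = k_0 \left(\frac{1+e}{1+e_0}\right)^{M}

Here \mathbf{w} is the fluid flux relative to the solid matrix, p the fluid pressure, k the permeability (k_0 its initial value), e the current void ratio (e_0 its initial value), and M the coefficient controlling how compaction reduces permeability, which slows fluid exudation as creep proceeds.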
---
| Title: A Review of the Combination of Experimental Measurements and Fibril-Reinforced Modeling for Investigation of Articular Cartilage and Chondrocyte Response to Loading
Section 1: Introduction
Description 1: Describe the structure and mechanical function of articular cartilage, including its composition and cellular components.
Section 2: Review of Fibril Reinforced Computational Models of Articular Cartilage
Description 2: Discuss the application and justification of fibril reinforced mixture models, detailing the components and mechanical characteristics of these models.
Section 3: Tissue-Level Fibril Reinforced Models
Description 3: Explain the implementation and development of fibril reinforced models at the tissue level, including details of the nonfibrillar matrix and collagen fibril network.
Section 4: Extensions to Fibrillar and Nonfibrillar Matrices
Description 4: Discuss additional features and complexities in modeling fibril and nonfibrillar matrices, such as tissue swelling and chemical expansion.
Section 5: Controversies and Limitations Related to Tissue-Level Fibril Reinforced Models
Description 5: Highlight the challenges and unresolved issues in applying fibril reinforced models to simulate cartilage mechanics.
Section 6: Cell-Level Models and Cell-Tissue Interactions
Description 6: Discuss the development and application of biphasic multiscale models to evaluate chondrocyte behavior and cell-tissue interactions under mechanical loading.
Section 7: Adaptation Models
Description 7: Explore models that investigate the biological responses of cells to physical loading and how these responses influence tissue adaptation over time.
Section 8: Practical Considerations in Modeling Applications
Description 8: Provide guidance on parametric analysis, model optimization, and convergence testing necessary for reliable and accurate simulations.
Section 9: Model Validation
Description 9: Explain the importance and methods of validating computational models using experimental data.
Section 10: Experimental Tests
Description 10: Describe the design and implementation of mechanical tests at the tissue and cell levels used to determine material properties through optimization.
Section 11: New Challenges
Description 11: Highlight future challenges and directions for research, including the development of patient-specific models and prediction of cartilage degeneration.
Section 12: Combining In Vivo Imaging with Modeling
Description 12: Discuss the integration of imaging techniques and modeling to obtain realistic evaluations of tissue and joint forces.
Section 13: Toward Patient-Specific Estimation of Disease Progression and Treatment
Description 13: Outline how advanced fibril reinforced models could lead to predictions of cartilage degeneration and inform clinical decision-making for individual patients. |