A Formal Approach for Instructional Design Integrated with Data Visualization for Learning Analytics
Most Virtual Learning Environments do not provide support mechanisms for the integrated planning, construction, and follow-up of an Instructional Design informed by Learning Analytics results. The present work presents an authoring tool responsible for constructing the structure of an Instructional Design (ID) in such a way that its data are not altered during the execution of the course. The visual interface aims to present the critical situations present in this ID, serving as a support tool for course follow-up and for possible improvements, which can be made during its execution or in the planning of a new edition of the course. The ID model is based on High-Level Petri Nets, and the visualization forms are determined by the specific kind of data generated by an e-course: a population of students generating sequentially dependent data.
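To make the modeling idea concrete, the following is a minimal sketch (not the authors' actual model) of how a fragment of an instructional design could be encoded as a Petri net whose transitions fire as a student completes activities; the places, transitions, and course fragment are invented for illustration.

```python
# A minimal Petri net: places hold tokens (progress markers) and a
# transition fires when all of its input places are marked.

class PetriNet:
    def __init__(self, places, transitions):
        # places: dict name -> token count
        # transitions: dict name -> (input place names, output place names)
        self.places = dict(places)
        self.transitions = dict(transitions)

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.places[p] > 0 for p in inputs)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t!r} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.places[p] -= 1
        for p in outputs:
            self.places[p] += 1

# Hypothetical course fragment: the forum activity unlocks only after the
# student finishes both the reading and the quiz.
net = PetriNet(
    places={"reading_done": 1, "quiz_done": 1, "forum_open": 0},
    transitions={"unlock_forum": (["reading_done", "quiz_done"], ["forum_open"])},
)
net.fire("unlock_forum")
print(net.places)  # {'reading_done': 0, 'quiz_done': 0, 'forum_open': 1}
```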
The Communication Library DIALOG for iFDAQ of the COMPASS Experiment
Modern experiments in high energy physics impose great demands on the reliability, the efficiency, and the data rate of Data Acquisition Systems (DAQ). This contribution focuses on the development and deployment of the new communication library DIALOG for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. The iFDAQ, utilizing a hardware event builder, is designed to read out data at the maximum rate of the experiment. The DIALOG library is a communication system for both distributed and mixed environments; it provides a network-transparent inter-process communication layer. Using the high-performance, modern C++ framework Qt and its Qt Network API, the DIALOG library presents an alternative to the previously used DIM library. The DIALOG library was fully incorporated into all processes in the iFDAQ during the 2016 run. From the software point of view, it may be considered a significant improvement of the iFDAQ in comparison with the previous run. To extend the debugging possibilities, online monitoring of the communication among processes via the DIALOG GUI is a desirable feature. In this paper, we present the DIALOG library from several perspectives and discuss it in detail. Moreover, an efficiency measurement and a comparison with the DIM library with respect to the iFDAQ requirements are provided.
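The DIALOG API itself is not given in the abstract; as a hedged, toy illustration of the transport-level concern such an inter-process communication layer handles, the sketch below frames length-prefixed messages over a local socket pair so that complete messages can be reassembled from a byte stream. All names are illustrative and unrelated to the real library.

```python
# Toy message framing: a 4-byte big-endian length header before each payload,
# so the receiver can cut complete messages out of a TCP byte stream.
import socket
import struct

def send_msg(sock, payload: bytes):
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

a, b = socket.socketpair()          # stands in for two communicating processes
send_msg(a, b"EVENT_FRAGMENT:42")   # hypothetical message name
print(recv_msg(b))                  # b'EVENT_FRAGMENT:42'
```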
Linguistic Summarization of Structured Patent Data
Patent data play an increasingly important role in economic growth, innovation, technical advantage, and business strategy, and even in competition between countries. Analyzing patent data is crucial, since patents cover a large part of the world's technological information. In this paper, we use the linguistic summarization technique to test the validity of hypotheses about patent data stated in the literature.
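As a hedged illustration of the general technique (the paper's actual protoforms, quantifiers, and membership functions are not given in the abstract), a Yager-style linguistic summary such as "Most patents have many claims" can be assigned a truth degree from fuzzy memberships:

```python
# Truth degree of the summary "Most patents have many claims":
# T = mu_most( average membership of "many claims" over the patents ).
# Both membership shapes below are assumed for the example.

def mu_many_claims(n_claims, lo=5, hi=20):
    # fuzzy membership of "many claims": linear ramp between lo and hi
    return min(1.0, max(0.0, (n_claims - lo) / (hi - lo)))

def mu_most(p, lo=0.3, hi=0.8):
    # fuzzy quantifier "most": linear ramp on the proportion p
    return min(1.0, max(0.0, (p - lo) / (hi - lo)))

claims_per_patent = [3, 8, 14, 22, 17, 9, 25, 6]   # toy data
avg = sum(mu_many_claims(c) for c in claims_per_patent) / len(claims_per_patent)
print(f"T('Most patents have many claims') = {mu_most(avg):.2f}")
```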
Interference Management in Long Term Evolution-Advanced System
Incorporating the Home eNodeB (HeNB) in cellular networks, e.g. Long Term Evolution Advanced (LTE-A), is beneficial for extending coverage and enhancing capacity at low cost, especially within non-line-of-sight (NLOS) environments such as homes. The HeNB, or femtocell, is a small, low-powered base station which provides radio coverage to mobile users in an indoor environment. This deployment results in a heterogeneous network where the available spectrum is shared between two layers; therefore, a problem of Inter-Cell Interference (ICI) appears, and this issue is the main challenge in LTE-A. To deal with this challenge, various techniques based on frequency, time, and power control have been proposed. This paper deals with the impact of carrier aggregation and higher-order MIMO (Multiple Input Multiple Output) schemes on LTE-Advanced performance. Simulation results show the advantages of these schemes for the system capacity (4×10⁹ b/s when bandwidth B=100 MHz and MIMO 8x8 is applied at SINR=30 dB), the maximum theoretical peak data rate (more than 4 Gbps for B=100 MHz when MIMO 8x8 is used), and the spectral efficiency (15 b/s/Hz and 30 b/s/Hz when MIMO 4x4 and MIMO 8x8 are applied, respectively, at SINR=30 dB).
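As a rough sanity check on the order of magnitude, the idealized Shannon bound for an N×N MIMO link, C = N·B·log2(1+SINR), can be evaluated at the paper's operating point; this upper bound exceeds the reported simulated values, as real LTE-A links include protocol overheads and stream correlation.

```python
# Idealized N-stream MIMO capacity bound; not the paper's LTE-A simulator.
import math

def peak_rate(bandwidth_hz, n_streams, sinr_db):
    sinr = 10 ** (sinr_db / 10)
    return n_streams * bandwidth_hz * math.log2(1 + sinr)

B, sinr_db = 100e6, 30
for n in (4, 8):  # MIMO 4x4 and MIMO 8x8
    c = peak_rate(B, n, sinr_db)
    print(f"MIMO {n}x{n}: {c / 1e9:.2f} Gb/s bound, {c / B:.1f} b/s/Hz")
```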
Partner Selection in International Strategic Alliances: The Case of the Information Industry
This study analyzes international strategic alliances in the information industry. Its purpose is, first, to clarify the strategic intention of an international alliance and, second, to investigate the influence of differences in the target markets of partner companies on alliances. Using an international strategy theory approach to analyze the global strategies of global companies, the study compares a database business and an electronic publishing business. In particular, these cases emphasized factors attributable to "people" and "learning", reliability and communication between organizations, and the evolution of the IT infrastructure. The theory developed in this study validates the effectiveness of these strategies.
Discovering User Behaviour Patterns from Web Log Analysis to Enhance the Accessibility and Usability of Website
Finding relevant information on the World Wide Web is becoming more challenging day by day. Web usage mining is used to extract relevant and useful knowledge, such as user behaviour patterns, from web access log records. The web access log records all requests for individual files that users have made from the website. Web usage mining is important for Customer Relationship Management (CRM), as it can help ensure customer satisfaction as far as the interaction between the customer and the organization is concerned. Web usage mining also helps improve website structure or design according to users' requirements, by analyzing the access log file of a website with a log analyzer tool. The focus of this paper is to enhance the accessibility and usability of a guitar-selling website by analyzing its access log with the Deep Log Analyzer tool. The results show that most users are from the United States and that they use the Opera 9.8 web browser and the Windows XP operating system.
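As a small, hedged sketch of the kind of aggregation a log analyzer performs (the regex and sample entries are illustrative and not tied to the Deep Log Analyzer tool), browser strings can be counted from combined-format access log lines:

```python
# Count user-agent strings from combined log format entries.
import re
from collections import Counter

LOG_RE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" '
                    r'(?P<status>\d{3}) \S+ "(?P<ref>[^"]*)" "(?P<ua>[^"]*)"')

lines = [
    '203.0.113.9 - - [12/Mar/2017:10:01:02 +0000] "GET /guitars HTTP/1.1" '
    '200 5120 "-" "Opera/9.80 (Windows NT 5.1)"',
    '198.51.100.4 - - [12/Mar/2017:10:02:10 +0000] "GET /strings HTTP/1.1" '
    '200 2048 "-" "Mozilla/5.0 (Windows NT 10.0) Firefox/52.0"',
]

browsers = Counter()
for line in lines:
    m = LOG_RE.match(line)
    if m:
        browsers[m.group("ua").split(" ")[0]] += 1
print(browsers.most_common())
```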
High-Value Health System for All: Technologies for Promoting Health Education and Awareness
Health for all is considered a sign of well-being and inclusive growth. New healthcare technologies contribute to the quality of human lives by promoting health education and awareness, leading to the prevention, early diagnosis, and treatment of the symptoms of diseases. Healthcare technologies have now migrated from medical and institutionalized settings to the home and everyday life. This paper explores these new technologies and investigates how they contribute to health education and awareness, promoting the objective of a high-value health system for all. The methodology used for the research is literature review. The paper also discusses the opportunities and challenges of futuristic healthcare technologies. The combined advances in genomic medicine, wearables, and the IoT, together with enhanced data collection in electronic health record (EHR) systems, environmental sensors, and mobile device applications, can contribute substantially to a high-value health system for all. The promise of these technologies includes reduced total cost of healthcare, reduced incidence of medical diagnosis errors, and reduced treatment variability. The major barriers to adoption include concerns about the security, privacy, and integrity of healthcare data; regulation and compliance issues; service reliability; interoperability and portability of data; and the user-friendliness and convenience of these technologies.
Identity Verification Using k-NN Classifiers and Autistic Genetic Data
DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We aim to use gene data originally collected for autism detection to find whether, and how accurately, these data can be used for identification applications. Our main goal is to determine whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1, and that the classification rate remains close to optimal for noise standard deviations up to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
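A hedged reconstruction of the experiment's shape, using synthetic data in place of the preprocessed gene data: a 1-NN classifier matches noisy probe vectors against a gallery of subject templates as the noise standard deviation grows.

```python
# 1-NN identification under additive Gaussian noise; synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features = 50, 200
gallery = rng.normal(size=(n_subjects, n_features))  # one template per subject

for sigma in (0.5, 1.0, 3.0, 6.0):
    probes = gallery + rng.normal(scale=sigma, size=gallery.shape)
    # each probe is assigned the label of the closest gallery template
    d = ((probes[:, None, :] - gallery[None, :, :]) ** 2).sum(-1)
    accuracy = (d.argmin(axis=1) == np.arange(n_subjects)).mean()
    print(f"sigma={sigma}: identification rate {accuracy:.2f}")
```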
Principal Component Updates via Matrix Perturbations
This paper highlights a new approach to online principal component analysis (OPCA). Given a data matrix X ∈ R^(m×n), we characterize the online updates of its covariance as a matrix perturbation problem. It turns out that, up to the principal components, online updates of the batch PCA can be captured by a symmetric matrix perturbation of the batch covariance matrix. We show that as n → n₀ ≫ 1, the batch covariance and its update become almost identical. Finally, we utilize our new setup of online updates to find a bound on the angle distance between the principal components of X and those of its update.
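A minimal numerical sketch of the underlying identity, assuming a Welford-style rank-one update (which may differ from the paper's exact perturbation form): updating the covariance with one new sample reproduces the batch covariance, whose eigenvectors give the principal components.

```python
# Online covariance update as a rank-one perturbation of the batch scatter.
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 200
X = rng.normal(size=(n, m))

x_new = X[-1]
mu = X[:-1].mean(axis=0)
S = (X[:-1] - mu).T @ (X[:-1] - mu)              # scatter of the first n-1 rows

mu_new = mu + (x_new - mu) / n                   # updated mean
S_new = S + np.outer(x_new - mu, x_new - mu_new) # Welford rank-one update
C_online = S_new / (n - 1)                       # sample covariance of n rows

print(np.allclose(C_online, np.cov(X, rowvar=False)))  # True

# principal components of the updated covariance
eigvals, eigvecs = np.linalg.eigh(C_online)
```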
Assessment of the Number of Damaged Buildings from a Flood Event Using Remote Sensing Technique
The heavy rainfall from 3 to 22 January 2017 swamped a large area of Ranot district in southern Thailand, and the resulting flood had substantial economic and social effects. The major objective of this study is to detect the flooding extent using Sentinel-1A data and to identify the number of damaged buildings in the area. The data were collected in two stages: pre-flood and during the flood event. Calibration, speckle filtering, geometric correction, and histogram thresholding were performed on the data, based on intensity spectral values, to classify thematic maps. The maps were used to identify the flooding extent using change detection, along with building footprints digitized and collected on the JOSM desktop. The number of damaged buildings was counted within the flooding extent with respect to the building data. The total flooded area was observed to be 181.45 sq. km, mostly in the Ban Khao, Ranot, Takhria, and Phang Yang sub-districts. Ban Khao sub-district was the most affected because it is located at a lower altitude and closer to the Thale Noi and Thale Luang lakes than the others. The number of damaged buildings was highest in the Khlong Daen (726 features), Tha Bon (645 features), and Ranot (604 features) sub-districts. The final flood extent map may be very useful for planning, prevention, and management in flood-prone areas, and the building damage map can be used for quick response, recovery, and mitigation in the affected areas by the organizations concerned.
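A toy version of the histogram-thresholding change detection step, with synthetic arrays standing in for calibrated Sentinel-1A scenes: water appears dark in SAR intensity, so pixels below a threshold in the during-flood image but not in the pre-flood image are flagged as newly flooded. The threshold and pixel size are assumptions for the example.

```python
# Threshold-based change detection on synthetic SAR-like intensity images.
import numpy as np

rng = np.random.default_rng(7)
pre = rng.gamma(shape=4.0, scale=0.05, size=(100, 100))      # land backscatter
during = pre.copy()
during[30:70, 20:80] = rng.gamma(4.0, 0.005, size=(40, 60))  # dark flooded patch

threshold = 0.05                     # assumed; normally chosen from the histogram
water_pre = pre < threshold
water_during = during < threshold
new_flood = water_during & ~water_pre

pixel_area_km2 = (10 * 10) / 1e6     # assuming ~10 m x 10 m GRD pixels
print(f"flooded area: {new_flood.sum() * pixel_area_km2:.4f} sq. km")
```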
Life Cycle Datasets for the Ornamental Stone Sector
The environmental impact related to ornamental stones (such as marbles and granites) is largely debated. Starting from the industrial revolution, continuous improvements of machinery led to a higher exploitation of this natural resource and to more international interaction between markets. As a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or not very significant because of the high percentage of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g. Ecoinvent, Thinkstep, and ELCD), of datasets about the specific technologies employed in the stone production chain. For example, databases do not contain information about diamond wires, chains, or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with data on specific stone processes. To this end, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and to the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (from-cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles, and the surface finishing. Primary data have been collected in Italian quarries and transformation plants which use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies cover both soft stones (marbles) and hard stones (gneiss). In particular, data about energy, materials, and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. Data were then processed with appropriate software to build a life cycle model. The model was realized with free parameters that allow easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and to encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the realization of accurate Life Cycle Inventory data aims at making ILCD-compliant datasets of the most significant processes and technologies related to the ornamental stone sector available to researchers and stone experts.
Generic Data Warehousing for Consumer Electronics Retail Industry
The dynamic and highly competitive nature of the consumer electronics retail industry means that businesses in this industry face various decision-making challenges in relation to pricing, inventory control, consumer satisfaction, and product offerings. To overcome the challenges facing retailers and create opportunities, we propose a generic data warehousing solution which can be applied to a wide range of consumer electronics retailers with minimal configuration. The solution includes a dimensional data model, a template SQL script, a high-level architectural description, an ETL tool developed using C#, a set of APIs, and data access tools. It has been successfully applied by ASK Outlets Ltd (UK), resulting in improved productivity and enhanced sales growth.
FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Instance selection (IS) techniques are used to reduce data size and thus improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload of the aggregation node; second, and most importantly, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
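For intuition about the condensation idea underlying FCNN (the sketch below is Hart's simpler incremental variant, not the exact FCNN rule or its MapReduce decomposition), a prototype set can be grown until it classifies all training points correctly with 1-NN:

```python
# Condensed nearest neighbor: keep a small prototype set consistent with 1-NN.
import numpy as np

def condense(X, y, passes=5):
    proto_idx = [0]                        # seed with an arbitrary instance
    for _ in range(passes):
        changed = False
        for i in range(len(X)):
            P = X[proto_idx]
            nearest = np.argmin(((P - X[i]) ** 2).sum(axis=1))
            if y[proto_idx[nearest]] != y[i]:
                proto_idx.append(i)        # absorb any misclassified point
                changed = True
        if not changed:
            break
    return np.array(proto_idx)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
keep = condense(X, y)
print(f"reduction rate: {1 - len(keep) / len(X):.2%}")
```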
Searching the Efficient Frontier for the Coherent Covering Location Problem
In this article, we seek an approximation of the efficient frontier for the bi-objective location problem with coherent coverage for two levels of hierarchy (CCLP). We present the mathematical formulation of the model used. Supported and unsupported efficient solutions are obtained by solving the bi-objective combinatorial problem through the weights method using a Lagrangean heuristic. Subsequently, the results are validated through DEA analysis with the GEM index (Global Efficiency Measure).
Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory
The Internet of Things (IoT) was designed for widespread convenience. With smart tags and the sensing network, a large quantity of dynamic information is immediately available in the IoT. Through internal communication and interaction, meaningful objects provide real-time services for users; therefore, service with appropriate decision-making has become an essential issue. Based on the science of human behavior, this study employed an environment model to record the time sequences and locations of different behaviors, and adopted the probability module of the hierarchical Hidden Markov Model for inference. The statistical analysis was conducted to achieve the following objectives: first, define user behaviors and predict user behavior routes with the environment model, in order to analyze user purposes; second, construct the hierarchical Hidden Markov Model according to the logic framework and establish the sequential intensity among behaviors, in order to understand the use and activity fabric of the intelligent environment; third, establish the intensity of the relation between objects and the probability of their being used. This indicator can describe possible limitations of the mechanism. As the process is recorded in the information of the system created in this study, these data can be reused to adjust the procedure of intelligent design services.
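As a stripped-down illustration of the inference step (a flat HMM rather than the paper's hierarchical one; states, locations, and probabilities are invented), the forward algorithm scores how likely each hidden activity is given a sequence of observed locations:

```python
# Forward algorithm on a two-state activity HMM with location observations.
import numpy as np

states = ["cooking", "resting"]
obs_symbols = {"kitchen": 0, "sofa": 1}

pi = np.array([0.5, 0.5])        # initial activity probabilities
A = np.array([[0.8, 0.2],        # activity transition matrix
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],        # P(location | activity)
              [0.2, 0.8]])

def forward(observations):
    o = [obs_symbols[x] for x in observations]
    alpha = pi * B[:, o[0]]
    for t in o[1:]:
        alpha = (alpha @ A) * B[:, t]
    return alpha / alpha.sum()   # posterior over the final activity

print(dict(zip(states, forward(["kitchen", "kitchen", "sofa"]))))
```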
Summarizing Data Sets for Data Mining by Using Statistical Methods in Coastal Engineering
Coastal regions are among the areas most heavily used by the growing population, while also being part of a delicate natural balance. In coastal engineering, the most valuable data concern wave behavior, and the amount of such data becomes very large because observations run for hours, days, and months. In this study, statistical methods such as wave spectrum analysis and standard descriptive statistics have been used. The goal of this study is to discover profiles of different coastal areas using these statistical methods and thus to obtain an instance-based data set, derived from the big data, that can be analyzed with data mining algorithms. In the experimental studies, six sample data sets on wave behavior, obtained from 20-minute observations in Mersin Bay, Turkey, were converted to an instance-based form, and different clustering techniques from data mining were used to discover similar coastal places. Moreover, this study discusses how this summarization approach can be used in other fields that collect big data, such as medicine.
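One example of the standard statistics involved, on a synthetic 20-minute record (the sampling rate and wave components are assumptions): significant wave height can be estimated as roughly four times the standard deviation of the surface elevation, and the mean zero up-crossing period from the elevation's sign changes.

```python
# Summary statistics of a synthetic 20-minute wave elevation record.
import numpy as np

fs = 4.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 20 * 60, 1 / fs)          # 20 minutes of samples
eta = (0.8 * np.sin(2 * np.pi * t / 7.0)   # synthetic swell + wind sea + noise
       + 0.3 * np.sin(2 * np.pi * t / 3.5 + 1.0)
       + 0.1 * np.random.default_rng(0).normal(size=t.size))

hs = 4 * eta.std()                         # significant wave height, m
up_cross = np.where((eta[:-1] < 0) & (eta[1:] >= 0))[0]
tz = np.diff(t[up_cross]).mean()           # mean zero up-crossing period, s
print(f"Hs = {hs:.2f} m, Tz = {tz:.2f} s")
```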
A Computational Cost-Effective Clustering Algorithm in Multidimensional Space Using the Manhattan Metric: Application to the Global Terrorism Database
The increasing amount of collected data has limited the performance of current analysis algorithms. Thus, developing new cost-effective algorithms in terms of complexity, scalability, and accuracy has raised significant interest. In this paper, a modified, effective k-means-based algorithm is developed and evaluated experimentally. The new algorithm aims to reduce the computational load without significantly affecting the quality of the clustering. The algorithm uses the City Block (Manhattan) distance and a new stopping criterion to guarantee convergence. Experiments conducted on a real data set show its high performance when compared with the original k-means version.
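A compact sketch of a k-means-style loop using the City Block distance; the paper's new stopping criterion is not reproduced here (this sketch simply stops when assignments stabilize), and centers are updated with the coordinate-wise median, the L1-optimal choice, which may differ from the paper's variant.

```python
# k-means-style clustering with the Manhattan (L1, City Block) distance.
import numpy as np

def kmeans_l1(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    for _ in range(iters):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)  # L1 distances
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):   # stand-in stop criterion
            break
        labels = new_labels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(5, 0.5, (100, 2))])
labels, centers = kmeans_l1(X, k=2)
print(centers.round(2))   # close to (0, 0) and (5, 5)
```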
Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency
Clustering is a well-known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm: the k-means for clustering numeric datasets and the k-modes for categorical datasets. A common problem in data mining applications is clustering the categorical data that are so prevalent in real datasets. One way to achieve clustering on categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead of the k-modes. In this paper, we propose and evaluate an approach based on this idea, transforming the categorical values into numeric ones using the relative frequency of each modality within the attribute. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally, and the obtained results show that our proposed method outperforms the binary method in all cases.
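A minimal sketch of the encoding idea on toy data (the paper's exact mapping may differ in detail): each categorical value is replaced by the relative frequency of its modality within the attribute, producing a numeric matrix that plain k-means can consume.

```python
# Relative-frequency encoding of categorical attributes.
from collections import Counter

records = [("red", "suv"), ("red", "sedan"), ("blue", "suv"),
           ("red", "suv"), ("green", "sedan")]

n = len(records)
freqs = [Counter(col) for col in zip(*records)]   # per-attribute modality counts
encoded = [[freqs[j][v] / n for j, v in enumerate(row)] for row in records]
print(encoded)   # e.g. 'red' -> 3/5 = 0.6 in attribute 0
```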
Virtual 3D Environments for Image-Based Navigation Algorithms
This paper addresses the creation of virtual 3D environments for the study and development of mobile robot image-based navigation algorithms and techniques, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulation. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering and are unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulated applications of real environments for navigation with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Techniques were used for applying texture and lighting effects in order to accurately reproduce rendered images with respect to their real-world counterparts. The application will integrate image processing scripts, trajectory control, dynamic modeling, and simulation techniques for physics representation and picture rendering with the open-source 3D creation suite Blender.
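As an illustrative fragment of such a platform (runnable inside Blender 2.8+, e.g. with `blender --background --python scene.py`; the objects, path, and parameters are invented for the example), the Blender Python API can build a simple scene and render a test frame for a navigation algorithm:

```python
# Build a minimal scene and render one frame with Blender's Python API.
import bpy

bpy.ops.mesh.primitive_plane_add(size=20, location=(0, 0, 0))      # ground
bpy.ops.mesh.primitive_cube_add(size=1, location=(2, 1, 0.5))      # obstacle
bpy.ops.object.light_add(type='SUN', location=(0, 0, 10))          # lighting
bpy.ops.object.camera_add(location=(0, -8, 2), rotation=(1.35, 0, 0))
bpy.context.scene.camera = bpy.context.object                      # robot's view

bpy.context.scene.render.filepath = "/tmp/nav_frame.png"           # assumed path
bpy.ops.render.render(write_still=True)
```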
CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm
The biological function of an RNA molecule depends on its structure. The objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding and the discovery of other relationships between them. Besides, identifying non-coding RNAs (RNA that is not translated into a protein) is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them are partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignments. Less attention has been given in the literature to efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm compared to other approaches. The CompPSA algorithm provides an accurate similarity measure between components and gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and space.
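As a hedged illustration of comparing components on weighted features rather than base pairs (the feature tuples and weights below are invented; the actual CompPSA scoring is defined in the paper):

```python
# Weighted L1 distance between per-component feature tuples.

def component_distance(c1, c2, weights=(0.4, 0.3, 0.3)):
    # c = (position, full_length, stem_length), normalized beforehand
    return sum(w * abs(a - b) for w, a, b in zip(weights, c1, c2))

structure_a = [(0.10, 0.30, 0.20), (0.55, 0.25, 0.15)]
structure_b = [(0.12, 0.28, 0.22), (0.60, 0.20, 0.10)]

# Pairing components in order here for brevity; comparing all pairs of
# components gives the O(N^2) bound quoted in the abstract.
for c1, c2 in zip(structure_a, structure_b):
    print(f"distance = {component_distance(c1, c2):.3f}")
```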
Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect changes in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and may even be conflicting. In this study, we present a framework that exploits the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and to deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors whose outputs need to be combined, so that computational efficiency can be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare a change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
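For intuition about the final detection step alone, a bare-bones CUSUM detector on a single Gaussian stream with a mean shift is sketched below; the paper's framework combines per-sensor evidence (mass functions built from KL distances) before this stage, and the threshold here is an assumption.

```python
# CUSUM quickest change detection on one synthetic Gaussian stream.
import numpy as np

rng = np.random.default_rng(5)
mu0, mu1, sigma = 0.0, 1.0, 1.0
x = np.concatenate([rng.normal(mu0, sigma, 300),      # pre-change samples
                    rng.normal(mu1, sigma, 100)])     # post-change at t = 300

def loglr(v):
    # log-likelihood ratio of the post-change vs pre-change Gaussian
    return ((v - mu0) ** 2 - (v - mu1) ** 2) / (2 * sigma ** 2)

h, W = 8.0, 0.0                                       # threshold (assumed)
for t, v in enumerate(x):
    W = max(0.0, W + loglr(v))                        # CUSUM recursion
    if W > h:
        print(f"change declared at sample {t} (true change at 300)")
        break
```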
A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model
In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test in which the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value; no assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that, for some of the products that fail during the test, the cause of failure is known only to belong to a certain subset of all possible failures; this case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte-Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed model.
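A hedged Monte-Carlo sketch of the TFR time-transformation alone (the paper's model ties the intensity to the degradation level and handles competing, possibly masked causes): a constant baseline hazard is multiplied by a tampering factor after the stress-change time; all parameter values are invented.

```python
# Sampling failure times under a tampered failure rate with one stress step.
import numpy as np

rng = np.random.default_rng(11)
h, alpha, tau = 0.01, 3.0, 50.0        # baseline hazard, TFR factor, switch time

def sample_failure_time():
    e = rng.exponential(1.0)           # total integrated hazard at failure
    if e <= h * tau:                   # failure before the stress change
        return e / h
    return tau + (e - h * tau) / (alpha * h)   # tampered rate afterwards

times = np.array([sample_failure_time() for _ in range(10000)])
print(f"mean failure time: {times.mean():.1f}"
      f" (vs {1 / h:.0f} with no stress change)")
```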
Elemental Graph Data Model: A Semantic and Topological Representation of Building Elements
With the rapid increase in complexity in the building industry, professionals in the A/E/C industry have been forced to adopt Building Information Modeling (BIM) in order to enhance communication between the different project stakeholders throughout the project life cycle and to create a semantic, object-oriented building model that can support geometric-topological analysis of building elements during design and construction. This paper presents a model that extracts topological relationships and geometrical properties of building elements from an existing, fully designed BIM and maps this information into a directed acyclic Elemental Graph Data Model (EGDM). The model incorporates BIM-based search algorithms for automatic deduction of geometrical data and topological relationships for each building element type. Using graph search algorithms, such as Depth First Search (DFS), and topological sorting, all possible construction sequences can be generated and compared against production and construction rules to generate an optimized construction sequence and its associated schedule. The model is implemented on a C# platform.
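A minimal sketch of the sequencing step, with invented elements and support relationships (the EGDM extracts these from the BIM): once element-on-element dependencies form a directed acyclic graph, a topological sort yields a feasible construction order.

```python
# Topological sort of element dependencies (Python 3.9+).
from graphlib import TopologicalSorter

# edge u -> v means "u must be built before v"
supports = {
    "footing": {"column_A", "column_B"},
    "column_A": {"beam_1"},
    "column_B": {"beam_1"},
    "beam_1": {"slab_1"},
}
deps = {}                                # invert to predecessor form
for u, succs in supports.items():
    for v in succs:
        deps.setdefault(v, set()).add(u)

order = list(TopologicalSorter(deps).static_order())
print(order)   # e.g. ['footing', 'column_A', 'column_B', 'beam_1', 'slab_1']
```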
Data Projects for “Social Good”: Challenges and Opportunities
One of the application fields for data analysis techniques and technologies gaining momentum is the area of social good or “common good”, covering cases related to humanitarian crises, global health care, or ecology and environmental issues, among others. The promotion of data-driven projects in this field aims at increasing the efficacy and efficiency of social initiatives, improving the way these actions help humanity in general and people in need in particular. This application field, however, poses its own barriers and challenges when developing data-driven projects, lagging behind in comparison with other scenarios. These challenges derive from aspects such as the scope and scale of the social issue to solve, cultural and political barriers, the skills of main stakeholders and the technological resources available, the motivation to be engaged in such projects, or the ethical and legal issues related to sensitive data. This paper analyzes the application of data projects in the field of social good, reviewing its current state and noteworthy initiatives, and presenting a framework covering the key aspects to analyze in such projects. The goal is to provide guidelines to understand the main challenges and opportunities for this type of data project, as well as to identify the main differential issues compared to “classical” data projects in general. A case study is presented on the initial steps and stakeholder analysis of a data project for the inclusion of refugees in the city of Frankfurt, Germany, in order to empirically confront the framework with a real example.
Innovative Design Considerations for Adaptive Spacecraft
Space technologies have changed the way we live in present-day society and manage many aspects of our daily affairs through remote sensing, navigation, and communications. Furthermore, defense and military use of spacecraft has increased tremendously alongside civilian purposes. The number of satellites deployed in Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and Geostationary Orbit (GEO) has gone up, and the dependency on remote sensing and operational capabilities will invariably grow in the future. Every country is acquiring spacecraft in one way or another for its daily needs, so spacecraft numbers are likely to increase significantly and create spacecraft traffic problems. The aim of this research paper is to propose innovative design concepts for adaptive spacecraft. The main idea is to improve existing methods of spacecraft design and development towards futuristic adaptive spacecraft with built-in features for automatic adaptability and self-protection. In other words, the innovative design considerations proposed here are intended to give future spacecraft self-organizing capabilities for orbital control and protection from anti-satellite weapons (ASAT). An attempt is made to propose the design and development of futuristic spacecraft for 2030 and beyond, given the tremendous advancements expected in VLSI, miniaturization, and nano antenna array technologies, including nanotechnologies.
Keywords: low earth orbit, medium earth orbit, geostationary earth orbit, self-organizing control system, anti-satellite weapons, orbital control, radar warning receiver, missile warning receiver, laser warning receiver, attitude and orbit control systems, command and data handling.
The Analysis of Secondary Case Studies as a Starting Point for Grounded Theory Studies: An Example from the Enterprise Software Industry
A fundamental principle of Grounded Theory (GT) is to prevent the formation of preconceived theories. This implies the need to start a research study with an open mind and to avoid being absorbed by the existing literature. However, starting a new study without an understanding of the research domain and its context can be extremely challenging. This paper presents a research approach that simultaneously supports a researcher in identifying and focusing on critical areas of a research project and prevents the formation of concepts prejudiced by the current body of literature. This approach comprises four stages: selection of secondary case studies, analysis of secondary case studies, development of an initial conceptual framework, and development of an initial interview guide. The analysis of secondary case studies as a starting point for a research project allows a researcher to form a first understanding of a research area based on real-world cases without being influenced by the existing body of theory. It enables a researcher to develop, through a structured course of action, a firm guide that establishes a solid starting point for further investigations. Thus, the described approach may have significant implications for GT researchers who aim to start a study within a given research area.
Perception-Oriented Model Driven Development for Designing Data Acquisition Process in Wireless Sensor Networks
Wireless Sensor Networks (WSNs) have always been characterized by application-specific sensing, relaying, and collection of information for further analysis. However, software development has not been considered a separate entity in this data collection process, which has posed severe limitations on software development for WSNs. Software development for WSNs is a complex process, since the components involved are data-driven, network-driven, and application-driven in nature. This implies a tremendous need for separation of concerns from the software development perspective. A layered approach for developing the data acquisition design based on Model Driven Development (MDD) is proposed, as the sensed data collection process itself varies depending upon the application under consideration. This work focuses on the layered view of the data acquisition process so as to ease software development. A metamodel is proposed that enables reusability and the realization of the software as an adaptable component for WSN systems. Further, a study of users' perceptions indicates that the proposed model helps improve programmer productivity by realizing the collaborative system involved.
Aggregation Scheduling Algorithms in Wireless Sensor Networks
In wireless sensor networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption, owing to the nodes' limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum-latency schedule, that is, a schedule with the minimum number of timeslots such that the sink node can receive the aggregated data from all other nodes without any collision or interference. For this problem, two interference models have been adopted: the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR), with different power models (uniform power and non-uniform power, with or without power control) and different antenna models (omni-directional and directional antennas). In this survey article, as the problem has proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in the various models on the basis of latency as the performance measure.
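To make the scheduling notion concrete, the following is a toy greedy scheduler under the graph (protocol) interference model, far simpler than the approximation algorithms the survey covers; the network, tree construction, and conflict rules are illustrative only.

```python
# Greedy collision-free aggregation scheduling on a BFS tree toward the sink.
from collections import deque

# sink is node 0; adjacency list of a small sensor network
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5], 3: [1], 4: [1], 5: [2]}

# build a BFS aggregation tree rooted at the sink
parent, order, dq = {0: None}, [], deque([0])
while dq:
    u = dq.popleft()
    order.append(u)
    for v in adj[u]:
        if v not in parent:
            parent[v] = u
            dq.append(v)

def conflicts(u, s, slot):
    ur = parent[u]
    for w, sw in slot.items():
        if sw != s:
            continue
        wr = parent[w]
        # graph-model collisions: shared receiver, a sender heard by the
        # other's receiver, or a node asked to send and receive at once
        if wr == ur or w in adj[ur] or u in adj[wr] or w == ur or u == wr:
            return True
    return False

slot = {}
for u in reversed(order):          # leaves first: children send before parents
    if parent[u] is None:
        continue
    s = 1 + max((slot[c] for c in adj[u] if parent.get(c) == u), default=0)
    while conflicts(u, s, slot):
        s += 1
    slot[u] = s

print(slot, "latency =", max(slot.values()))
```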
A Psychophysiological Evaluation of an Affective Recognition Technique Using Interactive Dynamic Virtual Environments
Recording psychological and physiological correlates of human performance within virtual environments and interpreting their impact on human engagement, ‘immersion’, and related emotional or ‘affective’ states is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR, and heart rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows with 28 different timing lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30 using a feature selection technique, k-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were employed for the classification process. The classifiers categorized the psychophysiological database into four affective clusters (defined in a 3-dimensional space: valence, arousal, and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process based on affective clusters or emotion labels.
Advantages of Neural Network Based Air Data Estimation for Unmanned Aerial Vehicles
Redundancy requirements for UAVs (Unmanned Aerial Vehicles) are hard to meet because of the generally restricted space and allowable weight for the aircraft systems, which limits their exploitation. Essential equipment such as the Air Data, Attitude and Heading Reference System (ADAHRS) requires several external probes to measure significant data such as the angle of attack or the sideslip angle. Previous research focused on the analysis of a patented technology named Smart-ADAHRS (Smart Air Data, Attitude and Heading Reference System) as an alternative method to obtain reliable and accurate estimates of the aerodynamic angles. This solution is based on an innovative sensor fusion algorithm implementing soft computing techniques, and it makes it possible to obtain a simplified inertial and air data system with fewer external devices; in fact, only one external source of dynamic and static pressure is needed. This paper focuses on the benefits that would be gained by implementing this system in UAV applications. Simplifying the entire ADAHRS architecture reduces the overall cost while improving safety performance. Smart-ADAHRS has currently reached Technology Readiness Level (TRL) 6. Real flight tests took place on an ultralight aircraft equipped with suitable Flight Test Instrumentation (FTI). The output of the algorithm on the flight test measurements demonstrates the capability of this fusion algorithm to embed multiple physical and virtual sensors in a single device. Any source of dynamic and static pressure can be integrated with this system, gaining a significant improvement in versatility.
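Since the patented algorithm itself is not public in this abstract, the following is only a stand-in for the neural estimation idea: a small multilayer perceptron trained on synthetic, flight-like data maps dynamic/static pressure and a load factor onto the angle of attack. The data-generating relation is invented for the example.

```python
# Neural-network regression of the angle of attack from pressure/inertial data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 5000
q = rng.uniform(200, 1500, n)             # dynamic pressure, Pa
p = rng.uniform(80e3, 101e3, n)           # static pressure, Pa
nz = rng.uniform(0.8, 1.5, n)             # load factor from the inertial unit
# invented smooth relation standing in for the true aerodynamics
aoa = 2.0 + 4000 * nz / q + 1e-5 * (101e3 - p) + rng.normal(0, 0.1, n)

X = np.column_stack([q, p, nz])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(X[:4000], aoa[:4000])
err = np.abs(model.predict(X[4000:]) - aoa[4000:])
print(f"mean angle-of-attack error: {err.mean():.2f} deg")
```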