Software Tools for System Identification and Control using Neural Networks in Process Engineering
Neural networks offer an alternative approach for both
identification and control of nonlinear processes in process
engineering. The lack of software tools for the design of controllers
based on neural network models is particularly pronounced in this
field. SIMULINK is a widely used graphical code
development environment which allows system-level developers to
perform rapid prototyping and testing. Such a graphics-based
programming environment involves block-based code development
and offers a more intuitive approach to modeling and control tasks in
a great variety of engineering disciplines. In this paper a
SIMULINK-based neural tool has been developed for the analysis and
design of multivariable neural-network-based control systems. This tool has
been applied to the control of a high-purity distillation column
including nonlinear hydrodynamic effects. The proposed control
scheme offers an optimal response to both the theoretical and practical
challenges posed in process control tasks, in particular when both
the quality improvement of distillation products and the operating
efficiency in economic terms are considered.
A Security Analysis for Home Gateway Architectures
Providing services at home has become, over the last
few years, a very dynamic and promising technological domain. It is
likely to enable the wide dissemination of secure and automated living
environments.
environments. We propose a methodology for identifying threats to
Services at Home Delivery systems, as well as a threat analysis
of a multi-provider Home Gateway architecture. This methodology
is based on a dichotomous positive/preventive study of the target
system: it aims at identifying both what the system must do, and
what it must not do. This approach completes existing methods with
a synthetic view of potential security flaws, thus enabling suitable
measures to be taken into account. Security implications of the
evolution of a given system become easier to deal with. A prototype
is built based on the conclusions of this analysis.
Using Different Aspects of the Signings for Appearance-based Sign Language Recognition
Sign language is used by deaf and hard-of-hearing people for communication. Automatic sign language recognition is a challenging research area since sign language is often the only means of communication for deaf people. Sign language comprises different components of visual actions made by the signer using the hands, the face, and the torso to convey meaning. To use different aspects of signs, we combine different groups of features which have been extracted from image frames recorded directly by a stationary camera. We combine the features at two levels by employing three techniques. At the feature level, an early feature combination can be performed by concatenating and weighting different feature groups, or by concatenating feature groups over time and using LDA to choose the most discriminant elements. At the model level, a late fusion of differently trained models can be carried out by a log-linear model combination. In this paper, we investigate these three combination techniques in an automatic sign language recognition system and show that the recognition rate can be significantly improved.
Artificial Neural Networks for Identification and Control of a Lab-Scale Distillation Column Using LABVIEW
LABVIEW is a graphical programming language that has its roots in automation control and data acquisition. In this paper we utilize this platform to provide a powerful toolset for process identification and control of nonlinear systems based on artificial neural networks (ANN). This tool has been applied to the monitoring and control of a lab-scale distillation column, the DELTALAB DC-SP. The proposed control scheme offers a fast response to set-point changes and zero steady-state error for dual composition control, and shows robustness in the presence of externally imposed disturbances.
Expressive Modes and Species of Language
Computer languages are usually lumped together
into broad "paradigms", leaving us in want of a finer classification
of kinds of language. Theories distinguishing between "genuine
differences" in language have been called for, and we propose that
such differences can be observed through a notion of expressive mode.
We outline this concept, propose how it could be operationalized, and
indicate a possible context for the development of a corresponding
theory. Finally we consider a possible application in connection
with the evaluation of language revision. We illustrate this with a case,
investigating possible revisions of the relational algebra in order to
overcome weaknesses of the division operator in connection with
A Parallel Architecture for the Real Time Correction of Stereoscopic Images
In this paper, we present an architecture for the
implementation of a real-time approach to the correction of
stereoscopic images. This architecture is parallel and makes use of several
memory blocks that store precalculated data relating to
the cameras used for the acquisition of the images. The use of reduced
images proves to be essential in the proposed approach; the
suggested architecture must therefore be able to carry out the real-time
reduction of the original images.
Dynamic Adaptability Using Reflexivity for Mobile Agent Protection
The mobile agent paradigm provides a promising technology for the development of distributed and open applications. However, one of the main obstacles to widespread adoption of mobile agents seems to be security. This paper treats the security of the mobile agent against malicious host attacks, and describes a generic mobile agent protection architecture. The proposed approach is based on dynamic adaptability and adopts reflexivity as a model for design and implementation. In order to protect the agent against behaviour analysis attempts, the suggested approach supplies the mobile agent with the flexibility to exhibit unexpected behaviour. Furthermore, some classical protective mechanisms are used to reinforce the level of security.
A New Method in Detection of Ceramic Tiles Color Defects Using Genetic C-Means Algorithm
In this paper an algorithm is used to detect the color defects of ceramic tiles. First, the image of a normal tile is clustered using GCMA, the Genetic C-means Clustering Algorithm, which yields the best cluster centers. C-means is a common clustering algorithm which optimizes an objective function based on a measure between data points and the cluster centers in the data space; here the objective function describes the mean square error. After finding the best centers, each pixel of the image is assigned to the cluster with the closest cluster center. Then, the maximum error of each cluster is computed: for each cluster, the maximum error is the largest distance between its center and the pixels which belong to it. Finally, all the pixels of the defective tile image are clustered using the centers obtained from the normal tile image in the previous stage. Pixels whose distance from their cluster center exceeds the maximum error of that cluster are considered defective.
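The clustering-and-thresholding pipeline described above can be sketched as follows. Since the abstract does not give the genetic search details, a plain C-means (k-means) fit stands in for GCMA here; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def assign(pixels, centers):
    """Assign each pixel (row of RGB values) to its closest center."""
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return labels, d[np.arange(len(pixels)), labels]

def fit_normal_tile(pixels, centers, iters=20):
    """C-means fit on the normal-tile pixels (stand-in for the genetic
    center search), plus per-cluster maximum error: the largest distance
    from any member pixel to its cluster center."""
    for _ in range(iters):
        labels, _ = assign(pixels, centers)
        for k in range(len(centers)):
            members = pixels[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    labels, dist = assign(pixels, centers)
    max_err = np.array([dist[labels == k].max() if np.any(labels == k) else 0.0
                        for k in range(len(centers))])
    return centers, max_err

def detect_defects(pixels, centers, max_err):
    """Cluster test-tile pixels with the normal-tile centers; a pixel whose
    distance to its center exceeds that cluster's max error is defective."""
    labels, dist = assign(pixels, centers)
    return dist > max_err[labels]
```

By construction every normal-tile pixel lies within its cluster's maximum error, so only pixels that deviate more than anything seen on the defect-free tile are flagged.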
A Secure Semi-Fragile Watermarking Scheme for Authentication and Recovery of Images Based On Wavelet Transform
Authentication of multimedia content has gained much attention in recent times. In this paper, we propose a secure semi-fragile watermarking scheme with a choice of two watermarks to be embedded. This technique operates in the integer wavelet domain and makes use of semi-fragile watermarks to achieve better robustness. A self-recovering algorithm is employed that hides the image digest in some wavelet subbands to detect possible malevolent object manipulation undergone by the image (object replacement and/or deletion). The semi-fragility makes the scheme tolerant to JPEG lossy compression down to a quality factor of 70%, and allows the tampered area to be located accurately. In addition, the system ensures more security because the embedded watermarks are protected with private keys. The computational complexity is reduced using a parameterized integer wavelet transform. Experimental results show that the proposed scheme guarantees the safety of the watermark, image recovery, and accurate location of the tampered area.
On the EM Algorithm and Bootstrap Approach Combination for Improving Satellite Image Fusion
This paper discusses the combination of the EM algorithm
and the Bootstrap approach, applied to improve the satellite
image fusion process. This novel satellite image fusion method, based
on the EM estimation algorithm and reinforced by the Bootstrap
approach, was successfully implemented and tested. The sensor
images are first split by a Bayesian segmentation method to
determine a joint region map for the fused image. Then, we use the
EM algorithm in conjunction with the Bootstrap approach to develop
the bootstrap EM fusion algorithm, hence producing the fused
targeted image. We propose in this research to estimate the
statistical parameters from the iterative equations of the EM
algorithm using representative Bootstrap samples
of the images. The sizes of those samples are determined from a new
criterion called the 'hybrid criterion'. The results
of our work show that using Bootstrap EM (BEM) in image
fusion improves the performance of the estimated parameters, which in turn
improves the quality of the fused image and reduces the computing
time of the fusion process.
Network Intrusion Detection Design Using Feature Selection of Soft Computing Paradigms
The network traffic data provided for the design of
intrusion detection are always large, contain ineffective information, and
enclose limited and ambiguous information about users' activities.
We study these problems and propose a two-phase approach in our
intrusion detection design. In the first phase, we develop a
correlation-based feature selection algorithm to remove the worthless
information from the original high-dimensional database. Next, we
design an intrusion detection method to solve the problems of
uncertainty caused by limited and ambiguous information. In the
experiments, we choose six UCI databases and the DARPA KDD99
intrusion detection data set as our evaluation tools. Empirical studies
indicate that our feature selection algorithm is capable of reducing the
size of the data set. Our intrusion detection method achieves better
performance than that of the participating intrusion detectors.
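The first-phase filter can be illustrated with a minimal sketch. The abstract does not specify the correlation measure, so this example ranks each feature by its absolute Pearson correlation with the class label and drops weak features; the threshold value and all names are illustrative:

```python
import numpy as np

def correlation_filter(X, y, threshold=0.1):
    """Keep indices of features whose absolute Pearson correlation with
    the label vector exceeds `threshold` (value is illustrative)."""
    keep = []
    yc = y - y.mean()
    for j in range(X.shape[1]):
        xc = X[:, j] - X[:, j].mean()
        denom = np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
        # a constant feature has zero variance: treat correlation as 0
        r = (xc @ yc) / denom if denom > 0 else 0.0
        if abs(r) > threshold:
            keep.append(j)
    return keep
```

A real correlation-based feature selector would also penalize redundancy between the kept features; this sketch shows only the feature-label relevance part.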
On Preprocessing of Speech Signals
Preprocessing of speech signals is considered a crucial step in the development of a robust and efficient speech or speaker recognition system. In this paper, we present some popular statistical outlier-detection based strategies to segregate the silence/unvoiced part of the speech signal from the voiced portion. The proposed methods are based on the 3σ edit rule and the Hampel identifier, which are compared with the conventional techniques: (i) short-time energy (STE) based methods, and (ii) distribution based methods. The results obtained after applying the proposed strategies to some test voice signals are encouraging.
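A minimal sketch of the two outlier-detection rules applied to short-time frame energies, assuming silence dominates the recording so that voiced frames appear as high-energy outliers (frame length and threshold values are illustrative, not from the paper):

```python
import numpy as np

def frame_energies(signal, frame_len=256):
    """Mean squared amplitude per non-overlapping frame."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    return (frames ** 2).mean(axis=1)

def voiced_mask_3sigma(energies):
    """3σ edit rule: frames whose energy deviates from the mean by more
    than 3 standard deviations are treated as outliers (voiced)."""
    mu, sigma = energies.mean(), energies.std()
    return np.abs(energies - mu) > 3 * sigma

def voiced_mask_hampel(energies, t=3.0):
    """Hampel identifier: replace mean/std with median/MAD, which is far
    less sensitive to the outliers being detected."""
    med = np.median(energies)
    mad = 1.4826 * np.median(np.abs(energies - med))
    return np.abs(energies - med) > t * mad
```

The Hampel identifier is generally preferred when the outliers themselves would inflate the mean and standard deviation used by the 3σ rule.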
Motion Area Estimated Motion Estimation with Triplet Search Patterns for H.264/AVC
In this paper a fast motion estimation method for
H.264/AVC named Triplet Search Motion Estimation (TS-ME) is
proposed. Like some of the traditional fast motion estimation
methods and their improved variants, which restrict the search points
to some selected candidates in order to decrease the computational
complexity, the proposed algorithm separates the motion search process into
several steps, but with some new features. First, the proposed algorithm tries
to locate the real motion area using the proposed triplet patterns instead of
some selected search points, to avoid dropping into a local minimum.
Then, in the localized motion area, a novel 3-step motion search
algorithm is performed. The proposed search patterns are categorized into
three rings on the basis of the distance from the search center. These
three rings are adaptively selected by referencing the surrounding
motion vectors so as to terminate the motion search process early. In
addition, computation reduction for sub-pixel motion search is also
discussed, considering the appearance probability of the sub-pixel
motion vector. The simulation results show that the motion estimation speed is
improved by a factor of up to 38 when using the proposed algorithm
compared with the H.264/AVC reference software, with negligible picture
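The abstract does not specify the triplet patterns themselves, so as a generic illustration of the step-wise block matching that TS-ME builds on, here is the classic three-step search over SAD costs (all names and parameters are illustrative):

```python
import numpy as np

def sad(block, ref, x, y):
    """Sum of absolute differences between `block` and the same-sized
    window of `ref` whose top-left corner is (x, y)."""
    h, w = block.shape
    return np.abs(ref[y:y + h, x:x + w] - block).sum()

def three_step_search(block, ref, cx, cy, step=4):
    """Classic three-step search: evaluate a 3x3 grid of candidates
    around the current best position, then halve the step size."""
    bh, bw = block.shape
    best, best_cost = (cx, cy), sad(block, ref, cx, cy)
    while step >= 1:
        x0, y0 = best  # fixed center for this step
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                x, y = x0 + dx, y0 + dy
                # skip candidates whose window would leave the frame
                if 0 <= x <= ref.shape[1] - bw and 0 <= y <= ref.shape[0] - bh:
                    cost = sad(block, ref, x, y)
                    if cost < best_cost:
                        best, best_cost = (x, y), cost
        step //= 2
    return best, best_cost
```

Methods like TS-ME refine this idea with adaptive patterns and early termination; the sketch only shows the baseline cost model and coarse-to-fine search structure.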
Analysis of Textual Data Based On Multiple 2-Class Classification Models
This paper proposes a new method for analyzing textual data. The method deals with items of textual data, where each item is described from various viewpoints. The method acquires 2-class classification models of the viewpoints by applying an inductive learning method to items with multiple viewpoints. It infers whether the viewpoints are assigned to new items by using the models. The method extracts expressions from the new items classified into the viewpoints, and extracts characteristic expressions corresponding to the viewpoints by comparing the frequency of expressions among the viewpoints. This paper also applies the method to questionnaire data given by guests at a hotel and verifies its effect through numerical experiments.
A Survey of Business Component Identification Methods and Related Techniques
With the deep development of software reuse, component-related
technologies have been widely applied in the development of
large-scale complex applications. Component identification (CI) is
one of the primary research problems in software reuse: analyzing
domain business models to obtain a set of business components with high
reuse value and good reuse performance to support effective reuse.
Based on the concept and classification of CI, its technical stack is
briefly discussed from four views, i.e., the form of input business models,
identification goals, identification strategies, and identification
process. Then the various CI methods presented in the literature are
classified into four types, i.e., domain analysis based methods,
cohesion-coupling based clustering methods, CRUD matrix based
methods, and other methods, with comparisons of their
advantages and disadvantages. Additionally, some
shortcomings of the study of CI are discussed, and their causes are
explained. Finally, the survey concludes with some
significantly promising tendencies in research on this problem.
Extraction of Temporal Relation by the Creation of Historical Natural Disaster Archive
In historical science and social science, the influence
of natural disasters upon society is a matter of great interest. In
recent years, some archives of natural disasters have been created by
hand, but this is inefficient and wasteful. We therefore propose a
computer system to create a historical natural disaster archive. As
the target of this analysis, we consider newspaper articles, since news
articles are typical examples that prescribe the
temporal relations of affairs for a natural disaster. In order to perform this
analysis, we identify the occurrences in newspaper articles by some
index entries, considering the affairs which are specific to natural
disasters, and show the temporal relations between natural disasters.
We designed and implemented an automatic system for the "extraction
of the occurrences of natural disasters" and the "temporal relation table
for natural disasters."
Embedding a Large Amount of Information Using High Secure Neural Based Steganography Algorithm
In this paper, we construct and implement a new
steganography algorithm based on a learning system to hide a large
amount of information in a color BMP image. We use adaptive
image filtering and adaptive non-uniform image segmentation with
bit replacement on the appropriate pixels. These pixels are selected
randomly rather than sequentially, using a new concept defined by
main cases with sub-cases for each byte in one pixel. From
the design steps, we derived 16 main cases with their
sub-cases that cover all aspects of embedding the input information into a color
bitmap image. High security has been proposed through four
layers of security, to make it difficult to break the encryption of the
input information and to confuse steganalysis too. A learning system is
introduced at the fourth layer of security through a neural
network. This layer is used to increase the difficulty of statistical
attacks. Our results against statistical and visual attacks are discussed
before and after using the learning system, and we make a comparison
with a previous steganography algorithm. We show that our
algorithm can efficiently embed a large amount of information,
reaching 75% of the image size (replacing up to 18 bits per
pixel), with high output quality.
Adaptive Anisotropic Diffusion for Ultrasonic Image Denoising and Edge Enhancement
Utilizing echoic intensity and distribution from different organs and local details of the human body, ultrasonic images can capture important pathological changes, which unfortunately may be affected by ultrasonic speckle noise. A feature-preserving ultrasonic image denoising and edge enhancement scheme is put forth, which includes two terms, anisotropic diffusion and edge enhancement, controlled by the optimum smoothing time. In this scheme, the anisotropic diffusion is governed by the local coordinate transformation and the first- and second-order normal derivatives of the image, while the edge enhancement is done by the hyperbolic tangent function. Experiments on real ultrasonic images indicate that our scheme effectively preserves edges, local details and ultrasonic echoic bright strips while denoising.
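The paper's diffusion is driven by a local coordinate transformation and normal derivatives, which the abstract does not fully specify; as a baseline illustration of the anisotropic-diffusion idea it builds on, here is the classic Perona-Malik scheme (parameters illustrative; borders wrap around via np.roll):

```python
import numpy as np

def perona_malik(img, iters=10, kappa=0.1, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion: smooth within regions
    while damping diffusion across strong gradients (edges). This is the
    standard baseline, not the paper's scheme (local coordinates + tanh
    edge enhancement)."""
    u = img.astype(float).copy()
    for _ in range(iters):
        # one-sided differences to the four neighbours (wrap at borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping function g = exp(-(|grad|/kappa)^2): near 1 in
        # flat regions, near 0 across edges
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Speckle-specific schemes such as the one described above replace the scalar edge-stopping function with direction-dependent diffusion so that echoic strips are preserved rather than blurred.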
Measuring the Comprehensibility of a UML-B Model and a B Model
Software maintenance, which involves making enhancements, modifications and corrections to existing software systems, consumes more than half of developer time. Specification comprehensibility plays an important role in software maintenance as it permits the understanding of the system properties more easily and quickly. The use of formal notation such as B increases a specification's precision and consistency. However, the notation is regarded as being difficult to comprehend. Semi-formal notation such as the Unified Modelling Language (UML) is perceived as more accessible but it lacks formality. Combining both notations could perhaps produce a specification that is not only accurate and consistent but also accessible to users. This paper presents an experiment conducted on a model that integrates the use of both UML and B notations, namely UML-B, versus a B model alone. The objective of the experiment was to evaluate the comprehensibility of a UML-B model compared to a traditional B model. The measurement used in the experiment focused on the efficiency in performing the comprehension tasks. The experiment employed a cross-over design and was conducted on forty-one subjects, including undergraduate and masters students. The results show that the notation used in the UML-B model is more comprehensible than the B model.
Automatic Fingerprint Classification Using Graph Theory
Efficient classification methods are necessary for an automatic fingerprint recognition system. This paper introduces a new structural approach to fingerprint classification that uses the directional image of fingerprints to increase the number of subclasses. In this method, the directional image of a fingerprint is segmented into regions consisting of pixels with the same direction. Afterwards the relational graph of the segmented image is constructed and, from it, the super graph containing the prominent information of this graph is formed. Ultimately we apply a matching technique to compare the obtained graph with the model graphs in order to classify fingerprints using a cost function. By increasing the number of subclasses with acceptable classification accuracy and by processing fingerprints faster, this system is superior.
Hybrid Genetic-Simulated Annealing Approach for Fractal Image Compression
In this paper a hybrid technique of Genetic Algorithm
and Simulated Annealing (HGASA) is applied to Fractal Image
Compression (FIC). With the help of this hybrid evolutionary
algorithm, an effort is made to reduce the search complexity of matching
between range blocks and domain blocks. The concept of Simulated
Annealing (SA) is incorporated into the Genetic Algorithm (GA) in order
to avoid premature convergence of the strings. Fractal Image
Compression is one of the image compression
techniques in the spatial domain, but its main drawback is that it involves more
computational time due to global search. In order to improve the
computational time along with acceptable quality of the decoded
image, the HGASA technique has been proposed. Experimental results
show that the proposed HGASA is a better method than GA in terms
of PSNR for Fractal Image Compression.
Performance Evaluation of Data Transfer Protocol GridFTP for Grid Computing
In Grid computing, a data transfer protocol called
GridFTP has been widely used for efficiently transferring a large volume
of data. Currently, two versions of GridFTP protocols, GridFTP
version 1 (GridFTP v1) and GridFTP version 2 (GridFTP v2), have
been proposed in the GGF. GridFTP v2 supports several advanced
features such as data streaming, dynamic resource allocation, and
checksum transfer, by defining a transfer mode called X-block mode.
However, in the literature, the effectiveness of GridFTP v2 has not been
fully investigated. In this paper, we therefore quantitatively evaluate
the performance of GridFTP v1 and GridFTP v2 using mathematical
analysis and simulation experiments. We reveal the performance
limitation of GridFTP v1, and quantitatively show the effectiveness of
GridFTP v2. Through several numerical examples, we show that by
utilizing the data streaming feature, the average file transfer time of
GridFTP v2 is significantly smaller than that of GridFTP v1.
iDEN™ Phones Automated Stress Testing
System testing exercises the entire system
against the Functional Requirement Specification and/or the System
Requirement Specification. Moreover, it is an investigatory testing
phase, where the focus is to have an almost destructive attitude and to
test not only the design, but also the behavior and even the believed
expectations of the customer. It is also intended to test up to and
beyond the bounds defined in the software/hardware requirements
specifications. In Motorola®, automated testing is one of the testing
methodologies used by GSG-iSGT (Global Software Group - iDEN
Subscriber Group-Test) to increase the testing volume and productivity
and to reduce the test cycle-time in iDEN phone testing. Such testing is able
to produce more robust products before release to the market. In this
paper, iHopper is proposed as a tool to perform stress tests on iDEN
phones. We will discuss the value that automation has brought to
phone testing, such as improving software quality in the
phone, together with some metrics. We will also look into
the advantages of the proposed system and give some discussion of
future work as well.
Adaptive Bidirectional Flow for Image Interpolation and Enhancement
Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer to some extent from blurred edges and jagged artifacts in the image. This paper presents an adaptive feature-preserving bidirectional flow process, where an inverse diffusion is performed to sharpen edges along the normal directions to the isophote lines (edges), while a normal diffusion is done to remove artifacts ("jaggies") along the tangent directions. In order to preserve image features such as edges, corners and textures, the nonlinear diffusion coefficients are locally adjusted according to the directional derivatives of the image. Experimental results on synthetic images and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolations.
Interactive Model Based On an Extended CPN
The UML modeling of complex distributed systems is often a great challenge due to the large number of parallel real-time operating components. In this paper the problems of verifying such systems are discussed. ECPN, an Extended Colored Petri Net, is defined to formally describe state transitions of components and interactions among components. The relationship between sequence diagrams and Free Choice Petri Nets is investigated; Free Choice Petri Net theory helps in verifying the liveness of sequence diagrams. By converting sequence diagrams to ECPNs and then comparing the behaviors of the sequence diagram ECPNs and statecharts, the consistency among models is analyzed. Finally, a verification process for an example model is demonstrated.
Computer Generated Hologram for Semi-Fragile Watermarking with Encrypted Images
The protection of the contents of digital products is
referred to as content authentication. In some applications, being able
to authenticate a digital product can be essential. For
example, if a digital product is used as a piece of evidence in
court, its integrity could mean life or death for the accused. Generally,
the problem of content authentication can be solved using semi-fragile
digital watermarking techniques. Recently many authors have
proposed Computer Generated Hologram Watermarking (CGH-Watermarking)
techniques. Starting from these studies, in this paper
a semi-fragile Computer Generated Hologram coding technique is
proposed, which is able to detect malicious tampering while
tolerating some incidental distortions. The proposed technique uses
an encrypted image as the watermark, and it is well suited for digital
A Web Oriented Spread Spectrum Watermarking Procedure for MPEG-2 Videos
In the last decade digital watermarking procedures have
been increasingly applied to implement the copyright protection
of multimedia digital contents distributed on the Internet. To this
end, it is worth noting that many of the watermarking procedures
for images and videos proposed in the literature are based on spread
spectrum techniques. However, some scepticism about the robustness
and security of such watermarking procedures has arisen because
of documented attacks which claim to render the inserted
watermarks undetectable. On the other hand, web content providers
wish to exploit watermarking procedures characterized by flexible and
efficient implementations which can be easily integrated into their
existing web service frameworks or platforms. This paper presents
how a simple spread spectrum watermarking procedure for MPEG-2
videos can be modified to be exploited in web contexts. To this end,
the proposed procedure has been made secure and robust against some
well-known and dangerous attacks. Furthermore, its basic scheme
has been optimized by making the insertion procedure adaptive with
respect to the terminals used to open the videos and the network
transactions carried out to deliver them to buyers. Finally, two different
implementations of the procedure have been developed: the former
is a high-performance parallel implementation, whereas the latter is
a portable Java and XML based implementation. Thus, the paper
demonstrates that a simple spread spectrum watermarking procedure,
with limited and appropriate modifications to the embedding scheme,
can still represent a valid alternative to many other well-known and
more recent watermarking procedures proposed in the literature.
Frame Texture Classification Method (FTCM) Applied on Mammograms for Detection of Abnormalities
Texture classification is an important image processing
task with a broad application range. Many different techniques for
texture classification have been explored. Using sparse approximation
as a feature extraction method for texture classification is a relatively
new approach, and Skretting et al. recently presented the Frame
Texture Classification Method (FTCM), showing very good results on
classical texture images. As an extension of that work, the FTCM is
here tested on a real-world application: the detection of abnormalities
in mammograms. Some extensions to the original FTCM that are
useful in some applications are implemented: two different smoothing
techniques and a vector augmentation technique. Both the detection of
microcalcifications (as a primary detection technique and as the last
stage of a detection scheme) and of soft-tissue lesions in mammograms
are explored. All the results are interesting, and especially the results
using FTCM on regions of interest as the last stage in a detection
scheme for microcalcifications are promising.
Using a Trust-Based Environment Key for Mobile Agent Code Protection
Human activities are increasingly based on the use of remote resources and services, and on the interaction between
remotely located parties that may know little about each other. Mobile agents must be prepared to execute on different hosts with
various environmental security conditions. The aim of this paper is to
propose a trust-based mechanism to improve the security of mobile
agents and to allow their execution in various environments. Thus, an
adaptive trust mechanism is proposed, based on the dynamic interaction between the agent and the environment. Information
collected during the interaction enables the generation of an environment
key. This key reflects the host's trust degree and permits the mobile agent to adapt its execution. Trust estimation is based on
concrete parameter values. Thus, in case of distrust, the source of the problem can be located and an appropriate mobile agent behavior can
Factoring a Polynomial with Multiple Roots
A given polynomial, possibly with multiple roots, is
factored into several lower-degree distinct-root polynomials with
natural-order-integer powers. All the roots, including multiplicities,
of the original polynomial may be obtained by solving these
lower-degree distinct-root polynomials, instead of the original
high-degree multiple-root polynomial directly.
The approach requires polynomial Greatest Common Divisor
(GCD) computation. A very simple and effective process, "monic
polynomial subtractions", converted from the "longhand
polynomial divisions" of the Euclidean algorithm, is employed. It
requires only simple elementary arithmetic operations without any
Amazingly, the derived routine gives the expected results for
test polynomials of very high degree, such as p(x) = (x+1)^1000.
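The factorization described, p = f1·f2²·f3³·…, is the classical square-free decomposition, computable with repeated polynomial GCDs. The abstract's "monic polynomial subtractions" routine is not reproduced here; this sketch uses ordinary Euclidean division over exact rational coefficients instead:

```python
from fractions import Fraction

def normalize(p):
    """Coefficients highest-degree first; strip leading zeros."""
    p = [Fraction(c) for c in p]
    while len(p) > 1 and p[0] == 0:
        p = p[1:]
    return p

def deriv(p):
    n = len(p) - 1
    if n == 0:
        return [Fraction(0)]
    return normalize([c * (n - i) for i, c in enumerate(p[:-1])])

def divmod_poly(a, b):
    """Polynomial long division: return (quotient, remainder)."""
    a, b = normalize(a), normalize(b)
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    r = a[:]
    while len(r) >= len(b) and r != [Fraction(0)]:
        coef = r[0] / b[0]
        q[len(q) - (len(r) - len(b)) - 1] = coef
        r = [r[i] - coef * b[i] for i in range(len(b))] + r[len(b):]
        r = normalize(r[1:]) if len(r) > 1 else [Fraction(0)]
    return normalize(q), r

def gcd_poly(a, b):
    """Euclidean algorithm; result is made monic."""
    a, b = normalize(a), normalize(b)
    while b != [Fraction(0)]:
        _, r = divmod_poly(a, b)
        a, b = b, r
    return [c / a[0] for c in a]

def square_free(f):
    """Return [f1, f2, ...] with monic(f) = f1 * f2**2 * f3**3 * ...,
    each fi having only simple (distinct) roots."""
    f = normalize(f)
    f = [c / f[0] for c in f]          # make monic
    out = []
    g = gcd_poly(f, deriv(f))          # carries the repeated factors
    w, _ = divmod_poly(f, g)           # product of the distinct factors
    while w != [Fraction(1)]:
        y = gcd_poly(w, g)
        fi, _ = divmod_poly(w, y)      # factors of the current multiplicity
        out.append(fi)
        w, g = y, divmod_poly(g, y)[0]
    return out
```

Each returned polynomial has only simple roots, so standard root finders recover all roots of the original polynomial together with their multiplicities, exactly as the abstract describes.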
A Support System Applicable to Multiple APIs for Haptic VR Application Designers
This paper describes a proposed support system which
enables application designers to effectively create VR applications
using multiple haptic APIs. When VR designers create
applications, it is often difficult to handle and understand the many
parameters and functions that have to be set in the application program
using documentation manuals only. This complication may disrupt
creative imagination and result in inefficient coding. We therefore proposed
a support application which improves the efficiency of VR
application development and provides interactive components for
confirming operations with haptic feedback in advance.
In this paper, we describe improvements to our previously proposed
support application, making it applicable to multiple APIs and haptic
devices, and evaluate the new application by having participants
complete a VR program. Results from a preliminary experiment suggest
that our application facilitates the creation of VR applications.
A Sequential Pattern Mining Method Based On Sequential Interestingness
Sequential mining methods efficiently discover all frequent sequential patterns included in sequential data. These methods use the support, a conventional criterion that satisfies the Apriori property, to evaluate the frequency. However, the discovered patterns do not always correspond to the interests of analysts, because the patterns are common and the analysts cannot obtain new knowledge from them. This paper proposes a new criterion, namely the sequential interestingness, to discover sequential patterns that are more attractive to analysts. The paper shows that the criterion satisfies the Apriori property and how the criterion is related to the support. Also, the paper proposes an efficient sequential mining method based on the proposed criterion. Lastly, the paper shows the effectiveness of the proposed method by applying it to two kinds of sequential data.
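As background for the support criterion and the Apriori property it satisfies, here is a minimal support computation over a toy sequence database (illustrative names; the paper's interestingness criterion itself is not reproduced here):

```python
def is_subsequence(pattern, seq):
    """True if `pattern` occurs in `seq` as an ordered, not necessarily
    contiguous, subsequence."""
    it = iter(seq)
    return all(item in it for item in pattern)  # `in` advances the iterator

def support(pattern, database):
    """Fraction of sequences in the database that contain the pattern."""
    return sum(is_subsequence(pattern, s) for s in database) / len(database)
```

The Apriori property means that extending a pattern can never raise its support, which is what lets mining algorithms prune the search space: once a pattern is infrequent, all of its extensions are too.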
A New Similarity Measure Based On Edge Counting
In the field of concept similarity, the measure of Wu and Palmer has the advantage of being simple to implement and performs well compared to other similarity measures. Nevertheless, the Wu and Palmer measure presents the following disadvantage: in some situations, the similarity of two elements of an IS-A ontology contained in the same neighborhood exceeds the similarity value of two elements contained in the same hierarchy. This situation is inadequate within the information retrieval framework. To overcome this problem, we propose a new similarity measure based on the Wu and Palmer measure. Our objective is to obtain realistic results for concepts that are not located in the same hierarchy. The obtained results show that, compared to the Wu and Palmer approach, our measure offers a gain in both relevance and execution time.
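For reference, the Wu and Palmer measure that the proposed work builds on can be sketched over a toy IS-A hierarchy. The taxonomy below is illustrative, not from the paper:

```python
def depth(node, parent):
    """Number of nodes from `node` up to the root along IS-A links."""
    d = 1
    while node in parent:
        node = parent[node]
        d += 1
    return d

def ancestors(node, parent):
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def wu_palmer(c1, c2, parent):
    """Wu-Palmer similarity: 2 * depth(lcs) / (depth(c1) + depth(c2))."""
    a2 = set(ancestors(c2, parent))
    lcs = next(a for a in ancestors(c1, parent) if a in a2)  # least common subsumer
    return 2 * depth(lcs, parent) / (depth(c1, parent) + depth(c2, parent))

# Tiny IS-A hierarchy: entity -> {animal -> {cat, dog}, plant -> tree}
parent = {"animal": "entity", "plant": "entity",
          "cat": "animal", "dog": "animal", "tree": "plant"}
```

Here `wu_palmer("cat", "dog", parent)` scores higher than `wu_palmer("cat", "tree", parent)` because the common subsumer of cat and dog sits deeper in the hierarchy; the situation the abstract criticizes arises when neighborhood concepts beat same-hierarchy concepts despite this intent.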
Discovery of Sequential Patterns Based On Constraint Patterns
This paper proposes a method that discovers sequential patterns corresponding to users' interests from sequential data. The method expresses these interests as constraint patterns, which can define relationships among the attributes of the items composing the data. The method recursively decomposes the constraint patterns into constraint subpatterns and evaluates the subpatterns in order to efficiently discover sequential patterns satisfying the constraint patterns. The paper also applies the method to sequential data composed of stock price indexes and verifies its effectiveness by comparing it with a method that does not use constraint patterns.
A Unified Robust Algorithm for Detection of Human and Non-human Object in Intelligent Safety Application
This paper presents a general trainable framework for fast and robust detection and verification of upright human faces and non-human objects in static images. To enhance the performance of the detection process, the technique we develop combines a fast neural network (FNN) and a classical neural network (CNN). In the FNN, a correlation between the input image and the weights of the hidden neurons is exploited to sustain a high level of detection accuracy. This enables the use of the Fourier transform, which significantly speeds up detection. The CNN is responsible for verifying the face region. A bootstrap algorithm is used to collect non-human objects, adding false detections to the training process for human and non-human objects. Experimental results on test images with both simple and complex backgrounds demonstrate that the proposed method achieves a high detection rate and a low false positive rate in detecting both human faces and non-human objects.
Auto Tuning PID Controller based on Improved Genetic Algorithm for Reverse Osmosis Plant
An optimal control of a Reverse Osmosis (RO) plant is studied in this paper, utilizing the auto-tuning concept in conjunction with a PID controller. A control scheme composed of an auto-tuning stochastic technique based on an improved Genetic Algorithm (GA) is proposed. For better evaluation of the process in the GA, a newly defined objective function based on the root mean square error is used. Also, in order to achieve better GA performance, a purer and longer-period random number generator is sought. The main improvement is made by replacing the uniform-distribution random number generator of the conventional GA technique with a newly designed hybrid random generator composed of a Cauchy distribution and a linear congruential generator, which provides independent and different random numbers at each individual step of the genetic operations. The performance of the newly proposed GA-tuned controller is compared with that of conventional ones via simulation.
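The hybrid-generator idea can be sketched as follows: a linear congruential generator (LCG) supplies uniforms, which are mapped through the Cauchy quantile function to produce heavy-tailed variates. The LCG constants and the location/scale parameters below are illustrative assumptions, not the paper's actual choices:

```python
import math

class HybridRandom:
    """Sketch of a hybrid random generator: LCG uniforms transformed
    into Cauchy-distributed values via the inverse-CDF method."""

    def __init__(self, seed=1, x0=0.0, gamma=1.0):
        self.state = seed
        self.x0, self.gamma = x0, gamma  # Cauchy location and scale

    def uniform(self):
        # Classic Numerical Recipes LCG constants (illustrative choice).
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

    def cauchy(self):
        # Inverse CDF of the Cauchy distribution applied to a uniform draw.
        return self.x0 + self.gamma * math.tan(math.pi * (self.uniform() - 0.5))

rng = HybridRandom(seed=42)
steps = [rng.cauchy() for _ in range(5)]  # heavy-tailed mutation offsets for the GA
```

The heavy tails of the Cauchy distribution occasionally produce large mutation steps, which is one common rationale for using it in GA operators instead of a uniform generator.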
Evolution, Tendencies and Impact of Standardization of Input/Output Platforms in Full Scale Simulators for Training Power Plant Operators
This article presents the evolution and technological changes implemented in the full scale simulators developed by the Simulation Department of the Instituto de Investigaciones Eléctricas (Mexican Electric Research Institute) and located at different training centers around the Mexican territory. It covers the latest updates, basically from the input/output viewpoint, of the current simulators at some facilities of the electrical sector, as well as the compatible industry of electrical manufacturers and utilities such as the Comisión Federal de Electricidad (CFE, the Mexican utility company). Tendencies of these developments and their impact within the operators' scope are also presented.
A Modified Spiral Search Algorithm and Its Embedded System Architecture Design
One of the fastest growing areas in the embedded community is multimedia devices. Multimedia devices incorporate a number of complicated functions for their operation, like motion estimation. A multitude of different implementations have been proposed to reduce motion estimation complexity, such as spiral search. We have studied the implementations of spiral search and identified areas of improvement. We propose a modified spiral search algorithm with lower computational complexity compared to the original spiral search. We have implemented our algorithm on an embedded ARM-based architecture with a custom memory hierarchy. The resulting system yields an energy consumption reduction of up to 64% and a performance increase of up to 77%, with a small average penalty of 2.3 dB in video quality compared with the original spiral search algorithm.
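The core of spiral search is visiting candidate motion-vector offsets in order of increasing distance from the predicted position, so the search can terminate early once a good enough match is found. A minimal sketch of that ordering (not the paper's modified variant, whose changes are not detailed in the abstract):

```python
def spiral_order(radius):
    """Candidate motion-vector offsets within a square search window,
    ordered by increasing Chebyshev distance from the (0, 0) prediction."""
    offsets = [(dx, dy)
               for dx in range(-radius, radius + 1)
               for dy in range(-radius, radius + 1)]
    # Sort rings outward from the center; ties broken deterministically.
    return sorted(offsets, key=lambda p: (max(abs(p[0]), abs(p[1])), p))

order = spiral_order(2)  # 25 candidates, center first
```

Because the best match usually lies near the predictor, checking candidates in this order lets an early-termination threshold skip most of the window, which is where the complexity savings come from.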
This paper presents an Extended Kalman Filter implementation of a single-camera Visual Simultaneous Localization and Mapping algorithm, a novel algorithm for the simultaneous localization and mapping problem widely studied in the mobile robotics field. The algorithm is vision- and odometry-based. The odometry data is incremental and therefore accumulates error over time, since the robot may slip or be lifted; consequently, if the odometry is used alone, we cannot accurately estimate the robot position. In this paper we show that a combination of odometry and visual landmarks via the extended Kalman filter can improve the robot position estimate. We use a Pioneer II robot and a motorized pan-tilt camera to implement the algorithm.
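The odometry/landmark fusion described above can be illustrated with a one-dimensional Kalman predict/update cycle. This is a deliberate simplification of the full EKF (scalar state, no map), and all numeric values are illustrative:

```python
def kalman_fuse(x, p, u, q, z, r):
    """One predict/update cycle of a scalar Kalman filter.
    x, p : position estimate and its variance
    u, q : odometry increment and its noise variance
    z, r : landmark-based position measurement and its noise variance"""
    # Predict with odometry: the variance grows, which is why odometry
    # alone accumulates error over time.
    x, p = x + u, p + q
    # Correct with the visual landmark: the Kalman gain pulls the
    # estimate toward the measurement and shrinks the variance.
    k = p / (p + r)
    return x + k * (z - x), (1 - k) * p

est, var = kalman_fuse(0.0, 1.0, 1.0, 0.5, 1.2, 0.5)  # drifted prediction corrected by z
```

In the full EKF the same structure applies with vector states and Jacobians of the motion and camera measurement models, but the intuition is identical: the landmark update bounds the drift that pure odometry would accumulate.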
Some Computational Results on MPI Parallel Implementation of Dense Simplex Method
There are two major variants of the Simplex Algorithm: the revised method and the standard, or tableau, method. Today, all serious implementations are based on the revised method because it is more efficient for sparse linear programming problems. However, a number of applications lead to dense linear problems, so our aim in this paper is to present some computational results on a parallel implementation of the dense Simplex Method. Our implementation runs on an SMP cluster using the C programming language and the Message Passing Interface (MPI). Preliminary computational results on randomly generated dense linear programs support our approach.
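The appeal of the standard method for dense problems can be seen in a single pivot step: every entry of the tableau is updated by the same rank-one formula, so rows can be distributed across MPI ranks and updated independently after the pivot row is broadcast. A minimal serial sketch (illustrative values, not the paper's C/MPI code):

```python
def pivot(tableau, row, col):
    """One pivot step of the standard (tableau) simplex method.
    Normalizes the pivot row, then eliminates the pivot column
    from every other row."""
    pivot_row = [v / tableau[row][col] for v in tableau[row]]
    return [pivot_row if r == row else
            [v - tableau[r][col] * pv
             for v, pv in zip(tableau[r], pivot_row)]
            for r in range(len(tableau))]

# Pivoting on entry (0, 0) turns column 0 into a unit column.
t = pivot([[1.0, 2.0, 4.0],
           [3.0, 1.0, 6.0]], 0, 0)
```

In a dense problem nearly all entries are nonzero, so this uniform update wastes no work, whereas the revised method's advantage comes precisely from skipping the zeros that dense problems lack.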
A New Approach for the Fingerprint Classification Based On Gray-Level Co-Occurrence Matrix
In this paper, we propose an approach for the classification of fingerprint databases. It is based on the fact that a fingerprint image is composed of regular texture regions that can be successfully represented by co-occurrence matrices. So, we first extract features based on certain characteristics of the co-occurrence matrix, and then we use these features to train a neural network for classifying fingerprints into four common classes. The obtained results, compared with existing approaches, demonstrate the superior performance of our proposed approach.
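A gray-level co-occurrence matrix counts how often pairs of gray levels occur at a fixed displacement, and classic texture features are then computed from the normalized matrix. A minimal sketch with an illustrative 4-level image (which specific characteristics the paper extracts is not stated in the abstract):

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for displacement (dx, dy)."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y][x]][img[y + dy][x + dx]] += 1
            total += 1
    return [[v / total for v in row] for row in m]

def features(p):
    """A few classic Haralick-style texture features of a normalized GLCM."""
    n = len(p)
    energy = sum(v * v for row in p for v in row)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    homogeneity = sum(p[i][j] / (1 + (i - j) ** 2) for i in range(n) for j in range(n))
    return {"energy": energy, "contrast": contrast, "homogeneity": homogeneity}

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
feats = features(glcm(img))  # feature vector that would feed the classifier
```

In practice, features from several displacements and directions are concatenated into the input vector of the neural network.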
A Type-2 Fuzzy Adaptive Controller of a Class of Nonlinear System
In this paper we propose a robust adaptive fuzzy controller for a class of nonlinear systems with unknown dynamics. The method is based on a type-2 fuzzy logic system to approximate the unknown nonlinear functions. The design of the on-line adaptive scheme of the proposed controller is based on the Lyapunov technique. Simulation results are given to illustrate the effectiveness of the proposed controller.
Decision Making with Dempster-Shafer Theory of Evidence Using Geometric Operators
We study the problem of decision making with a Dempster-Shafer belief structure. We analyze the previous work developed by Yager on using the ordered weighted averaging (OWA) operator in the aggregation of the Dempster-Shafer decision process. We discuss the possibility of aggregating with an ascending order in the OWA operator for the cases where the smallest value is the best result. We suggest the introduction of the ordered weighted geometric (OWG) operator in the Dempster-Shafer framework. In this case, we also discuss the possibility of aggregating with an ascending order, and we find that it is completely necessary, as the OWG operator cannot aggregate negative numbers. Finally, we give an illustrative example where we can see the different results obtained by using the OWA, the Ascending OWA (AOWA), the OWG and the Ascending OWG (AOWG) operators.
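The operators under discussion can be sketched directly from their definitions. The payoff values and weighting vector below are illustrative, not the paper's example:

```python
def owa(values, weights):
    """Ordered weighted averaging: weights applied to values sorted descending."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def aowa(values, weights):
    """Ascending OWA: weights applied to values sorted ascending."""
    return sum(w * v for w, v in zip(weights, sorted(values)))

def owg(values, weights):
    """Ordered weighted geometric operator; requires strictly positive inputs,
    which is why ascending reordering is essential when small is best."""
    assert all(v > 0 for v in values), "OWG cannot aggregate non-positive numbers"
    result = 1.0
    for w, v in zip(weights, sorted(values, reverse=True)):
        result *= v ** w
    return result

payoffs = [70, 40, 90]
w = [0.5, 0.3, 0.2]          # illustrative weighting vector, summing to 1
best_is_large = owa(payoffs, w)   # emphasizes the larger payoffs
best_is_small = aowa(payoffs, w)  # emphasizes the smaller payoffs
```

The OWG result is a weighted geometric mean, so it always lies between the smallest and largest aggregated value, but it is undefined for negative payoffs, motivating the AOWG construction for minimization-style problems.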
Privacy in New Mobile Payment Protocol
The increasing development of wireless networks and the widespread popularity of handheld devices such as Personal Digital Assistants (PDAs), mobile phones and wireless tablets represent an incredible opportunity to enable mobile devices as a universal payment method for daily financial transactions. Unfortunately, several issues hamper the widespread acceptance of mobile payment, such as accountability properties, privacy protection, and the limitations of wireless networks and mobile devices. Recently, many mobile payment protocols based on public-key cryptography have been proposed. However, the limited capabilities of mobile devices and wireless networks make these protocols unsuitable for mobile networks. Moreover, these protocols were designed to preserve the traditional flow of payment data, which is vulnerable to attack and increases the user's risk. In this paper, we propose a private mobile payment protocol based on a client-centric model that employs symmetric-key operations. The proposed mobile payment protocol not only minimizes the computational operations and communication passes between the engaging parties, but also achieves complete privacy protection for the payer. Future work will concentrate on improving the verification solution to support mobile user authentication and authorization for mobile payment.
Evaluation of Risk Attributes Driven by Periodically Changing System Functionality
Modeling of distributed systems allows us to represent their whole functionality. A working system instance rarely fulfils the whole functionality represented by the model; usually some parts of this functionality should be accessible only periodically. A reporting system based on the Data Warehouse concept seems to be an intuitive example of a system in which some functionality is required only from time to time. Analyzing the enterprise risk associated with periodical changes of system functionality, we should consider not only the inaccessibility of the components (objects) but also their functions (methods), and the impact of such a situation on the system functionality from the business point of view. In the paper we suggest that the risk attributes should be estimated from risk attributes specified at the requirements level (Use Cases in the UML model) on the basis of information about the structure of the model (presented at other levels of the UML model). We argue that it is desirable to consider the influence of periodical changes in requirements on the enterprise risk estimation. Finally, a proposition of such a solution based on the UML system model is presented.
Specification of a Model of Honeypot Attack Based On Raised Data
The security of their networks remains a priority for almost all companies. Existing security systems have shown their limits; thus a new type of security system was born: honeypots. Honeypots are defined as programs or decoy servers intended to attract attackers so that their behaviour can be studied. It is in this context that the leurre.com project, gathering about twenty platforms, was born. This article aims to specify a model of honeypot attacks. Our model describes, on a given platform, the evolution of attacks according to their time of occurrence. Afterwards, we identify the most attacked services through a study of attacks on the various ports. It is advisable to note that this article was elaborated within the framework of the research projects on honeypots within the LABTIC (Laboratory of Information Technologies and Communication).
A Metric-Set and Model Suggestion for Better Software Project Cost Estimation
Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, but its importance has been ignored, especially in neural network based studies. In this study we have explored the reasons for those disappointing results and implemented different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies using traditional metrics. To be able to make comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on a Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make a more thorough use of the samples collected, the k-fold cross validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
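The k-fold splitting used to stretch a small sample set can be sketched as follows; sample count and fold count are illustrative:

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation.
    Every sample lands in the test set of exactly one fold."""
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        # Early folds absorb the remainder so sizes differ by at most one.
        stop = start + fold_size + (1 if fold < remainder else 0)
        test = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, test
        start = stop

splits = list(k_fold_splits(10, 3))  # each model is trained k times
```

Averaging the estimation error over the k held-out folds gives a more reliable accuracy figure than a single train/test split when, as here, data collection is expensive.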
Lithofacies Classification from Well Log Data Using Neural Networks, Interval Neutrosophic Sets and Quantification of Uncertainty
This paper proposes a novel approach to lithofacies classification based on an assessment of the uncertainty in the classification results. The proposed approach uses multiple neural networks (NN) and interval neutrosophic sets (INS) to classify the input well log data into multiple classes of lithofacies. A pair of n-class neural networks is used to predict n degrees of truth membership and n degrees of false membership. Indeterminacy memberships, or uncertainties in the predictions, are estimated using a multidimensional interpolation method. These three memberships form the INS used to support confidence in the results of multiclass classification. Based on the experimental data, our approach improves the classification performance compared to an existing technique applied only to the truth membership. In addition, our approach can provide a measure of uncertainty in the problem of multiclass classification.
TRS: System for Recommending Semantic Web Service Composition Approaches
A large number of semantic web service composition approaches have been developed by the research community, and each is more efficient than the others depending on the particular situation of use. A close look at the requirements of one's particular situation is therefore necessary to find a suitable approach. In this paper, we present a Technique Recommendation System (TRS) which, using a classification of state-of-the-art semantic web service composition approaches, can provide the user with recommendations on which composition approach to use, based on parameters describing the situation of use. TRS has a modular architecture and uses production rules for knowledge representation.
Modeling of Dielectric Heating in Radio-Frequency Applicator Optimized for Uniform Temperature by Means of Genetic Algorithms
The paper presents an optimization study based on genetic algorithms (GAs) for a radio-frequency applicator used in heating dielectric band products. The weakly coupled electro-thermal problem is analyzed using 2D-FEM. The design variables in the optimization process are the voltage of a supplementary "guard" electrode and six geometric parameters of the applicator. Two objective functions are used: temperature uniformity and the total active power absorbed by the dielectric. Both mono-objective and multi-objective formulations are implemented in the GA optimization.
MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database
For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been done in the past decade. Beyond these major considerations, there are still other design issues that deserve attention due to their direct impact on the I/O cost; in particular, an efficient buffer management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a network of roads. The proposed strategy, namely MONPAR, is based on the data type (i.e. spatiotemporal data) and the structure of the index. For the purpose of an experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We repeated the simulations with query sets of different distributions, such as a uniform query distribution and a skewed query distribution. Based on the comparison of our strategy with well-known page-replacement techniques, like LRU-based and priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium size buffers under all tested query distributions.
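For context, the LRU baseline that MONPAR is compared against can be sketched as a small page buffer that counts disk accesses, the same metric the simulation environment measures. The page identifiers below are illustrative:

```python
from collections import OrderedDict

class LRUBuffer:
    """Minimal LRU page buffer: on a miss the least recently used page is
    evicted; disk accesses (misses) are counted as the cost metric."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # insertion order tracks recency
        self.disk_accesses = 0

    def access(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)    # hit: mark most recently used
        else:
            self.disk_accesses += 1            # miss: fetch page from disk
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False) # evict least recently used
            self.pages[page_id] = True

buf = LRUBuffer(capacity=2)
for page in [1, 2, 1, 3, 2]:
    buf.access(page)
```

A specialized policy like MONPAR can beat LRU by keeping index pages whose spatiotemporal region is likely to be hit by upcoming range queries, rather than relying on recency alone.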
Coalescing Data Marts
OLAP uses multidimensional structures to provide access to data for analysis. Traditionally, OLAP operations are focused on retrieving data from a single data mart. An exception is the drill-across operator. This, however, is restricted to retrieving facts on the common dimensions of multiple data marts. Our concern is to define further operations for retrieving data from multiple data marts. Towards this, we have defined six operations which coalesce data marts. While doing so, we consider the common as well as the non-common dimensions of the data marts.
Shot Transition Detection with Minimal Decoding of MPEG Video Streams
Digital libraries are becoming more and more necessary in order to support users with powerful and easy-to-use tools for searching, browsing and retrieving media information. The starting point for these tasks is the segmentation of video content into shots. To segment MPEG video streams into shots, a fully automatic procedure to detect both abrupt and gradual transitions (dissolves and fade groups) with minimal decoding in real time is developed in this study. Each is explored through two phases: macro-block type analysis in B-frames, and on-demand intensity information analysis. The experimental results show remarkable performance in detecting gradual transitions for some kinds of input data and comparable results for the rest of the examined video streams. Almost all abrupt transitions could be detected with very few false positives.