Classification of Defects by the SVM Method and the Principal Component Analysis (PCA)
Analyses carried out on examples of detected defect
echoes showed clearly that these detected forms can be described by a set of characteristic parameters, making it possible to discriminate between a planar defect and a volumic defect.
This work addresses a problem in ultrasonic NDT: identification of defects. The problem, and the objective of
this work, are divided into three parts: extraction of wavelet parameters from the ultrasonic echo of the detected defect; principal component analysis
(PCA) for optimization of the attribute vector; and finally an SVM (Support Vector Machine) classification algorithm that discriminates between a planar defect and a
volumic defect. We conclude with a summary of the completed work and an assessment of the robustness of the
various algorithms proposed in this study.
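As an illustration only, the PCA-plus-SVM pipeline described above can be sketched in Python. The wavelet attribute vectors here are synthetic stand-ins, and the SVM is a plain linear one trained with a Pegasos-style subgradient method, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical wavelet attribute vectors for the two defect classes.
planar = rng.normal(-1.0, 0.5, size=(50, 16))
volumic = rng.normal(+1.0, 0.5, size=(50, 16))
X = np.vstack([planar, volumic])
y = np.array([-1] * 50 + [+1] * 50)

# PCA via SVD: keep the leading components (attribute-vector optimization).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:4].T                       # reduced 4-D attribute vectors

# Linear SVM trained with a Pegasos-style stochastic subgradient method.
w, lam = np.zeros(4), 0.01
for t in range(1, 2001):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)
    if y[i] * (Z[i] @ w) < 1.0:         # margin violated: hinge-loss step
        w = (1.0 - eta * lam) * w + eta * y[i] * Z[i]
    else:
        w = (1.0 - eta * lam) * w

accuracy = (np.sign(Z @ w) == y).mean()
print(accuracy)
```

On well-separated synthetic classes such as these, the reduced attribute vector retains the discriminative direction and the linear SVM separates the two populations.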
Simple and Advanced Models for Calculating Single-Phase Diode Rectifier Line-Side Harmonics
This paper proposes different methods for estimation
of the harmonic currents of the single-phase diode bridge rectifier. Both simple and advanced methods are compared and the models are
put into a context of practical use for calculating the harmonic distortion in a typical application. Finally, the different models are
compared to measurements of a real application and convincing results are achieved.
WiPoD Wireless Positioning System based on 802.11 WLAN Infrastructure
This paper describes WiPoD (Wireless Position
Detector) which is a pure software based location determination and
tracking (positioning) system. It uses empirical signal strength measurements from different wireless access points for mobile user
positioning. It is designed to determine the location of users having
802.11 enabled mobile devices in an 802.11 WLAN infrastructure
and track them in real time. WiPoD is the first main module in our
LBS (Location Based Services) framework. We tested K-Nearest
Neighbor and Triangulation algorithms to estimate the position of a
mobile user. We also give the analysis results of these algorithms for
real time operations. In this paper, we propose a supportable, i.e.
understandable, maintainable, scalable and portable wireless
positioning system architecture for an LBS framework. The WiPoD
software has a multithreaded structure and was designed and implemented using object-oriented design principles, with attention paid to supportability features and real-time constraints. We also describe the real-time software design issues of a wireless positioning system that will be part of an LBS framework.
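The K-Nearest Neighbor fingerprinting step can be sketched as follows. The log-distance signal model, the access-point layout, and the calibration grid are all hypothetical stand-ins, not data from WiPoD:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical radio map: RSS fingerprints (dBm) from 3 access points,
# collected at known calibration points on a 10 m x 10 m floor.
grid = np.array([(x, y) for x in range(0, 11, 2) for y in range(0, 11, 2)], float)
aps = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])

def rss(points):
    # Simple log-distance propagation model (an assumption for this sketch).
    d = np.linalg.norm(points[:, None, :] - aps[None, :, :], axis=2)
    return -40.0 - 20.0 * np.log10(d + 1.0)

fingerprints = rss(grid)

def knn_locate(sample, k=3):
    # Average the coordinates of the k nearest fingerprints in signal space.
    idx = np.argsort(np.linalg.norm(fingerprints - sample, axis=1))[:k]
    return grid[idx].mean(axis=0)

true_pos = np.array([[4.7, 6.2]])
est = knn_locate(rss(true_pos)[0])
print(est, np.linalg.norm(est - true_pos[0]))
```

With a 2 m calibration grid, averaging the k nearest fingerprints typically places the estimate within a grid cell or two of the true position.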
Blind Impulse Response Identification of Frequency Radio Channels: Application to Bran A Channel
This paper describes a blind algorithm for estimating a time-varying and frequency-selective fading channel. To identify the impulse response of these channels blindly, we use Higher Order Statistics (HOS) to build our algorithm. We select two theoretical frequency-selective channels, the Proakis 'B' channel and the Macchi channel, and one practical frequency-selective fading channel, the Broadband Radio Access Network (BRAN A) channel. Simulation results in a noisy environment and for different channel input data demonstrate that the proposed method can estimate the phase and magnitude of these channels blindly and without any information about the input, except that the input excitation is i.i.d. (independent and identically distributed) and non-Gaussian.
Quasi-Permutation Representations for the Group SL(2, q) when Extended by a Certain Group of Order Two
A square matrix over the complex field with non-negative integral trace is called a quasi-permutation matrix. For a finite group G, the minimal degrees of a faithful representation of G by quasi-permutation matrices over the rationals and over the complex numbers are denoted by q(G) and c(G) respectively. Finally, r(G) denotes the minimal degree of a faithful rational-valued complex character of G. The purpose of this paper is to calculate q(G), c(G) and r(G) for the group SL(2, q) when extended by a certain group of order two.
Approximate Solution of Nonlinear Fredholm Integral Equations of the First Kind via Converting to Optimization Problems
In this paper we introduce an approach via optimization methods to find approximate solutions for nonlinear Fredholm integral equations of the first kind. To
this purpose, we consider two stages of approximation.
First we convert the integral equation to a moment problem, and then we recast the new problem as two classes of optimization problems: unconstrained optimization problems
and optimal control problems. Finally, numerical examples are presented.
Data Annotation Models and Annotation Query Language
This paper presents data annotation models at
five levels of granularity (database, relation, column, tuple, and cell) of relational data to address the problem of unsuitability of most relational databases to express annotations. These models
do not require any structural and schematic changes to the
underlying database. These models are also flexible, extensible,
customizable, database-neutral, and platform-independent. This paper also presents an SQL-like query language, named Annotation Query Language (AnQL), to query annotation documents. AnQL is simple to understand and exploits the already-existent wide knowledge and skill set of SQL.
A Community Compromised Approach to Combinatorial Coalition Problem
Buyer coalition with a combination of items is a group of buyers joining together to purchase a combination of items with a larger discount. The primary aim of existing research on buyer coalitions with combinations of items is to generate a large total discount. However, this aim is hard to achieve because such research assumes that each buyer completely knows the other buyers' information, or that at least one buyer knows the other buyers' information in a coalition through exchange of information. These assumptions contrast with the real-world environment, where buyers join a coalition with incomplete information, i.e., they are concerned only with their expected discounts. Therefore, this paper proposes a new buyer community coalition formation with a combination of items scheme, called the Community Compromised Combinatorial Coalition scheme, for such an environment of incomplete information. In order to generate a larger total discount, after buyers who want to join a coalition propose their minimum required savings, a coalition structure that gives the maximum total retail price is formed. Then, the total discount of the coalition is divided among buyers in the coalition depending on their minimum required savings, and this division is Pareto optimal. In a mathematical analysis, we compare concepts of this scheme with concepts of the existing buyer coalition scheme. Our results show that the total discount of the coalition in this scheme is larger than in the existing buyer coalition scheme.
The Influence of Preprocessing Parameters on Text Categorization
Text categorization (the assignment of texts in natural language into predefined categories) is an important and extensively studied problem in Machine Learning. Currently, popular techniques developed to deal with this task include many preprocessing and learning algorithms, many of which in turn require tuning nontrivial internal parameters. Although partial studies are available, many authors fail to report values of the parameters they use in their experiments, or reasons why these values were used instead of others. The goal of this work then is to create a more thorough comparison of preprocessing parameters and their mutual influence, and report interesting observations and results.
Global Behavior in (Q-xy)2 Potential
The general global behavior of particles in a non-linear (Q - xy)2 potential cannot be revealed by the Poincare surface of section (PSS) method, because most trajectories take a practically infinite time to integrate numerically before they return to the surface. In this study, as an alternative to PSS, a multiple-scale perturbation is applied to analyze the global adiabatic, non-adiabatic and chaotic behavior of particles in this potential. We found that the results can be summarized in the form of a Fermi-like map. Additionally, this method gives the variation of global stochasticity criteria with Q.
Palmprint based Cancelable Biometric Authentication System
A cancelable palmprint authentication system
proposed in this paper is specifically designed to overcome the
limitations of the contemporary biometric authentication system. In
this proposed system, geometric and pseudo-Zernike moments are
employed as feature extractors to transform the palmprint image into a
lower-dimensional, compact feature representation. Before moment
computation, a wavelet transform is adopted to decompose the palmprint
image into lower-resolution frequency subbands, which
drastically reduces the computational load of moment calculation.
The generated wavelet-moment based feature representation is used,
together with a set of random data, to generate a cancelable
verification key. This private binary key can be canceled and
replaced. Besides that, this key also possesses high data capture
offset tolerance, with highly correlated bit strings for intra-class
population. This property allows a clear separation of the genuine
and imposter populations, as well as zero Equal Error Rate
achievement, which is hardly gained in the conventional biometric
based authentication system.
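The cancelable-key step can be illustrated with a BioHashing-style random-projection sketch. This is a generic stand-in for the class of technique, not the authors' exact scheme, and the feature vectors are synthetic:

```python
import numpy as np

def cancelable_key(features, user_token_seed, n_bits=64):
    """BioHashing-style sketch: project the feature vector onto random
    vectors generated from a user token, then binarize at zero.
    Revoking the key amounts to issuing a new token seed."""
    rng = np.random.default_rng(user_token_seed)
    R = rng.normal(size=(n_bits, features.size))
    return (R @ features > 0).astype(np.uint8)

rng = np.random.default_rng(9)
enrol = rng.normal(size=32)                    # hypothetical wavelet-moment features
probe = enrol + 0.1 * rng.normal(size=32)      # same palm, small capture offset
other = rng.normal(size=32)                    # imposter features

k1, k2, k3 = (cancelable_key(v, 1234) for v in (enrol, probe, other))
print((k1 != k2).mean(), (k1 != k3).mean())    # intra- vs inter-class Hamming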
Recursive Algorithms for Image Segmentation Based on a Discriminant Criterion
In this study, a new criterion is proposed for determining the number of classes into which an image should be segmented. This criterion is based on discriminant analysis, measuring the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms are proposed for recursively segmenting the image into the determined number of classes. The proposed methods can automatically and correctly segment objects under various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
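A classical instance of such a discriminant criterion is Otsu's between-class variance, which the following sketch uses to pick a single two-class threshold; the recursive multi-class extension of the paper is not reproduced here:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold maximizing between-class variance, a discriminant
    criterion measuring separability of the two pixel classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability mass
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8)
thr = otsu_threshold(img)
print(thr)
```

For the bimodal synthetic histogram above, the criterion places the threshold in the valley between the two modes.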
A Perceptual Image Coding Method with High Compression Rate
In the framework of the image compression by
Wavelet Transforms, we propose a perceptual method by
incorporating Human Visual System (HVS) characteristics in the
quantization stage. Indeed, human eyes do not have equal sensitivity
across the frequency bandwidth. Therefore, the clarity of the
reconstructed images can be improved by weighting the quantization
according to the Contrast Sensitivity Function (CSF). The visual
artifact at low bit rate is minimized. To evaluate our method, we use
the Peak Signal to Noise Ratio (PSNR) and a new evaluation criterion
which takes visual factors into account. The experimental results
illustrate that our technique shows improvement on image quality at
the same compression ratio.
Spread Spectrum Code Estimation by Genetic Algorithm
In the context of spectrum surveillance, a method to
recover the code of a spread spectrum signal is presented, where the
receiver has no knowledge of the transmitter's spreading sequence.
The approach is based on a genetic algorithm (GA), which is forced to
model the received signal. Genetic algorithms (GAs) are well known
for their robustness in solving complex optimization problems.
Experimental results show that the method provides a good
estimation, even when the signal power is below the noise power.
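As a toy illustration of the idea (not the authors' GA), the following sketch evolves candidate binary sequences so that the fittest one models the received signal. Note the noise here is milder than the below-noise-power case the paper reports:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 31
code = rng.choice([-1, 1], size=N)            # unknown spreading sequence
rx = code + rng.normal(0, 0.7, size=N)        # received chips in noise

def fitness(pop):
    # How well each candidate models the received signal (correlation).
    return pop @ rx

pop = rng.choice([-1, 1], size=(60, N))
for gen in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-30:]]                        # selection (elitist)
    cut = rng.integers(1, N, size=30)
    kids = np.array([np.concatenate([parents[rng.integers(30)][:c],
                                     parents[rng.integers(30)][c:]])
                     for c in cut])                           # one-point crossover
    flip = rng.random(kids.shape) < 0.02                      # mutation
    kids[flip] *= -1
    pop = np.vstack([parents, kids])

best = pop[np.argmax(fitness(pop))]
print(np.mean(best == code))
```

The GA is forced to model the received signal through the correlation fitness, so the best individual converges toward the sign pattern of the noisy chips and hence toward the hidden code.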
Intelligent Modeling of the Electrical Activity of the Human Heart
The aim of this contribution is to present a new
approach in modeling the electrical activity of the human heart. A
recurrent artificial neural network is being used in order to exhibit a
subset of the dynamics of the electrical behavior of the human heart.
The proposed model can also be used, when integrated, as a
diagnostic tool of the human heart system.
What makes this approach unique is the fact that every model is
being developed from physiological measurements of an individual.
This kind of approach is very difficult to apply successfully in many
modeling problems, because of the complexity and entropy of the
free variables describing the complex system. Differences between
the modeled variables and the variables of an individual, measured at
specific moments, can be used for diagnostic purposes. The sensor
fusion used in order to optimize the utilization of biomedical sensors
is another point that this paper focuses on. Sensor fusion has been
known for its advantages in applications such as control and
diagnostics of mechanical and chemical processes.
Time-Delay Estimation Using Cross-ΨB-Energy Operator
In this paper, a new time-delay estimation
technique based on the cross-ΨB-energy operator is
introduced. This quadratic energy detector measures how
much a signal is present in another one. The location of the
peak of the energy operator, corresponding to the maximum of
interaction between the two signals, is the estimate of the
delay. The method is a fully data-driven approach. The
discrete version of the continuous-time form of the cross-ΨB-energy
operator is presented for its implementation. The
effectiveness of the proposed method is demonstrated on real
underwater acoustic signals arriving from targets, and the
results are compared to the cross-correlation method.
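The cross-correlation baseline against which the results are compared can be sketched as follows, on a synthetic delayed pulse (the pulse shape, sample rate, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000.0                               # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
# Gaussian-windowed tone burst as a stand-in for a target echo.
pulse = np.exp(-((t - 0.3) ** 2) / (2 * 0.01 ** 2)) * np.sin(2 * np.pi * 80 * t)
delay_true = 57                           # samples
x = pulse + 0.1 * rng.normal(size=t.size)
y = np.roll(pulse, delay_true) + 0.1 * rng.normal(size=t.size)

# Estimated delay = lag of the cross-correlation peak.
corr = np.correlate(y, x, mode="full")
lag = np.argmax(corr) - (x.size - 1)
print(lag)
```

The peak of the correlation sequence marks the lag of maximum interaction between the two signals, the same quantity the cross-ΨB-energy operator locates through its energy peak.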
FIR Filter Design via Linear Complementarity Problem, Messy Genetic Algorithm, and Ising Messy Genetic Algorithm
In this paper the design of maximally flat linear phase
finite impulse response (FIR) filters is considered. The problem is
handled with totally two different approaches. The first one is
completely deterministic numerical approach where the problem is
formulated as a Linear Complementarity Problem (LCP). The other
one is based on combining the Markov Random Fields (MRFs)
approach with a messy genetic algorithm (MGA). MRFs
are a class of probabilistic models that have been
applied for many years to the analysis of visual patterns or textures.
Our objective is to establish MRFs as an interesting approach to
modeling messy genetic algorithms. We establish a theoretical result
that every genetic algorithm problem can be characterized in terms of
a MRF model. This allows us to construct an explicit probabilistic
model of the MGA fitness function and introduce the Ising MGA.
Experiments with the Ising MGA are less costly than those
with the standard MGA, since far fewer computations are involved;
the LCP requires the fewest computations of all. Results of the LCP,
random search, random seeded search, MGA, and Ising MGA are compared.
Complex-Valued Neural Networks for Blind Equalization of Time-Varying Channels
Most of the commonly used blind equalization algorithms are based on the minimization of a nonconvex and nonlinear cost function and a neural network gives smaller residual error as compared to a linear structure. The efficacy of complex valued feedforward neural networks for blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper we present two neural network models for blind equalization of time-varying channels, for M-ary QAM and PSK signals. The complex valued activation functions, suitable for these signal constellations in time-varying environment, are introduced and the learning algorithms based on the CMA cost function are derived. The improved performance of the proposed models is confirmed through computer simulations.
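The CMA cost function mentioned above leads to the standard constant-modulus stochastic-gradient update, sketched here for a time-invariant toy channel with a plain linear (not neural) equalizer and 4-QAM symbols:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
s = rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)     # 4-QAM symbols
h = np.array([1.0, 0.4, 0.2])                                # assumed channel
noise = 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))
x = np.convolve(s, h)[:n] + noise

L, mu = 11, 1e-3
w = np.zeros(L, dtype=complex)
w[L // 2] = 1.0                          # centre-spike initialization
R2 = np.mean(np.abs(s) ** 4) / np.mean(np.abs(s) ** 2)       # CMA dispersion constant
y = np.zeros(n, dtype=complex)
for k in range(L - 1, n):
    u = x[k - L + 1:k + 1][::-1]         # regressor, newest sample first
    y[k] = w @ u
    w -= mu * (np.abs(y[k]) ** 2 - R2) * y[k] * np.conj(u)   # CMA gradient step

# Constant-modulus dispersion before and after equalization.
disp_raw = np.mean((np.abs(x[-1000:]) ** 2 - R2) ** 2)
disp_eq = np.mean((np.abs(y[-1000:]) ** 2 - R2) ** 2)
print(disp_raw, disp_eq)
```

Minimizing the dispersion around the constant modulus R2 is exactly the CMA criterion; the neural equalizers in the paper replace the linear combiner with complex-valued feedforward networks while keeping this cost.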
Modeling and Simulation of Gas Turbine Cooled Blades
In contrast to existing methods, which do not take into account multiconnectivity in the broad sense of the term, we develop mathematical models and highly effective combined (BIEM and FDM) numerical methods for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling (from the point of view of implementation on a PC). The theoretical substantiation of these methods is proved by appropriate theorems. To this end, converging quadrature processes have been developed and error estimates obtained in terms of A. Zygmund continuity moduli. For visualization of profiles, the least-squares method with automatic conjecture, spline devices, smooth replenishment and neural nets are used. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is proved by computational and experimental investigations of the heat and hydraulic characteristics of the first-stage nozzle blade of a gas turbine.
Home Network-Specific RBAC Model
As various mobile sensing technologies, remote
control and ubiquitous infrastructure are developing and expectations
on quality of life are increasing, much research and development
on home network technologies and services is actively ongoing.
Until now, the focus has been on how to provide users with high-level
home network services, while little research on home network
security for guaranteeing safety has progressed. In this paper, we
propose an access control model specific to home networks that
provides various kinds of users with home network services according
to their characteristics and features, and protects home network systems from
illegal/unnecessary accesses or intrusions.
Automatic Vehicle Identification by Plate Recognition
Automatic Vehicle Identification (AVI) has many
applications in traffic systems (highway electronic toll collection, red
light violation enforcement, border and customs checkpoints, etc.).
License Plate Recognition is an effective form of AVI systems. In
this study, a smart and simple algorithm is presented for a vehicle
license plate recognition system. The proposed algorithm consists of
three major parts: Extraction of plate region, segmentation of
characters and recognition of plate characters. For extracting the
plate region, edge detection algorithms and smearing algorithms are
used. In segmentation part, smearing algorithms, filtering and some
morphological algorithms are used. Finally, statistics-based
template matching is used for recognition of the plate characters. The
performance of the proposed algorithm has been tested on real
images. Based on the experimental results, we note that our
algorithm shows superior performance in car license plate recognition.
Definition and Implementation of a Simulation Model for the Physical Layer and the Radio Channel in Dedicated Short Range Communication Systems
This paper proposes a vehicle-to-vehicle propagation
model implemented with SDL. To estimate the channel
characteristics for Inter-Vehicle communication, we first define a
predicted propagation pathloss between the moving vehicles under
three typical scenarios. A ray-tracing method is used to evaluate the
performance of the simple gamma model.
Non-Parametric Histogram-Based Thresholding Methods for Weld Defect Detection in Radiography
In non-destructive testing by radiography, a perfect
knowledge of the weld defect shape is an essential step to
appreciate the quality of the weld and make decision on its
acceptability or rejection. Because of the complex nature of the
considered images, and in order that the detected defect region
represent the real defect as accurately as possible, the choice
of thresholding methods must be made judiciously. In this paper,
performance criteria are used to conduct a comparative study of
four non-parametric histogram thresholding methods for automatic
extraction of weld defects in radiographic images.
Fast Extraction of Edge Histogram in DCT Domain Based on MPEG-7
Nowadays, multimedia data is transmitted and
processed in compressed format. Due to the decoding procedure and
filtering for edge detection, the feature extraction process of MPEG-7
Edge Histogram Descriptor is time-consuming as well as
computationally expensive. To improve efficiency of compressed
image retrieval, we propose a new edge histogram generation
algorithm in DCT domain in this paper. Using the edge information
provided by only two AC coefficients of DCT coefficients, we can get
edge directions and strengths directly in DCT domain. The
experimental results demonstrate that our system has good
performance in terms of retrieval efficiency and effectiveness.
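A simplified illustration of reading edge orientation from the two lowest AC coefficients of an 8x8 DCT block follows. The full MPEG-7 Edge Histogram Descriptor uses five edge filter types, which this sketch does not reproduce; the block content is synthetic:

```python
import numpy as np

def dct2(block):
    """8x8 orthonormal 2-D DCT-II built from the 1-D basis matrix."""
    N = 8
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N)) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

# Hypothetical 8x8 block with a vertical edge (dark left, bright right).
block = np.zeros((8, 8))
block[:, 4:] = 255.0
F = dct2(block)

# The two lowest AC coefficients carry the dominant edge orientation:
# F[0, 1] responds to horizontal intensity change (a vertical edge),
# F[1, 0] to vertical intensity change (a horizontal edge).
strength = np.hypot(F[0, 1], F[1, 0])
theta = np.degrees(np.arctan2(abs(F[1, 0]), abs(F[0, 1])))
print(round(strength, 1), round(theta, 1))
```

For the vertical edge above, virtually all the low-frequency energy sits in F[0, 1], so the estimated orientation is 0 degrees, obtained without decoding the block back to the pixel domain.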
EEG Spikes Detection, Sorting, and Localization
This study introduces a new method for detecting,
sorting, and localizing spikes from multiunit EEG recordings. The
method combines the wavelet transform, which localizes distinctive
spike features, with Super-Paramagnetic Clustering (SPC) algorithm,
which allows automatic classification of the data without assumptions
such as low variance or Gaussian distributions. Moreover, the method
is capable of setting amplitude thresholds for spike detection. The
method is applied to several real EEG data sets, in which the
spikes are detected, clustered, and their occurrence times determined.
Dynamic Clustering using Particle Swarm Optimization with Application in Unsupervised Image Classification
A new dynamic clustering approach (DCPSO), based
on Particle Swarm Optimization, is proposed. This approach is
applied to unsupervised image classification. The proposed approach
automatically determines the "optimum" number of clusters and
simultaneously clusters the data set with minimal user interference.
The algorithm starts by partitioning the data set into a relatively large
number of clusters to reduce the effects of initial conditions. Using
binary particle swarm optimization the "best" number of clusters is
selected. The centers of the chosen clusters are then refined via the
K-means clustering algorithm. The experiments conducted show that
the proposed approach generally found the "optimum" number of
clusters on the tested images.
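The K-means refinement stage can be sketched as follows; the farthest-point seeding is an assumption added to keep this toy example deterministic and stands in for the centres that the PSO stage would select:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain K-means, the refinement applied to the selected cluster centres."""
    # Farthest-point seeding (deterministic stand-in for PSO-chosen centres).
    centres = [X[0]]
    for _ in range(k - 1):
        d = np.min(((X[:, None] - np.array(centres)[None]) ** 2).sum(-1), axis=1)
        centres.append(X[np.argmax(d)])
    centres = np.array(centres)
    for _ in range(iters):
        # Assign each point to its nearest centre, then recompute the centres.
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres, labels

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ((0, 0), (3, 0), (0, 3))])
centres, labels = kmeans(X, 3)
print(np.sort(centres[:, 0]))
```

On three well-separated synthetic blobs, the refined centres settle on the blob means, which is the role K-means plays after the binary PSO has fixed the number of clusters.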
Contour Estimation in Synthetic and Real Weld Defect Images based on Maximum Likelihood
This paper describes a novel method for automatic
estimation of the contours of weld defect in radiography images.
Generally, the contour detection is the first operation which we apply
in the visual recognition system. Our approach can be described as a
region based maximum likelihood formulation of parametric
deformable contours. This formulation provides robustness against
the poor image quality, and allows simultaneous estimation of the
contour parameters together with other parameters of the model.
Implementation is performed by a deterministic iterative algorithm
with minimal user intervention. Results testify to the very good
performance of the approach, especially on synthetic weld defect images.
Improving Image Quality in Remote Sensing Satellites using Channel Coding
Satellite communication channels are characterized by, among
other factors, a high bit error rate. We present a system for
still image transmission over noisy satellite channels. The system
couples image compression together with error control codes to
improve the received image quality while maintaining its bandwidth
requirements. The proposed system is tested using a high resolution
satellite imagery simulated over the Rician fading channel. Evaluation
results show improvement in the overall system, including image quality
and bandwidth requirements, compared to similar systems with different coding schemes.
2D Gabor Functions and FCMI Algorithm for Flaws Detection in Ultrasonic Images
In this paper we present a new approach for detecting
flaws in T.O.F.D. (Time Of Flight Diffraction) ultrasonic images
based on texture features. Texture is one of the most important
features used in recognizing patterns in an image. The paper
describes texture features based on 2D Gabor functions, i.e.,
Gaussian shaped band-pass filters, with dyadic treatment of the radial
spatial frequency range and multiple orientations, which represent an
appropriate choice for tasks requiring simultaneous measurement in
both space and frequency domains. The most relevant features are
used as input to a fuzzy c-means clustering classifier. Only two
classes exist: 'defects' or 'no defects'. The proposed
approach is tested on the T.O.F.D image achieved at the laboratory
and on the industrial field.
Linux Cluster Possibilities in 3-D Photo-Quality Imaging and Animation
In this paper we present the PC cluster built at R.V.
College of Engineering (with great help from the Department of
Computer Science and Electrical Engineering). The structure of the
cluster is described and the performance is evaluated by rendering of
complex 3D Persistence of Vision (POV) images by the Ray-Tracing
algorithm. Here, we propose a novel method to render such
images in a distributed manner on a low-cost, scalable cluster.
A Dynamic Time-Lagged Correlation based Method to Learn Multi-Time Delay Gene Networks
A gene network gives the knowledge of the regulatory
relationships among the genes. Each gene has its activators and
inhibitors that regulate its expression positively and negatively
respectively. Genes themselves are believed to act as activators and
inhibitors of other genes. They can even activate one set of genes and
inhibit another set. Identifying gene networks is one of the most
crucial and challenging problems in Bioinformatics. Most work done
so far either assumes that there is no time delay in gene regulation or
there is a constant time delay. We here propose a Dynamic Time-
Lagged Correlation Based Method (DTCBM) to learn the gene
networks, which uses time-lagged correlation to find the potential
gene interactions, then uses a post-processing stage to remove
false gene interactions due to common parents, and finally uses dynamic
correlation thresholds for each gene to construct the gene network.
DTCBM finds correlation between gene expression signals shifted in
time, and therefore takes into consideration the multi time delay
relationships among the genes. The implementation of our method is
done in MATLAB and experimental results on Saccharomyces
cerevisiae gene expression data and comparison with other methods
indicate that it has a better performance.
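The core time-lagged correlation computation can be sketched as follows, on a synthetic regulator/target pair with a known 3-step delay (the expression profiles are illustrative, not yeast data):

```python
import numpy as np

def lagged_corr(a, b, max_lag):
    """Pearson correlation of b against a shifted by each candidate lag;
    returns (best_lag, best_corr)."""
    best = (0, 0.0)
    for lag in range(0, max_lag + 1):
        x, y = a[:len(a) - lag], b[lag:]
        r = np.corrcoef(x, y)[0, 1]
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best

rng = np.random.default_rng(7)
t = np.arange(60)
regulator = np.sin(t / 4.0) + 0.05 * rng.normal(size=60)
target = np.roll(regulator, 3)          # activated 3 time points later
target[:3] = 0.0                        # unknown initial expression
lag, r = lagged_corr(regulator, target, max_lag=6)
print(lag, round(r, 2))
```

Scanning correlations over shifted copies of the signals is what lets the method pick up a different delay for each regulator-target pair instead of assuming zero or constant delay.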
Trispectral Analysis of Voiced Sounds in Cases of Defective Audition and Tracheotomy
This paper presents the cepstral and trispectral
analysis of speech signals produced by normal men, men with
defective audition (deaf, profoundly deaf) and others affected by
tracheotomy; the trispectral analysis is based on parametric methods
(autoregressive, AR) using the fourth-order cumulant. These
analyses are used to detect and compare the pitches and the formants
of the corresponding voiced sounds (vowels \a\, \i\ and \u\). The first
results appear promising: it seems, after several experiments,
that there is no deformation of the spectrum as one could have supposed
at the beginning; however, these pathologies influence the two parameters differently.
Defective audition influences the formants, whereas the
tracheotomy influences the fundamental frequency (pitch).
2D Rigid Registration of MR Scans using 1D Binary Projections
This paper presents the application of a signal
intensity independent registration criterion for 2D rigid body
registration of medical images using 1D binary projections. The
criterion is defined as the weighted ratio of two projections. The ratio
is computed on a pixel per pixel basis and weighting is performed by
setting the ratios between one and zero pixels to a standard high
value. The mean squared value of the weighted ratio is computed
over the union of the one areas of the two projections and it is
minimized using the Chebyshev polynomial approximation using
n=5 points. The sum of the x and y projections is used for translational
adjustment and a 45 deg projection for rotational adjustment. Twenty T1-T2
registration experiments were performed and gave mean errors of
1.19 deg and 1.78 pixels. The method is suitable for contour/surface
matching. Further research is necessary to determine the robustness
of the method with regards to threshold, shape and missing data.
Advanced Image Analysis Tools Development for the Early Stage Bronchial Cancer Detection
Autofluorescence (AF) bronchoscopy is an
established method to detect dysplasia and carcinoma in situ (CIS).
For this reason the “Sotiria" Hospital uses the Karl Storz D-light
system. However, in early tumor stages the visualization is not that
obvious. With the help of a PC, we analyzed the color images we
captured by developing certain tools in Matlab®. We used statistical
methods based on texture analysis, signal processing methods based
on Gabor models, and conversion algorithms between device-dependent
color spaces. Our belief is that we reduced the error made
by the naked eye. The tools we implemented improve the quality of the visualization.
A Comparison of Adaline and MLP Neural Network based Predictors in SIR Estimation in Mobile DS/CDMA Systems
In this paper we compare the response of linear and
nonlinear neural network-based prediction schemes in prediction of
received Signal-to-Interference Power Ratio (SIR) in Direct
Sequence Code Division Multiple Access (DS/CDMA) systems. The
nonlinear predictor is Multilayer Perceptron MLP and the linear
predictor is an Adaptive Linear (Adaline) predictor. We solve the
problem of complexity by using the Minimum Mean Squared Error
(MMSE) principle to select the optimal predictors. The optimized
Adaline predictor is compared to optimized MLP by employing
noisy Rayleigh fading signals with a 1.8 GHz carrier frequency in an
urban environment. The results show that the Adaline predictor can
estimate SIR with the same error as the MLP when the user moves at
5 km/h or 60 km/h, but as the velocity increases up to
120 km/h, the mean squared error of the MLP becomes twice that of the
Adaline predictor. This makes the Adaline predictor (with lower
complexity) more suitable than MLP for closed-loop power control
where efficient and accurate identification of the time-varying
inverse dynamics of the multi path fading channel is required.
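A minimal Adaline-style one-step predictor, here trained with normalized LMS on a synthetic SIR trace; the MMSE-based predictor selection and the Rayleigh-fading measurement data of the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4000
t = np.arange(n)
# Hypothetical received-SIR trace (dB): slow fading plus measurement noise.
sir = 10 + 3 * np.sin(2 * np.pi * t / 200) + 0.3 * rng.normal(size=n)

p, mu = 5, 0.5
w = np.zeros(p)
err = np.zeros(n)
for k in range(p, n):
    u = sir[k - p:k][::-1]                  # last p samples, newest first
    yhat = w @ u                            # Adaline: one-step linear prediction
    err[k] = sir[k] - yhat
    w += mu * err[k] * u / (u @ u + 1e-9)   # normalized LMS weight update

mse = np.mean(err[-500:] ** 2)
print(mse)
```

The adaptive linear combiner tracks the slowly varying trace with a steady-state prediction error near the measurement-noise floor, which is why its low complexity makes it attractive for closed-loop power control.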
Network Anomaly Detection using Soft Computing
One main drawback of intrusion detection systems is their
inability to detect new attacks that do not have known
signatures. In this paper we discuss an intrusion detection method
that proposes independent component analysis (ICA) based feature-selection
heuristics and uses rough-fuzzy clustering of the data. ICA
separates the independent components (ICs) from the monitored
variables; rough sets decrease the amount of data and remove
redundancy; and fuzzy methods allow objects to belong to several
clusters simultaneously, with different degrees of membership. Our
approach allows us not only to recognize known attacks but also to
detect activity that may be the result of a new, unknown attack.
Experimental results are reported on the Knowledge Discovery and Data Mining
(KDD Cup 1999) dataset.
Implementing a High Performance VPN Router using Cavium's CN2560 Security Processor
The IPsec protocol is a set of security extensions
developed by the IETF; it provides privacy and authentication
services at the IP layer using modern cryptography. In this paper,
we describe both the hardware and software architectures of our router system,
SRS-10. The system is designed to support high-performance routing
and IPsec VPN. In particular, we used Cavium's CN2560 processor to
implement IPsec processing in inline mode.
Comparative Analysis of Mobility Support in Mobile IP and SIP
With the rapidly growing use of portable devices,
mobility in IP networks has become an increasingly important issue in
recent years. The IETF standardized Mobile IP, which works at the
network layer and involves tunneling of IP packets from the Home Agent (HA)
to the Foreign Agent. Mobile IP suffers from many problems:
triangular routing, conflicts with private addressing schemes,
increased load on the HA, the need for a permanent home IP address,
the tunneling itself, and so on. In this paper, we propose
mobility management in the application layer protocol SIP and
present a comparative analysis of Mobile IP and SIP in the context of
mobility support.
Genetic-Fuzzy Inverse Controller for a Robot Arm Suitable for On Line Applications
A robot is a plant that performs repeated tasks. The control of such
a plant under parameter variations and load disturbances is an
important problem. The aim of this work is to design a genetic-fuzzy
controller suitable for online applications to control a single-link
rigid robot arm. The genetic-fuzzy online controller (an indirect
controller) has two genetic-fuzzy blocks, the first acting as a
controller and the second as an identifier. The identification method
is based on an inverse identification technique. The proposed
controller is tested in normal and load-disturbance conditions.
Robot Task-Level Programming Language and Simulation
This paper presents the development of a software
application for off-line robot task programming and simulation. The
application is designed to assist in robot task planning and to direct
manipulator motion based on sensor-based programmed motion. The
concept of the programming application is to use the power of a
knowledge base for task accumulation. In support of the programming
means, an interactive graphical simulation of manipulator kinematics
was also developed and integrated into the application as a complement
to the robot programming media. The simulation provides the designer
with useful, inexpensive, off-line tools for designing and testing
robotic work cells and automated assembly lines for various industrial
applications.
Cooperative Multi Agent Soccer Robot Team
This paper introduces our first efforts in developing a
new team for the RoboCup Middle Size Competition. In our robots we
have applied an omnidirectional mobile base with an omnidirectional
vision system and a fuzzy control algorithm to navigate the robots.
The control architecture of the MRL middle-size robots is a
three-layered architecture: planning, sequencing, and executing. It
also uses a blackboard system to achieve coordination among agents.
Moreover, the architecture should have minimum dependency on the
low-level structure and a uniform protocol to interact with the real world.
Fuzzy Error Recovery in Feedback Control for Three Wheel Omnidirectional Soccer Robot
This paper describes an intelligent control method for autonomous systems, called fuzzy control, used to correct the movement of a three-wheel omnidirectional robot when it fails to catch its target. Fuzzy logic is especially advantageous for problems that cannot easily be represented by mathematical modeling because data is unavailable or incomplete, or the process is too complex. Such systems can easily be upgraded by adding new rules to improve performance or add new features. In many cases, fuzzy control can be used to improve existing traditional controller systems by adding an extra layer of intelligence to the current control method. The fuzzy controller designed here is more accurate and flexible than the traditional controllers. The project was carried out with the MRL middle-size soccer robot team.
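The kind of fuzzy movement correction described above can be sketched with triangular membership functions and a zero-order Sugeno rule base. The rule set, universe bounds, and output values below are illustrative assumptions, not the team's actual controller:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer_correction(error):
    """Zero-order Sugeno inference: weighted average of crisp rule outputs.
    `error` is the heading error toward the target, in degrees."""
    rules = [  # (rule firing strength, crisp output in degrees)
        (tri(error, -90, -45, 0), -30.0),  # error negative-large -> turn left
        (tri(error, -45, 0, 45), 0.0),     # error near zero -> no correction
        (tri(error, 0, 45, 90), 30.0),     # error positive-large -> turn right
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Adding a new behaviour amounts to appending a rule to the list, which is the upgradability property the abstract emphasizes.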
Visual Object Tracking and Interception in Industrial Settings
This paper presents a solution for a robotic
manipulation problem. We formulate the problem as a combination of
target identification, tracking, and interception. The task in our
solution is to sense a target on a conveyor belt and then intercept it
with the robot's end-effector at a convenient rendezvous point. We use
an object recognition method that identifies the target and finds its
position from the visualized scene; the robot system then generates a
solution for the rendezvous problem using the target's initial
position and the belt velocity. The interception of the target by the
end-effector is executed at a convenient rendezvous point along the
target's calculated trajectory. Experimental results are obtained
using a real platform with an industrial robot and a vision system over it.
A Computational Model of Minimal Consciousness Functions
Interest in human consciousness has been revived in the late 20th century across different scientific disciplines. Consciousness studies involve both its understanding and its application. In this paper, a computational model of the minimum consciousness functions necessary, in the author's view, for Artificial Intelligence applications is presented, with the aim of improving the way computations will be made in the future. In Section I, human consciousness is briefly described within the scope of this paper. In Section II, a minimum set of consciousness functions to be modelled is defined, based on the literature reviewed, and a computational model of these functions is then presented in Section III. In Section IV, an analysis of the model is carried out to describe its functioning in detail.
Extended Deductive Databases with Uncertain Information
The paper presents an approach for handling uncertain
information in deductive databases using multivalued logics.
Uncertainty means that database facts may be assigned logical values
other than the conventional ones, true and false. The logical values
represent various degrees of truth, which may be combined and
propagated by applying the database rules. A corresponding multivalued
database semantics is defined. We show that it extends successful
conventional semantics such as the well-founded semantics, and can be
computed in polynomial time.
A Logic Approach to Database Dynamic Updating
We introduce a logic-based framework for database
updating under constraints. In our framework, the constraints are
represented as an instantiated extended logic program. When performing
an update, database consistency may be violated. We provide an
approach to maintaining database consistency, and study the conditions
under which the maintenance process is deterministic. We show that the
complexity of the computations and decision problems presented in our
framework is in each case polynomial time.
Extended Well-Founded Semantics in Bilattices
One of the most used assumptions in logic programming
and deductive databases is the so-called Closed World Assumption
(CWA), according to which the atoms that cannot be inferred
from the programs are considered to be false (i.e. a pessimistic
assumption). One of the most successful semantics of conventional
logic programs based on the CWA is the well-founded semantics.
However, the CWA is not applicable in all circumstances when
information is handled. That is, the well-founded semantics, if
conventionally defined, would behave inadequately in different cases.
The solution we adopt in this paper is to extend the well-founded
semantics in order for it to be based also on other assumptions. The
basis of (default) negative information in the well-founded semantics
is given by the so-called unfounded sets. We extend this concept
by considering optimistic, pessimistic, skeptical and paraconsistent
assumptions, used to complete missing information from a program.
Our semantics, called extended well-founded semantics, also expresses
imperfect information, considered to be missing/incomplete, uncertain
and/or inconsistent, by using bilattices as multivalued logics. We
provide a method of computing the extended well-founded semantics and
show that the Kripke-Kleene semantics is captured by considering a
skeptical assumption. We also show that the complexity of the
computation of our semantics is polynomial time.
Exchanges of Knowledge about Product Configurations using XML Topic Map
Modeling product configurations requires large amounts of knowledge about technical and marketing restrictions on the product. Previous attempts to automate product configuration concentrated on representation and management of the knowledge for specific domains in fixed and isolated computing environments. Since the knowledge about product configurations is subject to continuous change and hard to express, these attempts often failed to efficiently manage and exchange the knowledge in collaborative product development. In this paper, XML Topic Map (XTM) is introduced to represent and exchange the knowledge about product configurations in collaborative product development. A product configuration model based on XTM, along with its merging and inference facilities, enables configuration engineers in collaborative product development to manage and exchange their knowledge efficiently. A prototype implementation is also presented to demonstrate that the proposed model can be applied to engineering information systems to exchange product configuration knowledge.
A Computer Aided Model for Supporting Design Education
Educating effective architect designers is an important
goal of architectural education. But what contributes to students'
performance, and to critical and creative thinking, in architectural
design education? Besides teaching architecture students how to
understand logical arguments, eliminate inadequate solutions, and
focus on the correct ones, it is also crucial to teach students how to
explore ideas and alternative solutions and to seek other right
answers rather than just one. This paper focuses on enhancing
architectural design education and may provide implications for
enhancing the teaching of design.
The Performance of the Character-Access on the Checking Phase in String Searching Algorithms
A new algorithm called Character-Comparison to
Character-Access (CCCA) is developed to test the effect of both 1)
converting character-comparison and number-comparison into
character-access and 2) the starting point of checking on the
performance of the checking operation in string searching. An
experiment is performed and the results are compared with five
algorithms, namely Naive, BM, Inf_Suf_Pref, Raita, and Circle. With
the CCCA algorithm, the results suggest that the average number of
comparisons is improved by up to 74.0%. Furthermore, the results
suggest that the clock time required by the other algorithms is
improved by between 28% and 68% with the new CCCA algorithm.
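The CCCA algorithm itself is not reproduced in the abstract, but its evaluation criterion, counting character comparisons during the checking phase, can be illustrated with a naive baseline searcher instrumented to report its comparison count:

```python
def naive_search(text, pattern):
    """Naive string search instrumented for evaluation: returns the index
    of the first match (or -1) together with the number of character
    comparisons performed during checking."""
    comparisons = 0
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m:
            comparisons += 1                 # count every character check
            if text[i + j] != pattern[j]:
                break                        # mismatch: shift window by one
            j += 1
        if j == m:
            return i, comparisons            # full match found
    return -1, comparisons
```

The smarter algorithms compared in the paper (BM, Raita, etc.) reduce this comparison count by skipping windows; instrumenting each one the same way yields the average-comparison metric the abstract reports.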
Core Issues Affecting Software Architecture in Enterprise Projects
In this paper we analyze the core issues affecting
software architecture in enterprise projects, where a large number of
people from different backgrounds are involved and complex business,
management, and technical problems exist. We first give general
features of typical enterprise projects and then present the
foundations of software architectures. A detailed analysis of the core
issues affecting software architecture in each software development
phase is given. We focus on three main areas in each development
phase: people, process, and management related issues; structural
(product) issues; and technology related issues. After pointing out
the core issues and problems in these main areas, we give
recommendations for designing good architecture. We have observed
these core issues, and the importance of following best software
development practices, in many large enterprise commercial and
military projects over about 10 years of experience, during which we
also developed some novel practices.
University of Jordan Case Tool (UJ-CASE-TOOL) for Database Reverse Engineering
Database reverse engineering problems and their
solution processes are maturing; even so, the academic community faces
the complex problem of knowledge transfer, both in university and
industrial contexts. This paper presents a new CASE tool developed at
the University of Jordan, UJ-CASE-TOOL, which provides efficient
support for this transfer. It is a small and self-contained
application exhibiting representative problems and appropriate
solutions that can be understood in a limited time. We present the
algorithm underlying this academic CASE tool, which has been used for
several years both as an illustration of the principles of database
reverse engineering and as an exercise aimed at academic and
industrial students.
The Challenge of Large-Scale IT Projects
The trend in the world of Information Technology
(IT) is toward increasingly large and difficult projects rather than
smaller and easier ones. However, the data on large-scale IT project
success rates provide cause for concern. This paper seeks to answer
why large-scale IT projects are different from, and more difficult
than, other typical engineering projects. Drawing on industrial
experience, a compilation of the conditions that influence failure is
presented. With a view to improving success rates, solutions are proposed.
A Modified Maximum Urgency First Scheduling Algorithm for Real-Time Tasks
This paper presents a modified version of the
maximum urgency first scheduling algorithm. The maximum urgency first
algorithm combines the advantages of fixed and dynamic scheduling to
provide dynamically changing systems with flexible scheduling. This
algorithm, however, has a major shortcoming: its scheduling mechanism
may cause a critical task to fail. The modified maximum urgency first
scheduling algorithm resolves this problem. In this paper, we propose
two possible implementations of this algorithm, using either the
earliest deadline first or the modified least laxity first algorithm
to calculate the dynamic priorities. The two approaches are compared
through simulation, and the earliest deadline first implementation is
recommended as the preferred one. We then compare our proposed
algorithm with the maximum urgency first algorithm through simulation
and present the results. It is shown that modified maximum urgency
first is superior to maximum urgency first, since it usually incurs
fewer task preemptions and hence less related overhead. It also leads
to fewer failed non-critical tasks in overloaded situations.
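The two proposed implementations differ only in the dynamic-priority key: absolute deadline (EDF) or laxity (modified least laxity first). A minimal sketch of such a two-level selection, critical tasks first, with the dynamic key as tie-breaker within a criticality band, might look like the following; the field names and exact tie-breaking policy are illustrative, not the paper's definition:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool    # statically assigned criticality class
    deadline: float   # absolute deadline (EDF key)
    laxity: float     # deadline minus remaining execution time (MLLF key)

def pick_next(ready, dynamic="edf"):
    """Pick the next task to run: critical tasks always outrank
    non-critical ones; within a criticality band the chosen dynamic
    policy (EDF or least laxity) decides."""
    key = (lambda t: t.deadline) if dynamic == "edf" else (lambda t: t.laxity)
    # Tuples sort False (critical) before True (non-critical).
    return min(ready, key=lambda t: (not t.critical, key(t)))
```

With this structure, a critical task can never be starved by a non-critical task with an urgent deadline, which is the failure mode the modification addresses.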
Development of a Wiki-based Feature Library for a Process Planning System
A manufacturing feature can be defined simply as a
geometric shape together with the manufacturing information needed to
create it. In a feature-based process planning system, the feature
library plays an important role in the extraction of manufacturing
features with their proper manufacturing information. To manage the
manufacturing information flexibly, however, it is important to build
a feature library that is easy to modify. In this paper, a Wiki-based
feature library is proposed.
Development of a Software about Calculating the Production Parameters in Knitted Garment Plants
Apparel product development is an important stage in the life cycle of a product, and shortening this stage helps reduce the cost of a garment. The aim of this study is to examine the production parameters in knitwear apparel companies by defining unit costs, and to develop software that calculates the unit costs of garments and makes cost estimates. With the help of a questionnaire, different companies' systems of unit cost estimating and cost calculating were analyzed. Within the scope of the questionnaire, the importance of the cost estimating process for apparel companies and the expectations from a new cost estimating program were investigated. According to the results, the majority of participating companies use manual cost calculating methods or simple Microsoft Excel spreadsheets to make cost estimates. Furthermore, many companies have difficulty archiving cost data for future use; as a solution, prior to making a cost estimate, the sub-units of garment cost, namely fabric, accessory, and labor costs, should be analyzed and added to the program's database beforehand. Another feature of the cost estimating tool prepared in this study is that the program consists of two main units, one producing the product specification and the other performing the cost calculation. The program is prepared as a web-based application so that the supplier, the manufacturer, and the customer can communicate through the same platform.
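The cost model described above sums pre-analyzed sub-units of garment cost. A minimal sketch of that calculation follows; the optional overhead markup parameter is a hypothetical extension, not a feature stated in the abstract:

```python
def unit_cost(fabric, accessories, labor, overhead_rate=0.0):
    """Unit garment cost as the sum of fabric, accessory, and labor
    sub-unit costs, optionally marked up by an overhead rate
    (the markup is an illustrative assumption)."""
    base = sum(fabric) + sum(accessories) + sum(labor)
    return round(base * (1.0 + overhead_rate), 2)
```

In the two-unit design the abstract describes, the product-specification unit would supply these sub-unit lists from the database, and the cost-calculation unit would apply a function of this shape.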
Iterative Way to Acquire Information Technology for Defense and Aerospace
The Defense and Aerospace environment continuously
strives to keep up with increasingly sophisticated Information
Technology (IT) in order to remain effective in today's dynamic and
unpredictable threat environment. This makes IT one of the largest and
fastest growing expenses of Defense. Hundreds of millions of dollars
are spent each year on IT projects, but too many of those millions are
wasted on costly mistakes: systems that do not work properly, new
components that are not compatible with old ones, trendy new
applications that do not really satisfy defense needs, or money lost
through poorly managed contracts.
This paper investigates and compiles effective strategies that aim to
end the exasperation with the low returns and high cost of Information
Technology acquisition for defense; it tries to show how to maximize
value while reducing time and expenditure.
Mathematical Model for the Transmission of P. falciparum and P. vivax Malaria along the Thai-Myanmar Border
Most malaria cases occur along the Thai-Myanmar border. A mathematical model for the transmission of Plasmodium falciparum and Plasmodium vivax malaria in a mixed population of Thais and migrant Burmese living along the Thai-Myanmar border is studied. The population is separated into two groups, Thai and Burmese. Each population is divided into susceptible, infected, dormant, and recovered subclasses. The loss of immunity by individuals in the infected class causes them to move back into the susceptible class. A person infected with Plasmodium vivax who is in the dormant class can relapse into the infected class. A standard dynamical method is used to analyze the behavior of the model. Two stable equilibrium states, a disease-free state and an epidemic state, are found to be possible in each population. A disease-free equilibrium state in the Thai population occurs when no infected Burmese enter the community. When infected Burmese enter the Thai community, an epidemic state can occur. It is found that the disease-free state is stable when the threshold number is less than one. The epidemic state is stable when a second threshold number is greater than one. Numerical simulations are used to confirm the results of our model.
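The threshold behaviour described above is the standard epidemic-threshold result. As a much-simplified illustration (a single population with no dormant class, unlike the paper's model), a forward-Euler integration of a normalized SIR model shows the disease dying out when the reproduction number beta/gamma is below one and an epidemic occurring when it is above one:

```python
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=20000):
    """Forward-Euler integration of a normalized SIR model.
    The basic reproduction number is R0 = beta / gamma: the infected
    fraction grows only while beta * s > gamma."""
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r
```

With beta/gamma < 1 the infected fraction decays toward the disease-free state; with beta/gamma > 1 most of the population passes through the infected class, mirroring the two stable equilibria the abstract reports.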
Design of Moving Sliding Surfaces in A Variable Structure Plant and Chattering Phenomena
This paper deals with the design of a moving sliding
surface in a variable structure plant for a second-order system. The
chattering phenomenon is also dealt with during the switching process
for an unstable sliding surface condition. The simulation examples
considered in this paper show the effectiveness of the sliding mode
control method used for the design of the moving sliding surfaces. A
Simulink model of the continuous system was also developed in
MATLAB-SIMULINK for the design and demonstration. The phase portraits
and the state plots demonstrate the power of this control technique,
which can be applied to second-order systems.
The Effects of Speed on the Performance of Routing Protocols in Mobile Ad-hoc Networks
A mobile ad hoc network is a collection of mobile
nodes communicating through wireless channels without any
existing network infrastructure or centralized administration.
Because of the limited transmission range of wireless network
interfaces, multiple "hops" may be needed to exchange data
across the network. Consequently, many routing algorithms
have come into existence to satisfy the needs of
communications in such networks. Researchers have
conducted many simulations comparing the performance of
these routing protocols under various conditions and
constraints. One question that arises is whether the speed of
nodes affects the relative performance of the routing protocols
being studied. This paper addresses the question by simulating
two routing protocols, AODV and DSDV. The protocols were
simulated using ns-2 and were compared in terms of packet
delivery fraction, normalized routing load, and average delay,
while varying the number of nodes and their speed.
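The three evaluation metrics are standard in ad hoc routing studies and can be computed directly from simulation trace counts. The sketch below assumes packet counts and per-packet delays have already been extracted from the ns-2 trace file:

```python
def routing_metrics(sent, received, routing_packets, delays):
    """Standard ad hoc routing metrics:
    - packet delivery fraction: data packets received / data packets sent
    - normalized routing load: routing packets per delivered data packet
    - average end-to-end delay of the delivered packets (seconds)"""
    pdf = received / sent if sent else 0.0
    nrl = routing_packets / received if received else float("inf")
    avg_delay = sum(delays) / len(delays) if delays else 0.0
    return pdf, nrl, avg_delay
```

Comparing AODV and DSDV then reduces to evaluating these three numbers over trace files generated at each node count and speed setting.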
The Performance of Genetic Algorithm for Synchronized Chaotic Chen System in CDMA Satellite Channel
Synchronization is a difficult problem in CDMA
satellite communications. Due to the influence of additive noise and
fading in the mobile channel, it is not easy to keep up with the
attenuation and offset. This paper considers a recently proposed
approach to the problem of synchronizing the chaotic Chen system in
CDMA satellite communication in the presence of constant attenuation
and offset. An analytic algorithm that provides closed-form channel
and carrier offset estimates is presented. The principle of this
approach is to add a compensation block before the receiver to
compensate for the distortion of the imperfect channel using a genetic
algorithm. The results presented show that the receiver is able to
rapidly recover synchronization with the transmitter.
Optimization of Transmitter Aperture by Genetic Algorithm in Optical Satellite
To establish optical communication between any two
satellites, the transmitter satellite must track the beacon of the
receiver satellite and point the information optical beam in its
direction. Optical tracking and pointing systems for free space suffer
during tracking from high-amplitude vibration because of background
radiation from interstellar objects such as the Sun, Moon, Earth, and
stars in the tracking field of view, or because of mechanical impact
from satellite internal and external sources. The vibrations of beam
pointing increase the bit error rate and jam communication between the
two satellites. One way to overcome this problem is the use of very
small transmitter beam divergence angles. The drawback of too narrow a
divergence angle is that the transmitter beam may sometimes miss the
receiver satellite due to pointing vibrations. In this paper we
propose the use of a genetic algorithm to optimize the BER as a
function of the transmitter optics aperture.
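The genetic algorithm used for the aperture search is not specified in detail, so the following is a generic real-coded GA sketch. The objective `f` stands in for a BER model evaluated at a candidate aperture; the operators, population size, and mutation scale are illustrative choices, not the paper's parameters:

```python
import random

def genetic_minimize(f, lo, hi, pop_size=40, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitist best-so-far tracking."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(gens):
        def tournament():
            return min(rng.sample(pop, 3), key=f)
        children = []
        for _ in range(pop_size):
            a, b = tournament(), tournament()
            child = a + rng.random() * (b - a)         # blend crossover
            child += rng.gauss(0.0, 0.02 * (hi - lo))  # Gaussian mutation
            children.append(min(max(child, lo), hi))   # clip to the bounds
        pop = children
        best = min(pop + [best], key=f)                # keep the best ever seen
    return best
```

For the paper's problem, `f` would map an aperture value to the BER predicted under the vibration model, and the returned value is the aperture minimizing it over the search interval.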
High Order Cascade Multibit ΣΔ Modulator for Wide Bandwidth Applications
A wideband 2-1-1 cascaded ΣΔ modulator with a
single-bit quantizer in the first two stages and a 4-bit quantizer in
the final stage is developed. To reduce sensitivity to
digital-to-analog converter (DAC) nonlinearities in the feedback of
the last stage, dynamic element matching (DEM) is introduced. This
paper presents two modelling approaches: the first is a MATLAB
description and the second is a VHDL-AMS model of the proposed
architecture; high-level simulation results allowing a behavioural
study are given. Details of both ideal and non-ideal behaviour
modelling are presented. The effect of building-block nonidealities is
then studied, especially the influences of nonlinearity, finite
operational amplifier gain, amplifier slew-rate limitation, and
capacitor mismatch. A VHDL-AMS description presents a good solution to
predict the system's performance and can provide sensitivity curves
giving the impact of nonidealities on the system performance.
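The cascade's building blocks are sigma-delta loops. As a much-simplified behavioural illustration (a first-order single-bit loop, not the paper's 2-1-1 multibit cascade), the average of the output bitstream tracks a DC input, which is the property the high-level models verify before nonidealities are added:

```python
def sigma_delta_1bit(samples):
    """First-order single-bit sigma-delta loop: the integrator accumulates
    the error between the input and the fed-back quantizer output, and
    the 1-bit quantizer produces the output bitstream."""
    integ, y, out = 0.0, 0.0, []
    for x in samples:
        integ += x - y                      # integrate input minus feedback
        y = 1.0 if integ >= 0.0 else -1.0   # 1-bit quantizer
        out.append(y)
    return out
```

Behavioural models like the MATLAB and VHDL-AMS descriptions in the paper start from this ideal loop and then inject finite gain, slew-rate, and mismatch effects into the integrator update.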
Design Optimization Methodology of CMOS Active Mixers for Multi-Standard Receivers
A design flow for multi-standard down-conversion
CMOS mixers for three modern standards, Global System for Mobile
communications, Digital Enhanced Cordless Telephone, and Universal
Mobile Telecommunication Systems, is presented. Three active mixer
structures are studied. The first is based on the Gilbert cell, which
gives a tolerable noise figure and linearity with a low conversion
gain. The second and third structures use the current bleeding and
charge injection techniques in order to increase the conversion gain.
An improvement of about 2 dB in conversion gain is achieved without
considerable degradation of the other characteristics. The models used
for noise figure, conversion gain, and IIP3 are studied. This study
describes the nature of the trade-offs inherent in such structures and
gives insights that help in identifying which structure is better for
given conditions.
Application of Wavelet Neural Networks in Optimization of Skeletal Buildings under Frequency Constraints
The main goal of the present work is to decrease the
computational burden of optimum design of steel frames with frequency
constraints using a new type of neural network called the Wavelet
Neural Network (WNN). A suitable neural network is trained to
approximate frequencies in place of the analysis program. The
combination of wavelet theory and neural networks (NN) has led to the
development of wavelet neural networks. Wavelet neural networks are
feed-forward networks using wavelets as activation functions. Wavelets
are mathematical functions with suitable inner parameters, which help
them approximate arbitrary functions. The WNN was used to predict the
frequencies of the structures; a RAtional function with Second-order
Poles (RASP) wavelet was used as the transfer function. It is shown
that the convergence speed was faster than for other neural networks.
Comparisons of the WNN with the embedded Artificial Neural Network
(ANN), with approximate techniques, and with analytical solutions
available in the literature are also given.
Neural Network Tuned Fuzzy Controller for MIMO System
In this paper, a neural network tuned fuzzy controller
is proposed for controlling Multi-Input Multi-Output (MIMO) systems.
For convenience of analysis, the structure of the MIMO fuzzy
controller is first divided into single-input single-output (SISO)
controllers, one for each degree of freedom. Secondly, according to
the characteristics of the system's dynamic coupling, an appropriate
coupling fuzzy controller is incorporated to improve the performance.
A simulation analysis on a two-level mass-spring MIMO vibration system
is carried out, and the results show the effectiveness of the proposed
fuzzy controller. Although the performance is improved, the
computational time and memory used are comparatively high, because the
controller has four fuzzy reasoning blocks and their number may
increase for other MIMO systems. A fuzzy neural network is therefore
designed from a set of input-output training data to reduce the
computing burden during implementation. This control strategy can not
only simplify the implementation of fuzzy control, but also reduce
computational time and memory consumption.
Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain
Real-world Speaker Identification (SI) applications
differ from ideal or laboratory conditions: perturbations lead to a
mismatch between the training and testing environments and degrade
performance drastically. Many strategies have been adopted to cope
with acoustical degradation; the wavelet-based Bayesian marginal model
is one of them. But Bayesian marginal models cannot model the
inter-scale statistical dependencies of different wavelet scales.
Simple nonlinear estimators for wavelet-based denoising assume that
the wavelet coefficients in different scales are independent, yet
wavelet coefficients have significant inter-scale dependency. This
paper exploits this inter-scale dependency through a Circularly
Symmetric Probability Density Function (CS-PDF) related to the family
of Spherically Invariant Random Processes (SIRPs) in the Log Gabor
Wavelet (LGW) domain, and the corresponding joint shrinkage estimator
is derived by Maximum a Posteriori (MAP) estimation. A framework based
on these is proposed to denoise speech signals for automatic speaker
identification problems. The robustness of the proposed framework is
tested for text-independent speaker identification on 100 speakers of
the POLYCOST and 100 speakers of the YOHO speech databases in three
different noise environments. Experimental results show that the
proposed estimator yields a higher improvement in identification
accuracy than other estimators on the popular Gaussian Mixture Model
(GMM) based speaker model with Mel-Frequency Cepstral Coefficient
(MFCC) features.
Power-Efficient AND-EXOR-INV Based Realization of Achilles' heel Logic Functions
This paper deals with a power-conscious AND-EXOR-Inverter type logic implementation for a complex class of Boolean functions, namely Achilles' heel functions. Different variants of the above function class have been considered, viz. positive, negative, and pure Horn, for analysis and simulation purposes. The proposed realization is compared with the decomposed implementation corresponding to an existing standard AND-EXOR logic minimizer; both result in Boolean networks with good testability attributes. It may be noted that an AND-OR-EXOR type logic network does not exist for the positive phase of this unique class of logic functions. Experimental results report significant savings in all the power consumption components for designs based on standard cells pertaining to a 130 nm UMC CMOS process. The simulations have been extended to validate the savings across all three library corners (typical, best, and worst case specifications).
Class Outliers Mining: Distance-Based Approach
In large datasets, identifying exceptional or rare cases
with respect to a group of similar cases is considered a very
significant problem. The traditional problem (outlier mining) is to
find exceptional or rare cases in a dataset irrespective of the class
labels of these cases; they are considered rare events with respect to
the whole dataset. In this research, we pose the problem of Class
Outlier Mining and a method to find those outliers. The general
definition of this problem is "given a set of observations with class
labels, find those that arouse suspicions, taking into account the
class labels". We introduce a novel definition of outlier, the Class
Outlier, and propose the Class Outlier Factor (COF), which measures
the degree to which a data object is a Class Outlier. Our work
includes a new algorithm for mining Class Outliers, experimental
results on real-world datasets from various domains, and a comparison
study with other related methods.
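The paper's COF combines several factors (the probability of the object's class among its neighbours, deviation, and kNN distance). As a simplified distance-based illustration of the idea, the score below is just the fraction of a point's k nearest neighbours that carry a different class label; the 1-D points, function name, and scoring rule are illustrative, not the paper's COF:

```python
def class_outlier_score(points, labels, idx, k=3):
    """Simplified class-outlier score for 1-D data: the fraction of the
    k nearest neighbours of points[idx] whose class label differs from
    its own. High scores flag points that look like their neighbours
    but carry a different label."""
    dists = sorted(
        (abs(points[j] - points[idx]), labels[j])
        for j in range(len(points)) if j != idx
    )
    neighbours = dists[:k]
    return sum(1 for _, lab in neighbours if lab != labels[idx]) / k
```

A point embedded in a cluster of another class scores near 1 and is a class-outlier candidate, while a point surrounded by its own class scores near 0.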
Fuzzy Join Dependency in Fuzzy Relational Databases
The join dependency provides the basis for obtaining
lossless join decomposition in a classical relational schema. The
existence of a join dependency shows that the tables always represent
the correct data after being joined. Since classical relational
databases cannot handle imprecise data, they were extended to fuzzy
relational databases so that uncertain, ambiguous, imprecise, and
partially known information can also be stored in databases in a
formal way. However, like classical databases, fuzzy relational
databases also undergo decomposition during normalization, and the
issue of joining the decomposed fuzzy relations remains open. Our
effort in the present paper is to address this issue. We define fuzzy
join dependency in the framework of type-1 and type-2 fuzzy relational
databases using the concept of fuzzy equality, which is defined using
fuzzy functions. We use the fuzzy equi-join operator for computing the
fuzzy equality of two attribute values. We also discuss the dependency
preservation property on execution of this fuzzy equi-join and derive
the necessary condition for fuzzy functional dependencies to be
preserved on joining the decomposed fuzzy relations. We also derive
the conditions for fuzzy join dependency to exist in the context of
both type-1 and type-2 fuzzy relational databases. We find that,
unlike in classical relational databases, even the existence of a
trivial join dependency does not ensure lossless join decomposition in
type-2 fuzzy relational databases. Finally, we derive the conditions
for the fuzzy equality to be nonzero and for the qualification of an
attribute as a fuzzy key.
A PSO-based SSSC Controller for Improvement of Transient Stability Performance
The application of a Static Synchronous Series Compensator (SSSC) controller to improve the transient stability performance of a power system is thoroughly investigated in this paper. The design problem of the SSSC controller is formulated as an optimization problem, and the Particle Swarm Optimization (PSO) technique is employed to search for the optimal controller parameters. By minimizing a time-domain objective function, in which the deviation in the oscillatory rotor angle of the generator is involved, the transient stability performance of the system is improved. The proposed controller is tested on a weakly connected power system subjected to different severe disturbances. The non-linear simulation results are presented to show the effectiveness of the proposed controller and its ability to provide efficient damping of low-frequency oscillations. It is also observed that the proposed SSSC controller greatly improves the voltage profile of the system under severe disturbances.
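A minimal PSO sketch, with a toy quadratic standing in for the paper's time-domain rotor-angle objective; the inertia and acceleration coefficients shown are common textbook defaults, not the paper's tuned values.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize objective over a box; returns (best position, best value)."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in objective with its minimum at (1, -2).
best, best_val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                     dim=2, bounds=(-5, 5))
```

In the paper's setting, `objective` would run a non-linear time-domain simulation and return the integrated rotor-angle deviation for a candidate set of controller parameters.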
Anticipating Action Decisions of Automated Guided Vehicle in an Autonomous Decentralized Flexible Manufacturing System
Nowadays the market for industrial companies is becoming more and more globalized and highly competitive, forcing them to shorten the duration of the manufacturing system development time in order to reduce the time to market. In order to achieve this target, the hierarchical systems used in previous manufacturing systems are not enough because they cannot deal effectively with unexpected situations. To achieve flexibility in manufacturing systems, the concept of an Autonomous Decentralized Flexible Manufacturing System (AD-FMS) is useful. In this paper, we introduce a hypothetical reasoning based algorithm called the Algorithm for Future Anticipative Reasoning (AFAR) which is able to decide on a conceivable next action of an Automated Guided Vehicle (AGV) that works autonomously in the AD-FMS.
A Cascaded Fuzzy Inference System for Dynamic Online Portals Customization
In our modern world, more and more physical transactions are being substituted by electronic transactions (e.g. banking, shopping, and payments), and many businesses and companies perform most of their operations through the internet. Instead of physical commerce, visitors are now adapting to electronic commerce (e-Commerce). The ability of web users to reach products worldwide can be greatly enhanced by creating friendly and personalized online business portals. Internet visitors will return to a particular website when they can easily find the information they need or want. Dealing with this human conceptualization motivates the incorporation of Artificial/Computational Intelligence techniques in the creation of customized portals. Among these techniques, Fuzzy-Set technologies can make many useful contributions to the development of such a human-centered endeavor as e-Commerce. The main objective of this paper is the implementation of a paradigm for the intelligent design and operation of human-computer interfaces. In particular, the paradigm is well suited to the intelligent design and operation of software modules that display information (such as Web pages, graphical user interfaces (GUIs), and multimedia modules) on a computer screen. The human conceptualization of the user's personal information is analyzed through a Cascaded Fuzzy Inference (decision-making) System to generate the User Ascribe Qualities, which identify the user and can be used to customize portals with appropriate Web links.
Information System for Data Selection and New Information Acquisition for Reconfigurable Multifunctional Machine Tools
The purpose of the paper is to develop an information-control environment for the overall management and self-reconfiguration of a reconfigurable multifunctional machine tool for machining both rotational and prismatic parts with a high concentration of different technological operations: turning, milling, drilling, grinding, etc. For the realization of this purpose, on the basis of the sub-processes defined for the implementation of the technological process, an architecture of the information-search system for machine control is suggested. Using the object-oriented method, a structure and organization of the search system based on agents and a manager with central control are developed. Thus, conditions are created for the identification of available information in databases, the self-reconfiguration of the technological system, and the entire control of the reconfigurable multifunctional machine tool.
Specifying Strict Serializability of Iterated Transactions in Propositional Temporal Logic
We present an operator for a propositional linear temporal logic over infinite schedules of iterated transactions which, when applied to a formula, asserts that any schedule satisfying the formula is serializable. The resulting logic is suitable for specifying and verifying consistency properties of concurrent transaction management systems that can be defined in terms of serializability, as well as other general safety and liveness properties. A strict form of serializability is used, requiring that whenever the read and write steps of one transaction occurrence precede those of another transaction occurrence in a schedule, the first transaction must precede the second in an equivalent serial schedule. This work improves on previous work by providing a propositional temporal logic with a serializability operator that is of the same PSPACE-complete computational complexity as standard propositional linear temporal logic without a serializability operator.
Learning FCM by Tabu Search
A Fuzzy Cognitive Map (FCM) is a causal graph that shows the relations between the essential components of a complex system. Experts who are familiar with the system components and their relations can generate a corresponding FCM. A significant gap arises, however, when human experts cannot produce an FCM, or when no expert is available to produce one. Therefore, a new mechanism must be used to bridge this gap. In this paper, a novel learning method is proposed to construct the causal graph from historical data using a metaheuristic, namely Tabu Search (TS). The efficiency of the proposed method is shown by comparing its results on several numerical examples with those of other methods.
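The core Tabu Search loop can be sketched on a toy binary problem. In the paper's setting the objective would measure the fit between FCM-simulated and historical concept values; here a hypothetical Hamming-distance target stands in for it.

```python
import random

def tabu_search(objective, n_bits, iters=200, tenure=5, seed=0):
    """Minimize objective over binary vectors using a bit-flip neighbourhood."""
    rnd = random.Random(seed)
    x = [rnd.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], objective(x)
    tabu = {}  # bit index -> iteration until which flipping it is forbidden
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            val = objective(y)
            # aspiration criterion: a tabu move is allowed if it beats the best
            if tabu.get(i, -1) < it or val < best_val:
                candidates.append((val, i, y))
        if not candidates:
            continue
        val, i, y = min(candidates)   # best admissible neighbour (may be worse)
        x = y
        tabu[i] = it + tenure
        if val < best_val:
            best, best_val = x[:], val
    return best, best_val

# Toy target: recover a hidden pattern by minimizing Hamming distance to it.
target = [1, 0, 1, 1, 0, 0, 1, 0]
sol, val = tabu_search(lambda v: sum(a != b for a, b in zip(v, target)), n_bits=8)
```

Accepting the best admissible neighbour even when it worsens the objective, while the tabu list blocks immediate reversals, is what lets TS escape the local optima that trap pure hill climbing.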
A Content Based Image Watermarking Scheme Resilient to Geometric Attacks
Multimedia security is an incredibly significant area of concern. This paper discusses a robust image watermarking scheme that can withstand geometric attacks. The source image is first moment normalized in order to make it withstand geometric attacks. The moment-normalized image is then wavelet transformed. The first-level wavelet-transformed image is segmented into blocks of size 8x8, and the product of the mean and standard deviation of each block is computed. The second-level wavelet-transformed image is likewise divided into 8x8 blocks, and the product of block mean and standard deviation is computed. The difference between the products at the two levels forms the watermark. The watermark is inserted by modulating the coefficients of the mid frequencies. The modulated image is inverse wavelet transformed and inverse moment normalized to generate the watermarked image, which is then ready for transmission. The proposed scheme can be used to validate identification cards and financial instruments. The performance of this scheme has been evaluated using a set of parameters, and experimental results show its effectiveness.
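The block statistic at the heart of the watermark (product of block mean and standard deviation, differenced across levels) can be sketched as follows. The sign-based binarisation and the random stand-ins for the wavelet subbands are assumptions for illustration; the paper embeds the raw differences by modulating mid-frequency coefficients.

```python
import random
import statistics

def block_products(img, bs=8):
    """Product of mean and (population) standard deviation for each bs x bs block."""
    prods = []
    for r in range(0, len(img) - bs + 1, bs):
        for c in range(0, len(img[0]) - bs + 1, bs):
            vals = [img[i][j] for i in range(r, r + bs) for j in range(c, c + bs)]
            prods.append(statistics.fmean(vals) * statistics.pstdev(vals))
    return prods

def watermark_bits(level1, level2, bs=8):
    """Sign of the difference of block products between two wavelet levels."""
    return [1 if p1 - p2 >= 0 else 0
            for p1, p2 in zip(block_products(level1, bs), block_products(level2, bs))]

# Random matrices standing in for the two wavelet-transformed levels.
rnd = random.Random(0)
lvl1 = [[rnd.random() for _ in range(16)] for _ in range(16)]
lvl2 = [[rnd.random() for _ in range(16)] for _ in range(16)]
bits = watermark_bits(lvl1, lvl2)
```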
Evaluating Sinusoidal Functions by a Low Complexity Cubic Spline Interpolator with Error Optimization
We present a novel scheme to evaluate sinusoidal functions with low complexity and high precision using cubic spline interpolation. To this end, two different approaches are proposed to find the interpolating polynomial of sin(x) within the range [-π, π]. The first deals with only a single data point, while the other uses two, to keep the realization cost as low as possible. An approximation error optimization technique for cubic spline interpolation is introduced next and is shown to increase the interpolator's accuracy without increasing the complexity of the associated hardware. The architectures for the proposed approaches are also developed; they exhibit flexibility of implementation with low power requirements.
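For reference, a plain natural cubic spline interpolator for sin(x) on [-π, π] can be sketched as below. The knot count is an arbitrary choice for illustration and this is the textbook construction, not the paper's one- or two-point low-cost scheme.

```python
import bisect
import math

def natural_cubic_spline(xs, ys):
    """Return an evaluator for the natural cubic spline through the knots (xs, ys)."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # Tridiagonal system for interior second derivatives (natural ends: M0 = Mn-1 = 0).
    a, b, c, d = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n):                      # Thomas algorithm, forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * n
    for i in range(n - 2, 0, -1):              # back substitution
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def s(x):
        i = min(max(bisect.bisect_right(xs, x) - 1, 0), n - 2)
        x0, x1, hi = xs[i], xs[i + 1], h[i]
        return (m[i] * (x1 - x) ** 3 / (6 * hi)
                + m[i + 1] * (x - x0) ** 3 / (6 * hi)
                + (ys[i] / hi - m[i] * hi / 6) * (x1 - x)
                + (ys[i + 1] / hi - m[i + 1] * hi / 6) * (x - x0))

    return s

# 13 uniform knots on [-pi, pi] (an arbitrary illustrative choice).
knots = [-math.pi + k * (2 * math.pi / 12) for k in range(13)]
sin_approx = natural_cubic_spline(knots, [math.sin(x) for x in knots])
```

Note that sin(x) has zero second derivative at ±π, so the natural boundary condition introduces no extra end error here.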
Extraction of Significant Phrases from Text
Prospective readers can quickly determine whether a document is relevant to their information needs if the significant phrases (or keyphrases) in the document are provided. Although keyphrases are useful, not many documents have keyphrases assigned to them, and manually assigning keyphrases to existing documents is costly. Therefore, there is a need for automatic keyphrase extraction. This paper introduces a new domain-independent keyphrase extraction algorithm. The algorithm approaches the problem of keyphrase extraction as a classification task, and uses a combination of statistical and computational linguistics techniques, a new set of attributes, and a new machine learning method to distinguish keyphrases from non-keyphrases. The experiments indicate that this algorithm performs better than other keyphrase extraction tools and that it significantly outperforms Microsoft Word 2000's AutoSummarize feature. The domain independence of this algorithm has also been confirmed in our experiments.
Formal Analysis of a Public-Key Algorithm
In this article, a formal specification and verification of the Rabin public-key scheme in a formal proof system is presented. The idea is to use the two views of cryptographic verification: the computational approach, relying on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. A major objective of this article is the presentation of the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Moreover, we explicate a computer-proven formalization of correctness as well as a computer verification of security properties using a straightforward computation model in Isabelle/HOL. The analysis uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness, as well as efficient computer proofs of security properties, is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. Consequently, we obtain reliable proofs with a minimal error rate that augment the used database, which provides a formal basis for further computer proof constructions in this area.
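The textbook Rabin scheme that the paper formalizes can be illustrated executably (toy key sizes only; this Python sketch is unrelated to the Isabelle/HOL development itself):

```python
# Toy key: primes p ≡ q ≡ 3 (mod 4); real keys use large primes.
p, q = 7, 11
n = p * q

def encrypt(m):
    """Rabin encryption: c = m^2 mod n."""
    return (m * m) % n

def decrypt(c):
    """Return the four square roots of c modulo n via the CRT
    (the exponent trick is valid because p ≡ q ≡ 3 mod 4)."""
    mp = pow(c, (p + 1) // 4, p)          # square root of c mod p
    mq = pow(c, (q + 1) // 4, q)          # square root of c mod q
    roots = set()
    for sp in (mp, p - mp):
        for sq in (mq, q - mq):
            # CRT combination using modular inverses (three-argument pow, Python 3.8+)
            roots.add((sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n)
    return roots

message = 20
roots = decrypt(encrypt(message))        # the true plaintext is among the four roots
```

Decryption yields four candidate plaintexts, which is exactly the ambiguity a correctness formalization has to account for: practical schemes add redundancy to the message so the right root can be recognized.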
A Technique for Improving the Performance of Median Smoothers at the Corners Characterized by Low Order Polynomials
Median filters with larger windows offer greater smoothing and are more robust than median filters with smaller windows. However, the larger median smoothers (median filters with larger windows) fail to track low-order polynomial trends in the signal. As a result, constant regions are produced at the signal corners, leading to the loss of fine details. In this paper, an algorithm that combines the ability of the 3-point median smoother to preserve low-order polynomial trends with the superior noise-filtering characteristics of the larger median smoother is introduced. The proposed algorithm (called the combiner algorithm in this paper) is evaluated on a test image corrupted with different types of noise, and the results obtained are included.
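One simple way to combine a small and a large median smoother is sketched below on a 1-D signal. The deviation-threshold rule used to switch between them is an assumption for illustration and may differ from the paper's combiner.

```python
import statistics

def median_smooth(x, w):
    """Running median with odd window w; edges use a clamped (shrinking) window."""
    k, n = w // 2, len(x)
    return [statistics.median(x[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def combiner(x, w_large=5, tol=2.0):
    """Assumed switching rule: keep the heavily smoothed value unless it departs
    from the trend-preserving 3-point median by more than tol, in which case
    trust the 3-point output."""
    s3 = median_smooth(x, 3)
    sl = median_smooth(x, w_large)
    return [l if abs(l - s) <= tol else s for s, l in zip(s3, sl)]

# A ramp (low-order polynomial trend) corrupted by one impulse.
signal = [0, 1, 2, 3, 40, 5, 6, 7, 8, 9]
cleaned = combiner(signal)
```

The impulse is removed by the wide median while the ramp corners, where a wide median alone would flatten the trend, fall back to the 3-point output.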
Water Security in Rural Areas through Solar Energy in Baja California Sur, Mexico
This study aims to assess the potential of solar energy technology for improving access to water, and hence the livelihood strategies of rural communities, in Baja California Sur, Mexico. It focuses on livestock ranches and photovoltaic (PV) water-pump technology as well as other water extraction methods. The methodologies used are the Sustainable Livelihoods and the Appropriate Technology approaches. A household survey was administered in June 2006 to 32 ranches in the municipality, of which 22 used PV pumps, and semi-structured interviews were conducted. Findings indicate that solar pumps have in fact helped people improve their quality of life by allowing them to pursue a different livelihood strategy, and that improved access to water (not necessarily more water, but less effort to extract and collect it) does not automatically imply overexploitation of the resource; consumption is based on basic needs as well as on storage and pumping capacity. Justification for such systems lies in the avoidance of logistical problems associated with fossil fuels: PV pumps proved to be most beneficial when substituting gasoline or diesel equipment, but of dubious advantage if intended to replace wind or gravity systems. The main obstacles to the dissemination of solar water-pumping technology are high investment and repair costs, so it is not suitable for all cases even when insolation rates and water availability are adequate. Where affordability is not an obstacle, it has become an important asset that contributes, by means of reduced expenses, less effort and saved time, to the improvement of livestock, the main livelihood provider for these ranches.
Economic Development, Environmental Conflicts and Citizen Participation in Latin America
Environmental conflicts produced by economic development and natural resource exploitation are discussed. The main causes of conflicts in developing countries were shown to arise from geographically external investments, the inefficiency of the Environmental Impact Assessment (EIA), and the lack of communication between governments and Non-Governmental Organizations (NGOs). Citizen participation can only intervene during the late stages of the EIA, which is considered one of the main shortcomings in satisfying the demands of local people.
A Method for 3D Mesh Adaptation in FEA
The use of mechanical simulation (in particular, finite element analysis) requires the management of assumptions in order to analyse a real, complex system. In finite element analysis (FEA), two modeling steps require assumptions to be able to carry out the computations and obtain results: the building of the physical model and the building of the simulation model. The simplification assumptions made on the analysed system in these two steps can generate two kinds of errors: physical modeling errors (mathematical model, domain simplifications, material properties, boundary conditions and loads) and mesh discretization errors. This paper proposes a mesh adaptation method based on an h-adaptive scheme in combination with an error estimator in order to choose the mesh of the simulation model, allowing us to control both the cost and the quality of the finite element analysis.
Optimizing Mobile Agents Migration Based on Decision Tree Learning
Mobile agents are a powerful approach to developing distributed systems, since they migrate to hosts on which they have the resources to execute individual tasks. In a dynamic environment like a peer-to-peer network, agents have to be generated frequently and dispatched to the network, so they consume a certain amount of bandwidth on each link they traverse. If too many agents migrate through one or several links at the same time, they introduce excessive transfer overhead on those links; eventually the links become congested and indirectly block the network traffic. There is therefore a need to develop routing algorithms that take traffic load into account. In this paper, we seek to create cooperation between a probabilistic routing scheme, guided by a quality measure of the network traffic situation, and the agent's decision making about migration to the next hop, based on decision tree learning algorithms.
A Novel Approach to EMABS and Comparison with ABS
In this paper, two different antilock braking systems are simulated and compared. One is the ordinary hydraulic antilock braking system, which we call ABS, and the other is the electromagnetic antilock braking system (EMABS), whose performance is based upon electromagnetic force. In this system there is no need for the servo-hydraulic booster used in the ABS system. In the EMABS, to generate the desired force we use a magnetic relay driven by an input voltage across an air gap (g). The generated force is amplified by the relay arm and applied to the brake shoes, thus generating the braking torque. The braking torque is proportional to the applied electrical voltage E; to adjust the braking torque it is only necessary to regulate the electrical voltage E, which is much faster and has a much smaller time constant T than the ABS system. The simulations of these two different antilock braking systems are done with MATLAB/SIMULINK software, and the superiority of the EMABS is shown.
Study on Diversified Developments Improving Environmental Values-In Case of University Campus -
This study aims to clarify which developments enable the improvement of the socio-cultural values of environments, and to obtain new knowledge on selecting development plans. CVM is adopted as the method of evaluation. As the case for this research, a university campus (hereafter CP) is selected on account of its various environments, institutions and many users. Investigations were conducted from four points of view: the total value and the utility value of the whole CP environment, and the values of each environment existing in the CP or of development plans assumed for the CP. Furthermore, respondents' attributes were also investigated. In consequence, the following was obtained. 1) Almost all of the total value of the CP is composed of the utility value of direct use. 2) The environment and the development plan whose values are the highest are identified. 3) Moreover, the development plan that improves environmental value the most is specified.
Estimation Method for the Construction of Hydrogen Society with Various Biomass Resources in Japan-Project of Cost Reductions in Biomass Transport and Feasibility for Hydrogen Station with Biomass-
It was determined that woody biomass and livestock excreta can be utilized as hydrogen resources, and hydrogen produced from such sources can be used to fill fuel cell vehicles (FCVs) at hydrogen stations. It was shown that the biomass transport costs for hydrogen production may be reduced compared with the costs for co-generation. In the Tokyo Metropolitan Area, there are only a few sites capable of producing hydrogen from woody biomass in amounts greater than 200 m3/h, the scale required for a hydrogen station to be operationally practical. However, in the case of livestock excreta, it was shown that 15% of the municipalities in this area are capable of securing sufficient biomass to be operationally practical for hydrogen production. The differences in the feasibility of practical operation depend on the type of biomass.
Investigation of Chaotic Behavior in DC-DC Converters
DC-DC converters are widely used in regulated switched-mode power supplies and in DC motor drive applications. There are several sources of unwanted nonlinearity in practical power converters. In addition, their operation is characterized by switching, which gives rise to a variety of nonlinear dynamics. DC-DC buck and boost converters controlled by pulse-width modulation (PWM) have been simulated, and the voltage waveforms and attractors obtained from the circuit simulation have been studied. With the onset of instability, the phenomena of subharmonic oscillations, quasi-periodicity, bifurcations, and chaos have been observed. This paper is mainly motivated by the potential contributions of chaos theory to the design, analysis and control of power converters in particular, and power electronics circuits in general.
Robust H∞ Filter Design for Uncertain Fuzzy Descriptor Systems: LMI-Based Design
This paper examines the problem of designing a robust H∞ filter for a class of uncertain fuzzy descriptor systems described by a Takagi-Sugeno (TS) fuzzy model. Based on a linear matrix inequality (LMI) approach, LMI-based sufficient conditions for the uncertain nonlinear descriptor systems to have an H∞ performance are derived. To alleviate the ill-conditioning resulting from the interaction of slow and fast dynamic modes, solutions to the problem are given in terms of linear matrix inequalities which are independent of the singular perturbation ε, when ε is sufficiently small. The proposed approach does not involve the separation of states into slow and fast ones and it can be applied not only to standard, but also to nonstandard uncertain nonlinear descriptor systems. A numerical example is provided to illustrate the design developed in this paper.
Evaluation of Torsional Efforts on Thermal Machines Shaft with Gas Turbine resulting of Automatic Reclosing
This paper analyses the torsional efforts in gas turbine-generator shafts caused by high-speed automatic reclosing of transmission lines. This issue is especially important for cases of three-phase short circuits and unsuccessful reclosure of lines in the vicinity of the thermal plant. The analysis was carried out for the TERMOPERNAMBUCO thermal plant, located in the Northeast region of Brazil. It is shown that the stress level caused by unsuccessful line reclosing can be several times higher than that caused by a terminal three-phase short circuit. Simulations were carried out with a detailed shaft torsional model provided by the machine manufacturer and with the Alternative Transient Program (ATP). Unsuccessful three-phase reclosing for selected lines in the area close to the plant indicated the most critical cases. Also, reclosing first at the terminal next to the gas turbine generator leads to the most critical condition. Considering that the values of transient torques are very sensitive to the instant of reclosing, simulations of unsuccessful reclosing with the ATP statistical switch were carried out to determine the most critical transient torques for each section of the generator-turbine shaft.
Spectral Entropy Employment in Speech Enhancement based on Wavelet Packet
In this work, we are interested in developing a speech denoising tool using the discrete wavelet packet transform (DWPT). This speech denoising tool will be employed in recognition, coding and synthesis applications. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the non-stationary noise level, we employ the spectral entropy. To evaluate our approach, the proposed technique is compared to classical denoising methods based on thresholding and on spectral subtraction. The experimental implementation uses speech signals corrupted by two sorts of noise, white and Volvo noise. The results obtained from listening tests show that our proposed technique is better than spectral subtraction, and the results obtained from SNR computation show the superiority of our technique when compared to the classical thresholding method using the modified hard thresholding function based on the u-law algorithm.
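Spectral entropy, used here as the noise-level indicator, can be computed per frame as the Shannon entropy of the normalised power spectrum. A minimal sketch (naive DFT for brevity; the frame contents are illustrative stand-ins, not the paper's test signals):

```python
import cmath
import math

def power_spectrum(frame):
    """Naive DFT power spectrum; an FFT would be used on real frames."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(n // 2 + 1)]

def spectral_entropy(frame):
    """Shannon entropy of the normalised power spectrum, scaled to [0, 1].
    Tonal (speech-like) frames score low; noise-like frames score high."""
    p = power_spectrum(frame)
    total = sum(p)
    if total == 0.0:
        return 1.0
    probs = [v / total for v in p if v > 0.0]
    return -sum(q * math.log(q) for q in probs) / math.log(len(p))

tone = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]   # single pitch
pseudo_noise = [math.sin(1234.567 * t * t) for t in range(64)]   # broadband stand-in
```

A pure tone concentrates its power in one bin and scores near 0, while a broadband frame spreads power across bins and scores close to 1; this contrast is what makes the measure usable as a frame-wise noise estimator.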
Application of Computational Intelligence for Sensor Fault Detection and Isolation
The new idea of this research is the application of a new fault detection and isolation (FDI) technique for the supervision of sensor networks in a transportation system. In measurement systems, it is necessary to detect all types of faults and failures based on a predefined algorithm. Recent advances in artificial neural networks (ANNs) have led to their use for FDI purposes. In this paper, the application of new probabilistic neural network features for data approximation and data classification is considered for plausibility checks in temperature measurement. For this purpose, a two-phase FDI mechanism was considered for residual generation and evaluation.
Solving the Teacher Assignment-Course Scheduling Problem by a Hybrid Algorithm
This paper presents a hybrid algorithm for solving a timetabling problem, which is commonly encountered in many universities. The problem combines both teacher assignment and course scheduling problems simultaneously, and is presented as a mathematical programming model. However, this problem becomes intractable and it is unlikely that a proven optimal solution can be obtained by an integer programming approach, especially for large problem instances. A hybrid algorithm that combines an integer programming approach, a greedy heuristic and a modified simulated annealing algorithm collaboratively is proposed to solve the problem. Several randomly generated data sets of sizes comparable to that of an institution in Indonesia are solved using the proposed algorithm. Computational results indicate that the algorithm can overcome difficulties of large problem sizes encountered in previous related works.
Simulation and 40 Years of Object-Oriented Programming
2007 is a jubilee year: in 1967, the programming language SIMULA 67 was presented, which contained all aspects of what was later called object-oriented programming. The present paper describes the development leading to object-oriented programming, the role of simulation in this development, and other tools that appeared in SIMULA 67 and are nowadays called super-object-oriented programming.
A Fuzzy Approach for Delay Proportion Differentiated Service
Two paradigms have been proposed to provide QoS for Internet applications: Integrated Services (IntServ) and Differentiated Services (DiffServ). IntServ is not appropriate for a large network like the Internet because it is very complex. Therefore, to reduce the complexity of QoS management, DiffServ was introduced to provide QoS within a domain using flow aggregation and per-class service. In these networks the QoS between classes is constant, which allows low-priority traffic to be affected by high-priority traffic; this is not suitable. In this paper, we propose a fuzzy controller that reduces the effect of the low-priority class on higher-priority ones. Our simulations show that our approach reduces the latency dependency of the low-priority class on higher-priority ones in an effective manner.
Virtual Mechanical Engineering Education – A Case Study
Virtual engineering technology has undergone rapid progress in recent years and is being adopted increasingly by manufacturing companies of many engineering disciplines. There is an increasing demand from industry for qualified virtual engineers, who should have the ability to apply engineering principles and mechanical design methods within a commercial software package environment. This is a challenge to engineering education in universities, which traditionally tends to lack the integration of knowledge and skills required for solving real-world problems. In this paper, a case study shows some recent developments of an MSc Mechanical Engineering course at the Department of Engineering and Technology in MMU, and in particular two units, Simulation of Mechanical Systems (SMS) and Computer Aided Fatigue Analysis (CAFA), that emphasize virtual engineering education and promote the integration of knowledge acquisition, skill training and industrial application.
On the Early Development of Dispersion in Flow through a Tube with Wall Reactions
This is a study on the numerical simulation of the convection-diffusion transport of a chemical species in steady flow through a small-diameter tube, which is lined with a very thin layer made up of retentive and absorptive materials. The species may be subject to a first-order kinetic reversible phase exchange with the wall material and irreversible absorption into the tube wall. Owing to the velocity shear across the tube section, the chemical species may spread out axially along the tube at a rate much larger than that given by molecular diffusion; this process is known as dispersion. While the long-time dispersion behavior, well described by the Taylor model, has been extensively studied in the literature, the early development of the dispersion process is by contrast much less investigated. By early development, we mean a span of time, after the release of the chemical into the flow, that is shorter than or comparable to the diffusion time scale across the tube section. To understand the early development of the dispersion, the governing equations along with the reactive boundary conditions are solved numerically using the Flux Corrected Transport Algorithm (FCTA). The computation has enabled us to investigate the combined effects of the reversible and irreversible wall reactions on the early development of the dispersion coefficient. One of the results shows that the dispersion coefficient may approach its steady-state limit in a short time under the following conditions: (i) a high value of the Damkohler number (say Da ≥ 10); (ii) a small but non-zero value of the absorption rate (say Γ* ≤ 0.5).
The Differential Transform Method for Advection-Diffusion Problems
In this paper a class of numerical methods to solve linear and nonlinear PDEs and also systems of PDEs is developed. The Differential Transform method associated with the Method of Lines (MoL) is used. The theory for linear problems is extended to the nonlinear case, and a recurrence relation is established. This method can achieve an arbitrary high-order accuracy in time. A variable stepsize algorithm and some numerical results are also presented.
Mobility Analysis of the Population of Rabat-Salé-Zemmour-Zaer
In this paper, we present the origin-destination and pricing survey that we carried out during the fall of 2006 in the Moroccan region of Rabat-Salé-Zemmour-Zaer. The survey concerns people's characteristics, their travel behavior and the price they would be willing to pay for a tramway ticket. The main objective is to study a set of features relating to households, their travel habits and their choices between public and private transport modes. A comparison between the results of this survey and those of the 1996 survey is made. A pricing scheme is also given according to the tram capacity. (The Rabat-Salé tramway is under construction at the time of writing and will be operational in early 2010.)
A Supervised Text-Independent Speaker Recognition Approach
We provide a supervised text-independent speaker recognition approach in this paper. In the feature extraction stage, we propose a mel-cepstral based approach. Our feature vector classification method uses a special nonlinear metric, derived from the Hausdorff distance for sets, and a minimum mean distance classifier.
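A modified-Hausdorff-style set metric combined with a minimum distance rule can be sketched as follows. The averaging variant shown and the 2-D toy "cepstral" vectors are assumptions for illustration; the paper derives its own nonlinear metric.

```python
import math

def directed_avg(A, B):
    """Average nearest-neighbour distance from set A to set B."""
    return sum(min(math.dist(a, b) for b in B) for a in A) / len(A)

def set_distance(A, B):
    """Symmetrised, modified-Hausdorff-style metric between feature-vector sets."""
    return max(directed_avg(A, B), directed_avg(B, A))

def classify(sample_set, speaker_models):
    """Minimum distance rule over per-speaker sets of feature vectors."""
    return min(speaker_models,
               key=lambda spk: set_distance(sample_set, speaker_models[spk]))

# Hypothetical 2-D 'cepstral' vectors for two enrolled speakers.
models = {
    "spk1": [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1)],
    "spk2": [(5.0, 5.0), (5.1, 4.9), (4.8, 5.2)],
}
label = classify([(0.05, 0.1), (0.15, 0.15)], models)
```

Because the metric compares whole sets of frame-level vectors rather than single points, it is insensitive to the order of frames, which is what makes the approach text-independent.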
Theoretical Study on a Thermal Model for Large Power Transformer Units
The paper analyzes the large power transformer unit regimes, indicating the criteria for the management of the voltage operating conditions, as well as the change in the operating conditions with the load connected to the secondary winding of the transformer unit. Further, the paper presents the software application for the evaluation of the transformer unit operation under different conditions. The software application was developed by means of virtual instrumentation.
Collaborative Design System based on Object- Oriented Modeling of Supply Chain Simulation: A Case Study of Thai Jewelry Industry
The paper proposes a new concept for developing a collaborative design system. The conceptual framework involves applying simulation of supply chain management to collaborative design, called the 'SCM-Based Design Tool'. The system is developed particularly to support design activities and to integrate all facilities together, with the aim of increasing design productivity and creativity. Designers and customers can therefore collaborate through the system from the conceptual design stage. JAG, a Jewelry Art Generator based on artificial intelligence techniques, is integrated into the system. Moreover, the proposed system can support users as a decision tool and with data propagation. The system covers the process from raw material supply to product delivery. Data management and information sharing are visually supported for designers and customers via the user interface. The system is developed in a Web-assisted product development environment. The prototype system is presented for the Thai jewelry industry as a demonstration, but is applicable to other industries as well.
Entropy Generation Analysis of Free Convection Film Condensation on a Vertical Ellipsoid with Variable Wall Temperature
This paper performs a second-law thermodynamic analysis of the laminar film condensation of pure saturated vapor flowing in the direction of gravity on an ellipsoid with variable wall temperature. The analysis helps us understand how the geometric parameter (ellipticity) and the amplitude A of the non-isothermal wall temperature variation affect entropy generation during the film-wise condensation heat transfer process. To identify which irreversibilities are involved in this condensation process, we derived an expression for the entropy generation number in terms of the ellipticity and A. The result indicates that entropy generation increases with ellipticity. Furthermore, the irreversibility due to finite-temperature-difference heat transfer dominates over that due to condensate film flow friction, and the local entropy generation rate decreases with increasing A in the upper half of the ellipsoid. Meanwhile, the local entropy generation rate increases with A around the rear lower half of the ellipsoid.
Phase Behavior of CO2 and CH4 Hydrate in Porous Media
Hydrate phase equilibria for the binary CO2+water and CH4+water mixtures in silica gel pores of nominal diameters 6, 30, and 100 nm were measured and compared with results calculated from the van der Waals and Platteeuw model. At a given temperature, the three-phase hydrate-liquid water-vapor (HLV) equilibrium curves for pore hydrates were shifted to higher pressures, depending on pore size, when compared with those of bulk hydrates. Notably, the hydrate phase equilibria for the nominal 100 nm pore size were
nearly identical with those of bulk hydrates. The activities of water in
porous silica gels were modified to account for capillary effect, and
the calculation results were generally in good agreement with the
experimental data. The structural characteristics of gas hydrates in
silica gel pores were investigated through NMR spectroscopy.
Investment Prediction Using Simulation
A business case is a proposal for an investment
initiative to satisfy business and functional requirements. The
business case provides the foundation for tactical decision making
and technology risk management. It helps to clarify how the
organization will use its resources in the best way by providing
justification for the investment of resources. This paper describes how simulation was used to estimate the business case benefits and return on investment for the procurement of 8 production machines. With investment costs of about 4.7 million dollars and annual operating costs of about 1.3 million, we needed to determine whether the machines would provide enough cost savings and cost avoidance. We constructed a model of the existing factory environment consisting of 8 machines and subsequently conducted average-day simulations with light and heavy volumes to support the planning decisions that had to be documented and substantiated in the business case.
Harmonic Parameters with HHT and Wavelet Transform for Automatic Sleep Stages Scoring
Harmonic parameters (HPs) have previously been selected as features extracted from EEG signals for automatic sleep scoring. However, in previous studies only one set of HPs was used, extracted directly from the whole epoch of the EEG. In this study, two different transformations were applied to extract HPs from EEG signals: the Hilbert-Huang transform (HHT) and the wavelet transform (WT). The EEG signals are decomposed by the two transformations, and features are extracted from the different components. Twelve parameters (four sets of HPs) were extracted, some of which are highly diverse among the different stages. Afterward, the HPs from the two transformations were used to build a rough sleep stage scoring model using an SVM classifier. The accuracy of this model is about 78% using the features obtained by our proposed extractions. Our results suggest that these features may be useful for automatic sleep stage scoring.
Using XML Format as a Model of Data Backup
Nowadays, new data backup formats do not cease to appear, raising concerns about their accessibility and longevity. XML is one of the most promising formats for guaranteeing the integrity of data. This article illustrates one thing that can be done with XML: building a data backup model. The main task consists in defining a Java application able to convert the contents of a database into XML format and to restore them later.
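The article implements the converter in Java; a minimal Python sketch of the same backup-and-restore idea (table and file names are hypothetical, and the raw SQL string formatting is for illustration only) might be:

```python
import sqlite3
import xml.etree.ElementTree as ET

def backup_to_xml(db_path, xml_path, table):
    """Dump every row of `table` into an XML file, one <record> per row."""
    con = sqlite3.connect(db_path)
    cur = con.execute(f"SELECT * FROM {table}")  # illustration only: no injection guard
    cols = [c[0] for c in cur.description]
    root = ET.Element("backup", table=table)
    for row in cur:
        rec = ET.SubElement(root, "record")
        for col, val in zip(cols, row):
            ET.SubElement(rec, col).text = "" if val is None else str(val)
    ET.ElementTree(root).write(xml_path, encoding="unicode")
    con.close()

def restore_from_xml(xml_path, db_path, table):
    """Re-insert into `table` the records stored in the XML backup."""
    root = ET.parse(xml_path).getroot()
    con = sqlite3.connect(db_path)
    for rec in root:
        cols = [child.tag for child in rec]
        vals = [child.text for child in rec]
        marks = ",".join("?" * len(vals))
        con.execute(f"INSERT INTO {table} ({','.join(cols)}) VALUES ({marks})", vals)
    con.commit()
    con.close()
```

Column names become element tags, so the backup is self-describing; note that all values round-trip as text and rely on the database's type coercion on restore.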
Feedback-Controlled Server for Scheduling Aperiodic Tasks
This paper proposes a scheduling scheme using feedback
control to reduce the response time of aperiodic tasks with soft
real-time constraints. We design an algorithm based on the proposed
scheduling scheme and Total Bandwidth Server (TBS) that is a
conventional server technique for scheduling aperiodic tasks. We then
describe the feedback controller of the algorithm and give the control
parameter tuning methods. The simulation study demonstrates that the algorithm can reduce the mean response time by up to 26% compared to TBS, in exchange for slight deadline misses.
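For context, the Total Bandwidth Server deadline-assignment rule the algorithm builds on, d_k = max(r_k, d_{k-1}) + C_k/U_s, can be sketched as follows (this is the textbook TBS rule, not the paper's feedback-controlled variant):

```python
def tbs_deadlines(jobs, Us):
    """Total Bandwidth Server: each aperiodic job k with release time r_k and
    execution time C_k gets the deadline d_k = max(r_k, d_{k-1}) + C_k / Us,
    where Us is the server bandwidth; jobs are then scheduled by EDF.
    `jobs` is a list of (release_time, execution_time) pairs in release order."""
    deadlines = []
    d_prev = 0.0
    for r_k, C_k in jobs:
        d_k = max(r_k, d_prev) + C_k / Us
        deadlines.append(d_k)
        d_prev = d_k
    return d_k and deadlines
```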
Memory Estimation of Internet Server Using Queuing Theory: Comparative Study between M/G/1, G/M/1 and G/G/1 Queuing Model
How to effectively allocate system resources so that gateway servers can process client requests is a challenging problem. In this paper, we propose an improved scheme for the autonomous performance of gateway servers under highly dynamic traffic loads. We devise a methodology to calculate queue length and waiting time from gateway server information in order to reduce response time variance in the presence of bursty traffic. The most widespread consideration is performance, because gateway servers must offer cost-effective and high-availability services in the long run, and thus have to be scaled to meet the expected load. Performance measurements can serve as the basis for performance modeling and prediction. With the help of performance models, performance metrics (such as buffer size and waiting time) can be determined during the development process. This paper describes the possible queue models that can be applied to estimate the queue length, and from it the final value of the memory size. Both simulation and experimental studies using synthesized workloads and analysis of real-world gateway servers demonstrate the effectiveness of the proposed system.
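For the M/G/1 case in the title, the mean queue length that feeds such a buffer estimate follows the Pollaczek-Khinchine formula; a small sketch (the paper's exact formulation may differ):

```python
def mg1_metrics(lam, mu, cs2):
    """Pollaczek-Khinchine results for an M/G/1 queue.
    lam - mean arrival rate, mu - mean service rate,
    cs2 - squared coefficient of variation of the service time.
    Returns (mean number waiting Lq, mean waiting time Wq)."""
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    Lq = rho**2 * (1 + cs2) / (2 * (1 - rho))  # P-K mean queue length
    Wq = Lq / lam                              # Little's law
    return Lq, Wq
```

A buffer sized for Lq plus the job in service (and some safety margin for burstiness) is the kind of memory estimate the abstract describes.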
Two Area Power Systems Economic Dispatch Problem Solving Considering Transmission Capacity Constraints
This paper describes an efficient and practical method for the economic dispatch problem in one- and two-area electrical power systems, taking into account the capacity constraint of the tie transmission line. The direct search method (DSM) is used with several equality and inequality constraints on the production units, for any kind of fuel cost function. With this method, it is possible to apply several inequality constraints without difficulty, even for complex cost functions or when the derivative of the cost function is unavailable. To minimize the total number of search iterations, multi-level convergence is incorporated in the DSM. An enhanced direct search method (EDSM) for the two-area power system is then investigated, and an initial calculation step size that leads to fewer iterations, and hence less calculation time, is presented. The effect of the capacity of the tie line between the areas on the economic dispatch problem and on the total generation cost is studied; line compensation and combined active and reactive power dispatch are proposed to overcome the high generation costs for this multi-area system.
Numerical Study of Iterative Methods for the Solution of the Dirichlet-Neumann Map for Linear Elliptic PDEs on Regular Polygon Domains
A generalized Dirichlet-to-Neumann map is one of the main aspects characterizing a recently introduced method for analyzing linear elliptic PDEs, through which it became possible to couple the known and unknown components of the solution on the boundary of the domain without solving in its interior. For its numerical solution, a well-conditioned, quadratically convergent sine-collocation method was developed, which yields a linear system of equations in which the diagonal blocks of the associated coefficient matrix are point diagonal. This structural property, among others, motivated the use of iterative methods for its solution. In this work we present a conclusive numerical study of the behavior of classical (Jacobi and Gauss-Seidel) and Krylov subspace (GMRES and Bi-CGSTAB) iterative methods when applied to the solution of the Dirichlet-to-Neumann map associated with Laplace's equation on regular polygons with the same boundary conditions on all edges.
Multi Band Frequency Synthesizer Based on ISPD PLL with Adapted LC Tuned VCO
The 4G front-end transceiver needs high performance, which can be obtained mainly with an optimal architecture and a multi-band local oscillator. In this study, we propose and present a new architecture for a multi-band frequency synthesizer based on an Inverse Sine Phase Detector Phase-Locked Loop (ISPD PLL), without any filters or controlled-gain blocks, combined with an adapted multi-band LC-tuned VCO using several numerically controlled, non-binary-weighted capacitive branches. The proposed architecture, based on a 0.35 μm CMOS process technology, supports multi-band GSM/DCS/DECT/UMTS/WiMax applications and gives good performance: a phase noise of -127 dBc at 1 MHz, a figure of merit (FOM) of -186 dB at 1 MHz, and a wide frequency range (from 0.83 GHz to 3.5 GHz), which make the proposed architecture amenable to monolithic integration and 4G multi-band applications.
An Experimental Investigation of Thermoelectric Air-Cooling Module
This article experimentally investigates the thermal performance of a thermoelectric air-cooling module comprising a thermoelectric cooler (TEC) and an air-cooling heat sink. The influences of input current and heat load are determined, and the performance in each situation is quantified by thermal resistance analysis. Since the TEC generates Joule heat, constructing a thermal resistance network is difficult; to simplify the analysis, this article focuses on the resistances the heat load encounters when passing through the device, so the thermal resistances in this paper are temperature differences divided by the heat load. According to the results, there exists an optimum input current for every heating power; in this case, the optimum input current is around 6 A or 7 A. The performance of the heat sink is improved by the TEC at certain heating powers and input currents, especially at a low heat load; the device can even make the heat source cooler than the ambient. However, the TEC is not effective at every heat load and input current: in some situations, the device performs worse than the heat sink without the TEC. To determine the availability of the TEC, this study identifies the effective operating region in which the TEC air-cooling module works better than the heat sink alone. The results show that the TEC is more effective at lower heat loads; if the heat load is too high, the heat sink with the TEC will perform worse than without it. The limit of this device is 57 W. Besides, the TEC is not helpful if the input current is too high or too low; there is an effective range of input current, and the range becomes narrower as the heat load increases.
On the Reduction of Side Effects in Tomography
Since computed tomography (CT) normally requires hundreds of projections to reconstruct an image, patients are exposed to more X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, CT requires many projections for a good-quality reconstruction. In this paper, low variability of the particles in an object is exploited to obtain a good-quality reconstruction. Although the reconstructed image and the original image have the same projections, in general they need not be the same; if, in addition to the projections, a priori information about the image is known, it is possible to obtain a good-quality reconstructed image. This paper shows by experimental results why conventional algorithms fail to reconstruct from a few projections, and gives an efficient polynomial-time algorithm to reconstruct a bi-level image from its row and column projections and a known sub-image of the unknown image, with smoothness constraints, by reducing the reconstruction problem to an integral max-flow problem. The paper also discusses the necessary and sufficient conditions for uniqueness, and the extension of 2D bi-level image reconstruction to 3D bi-level image reconstruction.
Robust Face Recognition using AAM and Gabor Features
In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors, which are well known to be robust with respect to small variations of shape, scaling, rotation, distortion, illumination, and pose in images, are popularly employed as feature vectors in many object detection and recognition algorithms. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires localization of the facial feature points at which the Gabor feature vectors are extracted. However, the localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values, and wrong localization of facial feature points degrades the face recognition rate. AAM, on the other hand, is known to be successfully applied to the localization of facial feature points. In this paper, we devise a facial feature point localization method which first roughly estimates the facial feature points using AAM and then refines them using Gabor jet similarity-based localization, with the initial points set to the rough estimates obtained from AAM; we then propose a face recognition algorithm using this localization method and Gabor feature vectors. Experiments show that such a cascaded localization method, based on both AAM and Gabor jet similarity, is more robust than localization based on Gabor jet similarity alone. It is also shown that the proposed face recognition algorithm performs better than a conventional face recognition algorithm using Gabor jet similarity-based localization and Gabor feature vectors, like EBGM.
Progressive AAM Based Robust Face Alignment
AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. If the initial values are somewhat far from the global optimum, there is a good chance that AAM-based face alignment will converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm which first finds the feature parameter vector fitting the inner facial feature points of the face and then localizes the feature points of the whole face using this information. The proposed algorithm exploits the fact that the feature points of the inner part of the face are less variable and less affected by the background surrounding the face than those of the outer part (such as the chin contour). The algorithm consists of two stages: a modeling and relation derivation stage and a fitting stage. The first stage constructs two AAM models, an inner-face AAM model and a whole-face AAM model, and then derives the relation matrix between the inner-face AAM parameter vector and the whole-face AAM parameter vector. In the fitting stage, the algorithm aligns the face progressively in two phases: in the first phase, it finds the feature parameter vector fitting the inner-face AAM model to a new input face image; in the second phase, it localizes the feature points of the whole face based on the whole-face AAM model, using an initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Experiments verify that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based face alignment algorithm.
Multi-Scale Gabor Feature Based Eye Localization
Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need to be improved in terms of precision and computation time for successful application. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors which is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an eye model bunch for each eye (left or right), consisting of n Gabor jets and the average eye coordinates obtained from n model face images, and then localizes the eyes in an incoming face image by exploiting the fact that the true eye coordinates are most likely to be very close to the position where a Gabor jet has the best similarity match with a jet in the eye model bunch. Similar ideas have already been proposed, for example in EBGM (Elastic Bunch Graph Matching). However, the method used in EBGM is known not to be robust with respect to initial values, and it may need an extensive search range to achieve the required performance, which causes a much larger computational burden. In this paper, we propose a multi-scale approach with only a slightly increased computational burden: one first localizes the eyes based on Gabor feature vectors in a coarse face image obtained by downsampling the original face image, and then localizes the eyes in the original-resolution face image using the coordinates found in the coarse image as initial points. Several experiments and comparisons with other eye localization methods reported in the literature show the efficiency of our proposed method.
Estimation of Buffer Size of Internet Gateway Server via G/M/1 Queuing Model
How to efficiently assign system resources so that gateway servers can route client requests is a difficult problem. In this paper, we present an enhanced scheme for the autonomous performance of gateway servers under highly dynamic traffic loads. We devise a methodology to calculate queue length and waiting time from gateway server information in order to reduce response time variance in the presence of bursty traffic.
The most widespread consideration is performance, because gateway servers must offer cost-effective and high-availability services in the long run, and thus have to be scaled to meet the expected load. Performance measurements can serve as the basis for performance modeling and prediction. With the help of performance models, performance metrics (such as buffer size and waiting time) can be determined during the development process.
This paper describes the possible queue models that can be applied to estimate the queue length, and from it the final value of the memory size. Both simulation and experimental studies using synthesized workloads and analysis of real-world gateway servers demonstrate the effectiveness of the proposed system.
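For the G/M/1 model named in the title, the mean waiting time follows from the root sigma of sigma = A*(mu(1 - sigma)), where A* is the Laplace-Stieltjes transform of the interarrival-time distribution; a sketch using fixed-point iteration (the paper's own derivation may differ):

```python
def gm1_wait(mu, lst, tol=1e-12):
    """G/M/1 queue: mean waiting time Wq = sigma / (mu * (1 - sigma)), where
    sigma is the unique root in (0, 1) of sigma = A*(mu * (1 - sigma)) and
    A*(s) = lst(s) is the Laplace-Stieltjes transform of the interarrival
    distribution. The root is found here by simple fixed-point iteration."""
    sigma = 0.5  # any starting point in (0, 1)
    for _ in range(100000):
        new = lst(mu * (1.0 - sigma))
        if abs(new - sigma) < tol:
            sigma = new
            break
        sigma = new
    return sigma / (mu * (1.0 - sigma))
```

With exponential interarrivals this collapses to the familiar M/M/1 waiting time, which makes a convenient sanity check; smoother (e.g. deterministic) arrivals give a smaller sigma and hence a smaller buffer estimate.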
Applying Similarity Theory and Hilbert Huang Transform for Estimating the Differences of Pig's Blood Pressure Signals between Situations of Intestinal Artery Blocking and Unblocking
A mammal's body can be seen as a complex network of blood vessels. When the heart pumps blood periodically, blood runs through the blood vessels and rebounds from their walls, so blood pressure signals can be measured with complex but periodic patterns. When an artery is clamped during a surgical operation, the spectrum of the blood pressure signals differs from that of the normal situation. In this investigation, intestinal artery clamping operations were performed on a pig to simulate intestinal blocking during a surgical operation. Similarity theory is a convenient and easy tool to prove that the patterns of blood pressure signals with the intestinal artery blocked and unblocked are indeed different, and the Hilbert Huang Transform can be applied to extract the characteristic parameters of the blood pressure pattern. In conclusion, the patterns of blood pressure signals in the two situations, intestinal artery blocked and unblocked, can be distinguished by the characteristic parameters defined in this paper.
Learning Flexible Neural Networks for Pattern Recognition
Learning the slope of each neuron's activation function, like the weights of the links, gives the network a new property: flexibility. In flexible neural networks, because the operation of the neurons is supervised and controlled, the burden of learning is not carried by the link weights alone; in each learning period the slopes of the neurons' activation functions also cooperate toward the learning goal, so the number of learning iterations decreases considerably.
Furthermore, learning the neuron parameters makes them immune to changes in their inputs and to the factors that cause such changes. Likewise, the initial selection of the weights, the type of activation function, the initial slope of the activation function, and the fixed coefficient that multiplies the error gradient to compute the weight and slope updates all have a direct effect on the convergence of the network during learning.
A New Particle Filter Inspired by Biological Evolution: Genetic Filter
In this paper, we consider a new particle filter inspired
by biological evolution. In the standard particle filter, a resampling
scheme is used to decrease the degeneracy phenomenon and improve
estimation performance. Unfortunately, however, it can also cause the undesired particle deprivation problem. In order to
overcome this problem of the particle filter, we propose a novel
filtering method called the genetic filter. In the proposed filter, we
embed the genetic algorithm into the particle filter and overcome the
problems of the standard particle filter. The validity of the proposed
method is demonstrated by computer simulation.
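The abstract does not list its genetic operators; one plausible sketch of a particle filter step with GA-style selection, crossover, and mutation (all function names, the 1D model, and the parameters are illustrative) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_filter_step(particles, z, f, h, q, r, mut_rate=0.1):
    """One step of a particle filter whose resampling is followed by GA-style
    crossover and mutation to fight particle deprivation.
    particles: (N,) state samples; z: measurement; f/h: transition and
    measurement functions; q/r: process and measurement noise std devs."""
    N = particles.size
    # propagate through the transition model with process noise
    pred = f(particles) + rng.normal(0.0, q, N)
    # weight by measurement likelihood (the GA "fitness")
    w = np.exp(-0.5 * ((z - h(pred)) / r) ** 2)
    w /= w.sum()
    # selection: multinomial resampling proportional to fitness
    sel = pred[rng.choice(N, N, p=w)]
    # crossover: arithmetic average of randomly paired parents
    mates = rng.permutation(sel)
    children = 0.5 * (sel + mates)
    # mutation: perturb a small fraction of the children
    mask = rng.random(N) < mut_rate
    children[mask] += rng.normal(0.0, q, mask.sum())
    return children
```

Selection plays the role of resampling, while crossover and mutation keep the particle set diverse, which is exactly the deprivation problem the filter targets.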
Logic Program for Authorizations
As a security mechanism, authorization provides access control over system resources according to the policies and rules specified by the security strategies. Whether arising from updates or from the initial specification, conflicts in authorizations are an issue that needs to be solved. In this paper, we propose a new approach to resolving conflicts by using prioritized logic programs, and we discuss the uniqueness of their answer set. Addressing conflict resolution from a logic programming viewpoint, together with the uniqueness analysis of the answer set, provides a novel and efficient approach to authorization conflict resolution.
Optimal Control Problem, Quasi-Assignment Problem and Genetic Algorithm
In this paper we apply genetic algorithms, one of the approaches in the category of heuristic methods, to obtain approximate solutions of optimal control problems. First, we convert the optimal control problem into a quasi-assignment problem by defining the usual elements of genetic algorithm applications. We then obtain an approximate optimal control function as a piecewise constant function. Finally, numerical examples are given.
A Critical Survey of Reusability Aspects for Component-Based Systems
The last decade has shown that the object-oriented concept by itself is not powerful enough to cope with the rapidly changing requirements of ongoing applications. Component-based systems achieve flexibility by clearly separating the stable parts of systems (i.e., the components) from the specification of their composition. In order to realize the reuse of components effectively in component-based software development (CBSD), the reusability of components must be measured. However, due to the black-box nature of components, whose source code is not available, it is difficult to use conventional metrics in component-based development, as these metrics require analysis of source code. In this paper, we survey a few existing component-based reusability metrics. These metrics give a broader view of a component's understandability, adaptability, and portability; the paper also describes an analysis, in terms of quality factors related to reusability, contained in an approach that aids significantly in assessing existing components for reusability.
Simulation of Lid Cavity Flow in Rectangular, Half-Circular and Beer Bucket Shapes using Quasi-Molecular Modeling
We developed a new method based on quasi-molecular modeling to simulate cavity flow in three cavity shapes: rectangular, half-circular, and beer-bucket, in cgs units. Each quasi-molecule was a group of particles that interacted in a fashion entirely analogous to classical Newtonian molecular interactions. When a cavity flow was simulated, the instantaneous velocity vector fields were obtained using an inverse distance weighted interpolation method. In all three cavity shapes, the fluid motion rotated counter-clockwise. The velocity vector fields of the three cavity shapes showed a primary vortex located near the upstream corners at times t ≈ 0.500 s, t ≈ 0.450 s, and t ≈ 0.350 s, respectively. The configurational kinetic energy of the cavities increased with time until it reached a maximum at t ≈ 0.02 s, and then decreased as time increased. The rectangular cavity system showed the lowest kinetic energy, while the half-circular cavity system showed the highest. The kinetic energy of the rectangular, beer-bucket, and half-circular cavities fluctuated about stable average values of 35.62 × 10³, 38.04 × 10³, and 40.80 × 10³ ergs/particle, respectively. This indicates that the half-circular shape is the most suitable shape for a shrimp pond, because the water flows best in it compared with the rectangular and beer-bucket shapes.
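The inverse distance weighted interpolation used to recover the velocity fields can be sketched as follows (the array shapes and the power parameter are assumptions, not taken from the paper):

```python
import numpy as np

def idw_velocity(points, velocities, grid, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of scattered particle
    velocities onto query points.
    points: (N, 2) particle positions; velocities: (N, 2) particle
    velocities; grid: (M, 2) query points. Returns (M, 2) velocities."""
    # pairwise distances between every query point and every particle
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
    # weights fall off as 1 / d^power; eps avoids division by zero
    w = 1.0 / (d ** power + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ velocities
```

A query point that coincides with a particle essentially recovers that particle's velocity, while points in between get a smooth distance-weighted blend.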
Thailand National Biodiversity Database System with webMathematica and Google Earth
The National Biodiversity Database System (NBIDS) has been developed for collecting Thai biodiversity data. The goal of this project is to provide advanced tools for querying, analyzing, modeling, and visualizing patterns of species distribution for researchers and scientists. NBIDS records two types of datasets: biodiversity data and environmental data. Biodiversity data are species presence data and species status. The attributes of biodiversity data can be further classified into two groups: universal and project-specific attributes. Universal attributes are common to all of the records, e.g., X/Y coordinates, year, and collector name. Project-specific attributes are unique to one or a few projects, e.g., flowering stage. Environmental data include atmospheric data, hydrology data, soil data, and land cover data collected using GLOBE protocols. We have developed web-based tools for data entry. Google Earth KML and ArcGIS were used as tools for map visualization. webMathematica was used for simple data visualization and also for advanced data analysis and visualization, e.g., spatial interpolation and statistical analysis. NBIDS will be used by park rangers at Khao Nan National Park, and
Morphometric Analysis of Tor tambroides by Stepwise Discriminant and Neural Network Analysis
The population structure of Tor tambroides was investigated with morphometric data (i.e., morphometric measurements and truss measurements). A morphometric analysis was conducted to compare specimens from three waterfalls: Sunanta, Nan Chong Fa, and Wang Muang waterfalls at Khao Nan National Park, Nakhon Si Thammarat, Southern Thailand. The results of stepwise discriminant analysis on seven morphometric variables and 21 truss variables per individual were the same as those from a neural network: fish from the three waterfalls were separated into three groups based on their morphometric measurements. The morphometric data show that the neural network model performed better than the stepwise discriminant analysis.
Climatic Factors Affecting Influenza Cases in Southern Thailand
This study investigated the climatic factors associated with influenza cases in Southern Thailand. The main aim was to use regression analysis to investigate possible causal relationships between climatic factors and influenza incidence, and the variability between the Andaman Sea side and the Gulf of Thailand side. Southern Thailand had the highest influenza incidence among the four regions of the country (north, northeast, central, and southern Thailand). In this study, 14 climatic factors were considered: mean relative humidity, maximum relative humidity, minimum relative humidity, rainfall, rainy days, daily maximum rainfall, pressure, maximum wind speed, mean wind speed, sunshine duration, mean temperature, maximum temperature, minimum temperature, and temperature difference (i.e., maximum minus minimum temperature). The multiple stepwise regression technique was used to fit the statistical model. The results indicated that the mean wind speed and the minimum relative humidity were positively associated with the number of influenza cases on the Andaman Sea side, while the maximum wind speed was positively associated with the number of influenza cases on the Gulf of Thailand side.
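The multiple stepwise regression used here can be illustrated by a simple forward-selection sketch on synthetic data (the variable names, stopping rule, and data are illustrative, not the study's):

```python
import numpy as np

def forward_stepwise(X, y, names, tol=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    improves R^2, stopping when the gain falls below `tol`.
    Returns (selected variable names, final R^2)."""
    selected, remaining = [], list(range(X.shape[1]))
    best_r2 = 0.0
    while remaining:
        gains = []
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])  # with intercept
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            tss = (y - y.mean()) @ (y - y.mean())
            gains.append((1.0 - (resid @ resid) / tss, j))
        r2, j = max(gains)          # best candidate this round
        if r2 - best_r2 < tol:      # improvement too small: stop
            break
        best_r2 = r2
        selected.append(j)
        remaining.remove(j)
    return [names[j] for j in selected], best_r2
```

Published stepwise procedures usually gate entry on F- or p-values rather than a raw R-squared gain; the greedy structure is the same.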
Larval Occurrence and Climatic Factors Affecting DHF Incidence in Samui Islands, Thailand
This study investigated the number of Aedes larvae,
the key breeding sites of Aedes sp., and the relationship between
climatic factors and the incidence of DHF in Samui Islands. We
conducted our questionnaire and larval surveys from randomly
selected 105 households in Samui Islands in July-September 2006.
Pearson's correlation coefficient was used to explore the primary
association between the DHF incidence and all climatic factors.
Multiple stepwise regression technique was then used to fit the
statistical model. The results showed that the positive indoor
containers were small jars, cement tanks, and plastic tanks. The
positive outdoor containers were small jars, cement tanks, plastic
tanks, used cans, tires, plastic bottles, discarded objects, pot saucers,
plant pots, and areca husks. All Ae. albopictus larval indices (i.e., CI,
HI, and BI) were higher than Ae. aegypti larval indices in this area.
These larval indices were higher than the WHO standard. This indicated
a high risk of DHF transmission at Samui Islands. The multiple
stepwise regression model was y = -288.80 + 11.024x, where x is the mean temperature. The mean temperature was thus positively associated with the DHF incidence in this area.
Computation of D8 Flow Line at Ron Phibun Area, Nakhon Si Thammarat, Thailand
A flow line computational technique based on the D8 method was developed using Mathematica. The technique was applied to the Ron Phibun area, Nakhon Si Thammarat Province, an area highly contaminated with arsenic(III) and arsenic(V). It was found that the technique implemented in Mathematica produces results similar to those obtained from GRASS v5.0.2.
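The D8 method itself assigns each cell the direction of steepest descent among its eight neighbours; the paper works in Mathematica, but the idea can be sketched in Python:

```python
import numpy as np

# The 8 neighbour offsets in D8 order (E, SE, S, SW, W, NW, N, NE)
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def d8_directions(dem):
    """For each interior cell of the elevation grid `dem`, return the index
    (0-7, into OFFSETS) of the steepest-descent neighbour, or -1 for a
    pit or flat cell from which no water drains."""
    rows, cols = dem.shape
    direc = -np.ones((rows, cols), dtype=int)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            best, best_k = 0.0, -1
            for k, (di, dj) in enumerate(OFFSETS):
                dist = np.hypot(di, dj)  # 1 for edges, sqrt(2) for diagonals
                slope = (dem[i, j] - dem[i + di, j + dj]) / dist
                if slope > best:
                    best, best_k = slope, k
            direc[i, j] = best_k
    return direc
```

Flow lines are then traced by repeatedly stepping from a cell to its steepest-descent neighbour until a pit or the grid boundary is reached.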