Hamiltonian Factors in Hamiltonian Graphs
Let G be a Hamiltonian graph. A factor F of G is called
a Hamiltonian factor if F contains a Hamiltonian cycle. In this paper,
two sufficient conditions are given, which are two neighborhood
conditions for a Hamiltonian graph G to have a Hamiltonian factor.
Advanced Gronwall-Bellman-Type Integral Inequalities and Their Applications
In this paper, some new nonlinear generalized
Gronwall-Bellman-Type integral inequalities with mixed time delays
are established. These inequalities can be used as handy tools
to research stability problems of delayed differential and integral
dynamic systems. As applications, based on these new established
inequalities, some p-stable results for an integro-differential equation
are also given. Two numerical examples are presented to illustrate
the validity of the main results.
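For context, the classical Gronwall-Bellman inequality that these results generalize can be stated as follows (u and b continuous and nonnegative, a ≥ 0 a constant):

```latex
u(t) \le a + \int_{t_0}^{t} b(s)\,u(s)\,ds
\quad\Longrightarrow\quad
u(t) \le a \exp\!\left(\int_{t_0}^{t} b(s)\,ds\right).
```

The inequalities of the paper extend this pattern to nonlinear integrands with mixed time delays.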
Quadratic Irrationals, Quadratic Ideals and Indefinite Quadratic Forms II
Let D ≠ 1 be a positive non-square integer and let δ = √D or δ = (1+√D)/2 be a real quadratic irrational with trace t = δ + δ̄ and norm n = δδ̄, where δ̄ denotes the conjugate of δ. Let γ = (P+δ)/Q be a quadratic irrational for positive integers P and Q. Given a quadratic irrational γ, there exist a quadratic ideal Iγ = [Q, δ + P] and an indefinite quadratic form Fγ(x, y) = Q(x−γy)(x−γ̄y) of discriminant Δ = t²−4n. In the first section, we give some preliminaries on binary quadratic forms, quadratic irrationals and quadratic ideals. In the second section, we obtain some results on γ, Iγ and Fγ for some specific values of Q and P.
Some Collineations Preserving Cross-Ratio in some Moufang-Klingenberg Planes
In this paper we are interested in Moufang-Klingenberg
planes M(A) defined over a local alternative ring A of dual numbers.
We show that some collineations of M(A) preserve cross-ratio.
Audio Watermarking Using Spectral Modifications
In this paper, we present a non-blind technique of
adding the watermark to the Fourier spectral components of audio
signal in a way such that the modified amplitude does not exceed the
maximum amplitude spread (MAS). This MAS is due to individual
discrete Fourier transform (DFT) coefficients in that particular frame,
which is derived from the Energy Spreading function given by
Schroeder. Using this technique, one can store double the information within a given frame length, i.e., overriding the watermark on a host of equal length with the least perceptual distortion. The watermark
is uniformly floating on the DFT components of original signal.
This helps in detecting any intentional manipulations done on the
watermarked audio. Also, the scheme is found robust to various signal
processing attacks such as the presence of multiple watermarks, additive white Gaussian noise (AWGN) and MP3 compression.
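The bounded-modification idea can be illustrated with a small sketch (not the authors' exact embedding: the per-bin threshold `mask` below is a hypothetical stand-in for the MAS derived from the spreading function):

```python
import numpy as np

def embed_watermark(frame, watermark, mask):
    """Add a watermark to the DFT of an audio frame, clipping the
    spectral perturbation bin by bin so it never exceeds the masking
    threshold `mask` (a stand-in for the paper's MAS)."""
    X = np.fft.rfft(frame)
    W = np.fft.rfft(watermark)
    # limit real and imaginary perturbations independently per bin
    delta = np.clip(W.real, -mask, mask) + 1j * np.clip(W.imag, -mask, mask)
    return np.fft.irfft(X + delta, n=len(frame))
```

After re-synthesis, the spectral change in every bin is bounded by √2·mask, which is the property the masking model is meant to guarantee.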
Improved Estimation of Evolutionary Spectrum based on Short Time Fourier Transforms and Modified Magnitude Group Delay by Signal Decomposition
A new estimator for evolutionary spectrum (ES) based
on short time Fourier transform (STFT) and modified group delay
function (MGDF) by signal decomposition (SD) is proposed. The
STFT due to its built-in averaging, suppresses the cross terms and the
MGDF preserves the frequency resolution of the rectangular window
with the reduction in the Gibbs ripple. The present work overcomes
the magnitude distortion observed in multi-component non-stationary
signals with STFT and MGDF estimation of ES using SD. The SD is
achieved either through discrete cosine transform based harmonic
wavelet transform (DCTHWT) or perfect reconstruction filter banks
(PRFB). The MGDF also improves the signal to noise ratio by
removing associated noise. The performance of the present method is
illustrated for cross chirp and frequency shift keying (FSK) signals,
which indicates that its performance is better than STFT-MGDF
(STFT-GD) alone. Further, its noise immunity is better than that of the STFT. The SD based methods, however, cannot bring out the frequency transition path from band to band clearly, as there will be a gap in the contour plot at the transition. The PRFB based STFT-SD shows better performance than the DCTHWT decomposition method for STFT-GD.
Unit Selection Algorithm Using Bi-grams Model For Corpus-Based Speech Synthesis
In this paper, we present a novel statistical approach to
corpus-based speech synthesis. Classically, phonetic information is
defined and considered as acoustic reference to be respected. In this
way, many studies were elaborated for acoustical unit classification.
This type of classification allows separating units according to their
symbolic characteristics. Indeed, target cost and concatenation cost
were classically defined for unit selection.
In corpus-based speech synthesis systems, when using large text
corpora, cost functions were limited to a juxtaposition of symbolic
criteria and the acoustic information of units is not exploited in the
definition of the target cost.
In this manuscript, we take into consideration the unit phonetic information corresponding to acoustic information. This is realized by defining a probabilistic linguistic bi-grams model used for unit selection. The selected units are extracted from the English TIMIT corpus.
A Double Referenced Contrast for Blind Source Separation
This paper addresses the problem of blind source separation
(BSS). To recover original signals, from linear instantaneous
mixtures, we propose a new contrast function based on the use of a
double referenced system. Our approach assumes statistically independent sources. The reference vectors are embedded in the cumulants
to evaluate the independence. The estimation of the separating matrix
will be performed in two steps: whitening observations and joint
diagonalization of a set of referenced cumulant matrices. Computer
simulations are presented to demonstrate the effectiveness of the proposed approach.
An Optimal Unsupervised Satellite image Segmentation Approach Based on Pearson System and k-Means Clustering Algorithm Initialization
This paper presents an optimal and unsupervised satellite image segmentation approach based on the Pearson system and k-means clustering algorithm initialization. The method can be considered original in that it utilises the k-means clustering algorithm for an optimal initialisation of the image class number on one hand, and exploits the Pearson system for an optimal statistical distribution affectation of each considered class on the other hand. Satellite image exploitation requires the use of different approaches, especially those founded on the unsupervised statistical segmentation principle. Such approaches necessitate the definition of several parameters, such as the image class number, class variables' estimation and generalised mixture distributions. Use of statistical image attributes assured convincing and promising results, on the condition of having an optimal initialisation step with an appropriate statistical distribution affectation. The Pearson system associated with a k-means clustering algorithm and the Stochastic Expectation-Maximization (SEM) algorithm can be adapted to such a problem. For each image class, the Pearson system attributes one distribution type according to different parameters, especially the skewness (β1) and the kurtosis (β2). The different adapted algorithms, the k-means clustering algorithm, the SEM algorithm and the Pearson system algorithm, are then applied to the satellite image segmentation problem. Efficiency of these combined algorithms was validated firstly with the Mean Quadratic Error (MQE) evaluation, and secondly with visual inspection across several comparisons of these unsupervised image segmentations.
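A minimal sketch of the β1/β2 statistics the Pearson system relies on to assign a distribution type (pure-Python sample moments; the Pearson classification boundaries themselves are not reproduced here):

```python
def pearson_stats(samples):
    """Squared skewness (beta1) and kurtosis (beta2) from central
    moments; the Pearson system maps (beta1, beta2) to a distribution
    type (e.g. beta1 = 0, beta2 = 3 corresponds to the Gaussian)."""
    n = len(samples)
    m = sum(samples) / n
    m2, m3, m4 = (sum((x - m) ** k for x in samples) / n for k in (2, 3, 4))
    return m3 ** 2 / m2 ** 3, m4 / m2 ** 2
```

In the segmentation pipeline these statistics would be computed per k-means cluster, one (β1, β2) pair per image class.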
Fixed Point Equations Related to Motion Integrals in Renormalization Hopf Algebra
In this paper we consider quantum motion integrals
depended on the algebraic reconstruction of BPHZ method for
perturbative renormalization in two different procedures. Then based
on Bogoliubov character and Baker-Campbell-Hausdorff (BCH) formula,
we show how the motion integral condition on components of the Birkhoff factorization of a Feynman rules character on the Connes-Kreimer Hopf algebra of rooted trees can determine a family of fixed point equations.
A Meta-Heuristic Algorithm for Vertex Covering Problem Based on Gravity
A new meta-heuristic approach called "randomized gravitational emulation search algorithm (RGES)" for solving vertex covering problems has been designed. This algorithm is founded upon introducing a randomization concept along with two of the four primary parameters in physics, 'velocity' and 'gravity'. A new heuristic operator is introduced in the domain of RGES to maintain feasibility specifically for the vertex covering problem and to yield best solutions. The performance of this algorithm has been evaluated on a large set of benchmark problems from the OR-Library. Computational results showed that the randomized gravitational emulation search algorithm based heuristic is capable of producing high quality solutions. The performance of this heuristic, when compared with other existing heuristic algorithms, is found to be excellent in terms of solution quality.
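The paper's RGES operators are not specified here; as a hedged illustration of the general idea of injecting randomization into a greedy search for vertex covers, one might sketch:

```python
import random

def random_greedy_vertex_cover(edges, trials=50, seed=0):
    """Randomized greedy heuristic for vertex cover (an illustrative
    stand-in for randomized search, not the RGES operators).  Each
    trial repeatedly picks a random uncovered edge and covers it with
    the endpoint touching more uncovered edges; the best cover over
    all trials is returned."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        uncovered = set(edges)
        cover = set()
        while uncovered:
            u, v = rng.choice(sorted(uncovered))
            deg = lambda w: sum(w in e for e in uncovered)
            pick = u if deg(u) >= deg(v) else v
            cover.add(pick)
            uncovered = {e for e in uncovered if pick not in e}
        if best is None or len(cover) < len(best):
            best = cover
    return best
```

Keeping the best of many randomized trials is the same "restart" principle that lets stochastic heuristics escape poor greedy choices.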
Controllability of Efficiency of Antiviral Therapy in Hepatitis B Virus Infections
An optimal control problem for a mathematical model of efficiency of antiviral therapy in hepatitis B virus infections is considered. The aim of the study is to control the new viral production, block the new infection cells and maintain the number of uninfected cells in the given range. The optimal controls represent the efficiency of antiviral therapy in inhibiting viral production and preventing new infections. Defining the cost functional, the optimal control problem is converted into the constrained optimization problem and the first order optimality system is derived. For the numerical simulation, we propose the steepest descent algorithm based on the adjoint variable method. A computer program in MATLAB is developed for the numerical simulations.
On 6-Figures in Finite Klingenberg Planes of Parameters (p2k-1, p)
In this paper, we deal with the finite projective Klingenberg plane M(A) coordinatized by the local ring A := Z_q + Z_q ε (where q = p^r is a prime power, ε ∉ Z_q and ε² = 0). We obtain some combinatorial results on 6-figures. For example, we show that there exist p − 1 classes of 6-figures in M(A).
Further Investigations on Higher Mathematics Scores for Chinese University Students
Recently, X. Ge and J. Qian investigated some relations between higher mathematics scores and calculus scores (resp. linear algebra scores, probability and statistics scores) for Chinese university students. Based on rough-set theory, they established an information system S = (U, C ∪ D, V, f). In this information system, the higher mathematics score was taken as a decision attribute, and the calculus score, linear algebra score and probability statistics score were taken as condition attributes. They investigated the importance of each condition attribute with respect to the decision attribute and the strength of each condition attribute supporting the decision attribute. In this paper, we give further investigations of this issue. Based on the above information system S = (U, C ∪ D, V, f), we analyze the decision rules between condition and decision granules. For each x ∈ U, we obtain the support (resp. strength, certainty factor, coverage factor) of the decision rule C →x D, where C →x D is the decision rule induced by x in S = (U, C ∪ D, V, f). The results of this paper give a new analysis of higher mathematics scores for Chinese university students, which can further help Chinese university students to raise their higher mathematics scores in the Chinese graduate student entrance examination.
A Novel Estimation Method for Integer Frequency Offset in Wireless OFDM Systems
Ren et al. presented an efficient carrier frequency offset
(CFO) estimation method for orthogonal frequency division multiplexing
(OFDM), which has an estimation range as large as the
bandwidth of the OFDM signal and achieves high accuracy without
any constraint on the structure of the training sequence. However,
its detection probability of the integer frequency offset (IFO) rapidly
varies according to the fractional frequency offset (FFO) change. In
this paper, we first analyze Ren's method and define two criteria suitable for detection of the IFO. Then, we propose a novel method for IFO estimation based on the maximum-likelihood (ML) principle and the detection criteria defined in this paper. The simulation results demonstrate that the proposed method outperforms Ren's method in terms of the IFO detection probability, irrespective of the FFO value.
Virtual Gesture Screen System Based on 3D Visual Information and Multi-Layer Perceptron
Active research is underway on virtual touch screens
that complement the physical limitations of conventional touch
screens. This paper discusses a virtual touch screen that uses a
multi-layer perceptron to recognize and control three-dimensional
(3D) depth information from a time of flight (TOF) camera. This
system extracts an object's area from the image input and compares it
with the trajectory of the object, which is learned in advance, to
recognize gestures. The system enables the maneuvering of content in
virtual space by utilizing human actions.
Novel Schemes of Pilot-Aided Integer Frequency Offset Estimation for OFDM-Based DVB-T Systems
This paper proposes two novel schemes for pilot-aided
integer frequency offset (IFO) estimation in orthogonal frequency
division multiplexing (OFDM)-based digital video broadcasting-terrestrial (DVB-T) systems. The conventional scheme proposed for
estimating the IFO uses only partial information of combinations
that pilots can provide, which stems from a rigorous assumption
that the channel responses of pilots used for estimating the IFO
change very rapidly. Thus, in this paper, we propose the novel IFO
estimation schemes exploiting all information of combinations that
pilots can provide to improve the performance of IFO estimation.
The simulation results show that the proposed schemes are highly
accurate in terms of the IFO detection probability.
Effect of Curing Profile to Eliminate the Voids / Black Dots Formation in Underfill Epoxy for Hi-CTE Flip Chip Packaging
Void formation in underfill is considered a failure in the flip chip manufacturing process. Void formation is possibly caused by several factors, such as poor soldering and flux residue during the die attach process, void entrapment due to moisture contamination, the dispense pattern process and the setup of the curing process. This paper presents a comparison of single-step and two-step curing profiles with respect to void and black dot formation in underfill for the Hi-CTE Flip Chip Ceramic Ball Grid Array Package (FC-CBGA). Statistical analysis was conducted to analyze how different factors such as wafer lot, sawing technique, underfill fillet height and curing profile recipe affected the formation of voids and black dots. C-Mode Scanning Acoustic Microscopy (C-SAM) was used to scan the total count of voids and black dots. It was shown that the two-step curing profile eliminated voids and black dots in underfill after the curing process.
The Euler Equations of Steady Flow in Terms of New Dependent and Independent Variables
In this paper we study the transformation of the Euler equations ∂u/∂t + (u·∇)u = −(1/ρ)∇P + f, ∇·u = 0, where u(x, t) is the velocity of the fluid, P(x, t) is the pressure and ρ(x, t) is the density. First of all, we rewrite the Euler equations in terms of new unknown functions. Then, we introduce new independent variables and transform the system to a new curvilinear coordinate system. We obtain the Euler equations in the new dependent and independent variables. The governing equations split into two subsystems, one hyperbolic and the other elliptic.
Analysis of Feature Space for a 2d/3d Vision based Emotion Recognition Method
In modern human computer interaction systems
(HCI), emotion recognition is becoming an imperative characteristic.
The quest for effective and reliable emotion recognition in HCI has
resulted in a need for better face detection, feature extraction and
classification. In this paper we present results of feature space analysis
after briefly explaining our fully automatic vision based emotion
recognition method. We demonstrate the compactness of the feature
space and show how the 2d/3d based method achieves superior features
for the purpose of emotion classification. It is also shown that through feature normalization a widely person-independent feature space is created. As a consequence, the classifier architecture has
only a minor influence on the classification result. This is particularly
elucidated with the help of confusion matrices. For this purpose
advanced classification algorithms, such as Support Vector Machines
and Artificial Neural Networks are employed, as well as the simple k-
Nearest Neighbor classifier.
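Among the classifiers compared, the k-Nearest Neighbor rule is simple enough to sketch directly (a generic implementation, not the paper's feature pipeline; labels and features below are illustrative):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training
    samples under Euclidean distance.  `train` is a list of
    (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

With a compact, person-independent feature space, even this simple rule performs close to SVMs and neural networks, which is the point the confusion matrices make.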
Removal of Hydrogen Sulfide in Terms of Scrubbing Techniques using Silver Nano-Particles
Silver nano-particles have been used for antibacterial purposes, and they are also believed to remove odorous compounds and to have oxidation capacity as a metal catalyst. In this study, silver
nano-particles in nano sizes (5-30 nm) were prepared on the surface of
NaHCO3, the supporting material, using a sputtering method that
provided high silver content and minimized conglomerating problems
observed in the common AgNO3 photo-deposition method. The silver
nano-particles were dispersed by dissolving Ag-NaHCO3 into water,
and the dispersed silver nano-particles in the aqueous phase were
applied to remove inorganic odor compounds, H2S, in a scrubbing
reactor. Hydrogen sulfide in the gas phase was rapidly removed by the
silver nano-particles, and the concentration of sulfate (SO4²⁻) increased with time due to the oxidation reaction with silver as a catalyst. Consequently, the experimental results demonstrated that the silver nano-particles in aqueous solution can be successfully applied to remove odorous compounds without adding additional energy sources or producing any harmful byproducts.
A Robust Method for Hand Tracking Using Mean-shift Algorithm and Kalman Filter in Stereo Color Image Sequences
Real-time hand tracking is a challenging task in many
computer vision applications such as gesture recognition. This paper
proposes a robust method for hand tracking in a complex environment
using Mean-shift analysis and Kalman filter in conjunction with 3D
depth map. The depth information, which is obtained by passive stereo measurement based on cross correlation and the known calibration data of the cameras, solves the overlapping problem between the hands and the face. Mean-shift analysis uses the gradient of the Bhattacharyya
coefficient as a similarity function to derive the candidate of the hand
that is most similar to a given hand target model. And then, Kalman
filter is used to estimate the position of the hand target. The results
of hand tracking, tested on various video sequences, are robust to
changes in shape as well as partial occlusion.
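The Kalman-filter stage can be illustrated with a minimal one-dimensional predict/update loop (illustrative noise values, not the paper's tuned parameters, and a static motion model rather than the full 2D tracker):

```python
def kalman_1d(z_measurements, q=1e-3, r=0.5):
    """Minimal 1-D Kalman filter: predict with a static (random-walk)
    model, then correct with each measurement z.  q is the process
    noise variance, r the measurement noise variance."""
    x, p = 0.0, 1.0           # state estimate and its variance
    estimates = []
    for z in z_measurements:
        p += q                # predict: uncertainty grows
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update with the innovation
        p *= (1 - k)          # uncertainty shrinks after the update
        estimates.append(x)
    return estimates
```

In the tracker, the mean-shift result plays the role of the measurement z, and the filter smooths it into a stable hand position.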
A Study of Visual Attention in Diagnosing Cerebellar Tumours
Visual attention allows a user to select the information most relevant to ongoing behaviour. This paper presents a study on: i) the performance of people's measurements, ii) the accuracy of people's measurements of the peaks that correspond to chemical quantities in Magnetic Resonance Spectroscopy (MRS) graphs, and iii) the effect of people's measurements on algorithm-based diagnosis. Participants' eye movements were recorded using an eye-tracker tool (EyeLink II). The experiment involved three participants examining 20 MRS graphs to estimate the peaks of chemical quantities which indicate the abnormalities associated with Cerebellar Tumours (CT). The status of each MRS graph was verified using a decision algorithm. The analysis involves determination of the humans' eye movement pattern in measuring the peaks of spectrograms and their scan paths, and determining the relationship between the distributions of fixation durations and the accuracy of measurement.
In particular, the eye-tracking data revealed which aspects of the
spectrogram received more visual attention and in what order they
were viewed. This preliminary investigation provides a proof of
concept for use of the eye tracking technology as the basis for
expanded CT diagnosis.
Temperature Control of Industrial Water Cooler using Hot-gas Bypass
In this study, we experiment with precise control of the outlet temperature of water from a water cooler using the hot-gas bypass method based on PI control logic for machine tools. Recently, the technical trend for machine tools has focused on enhancement of speed and accuracy. High-speed processing causes thermal and structural deformation of objects in the machine tools. A water cooler with an accurate temperature control system has to be applied to machine tools to reduce the negative thermal influence. The goal of this study is to minimize the temperature error in steady state. In addition, the control period of an electronic expansion valve was considered, to increase the lifetime of the machine tools and the quality of the product with a water cooler.
Humanoid Personalized Avatar Through Multiple Natural Language Processing
There has been a growing interest in implementing humanoid avatars in networked virtual environments. However, most existing avatar communication systems do not take avatars' social backgrounds into consideration. This paper proposes a novel humanoid avatar animation system to represent personalities and facial emotions of avatars based on culture, profession, mood, age, taste, and so forth. We extract semantic keywords from the input text through natural language processing, and then the animations of personalized avatars are retrieved and displayed according to the order of the keywords. Our primary work is focused on giving avatars runtime instruction from multiple natural languages. Experiments with Chinese, Japanese and English input based on the prototype show that interactive avatar animations can be displayed in real time and be made available online. This system provides a more natural and interesting means of human communication, and therefore is expected to be used for cross-cultural communication, multiuser online games, and other entertainment applications.
Construct Pairwise Test Suites Based on the Bak-Sneppen Model of Biological Evolution
Pairwise testing, which requires that every combination of valid values of each pair of system factors be covered by at least one test case, plays an important role in software testing, since many faults are caused by unexpected 2-way interactions among system factors. Although meta-heuristic strategies like simulated annealing can generally discover smaller pairwise test suites, they may cost more time to perform the search, compared with greedy algorithms.
We propose a new method, an improved Extremal Optimization (EO) based on the Bak-Sneppen (BS) model of biological evolution, for constructing pairwise test suites, and we define a fitness function according to the requirements of the improved EO. Experimental results show that the improved EO gives a similar size of resulting pairwise test suite and yields an 85% reduction in solution time over SA.
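Pairwise coverage itself is easy to state in code; the sketch below checks which 2-way interactions a candidate suite still misses, a building block that any pairwise generator (EO-based search included) needs for its fitness evaluation:

```python
from itertools import combinations, product

def uncovered_pairs(suite, domains):
    """Return the 2-way interactions not yet covered by `suite`.
    `domains` lists the valid values of each factor; a test case is
    a tuple with one value per factor.  Each interaction is encoded
    as (factor_i, value_i, factor_j, value_j)."""
    required = {
        (i, vi, j, vj)
        for i, j in combinations(range(len(domains)), 2)
        for vi, vj in product(domains[i], domains[j])
    }
    covered = {
        (i, t[i], j, t[j])
        for t in suite
        for i, j in combinations(range(len(t)), 2)
    }
    return required - covered
```

A fitness function for the search can simply be the size of this set: a suite is complete exactly when it is empty.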
Dynamic Decompression for Text Files
Compression algorithms reduce the redundancy in
data representation to decrease the storage required for that data.
Lossless compression researchers have developed highly
sophisticated approaches, such as Huffman encoding, arithmetic
encoding, the Lempel-Ziv (LZ) family, Dynamic Markov
Compression (DMC), Prediction by Partial Matching (PPM), and
Burrows-Wheeler Transform (BWT) based algorithms.
Decompression is also required to retrieve the original data by lossless means. A compression scheme for text files is presented, coupled with the principle of dynamic decompression, which decompresses only the section of the compressed text file required by the user instead of decompressing the entire text file. Dynamically decompressed files offer
better disk space utilization due to higher compression ratios
compared to most of the currently available text file formats.
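The principle of dynamic decompression can be sketched by compressing a file as independent blocks, so that any single block is decompressible on its own (zlib is used purely for illustration; the paper's scheme is not zlib-based):

```python
import zlib

def compress_blocks(text, block_size=4096):
    """Compress `text` as independent zlib blocks so that any block
    can be decompressed without touching the others, which is the
    core idea behind dynamic decompression."""
    raw = text.encode("utf-8")
    return [zlib.compress(raw[i:i + block_size])
            for i in range(0, len(raw), block_size)]

def read_block(blocks, index):
    """Decompress just one block instead of the whole file."""
    return zlib.decompress(blocks[index]).decode("utf-8")
```

Independent blocks trade a little compression ratio (no cross-block redundancy) for the ability to serve an arbitrary section of the file without decompressing everything before it.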
[a, b]-Factors Excluding Some Specified Edges In Graphs
Let G be a graph of order n, and let a, b and m be positive integers with 1 ≤ a < b. An [a, b]-factor of G is defined as a spanning subgraph F of G such that a ≤ dF(x) ≤ b for each x ∈ V(G). In this paper, it is proved that if n ≥ ((a + b − 1 + √((a + b + 1)m − 2))² − 1)/b and δ(G) > n + a + b − 2√(bn + 1), then for any subgraph H of G with m edges, G has an [a, b]-factor F such that E(H) ∩ E(F) = ∅. This result is an extension of that of Egawa.
Position Vector of a Partially Null Curve Derived from a Vector Differential Equation
In this paper, position vector of a partially null unit speed curve with respect to standard frame of Minkowski space-time is studied. First, it is proven that position vector of every partially null unit speed curve satisfies a vector differential equation of fourth order. In terms of solution of the differential equation, position vector of a partially null unit speed curve is expressed.
The Slant Helices According to Bishop Frame
In this study, we have defined slant helix according to
Bishop frame in Euclidean 3-Space. Furthermore, we have given
some necessary and sufficient conditions for the slant helix.
On the Determination of a Time-like Dual Curve in Dual Lorentzian Space
In this work, the position vector of a time-like dual curve according to the standard frame of D³₁ is investigated. First, it is proven that the position vector of a time-like dual curve satisfies a dual vector differential equation of fourth order. The general solution of this dual vector differential equation has not yet been found. Due to this, in terms of special solutions, position vectors of some special time-like dual curves with respect to the standard frame of D³₁ are obtained.
On the Differential Geometry of the Curves in Minkowski Space-Time II
In the first part of this paper, a method to determine the Frenet apparatus of space-like curves in Minkowski space-time is presented. In this work, the mentioned method is developed for time-like curves in Minkowski space-time. Additionally, an example of the presented method is illustrated.
Exact Solutions of Steady Plane Flows of an Incompressible Fluid of Variable Viscosity Using (ξ, ψ)- Or (η, ψ)- Coordinates
The exact solutions of the equations describing the steady plane motion of an incompressible fluid of variable viscosity for an arbitrary state equation are determined in the (ξ, ψ)- or (η, ψ)-coordinates, where ψ(x, y) is the stream function, and ξ and η are the real and imaginary parts of the analytic function ϖ = ξ(x, y) + iη(x, y). Most of the solutions involve an arbitrary function or functions, indicating that the flow equations possess an infinite set of solutions.
Lagrangian Method for Solving Unsteady Gas Equation
In this paper we propose a Lagrangian method to solve the unsteady gas equation, which is a nonlinear ordinary differential equation on a semi-infinite interval. This approach is based on modified generalized Laguerre functions. This method reduces the solution of this problem to the solution of a system of algebraic equations. We also compare this work with some other numerical results. The findings show that the present solution is highly accurate.
On The Elliptic Divisibility Sequences over Finite Fields
In this work we study elliptic divisibility sequences over finite fields. Morgan Ward, in [11, 12], gave the arithmetic theory of elliptic divisibility sequences. We study elliptic divisibility sequences, equivalence of these sequences, and singular elliptic divisibility sequences over finite fields Fp, where p > 3 is a prime.
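For a concrete view of such sequences, the standard EDS recurrence (due to Ward) can be iterated modulo a prime, assuming no intermediate term vanishes mod p (the degenerate cases are exactly the singular sequences the paper studies):

```python
def eds_mod_p(w2, w3, w4, n, p):
    """First n terms (W1..Wn) of an elliptic divisibility sequence
    with W1 = 1, reduced mod an odd prime p, via the EDS recurrence
        W(k+4) = (W(k+3)W(k+1)W2^2 - W3 W(k+2)^2) / W(k).
    Assumes no intermediate W(k) is 0 mod p, so the division can be
    done with a modular inverse."""
    w = [None, 1 % p, w2 % p, w3 % p, w4 % p]   # 1-based indexing
    for k in range(1, n - 3):
        num = (w[k + 3] * w[k + 1] * w2 * w2 - w3 * w[k + 2] ** 2) % p
        w.append(num * pow(w[k], p - 2, p) % p)  # divide by W(k) mod p
    return w[1:n + 1]
```

For Ward's classical integer example (W2, W3, W4) = (1, −1, 1), the integer terms 1, 1, −1, 1, 2, −1, −3, −5, 7, −4 reappear reduced mod p.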
Model Reduction of Linear Systems by Conventional and Evolutionary Techniques
Reduction of Single Input Single Output (SISO) continuous systems into Reduced Order Model (ROM), using a conventional and an evolutionary technique is presented in this paper. In the conventional technique, the mixed advantages of Mihailov stability criterion and continued fraction expansions (CFE) technique is employed where the reduced denominator polynomial is derived using Mihailov stability criterion and the numerator is obtained by matching the quotients of the Cauer second form of Continued fraction expansions. In the evolutionary technique method Particle Swarm Optimization (PSO) is employed to reduce the higher order model. PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of original higher order model and the reduced order model pertaining to a unit step input. Both the methods are illustrated through numerical example.
A Method to Calculate Frenet Apparatus of W-Curves in the Euclidean 6-Space
In this work, a regular unit speed curve in six-dimensional Euclidean space, whose Frenet curvatures are constant,
is considered. Thereafter, a method to calculate Frenet apparatus of
this curve is presented.
The Number of Rational Points on Singular Curves y² = x(x − a)² over Finite Fields Fp
Let p ≥ 5 be a prime number and let Fp be a finite
field. In this work, we determine the number of rational points on
singular curves Ea : y2 = x(x - a)2 over Fp for some specific
values of a.
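For small parameters, the point counts in question can be checked by direct enumeration (a brute-force sketch, not the closed-form results of the paper; the projective point at infinity is not counted here):

```python
def affine_points(p, a):
    """Count affine rational points on the singular curve
    y^2 = x(x - a)^2 over F_p by direct enumeration."""
    squares = {(y * y) % p for y in range(p)}
    count = 0
    for x in range(p):
        rhs = (x * (x - a) ** 2) % p
        if rhs == 0:
            count += 1          # only y = 0
        elif rhs in squares:
            count += 2          # both y and -y
    return count
```

Such enumeration is a convenient sanity check against any formula derived for specific values of a.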
MaxMin Share Based Medium Access for Attaining Fairness and Channel Utilization in Mobile Adhoc Networks
Due to the complex network architecture, the multihop feature of mobile ad hoc networks gives additional problems to users. When the traffic load at each node increases, the additional contention due to its traffic pattern might cause the nodes close to the destination to starve the nodes farther from the destination; also, the capacity of the network is unable to satisfy the total user demand, which results in an unfairness problem. In this
paper, we propose to create an algorithm to compute the optimal
MAC-layer bandwidth assigned to each flow in the network. The
bottleneck links contention area determines the fair time share which
is necessary to calculate the maximum allowed transmission rate used
by each flow. To completely utilize the network resources, we
compute two optimal rates namely, the maximum fair share and
minimum fair share. We use the maximum fair share achieved in order to limit the input rate of those flows which cross the bottleneck links' contention area when the flows are not allocated the optimal transmission rate, and we calculate the next highest fair share. Through simulation results, we show that the proposed
protocol achieves improved fair share and throughput with reduced
Business Rules for Data Warehouse
Business rules and data warehouse are concepts and
technologies that impact a wide variety of organizational tasks. In
general, each area has evolved independently, impacting application
development and decision-making. Generating knowledge from data
warehouse is a complex process. This paper outlines an approach to
ease the import of information and knowledge from a data warehouse
star schema through an inference class of business rules. The paper
utilizes the Oracle database for illustrating the working of the
concepts. The star schema structure and the business rules are stored
within a relational database. The approach is explained through a
prototype in Oracle's PL/SQL Server Pages.
Particle Filter Applied to Noisy Synchronization in Polynomial Chaotic Maps
Polynomial maps offer analytical properties used to obtain better performance in the scope of chaos synchronization under noisy channels. This paper presents a new method to simplify the equations of the Exact Polynomial Kalman Filter (ExPKF) given in . This faster algorithm is compared to other estimators, showing that the performance of all considered observers vanishes rapidly with the channel noise, making application of chaos synchronization intractable. Simulation of the ExPKF shows that the saturation applied at the emitter to keep it stable badly impacts performance for low channel noise. We then propose a particle filter that outperforms all other Kalman-structured observers in the case of noisy channels.
Bandwidth Estimation Algorithms for the Dynamic Adaptation of Voice Codec
In recent years, multimedia traffic, and VoIP services in particular, have grown dramatically. We present a new algorithm to control resource utilization and to optimize voice codec selection during SIP call setup, based on the traffic conditions estimated on the network path.
The most suitable methodologies and tools for real-time evaluation of the available bandwidth on a network path have been integrated with our proposed algorithm, which selects the best codec for a VoIP call as a function of the instantaneous available bandwidth on the path. The algorithm does not require any explicit
feedback from the network, and this makes it easily deployable over
the Internet. We have also performed intensive tests on real network
scenarios with a software prototype, verifying the algorithm
efficiency with different network topologies and traffic patterns
between two SIP PBXs.
The promising results obtained during the experimental validation
of the algorithm are now the basis for the extension towards a larger
set of multimedia services and the integration of our methodology
with existing PBX appliances.
A Novel Method for Evaluating Parameters of Ongoing Calls in a Low Earth Orbit Mobile Satellite System
In order to derive important parameters concerning mobile subscribers (MSs) with ongoing calls in Low Earth Orbit Mobile Satellite Systems (LEO MSSs), a positioning system has to be integrated into the MSS to localize MSs and track them during the connection. Such integration is regarded as a drawback.
In this paper we propose a novel method, called the Evaluation Parameters Method (EPM), which exploits the mobility model of LEO MSSs and allows such systems to evaluate various parameters concerning an MS with a call in progress even if its location is unknown.
Coding based Synchronization Algorithm for Secondary Synchronization Channel in WCDMA
A new code synchronization algorithm is proposed in
this paper for the secondary cell-search stage in wideband CDMA
systems. Rather than using the Cyclically Permutable (CP) code in the
Secondary Synchronization Channel (S-SCH) to simultaneously
determine the frame boundary and scrambling code group, the new
synchronization algorithm implements the same function with less
system complexity and less Mean Acquisition Time (MAT). The
Secondary Synchronization Code (SSC) is redesigned by splitting into
two sub-sequences. We treat the scrambling code group information as data bits and apply simple time-diversity BCH coding for added reliability. This avoids complex and time-costly Reed-Solomon (RS) code computations and comparisons. Analysis and simulation results
code computations and comparisons. Analysis and simulation results
show that the Synchronization Error Rate (SER) yielded by the new
algorithm in Rayleigh fading channels is close to that of the
conventional algorithm in the standard. This new synchronization
algorithm reduces system complexities, shortens the average
cell-search time and can be implemented in the slot-based cell-search
pipeline. By exploiting antenna diversity and pipelining the correlation processes, the new algorithm is also readily applicable to multiple-antenna systems.
An Efficient Graph Query Algorithm Based on Important Vertices and Decision Features
Graph has become increasingly important in modeling
complicated structures and schemaless data such as proteins, chemical
compounds, and XML documents. Given a graph query, it is desirable
to retrieve graphs quickly from a large database via graph-based
indices. Unlike existing methods, our approach, called VFM (Vertex to Frequent Feature Mapping), uses vertices and decision features as the basic indexing features. VFM constructs
two mappings between vertices and frequent features to answer graph
queries. The VFM approach not only provides an elegant solution to
the graph indexing problem, but also demonstrates how database
indexing and query processing can benefit from data mining,
especially frequent pattern mining. The results show that the proposed method not only avoids enumerating the subgraphs of the query graph, but also effectively reduces the number of subgraph isomorphism tests between the query graph and the graphs in the candidate answer set.
Building the Reliability Prediction Model of Component-Based Software Architectures
Reliability is one of the most important quality attributes of software. Based on the approaches of Reussner and of Cheung, we propose a reliability prediction model for component-based software architectures. The value of the model is demonstrated through an experimental evaluation on a web server system.
Improvement of MLLR Speaker Adaptation Using a Novel Method
This paper presents a technical speaker adaptation
method called WMLLR, which is based on maximum likelihood linear
regression (MLLR). In MLLR, a linear regression-based transform that adapts the HMM mean vectors is calculated to maximize the likelihood of the adaptation data. In this paper, the prior knowledge of the
initial model is adequately incorporated into the adaptation. A series of speaker adaptation experiments is carried out on a database of 30 famous city names to investigate the efficiency of the proposed method.
Experimental results show that the WMLLR method outperforms the
conventional MLLR method, especially when only few utterances
from a new speaker are available for adaptation.
Generating Qualitative Causal Graph using Modeling Constructs of Qualitative Process Theory for Explaining Organic Chemistry Reactions
This paper discusses the causal explanation capability
of QRIOM, a tool aimed at supporting learning of organic chemistry
reactions. The development of the tool is based on the hybrid use of
Qualitative Reasoning (QR) technique and Qualitative Process
Theory (QPT) ontology. Our simulation combines symbolic,
qualitative description of relations with quantity analysis to generate
causal graphs. The pedagogy embedded in the simulator is to both
simulate and explain organic reactions. Qualitative reasoning through
a causal chain will be presented to explain the overall changes made
on the substrate; from initial substrate until the production of final
outputs. Several uses of the QPT modeling constructs in supporting
behavioral and causal explanation during run-time will also be
demonstrated. Explaining organic reactions through a causal graph trace can help improve learners' reasoning ability and nurture their conceptual understanding of the subject.
Formant Tracking Linear Prediction Model using HMMs for Noisy Speech Processing
This paper presents a formant-tracking linear prediction
(FTLP) model for speech processing in noise. The main focus of this
work is the detection of formant trajectory based on Hidden Markov
Models (HMM), for improved formant estimation in noise. The
approach proposed in this paper provides a systematic framework for modelling and utilizing a time sequence of spectral peaks that satisfies continuity constraints, with the peaks modelled by the LP parameters. The formant tracking LP model estimation
is composed of three stages: (1) a pre-cleaning multi-band spectral subtraction stage to reduce the effect of residual noise on formants; (2) an estimation stage, in which an initial estimate of the LP model of speech is obtained for each frame; and (3) formant classification using probability models of formants and Viterbi decoders. The evaluation
results for the estimation of the formant tracking LP model tested
in Gaussian white noise background, demonstrate that the proposed
combination of the initial noise reduction stage with formant tracking
and LPC variable order analysis, results in a significant reduction in
errors and distortions. The performance was evaluated on noisy natural vowels extracted from French and English vocabulary speech signals at an SNR of 10 dB. In each case, the estimated formants are compared to reference formants.
An Adaptive Hand-Talking System for the Hearing Impaired
An adaptive Chinese hand-talking system is presented
in this paper. By analyzing three data-collection strategies for new users, an adaptation framework comprising supervised and unsupervised adaptation methods is proposed. For supervised adaptation,
affinity propagation (AP) is used to extract exemplar subsets, and enhanced
maximum a posteriori / vector field smoothing (eMAP/VFS)
is proposed to pool the adaptation data among different models. For
unsupervised adaptation, polynomial segment models (PSMs) are
used to help hidden Markov models (HMMs) to accurately label
the unlabeled data; the "labeled" data together with signer-independent models are then input to the MAP algorithm to generate
signer-adapted models. Experimental results show that the proposed framework can perform both supervised adaptation with a small amount of labeled data and unsupervised adaptation with a large amount of unlabeled data to tailor the original models, and both achieve improvements in recognition rate.
Local Steerable Pyramid Binary Pattern Sequence LSPBPS for Face Recognition Method
In this paper the problem of face recognition under variable illumination conditions is considered. Most works in the literature exhibit good performance under strictly controlled acquisition conditions, but performance drops drastically when changes in pose and illumination occur, so a number of approaches have recently been proposed to deal with such variability. The aim of this work is to introduce an efficient local appearance feature extraction method based on the steerable pyramid (SP) for face recognition. Local information is extracted from SP sub-bands using LBP (Local Binary Pattern). The underlying statistics allow us to reduce the required amount of data to be stored. The experiments carried out on different face databases confirm the effectiveness of the proposed approach.
Reconstruction of the Most Energetic Modes in a Fully Developed Turbulent Channel Flow with Density Variation
Proper orthogonal decomposition (POD) is used to reconstruct spatio-temporal data of a fully developed turbulent channel flow with density variation at Reynolds number of 150, based on the friction velocity and the channel half-width, and Prandtl number of 0.71. To apply POD to the fully developed turbulent channel flow with density variation, the flow field (velocities, density, and temperature) is scaled by the corresponding root mean square values (rms) so that the flow field becomes dimensionless. A five-vector POD problem is solved numerically. The reconstructed second-order moments of velocity, temperature, and density from POD eigenfunctions compare favorably to the original Direct Numerical Simulation (DNS) data.
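The snapshot-POD procedure described above can be sketched with an SVD. The function names and the synthetic rank-two data in the usage below are illustrative, not the paper's DNS fields:

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Snapshot POD via the SVD. Rows of `snapshots` are flattened,
    rms-scaled flow fields at successive time instants."""
    mean = snapshots.mean(axis=0)
    X = snapshots - mean                     # fluctuations about the mean field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]                     # orthonormal spatial eigenfunctions
    energy = s[:n_modes] ** 2                # POD eigenvalues (modal energies)
    coeffs = U[:, :n_modes] * s[:n_modes]    # temporal coefficients
    return mean, modes, energy, coeffs

def pod_reconstruct(mean, modes, coeffs):
    """Low-order reconstruction from the leading modes."""
    return mean + coeffs @ modes
```

For data that truly live on a few modes, the reconstruction from those modes is exact; for turbulent fields, truncating to the most energetic modes gives the low-order model discussed in the abstract.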
Scale-Space Volume Descriptors for Automatic 3D Facial Feature Extraction
An automatic method for the extraction of feature points for face-based applications is proposed. The system is based upon volumetric feature descriptors, which in this paper have been extended to incorporate scale space. The method is robust to noise and has the ability to extract local and holistic features simultaneously from faces stored in a database. Extracted features are stable over a range of faces, with results indicating that in terms of intra-ID variability, the technique has the ability to outperform manual landmarking.
A Novel Compression Algorithm for Electrocardiogram Signals based on Wavelet Transform and SPIHT
An electrocardiogram (ECG) data compression algorithm is needed that reduces the amount of data to be transmitted, stored and analyzed without losing the clinical information content. A
wavelet ECG data codec based on the Set Partitioning In Hierarchical
Trees (SPIHT) compression algorithm is proposed in this paper. The
SPIHT algorithm has achieved notable success in still image coding.
We modified the algorithm for the one-dimensional (1-D) case and
applied it to compression of ECG data.
This compression method achieves a small percent root mean square difference (PRD) and a high compression ratio with low implementation complexity. Experiments on selected
records from the MIT-BIH arrhythmia database revealed that the
proposed codec is significantly more efficient in compression and in
computation than previously proposed ECG compression schemes.
Compression ratios of up to 48:1 for ECG signals lead to acceptable
results for visual inspection.
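SPIHT itself encodes significance trees progressively; the sketch below shows only the underlying energy-compaction idea (a 1-D Haar transform plus retention of the largest-magnitude coefficients) together with the PRD metric named above. All function names and the keep-count parameter are illustrative, not the paper's codec:

```python
import numpy as np

def haar_1d(x, levels):
    """Multi-level 1-D Haar transform (length divisible by 2**levels)."""
    x = x.astype(float).copy()
    n = len(x)
    for _ in range(levels):
        a = (x[:n:2] + x[1:n:2]) / np.sqrt(2)   # approximation coefficients
        d = (x[:n:2] - x[1:n:2]) / np.sqrt(2)   # detail coefficients
        x[:n // 2], x[n // 2:n] = a, d
        n //= 2
    return x

def ihaar_1d(c, levels):
    """Inverse of haar_1d."""
    c = c.copy()
    n = len(c) >> levels
    for _ in range(levels):
        a, d = c[:n], c[n:2 * n]
        out = np.empty(2 * n)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        c[:2 * n] = out
        n *= 2
    return c

def compress(x, keep, levels=4):
    """Zero all but the `keep` largest-magnitude wavelet coefficients --
    the energy-compaction principle that SPIHT encodes efficiently."""
    c = haar_1d(x, levels)
    c[np.argsort(np.abs(c))[:-keep]] = 0.0
    return c

def prd(x, y):
    """Percent root mean square difference."""
    return 100 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))
```

Keeping more coefficients monotonically lowers the PRD, since the transform is orthonormal and the error equals the energy of the discarded coefficients.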
Multi-stage Directional Median Filter
Median filters are widely used to remove impulse noise without blurring sharp edges. However, at high noise levels, or in the presence of thin edges, the median filter may work poorly. This paper
proposes a new filter, which detects edges along four possible directions and then replaces each noise-corrupted pixel with an estimated noise-free edge median value. Simulations show that the proposed
multi-stage directional median filter can provide excellent
performance of suppressing impulse noise in all situations.
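A minimal sketch of the directional idea (not the paper's exact multi-stage scheme): for each pixel, the median is taken along the least-varying of four directions, and only pixels that deviate strongly from that median are replaced. The detection threshold is an assumed parameter:

```python
import numpy as np

def directional_median_filter(img, threshold=40):
    """Replace impulse-like pixels with the median taken along the
    least-varying of four directions (H, V, and both diagonals)."""
    # Offsets of the two neighbours defining each direction
    directions = [((0, -1), (0, 1)),    # horizontal
                  ((-1, 0), (1, 0)),    # vertical
                  ((-1, -1), (1, 1)),   # main diagonal
                  ((-1, 1), (1, -1))]   # anti-diagonal
    out = img.astype(int).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best_med, best_spread = None, None
            for (dy1, dx1), (dy2, dx2) in directions:
                trio = [int(img[y + dy1, x + dx1]), int(img[y, x]),
                        int(img[y + dy2, x + dx2])]
                spread = max(trio) - min(trio)
                if best_spread is None or spread < best_spread:
                    best_spread, best_med = spread, sorted(trio)[1]
            # only replace pixels that look like impulses
            if abs(int(img[y, x]) - best_med) > threshold:
                out[y, x] = best_med
    return out.astype(img.dtype)
```

Taking the median along the smoothest direction is what lets thin edges survive, since the trio aligned with an edge has low spread and a noise-free median.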
Transform-Domain Rate-Distortion Optimization Accelerator for H.264/AVC Video Encoding
In H.264/AVC video encoding, rate-distortion
optimization for mode selection plays a significant role to achieve
outstanding performance in compression efficiency and video quality.
However, this mode selection process also makes the encoding
process extremely complex, especially in the computation of the rate-distortion cost function, which includes the computations of the sum
of squared difference (SSD) between the original and reconstructed
image blocks and context-based entropy coding of the block. In this
paper, a transform-domain rate-distortion optimization accelerator
based on fast SSD (FSSD) and VLC-based rate estimation algorithm
is proposed. This algorithm could significantly simplify the hardware
architecture for the rate-distortion cost computation with only
ignorable performance degradation. An efficient hardware structure
for implementing the proposed transform-domain rate-distortion
optimization accelerator is also proposed. Simulation results demonstrate that the proposed algorithm reduces total encoding time by about 47% with negligible degradation of coding performance.
The proposed method can be easily applied to many mobile video
application areas such as a digital camera and a DMB (Digital
Multimedia Broadcasting) phone.
Weight Functions for Signal Reconstruction Based On Level Crossings
Although the level crossing concept has been the subject of intensive investigation over the last few years, certain problems of great interest remain unsolved. One of these concerns the distribution of threshold levels. This paper presents new threshold level allocation schemes for level-crossing-based nonuniform sampling. Intuitively, it is more reasonable if the information-rich regions of the signal are sampled more finely and those with sparse information more coarsely. To achieve this objective, we propose non-linear quantization functions that dynamically assign the number of quantization levels depending on the importance of the given amplitude range. Two new approaches to determine the importance of a given amplitude segment are presented, based on exponential and logarithmic functions. Various aspects of the proposed techniques are discussed and experimentally validated, and their efficacy is investigated by comparison with uniform sampling.
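The two ingredients above, level-crossing sampling and a nonuniform level allocation, can be sketched as follows. The logarithmic mapping is illustrative, not the paper's exact importance function:

```python
import numpy as np

def level_crossing_sample(t, x, levels):
    """Return (time, level) pairs for every crossing of the given levels,
    with the crossing instant found by linear interpolation."""
    samples = []
    for lvl in levels:
        above = x >= lvl
        for i in np.flatnonzero(above[1:] != above[:-1]):
            # linear interpolation of the crossing instant
            tc = t[i] + (lvl - x[i]) * (t[i + 1] - t[i]) / (x[i + 1] - x[i])
            samples.append((tc, lvl))
    return sorted(samples)

def log_levels(n, amax):
    """Logarithmically spaced symmetric levels, denser toward full scale;
    one plausible realization of a logarithmic importance function."""
    mags = amax * (np.log1p(np.arange(1, n + 1)) / np.log1p(n))
    return np.concatenate([-mags[::-1], mags])
```

Because samples are produced only when the signal crosses a level, fast-varying regions generate many samples and quiet regions generate few, which is the signal-driven behaviour the abstract describes.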
Performance Analysis of Digital Signal Processors Using SMV Benchmark
Unlike general-purpose processors, digital signal
processors (DSP processors) are strongly application-dependent. To
meet the needs for diverse applications, a wide variety of DSP
processors based on different architectures ranging from the
traditional to VLIW have been introduced to the market over the
years. The functionality, performance, and cost of these processors
vary over a wide range. In order to select a processor that meets the
design criteria for an application, processor performance is usually
the major concern for digital signal processing (DSP) application
developers. Performance data are also essential for the designers of
DSP processors to improve their design. Consequently, several DSP
performance benchmarks have been proposed over the past decade or
so. However, none of these benchmarks appears to include recent new DSP applications.
In this paper, we use a new benchmark that we recently developed
to compare the performance of popular DSP processors from Texas
Instruments and StarCore. The new benchmark is based on the
Selectable Mode Vocoder (SMV), a speech-coding program from the
recent third generation (3G) wireless voice applications. All
benchmark kernels are compiled by the compilers of the respective
DSP processors and run on their simulators. Weighted arithmetic
mean of clock cycles and arithmetic mean of code size are used to
compare the performance of five DSP processors.
In addition, we studied how the performance of a processor is affected by code structure, features of the processor architecture and compiler optimization. The extensive experimental data gathered,
analyzed, and presented in this paper should be helpful for DSP
processor and compiler designers to meet their specific design goals.
Reversible Watermarking on Stereo Image Sequences
In this paper, a new reversible watermarking method is presented that reduces the size of a stereoscopic image sequence while keeping its content visible. The proposed technique embeds the residuals of the right frames into the corresponding frames of the left sequence, halving the total capacity. The residual frames may result from a disparity-compensated procedure between the two video streams or from a joint motion and disparity compensation. The residuals are usually lossily compressed before embedding because of the limited embedding capacity of the left frames. The watermarked frames remain visible at high quality, and at any instant the stereoscopic video may be recovered by an inverse process. In fact, the left frames may be exactly recovered, whereas the right ones are slightly distorted, as the residuals are not embedded intact. The employed embedding method reorders each left frame into an array of consecutive pixel pairs and embeds a number of bits according to their intensity difference. In this way, it hides few bits in smooth-intensity areas and most of the data in textured areas, where the resulting distortions are less visible. The experimental evaluation demonstrates that the proposed scheme is quite effective.
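The pixel-pair embedding step can be illustrated with classical difference expansion (Tian's scheme), which matches the description of embedding bits according to the intensity difference of consecutive pixel pairs; overflow handling and the disparity-compensation stage are omitted, so this is a sketch of the principle rather than the paper's method:

```python
def embed_pair(a, b, bit):
    """Difference expansion on one pixel pair: the difference is doubled
    and the payload bit appended; the pair average is preserved, so the
    original pair can be exactly restored."""
    l = (a + b) // 2        # integer average (invariant)
    d = a - b               # difference
    d2 = 2 * d + bit        # expanded difference carrying the bit
    return l + (d2 + 1) // 2, l - d2 // 2

def extract_pair(a2, b2):
    """Recover the original pair and the embedded bit."""
    d2 = a2 - b2
    bit = d2 & 1
    d = d2 >> 1             # floor division by 2 undoes the expansion
    l = (a2 + b2) // 2
    return l + (d + 1) // 2, l - d // 2, bit
```

The reversibility rests on the identity `floor((d+1)/2) + floor(d/2) = d` for any integer difference, so both the average and the difference survive the round trip exactly.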
Signal Driven Sampling and Filtering: A Promising Approach for Time-Varying Signal Processing
Mobile systems are powered by batteries, and reducing system power consumption is key to increasing their autonomy. Most systems deal with time-varying signals. We therefore aim to achieve power efficiency by smartly adapting the system's processing activity to the input signal's local characteristics. This is done by completely rethinking the
processing chain, by adopting signal driven sampling and processing.
In this context, a signal driven filtering technique, based on the level
crossing sampling is devised. It adapts the sampling frequency and
the filter order by analysing the input signal's local variations, thus correlating the processing activity with the signal variations. This leads to a drastic computational gain for the proposed technique compared to the classical one.
A Signal Driven Adaptive Resolution Short-Time Fourier Transform
The frequency contents of the non-stationary
signals vary with time. For proper characterization of such
signals, a smart time-frequency representation is necessary.
Classically, the STFT (short-time Fourier transform) is employed for this purpose. Its limitation is its fixed time-frequency resolution. To overcome this drawback, an enhanced
STFT version is devised. It is based on the signal driven
sampling scheme, named cross-level sampling.
It can adapt the sampling frequency and the window function
(length plus shape) by following the input signal local
variations. This adaptation gives the proposed technique its appealing features: adaptive time-frequency resolution and computational efficiency.
A High Quality Speech Coder at 600 bps
This paper presents a vocoder that obtains high-quality synthetic speech at 600 bps. To reduce the bit rate, the algorithm is based on a sinusoidally excited linear prediction model which extracts few coding parameters; three consecutive frames are grouped into a superframe and joint vector quantization is used to obtain high coding efficiency. The inter-frame redundancy is exploited with distinct quantization schemes for different unvoiced/voiced frame combinations in the superframe. Experimental results show that the quality of the proposed coder is better than that of 2.4 kbps LPC10e and approximately the same as that of 2.4 kbps MELP, with high robustness.
Robust Statistics Based Algorithm to Remove Salt and Pepper Noise in Images
In this paper, a robust statistics based filter to remove salt and pepper noise in digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image while the remaining pixels are uncorrupted. The corrupted pixels are then replaced by an estimated value using the proposed robust statistics based filter. The proposed method performs well in removing low to medium density impulse noise with detail preservation up to a noise density of 70%, compared with the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal dependent rank ordered mean filter, adaptive median filter and a recently proposed decision based algorithm. The visual and quantitative results show that the proposed algorithm outperforms these methods in restoring the original image, with superior preservation of edges and better suppression of impulse noise.
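A minimal sketch of the detect-then-filter idea, assuming impulses take the extreme values 0 and 255 and using the median of the uncorrupted neighbours as the robust estimate (the paper's exact robust statistic may differ):

```python
import numpy as np

def remove_salt_pepper(img):
    """Pixels at the grey-level extremes are treated as impulses and
    replaced by the median of the uncorrupted pixels in their 3x3
    neighbourhood; clean pixels pass through untouched."""
    out = img.copy()
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] in (0, 255):               # candidate impulse
                window = padded[y:y + 3, x:x + 3].ravel()
                clean = window[(window != 0) & (window != 255)]
                if clean.size:                      # robust estimate from clean pixels
                    out[y, x] = np.median(clean)
    return out
```

Restricting the median to uncorrupted neighbours is what preserves detail at high noise densities, since corrupted neighbours never contaminate the estimate.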
Decimation Filter Design Toolbox for Multi-Standard Wireless Transceivers using MATLAB
The demand for new telecommunication services requiring higher capacities, data rates and different operating modes have motivated the development of new generation multi-standard wireless transceivers. A multi-standard design often involves extensive system level analysis and architectural partitioning, typically requiring extensive calculations. In this research, a decimation filter design tool for wireless communication standards consisting of GSM, WCDMA, WLANa, WLANb, WLANg and WiMAX is developed in MATLAB® using GUIDE environment for visual analysis. The user can select a required wireless communication standard, and obtain the corresponding multistage decimation filter implementation using this toolbox. The toolbox helps the user or design engineer to perform a quick design and analysis of decimation filter for multiple standards without doing extensive calculation of the underlying methods.
Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain
In this paper, an image adaptive, invisible digital
watermarking algorithm with Orthogonal Polynomials based
Transformation (OPT) is proposed, for copyright protection of digital
images. The proposed algorithm utilizes a visual model to determine
the watermarking strength necessary to invisibly embed the
watermark in the mid frequency AC coefficients of the cover image,
chosen with a secret key. The visual model is designed to generate a
Just Noticeable Distortion mask (JND) by analyzing the low level
image characteristics such as textures, edges and luminance of the
cover image in the orthogonal polynomials based transformation
domain. Since the secret key is required for both embedding and
extraction of watermark, it is not possible for an unauthorized user to
extract the embedded watermark. The proposed scheme is robust to common image processing distortions such as filtering, JPEG compression and additive noise. Experimental results show that the quality of OPT-domain watermarked images is better than that of their DCT-domain counterparts.
Estimating Frequency, Amplitude and Phase of Two Sinusoids with Very Close Frequencies
This paper presents an algorithm to estimate the parameters of two closely spaced sinusoids, providing a frequency resolution that is more than 800 times greater than that obtained by using the Discrete Fourier Transform (DFT). The strategy uses a highly optimized grid search approach to accurately estimate frequency, amplitude and phase of both sinusoids, keeping at the same time the computational effort at reasonable levels. The proposed method has three main characteristics: 1) a high frequency resolution; 2) frequency, amplitude and phase are all estimated at once using one single package; 3) it does not rely on any statistical assumption or constraint. Potential applications to this strategy include the difficult task of resolving coincident partials of instruments in musical signals.
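The grid-search idea can be sketched by brute force: for every candidate frequency pair, amplitudes and phases follow from a linear least-squares fit on a cosine/sine basis, and the pair with the smallest residual wins. This is far less optimized than the paper's search strategy, but it already resolves two tones spaced closer than a DFT bin:

```python
import numpy as np

def two_sine_grid_search(x, fs, fgrid):
    """Exhaustive search over frequency pairs; amplitude and phase of each
    candidate pair are absorbed into a linear least-squares fit."""
    n = np.arange(len(x)) / fs
    best_resid, best_pair = np.inf, None
    for i, f1 in enumerate(fgrid):
        for f2 in fgrid[i + 1:]:
            # cos/sin basis makes amplitude and phase linear parameters
            A = np.column_stack([np.cos(2*np.pi*f1*n), np.sin(2*np.pi*f1*n),
                                 np.cos(2*np.pi*f2*n), np.sin(2*np.pi*f2*n)])
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)
            resid = float(np.sum((x - A @ coef) ** 2))
            if resid < best_resid:
                best_resid, best_pair = resid, (f1, f2)
    return best_pair
```

In the usage below the DFT resolution is fs/N = 0.5 Hz, yet tones 0.4 Hz apart are separated, illustrating the super-resolution claim on a small scale.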
Efficient Block Matching Algorithm for Motion Estimation
Motion estimation is a key problem in video
processing and computer vision. Optical flow motion estimation can achieve high accuracy when the motion vectors are small. The three-step search algorithm can handle large motion vectors but is not very accurate. A joint algorithm is proposed in this paper to achieve high estimation accuracy regardless of whether the motion vector is small or large, while keeping the computational cost much lower than that of full search.
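The classical three-step search mentioned above can be sketched as follows (SAD block matching with the step size halved each round):

```python
import numpy as np

def three_step_search(ref, cur, by, bx, bsize=8, step=4):
    """Find the motion vector of the block of `cur` at (by, bx) within
    `ref` by probing 8 neighbours plus the centre, halving the step."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    h, w = ref.shape

    def sad(y, x):
        # sum of absolute differences; out-of-frame candidates rejected
        if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
            return np.inf
        return int(np.abs(ref[y:y + bsize, x:x + bsize].astype(int) - block).sum())

    cy, cx = by, bx
    best = sad(cy, cx)
    while step >= 1:
        best_pos = (cy, cx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                cost = sad(cy + dy, cx + dx)
                if cost < best:
                    best, best_pos = cost, (cy + dy, cx + dx)
        cy, cx = best_pos
        step //= 2
    return cy - by, cx - bx
```

With steps 4, 2, 1 the search covers displacements up to +/-7 with only about 25 SAD evaluations instead of the 225 a full search would need, which is the cost/accuracy trade-off the abstract refers to.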
Dempster-Shafer Evidence Theory for Image Segmentation: Application in Cells Images
In this paper we propose a new knowledge model using Dempster-Shafer evidence theory for image segmentation and
fusion. The proposed method is composed essentially of two steps.
First, mass distributions in Dempster-Shafer theory are obtained from the membership degrees of each pixel over the three image components (R, G and B). Each membership degree is determined by applying Fuzzy C-Means (FCM) clustering to the gray levels of the
three images. Second, the fusion process consists in defining three
discernment frames which are associated with the three images to be
fused, and then combining them to form a new frame of discernment.
The strategy used to define mass distributions in the combined
framework is discussed in detail. The proposed fusion method is
illustrated in the context of image segmentation. Experimental
investigations and comparative studies with previous methods are carried out, showing the robustness and superiority of the proposed method in terms of image segmentation.
Union is Strength in Lossy Image Compression
In this work, we present a comparison between different techniques of image compression. First, the image is divided into blocks, which are organized according to a certain scan. Then several compression techniques are applied, combined or alone; such techniques include wavelets (the Haar basis) and the Karhunen-Loève Transform. Simulations show that the combined versions are the best, with lower Mean Squared Error (MSE), higher Peak Signal-to-Noise Ratio (PSNR) and better image quality.
Distributed Estimation Using an Improved Incremental Distributed LMS Algorithm
In this paper we consider the problem of distributed adaptive estimation in wireless sensor networks under two different observation noise conditions. In the first case, we assume that some sensors in the network have high observation noise variance (noisy sensors). In the second case, the observation noise variance is assumed to differ among the sensors, which is closer to a real scenario. In both cases, an initial estimate of each sensor's observation noise is obtained. For the first case, we show that when such sensors are present, the performance of conventional distributed adaptive estimation algorithms, such as the incremental distributed least mean square (IDLMS) algorithm, drastically decreases, and that detecting and ignoring these sensors leads to better estimation performance. We then propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter is adjusted for each sensor according to its observation noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
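A minimal sketch of incremental distributed LMS with a per-sensor step size, which is where a noise-dependent adjustment like the one proposed would plug in; the ring schedule and data layout are assumptions for illustration:

```python
import numpy as np

def idlms(U, d, mu, n_iter=300):
    """Incremental distributed LMS over a ring of sensors.
    U: (n_sensors, T, M) regressors, d: (n_sensors, T) measurements,
    mu: scalar or per-sensor step sizes (a noise-aware variant would set
    mu[k] from sensor k's estimated observation noise variance)."""
    n_sensors, T, M = U.shape
    mu = np.broadcast_to(np.asarray(mu, dtype=float), (n_sensors,))
    w = np.zeros(M)
    for it in range(n_iter):
        for k in range(n_sensors):          # estimate circulates around the ring
            u = U[k, it % T]
            e = d[k, it % T] - u @ w        # local prediction error
            w = w + mu[k] * e * u           # local LMS update
    return w
```

Setting mu[k] small (or zero) for a sensor with large estimated noise variance reproduces the "detect and ignore" behaviour described in the abstract within the same update loop.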
From Maskee to Audible Noise in Perceptual Speech Enhancement
A new analysis of perceptual speech enhancement is
presented. It focuses on the fact that if only noise above the masking
threshold is filtered, then noise below the masking threshold, but
above the absolute threshold of hearing, can become audible after the
masker filtering. This particular drawback of some perceptual filters,
hereafter called the maskee-to-audible-noise (MAN) phenomenon,
favours the emergence of isolated tonals that increase musical noise.
Two filtering techniques that avoid or correct the MAN phenomenon
are proposed to effectively suppress background noise without introducing
much distortion. Experimental results, including objective
and subjective measurements, show that these techniques improve
the enhanced speech quality and the gain they bring emphasizes the
importance of the MAN phenomenon.
A Complexity-Based Approach in Image Compression using Neural Networks
In this paper we present an adaptive method for image compression that is based on the complexity level of the image. The basic compressor/de-compressor structure of this method is a multi-layer perceptron artificial neural network. In the adaptive approach, different back-propagation artificial neural networks are used as compressor and de-compressor; this is done by dividing the image into blocks, computing the complexity of each block and then selecting one network for each block according to its complexity value. Three complexity measures, called Entropy, Activity and Pattern-based, are used to determine the level of complexity of image blocks, and their complexity-estimation abilities are evaluated
and compared. In training and evaluation, each image block is
assigned to a network based on its complexity value. Best-SNR is another alternative for selecting the compressor network for image blocks in the evaluation phase: it chooses the trained network that yields the best SNR in compressing the input image block. In our evaluations, the best results are obtained when overlapping blocks are allowed and the compressor network is chosen based on Best-SNR. In this case, the results demonstrate the superiority of this method compared with previous similar works and the JPEG standard.
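The entropy complexity measure, one of the three mentioned, can be sketched as follows; the routing thresholds are illustrative, not the paper's values:

```python
import numpy as np

def block_entropy(block, bins=256):
    """Shannon entropy of a block's grey-level histogram -- one of the
    three complexity measures used to route blocks to networks."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / block.size
    return -np.sum(p * np.log2(p))

def assign_network(block, thresholds=(2.0, 5.0)):
    """Route a block to the low/medium/high-complexity compressor
    network (thresholds chosen for illustration only)."""
    return int(np.searchsorted(thresholds, block_entropy(block)))
```

A flat block has zero entropy and goes to the simplest network, while a highly textured block is routed to the network trained on complex content.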
Modulation Identification Algorithm for Adaptive Demodulator in Software Defined Radios Using Wavelet Transform
A generalized digital modulation identification algorithm for an adaptive demodulator is developed and presented in this paper. The algorithm is verified using the wavelet transform and histogram computation to identify QPSK and QAM along with GMSK and M-ary FSK modulations. It has been found that the histogram peaks simplify the identification procedure. The simulated results show that correct modulation identification is possible down to 5 dB and 12 dB for GMSK and QPSK respectively. When the SNR is above 5 dB, the throughput of the proposed algorithm is more than 97.8%. The receiver operating characteristic (ROC) has been computed to measure the performance of the proposed algorithm, and the analysis shows that the probability of detection (Pd) drops rapidly when the SNR is 5 dB while the probability of false alarm (Pf) remains smaller than 0.3. The performance of the proposed algorithm has been compared with existing methods and found to identify all digital modulation schemes at low SNR.
Codebook Generation for Vector Quantization on Orthogonal Polynomials based Transform Coding
In this paper, a new codebook generation algorithm is proposed for vector quantization (VQ) in image coding. The significant features of the training image vectors are extracted by using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. The binary tree codebook is used for encoding and decoding the feature vectors. In the decoding process, the feature vectors are subjected to the inverse transformation with the help of the basis functions of the proposed Orthogonal Polynomials based transformation to get back the approximated input image training vectors. The results of the proposed coding are compared with VQ using the Discrete Cosine Transform (DCT) and the Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm results in a considerable reduction in computation time and provides better reconstructed picture quality.
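The binary tree codebook search described above can be sketched as follows. This is an illustration of the tree-descent idea only, with a hypothetical two-level tree over 2-D feature vectors; the node fields, feature extraction and tree construction used in the paper are not reproduced here.

```python
# Illustrative sketch (not the authors' implementation): encoding a feature
# vector with a binary-tree codebook in which each internal node compares a
# single feature component against a threshold.

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None,
                 codeword=None, index=None):
        self.feature = feature      # which component to test (internal nodes)
        self.threshold = threshold  # decision threshold (internal nodes)
        self.left = left            # child for feature <= threshold
        self.right = right          # child for feature > threshold
        self.codeword = codeword    # representative vector (leaf nodes)
        self.index = index          # codebook index transmitted to the decoder

def encode(root, vec):
    """Walk the tree from the root to a leaf and return the leaf's index."""
    node = root
    while node.codeword is None:
        node = node.left if vec[node.feature] <= node.threshold else node.right
    return node.index, node.codeword

# Tiny hypothetical two-level tree over 2-D feature vectors.
leaf0 = Node(codeword=[0.0, 0.0], index=0)
leaf1 = Node(codeword=[1.0, 1.0], index=1)
root = Node(feature=0, threshold=0.5, left=leaf0, right=leaf1)

idx, cw = encode(root, [0.9, 0.2])
```

Only the leaf index needs to be transmitted; the decoder holds the same tree and looks up the codeword directly.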
Design of Multiplier-free State-Space Digital Filters
In this paper, a novel approach is presented
for designing multiplier-free state-space digital filters. The
multiplier-free design is obtained by finding power-of-2 coefficients
and also quantizing the state variables to power-of-2
numbers. Expressions for the noise variance are derived for the
quantized state vector and the output of the filter. A “structure-transformation
matrix" is incorporated in these expressions. It
is shown that quantization effects can be minimized by properly
designing the structure-transformation matrix. Simulation
results are very promising and illustrate the design algorithm.
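The power-of-2 coefficient idea above can be sketched as follows; the nearest-power-of-two rounding rule is our illustrative choice, not the paper's design procedure. Multiplication by such a coefficient reduces to an arithmetic shift in hardware.

```python
# Sketch: quantize a filter coefficient to the nearest signed power of two
# (illustrative rounding rule, not the paper's design algorithm).
import math

def quantize_pow2(x):
    """Return the signed power of two nearest to x; zero maps to zero."""
    if x == 0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    e = math.floor(math.log2(abs(x)))
    lo, hi = 2.0 ** e, 2.0 ** (e + 1)   # the two bracketing powers of two
    return sign * (lo if abs(x) - lo <= hi - abs(x) else hi)
```

For example, a coefficient 0.3 is replaced by 0.25 = 2^-2, so the multiplication becomes a right shift by two.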
Using HMM-based Classifier Adapted to Background Noises with Improved Sounds Features for Audio Surveillance Application
Discrimination between different classes of environmental
sounds is the goal of our work. The use of a sound recognition
system can offer concrete potentialities for surveillance and
security applications. The first contribution of this paper to the research
field is a thorough investigation of the applicability
of state-of-the-art audio features in the domain of environmental
sound recognition. Additionally, a set of novel features obtained by
combining the basic parameters is introduced. The quality of the
investigated features is evaluated with an HMM-based classifier, to which
particular attention is devoted. In fact, we propose to use a Multi-Style
training system based on HMMs: one recognizer is trained on a
database including different levels of background noises and is used
as a universal recognizer for every environment. In order to enhance
the system robustness by reducing the environmental variability, we
explore different adaptation algorithms including Maximum Likelihood
Linear Regression (MLLR), Maximum A Posteriori (MAP)
and the MAP/MLLR algorithm that combines MAP and MLLR.
Experimental evaluation shows that a rather good recognition rate
can be reached, even under severe noise degradation, when the
system is fed with the appropriate set of features.
Using Teager Energy Cepstrum and HMM Distances in Automatic Speech Recognition and Analysis of Unvoiced Speech
In this study, the use of silicon NAM (Non-Audible
Murmur) microphone in automatic speech recognition is presented.
NAM microphones are special acoustic sensors, which are attached
behind the talker's ear and can capture not only normal (audible)
speech, but also very quietly uttered speech (non-audible murmur).
As a result, NAM microphones can be applied in automatic speech
recognition systems when privacy is desired in human-machine communication.
Moreover, NAM microphones show robustness against
noise and they might be used in special systems (speech recognition,
speech conversion etc.) for sound-impaired people. Using a small
amount of training data and adaptation approaches, 93.9% word
accuracy was achieved for a 20k Japanese vocabulary dictation
task. Non-audible murmur recognition in noisy environments is also
investigated. In this study, further analysis of the NAM speech has
been made using distance measures between hidden Markov model
(HMM) pairs. Using a metric distance, it has been shown that the
spectral space of NAM speech is reduced; however, the locations of
the different phonemes of NAM are similar to those of the phonemes
of normal speech, and the NAM sounds are well discriminated.
Promising results in using nonlinear features are also introduced,
especially under noisy conditions.
Denoising and Compression in Wavelet Domain via Projection onto Approximation Coefficients
We describe a new filtering approach in the wavelet domain for image denoising and compression, based on the projection of the detail subband coefficients (the result of the splitting procedure typical of the wavelet domain) onto the approximation subband coefficients (which are much less noisy). The new algorithm is called Projection Onto Approximation Coefficients (POAC). As a result of this approach, only the approximation subband coefficients and three scalars are stored and/or transmitted to the channel. Besides, with the elimination of the detail subband coefficients, we obtain a higher compression rate. Experimental results demonstrate that our approach compares favorably to more typical methods of denoising and compression in the wavelet domain.
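The projection idea can be sketched as follows; this is our reading of the abstract, not the authors' code, and the 1-D example data are hypothetical. Each detail subband is replaced by a single scalar, its least-squares projection coefficient onto the approximation subband, so only the approximation coefficients and three scalars (one per detail subband) need to be stored.

```python
# Sketch of projecting a detail subband onto the approximation subband (POAC
# idea as we read it); each discarded detail band is summarized by one scalar.

def projection_scalar(detail, approx):
    """Least-squares coefficient a minimizing ||detail - a*approx||^2."""
    num = sum(d * c for d, c in zip(detail, approx))
    den = sum(c * c for c in approx)
    return num / den if den else 0.0

def reconstruct(approx, scalar):
    """Approximate the discarded detail subband from the stored scalar."""
    return [scalar * c for c in approx]

# Hypothetical 1-D example: one approximation band and one detail band.
approx = [2.0, 4.0, 6.0, 8.0]
detail = [1.0, 2.0, 3.0, 4.0]
a = projection_scalar(detail, approx)
approx_detail = reconstruct(approx, a)
```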
Improved Text-Independent Speaker Identification using Fused MFCC and IMFCC Feature Sets based on Gaussian Filter
A state of the art Speaker Identification (SI) system
requires a robust feature extraction unit followed by a speaker
modeling scheme for generalized representation of these features.
Over the years, Mel-Frequency Cepstral Coefficients (MFCC),
modeled on the human auditory system, have been used as a standard
acoustic feature set for speech related applications. In a recent
contribution by the authors, it has been shown that Inverted Mel-
Frequency Cepstral Coefficients (IMFCC) form a useful feature set for
SI, as they contain complementary information present in the high
frequency region. This paper introduces the Gaussian shaped filter
(GF) while calculating MFCC and IMFCC in place of typical
triangular shaped bins. The objective is to introduce a higher
amount of correlation between subband outputs. The performances
of both MFCC & IMFCC improve with GF over conventional
triangular filter (TF) based implementation, individually as well as
in combination. With GMM as speaker modeling paradigm, the
performances of proposed GF based MFCC and IMFCC in
individual and fused modes have been verified on two standard
databases, YOHO (microphone speech) and POLYCOST
(telephone speech), each of which has more than 130 speakers.
A Scheme of Model Verification of the Concurrent Discrete Wavelet Transform (DWT) for Image Compression
The scientific community has invested a great deal of effort in the field of the discrete wavelet transform in the last few decades. The discrete wavelet transform (DWT) associated with vector quantization has been proved to be a very useful tool for image compression. However, the DWT is a very computationally intensive process, requiring an innovative and computationally efficient method to obtain the image compression. Concurrent transformation of the image can be an important solution to this problem. This paper proposes a model of concurrent DWT for image compression. Additionally, formal verification of the model has also been performed, with the Symbolic Model Verifier (SMV) used as the formal verification tool. The system has been modeled in SMV and some properties have been verified formally.
Bounds on the Second Stage Spectral Radius of Graphs
Let G be a graph of order n. The second stage adjacency matrix of G is the symmetric n × n matrix whose ijth entry is 1 if the vertices vi and vj are at distance two, and 0 otherwise. The sum of the absolute values of the eigenvalues of the second stage adjacency matrix is called the second stage energy of G. In this paper we investigate a few properties of this matrix and determine some upper bounds for its largest eigenvalue.
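The definition can be made concrete with a small sketch: build the second stage adjacency matrix (entry 1 exactly when two distinct, non-adjacent vertices are joined by a walk of length two) and compute its spectral radius numerically. The path graph P4 below is a hypothetical example, not taken from the paper.

```python
# Sketch: second stage adjacency matrix and its spectral radius.
import numpy as np

def second_stage_adjacency(adj):
    a = np.array(adj)
    a2 = a @ a                        # a2[i][j] > 0: a walk of length two exists
    s = ((a2 > 0) & (a == 0)).astype(int)
    np.fill_diagonal(s, 0)            # a vertex is at distance 0 from itself
    return s

def spectral_radius(m):
    return max(abs(np.linalg.eigvals(m)))

# Path graph P4: 0-1-2-3; the pairs at distance two are (0,2) and (1,3).
p4 = [[0, 1, 0, 0],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [0, 0, 1, 0]]
s = second_stage_adjacency(p4)
```

For P4 the second stage graph consists of the two disjoint edges {0,2} and {1,3}, so its spectral radius is 1.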
A New Heuristic Approach for the Large-Scale Generalized Assignment Problem
This paper presents a heuristic approach to solve the Generalized Assignment Problem (GAP), which is NP-hard. It is worth mentioning that much research has been devoted to developing algorithms for identifying the redundant constraints and variables in linear programming models. Some of these algorithms use the intercept matrix of the constraints to identify redundant constraints and variables prior to the start of the solution process. Here, a new heuristic approach based on the dominance property of the intercept matrix is proposed to find optimal or near-optimal solutions of the GAP. In this heuristic, redundant variables of the GAP are identified by applying the dominance property of the intercept matrix repeatedly. The heuristic approach is tested on 90 benchmark problems of sizes up to 4000, taken from the OR-Library, and the results are compared with optimum solutions. The computational complexity of solving the GAP using this approach is proved to be O(mn²). The performance of our heuristic is compared with the best state-of-the-art heuristic algorithms with respect to the quality of the solutions. The encouraging results, especially for relatively large test problems, indicate that this heuristic approach can successfully be used for finding good solutions for highly constrained NP-hard problems.
Approximating Maximum Weighted Independent Set Using Vertex Support
The Maximum Weighted Independent Set (MWIS)
problem is a classic graph optimization NP-hard problem. Given an
undirected graph G = (V, E) and a weighting function defined on the
vertex set, the MWIS problem is to find a vertex set S ⊆ V whose total
weight is maximum, subject to the constraint that no two vertices in S are adjacent. This
paper presents a novel approach to approximate the MWIS of a graph
using minimum weighted vertex cover of the graph. Computational
experiments are designed and conducted to study the performance
of our proposed algorithm. Extensive simulation results show that
the proposed algorithm can yield better solutions than other existing
algorithms found in the literature for solving the MWIS problem.
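The connection the abstract relies on — the complement of a vertex cover is an independent set — can be sketched as follows. The cover routine below is the classic local-ratio 2-approximation for weighted vertex cover, which is our stand-in for the paper's minimum weighted vertex cover method, not the authors' algorithm.

```python
# Sketch: approximate MWIS as the complement of an approximate weighted
# vertex cover (local-ratio 2-approximation used here as an illustrative
# stand-in for the paper's cover method).

def approx_mwis(n, edges, weight):
    residual = list(weight)
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:      # edge already covered
            continue
        delta = min(residual[u], residual[v])
        residual[u] -= delta              # local-ratio weight reduction
        residual[v] -= delta
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    # Every edge has an endpoint in the cover, so the complement is independent.
    independent = [v for v in range(n) if v not in cover]
    return independent, sum(weight[v] for v in independent)

# Hypothetical example: path 0-1-2 with weights 3, 1, 3.
independent, total = approx_mwis(3, [(0, 1), (1, 2)], [3, 1, 3])
```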
The Orlicz Space of the Entire Sequence Fuzzy Numbers Defined by Infinite Matrices
This paper is devoted to the study of the general properties of the Orlicz space of entire sequences of fuzzy numbers, by using infinite matrices.
A New Class χ2 (M, A, Δ) of the Double Difference Sequences of Fuzzy Numbers
The aim of this paper is to introduce and study a new concept of strong double χ2 (M, A, Δ) sequences of fuzzy numbers and to examine some properties of the resulting sequence spaces of fuzzy numbers.
A Self-stabilizing Algorithm for Maximum Popular Matching of Strictly Ordered Preference Lists
In this paper, we consider the problem of Popular Matching of strictly ordered preference lists. A Popular Matching is not guaranteed to exist in every network. We propose an ID-based, constant-space, self-stabilizing algorithm that converges to a Maximum Popular Matching, an optimal solution, if one exists. We show that the algorithm stabilizes in O(n⁵) moves under any scheduler (daemon).
Decomposition of Graphs into Induced Paths and Cycles
A decomposition of a graph G is a collection ψ of subgraphs H1,H2, . . . , Hr of G such that every edge of G belongs to exactly one Hi. If each Hi is either an induced path or an induced cycle in G, then ψ is called an induced path decomposition of G. The minimum cardinality of an induced path decomposition of G is called the induced path decomposition number of G and is denoted by πi(G). In this paper we initiate a study of this parameter.
Application of Generalized NAUT B-Spline Curve on Circular Domain to Generate Circle Involute
In the present paper, we use a generalized B-Spline curve in trigonometric form on a circular domain to capture the transcendental nature of the circle involute curve and the uncertainty characteristic of design. The required involute curve is generated within the given tolerance limit and is useful in gear design.
Analyzing the Factors Influencing Exclusive Breastfeeding Using the Generalized Poisson Regression Model
Exclusive breastfeeding is the feeding of a baby on no other milk apart from breast milk. Exclusive breastfeeding during the first 6 months of life is of fundamental importance because it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Moreover, in developed countries, exclusive breastfeeding has decreased the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we study the factors that influence exclusive breastfeeding and use the Generalized Poisson regression model to analyze the practices of exclusive breastfeeding in Mauritius. We develop two sets of quasi-likelihood equations (QLE) to estimate the parameters.
The Diophantine Equation y² − 2yx − 3 = 0 and Corresponding Curves over Fp
In this work, we consider the number of integer solutions
of the Diophantine equation D : y² − 2yx − 3 = 0 over Z and
also over the finite fields Fp for primes p ≥ 5. Later we determine the
number of rational points on the curves Ep : y² = Pp(x) = y₁ᵖ + y₂ᵖ
over Fp, where y₁ and y₂ are the roots of D. We also give a formula
for the sum of x- and y-coordinates of all rational points (x, y) on
Ep over Fp.
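The kind of point count the paper studies can be checked quickly by brute force; the snippet below is our illustration of counting solutions of y² − 2yx − 3 = 0 over Fp, not the authors' formulas.

```python
# Sketch: brute-force count of the solutions of y^2 - 2yx - 3 = 0 over the
# finite field Fp (illustrative check, not the paper's closed-form results).

def count_solutions(p):
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - 2 * y * x - 3) % p == 0)
```

For a fixed x, the equation in y has two roots when x² + 3 is a nonzero quadratic residue mod p, one root when x² + 3 ≡ 0, and none otherwise; the brute-force count agrees with this residue analysis.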
Load Balancing in Genetic Zone Routing Protocol for MANETs
Genetic Zone Routing Protocol (GZRP) is a new
hybrid routing protocol for MANETs which is an extension of ZRP
by using Genetic Algorithm (GA). GZRP uses GA on IERP and BRP
parts of ZRP to provide a limited set of alternative routes to the
destination in order to balance the load on the network and to provide
robustness against node/link failure during the route discovery process. GZRP
has been studied for its performance compared to ZRP in many respects, such as
scalability of packet delivery, and has shown improved results. This
paper presents the results of the effect of load balancing on GZRP.
The results show that GZRP outperforms ZRP while balancing the load.
Tensorial Transformations of Double Gai Sequence Spaces
We study the precise form of tensorial transformations mapping a given collection of infinite matrices into another, in connection with classical ideas from the summability field of double gai sequence spaces. In this paper we impose conditions on the tensor g so that it becomes a tensorial transformation from the metric space χ2 to the metric space C.
A Normalization-based Robust Watermarking Scheme Using Zernike Moments
Digital watermarking has become an important technique for copyright protection, but its robustness against attacks remains a major problem. In this paper, we propose a normalization-based robust image watermarking scheme. In the proposed scheme, the original host image is first normalized to a standard form. The Zernike transform is then applied to the normalized image to calculate Zernike moments. Dither modulation is adopted to quantize the magnitudes of the Zernike moments according to the watermark bit stream. The watermark extraction method is blind. Security analysis and false alarm analysis are then performed. The quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Experimental results show that the proposed scheme has very high robustness against various image processing operations and geometric attacks.
Ethiopian Opposition Political Parties and Rebel Fronts: Past and Present
In a representative democracy political parties
promote vital competition on different policy issues and play
essential roles by offering ideological alternatives. They also provide
channels for citizens' participation in government decision-making
processes, and they are significant conduits and interpreters of
information about government. This paper attempts to examine how
opposition political parties and rebel fronts emerged in Ethiopia, and
examines their present conditions. In this paper, selected case studies
of political parties and rebel fronts are included to highlight the status
and the role of opposition groups in the country in the three
successive administrations: Haile Selassie (1930-1974), Derg (1974-
1991), and EPRDF (1991-Present).
Modeling and Optimization of Abrasive Waterjet Parameters using Regression Analysis
Abrasive waterjet is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses the modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data has been used to evaluate the effects of various parameter settings in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. Depth of cut, as one of the most important output characteristics, has been evaluated based on different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on the process response outputs are also shown graphically. The proposed model is then embedded into a Simulated Annealing algorithm to optimize the process parameters. The optimization is carried out for any desired value of depth of cut; the objective is to determine proper levels of the process parameters in order to obtain a certain level of depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.
Natural Language Database Interface for Selection of Data Using Grammar and Parsing
Databases have become ubiquitous. Almost all IT applications store information into and retrieve it from databases. Retrieving information from a database requires knowledge of technical languages such as Structured Query Language (SQL). However, the majority of users who interact with databases do not have a technical background and are intimidated by the idea of using languages such as SQL. This has led to the development of a few Natural Language Database Interfaces (NLDBIs). An NLDBI allows the user to query the database in a natural language. This paper highlights the architecture of a new NLDBI system, describes its implementation and discusses the results obtained. In most typical NLDBI systems, the natural language statement is converted into an internal representation based on the syntactic and semantic knowledge of the natural language. This representation is then converted into queries using a representation converter. A natural language query is translated to an equivalent SQL query after processing through various stages. The work has been tested on primitive database queries with certain constraints.
Integrating Decision Tree and Spatial Cluster Analysis for Landslide Susceptibility Zonation
Landslide susceptibility map delineates the potential
zones for landslide occurrence. Previous works have applied
multivariate methods and neural networks for mapping landslide
susceptibility. This study proposed a new approach to integrate
decision tree model and spatial cluster statistic for assessing landslide
susceptibility spatially. A total of 2057 landslide cells were digitized
for developing the landslide decision tree model. The relationships of
landslides and instability factors were explicitly represented by using
tree graphs in the model. The local Getis-Ord statistics were used to
cluster cells with a high landslide probability. The analytic result from
the local Getis-Ord statistics was classified to create a map of landslide
susceptibility zones. The map was validated using new landslide data
with 482 cells. Results of validation show an accuracy rate of 86.1% in
predicting new landslide occurrence. This indicates that the proposed
approach is useful for improving landslide susceptibility mapping.
Probabilities and the Persistence of Memory in a Bingo-like Carnival Game
Seemingly simple probabilities in the m-player game bingo have never been calculated. These probabilities include the expected game length and the expected number of winners on a given turn. The difficulty in the probabilistic analysis lies in the subtle interdependence among the m-many bingo cards in play. In this paper, the game i got it!, a bingo variant, is considered. This variation weakens the inter-player dependence enough to allow probabilistic analysis not possible for traditional bingo. The probability of winning in exactly k turns is calculated for a one-player game. Given a game of m-many players, the expected game length and tie probability are calculated. With these calculations, the game's interesting payout scheme is considered.
Security Analysis on Anonymous Mutual Authentication Protocol for RFID Tag without Back-End Database and its Improvement
RFID (Radio Frequency IDentification) system has
been widely used in our life, such as transport systems, passports,
automotive, animal tracking, human implants, library, and so on.
However, the RFID authentication protocols between RF (Radio
Frequency) tags and RF readers have brought about various
privacy problems, such as loss of tag anonymity, tracking, eavesdropping,
and so on. Many researchers have proposed solutions to these
problems. However, the proposed protocols still have problems, such as location
privacy and mutual authentication. In this paper, we show the problems of
the previous protocols, and then we propose a more secure and
efficient RFID authentication protocol.
Vehicle Detection Method using Haar-like Feature on Real Time System
This paper presents a robust vehicle detection approach using Haar-like features. A strong edge feature can be obtained from Haar-like features, so they are very effective for removing the shadow of a vehicle on the road, and the boundary of vehicles can be detected accurately. In the paper, the vehicle detection algorithm is divided into two main steps: hypothesis generation and hypothesis verification. In the first step, vehicle candidates are determined using features such as shadow, intensity, and vertical edges. In the second step, whether a candidate is a vehicle or not is determined by using the symmetry of vehicle edge features. In this research, a processing rate of over 15 frames per second is achieved on our embedded system.
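As a minimal sketch of what evaluating a Haar-like edge feature involves, the snippet below computes a two-rectangle (edge-type) feature with an integral image. The grid, rectangle layout and values are hypothetical; this illustrates the feature itself, not the paper's detector.

```python
# Sketch: two-rectangle Haar-like edge feature evaluated via an integral image.

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_edge_feature(ii, x, y, w, h):
    """Left half minus right half: responds strongly to a vertical edge."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# 4x4 image: bright left half, dark right half -> large positive response.
img = [[9, 9, 0, 0]] * 4
ii = integral_image(img)
resp = haar_edge_feature(ii, 0, 0, 4, 4)
```

With the integral image, each rectangle sum costs four lookups regardless of rectangle size, which is what makes dense Haar-feature evaluation feasible in real time.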
Probabilistic Center Voting Method for Subsequent Object Tracking and Segmentation
In this paper, we introduce a novel algorithm for object tracking in video sequences. In order to represent the object to be tracked, we propose a spatial color histogram model which encodes both the color distribution and spatial information. The object tracking from frame to frame is accomplished via a center voting and back projection method. The center voting method lets every pixel in the new frame cast a vote on where the object center is. The back projection method segments the object from the background. The segmented foreground provides information on object size and orientation, omitting the need to estimate them separately. We do not make any assumption on camera motion; the proposed algorithm works equally well for object tracking in both static and moving camera videos.
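The back projection step can be sketched as follows: build a histogram of the object's pixels, then score each pixel of a new frame by the histogram value of its bin. For brevity this uses a plain grayscale histogram rather than the paper's spatial color histogram, and the bin layout and images are hypothetical.

```python
# Sketch: histogram back projection (grayscale simplification of the paper's
# spatial color histogram model).

def histogram(pixels, bins, max_val=256):
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    total = len(pixels)
    return [h / total for h in hist]

def back_project(frame, hist, bins, max_val=256):
    """Per-pixel likelihood that the pixel belongs to the object model."""
    return [[hist[p * bins // max_val] for p in row] for row in frame]

object_pixels = [200, 210, 220, 205]       # a bright object
hist = histogram(object_pixels, bins=4)    # all mass lands in the top bin
frame = [[10, 200], [220, 15]]
likelihood = back_project(frame, hist, bins=4)
```

Pixels whose colors match the object model get high likelihood, which is what separates the foreground from the background before the voting step.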
Using Secure-Image Mechanism to Protect Mobile Agent Against Malicious Hosts
The usage of the internet is rapidly increasing, and mobile agent technology in the internet environment is in great demand. The security issue is one of the main obstacles that restrict the spread of mobile agent technology. This paper proposes the Secure-Image Mechanism (SIM) as a new mechanism to protect mobile agents against malicious hosts. SIM aims to protect the mobile agent by using symmetric encryption and hash functions from cryptography. This mechanism can prevent eavesdropping and alteration attacks, and it assists mobile agents to continue their journey normally in case attacks occur.
A Study of Under Actuator Dynamic System by Comparing between Minimum Energy and Minimum Jerk Problems
This paper deals with underactuated dynamic systems, such as a spring-mass-damper system in which the number of control variables is less than the number of state variables. In order to apply optimal control, the controllability must be checked. There are many objective functions that may be selected as the goal of the optimal control, such as minimum energy, maximum energy and minimum jerk. Since the objective function is the first priority, a secondary goal may not fit into the objective function format, and a vector-valued cost for the objective should be avoided; this paper therefore illustrates the problem of underactuated dynamic systems by comparing the minimum energy and minimum jerk formulations.
Using Memetic Algorithms for the Solution of Technical Problems
The intention of this paper is to help users of evolutionary algorithms adapt them more easily to the problem at hand. For many problems in the technical field it is not necessary to reach an optimum solution, but rather to reach a good solution in time. In many cases the solution is undetermined, or there does not exist a method to determine it. For these cases an evolutionary algorithm can be useful. This paper intends to give the user rules of thumb that make it easier to decide whether the problem is suitable for an evolutionary algorithm and how to design one.
Double Aperture Camera for High Resolution Measurement
In the domain of machine vision, the
measurement of length is done using cameras where the
accuracy is directly proportional to the resolution of the
camera and inversely proportional to the size of the object. Since most of
the pixels are wasted imaging the entire body as opposed to
just imaging the edges in a conventional system, a double
aperture system is constructed to focus on the edges to
measure at higher resolution. The paper discusses the
complexities and how they are mitigated to realize a practical
machine vision system.
Effect of Concentration of Sodium Borohydride on the Synthesis of Silicon Nanoparticles via Microemulsion Route
The effect of the concentration of the reducing agent
sodium borohydride (NaBH4) on the properties of silicon
nanoparticles synthesized via the microemulsion route is reported. In
this work, the concentrations of silicon tetrachloride (SiCl4), which
served as the silicon source, sodium hydroxide (NaOH) and
polyethylene glycol (PEG), used as stabilizer and surfactant, respectively,
are kept fixed. Four samples with the concentration of NaBH4
varied from 0.05 M to 0.20 M were synthesized. It was found that the lowest
concentration of NaBH4 gave better formation of silicon nanoparticles.
A High Bitrate Information Hiding Algorithm for Video in Video
In high bitrate information hiding techniques, 1 bit is
embedded within each 4 x 4 Discrete Cosine Transform (DCT)
coefficient block by means of vector quantization, then the hidden bit
can be effectively extracted at the terminal end. In this paper, high bitrate
information hiding algorithms are summarized, and the scheme of
video in video is implemented. Experimental results show that the host
video in which numerous auxiliary information is embedded exhibits little
visual quality decline. The Peak Signal to Noise Ratio (PSNR) of the Y
component of the host video degrades only 0.22 dB on average, while the
hidden information survives at a high rate and remains highly robust under
H.264/AVC compression; the average Bit Error Rate (BER) of the hidden
information is 0.015%.
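The abstract embeds one bit per 4×4 DCT block by means of vector quantization; as a simplified stand-in, the sketch below uses scalar quantization index modulation (QIM) on a single coefficient, where even multiples of the step carry a 0 and odd multiples a 1. The step size and the choice of coefficient are illustrative assumptions, not the paper's scheme.

```python
# Sketch: QIM-style 1-bit embedding in a DCT coefficient (simplified stand-in
# for the paper's vector-quantization embedding; STEP is an assumed value).

STEP = 8.0

def embed_bit(coeff, bit):
    """Move coeff to the nearest multiple of STEP whose parity encodes bit."""
    q = round(coeff / STEP)
    if q % 2 != bit:
        q += 1 if coeff / STEP >= q else -1
    return q * STEP

def extract_bit(coeff):
    """Recover the bit from the parity of the quantized coefficient."""
    return int(round(coeff / STEP)) % 2
```

Extraction needs no side information beyond the step size, which is what makes this family of embeddings attractive for blind, high-bitrate hiding.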
A Variety of Meteorological and Geographical Characteristics Effects on Watershed Responses to a Storm Event
The Chichiawan stream in the Wulin catchment in
Taiwan is the natural habitat of Formosan landlocked salmon. Human
and agriculture activities gradually worsen water quality and impact
the fish habitat negatively. To protect and manage the Formosan
landlocked salmon habitat, it is important to understand how a variety
of land uses affect the watershed responses to storms. This study
discusses watershed responses in relation to the dry days before a storm event
and the variety of land uses in the Wulin catchment. Under the land-use
planning in the Wulin catchment, the peak flows during typhoon
events do not have noticeable difference. However, the nutrient
exports can be highly reduced under the strategies of restraining
agriculture activities. Due to the higher affinity of P for soil than that
of N, the exports of TN from the overall Wuling catchment were much
greater than those of Ortho-P. Agriculture is mainly centralized in subbasin A,
which is the important source of nutrients in nonpoint source discharge.
The subbasin A supplied about 26% of the TN and 32% of the Ortho-P
discharge in 2004, despite the fact it only covers 19% area of the
Wuling catchment. The subbasin analysis displayed that the
agricultural subbasin A exports higher nutrients per unit area than
other forest subbasins. Additionally, the agricultural subbasin A
contributed a higher percentage of the total Ortho-P exports compared to
TN. The results of the subbasin analysis might imply that the transport of
Ortho-P was similar to that of particulate matter, which was mainly
influenced by the runoff and affected by desorption from soil
particles, while the TN (dominated by nitrate-N) was mainly influenced
A Weighted Least Square Algorithm for Low-Delay FIR Filters with Piecewise Variable Stopbands
Variable digital filters are useful for various signal processing and communication applications where the frequency characteristics, such as fractional delays and cutoff frequencies, can be varied. In this paper, we propose a design method for variable FIR digital filters with an approximately linear phase characteristic in the passband. The proposed variable FIR filters have large attenuation in the stopband, and this attenuation can be varied by spectral parameters. In the proposed design method, a quasi-equiripple characteristic can be obtained by using an iterative weighted least squares method. The usefulness of the proposed design method is verified through some examples.
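The weighted least squares idea at the core of the method can be sketched with a generic linear-phase lowpass design; this is not the paper's variable low-delay formulation, and the grid size, weighting and cutoff below are illustrative assumptions.

```python
# Sketch: weighted least squares design of a type-I linear-phase FIR lowpass
# filter (generic WLS illustration, not the paper's variable-filter method).
import numpy as np

def wls_fir_amplitude(M, cutoff, grid=512, stop_weight=10.0):
    w = np.linspace(0, np.pi, grid)
    desired = (w <= cutoff * np.pi).astype(float)        # ideal lowpass
    weight = np.where(desired > 0, 1.0, stop_weight)     # emphasize stopband
    A = np.cos(np.outer(w, np.arange(M + 1)))            # amplitude basis
    W = np.diag(weight)
    # Normal equations of the weighted least squares problem.
    b = np.linalg.solve(A.T @ W @ A, A.T @ W @ desired)
    # Convert amplitude coefficients b to the symmetric impulse response h.
    h = np.concatenate([b[:0:-1] / 2, [b[0]], b[1:] / 2])
    return h

h = wls_fir_amplitude(M=10, cutoff=0.3)   # 21-tap lowpass
```

Iterating this solve while updating the weights from the current error is what produces the quasi-equiripple behavior the abstract mentions.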
Non-contact Gaze Tracking with Head Movement Adaptation based on Single Camera
With advances in computer vision, non-contact gaze tracking systems are becoming much easier to operate and more comfortable to use; the technique proposed in this paper is specially designed to achieve these goals. For convenience of operation, the proposal aims at a system with a simple configuration, composed of a fixed wide-angle camera and dual infrared illuminators. In order to enhance the usability of this single-camera system, a self-adjusting method called the Real-time gaze Tracking Algorithm with head movement Compensation (RTAC) is developed for estimating the gaze direction under natural head movement while simplifying the calibration procedure. According to the actual evaluations, an average accuracy of about 1° is achieved over a field of 20×15×15 cm³.
Design as Contract and Blueprint – Tackling Maturity Level 1 Software Vendors in an e-School Project
Process improvements have drawn much attention in
practical software engineering. The capability maturity levels from
CMMI have become an important index to assess a software company's
software engineering capability. However, in countries like
Taiwan, customers often have no choices but to deal with vendors that
are not CMMI prepared or qualified. We call these vendors
maturity-level-1 (ML1) vendors. In this paper, we describe our experience
from consulting an e-school project. We propose an approach to help
our client tackle the ML1 vendors. Through our system analysis, we
produce a design, which is suggested for use as part of the
contract and as a blueprint to guide the implementation.
Study of Efficiency and Capability LZW++ Technique in Data Compression
The purpose of this paper is to show the efficiency and capability of LZW++ in data compression. The LZW++ technique is an enhancement of the existing LZW technique; modifying the existing LZW is needed to produce LZW++. LZW reads one character at a time; in contrast, LZW++ reads three characters at a time. This paper focuses on data compression, testing the efficiency and capability of LZW++ on different data formats such as doc, pdf and text files. Several experiments have been done with different types of data formats. The results show that the LZW++ technique is better than the existing LZW technique in terms of file size.
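The abstract describes LZW++ only at a high level. As a point of reference, a minimal sketch of the baseline LZW encoder it modifies (reading one character at a time) is given below; the three-character read of LZW++ is not specified in the abstract, so this is the standard algorithm, not the authors' variant.

```python
def lzw_compress(data: str) -> list:
    """Standard LZW: emit dictionary codes for the longest known prefix."""
    # Initialise the dictionary with all single characters (bytes 0-255).
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    w = ""
    out = []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc                      # extend the current match
        else:
            out.append(dictionary[w])   # emit code for the longest match
            dictionary[wc] = next_code  # learn the new phrase
            next_code += 1
            w = c
    if w:
        out.append(dictionary[w])
    return out
```

On repetitive input the output shrinks: `lzw_compress("ABABABA")` emits four codes `[65, 66, 256, 258]` for seven characters, which is the effect the paper measures as reduced file size.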
Improved Wavelet Neural Networks for Early Cancer Diagnosis Using Clustering Algorithms
Wavelet neural networks (WNNs) have emerged as a vital alternative to the widely studied multilayer perceptrons (MLPs) since their first implementation. In this paper, we applied various clustering algorithms, namely K-means (KM), Fuzzy C-means (FCM), symmetry-based K-means (SBKM), symmetry-based Fuzzy C-means (SBFCM) and modified point symmetry-based K-means (MPKM), in choosing the translation parameter of a WNN. These modified WNNs were further applied to heterogeneous cancer classification using benchmark microarray data and were compared against the conventional WNN with the random initialization method. Experimental results showed that a WNN classifier with the MPKM algorithm is more precise than the conventional WNN as well as the WNNs with other clustering algorithms.
An Investigation of Short Circuit Analysis in Komag Sarawak Operations (KSO) Factory
Short circuit currents play a vital role in influencing the design and operation of equipment and power systems, and cannot be avoided despite careful planning and design, good maintenance and thorough operation of the system. This paper briefly discusses the short circuit analysis conducted in KSO, comprising its significance, methods and results. A result sample of the analysis based on a single transformer is detailed in this paper. Furthermore, the results of the analysis and their significance are also discussed and commented on.
Fully Parameterizable FPGA based Crypto-Accelerator
In this paper, the RSA encryption algorithm and its hardware
implementation on Xilinx's Virtex Field Programmable Gate
Arrays (FPGA) are analyzed. The issues of scalability, flexible performance,
and silicon efficiency for the hardware acceleration of
public key crypto systems are being explored in the present work.
Using techniques based on the interleaved math for exponentiation,
the proposed RSA calculation architecture is compared to existing
FPGA-based solutions for speed, FPGA utilization, and scalability.
The paper covers the RSA encryption algorithm, interleaved multiplication,
the Miller-Rabin primality test, the extended Euclidean
algorithm, basic FPGA technology, and the implementation details of
the proposed RSA calculation architecture. Performance of several
alternative hardware architectures is discussed and compared. Finally,
conclusions are drawn, highlighting the advantages of a fully flexible
and parameterized design.
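Two of the building blocks named above, interleaved modular multiplication and the Miller-Rabin primality test, can be sketched in software as follows (the hardware realizes the same arithmetic in logic; the function names and bit width are ours, not the paper's).

```python
import random

def interleaved_modmul(a: int, b: int, m: int, width: int) -> int:
    """Interleaved modular multiplication: scan the bits of `a` MSB-first,
    doubling and conditionally adding `b`, reducing mod m at every step.
    Keeping the intermediate result bounded by a small multiple of m is
    what makes the method attractive for fixed-width hardware datapaths.
    Requires b < m and a < 2**width."""
    p = 0
    for i in reversed(range(width)):
        p <<= 1                      # shift left (double)
        if (a >> i) & 1:
            p += b                   # conditional add
        if p >= m:                   # at most two subtractions restore p < m
            p -= m
        if p >= m:
            p -= m
    return p

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Probabilistic primality test, used when generating RSA primes."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = (x * x) % n
            if x == n - 1:
                break
        else:
            return False             # a is a witness: n is composite
    return True
```

In the exponentiation core, `interleaved_modmul` plays the role of the inner multiply, while `miller_rabin` filters prime candidates for key generation.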
A Study on Removal Characteristics of Manganese (Mn²⁺) from Aqueous Solution by CNT
It is important to remove manganese from water
because of its effects on humans and the environment. Human
activities are one of the biggest contributors to excessive manganese
concentration in the environment. The proposed method removes
manganese from aqueous solution by adsorption onto carbon
nanotubes (CNT) under different parameters: CNT dosage, pH,
agitation speed and contact time. The different pH values are pH
6.0, pH 6.5, pH 7.0, pH 7.5 and pH 8.0; CNT dosages are 5 mg,
6.25 mg, 7.5 mg, 8.75 mg or 10 mg; contact times are 10 min, 32.5 min,
55 min, 87.5 min and 120 min; while the agitation speeds are 100 rpm,
150 rpm, 200 rpm, 250 rpm and 300 rpm. The parameters chosen for
experiments are based on experimental design done by using Central
Composite Design, Design Expert 6.0 with 4 parameters, 5 levels and
2 replications. Based on the results, the condition set at pH 7.0, agitation
speed of 300 rpm, CNT dosage of 7.5 mg and contact time of 55 minutes gives the
highest removal, 75.5%. From the ANOVA analysis in Design
Expert 6.0, the residual concentration will be very much affected by
pH and CNT dosage. The initial manganese concentration is 1.2 mg/L
while the lowest residual concentration achieved is 0.294 mg/L,
which almost satisfies the DOE Malaysia Standard B requirement.
Therefore, further experiments must be done to remove manganese
from model water to the required standard (0.2 mg/L) with the initial
concentration set to 0.294 mg/L.
Smith Predictor Design by CDM for Temperature Control System
Smith predictor control is theoretically a good solution to the problem of controlling time-delay systems. However, it is seldom used, because it is almost impossible to obtain a precise mathematical model of the practical system, and it is very sensitive to uncertain systems with variable time delay. This paper is concerned with a design method for a Smith predictor for a temperature control system using the Coefficient Diagram Method (CDM). The simulation results show that the control system with the Smith predictor designed by CDM is stable and robust while giving the desired time-domain system performance.
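The Smith predictor structure itself can be illustrated with a short discrete-time simulation. The plant parameters and PI gains below are hypothetical and hand-tuned, not the paper's temperature model or CDM tuning; the point is only the feedback arrangement that removes the dead time from the loop.

```python
def simulate_smith_pi(steps=400, setpoint=1.0):
    """Discrete-time Smith predictor around a first-order plant with dead
    time: y[k+1] = a*y[k] + b*u[k-d]. The PI controller is fed the
    delay-free model output plus the plant/delayed-model mismatch, so it
    can be tuned as if the dead time were absent. Gains are hand-tuned
    for illustration, not obtained by CDM."""
    a, b, d = 0.9, 0.1, 5          # hypothetical plant parameters and delay
    kp, ki = 2.0, 0.1              # hand-tuned PI gains
    u = [0.0] * steps
    y = [0.0] * (steps + 1)        # real plant (input delayed by d samples)
    ym = [0.0] * (steps + 1)       # delay-free internal model
    integ = 0.0
    for k in range(steps):
        ym_delayed = ym[k - d] if k >= d else 0.0
        fb = ym[k] + (y[k] - ym_delayed)   # Smith predictor feedback signal
        e = setpoint - fb
        integ += e
        u[k] = kp * e + ki * integ
        u_delayed = u[k - d] if k >= d else 0.0
        y[k + 1] = a * y[k] + b * u_delayed   # plant sees the delayed input
        ym[k + 1] = a * ym[k] + b * u[k]      # model runs without the delay
    return y[steps]
```

With a perfect model the mismatch term vanishes and the controller effectively sees a delay-free first-order plant, so the output settles at the setpoint despite the dead time.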
Inheritance Growth: a Biology Inspired Method to Build Structures in P2P
IT infrastructures are becoming more and more
complex. Therefore, in the first industrial IT systems, the P2P
paradigm has replaced the traditional client-server model, and methods of
self-organization are gaining more and more importance. It is known
from the past that especially regular structures like grids may
significantly improve the system behavior and performance. This
contribution introduces a new algorithm based on a biological
analogue, which may provide the growth of several regular structures
on top of anarchically grown P2P or social network structures.
Nonlinear Seismic Dynamic Response of Continuous Curved Highway Viaducts with Different Bearing Supports
The purpose of the present study is to analyze the overall performance of continuous curved highway viaducts with different bearing supports, with an emphasis on the effectiveness of seismic isolation based on a lead rubber bearing and a hedge reaction force bearing system consisting of a friction sliding bearing and a rubber bearing. The bridge seismic performance has been evaluated for six different cases with six bearing models. The effects of the different bearing arrangements on the deck superstructure displacements, the seismic damage at the bottom of the piers, the movement track at the pier's top, and the total and strain energies absorbed by the structure are evaluated. The results show that the bridge equipped with the seismic isolation bearing system exhibits a high amount of energy dissipation. In conclusion, the results provide sufficient evidence of the effectiveness of the use of seismic isolation on steel curved highway bridges.
Ratio Type Estimators of the Population Mean Based on Ranked Set Sampling
Ranked set sampling (RSS) was first suggested to increase the efficiency of estimating the population mean. It has been shown that this method is highly beneficial compared to estimation based on simple random sampling (SRS). There has been considerable development of, and many modifications made to, this method. When a concomitant variable is available, ratio estimation based on ranked set sampling was proposed; this ratio estimator is more efficient than that based on SRS. In this paper, some ratio-type estimators of the population mean based on RSS are suggested. These estimators are found to be more efficient than estimators of similar form using a simple random sample.
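The classical ratio estimator that such proposals generalize can be computed as follows. This is a sketch of the standard form only, not the paper's specific estimators; it assumes the population mean of the auxiliary variable is known, while the (x, y) pairs are taken to come from the ranked set sample (the ranking mechanism itself, which is where the efficiency gain originates, is not shown here).

```python
def ratio_estimate(pairs, X_bar):
    """Classical ratio estimator of the population mean of y:
        y_hat = (ybar / xbar) * X_bar,
    where (x, y) pairs are the measured sample (here assumed to come
    from a ranked set sample) and X_bar is the known population mean
    of the auxiliary variable x."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    xbar = sum(xs) / len(xs)   # sample mean of the auxiliary variable
    ybar = sum(ys) / len(ys)   # sample mean of the study variable
    return (ybar / xbar) * X_bar
```

For instance, with the sample [(2, 4), (4, 8), (6, 12)] and a known auxiliary mean of 5, the estimate is (8/4)·5 = 10, exploiting the proportionality between x and y.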
Challenges on Adopting Scrum for Distributed Teams in Home Office Environments
This paper describes two current trends in the
software development process: Scrum and work in home
office environments. It exposes the four main challenges of adopting the
Scrum framework for distributed teams in this kind of work. The
challenges are mainly based on communication problems due to
distance, since Scrum encourages the team to work together in
the same room, and this is not possible when people work distributed
in their homes.
Efficient Supplies to Assembly Areas from Storage Stages
Guaranteeing the availability of the required parts at
the scheduled time represents a key logistical challenge. This is
especially important when several parts are required together. This
article describes a tool that supports the positioning in the area of
conflict between low stock costs and a high service level for a
Optimization Parameters of Rotary Positioner Controller using CDM
The authors present the optimization of the rotary
positioner controller parameters in the hard disk drive servo track
writing process using the coefficient diagram method (CDM). Because
the parameters of the PI positioning control system estimated by the
expected ratio method cannot effectively meet the required response
specification, we suggest the coefficient diagram method for defining
the controller parameters under the requirements of the system. Finally,
the simulation results show that our proposed method can solve
the problem of tuning the rotary positioner controller parameters and
satisfies the performance specification of the control system. Furthermore,
it is very convenient, providing fast adjustment of the damping ratio as
well as a high-speed response.
Adaptive Kernel Principal Component Analysis for Online Feature Extraction
The batch nature of standard kernel principal component analysis (KPCA) limits it in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulation data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
Intelligent Design of Reconfigurable Machines
This paper presents methodologies for developing an
intelligent CAD system assisting in analysis and design of
reconfigurable special machines. It describes a procedure for
determining feasibility of utilizing these machines for a given part
and presents a model for developing an intelligent CAD system. The
system analyzes geometrical and topological information of the given
part to determine possibility of the part being produced by
reconfigurable special machines from a technical point of view. The
feasibility of the process from an economic point of view is also
analyzed. Then the system determines the proper positioning of the part
considering details of machining features and operations needed.
This involves determination of operation types, cutting tools and the
number of working stations needed. Upon completion of this stage
the overall layout of the machine and machining equipment required
Expression of Leucaena Leucocephala de Wit Chitinase in Transgenic Koshihikari Rice
The cDNA encoding the 326 amino acids of a Class I
basic chitinase gene from Leucaena leucocephala de Wit (KB3,
Genbank accession: AAM49597) was cloned under the control of
CaMV35S promoter in pCAMBIA 1300 and transferred to
Koshihikari. Calli of Koshihikari rice were transformed with
Agrobacterium carrying this construct expressing the chitinase and β-
glucuronidase (GUS). A callus induction frequency of 90% was
obtained from rice seedlings cultured on NB medium. A high
regeneration frequency of 74% was obtained from calli cultured on
regeneration medium containing 4 mg/l BAP and 7 g/l phytagel at
25°C. Various factors were studied in order to establish a procedure
for the transformation of Koshihikari by Agrobacterium tumefaciens.
Supplementation of the medium with 50 mM acetosyringone during
cocultivation was important to enhance the frequency of transient
transformation. The 4-week-old scutellum-derived calli were
excellent starting materials. Selection medium based on NB medium
supplemented with 40 mg/l hygromycin and 400 mg/l cefotaxime was
optimal for selection of transformed rice calli. A transformation
percentage of 70% was obtained. Recombinant calli and
regenerated rice plants were checked for the expression of chitinase and
gus by PCR, northern blot, southern blot, and GUS assay.
Chitinase and gus were expressed in all parts of the recombinant rice.
The rice line expressing the KB3 chitinase was more resistant to the
blast fungus Fusarium moniliforme than the control line.
Multifunctional Barcode Inventory System for Retailing. Are You Ready for It?
This paper explains the development of a Multifunctional Barcode Inventory Management System (MBIMS) to manage inventory and stock ordering. Today, most of the retailing market still records stock manually, and its effectiveness is quite low. By providing MBIMS, effectiveness in inventory management is brought to the retailing market. MBIMS will not only save time in recording input, output and refilling of inventory stock, but also in calculating remaining stock, and it provides an auto-ordering function. This system is developed through the System Development Life Cycle (SDLC), and the flow and structure of the system are fully built based on the requirements of a retailing market. Furthermore, this system has been developed from methodical research and study, where each part of the system is vigilantly designed. Thus, MBIMS offers a good solution to the retailing market in achieving effectiveness and efficiency in inventory management.
Dynamic Interaction Network to Model the Interactive Patterns of International Stock Markets
Studies in the economics domain have tried to reveal the correlation between stock markets. Since the globalization era, interdependence between stock markets has become more obvious. The Dynamic Interaction Network (DIN) algorithm, which was inspired by a Gene Regulatory Network (GRN) extraction method in the bioinformatics field, is applied to reveal important and complex dynamic relationships between stock markets. We use the data of stock market indices from eight countries around the world in this study. Our results conclude that DIN is able to reveal and model patterns of dynamic interaction from the observed variables (i.e. stock market indices). Furthermore, it is also found that the extracted network models can be utilized to predict movement of the stock market indices with considerably good accuracy.
A Heuristic Algorithm Approach for Scheduling of Multi-criteria Unrelated Parallel Machines
In this paper, we address a multi-objective scheduling problem for unrelated parallel machines. In unrelated parallel systems, the processing cost/time of a given job on different machines may vary. The objective of scheduling is to simultaneously determine the job-machine assignment and the job sequencing on each machine in such a way that the total cost of the schedule is minimized. The cost function consists of three components, namely machining cost, earliness/tardiness penalties and makespan-related cost. Such a scheduling problem is combinatorial in nature. Therefore, a simulated annealing approach is employed to provide good solutions within reasonable computational times. Computational results show that the proposed approach can efficiently solve such complicated problems.
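A minimal sketch of simulated annealing for the unrelated-machine assignment part of this problem is given below. For brevity the objective is reduced to the makespan only, a simplification of the paper's three-component cost function; the neighbourhood move, cooling schedule and all parameter values are illustrative choices of ours.

```python
import math
import random

def sa_schedule(cost, n_jobs, n_machines, iters=5000, t0=10.0, alpha=0.999, seed=1):
    """Simulated annealing for unrelated parallel machines: cost[j][m] is
    the processing cost/time of job j on machine m (it may vary freely
    across machines -- the 'unrelated' property). The state is a
    job -> machine assignment; the objective here is the makespan only."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_machines) for _ in range(n_jobs)]

    def makespan(a):
        loads = [0.0] * n_machines
        for j, m in enumerate(a):
            loads[m] += cost[j][m]
        return max(loads)

    cur = best = makespan(assign)
    best_assign = assign[:]
    t = t0
    for _ in range(iters):
        j = rng.randrange(n_jobs)
        old = assign[j]
        assign[j] = rng.randrange(n_machines)      # neighbour: reassign one job
        new = makespan(assign)
        # accept improvements always, uphill moves with Boltzmann probability
        if new <= cur or rng.random() < math.exp(-(new - cur) / t):
            cur = new
            if new < best:
                best, best_assign = new, assign[:]
        else:
            assign[j] = old                        # reject: undo the move
        t *= alpha                                 # geometric cooling
    return best, best_assign
```

On a toy instance where each job is cheap on exactly one of two machines, the search settles on the balanced assignment with the minimum makespan.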
A Dynamic Model of Air Pollution, Health,and Population Growth Using System Dynamics: A Study on Tehran-Iran (With Computer Simulation by the Software Vensim)
The significance of environmental protection is well-known in today's world. The execution of any program depends on sufficient knowledge of and familiarity with the environment and its pollutants. Taking advantage of a systematic method, as a new science, in environmental planning can solve many problems. In this article, air pollution in Tehran and its relationship with health and population growth have been analyzed using system dynamics. Firstly, by using causal loops, the relationships between the parameters affecting air pollution in Tehran were taken into consideration; then these causal loops were turned into flow diagrams, and finally they were simulated using the software Vensim in order to conclude what the effect of each parameter will be on air pollution in Tehran in the next 10 years, how changing one or more parameters influences other parameters, and which parameter among all others most requires to be controlled.
Design of Nonlinear Observer by Using Augmented Linear System based on Formal Linearization of Polynomial Type
The objective of this study is to propose an observer design for nonlinear systems by using an augmented linear system derived by application of a formal linearization method. A given nonlinear differential equation is linearized by the formal linearization method, which is based on a Taylor expansion considering terms up to higher order, and the measurement equation is transformed into an augmented linear one. To this augmented linear system, linear estimation theory is applied and a nonlinear observer is derived. As an application of this method, an estimation problem for the transient state of electric power systems is studied, and numerical experiments indicate that this observer design shows remarkable performance for nonlinear systems.
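The formal linearization step can be sketched schematically (our notation, not necessarily the paper's exact formulation): for a scalar nonlinear system, the state is augmented with its powers up to some order N,

```latex
\dot{x} = f(x), \qquad
z = \begin{pmatrix} x & x^{2} & \cdots & x^{N} \end{pmatrix}^{\top}, \qquad
\dot{z} \approx A z + b,
```

where row k of A collects the Taylor coefficients of \(\frac{d}{dt}\,x^{k} = k\,x^{k-1} f(x)\) truncated at order N. The measurement equation \(y = h(x)\) is expanded the same way, \(y \approx c^{\top} z + d\), so that a standard linear observer can be applied to the augmented pair (A, c).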
Extracting Human Body based on Background Estimation in Modified HLS Color Space
The ability to recognize humans and their activities by computer vision is a very important task, with many potential applications. The study of human motion analysis is related to several research areas of computer vision, such as motion capture and the detection, tracking and segmentation of people. In this paper, we describe a segmentation method for extracting the human body contour in a modified HLS color space. To estimate the background, the modified HLS color space is proposed, and the background features are estimated by using the HLS color components. Here, a large human dataset, collected from DV cameras, is pre-processed. The human body and its contour are successfully extracted from the image sequences.
Stable Robust Adaptive Controller and Observer Design for a Class of SISO Nonlinear Systems with Unknown Dead Zone
This paper presents a new stable robust adaptive controller and observer design for a class of nonlinear systems that contain (i) coupling of unmeasured states and unknown parameters, and (ii) an unknown dead zone at the system actuator. The system is first cast into a modified form in which the observer and parameter estimation become feasible. Then a stable robust adaptive controller, state observer and parameter update laws are derived that provide global adaptive system stability and desirable performance. To validate the approach, a simulation was performed on a single-link mechanical system with a dynamic friction model and an unknown dead zone at the system actuation. A comparison is then presented with the results when there is no dead zone at the system actuation.
Security Analysis on the Online Office and Proposal of the Evaluation Criteria
The online office is a type of web application. We can
easily use the online office through a web browser on an
internet-connected PC. The online office has the advantage of a usage
environment independent of location or time. When users want to use the
online office, they access the online office server and use their content.
However, recently developed and launched online offices have the
weakness of insufficient security consideration. In this paper, we analyze the
security vulnerabilities of the online office. In addition, we propose
evaluation criteria for building a secure online office using the Common
Criteria. These evaluation criteria can be used to establish trust between
the online office server and the user; the online office market will then
become more active than before.
Cryptanalysis of Two-Factor Authenticated Key Exchange Protocol in Public Wireless LANs
In Public Wireless LANs (PWLANs), user anonymity
is an essential issue. Recently, Juang et al. proposed an anonymous
authentication and key exchange protocol using smart cards in
PWLANs. They claimed that their proposed scheme provided identity
privacy, mutual authentication, and half-forward secrecy. In this paper,
we point out that Juang et al.'s protocol is vulnerable to the
stolen-verifier attack and does not satisfy user anonymity.
Security Weaknesses of Dynamic ID-based Remote User Authentication Protocol
Recently, with the appearance of smart cards, many
user authentication protocols using smart cards have been proposed to
mitigate the vulnerabilities in the user authentication process. In 2004,
Das et al. proposed an ID-based user authentication protocol that is
secure against ID theft and replay attacks using smart cards. In 2009,
Wang et al. showed that Das et al.'s protocol is not secure against the
randomly chosen password attack and the impersonation attack, and
proposed an improved protocol. Their protocol provided mutual
authentication and efficient password management. In this paper, we
analyze the security weaknesses and point out the vulnerabilities of
Wang et al.'s protocol.
Characteristics of Cascade and C3MR Cycle on Natural Gas Liquefaction Process
In this paper, several different types of natural gas liquefaction cycles are studied. First, two cascade processes with two-staged compression were designed and simulated: in both, an inter-cooler is applied to the propane, ethylene and methane cycles, and a liquid-gas heat exchanger is additionally applied between the methane and ethylene cycles (process 2) or between the ethylene and propane cycles (process 3). These cycles are compared with a two-staged cascade process using only an inter-cooler (process 1). The COPs of process 2 and process 3 are about 13.99% and 6.95% higher than that of process 1, respectively. The yield efficiency of LNG also improved compared with process 1, with 13.99% lower specific power. Additionally, a C3MR process is simulated and compared with process 2.
UB-Tree Indexing for Semantic Query Optimization of Range Queries
Semantic query optimization consists in restricting the
search space in order to reduce the set of objects of interest for a
query. This paper presents an indexing method based on UB-trees
and a static analysis of the constraints associated with the views of the
database and with any constraints expressed on attributes. The result of
the static analysis is a partitioning of the object space into disjoint
blocks. Through Space Filling Curve (SFC) techniques, each
fragment (block) of the partition is assigned a unique identifier,
enabling the efficient indexing of fragments by UB-trees. The search
space corresponding to a range query is restricted to a subset of the
blocks of the partition. This approach has been developed in the
context of a KB-DBMS, but it can be applied to any relational DBMS.
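The Space Filling Curve step mentioned above is typically a Z-order (Morton) encoding, which is the address a UB-tree stores in an ordinary B-tree. A minimal two-attribute encoder might look like the following (a textbook sketch, not the paper's implementation):

```python
def z_order(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of two attribute values into a single Z-order
    (Morton) address. Nearby points in (x, y) space tend to receive
    nearby addresses, which is what lets a multidimensional range query
    be answered by scanning a few contiguous runs of the curve."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # even bit positions carry x
        z |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions carry y
    return z
```

For example, the four cells of the unit square map to 0, 1, 2, 3 in the characteristic "Z" visiting order, and larger blocks of the partition correspond to address intervals.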
An Improved Greedy Routing Algorithm for Grid using Pheromone-Based Landmarks
This paper aims to extend Jon Kleinberg's research. He introduced the small-world structure in a grid and showed that a greedy algorithm using only local information is able to find a route between source and target in delivery time O(log² n). His fundamental model for distributed systems uses a two-dimensional grid with long-range random links added between any two nodes u and v with a probability proportional to the distance d(u,v)⁻². We propose that, with additional information about nearby long links, we can find a shorter path. We apply the ant colony system as a messenger distributing pheromone, i.e. the long-link details, in the surrounding area. The subsequent forwarding decision then has more options: move to a local neighbor, or send to a node whose long link is closer to the target. Our experimental results sustain our approach: the average routing time by Color Pheromone is faster than that of the greedy method.
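Kleinberg's grid model and the baseline greedy rule that the paper extends can be sketched as follows; the pheromone mechanism itself is not reproduced here, and the grid size and seed are illustrative.

```python
import random

def make_long_links(n, seed=0):
    """One long-range contact per grid node, chosen with probability
    proportional to d(u, v)^-2 -- Kleinberg's small-world model."""
    rng = random.Random(seed)
    nodes = [(x, y) for x in range(n) for y in range(n)]
    links = {}
    for u in nodes:
        others = [v for v in nodes if v != u]
        weights = [1.0 / (abs(u[0] - v[0]) + abs(u[1] - v[1])) ** 2
                   for v in others]
        links[u] = rng.choices(others, weights=weights)[0]
    return links

def greedy_route(n, source, target, long_link, max_steps=10000):
    """Greedy routing with local information only: at each step the
    message moves to whichever neighbour (grid or long-range) is closest
    to the target in Manhattan distance. Some grid neighbour is always
    strictly closer, so the route never exceeds the plain grid distance."""
    def dist(u, v):
        return abs(u[0] - v[0]) + abs(u[1] - v[1])
    cur, steps = source, 0
    while cur != target and steps < max_steps:
        x, y = cur
        neigh = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < n and 0 <= y + dy < n]
        neigh.append(long_link[cur])          # this node's long-range contact
        cur = min(neigh, key=lambda v: dist(v, target))
        steps += 1
    return steps
```

The paper's contribution can be read as enriching the `min` decision with pheromone information about long links in the surrounding area, rather than only the node's own contact.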
Visualisation and Navigation in Large Scale P2P Service Networks
In Peer-to-Peer service networks, where peers offer any kind of publicly available services or applications, intuitive navigation through all services in the network becomes more difficult as the number of services increases. In this article, a concept is discussed that enables users to intuitively browse and use large scale P2P service networks. The concept extends the idea of creating virtual 3D-environments solely based on Peer-to-Peer technologies. Aside from browsing, users shall have the possibility to emphasize services of interest using their own semantic criteria. The appearance of the virtual world shall intuitively reflect network properties that may be of interest for the user. Additionally, the concept comprises options for load- and traffic-balancing. In this article, the requirements concerning the underlying infrastructure and the graphical user interface are defined. First impressions of the appearance of future systems are presented and the next steps towards a prototypical implementation are discussed.
Design and Trajectory Planning of Bipedal Walking Robot with Minimum Sufficient Actuation System
This paper presents a new type of mechanism and trajectory planning strategy for bipedal walking robot. The newly designed mechanism is able to improve the performance of bipedal walking robot in terms of energy efficiency and weight reduction by utilizing minimum number of actuators. The usage of parallelogram mechanism eliminates the needs of having an extra actuator at the knee joint. This mechanism works together with the joint space trajectory planning in order to realize straight legged walking which cannot be achieved by conventional inverse kinematics trajectory planning due to the singularity. The effectiveness of the proposed strategy is confirmed by computer simulation results.
Design and Development of Pico-hydro Generation System for Energy Storage Using Consuming Water Distributed to Houses
This paper describes the design and development of a pico-hydro generation system using consuming water distributed to houses. Water flowing in domestic pipes has kinetic energy with the potential to generate electricity for energy storage purposes, in addition to serving routine activities such as laundry, cooking and bathing. The inherent water pressure and flow inside the pipe from the utility's main tank used for those usual activities is also used to rotate a small-scale hydro turbine to drive a generator for electrical power generation. Hence, this project is conducted to develop a small-scale hydro generation system using consuming water distributed to houses as an alternative electrical energy source for residential use.
Phase Equilibrium in Aqueous Two-phase Systems Containing Poly (propylene glycol) and Sodium Citrate at Different pH
The phase diagrams and compositions of coexisting
phases have been determined for aqueous two-phase systems
containing poly(propylene glycol) with average molecular weight of
425 and sodium citrate at various pH of 3.93, 4.44, 4.6, 4.97, 5.1,
8.22. The effect of pH on the salting-out effect of poly (propylene
glycol) by sodium citrate has been studied. It was found that an
increase in pH caused the expansion of the two-phase region.
Increasing pH also increases the concentration of PPG in the
PPG-rich phase, while the salt-rich phase becomes somewhat more diluted.
Analysis of Dynamic Loads Induced by Spectator Movements in Stadium
In stadium structures, significant dynamic
responses such as resonance or similar behavior can be caused by
spectators' rhythmical activities. Thus, accurate analysis and precise
investigation of stadium structures subjected to dynamic loads
are required for the practical design and serviceability checks of
stadium structures. Moreover, it is desirable to measure and analyze the
dynamic loads of spectator activities, because these dynamic loads
cannot be easily expressed in a numerical formula. In this study, various
dynamic loads induced by spectator movements are measured and
analyzed. These dynamic loads induced by spectators' movements in a
stadium structure can be classified into impact loads and
periodic loads, and can be expressed as Fourier
harmonic loads. These dynamic loads can then be applied for the
accurate vibration analysis of a stadium structure.
A Statistical Approach for Predicting and Optimizing Depth of Cut in AWJ Machining for 6063-T6 Al Alloy
In this paper, a set of experimental data has been used to assess the influence of abrasive water jet (AWJ) process parameters in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. The effects of these input parameters are studied on the depth of cut (h), one of the most important characteristics of AWJ machining. The Taguchi method and regression modeling are used in order to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. In the next stage, the proposed model is embedded into a Simulated Annealing (SA) algorithm to optimize the AWJ process parameters. The objective is to determine a suitable set of process parameters that can produce a desired depth of cut, considering the ranges of the process parameters. Computational results prove the effectiveness of the proposed model and optimization procedure.
A New Approach for Predicting and Optimizing Weld Bead Geometry in GMAW
Gas Metal Arc Welding (GMAW) is an
important joining process widely used in metal fabrication
industries. This paper addresses modeling and optimization of this
technique using a set of experimental data and regression analysis.
The set of experimental data has been used to assess the influence
of GMAW process parameters in weld bead geometry. The
process variables considered here include voltage (V); wire feed
rate (F); torch angle (A); welding speed (S) and nozzle-to-plate
distance (D). The process output characteristics include weld bead
height, width and penetration. The Taguchi method and regression
modeling are used in order to establish the relationships between
input and output parameters. The adequacy of the model is
evaluated using analysis of variance (ANOVA) technique. In the
next stage, the proposed model is embedded into a Simulated
Annealing (SA) algorithm to optimize the GMAW process
parameters. The objective is to determine a suitable set of process
parameters that can produce desired bead geometry, considering
the ranges of the process parameters. Computational results prove
the effectiveness of the proposed model and optimization procedure.
Solution of Two Dimensional Quasi-Harmonic Equations with CA Approach
Many computational techniques have been applied to the solution of
the heat conduction problem, among them the finite difference (FD),
finite element (FE) and, more recently, meshless methods. FE is
commonly used to solve the heat conduction equation, based on the
assembly of element stiffness matrices and the solution of the
final system of equations. Because of this assembly process, the
convergence rate is reduced. Hence, in the present paper, a
Cellular Automata (CA) approach is presented for the solution of
the heat conduction problem. Each cell is considered as a fixed
point in a regular grid, so that the solution of one large system
of equations is replaced by discrete systems of equations of small
dimension. Results show that CA can be used for the solution of
the heat conduction problem.
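As a rough sketch of the idea, the following Python fragment treats each grid cell as a CA cell whose next state is the average of its four neighbors — a purely local update rule equivalent to Jacobi relaxation of the 2D steady-state heat conduction (Laplace) equation. The grid size and boundary temperatures are assumed values, not the paper's test case.

```python
N = 20
grid = [[0.0] * N for _ in range(N)]
for j in range(N):
    grid[0][j] = 100.0  # fixed top-edge temperature (assumed Dirichlet BC)

def step(g):
    # CA rule: each interior cell becomes the mean of its 4 neighbors
    new = [row[:] for row in g]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (g[i-1][j] + g[i+1][j] + g[i][j-1] + g[i][j+1])
    return new

def relax(g, tol=1e-4, max_iter=10000):
    # iterate the local rule until the largest cell change is below tol
    for it in range(max_iter):
        new = step(g)
        diff = max(abs(new[i][j] - g[i][j])
                   for i in range(N) for j in range(N))
        g = new
        if diff < tol:
            break
    return g, it + 1
```

No global stiffness matrix is ever assembled; every cell solves only its own small local equation, which is the point the abstract makes.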
Rural Connectivity Technologies Cost Analysis
Rural areas of Tanzania are still disadvantaged in terms of diffusion of IP-based services; this is due to the lack of Information and Communication Technology (ICT) infrastructure, especially the lack of connectivity. One of the limitations for connectivity in rural areas of Tanzania is the high cost of establishing infrastructure for IP-based services [1-2]. However, the cost of connectivity varies from one technology to another, and at the same time the cost differs from one operator (service provider) to another within the country. This paper presents the development of a software system to calculate the cost of connectivity to rural areas of Tanzania. The system is developed to provide easy access to the connectivity costs of different technologies and different operators. The development of the calculator follows the V-model software development lifecycle. The calculator is used to evaluate the economic viability of different technologies considered as potential candidates to provide rural connectivity. In this paper, the evaluation is based on the techno-economic analysis approach.
Conceptual Investigation of Short-Columns and Masonry Infill Frames Effect in Earthquakes
This paper highlights the importance of the selection of the
building's wall material, and the shortcomings of the most
commonly used framed structures with masonry infills. The
objective of this study is to investigate the behavior of infill
walls as structural components in existing structures. Structural
infill walls are very important in structural behavior under
earthquake effects. Structural capacity under the effect of an
earthquake, displacement and relative story displacement are
affected by structural irregularities. The presence of
nonstructural masonry infill walls can extensively modify the
global seismic behavior of framed buildings. The stability and
integrity of reinforced concrete frames are enhanced by masonry
infill walls. Masonry infill walls alter the displacement and
base shear of the frame as well. Short columns have great
importance during earthquakes, because their failure may lead to
additional structural failures and result in total building
collapse. Consequently, the effects of short columns are
considered in this paper.
A Stereo Vision System for Top View Book Scanners
This paper proposes a novel stereo vision technique
for top view book scanners which provides us with dense 3D point
clouds of page surfaces. This is a precondition for dewarping bound
volumes independently of 2D information on the page. Our method is
based on algorithms, which normally require the projection of pattern
sequences with structured light. We use image sequences of the
moving stripe lighting of the top view scanner instead of an additional
light projection. Thus the stereo vision setup is simplified without
losing measurement accuracy. Furthermore, we improve a surface
model dewarping method by introducing a difference vector based
on real measurements. Although our proposed method is inexpensive
in both calculation time and hardware requirements, we obtain
good dewarping results even for difficult examples.
Fuzzy Fingerprint Vault using Multiple Polynomials
Fuzzy fingerprint vault is a recently developed cryptographic construct based on the polynomial reconstruction problem to secure critical data with fingerprint data. However, previous research is not applicable to fingerprints having few minutiae, since a fixed degree of the polynomial is used without considering the number of fingerprint minutiae. To solve this problem, we use an adaptive degree of the polynomial, considering the number of minutiae extracted from each user. Also, we apply multiple polynomials to avoid the possible security degradation of a simple solution (i.e., using a low-degree polynomial). Based on the experimental results, our method can make the possible attack 2^192 times more difficult than using a low-degree polynomial, and can also verify users having few minutiae.
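The polynomial reconstruction at the heart of a fuzzy vault can be sketched as Lagrange interpolation over a finite field: given enough genuine (minutia, p(minutia)) pairs, the secret polynomial is recovered. The tiny prime field and polynomial degree below are toy values chosen for illustration, not those of a practical vault.

```python
P = 97  # small prime field for illustration; real vaults use 16-bit fields

def eval_poly(coeffs, x):
    # Horner evaluation; coeffs in ascending order: c0 + c1*x + c2*x^2 ...
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def poly_mul(a, b):
    # multiply two polynomials (ascending coefficients) mod P
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def lagrange_interpolate(points):
    """Recover the degree n-1 polynomial from n evaluations over GF(P)."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        # basis polynomial prod_{j != i} (x - xj) / (xi - xj)
        num = [1]
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = poly_mul(num, [(-xj) % P, 1])
                denom = denom * (xi - xj) % P
        inv = pow(denom, P - 2, P)  # modular inverse via Fermat
        for k, nk in enumerate(num):
            coeffs[k] = (coeffs[k] + yi * inv * nk) % P
    return coeffs
```

An adaptive-degree scheme simply varies the length of `coeffs` to match the number of minutiae available from each user.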
Examination of Flood Runoff Reproductivity for Different Rainfall Sources in Central Vietnam
This paper presents the combination of different precipitation data sets with a distributed hydrological model, in order to examine the flood runoff reproductivity of scattered observation catchments. The precipitation data sets were obtained from observation using rain-gages, satellite-based estimates (TRMM), and a numerical weather prediction model (NWP), and were then coupled with the super tank model. The case study was conducted in three basins (small, medium, and large size) located in Central Vietnam. Calculated hydrographs based on ground observation rainfall showed the best fit to the measured stream flow, while those obtained from TRMM and NWP showed high uncertainty in peak discharges. However, calculated hydrographs using the adjusted rainfield depicted a promising alternative for the application of TRMM and NWP in flood modeling for scattered observation catchments, especially for the extension of forecast lead time.
The Impact of Selected Economic Indicators for the Development of Zlin Region in the Czech Republic
This article considers the influence of selected economic indicators on the development of the Zlin region. Development of the region is mainly influenced by the business entities located in the region, as well as by investors who contribute to the development of regions. For the development of the region, it is necessary that skilled workers remain in the region rather than leave it. The above-mentioned and other factors affect the development of each region.
Recovery of Copper and DCA from Simulated Micellar Enhanced Ultrafiltration (MEUF)Waste Stream
Simultaneous recovery of copper and DCA from
simulated MEUF concentrated stream was investigated. Effects of
surfactant (DCA) and metal (copper) concentrations, surfactant to
metal molar ratio (S/M ratio), electroplating voltage, EDTA
concentration, solution pH, and salt concentration on metal recovery
and current efficiency were studied. An electric voltage of -0.5 V
was shown to be the optimum operating condition in terms of Cu
recovery, current efficiency, and surfactant recovery. Increasing Cu recovery and
current efficiency were observed with increases of Cu concentration
while keeping concentration of DCA constant. However, increasing
both Cu and DCA concentration while keeping S/M ratio constant at
2.5 showed detrimental effect on Cu recovery at DCA concentration
higher than 15 mM. Cu recovery decreased with increasing pH, while
current efficiency showed the opposite trend. It is believed that
conductivity is the main cause for discrepancy of Cu recovery and
current efficiency observed at different pH. Finally, it was shown that
EDTA had adverse effect on both Cu recovery and current efficiency
while addition of NaCl salt had negative impact on current efficiency
at concentration higher than 8000 mg/L.
The Mechanistic and Oxidative Study of Methomyl and Parathion Degradation by Fenton Process
The purpose of this study is to investigate the chemical
degradation of the organophosphorus pesticide parathion and the
carbamate insecticide methomyl in the aqueous phase through the
Fenton process. With the employment of batch Fenton process, the
degradation of the two selected pesticides at different pH, initial
concentration, humic acid concentration, and Fenton reagent dosages
was explored. The Fenton process was found effective to degrade
parathion and methomyl. The optimal dosage of Fenton reagents (i.e.,
molar concentration ratio of H2O2 to Fe2+) at pH 7 for parathion
degradation was equal to 3, which resulted in 50% removal of
parathion. Similarly, the optimal dosage for methomyl degradation
was 1, resulting in 80% removal of methomyl. This study also found
that the presence of humic substances enhanced pesticide
degradation by the Fenton process significantly. The mass
spectroscopy results showed that the hydroxyl free radical may
attack the single bonds with the least energy in the investigated
pesticides, forming smaller molecules that are more easily
degraded through either physico-chemical or biological processes.
The Panpositionable Hamiltonicity of k-ary n-cubes
The hypercube Qn is one of the most well-known and popular
interconnection networks, and the k-ary n-cube Q_n^k is an
enlarged family from Qn that keeps many pleasing properties of
hypercubes. In this article, we study the panpositionable
hamiltonicity of Q_n^k for k ≥ 3 and n ≥ 2. Let x, y ∈ V(Q_n^k)
be two arbitrary vertices and C be a hamiltonian cycle of Q_n^k.
We use d_C(x, y) to denote the distance between x and y on the
hamiltonian cycle C. Define l as an integer satisfying
d(x, y) ≤ l ≤ (1/2)|V(Q_n^k)|. We prove the following:
• When k = 3 and n ≥ 2, there exists a hamiltonian cycle C of
Q_n^k such that d_C(x, y) = l.
• When k ≥ 5 is odd and n ≥ 2, we require that l ∉ S, where S is
a set of specific integers. Then there exists a hamiltonian
cycle C of Q_n^k such that d_C(x, y) = l.
• When k ≥ 4 is even and n ≥ 2, we require l − d(x, y) to be
even. Then there exists a hamiltonian cycle C of Q_n^k such
that d_C(x, y) = l.
The result is optimal, since the restrictions on l are due to the
structure of Q_n^k by definition.
PID Controller Design for Following Control of Hard Disk Drive by Characteristic Ratio Assignment Method
The authors present a PID controller design for following control
of a hard disk drive by the characteristic ratio assignment
method. The study in this paper concerns the design of a PID
controller which is sufficiently robust to disturbances and plant
perturbations in following control of a hard disk drive. Characteristic Ratio
Assignment (CRA) is shown to be an efficient control technique to
serve this requirement. The controller design by CRA is based on the
choice of the coefficients of the characteristic polynomial of the
closed loop system according to convenient performance criteria
such as the equivalent time constant and the ratio of
characteristic coefficients. Hence, in this study, the CRA method
is applied to PID
controller design for following control of hard disk drive. Matlab
simulation results show that the CRA design is fairly stable and
robust whilst giving convenience in adjusting the controller's
parameters.
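The CRA step of choosing the target characteristic polynomial can be illustrated with the standard recursions: given the characteristic ratios alpha_i = a_i^2 / (a_{i-1} a_{i+1}) and the equivalent time constant tau = a_1 / a_0, all higher coefficients follow. The numerical values below are purely illustrative, not the hard disk drive design values.

```python
def cra_coefficients(a0, tau, alphas):
    """Build target characteristic polynomial coefficients a_0..a_n
    (ascending powers of s) from characteristic ratios and the
    equivalent time constant, via a_{i+1} = a_i^2 / (alpha_i * a_{i-1})."""
    coeffs = [a0, tau * a0]
    for alpha in alphas:
        a_prev, a_cur = coeffs[-2], coeffs[-1]
        coeffs.append(a_cur ** 2 / (alpha * a_prev))
    return coeffs
```

The PID gains are then found by matching the closed-loop characteristic polynomial to these target coefficients; that matching step depends on the plant model and is omitted here.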
Design of PI Controller Using MRAC Techniques For Couple-Tanks Process
The typical coupled-tanks process is a TITO plant that is
difficult to control because of its changing system dynamics and
process interaction. This paper presents a design methodology for
an auto-adjustable PI controller using the MRAC technique. The
proposed method can adjust the controller parameters in real time
in response to changes in the plant and disturbances, by
referring to a reference model that specifies the properties of
the desired control system.
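As a minimal illustration of MRAC adaptation, the sketch below implements the classic MIT-rule adjustment of a single feedforward gain for a first-order plant. It is not the paper's full PI scheme for the coupled-tanks TITO process; the plant values, reference model, and adaptation gain are all assumed.

```python
def simulate_mrac(a=1.0, k=0.5, k0=1.0, gamma=2.0, dt=0.001, T=20.0):
    """First-order plant y' = -a*y + k*u with unknown gain k, reference
    model ym' = -a*ym + k0*r, control u = theta*r. The MIT rule drives
    the model error e = y - ym to zero, so theta -> k0/k."""
    y = ym = theta = 0.0
    r = 1.0  # step reference
    for _ in range(int(T / dt)):
        u = theta * r                    # adjustable feedforward gain
        y += dt * (-a * y + k * u)       # plant (Euler step)
        ym += dt * (-a * ym + k0 * r)    # reference model
        e = y - ym
        theta += dt * (-gamma * e * ym)  # MIT rule gradient update
    return y, ym, theta
```

With the assumed values the ideal gain is k0/k = 2, and the adapted theta converges to it; the same gradient idea underlies the auto-adjustable PI parameters, with one adaptation law per controller parameter.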
Experimental Modal Analysis and Model Validation of Antenna Structures
Numerical design optimization is a powerful tool that
can be used by engineers during any stage of the design process.
There are many different applications for structural optimization. A
specific application that will be discussed in the following paper is
experimental data matching. Data obtained through tests on a physical
structure will be matched with data from a numerical model of that
same structure. The data of interest will be the dynamic characteristics
of an antenna structure focusing on the mode shapes and modal
frequencies. The structure used was a scaled and simplified model of
the Karoo Array Telescope-7 (KAT-7) antenna structure.
This kind of data matching is a complex and difficult task. This
paper discusses how optimization can assist an engineer during the
process of correlating a finite element model with vibration test data.
Damage Evaluation of Curved Steel Bridges Upgraded with Isolation Bearings and Unseating Prevention Cable Restrainers
This paper investigates the effectiveness of the use of
seismic isolation devices on the overall 3D seismic response of
curved highway viaducts with an emphasis on expansion joints.
Furthermore, an evaluation of the effectiveness of the use of cable
restrainers is presented. For this purpose, the bridge seismic
performance has been evaluated on four different radii of curvature,
considering two cases: restrained and unrestrained curved viaducts.
Depending on the radius of curvature, three-dimensional non-linear
dynamic analysis shows the vulnerability of curved viaducts to
pounding and deck unseating damage. In this study, the efficiency of
using LRB supports combined with cable restrainers on curved
viaducts is demonstrated, not only by reducing in all cases the
possible damage, but also by providing similar behavior in the
viaducts regardless of the radius of curvature.
Individual Configuration of Production Control to Suit Requirements
The logistical requirements placed on industrial manufacturing companies are steadily increasing. In order to meet those requirements, a consistent and efficient concept for production control is necessary. Set up properly, production control offers considerable potential with respect to achieving the logistical targets. As experience with the many production control methods already in existence and their compatibility is, however, often inadequate, this article describes a systematic approach to the configuration of production control based on the Lödding model. This model enables production control to be set up individually to suit a company and its requirements. It therefore permits today's demands regarding logistical performance to be met.
Health Monitoring of Power Transformers by Dissolved Gas Analysis using Regression Method and Study the Effect of Filtration on Oil
Economically, transformers constitute one of the largest investments in a power system. For this reason, transformer condition assessment and management is a high-priority task. If a transformer fails, it has a significant negative impact on revenue and service reliability. Monitoring the state of health of power transformers has traditionally been carried out using laboratory Dissolved Gas Analysis (DGA) tests performed at periodic intervals on oil samples collected from the transformers. DGA of transformer oil is the single best indicator of a transformer's overall condition and is a universal practice today, which started somewhere in the 1960s. Failure can occur in a transformer due to different reasons. Some failures can be limited or prevented by maintenance. Oil filtration is one of the methods to remove the dissolved gases and prevent the deterioration of the oil. In this paper we analyze the DGA data by regression methods and predict the future gas concentrations in the oil. We provide a comparative study of different traditional regression methods and the errors generated by their predictions. With the help of these data we can deduce the health of the transformer by finding the type of fault, if it has occurred or will occur in the future. Additionally, in this paper the effect of filtration on transformer health is highlighted by calculating the probability of failure of a transformer with and without oil filtration.
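As a toy illustration of trend-based DGA prediction, the following fits a least-squares line to hypothetical yearly hydrogen concentrations and extrapolates to a future sampling date. The data are invented for illustration, not measured values, and a real analysis would fit each key gas separately.

```python
def linfit(xs, ys):
    # ordinary least-squares fit y = slope*x + intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical H2 concentrations (ppm) sampled yearly
years = [0, 1, 2, 3, 4]
h2 = [10.0, 14.0, 19.0, 23.0, 28.0]

slope, intercept = linfit(years, h2)
predicted_year6 = slope * 6 + intercept  # extrapolated concentration
```

Comparing the extrapolated concentration against standard fault-gas thresholds is what lets the trend be turned into a health assessment.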
Deriving Causal Explanation from Qualitative Model Reasoning
This paper discusses a qualitative simulator QRiOM
that uses Qualitative Reasoning (QR) technique, and a process-based
ontology to model, simulate and explain the behaviour of selected
organic reactions. Learning organic reactions requires the application
of domain knowledge at intuitive level, which is difficult to be
programmed using traditional approach. The main objective of
QRiOM is to help learners gain a better understanding of the
fundamental organic reaction concepts, and to improve their
conceptual comprehension on the subject by analyzing the multiple
forms of explanation generated by the software. This paper focuses
on the generation of explanation based on causal theories to explicate
various phenomena in the chemistry subject. QRiOM has been tested
with three classes of problems related to organic chemistry, with
encouraging results. This paper also presents the results of
preliminary evaluation of QRiOM that reveal its explanation
capability and usefulness.
A New Approach for Image Segmentation using Pillar-Kmeans Algorithm
This paper presents a new approach for image
segmentation by applying Pillar-Kmeans algorithm. This
segmentation process includes a new mechanism for clustering the
elements of high-resolution images in order to improve precision and
reduce computation time. The system applies K-means clustering to
image segmentation after optimization by the Pillar algorithm. The
Pillar algorithm considers the pillars' placement, which should be
located as far as possible from each other to withstand against the
pressure distribution of a roof, as identical to the number of centroids
amongst the data distribution. This algorithm is able to optimize the
K-means clustering for image segmentation in aspects of precision
and computation time. It designates the initial centroids' positions
by calculating the accumulated distance metric between each data
point and all previous centroids, and then selects data points which
have the maximum distance as new initial centroids. This algorithm
distributes all initial centroids according to the maximum
accumulated distance metric. This paper evaluates the proposed
approach for image segmentation by comparing it with the K-means
and Gaussian Mixture Model algorithms, involving the RGB, HSV, HSL
and CIELAB color spaces. The experimental results clarify the
effectiveness of our approach to improve the segmentation quality in
aspects of precision and computational time.
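A simplified sketch of the initialization idea: each new centroid is placed at the point with the maximum accumulated distance from all previously chosen centroids, and ordinary K-means then refines them. This is an illustrative pure-Python stand-in, not the authors' exact Pillar algorithm (which includes further refinements such as outlier handling).

```python
def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def pillar_init(points, k):
    # first centroid: the point farthest from the grand mean (a simple
    # stand-in for the paper's starting rule)
    mean = [sum(c) / len(points) for c in zip(*points)]
    centroids = [max(points, key=lambda p: dist(p, mean))]
    acc = [0.0] * len(points)
    while len(centroids) < k:
        for i, p in enumerate(points):
            acc[i] += dist(p, centroids[-1])  # accumulated distance metric
        centroids.append(points[max(range(len(points)),
                                    key=lambda i: acc[i])])
    return centroids

def kmeans(points, k, iters=50):
    centroids = pillar_init(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k),
                         key=lambda j: dist(p, centroids[j]))].append(p)
        centroids = [
            [sum(c) / len(cl) for c in zip(*cl)] if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids
```

Spreading the initial centroids apart in this way is what avoids the poor local minima that random K-means initialization can fall into.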
Sustainable Development in Construction
Semnan is a city in Semnan province, northern Iran, with a
population estimated at 119,778 inhabitants. It is the provincial
capital of Semnan province. Iran is a developing country, and
construction is a basic factor of development. Hence, Semnan
needs special planning for the construction of buildings,
structures and infrastructure, and the Semnan municipality is
trying to begin this program. In addition, the city has some
historical monuments which can be interesting for tourists;
hence, Semnan's inhabitants can benefit from the tourism
industry. Optimization of energy use in the construction industry
is another activity of this municipality, and the inhabitants who
follow these regulations receive some discounts. Many parts of
Iran, such as Semnan, are located in highly seismic zones, and
structures must be constructed safely, e.g., according to recent
seismic codes. In this paper, opportunities of IT in the
construction industry of Iran are investigated in three
categories: the pre-construction phase, the construction phase
and earthquake disaster mitigation. Studies show that information
technology can be used in these areas to reduce losses and
increase benefits. Both government and private sectors must
contribute to this strategic project to obtain the best result.
Free-Form Query for Cell Phones
It is a challenge to provide a wide range of queries to
database query systems for small mobile devices, such as the PDAs
and cell phones. Currently, due to the physical and resource
limitations of these devices, most reported database querying systems
developed for them offer only a small set of pre-determined
queries for users to pose. This can be resolved by
allowing free-form queries to be entered on the devices. Hence, a
query language that does not restrict the combination of query terms
entered by users is proposed. This paper presents the free-form query
language and the method used in translating free-form queries to
their equivalent SQL statements.
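A toy illustration of the translation step: match free-form tokens against a known schema, in any order, and emit an equivalent SQL statement. The schema, table name, and field-then-value matching rule are invented for illustration; the paper's language is more general.

```python
import re

# Hypothetical mapping from user-facing field keywords to column names
SCHEMA = {"name": "name", "phone": "phone_no", "city": "city"}

def to_sql(query, table="contacts"):
    """Translate a free-form query like 'name alice city ipoh'
    into an equivalent SQL SELECT statement."""
    tokens = re.findall(r"\w+", query.lower())
    conditions = []
    i = 0
    while i < len(tokens):
        # a field keyword followed by any token becomes one condition
        if tokens[i] in SCHEMA and i + 1 < len(tokens):
            conditions.append(f"{SCHEMA[tokens[i]]} = '{tokens[i + 1]}'")
            i += 2
        else:
            i += 1  # unknown tokens are skipped rather than rejected
    where = " AND ".join(conditions) if conditions else "1=1"
    return f"SELECT * FROM {table} WHERE {where}"
```

Because unrecognized tokens are skipped instead of causing an error, the combination of terms the user enters is unrestricted, which is the property the abstract emphasizes. (A production system would use parameterized queries rather than string interpolation.)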
Force Analysis of an Automated Rapid Maxillary Expansion (ARME) Appliance
An Automated Rapid Maxillary Expander (ARME) is
a specially designed microcontroller-based orthodontic appliance to
overcome the shortcomings imposed by the traditional maxillary
expansion appliances. This new device operates by automatically
widening the maxilla (upper jaw) by expanding the midpalatal
suture. The ARME appliance that has been developed is a
combination of a modified butterfly expander appliance, a micro
gear, a micro motor, and a microcontroller to automatically
produce light and continuous pressure to expand the maxilla. For
this study, the functionality of the system is verified through
laboratory tests by measuring the force applied to the teeth each
time the maxilla expands. The laboratory
test results show that the developed appliance meets the desired
performance specifications consistently.
Closed Form Optimal Solution of a Tuned Liquid Column Damper Responding to Earthquake
In this paper the vibration behavior of a structure equipped with a tuned liquid column damper (TLCD) under a harmonic type of earthquake loading is studied. Due to the inherent nonlinear liquid damping, a great deal of computational effort is required to search for the optimum parameters of the TLCD numerically. Therefore, by linearizing the equation of motion of the single-degree-of-freedom structure equipped with the TLCD, closed form solutions of the TLCD-structure system are derived. To verify the reliability of the analytical method, the results have been compared with those of other researchers and show good agreement. Further, the effects of optimal design parameters such as the length ratio and mass ratio on the performance of the TLCD in controlling the responses of a structure are investigated using the harmonic type of earthquake excitation. Finally, the Citicorp Center, which has a very flexible structure, is used as an example to illustrate the design procedure for the TLCD under earthquake excitation.