Evaluation of Service Continuity in a Self-organizing IMS
The NGN (Next Generation Network), which can
provide advanced multimedia services over an all-IP network, has been the subject of much attention for years. While there have
been tremendous efforts to develop its architecture and protocols, especially for IMS, a key technology of the NGN, it is far
from being widely deployed. Moreover, the effort to create an advanced
signaling infrastructure satisfying many requirements has resulted in a
large number of functional components and interactions between those
components. Thus, carriers are exploring effective ways to
deploy IMS while offering value-added services. As one such
approach, we have proposed a self-organizing IMS. A self-organizing
IMS enables IMS functional components and the corresponding physical
nodes to adapt dynamically and automatically to situations such
as network load and available system resources while continuing IMS
operation. To realize this, service continuity for users is an important
requirement when a reconfiguration occurs during operation. In this
paper, we propose a mechanism that provides service continuity to
users, focus on its implementation, and describe a performance
evaluation in terms of the number of control signaling messages and the processing time.
Average Current Estimation Technique for Reliability Analysis of Multiple Semiconductor Interconnects
Average current analysis, which assesses the impact of
current flow, is very important for guaranteeing the reliability of
semiconductor systems. As semiconductor process technologies
advance, coupling capacitances often become larger than self-capacitances. In this paper, we propose an analytic technique for
analyzing the average current on interconnects in multi-conductor
structures. The proposed technique has been shown to yield acceptable
errors compared with HSPICE results while providing computational efficiency.
Usage-based Traffic Control for P2P Content Delivery
Recently, content delivery services have grown rapidly
over the Internet. For ASPs (Application Service Providers) offering
content delivery services, a P2P architecture is beneficial because it reduces
outgoing traffic from content servers. On the other hand, ISPs are
suffering from the increase in P2P traffic. P2P traffic is
unnecessarily redundant because the same content, or the same
fractions of content, is transferred through an inter-ISP link several
times. Subscriber ISPs have to pay a transit fee to upstream ISPs based
on the volume of inter-ISP traffic. To solve such problems,
several works have been done for the purpose of P2P traffic reduction.
However, these existing works cannot control the traffic volume of a
particular link. To meet this ISP operational requirement,
we propose a method that keeps the traffic volume of a link within a
preconfigured upper bound. We evaluated the proposed
method through a simulation on a 1,000-user scale and confirmed that it works well.
The traffic volume stayed below the upper bound under all evaluated conditions. Moreover, our
method controlled the traffic volume to 98.95% link usage against
the target value.
Neural Network Based Predictive DTC Algorithm for Induction Motors
In this paper, a neural-network-based predictive
DTC algorithm is proposed as an
alternative to classical approaches. An appropriate feed-forward network is chosen, and based on its value of the
derivative of the electromagnetic torque, the optimal stator voltage
vector to be applied to the induction motor by the
inverter is determined. Moreover, an appropriate torque and flux observer is also designed.
Specifying a Timestamp-based Protocol For Multi-step Transactions Using LTL
Most concurrent transactional protocols consider
serializability as the correctness criterion for transaction execution.
Usually, the proof of serializability relies on mathematical proofs
for a fixed, finite number of transactions. In this paper, we introduce
a protocol that deals with an infinite number of transactions, which are
iterated infinitely often. We specify the serializability of the transactions
and the protocol using a specification language based on temporal
logics. It is worthwhile to use temporal logics such as LTL (Linear-time
Temporal Logic) to specify transactions, in order to obtain fully automatic
verification using model checkers.
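As an illustration (not taken from the paper), one way such ordering and termination constraints can be written in LTL; the atomic propositions begin_i and end_i marking the start and end of transaction T_i are hypothetical names introduced here:

```latex
% Hypothetical mutual-exclusion-style constraint: once T_1 begins,
% T_2 may not begin until T_1 ends (G = "globally", U = "until").
\mathbf{G}\,\bigl(\mathit{begin}_1 \rightarrow (\lnot\,\mathit{begin}_2 \;\mathbf{U}\; \mathit{end}_1)\bigr)

% Liveness: every transaction that begins eventually ends (F = "eventually").
\mathbf{G}\,\bigl(\mathit{begin}_i \rightarrow \mathbf{F}\,\mathit{end}_i\bigr)
```

Formulas of this shape are directly checkable by LTL model checkers, which is the automation benefit the abstract refers to.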
Performance Analysis of Software Reliability Models using Matrix Method
This paper presents a computational methodology
based on matrix operations for a computer based solution to the
problem of performance analysis of software reliability models
(SRMs). A set of seven comparison criteria has been formulated to
rank various non-homogeneous Poisson process software reliability
models proposed during the past 30 years to estimate software
reliability measures such as the number of remaining faults, software
failure rate, and software reliability. Selection of optimal SRM for
use in a particular case has been an area of interest for researchers in
the field of software reliability. Tools and techniques for software
reliability model selection found in the literature cannot be used with
high level of confidence as they use a limited number of model
selection criteria. A real data set from a medium-sized software project, taken from
published papers, has been used to demonstrate the matrix method.
The result of this study is a ranking of SRMs based on the
permanent value of the criteria matrix formed for each model from
the comparison criteria. The software reliability model with the
highest permanent value is ranked first, and so on.
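The ranking step can be sketched as follows. This is a minimal illustration of ranking by matrix permanent, not the authors' implementation; the model names and criteria values are hypothetical, and the naive permutation-based permanent is only practical for small matrices such as the 7x7 criteria matrices described here:

```python
from itertools import permutations
from math import prod

def permanent(matrix):
    # Permanent = sum over all permutations of products of entries,
    # i.e. the determinant formula without the alternating sign.
    n = len(matrix)
    return sum(prod(matrix[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def rank_models(criteria_matrices):
    # Rank models by descending permanent of their criteria matrix;
    # the model with the highest permanent is ranked first.
    perms = {name: permanent(m) for name, m in criteria_matrices.items()}
    return sorted(perms, key=perms.get, reverse=True)
```

For example, `permanent([[1, 2], [3, 4]])` evaluates to 1*4 + 2*3 = 10, whereas the determinant would be -2.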
Symbolic Model Checking of Interactions in Sequence Diagrams with Combined Fragments by SMV
In this paper, we propose a method for detecting consistency violations between state machine diagrams and a sequence diagram defined in UML 2.0 using SMV. We extended a method that expresses these diagrams, as defined in UML 1.0, with Boolean formulas so that it can express a sequence diagram with the combined fragments introduced in UML 2.0. This extension made it possible to represent three types of combined fragments: alternative, option, and parallel. As a result of experiments, we confirmed that the proposed method can detect consistency violations correctly with SMV.
Evaluating and Selecting Optimization Software Packages: A Framework for Business Applications
Given that the optimization of business processes
is a crucial requirement to navigate, survive, and even thrive in
today's volatile business environment, this paper presents a
framework for selecting a best-fit optimization package for solving
complex business problems. The complexity level of the problem and/or
the use of unsuitable optimization software can lead to biased solutions of
the optimization problem. Accordingly, the proposed framework
identifies a number of relevant factors (e.g., decision variables,
objective functions, and modeling approach) to be considered during
the evaluation and selection process. The application domain, problem
specifications, and available accredited optimization approaches are
also taken into account. The output of the framework is a recommendation of one or two optimization
software packages that are believed to provide
the best results for the underlying problem. In addition, a set of
guidelines and recommendations on how managers can conduct an
effective optimization exercise is discussed.
Simulating Discrete Time Model Reference Adaptive Control System with Great Initial Error
This article is based on a technique called
Discrete Parameter Tracking (DPT), first introduced by A. A. Azab,
which is applicable to lower-order reference models. The order of
the reference model is (n-1), where n is the number of adjustable
parameters in the physical plant.
The technique utilizes a modified gradient method in which
knowledge of the exact order of the nonadaptive system is not
required, so as to eliminate the identification problem. The
applicability of the DPT technique was examined
through the solution of several problems.
This article presents the solution of a third-order system with
three adjustable parameters, controlled according to a second-order
reference model. The adjustable parameters have large initial errors,
which represents a demanding initial condition.
Computer simulations for the solution and analysis are provided
to demonstrate the simplicity and feasibility of the technique.
Search Engine Module in Voice Recognition Browser to Facilitate the Visually Impaired in Virtual Learning (MGSYS VISI-VL)
Nowadays, web-based technologies influence
people's daily lives in areas such as education and business.
Many web developers are eager to build their web
applications with full animation and graphics to make them look impressive,
while forgetting about accessibility for their users. Thus, this paper highlights the
usability and accessibility of a voice recognition browser as a tool to
facilitate visually impaired and blind learners in accessing a virtual
learning environment. More specifically, the objectives of the study
are (i) to explore the challenges faced by visually impaired
learners in accessing a virtual learning environment, and (ii) to determine
suitable guidelines for developing a voice recognition browser
that is accessible to the visually impaired. Furthermore, this study
was prepared based on an observation conducted with Malaysian
visually impaired learners. Finally, the results of this study
inform the development of an accessible voice recognition
browser for the visually impaired.
Remaining Useful Life Prediction Using Elliptical Basis Function Network and Markov Chain
This paper presents a novel method for remaining
useful life prediction using the Elliptical Basis Function (EBF)
network and a Markov chain. The EBF structure is trained by a
modified Expectation-Maximization (EM) algorithm in order to take
into account the missing covariate set. No explicit extrapolation is
needed for internal covariates while a Markov chain is constructed to
represent the evolution of external covariates in the study. The
estimated external and the unknown internal covariates constitute an
incomplete covariate set, which is then used and analyzed by the EBF
network to provide survival information about the asset. It is shown in the
case study that the method slightly underestimates the remaining
useful life of an asset which is a desirable result for early maintenance
decision and resource planning.
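The Markov-chain evolution of the external covariates can be sketched as follows. This is a minimal illustration under the assumption that the external covariates are discretized into a small set of states (the state names and transition probabilities below are hypothetical, not from the case study):

```python
import random

STATES = ["low", "high"]            # hypothetical external-covariate states
T = [[0.9, 0.1],                    # row-stochastic transition matrix:
     [0.5, 0.5]]                    # T[i][j] = P(next = j | current = i)

def step(state, transition, states, rng=random):
    # One step of the Markov chain: sample the next state from the
    # row of the transition matrix corresponding to `state`.
    r, acc = rng.random(), 0.0
    for j, p in enumerate(transition[states.index(state)]):
        acc += p
        if r < acc:
            return states[j]
    return states[-1]

def distribution_after(n, start, transition, states):
    # Exact n-step state distribution via repeated vector-matrix products;
    # this is what feeds the EBF network's covariate forecast.
    dist = [1.0 if s == start else 0.0 for s in states]
    for _ in range(n):
        dist = [sum(dist[i] * transition[i][j] for i in range(len(states)))
                for j in range(len(states))]
    return dist
```

Starting from "low", two steps under T give the distribution [0.86, 0.14], which always sums to one because each row of T does.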
Model Checking Consistency of UML Diagrams Using Alloy
In this paper, we propose a method for detecting consistency violations between UML state machine diagrams and communication diagrams using Alloy. Using the input language of Alloy, the proposed method expresses the system behaviors described by state machine diagrams, the message sequences described by communication diagrams, and a consistency property. As a result of applying the method to an example system, we confirmed that consistency violations could be detected correctly using Alloy.
About Analysis and Modelling of the Open Message Switching System
Modern queueing theory is one of the powerful
tools for the quantitative and qualitative analysis of communication systems, computer networks, transportation systems, and many other technical systems. This paper is devoted to the analysis of queueing
systems arising in network theory and communications theory
(so-called open queueing networks). The authors, working in the
field of queueing theory, present a theorem on the law of the iterated logarithm (LIL) for the queue length of customers in an open
queueing network and its application to the mathematical model of
the open message switching system.
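For orientation, the classical scaling that LIL theorems of this kind transfer to the queue-length process Q(t) has the following generic form; the drift term and variance parameter below are placeholders, not the paper's exact constants:

```latex
% Generic LIL form for a queue-length process Q(t), with an assumed
% fluid limit \bar{q}t and variance parameter \sigma^2:
\limsup_{t \to \infty} \frac{Q(t) - \bar{q}\, t}{\sqrt{2\, \sigma^{2}\, t \,\ln \ln t}} = 1 \quad \text{almost surely.}
```

The content of such a theorem is that the centered queue length fluctuates on the scale sqrt(t ln ln t), sharpening the central-limit-order description of the network.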
Concept Abduction in Description Logics with Cardinality Restrictions
Recently, the usefulness of Concept Abduction, a novel non-monotonic inference service for Description Logics (DLs), has been argued in the context of ontology-based applications such as semantic matchmaking and resource retrieval. Based on tableau calculus, a method has been proposed to realize this reasoning task in ALN, a description logic that supports simple cardinality restrictions as well as other basic constructors. However, in many ontology-based systems where the representation of the ontology requires more expressive formalisms to capture domain-specific constraints, this language is not sufficient. In order to increase the applicability of the abductive reasoning method in such contexts, this paper presents an extension of the tableau-based algorithm for dealing with concepts represented in ALCQ, the description logic that extends ALN with full concept negation and qualified number restrictions.
Decision Support System Based on Data Warehouse
A typical Intelligent Decision Support System is built on four components: Data Warehouse, Online Analytical Processing, Data Mining, and model-based decision support; such a system is called a Decision Support System Based on Data Warehouse (DSSBDW). This approach takes ETL, OLAP, and DM as its implementation means and integrates traditional model-driven DSS and data-driven DSS into a whole. This paper analyzes the DSSBDW architecture and DW model, and discusses the following key issues: ETL design and realization; metadata management using XML; SQL implementation, performance optimization, and data mapping in OLAP; and, lastly, the design principles and methods of the DW in DSSBDW.
A 3rd order 3bit Sigma-Delta Modulator with Reduced Delay Time of Data Weighted Averaging
This paper presents a method of reducing the feedback
delay time of DWA (Data Weighted Averaging) used in sigma-delta
modulators. The delay-time reduction results from the elimination of
the latch at the quantizer output and from falling-edge
operation. The designed sigma-delta modulator improves the timing
margin by about 16%. The sub-circuits of the sigma-delta modulator, such as
the SC (Switched Capacitor) integrator, 9-level quantizer, comparator, and
DWA, are designed with non-ideal characteristics taken into
account. The sigma-delta modulator has a maximum SNR (Signal-to-Noise Ratio) of 84 dB, or 13-bit resolution.
The Multi-scenario Knapsack Problem: An Adaptive Search Algorithm
In this paper, we study the multi-scenario knapsack problem, a variant of the well-known NP-hard single knapsack problem. We investigate the use of an adaptive algorithm for solving the problem heuristically. The method combines two complementary phases: a size-reduction phase and a dynamic 2-opt procedure. First, the reduction phase applies a polynomial reduction strategy that is used to reduce the problem size. Second, the adaptive search procedure is applied in order to attain a feasible solution. Finally, the performance of two versions of the proposed algorithm is evaluated on a set of randomly generated instances.
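The two-phase idea can be sketched as follows. This is a hedged illustration, not the authors' algorithm: it assumes the worst-case-profit variant of the multi-scenario problem (maximize the minimum profit over scenarios), uses a greedy density ordering as a stand-in for the reduction phase, and a single-swap improvement loop as a stand-in for the dynamic 2-opt procedure:

```python
def worst_case_value(sol, profits):
    # profits[s][i] is the profit of item i under scenario s;
    # the objective here is the worst case over scenarios.
    return min(sum(p[i] for i in sol) for p in profits)

def adaptive_knapsack(weights, profits, capacity):
    n = len(weights)
    # Phase 1 (reduction-flavored greedy): order items by worst-case
    # profit density and pack greedily.
    order = sorted(range(n),
                   key=lambda i: min(p[i] for p in profits) / weights[i],
                   reverse=True)
    sol, load = set(), 0
    for i in order:
        if load + weights[i] <= capacity:
            sol.add(i)
            load += weights[i]
    # Phase 2 (2-opt-flavored search): swap one item in for one out
    # whenever that improves the worst-case value and stays feasible.
    improved = True
    while improved:
        improved = False
        for i in list(sol):
            for j in [k for k in range(n) if k not in sol]:
                new_load = load - weights[i] + weights[j]
                if new_load <= capacity:
                    cand = (sol - {i}) | {j}
                    if worst_case_value(cand, profits) > worst_case_value(sol, profits):
                        sol, load = cand, new_load
                        improved = True
                        break
            if improved:
                break
    return sol
```

Being a heuristic, this offers no optimality guarantee, only feasibility and monotone improvement over the greedy start.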
A Brain Inspired Approach for Multi-View Patterns Identification
Biologically, the human brain processes information in both unimodal and multimodal ways. In fact, information is progressively abstracted and seamlessly fused, and the fusion of multimodal inputs allows a holistic understanding of a problem. The proliferation of technology has exponentially produced various sources of data, which could be likened to the state of multimodality in the human brain. This observation inspires the development of a methodology for exploring multimodal data and identifying multi-view patterns. Specifically, we propose a brain-inspired conceptual model that allows the exploration and identification of patterns at different levels of granularity, different types of hierarchies, and different types of modalities. A structurally adaptive neural network is deployed to implement the proposed model. Furthermore, the acquisition of multi-view patterns with the proposed model is demonstrated and discussed with some experimental results.
Block Cipher Based on Randomly Generated Quasigroups
Quasigroups are algebraic structures closely related to
Latin squares and have many different applications. The
construction of the block cipher is based on quasigroup string
transformations. This article describes a block cipher based on a
quasigroup of order 256, suitable for fast software encryption of
messages written in the universal ASCII code. The novelty of this
cipher lies in the fact that every time the cipher is invoked, a new set
of two randomly generated quasigroups is used, which in turn is
used to create a pair of quasigroups with dual operations. The
cryptographic strength of the block cipher is examined by calculating
the XOR-distribution tables. In this approach, some algebraic
operations allow quasigroups of huge order to be used without any
requirement that they be stored.
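The underlying quasigroup string transformation can be sketched as follows. This is a simplified illustration, not the paper's cipher: it uses order 16 instead of 256, and derives the quasigroup from a random permutation (an isotope of modular addition) rather than a fully random Latin square, so that the Latin-square property is guaranteed by construction:

```python
import random

def random_quasigroup(order, seed=None):
    # A quasigroup of order n is equivalent to an n x n Latin square.
    # Q[a][b] = pi[(a + b) % n] keeps every row and column a
    # permutation of the symbols 0..n-1.
    rng = random.Random(seed)
    pi = list(range(order))
    rng.shuffle(pi)
    return [[pi[(a + b) % order] for b in range(order)] for a in range(order)]

def e_transform(quasigroup, leader, message):
    # Quasigroup string transformation: each output symbol becomes
    # the left operand of the next step, chaining the whole string.
    out, prev = [], leader
    for a in message:
        prev = quasigroup[prev][a]
        out.append(prev)
    return out

def d_transform(quasigroup, leader, ciphertext):
    # Inverse transformation via the quasigroup's left division:
    # recover a from b = Q[prev][a] by searching prev's row.
    out, prev = [], leader
    for b in ciphertext:
        out.append(quasigroup[prev].index(b))
        prev = b
    return out
```

Because every row is a permutation, `index` always succeeds and `d_transform` exactly inverts `e_transform` for the same quasigroup and leader symbol.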
An AR/VR Based Approach Towards the Intuitive Control of Mobile Rescue Robots
An intuitive user interface for the teleoperation of mobile rescue robots is one key feature for a successful exploration of inaccessible and no-go areas. Therefore, we have developed a novel framework to embed a flexible and modular user interface into a complete 3-D virtual reality simulation system. Our approach is based on a client-server architecture to allow for collaborative control of the rescue robot by multiple clients on demand. Further, it is important that the user interface is not restricted to any specific type of mobile robot. Therefore, our flexible approach allows for the operation of different robot types with a consistent concept and user interface. In laboratory tests, we have evaluated the validity and effectiveness of our approach with the help of two different robot platforms and several input devices. As a result, an untrained person can intuitively teleoperate both robots without needing a familiarization time when changing the operating robot.
Ontology-based Domain Modelling for Consistent Content Change Management
Ontology-based modelling of multi-formatted
software application content is a challenging area in content
management. When the number of software content units is huge and
in a continuous process of change, content change management becomes
important. The management of content in this context requires
targeted access and manipulation methods. We present a novel
approach to deal with model-driven content-centric information
systems and access to their content. At the core of our approach is an
ontology-based semantic annotation technique for diversely
formatted content that can improve the accuracy of access and
systems evolution. Domain ontologies represent domain-specific
concepts and conform to metamodels. Different ontologies - from
application domain ontologies to software ontologies - capture and
model the different properties and perspectives on a software content
unit. Interdependencies between domain ontologies, the artifacts and
the content are captured through a trace model. The annotation traces
are formalised and a graph-based system is selected for the
representation of the annotation traces.
Use of RFID Technology for Identification, Traceability Monitoring and the Checking of Product Authenticity
This paper is an overview of the structure of Radio
Frequency Identification (RFID) systems and radio frequency bands
used by RFID technology. It also presents a solution based on the
application of RFID for brand authentication, traceability and
tracking, by implementing a production management system and
extending its use to traders.
Blind Source Separation for Convoluted Signals Based on Properties of Acoustic Transfer Function in Real Environments
Frequency-domain independent component analysis has
a scaling indeterminacy and a permutation problem. The scaling
indeterminacy can be solved by using a decomposed spectrum. For
the permutation problem, we have proposed rules in terms of the gain
ratio and the phase difference derived from the decomposed spectra and
the sources' coarse directions.
The present paper experimentally clarifies that the gain ratio and
the phase difference work effectively in a real environment, but their
performance depends on the frequency band, the microphone spacing, and
the source-microphone distance. From these facts it is seen that it is
difficult to attain a perfect solution for the permutation problem in a
real environment by either the gain ratio or the phase difference alone.
This paper therefore gives a complete solution to the permutation problem
in a real environment. The proposed method is simple and
computationally inexpensive, and it achieves high correction performance
regardless of the frequency band and the distances from the source
signals to the microphones. The effectiveness of the proposed method
is verified through several experiments in a real room.
Computer Simulations of an Augmented Automatic Choosing Control Using Automatic Choosing Functions of Gradient Optimization Type
In this paper we consider a nonlinear feedback
control called augmented automatic choosing control (AACC),
using automatic choosing functions of gradient optimization
type for nonlinear systems. Constant terms which arise from the section-wise
linearization of a given nonlinear system are treated as
coefficients of a stable zero dynamics. Parameters included in the
control are suboptimally selected by minimizing the Hamiltonian
with the aid of a genetic algorithm. This approach is applied to
a field excitation control problem of a power system to demonstrate
the effectiveness of the AACC. Simulation results show that the
new controller can improve performance remarkably well.
Big Bang – Big Crunch Learning Method for Fuzzy Cognitive Maps
The modeling of complex dynamic systems, for which it is
very complicated to establish mathematical models, requires new and
modern methodologies that exploit existing expert
knowledge, human experience, and historical data. Fuzzy cognitive
maps are very suitable, simple, and powerful tools for the simulation and
analysis of such dynamic systems. However, human experts
are subjective and can handle only relatively simple fuzzy cognitive
maps; therefore, there is a need to develop new approaches for the
automated generation of fuzzy cognitive maps from historical data.
In this study, a new learning algorithm called Big Bang-Big
Crunch is proposed, for the first time in the literature, for the automated
generation of fuzzy cognitive maps from data. Two real-world
examples, namely a process control system and a radiation therapy
process, and one synthetic model are used to demonstrate the
effectiveness and usefulness of the proposed methodology.
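The simulation step that any such learning algorithm must drive can be sketched as follows. This is a minimal illustration of the standard fuzzy-cognitive-map update with a sigmoid squashing function; the weight values and the squared-error fitness used to score candidate weight matrices (the quantity a search method like Big Bang-Big Crunch would minimize) are assumptions, not the paper's exact formulation:

```python
import math

def fcm_step(activations, weights, lam=1.0):
    # One synchronous FCM update:
    # A_i(t+1) = f(A_i(t) + sum_{j != i} w_ji * A_j(t)),
    # where f is the sigmoid with steepness lam.
    n = len(activations)
    return [1.0 / (1.0 + math.exp(-lam * (activations[i]
                + sum(weights[j][i] * activations[j]
                      for j in range(n) if j != i))))
            for i in range(n)]

def fitness(weights, history, lam=1.0):
    # Score a candidate weight matrix by how closely the simulated
    # next states track a historical state sequence (lower is better).
    err = 0.0
    for t in range(len(history) - 1):
        sim = fcm_step(history[t], weights, lam)
        err += sum((a - b) ** 2 for a, b in zip(sim, history[t + 1]))
    return err
```

A population-based optimizer then only needs to propose weight matrices and keep those with low `fitness`, which is exactly the role the abstract assigns to the learning algorithm.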
Design A Situated Learning Environment Using Mixed Reality Technology - A Case Study
Mixed Reality (MR) is one of the newest technologies
explored in education. It promises the potential to promote teaching
and learning and to make the learners' experience more "engaging".
However, there is still a lack of research on designing a virtual learning
environment using MR technology. In this paper, we describe
Mixed Reality technology and the characteristics of situated learning as
an instructional design for a virtual environment using MR
technology. We also present a case study that implemented this
design, together with an overview of the system.
Q-Net: A Novel QoS Aware Routing Algorithm for Future Data Networks
The expectation of network performance has changed
significantly from the early days of the ARPANET until now.
Every day, new advancements in technological infrastructure open
the door to better quality of service, and accordingly the
perceived quality of network services has increased over
time. Nowadays, for many applications late information has no value
or may even result in financial or catastrophic loss; on the other hand,
demands for some level of guarantee in providing and maintaining
quality of service are ever increasing. Against this background, a
QoS-aware routing system that can provide today's required
level of quality of service and effectively adapt to
future needs seems to be a key requirement for the future Internet. In this
work we have extended the traditional AntNet routing system to
support QoS with multiple metrics, such as bandwidth and delay;
the resulting system is named Q-Net. This novel, scalable QoS routing system aims
to provide different types of services in the network simultaneously.
Each type of service can be provided for a period of time in the
network, and network nodes do not need any previous
knowledge about it. When a type of quality of service is requested,
Q-Net allocates the required resources for the service and
guarantees the QoS requirements of the service, based on target objectives.
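The AntNet mechanism being extended can be sketched as follows. This is a hedged illustration of the classical AntNet-style pheromone update and of one plausible way to combine the two metrics the abstract names; the composite cost weighting and function names are assumptions, not Q-Net's actual formulas:

```python
def update_routing_table(pheromone, neighbor, reinforcement):
    # AntNet-style update: a backward ant reinforces the next hop it
    # arrived from and decays the alternatives, so the entries remain
    # a probability distribution over next hops (their sum stays 1).
    for n in pheromone:
        if n == neighbor:
            pheromone[n] += reinforcement * (1.0 - pheromone[n])
        else:
            pheromone[n] -= reinforcement * pheromone[n]

def link_cost(bandwidth, delay, alpha=0.5):
    # Hypothetical composite metric mixing the two QoS metrics named
    # in the abstract: low cost for high bandwidth and low delay.
    return alpha / bandwidth + (1.0 - alpha) * delay
```

The reinforcement value would normally be derived from the trip time (or here, a composite cost) observed by the ant, so that better paths are reinforced more strongly.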