Day Type Identification for Algerian Electricity Load using Kohonen Maps
Short term electricity demand forecasts are required
by power utilities for efficient operation of the power grid. In a
competitive market environment, suppliers and large consumers also
require short term forecasts in order to estimate their energy
requirements in advance. Electricity demand is influenced (among
other things) by the day of the week, the time of year and special
periods and/or days such as Ramadhan, all of which must be
identified prior to modelling. This identification, known as day-type
identification, must be included in the modelling stage either by
segmenting the data and modelling each day-type separately or by
including the day-type as an input. Day-type identification is the
main focus of this paper. A Kohonen map is employed to identify the
separate day-types in Algerian data.
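To make the day-type idea concrete, here is a minimal sketch (not the paper's actual model) of a one-dimensional Kohonen map clustering synthetic four-point daily load profiles; the profile values, map size and learning constants are illustrative assumptions.

```python
import math

def bmu(weights, x):
    # Index of the best-matching unit (smallest squared distance to x)
    return min(range(len(weights)),
               key=lambda j: sum((w - v) ** 2 for w, v in zip(weights[j], x)))

def train_som(samples, weights, lr=0.3, sigma=0.5, epochs=30):
    # 1-D Kohonen map: the BMU, and weakly its neighbours, move towards each sample
    for _ in range(epochs):
        for x in samples:
            b = bmu(weights, x)
            for j in range(len(weights)):
                h = math.exp(-((j - b) ** 2) / (2 * sigma ** 2))
                weights[j] = [w + lr * h * (v - w) for w, v in zip(weights[j], x)]
    return weights

# Synthetic normalized load profiles (4 readings per day; values are invented)
weekday = [0.2, 0.8, 0.9, 0.5]
weekend = [0.1, 0.3, 0.4, 0.2]
samples = [weekday, [0.25, 0.75, 0.85, 0.55], weekend, [0.15, 0.35, 0.35, 0.25]]

# Seed one unit near each prototype (a simplification; random init is more usual)
weights = train_som(samples, [list(weekday), list(weekend)])
```

After training, `bmu(weights, profile)` returns the day-type index for a new profile, which is the kind of labelling that can then be fed to a separate forecasting model per day-type.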
A New Evolutionary Algorithm for Cluster Analysis
Clustering is a well-known technique in data mining. One of the most widely used clustering techniques is the K-means algorithm. Solutions obtained with this technique depend on the initialization of the cluster centers, and the final solution can converge to a local minimum. To overcome these shortcomings of the K-means algorithm, this paper proposes a hybrid evolutionary algorithm, called PSO-SA-K, based on a combination of the PSO, SA and K-means algorithms, which can find better cluster partitions. Its performance is evaluated on several benchmark data sets. The simulation results show that the proposed algorithm outperforms previous approaches, such as PSO, SA and K-means, on the partitional clustering problem.
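The dependence of K-means on its initial centers, which PSO-SA-K is designed to escape, can be seen in a small sketch (plain one-dimensional K-means, not the paper's hybrid): two initializations on the same data converge to different local minima of the within-cluster sum of squares.

```python
def kmeans_1d(data, centers, iters=100):
    # Lloyd's algorithm in one dimension; returns final centers and inertia (WCSS)
    centers = list(centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in data:
            nearest = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[nearest].append(x)
        new = [sum(c) / len(c) if c else centers[j] for j, c in enumerate(clusters)]
        if new == centers:   # converged to a local minimum, not necessarily global
            break
        centers = new
    inertia = sum(min((x - c) ** 2 for c in centers) for x in data)
    return centers, inertia

data = [1, 2, 3, 10, 11, 12, 20, 21, 22]
_, inertia_a = kmeans_1d(data, [1.0, 2.0])   # both centers start in the left cluster
_, inertia_b = kmeans_1d(data, [3.0, 20.0])  # centers start in different clusters
```

Here `inertia_a` ends at 156.0 while `inertia_b` ends at 127.5: the first initialization is trapped in a worse local minimum, which is exactly the situation a PSO/SA layer is meant to escape.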
Recovering Artifacts from Legacy Systems Using Pattern Matching
Modernizing legacy applications is a key issue facing IT managers today, because there is enormous pressure on organizations to change the way they run their business to meet new requirements. The importance of software maintenance and reengineering is forever increasing, and understanding the architecture of existing legacy applications is the most critical issue for both. Artifact recovery can be facilitated by different recovery approaches, methods and tools. The existing methods provide static and dynamic sets of techniques for extracting architectural information, but are not suitable for all users in different domains. This paper presents a simple and lightweight pattern extraction technique to extract different artifacts from legacy systems using regular-expression pattern specifications with support for multiple languages. We used our custom-built tool DRT to recover artifacts from an existing system at different levels of abstraction. To evaluate our approach, a case study was conducted.
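As a flavour of regular-expression-based artifact extraction (a generic sketch, not the DRT tool itself), the following pulls function-definition names out of a fragment of C source; the pattern covers only a few return types and is purely illustrative.

```python
import re

# Hypothetical legacy C fragment
SOURCE = """
int add(int a, int b) { return a + b; }
static void log_msg(const char *s) { puts(s); }
int main(void) { return 0; }
"""

# Pattern: a known return type, then an identifier, then an opening parenthesis
FUNC_DEF = re.compile(r"\b(?:int|void|char|float|double)\s+(\w+)\s*\(")

def extract_functions(src):
    # Returns function names in order of appearance
    return FUNC_DEF.findall(src)
```

A real extractor would need per-language rule sets (as the paper's multiple-language support suggests), but the principle is the same: declarative patterns recover artifacts without parsing the full grammar.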
Propagation Model for a Mass-Mailing Worm with Mailing List
Mass-mailing worms have threatened to become a large problem for the Internet. Although many researchers have analyzed such worms, there are few studies that consider worm propagation via mailing lists. In this paper, we present a mass-mailing worm propagation model that includes the effect of mailing lists on propagation. We study its propagation by simulation with a real e-mail social network model. We show that the impact of the mailing list on mass-mailing worm propagation is significant, even if the mailing list is not large.
Human Verification in a Video Surveillance System Using Statistical Features
A human verification system is presented in this paper. The system consists of several steps: background subtraction, thresholding, line connection, region growing, morphology, star skeletonization, feature extraction, feature matching, and decision making. The proposed system combines the advantages of star skeletonization and simple statistical features. Correlation matching and probability voting are used for verification, followed by a logical operation in the decision-making stage. The proposed system uses a small number of features, and the system reliability is
Application of Artificial Neural Network for Predicting Maintainability Using Object-Oriented Metrics
The importance of software quality is increasing, leading to the development of new sophisticated techniques that can be used to construct models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examines the application of ANNs to software quality prediction using Object-Oriented (OO) metrics. Quality estimation includes estimating the maintainability of software. The dependent variable in our study was maintenance effort. The independent variables were the principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. We thus found the ANN method useful in constructing software quality models.
Query Optimization Techniques for XML Databases
Over the past few years, XML (eXtensible Mark-up Language) has emerged as the standard for information representation and data exchange over the Internet. This paper provides a kick-start for new researchers venturing into the field of XML databases. We survey storage representations for XML documents and review XML query processing and optimization techniques with respect to the particular storage instance. Various optimization technologies have been developed to solve query retrieval and updating problems. In recent years, most researchers have proposed hybrid optimization techniques; a hybrid system opens the possibility of covering each technology's weaknesses with the strengths of another. This paper reviews the advantages and limitations of these optimization techniques.
Evaluation on Recent Committed Crypt Analysis Hash Function
This paper describes a study of cryptographic hash functions, one of the most important classes of primitives used in recent cryptographic techniques. The main aim is the development of recent cryptanalysis techniques applicable to hash functions, mainly from the SHA family. We present different approaches to defining security properties more formally and present basic attacks on hash functions. We recall the Merkle-Damgard security properties of iterated hash functions. Recently proposed attacks on MD5 and SHA motivate a new hash function design. It is designed not only to have higher security but also to be faster than SHA-256. The performance of the new hash function is at least 30% better than that of SHA-256 in software, and it is secure against all known cryptographic attacks on hash functions.
Robust Digital Cinema Watermarking
With the advent of digital cinema and digital broadcasting, copyright protection of video data has become one of the most important issues.
We present a novel method of watermarking for video image data based on hardware and discrete wavelet transform techniques, and name it "traceable watermarking" because the watermarked data is constructed before the transmission process and traced after it has been received by an authorized user.
In our method, we embed the watermark in the lowest part of each image frame of the decoded video by using a hardware LSI.
Digital cinema is an important application for traceable watermarking, since a digital cinema system makes use of watermarking technology during content encoding, encryption, transmission, decoding and all the intermediate processes carried out in digital cinema systems. The watermark is embedded into randomly selected movie frames using hash functions.
The embedded watermark information can be extracted from the decoded video data; there is no need to access the original movie data. Our experimental results show that the proposed traceable watermarking method for digital cinema systems is much better than conventional watermarking techniques in terms of robustness, image quality, speed and simplicity.
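The abstract's idea of choosing watermark frames with a hash function can be sketched as follows (an illustration of the principle only, not the authors' LSI implementation): frame indices are ranked by a keyed SHA-256 digest, so the selection is deterministic for the key holder but unpredictable without the key.

```python
import hashlib

def select_frames(key, n_frames, count):
    # Rank every frame index by a keyed digest and keep the `count` smallest
    scored = sorted(range(n_frames),
                    key=lambda i: hashlib.sha256(f"{key}:{i}".encode()).digest())
    return sorted(scored[:count])
```

Re-running with the same key reproduces the same frame set, which is what lets an extractor locate the marked frames in the decoded video without the original movie data.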
Neuro-Fuzzy Network Based On Extended Kalman Filtering for Financial Time Series
The neural network's performance can be measured by efficiency and accuracy. The major disadvantages of the neural network approach are that the generalization capability of neural networks is often significantly low, and that it may take a very long time to tune the weights in the net to generate an accurate model for a highly complex and nonlinear system. This paper presents a novel neuro-fuzzy architecture based on the Extended Kalman Filter. To test the performance and applicability of the proposed neuro-fuzzy model, a simulation study of a nonlinear complex dynamic system is carried out. The proposed method can be applied to on-line incremental adaptive learning for the prediction of financial time series. A benchmark case study is used to demonstrate that the proposed model is a superior neuro-fuzzy modeling technique.
Improvement of the Quality of Internet Service Based On an Internet Exchange Point (IXP)
The Internet is without any doubt the fastest and most effective means of communication, making it possible to reach a great number of people around the world. It draws its strength from exchange points. Indeed, exchange points are used to interconnect various Internet suppliers and operators in order to allow them to exchange traffic, and it is with these interconnections that the Internet has made its great strides. They thus make it possible to limit the traffic delivered via transit operators. This limitation allows a significant improvement in the quality of service, a reduction in latency, as well as a reduction in the cost of connection for the final subscriber. In this article we show how the installation of an IXP allows an improvement and diversification of services as well as a reduction of Internet connection costs.
Shadow Detection for Increased Accuracy of Privacy Enhancing Methods in Video Surveillance Edge Devices
Shadow detection is still considered one of the main challenges for intelligent automated video surveillance systems. A prerequisite for reliable and accurate detection and tracking is correct shadow detection and classification. In such a landscape of conditions, privacy issues add more and more complexity and require reliable shadow detection.
In this work the intertwining between security, accuracy, reliability and privacy is analyzed and, accordingly, a novel architecture for Privacy Enhancing Video Surveillance (PEVS) is introduced. Shadow detection and masking are dealt with through the simultaneous combination of two different approaches. This results in a unique privacy enhancement, without affecting security. Subsequently, the methodology was employed successfully in a large-scale wireless video surveillance system; privacy-relevant information was stored and encrypted on the unit, without transferring it over an untrusted network.
Estimating Development Time of Software Projects Using a Neuro Fuzzy Approach
Software estimation accuracy is among the greatest challenges for software developers. This study aimed at building and evaluating a neuro-fuzzy model to estimate software project development time. Forty-one modules developed from ten programs were used as the dataset. Our proposed approach is compared with fuzzy logic and neural network models, and the results show that the value of MMRE (Mean Magnitude of Relative Error) applying neuro-fuzzy was substantially lower than the MMRE applying fuzzy logic and neural networks.
Computational Intelligence Techniques and Agents' Technology in E-learning Environments
In this contribution a newly developed e-learning environment is presented, which incorporates Intelligent Agents and Computational Intelligence Techniques. The new e-learning environment consists of three parts: the E-learning Platform Front-End, the Student Questioner Reasoning and the Student Model Agent. These parts are distributed geographically on dispersed computer servers, with the main focus on the design and development of these subsystems through the use of new and emerging technologies. The parts are interconnected in an interoperable way, using web services for the integration of the subsystems, in order to enhance the user modelling procedure and achieve the goals of the learning process.
Density Clustering Based On Radius of Data (DCBRD)
Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases raises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape, and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, a density-based clustering algorithm (DCBRD) is presented, relying on knowledge acquired from the data by dividing the data space into overlapped regions. The proposed algorithm discovers arbitrarily shaped clusters, requires no input parameters and uses the same definitions as the DBSCAN algorithm. We performed an experimental evaluation of its effectiveness and efficiency, and compared the results with those of DBSCAN. The results of our experiments demonstrate that the proposed algorithm is significantly effective in discovering clusters of arbitrary shape and size.
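Since DCBRD reuses DBSCAN's definitions (core points, density reachability), a compact reference implementation of classical DBSCAN is useful for comparison; this is the textbook algorithm, not DCBRD itself, and `eps`/`min_pts` are exactly the input parameters DCBRD aims to eliminate.

```python
def dbscan(points, eps, min_pts):
    # Classical DBSCAN: density-based clusters, label -1 marks noise
    def neighbours(i):
        return [j for j in range(len(points))
                if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1    # noise (may be re-labelled as a border point later)
            continue
        cluster += 1          # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:    # expand only from core points
                queue.extend(k for k in nb if labels[k] is None)
    return labels

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
labels = dbscan(points, eps=2.0, min_pts=3)
```

On this toy data the two dense groups form clusters 0 and 1 and the isolated point is labelled noise, illustrating the definitions both algorithms share.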
Interactive PTZ Camera Control System Using Wii Remote and Infrared Sensor Bar
This paper proposes an alternative control mechanism for an interactive Pan/Tilt/Zoom (PTZ) camera control system. Instead of using a mouse or a joystick, the proposed mechanism utilizes a Nintendo Wii remote and infrared (IR) sensor bar. The Wii remote has buttons that allow the user to control the movement of a PTZ camera through Bluetooth connectivity. In addition, the Wii remote has a built-in motion sensor that allows the user to give control signals to the PTZ camera through pitch and roll movements. A stationary IR sensor bar, placed at some distance opposite the Wii remote, enables the detection of yaw movement. In addition, the Wii remote's built-in IR camera has the ability to detect its spatial position, and thus generates a control signal when the user moves the Wii remote. Some experiments are carried out and their performance is compared with an industry-standard PTZ joystick.
Key Based Text Watermarking of E-Text Documents in an Object Based Environment Using Z-Axis for Watermark Embedding
Data hiding in text documents involves considerable complexity due to the nature of text documents. A robust text watermarking scheme targeting an object-based environment is presented in this research. The heart of the proposed solution is the concept of watermarking an object-based text document in which each and every text string is treated as a separate object with its own set of properties. Taking advantage of the z-ordering of objects, the watermark is applied along the z-axis, introducing zero fidelity disturbance to the text. The watermark bit sequence generated from the user key is hashed with selected properties of the given document to determine the bit sequence to embed. The bits are embedded along the z-axis, and the document has no fidelity issues when printed, scanned or photocopied.
A Fast Sign Localization System Using Discriminative Color Invariant Segmentation
Building intelligent traffic guide systems has recently become an interesting subject. A good system should be able to observe all important visual information in order to analyze the context of the scene. To do so, signs in general, and traffic signs in particular, are usually taken into account, as they carry rich information for these systems. Therefore, many researchers have put effort into the sign recognition field. Sign localization, or sign detection, is the most important step in the sign recognition process: this step filters out non-informative areas of the scene and locates candidates for the later steps. In this paper, we apply a new approach to detecting sign locations using a new color invariant model. Experiments are carried out with different datasets introduced in other works, whose authors noted the difficulty of detecting signs under unfavorable imaging conditions. Our method is simple and fast, and most importantly it gives a high detection rate in locating signs.
Towards Growing Self-Organizing Neural Networks with Fixed Dimensionality
Competitive learning is an adaptive process in which the neurons in a neural network gradually become sensitive to different input pattern clusters. The basic idea behind Kohonen's Self-Organizing Feature Maps (SOFM) is competitive learning. SOFM can generate mappings from high-dimensional signal spaces to lower-dimensional topological structures. The main features of this kind of mapping are topology preservation, feature mapping and approximation of the probability distribution of the input patterns. To overcome some limitations of SOFM, e.g., a fixed number of neural units and a topology of fixed dimensionality, Growing Self-Organizing Neural Networks (GSONN) can be used. A GSONN can change its topological structure during learning: it grows by learning and shrinks by forgetting. To speed up training and convergence, a new variant of GSONN, twin growing cell structures (TGCS), is presented here. This paper first gives an introduction to competitive learning, SOFM and its variants. Then, we discuss some GSONNs with fixed dimensionality, which include growing cell structures, its variants and the author's model, TGCS. It ends with some comparison of test results and conclusions.
Quranic Braille System
This article is concerned with the translation of Quranic verses into Braille symbols using a Visual Basic program. The system has the ability to translate the special vibrations of the Quran; this study is limited to the (Noon + Sukoon) vibrations. It builds on an existing translation system that combines a finite state machine with left and right context matching and a set of translation rules. This allows the Arabic text to be translated into Braille symbols after detecting the vibrations in the Quran verses.
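The left/right context matching described above can be sketched with a tiny rule engine (a generic illustration; the rules and output tokens below are invented placeholders over Latin text, not actual Arabic Braille cells):

```python
def translate(text, rules):
    # rules: (left_context, pattern, right_context, output), tried in listed order
    out, i = [], 0
    while i < len(text):
        for left, pat, right, rep in rules:
            end = i + len(pat)
            if (text.startswith(pat, i)
                    and text[i - len(left):i] == left      # left context must match
                    and text[end:end + len(right)] == right):  # right context too
                out.append(rep)
                i = end
                break
        else:                 # no rule fired: copy the character through unchanged
            out.append(text[i])
            i += 1
    return "".join(out)

# Placeholder rules: "ch" always maps to [CH]; "t" maps to [T2] only after an "a"
RULES = [("", "ch", "", "[CH]"),
         ("a", "t", "", "[T2]")]
```

A real Quranic Braille table would replace these placeholder rules with Arabic patterns and Braille cell outputs, with the context fields capturing conditions such as a Noon followed by Sukoon.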
A Fuzzy Logic Based Navigation of a Mobile Robot
One of the long-standing challenges in mobile robotics is the ability to navigate autonomously, avoiding modeled and unmodeled obstacles, especially in crowded and unpredictably changing environments. A successful way of structuring the navigation task to deal with this problem is through behavior-based navigation approaches. In this study, issues of individual behavior design and action coordination of the behaviors are addressed using fuzzy logic. A layered approach is employed, in which a supervision layer makes a context-based decision as to which behavior(s) to activate, rather than processing all behaviors and then blending the appropriate ones; as a result, time and computational resources are saved.
User Interface Oriented Application Development (UIOAD)
A fast and efficient model of application development, called user interface oriented application development (UIOAD), is proposed. This approach introduces a convenient way for users to develop platform-independent client-server applications.
Intelligent Multi-Agent Middleware for Ubiquitous Home Networking Environments
The next stage of the home networking environment is expected to be ubiquitous, where each piece of material is equipped with an RFID (Radio Frequency Identification) tag. To fully support the ubiquitous environment, home networking middleware should be able to recommend home services based on a user's interests and efficiently manage information on service usage profiles for the users. Therefore, USN (Ubiquitous Sensor Network) technology, which recognizes and manages an appliance's state information (location, capabilities, and so on) by connecting RFID tags, is considered. The Intelligent Multi-Agent Middleware (IMAM) architecture is proposed to intelligently manage mobile RFID-based home networking and to automatically supply information about home services that match a user's interests. Evaluation results for personalization services in IMAM using Bayesian networks and decision trees are presented.
Algorithm for Reconstructing 3D-Binary Matrix with Periodicity Constraints from Two Projections
We study the problem of reconstructing three-dimensional binary matrices whose interiors are accessible only through a few projections. This question is prominently motivated by the demand in materials science for tools for reconstructing crystalline structures from images obtained by high-resolution transmission electron microscopy. Various approaches have been suggested to reconstruct a 3D object (crystalline structure) by reconstructing slices of the 3D object. To handle the ill-posedness of the problem, a priori information such as convexity, connectivity and periodicity is used to limit the number of possible solutions. Formally, a 3D object (crystalline structure) with a priori information is modeled by a class of 3D binary matrices satisfying that information. We consider 3D binary matrices with periodicity constraints, and we propose a polynomial-time algorithm to reconstruct them from two orthogonal projections.
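For the 2D slices mentioned above, a classical greedy (Ryser-style) reconstruction from row and column sums looks like the following; it is a sketch of the discrete-tomography building block only, not the paper's periodicity-constrained algorithm.

```python
def reconstruct(row_sums, col_sums):
    # Greedy slice reconstruction: process rows in decreasing order of row sum,
    # placing each row's ones in the columns with the largest remaining column sums.
    n_cols = len(col_sums)
    remaining = list(col_sums)
    order = sorted(range(len(row_sums)), key=lambda i: -row_sums[i])
    matrix = [[0] * n_cols for _ in row_sums]
    for i in order:
        cols = sorted(range(n_cols), key=lambda j: -remaining[j])[:row_sums[i]]
        for j in cols:
            if remaining[j] == 0:
                return None          # projections are not consistent
            matrix[i][j] = 1
            remaining[j] -= 1
    if any(remaining):
        return None
    return matrix
```

For feasible projections this produces one binary matrix matching both sums; the paper's contribution is precisely that adding periodicity constraints narrows this (generally large) solution set while keeping reconstruction polynomial.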
A Semantic Recommendation Procedure for Electronic Product Catalog
To overcome the product overload faced by Internet shoppers, we introduce a semantic recommendation procedure which is more efficient when applied to Internet shopping malls. The suggested procedure recommends semantically related products to customers and is based on Web usage mining, product classification, association rule mining, and frequent purchasing patterns. We applied the procedure to the MovieLens data set for performance evaluation, and some experimental results are provided. The experimental results show superior performance in terms of coverage and precision.
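The two reported metrics can be stated precisely; the following definitions (standard ones, assumed to match the paper's usage) compute precision for one user's recommendation list and catalog coverage over all users:

```python
def precision(recommended, relevant):
    # Fraction of recommended items that the user actually found relevant
    return len(set(recommended) & set(relevant)) / len(recommended)

def coverage(all_recommendations, catalog):
    # Fraction of the catalog that the system ever recommends to anyone
    recommended_items = set()
    for recs in all_recommendations:
        recommended_items.update(recs)
    return len(recommended_items) / len(catalog)
```

Precision rewards accurate lists, while coverage penalizes a recommender that keeps suggesting the same few popular products; reporting both, as the paper does, guards against optimizing one at the expense of the other.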
A Universal Model for Content-Based Image Retrieval
In this paper a novel approach for generalized image retrieval based on semantic contents is presented. It uses a combination of three feature extraction methods, namely color, texture, and the edge histogram descriptor, and there is provision to add new features in the future for better retrieval efficiency. Any combination of these methods, whichever is more appropriate for the application, can be used for retrieval. This is provided through the User Interface (UI) in the form of relevance feedback. The image properties are analyzed using computer vision and image processing algorithms. For color, histograms of the images are computed; for texture, co-occurrence matrix based measures such as entropy and energy are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For the retrieval of images, a novel idea based on a greedy strategy is developed to reduce the computational complexity. The entire system was developed using AForge.Imaging (an open source product), MATLAB .NET Builder, C#, and Oracle 10g. The system was tested with the Corel image database containing 1000 natural images and achieved good results.
Interoperability in Component Based Software Development
Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate the reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communications, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns: specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on assembling components available on a local area network or on the net. These components must be located and identified in terms of available services and communication protocols before any request. The first part of the article introduces the base concepts of components and middleware, the following sections describe the different up-to-date models of communication and interaction, and the last section shows how different models can communicate among themselves.
A Novel In-Place Sorting Algorithm with O(n log z) Comparisons and O(n log z) Moves
In-place sorting algorithms play an important role in many fields, such as very large database systems, data warehouses, and data mining. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the input unsorted array in place, producing segments that are ordered relative to each other but whose elements are yet to be sorted. This first phase requires linear time, while in the second phase the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. The algorithm performs, in the worst case, for an array of size n, O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and of the required number of moves is presented, along with the auxiliary storage requirements of the proposed algorithm.
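The two-phase structure (rearrange into relatively ordered segments in linear time, then sort each segment in place) can be illustrated with a simplified stand-in: one value-based partition pass plus in-place insertion sort. This mirrors only the phase structure, not the authors' O(n log z) algorithm.

```python
def phase1_partition(a):
    # Single linear pass: afterwards a[:i] <= pivot <= a[i:], i.e. two segments
    # ordered relative to each other but internally unsorted.
    pivot = sorted((a[0], a[len(a) // 2], a[-1]))[1]   # median-of-three pivot value
    i, j = 0, len(a) - 1
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    return i

def phase2_insertion_sort(a, lo, hi):
    # In place with O(1) auxiliary storage (the real phase 2 achieves O(z log z))
    for k in range(lo + 1, hi):
        v, m = a[k], k
        while m > lo and a[m - 1] > v:
            a[m] = a[m - 1]
            m -= 1
        a[m] = v

def two_phase_sort(a):
    if len(a) < 2:
        return a
    split = phase1_partition(a)
    phase2_insertion_sort(a, 0, split)
    phase2_insertion_sort(a, split, len(a))
    return a
```

Because the partition guarantees every element of the left segment is at most every element of the right one, sorting each segment independently yields a fully sorted array, which is the invariant the paper's first phase also establishes.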
Choosing R-tree or Quadtree Spatial Data Indexing in One Oracle Spatial Database System to Make Faster Showing Geographical Map in Mobile Geographical Information System Technology
The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of daily "business objects" in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. We can use wireless Internet technology to transfer spatial data from server to client or vice versa. However, the problem with wireless Internet is system bottlenecks that can make the data transfer process inefficient; the reason is the large volume of spatial data. Optimization of the process of transferring and retrieving data is therefore an essential issue that must be considered. An appropriate choice between the R-tree and Quadtree spatial data indexing methods can optimize the process. With the rapid proliferation of these databases in the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching. Commercial database vendors like Oracle have also started implementing spatial indexing to cater to large and diverse GIS applications. This paper focuses on the decision between R-tree and quadtree spatial indexing using an Oracle spatial database in a mobile GIS application. Under our experimental conditions, choosing the appropriate Quadtree or R-tree spatial data indexing method in a single spatial database can reduce retrieval time by up to 42.5%.
Recognition and Reconstruction of Partially Occluded Objects
A new automatic system for the recognition and reconstruction of rescaled and/or rotated partially occluded objects is presented. The objects to be recognized are described by 2D views, and each view is occluded by several half-planes. The whole object views and their visible parts (linear cuts) are then stored in a database. To establish whether a region R of an input image represents a possibly occluded object, the system generates a set of linear cuts of R and compares them with the elements in the database. Each linear cut of R is associated with the most similar database linear cut. R is recognized as an instance of the object O if the majority of the linear cuts of R are associated with linear cuts of views of O. In the case of recognition, the system reconstructs the occluded part of R and determines the scale factor and the orientation in the image plane of the recognized object view. The system has been tested on two different datasets of objects, showing good performance both in terms of recognition and reconstruction accuracy.
Acquiring Contour Following Behaviour in Robotics through Q-Learning and Image-based States
In this work a visual and reactive contour-following behaviour is learned by reinforcement. With artificial vision the environment is perceived in 3D, and it is possible to avoid obstacles that are invisible to other sensors more common in mobile robotics. Reinforcement learning reduces the need for intervention in behaviour design and simplifies its adjustment to the environment, the robot and the task. In order to facilitate its generalisation to other behaviours and to reduce the role of the designer, we propose a regular image-based codification of states. Even though this is much more difficult, our implementation converges and is robust. Results are presented with a Pioneer 2 AT robot on a Gazebo 3D simulator.
Determining Cluster Boundaries Using Particle Swarm Optimization
Self-organizing map (SOM) is a well-known data reduction technique used in data mining. Data visualization can reveal structure in data sets that is otherwise hard to detect from the raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by SOMs, but they generally do not take into account the distribution of code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of a generic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOMs. The PSO algorithm utilizes the U-matrix of the SOM to determine cluster boundaries. The application of our method to unlabeled call data for a mobile phone operator demonstrates its feasibility; the results of this novel automatic method correspond well to boundary detection through visual inspection of the code vectors and through the k-means algorithm.
Visualizing Transit Through a Web Based Geographic Information System
Currently, in many major cities, public transit schedules are disseminated through lists of routes, grids of stop times and static maps. This paper describes a web-based geographic information system which disseminates the same schedule information through intuitive GIS techniques. Using data from Calgary, Canada, a map-based interface has been created to allow users to see routes, stops and moving buses all at once. Zoom and pan controls as well as satellite imagery allow users to apply their personal knowledge of the local geography to obtain faster and more pertinent transit results. Using asynchronous requests to web services, users are immersed in an application where buses and stops can be added and removed interactively, without waiting for responses to HTTP requests.
Group Key Management Protocols: A Novel Taxonomy
Group key management is an important functional building block for any secure multicast architecture, and it has therefore been extensively studied in the literature. In this paper we present the relevant group key management protocols and then compare them against some pertinent criteria.
Fuzzy Logic Approach to Robust Regression Models of Uncertain Medical Categories
Dichotomization of the outcome by a single cut-off point is an important part of various medical studies. Usually the relationship between the resulting dichotomized dependent variable and the explanatory variables is analyzed with linear regression, probit regression or logistic regression. However, in many real-life situations, a certain cut-off point dividing the outcome into two groups is unknown and can be specified only approximately, i.e. surrounded by some (small) uncertainty. This means that, in order to have any practical meaning, the regression model must be robust to this uncertainty. In this paper, we show that neither the beta in the linear regression model nor its significance level is robust to small variations in the dichotomization cut-off point. As an alternative robust approach to the problem of uncertain medical categories, we propose to use the linear regression model with a fuzzy membership function as the dependent variable. This fuzzy membership function denotes to what degree the value of the underlying (continuous) outcome falls below or above the dichotomization cut-off point. We demonstrate that the linear regression model for the fuzzy dependent variable can be insensitive to the uncertainty in the cut-off point location. We present modeling results from a real study of low hemoglobin levels in infants: we systematically test the robustness of the binomial regression model and of the linear regression model with the fuzzy dependent variable by changing the boundary of the category Anemia, and show that the behavior of the latter model persists over a quite wide interval.
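The fuzzy dependent variable can be sketched concretely. Here a logistic membership function (one reasonable choice; the paper does not mandate this exact form) grades how far a hemoglobin value falls below an assumed cut-off, and a one-predictor least-squares fit is run on the membership values instead of hard 0/1 labels; the data and cut-off are invented for illustration.

```python
import math

def membership_below(y, cutoff, width):
    # Degree (0..1) to which y lies below the cut-off; `width` sets the fuzziness
    return 1.0 / (1.0 + math.exp((y - cutoff) / width))

def ols_fit(xs, ys):
    # Ordinary least squares for a single predictor: returns (slope, intercept)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Illustrative data: hemoglobin (g/dL) declining with some risk factor x
xs = [0.0, 1.0, 2.0, 3.0]
hb = [12.5, 11.5, 10.5, 9.5]
mu = [membership_below(y, cutoff=11.0, width=0.5) for y in hb]
slope, intercept = ols_fit(xs, mu)
```

The fitted slope is positive (membership in the Anemia category increases with x), and, unlike a hard dichotomization, a small shift of the cut-off changes every membership value smoothly rather than flipping individual labels, which is the source of the robustness the paper reports.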
Transformer Top-Oil Temperature Modeling and Simulation
The winding hot-spot temperature is one of the most critical parameters affecting the useful life of power transformers. It can be calculated as a function of the top-oil temperature, which in turn can be estimated from measured ambient temperature and transformer loading data. This paper proposes estimating the top-oil temperature using a method based on the Least Squares Support Vector Machines (LS-SVM) approach. The estimated top-oil temperature is compared with measured data from a power transformer in operation. The results are also compared with methods based on the IEEE Standard C57.91-1995/2000 and on artificial neural networks. It is shown that the LS-SVM approach performs better than the methods based on the IEEE Standard C57.91-1995/2000 and on artificial neural networks.
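In its dual form, LS-SVM regression reduces to solving a single linear system. The sketch below follows that standard formulation with an RBF kernel on synthetic stand-in data; the kernel width, regularization constant and the toy (ambient temperature, load) relationship are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """LS-SVM regression: the dual problem is the linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                     # bias b, support values alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=1.0):
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b

# Toy stand-in for (ambient temperature, load) -> top-oil temperature.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (60, 2))
y = 30 + 20 * X[:, 0] + 15 * X[:, 1] ** 2 + rng.normal(0, 0.2, 60)

b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Unlike a standard SVM, every training point contributes a support value, and the fit requires only one dense linear solve rather than a quadratic program.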
Research of Dynamic Location Referencing Method Based On Intersection and Link Partition
Dynamic location referencing is an important technology for shielding map differences. The method references objects in the road network using a condensed selection of their real-world geographic properties stored in a digital map database, which overcomes the deficiencies of pre-coded location referencing methods. The main problems in recent research are high attribute-completeness requirements and complicated reference-point selection algorithms. Therefore, a dynamic location referencing algorithm is proposed that combines intersection points compulsorily selected at the extremities with road link points selected according to a link partition principle. An experimental system based on this theory was implemented. Tests using the Beijing digital map database showed satisfactory results and thus verified the feasibility and practicability of the method.
Design of Domain-Specific Software Systems with Parametric Code Templates
Domain-specific languages describe specific solutions to problems in the application domain. Traditionally they form a solution by composing black-box abstractions. This usually involves shallow transformations over the target model. In this paper we argue that it is potentially powerful to operate with grey-box abstractions to build a domain-specific software system. We present parametric code templates as grey-box abstractions, together with conceptual tools to encapsulate and manipulate these templates. Manipulations introduce template merging routines and can be defined in a generic way. This involves reasoning mechanisms at the code-template level. We introduce the Neurath Modelling Language (NML), which operates with parametric code templates and specifies a visualisation mapping mechanism for target models. Finally, we provide an example of constructing a domain-specific software system with predefined NML elements.
A Fast Neural Algorithm for Serial Code Detection in a Stream of Sequential Data
In recent years, fast neural networks for object/face detection have been introduced, based on cross-correlation in the frequency domain between the input matrix and the hidden weights of the neural network. In our previous papers [3,4], fast neural networks for certain code detection were introduced. It was proved there that, for fast neural networks to give the same correct results as conventional neural networks, both the weights of the neural network and the input matrix must be symmetric. This condition made those fast neural networks slower than conventional neural networks. Another symmetric form for the input matrix was introduced in [1-9] to speed up the operation of these fast neural networks. Here, corrections to the cross-correlation equations (given in [13,15,16]) that compensate for the symmetry condition are presented. After these corrections, it is proved mathematically that the number of computation steps required by fast neural networks is less than that needed by classical neural networks. Furthermore, there is no need to convert the input data into symmetric form. Moreover, the new idea is applied to increase the speed of neural networks when processing complex values. Simulation results after these corrections, obtained using MATLAB, confirm the theoretical computations.
Multipath Routing Sensor Network for Finding Crack in Metallic Structure Using Fuzzy Logic
To collect data from all sensor nodes, some changes to the Dynamic Source Routing (DSR) protocol are proposed. At each hop level, a route-ranking technique is used to distribute packets dynamically among the selected routes. To calculate the rank of a route, parameters such as delay, residual energy and probability of packet loss are used. A hybrid topology of DMPR (Disjoint Multi-Path Routing) and MMPR (Meshed Multi-Path Routing) is formed, where a braided topology is used in the faulty zones of the network. To reduce energy consumption, variable transmission ranges are used instead of a fixed transmission range. To reduce the number of packet drops, a fuzzy logic inference scheme is used to insert different types of delays dynamically. A rule-based system infers the membership function strength, which is used to calculate the final delay to be inserted at each node in the different clusters.
For the braided path, a 'Dual Line ACK Link' scheme is proposed for sending an ACK signal from a damaged node or link to a parent node, ensuring that a link-error or node-failure message is never lost. This paper develops the theoretical aspects of a model that may be applied to collecting data from any large hanging iron structure with the help of a wireless sensor network. Analyzing these data, however, is the subject of materials science and civil structural engineering, and is outside the scope of this paper.
Query Algebra for Semistructured Data
With the tremendous growth of World Wide Web (WWW) data, there is an emerging need for effective information retrieval at the document level. Several query languages, such as XML-QL, XPath, XQL, Quilt and XQuery, have been proposed in recent years to provide faster ways of querying XML data, but they still lack generality and efficiency. Our approach towards evolving a framework for querying semistructured documents is based on a formal query algebra. Two elements are introduced in the proposed framework: first, a generic and flexible data model for the logical representation of semistructured data and, second, a set of operators for manipulating the objects defined in the data model. In addition to accommodating several peculiarities of semistructured data, our model offers novel features, such as bidirectional paths for navigational querying and partitions for data transformation, that are not available in other approaches.
A Study on the Secure ebXML Transaction Models
ebXML (Electronic Business using eXtensible
Markup Language) is an e-business standard, sponsored by
UN/CEFACT and OASIS, which enables enterprises to exchange
business messages, conduct trading relationships, communicate
data in common terms and define and register business
processes. While there is tremendous e-business value in ebXML, security remains an unsolved problem and one of the largest barriers to adoption. Recently emerging XML security technologies provide the extensibility and flexibility needed for implementing security features such as encryption, digital signatures, access control and authentication.
In this paper, we propose ebXML business transaction models
that allow trading partners to securely exchange XML based
business transactions by employing XML security technologies.
We show how each XML security technology meets the ebXML
standard by constructing the test software and validating messages
between the trading partners.
Manifold Analysis by Topologically Constrained Isometric Embedding
We present a new algorithm for nonlinear dimensionality reduction that consistently uses global information, and that enables understanding the intrinsic geometry of non-convex manifolds. Compared to methods that consider only local information, our method appears to be more robust to noise. Unlike most methods that incorporate global information, the proposed approach automatically handles non-convexity of the data manifold. We demonstrate the performance of our algorithm and compare it to state-of-the-art methods on synthetic as well as real data.
Double Reduction of Ada-ECATNet Representation using Rewriting Logic
One major difficulty facing developers of concurrent and distributed software is the analysis of concurrency-based faults such as deadlocks. Petri nets are used extensively in verifying the correctness of concurrent programs. ECATNets are a category of algebraic Petri nets based on a sound combination of algebraic abstract data types and high-level Petri nets. ECATNets have sound and complete semantics because of their integration into rewriting logic and its programming language Maude. Rewriting logic is considered one of the most powerful logics for the description, verification and programming of concurrent systems. We previously proposed a method for translating Ada-95 tasking programs into the ECATNets formalism (Ada-ECATNet). In this paper, we show that the ECATNets formalism provides a more compact translation of Ada programs than other approaches based on simple Petri nets or Colored Petri nets (CPNs). Such a translation not only reduces the size of the program, but also reduces the number of program states. We also show how this compact Ada-ECATNet can be reduced further by applying reduction rules to it. This double reduction of the Ada-ECATNet permits a considerable reduction in the memory space and run time of the corresponding Maude program.
Unit Testing with Déjà-Vu Objects
In this paper we introduce a new unit testing technique called the déjà-vu object. Déjà-vu objects replace the real objects used by classes under test, allowing the execution of isolated unit tests. A déjà-vu object is able to observe and record the behaviour of a real object during real sessions, and to replace it during unit tests, returning previously recorded results. Consequently, the déjà-vu object technique can be useful when a bottom-up development and testing strategy is adopted. In this case, déjà-vu objects can increase test portability and test source code readability. At the same time, they can reduce the time programmers spend developing test code and the risk of incompatibility when switching between déjà-vu and real objects.
Web Service Architecture for Computer-Adaptive Testing on e-Learning
This paper proposes a Web service and service-oriented architecture (SOA) for a computer-adaptive testing (CAT) process on e-learning systems. The proposed architecture is developed to solve the interoperability problem of the CAT process by using Web services. The proposed SOA and Web service define all the services needed for the interactions between systems in order to deliver items and essential data from the Web service to the CAT Web-based application. These services are implemented in an XML-based architecture, providing platform independence and interoperability between the Web service and the CAT Web-based applications.
Fingerprint Verification System Using Minutiae Extraction Technique
Most fingerprint recognition techniques are based on minutiae matching and have been well studied. However, this technology still suffers from problems associated with handling poor-quality impressions. One problem besetting fingerprint matching is distortion. Distortion changes both geometric position and orientation, and leads to difficulties in establishing a match among multiple impressions acquired from the same fingertip. Marking all the minutiae accurately while rejecting false minutiae is another issue still under research. Our work combines many methods to build a minutiae extractor and a minutiae matcher. The combination of multiple methods comes from a wide investigation of the research literature. Some novel changes are also used in this work: segmentation using morphological operations, improved thinning, false-minutiae removal methods, minutiae marking with special consideration of triple branch counting, minutiae unification by decomposing a branch into three terminations, and matching in a unified x-y coordinate system after a two-step transformation.
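Minutiae marking on a binarized, thinned skeleton is commonly done with the crossing-number method: a crossing number of 1 flags a ridge termination and 3 flags a bifurcation. The sketch below illustrates this on a tiny synthetic ridge fork; it is a hypothetical example of the general technique, not the paper's implementation.

```python
import numpy as np

def crossing_number(skel, r, c):
    """Crossing number at pixel (r, c): half the number of 0/1
    transitions around the 8-neighbourhood, walked in circular order."""
    nb = [skel[r-1, c-1], skel[r-1, c], skel[r-1, c+1], skel[r, c+1],
          skel[r+1, c+1], skel[r+1, c], skel[r+1, c-1], skel[r, c-1]]
    nb.append(nb[0])  # close the circular walk
    return sum(abs(int(a) - int(b)) for a, b in zip(nb, nb[1:])) // 2

def mark_minutiae(skel):
    terms, bifs = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skel[r, c]:
                cn = crossing_number(skel, r, c)
                if cn == 1:
                    terms.append((r, c))    # ridge termination
                elif cn == 3:
                    bifs.append((r, c))     # bifurcation
    return terms, bifs

# Tiny synthetic skeleton: a horizontal ridge that forks into two branches.
skel = np.zeros((7, 9), dtype=np.uint8)
skel[3, 1:5] = 1     # main ridge
skel[2, 5:8] = 1     # upper branch
skel[4, 5:8] = 1     # lower branch

terms, bifs = mark_minutiae(skel)
print(terms, bifs)   # → [(2, 7), (3, 1), (4, 7)] [(3, 4)]
```

The abstract's "minutiae unification" step corresponds to treating each cn=3 bifurcation as three terminations for matching purposes.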
Detection of Moving Images Using Neural Network
Motion detection is a basic operation in the selection of significant segments of a video signal. For effective human-computer intelligent interaction, the computer needs to recognize motion and track the moving object. Here an efficient neural network system is proposed for motion detection against a static background. The method consists of four stages: frame separation, rough motion detection, network formation and training, and object tracking. The approach supports real-time detection and can therefore be used in defence applications, biomedical applications and robotics. It can also be used to obtain detection information related to the size, location and direction of motion of moving objects for assessment purposes. The time taken for video tracking by this neural network is only a few seconds.
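The "rough motion detection" stage against a static background can be sketched as simple frame differencing followed by locating the changed region. This is an illustrative stand-in for the pre-processing ahead of the paper's neural stages; the threshold, frame sizes and synthetic object are assumptions.

```python
import numpy as np

def rough_motion_mask(background, frame, threshold=25):
    """Flag pixels whose absolute difference from the static
    background exceeds the threshold as 'moving'."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

def bounding_box(mask):
    """Location and extent of the moving region, for tracking."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

# Synthetic 64x64 grayscale frames: a bright 8x8 block appears in the new frame.
background = np.full((64, 64), 50, dtype=np.uint8)
frame = background.copy()
frame[20:28, 30:38] = 200          # moving object

mask = rough_motion_mask(background, frame)
x0, y0, x1, y1 = bounding_box(mask)
print((x0, y0, x1, y1))  # → (30, 20, 37, 27)
```

The resulting mask and box give the size, location and (across successive frames) direction-of-motion information mentioned in the abstract.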
Verifying X.509 Certificates on Smart Cards
This paper presents a smart-card applet that is able to
verify X.509 certificates and to use the public key contained in the
certificate for verifying digital signatures that have been created
using the corresponding private key, e.g. for the purpose of authenticating
the certificate owner against the card. The approach has been
implemented as an operating prototype on Java cards.
Solving an Extended Resource Leveling Problem with Multiobjective Evolutionary Algorithms
We introduce an extended resource leveling model that abstracts real-life projects with specific work ranges for each resource. Contrary to traditional resource leveling problems, this model considers scarce resources and multiple objectives: minimizing the project makespan and leveling each resource's usage over time. We formulate this model as a multiobjective optimization problem and propose a multiobjective genetic algorithm-based solver to optimize it. The solver consists of a two-stage process: a main stage, where we obtain non-dominated solutions for all the objectives, and a post-processing stage, where we specifically seek to improve the resource leveling of these solutions. We propose an intelligent encoding for the solver that allows domain-specific knowledge to be included in the solving mechanism. The chosen encoding proves effective for solving leveling problems with scarce resources and multiple objectives. The outcomes of the proposed solver represent optimized trade-offs (alternatives) that can later be evaluated by a decision maker; this multi-solution approach is an advantage over the traditional single-solution approach. We compare the proposed solver with state-of-the-art resource leveling methods and report competitive results.
Scalable Deployment and Configuration of High-Performance Virtual Clusters
Virtualization and high performance computing have been discussed from a performance perspective in recent publications. We present and discuss a flexible and efficient approach to the management of virtual clusters. A virtual machine management tool is extended to function as a fabric for cluster deployment and management. We show how features such as saving the state of a running cluster can be used to avoid disruption. We also compare our approach to the traditional methods of cluster deployment and present benchmarks which illustrate the efficiency of our approach.
Empirical Statistical Modeling of Rainfall Prediction over Myanmar
One of the essential sectors of the Myanmar economy is agriculture, which is sensitive to climate variation. The most important climatic element affecting the agricultural sector is rainfall; rainfall prediction is therefore an important issue in an agricultural country. Multivariable polynomial regression (MPR) provides an effective way to describe complex nonlinear input-output relationships so that an outcome variable can be predicted from the others. In this paper, the modeling of monthly rainfall prediction over Myanmar is described in detail by applying the polynomial regression equation. The proposed model's results are compared with those produced by a multiple linear regression (MLR) model. Experiments indicate that the prediction model based on MPR has higher accuracy than MLR.
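The MPR-versus-MLR comparison can be sketched as follows: MPR augments the linear design matrix with squared and cross-product terms, so when the underlying relationship is nonlinear it fits more closely than MLR. The predictors, degree and synthetic relationship below are illustrative assumptions, not the paper's rainfall data.

```python
import numpy as np

def poly_features(X):
    """Degree-2 multivariable polynomial features for two predictors:
    [1, x1, x2, x1^2, x2^2, x1*x2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

def fit_lstsq(F, y):
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)
    return coef

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (120, 2))     # e.g. normalized humidity and temperature indices
y = 3 + 2 * X[:, 0] + 4 * X[:, 1]**2 + 1.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 120)

F_mpr = poly_features(X)                         # MPR design matrix
F_mlr = np.column_stack([np.ones(len(X)), X])    # MLR design matrix
rmse = lambda F: float(np.sqrt(np.mean((F @ fit_lstsq(F, y) - y) ** 2)))
print(rmse(F_mpr) < rmse(F_mlr))  # → True
```

Because the MLR columns are a subset of the MPR columns, the MPR least-squares fit can never have a larger training error, and it is strictly smaller whenever the data contain genuine nonlinear structure.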
Comparative Analysis of the Software Effort Estimation Models
Accurate software cost estimates are critical to both developers and customers. They can be used for generating requests for proposals, contract negotiations, scheduling, monitoring and control. The exact relationship between the attributes of effort estimation is difficult to establish, and a neural network is good at discovering relationships and patterns in data. In this paper, a comparative analysis of the existing Halstead, Walston-Felix, Bailey-Basili and Doty models and a neural network based model is performed. The neural network outperformed the other models considered. Hence, we propose a neural network system as a soft computing approach to modeling the effort estimation of software systems.
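For reference, the four classical LOC-based models compared here take simple closed forms. The coefficients below are the ones commonly cited in the effort-estimation literature (effort in person-months, size in KLOC); treat them as illustrative rather than authoritative.

```python
# Classic LOC-based effort models; coefficients as commonly cited
# in the literature -- an assumption for illustration, not a
# restatement of this paper's exact equations.
def halstead(kloc):       return 0.7 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047   # form for KLOC > 9

for kloc in (10, 50):
    print(kloc, round(halstead(kloc), 1), round(walston_felix(kloc), 1),
          round(bailey_basili(kloc), 1), round(doty(kloc), 1))
```

The wide spread among these estimates for the same KLOC is exactly the difficulty a trained neural network is meant to address.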
Towards a New Methodology for Developing Web-Based Systems
Web-based systems have become increasingly important because the Internet and the World Wide Web have become ubiquitous, surpassing all other technological developments in our history. The Internet, and especially company websites, has rapidly evolved in scope and extent of use: from being little more than fixed advertising material, i.e. a "web presence" with no particular influence on the company's business, to being one of the most essential parts of the company's business.
Traditional software engineering approaches with process models such as CMM and the Waterfall model do not work very well, since web system development differs from traditional development. The development differs in several ways: for example, there is a large gap between traditional software engineering designs and concepts and the low-level implementation model, and many web-based system development activities are business-oriented (for example, web applications are sales-oriented; web applications and intranets are content-oriented) rather than engineering-oriented.
This paper introduces the Increment Iterative extreme Programming (IIXP) methodology for developing web-based systems. In contrast to existing methodologies, IIXP is a combination of different traditional and modern software engineering and web engineering principles.
A Face-to-Face Education Support System Capable of Lecture Adaptation and Q&A Assistance Based On Probabilistic Inference
Keys to high-quality face-to-face education are ensuring flexibility in the way lectures are given and providing care and responsiveness to learners. This paper describes a face-to-face education support system designed to raise learner satisfaction and reduce the workload on instructors. The system consists of a lecture adaptation assistance part, which assists instructors in adapting teaching content and strategy, and a Q&A assistance part, which provides learners with answers to their questions. The core component of the former is a "learning achievement map", which is composed of a Bayesian network (BN). From learners' performance in exercises on relevant past lectures, the lecture adaptation assistance part obtains the information required to appropriately adapt the presentation of the next lecture. The core component of the Q&A assistance part is a case base, which accumulates cases consisting of questions expected from learners and their answers. The Q&A assistance part is a case-based search system equipped with a search index that performs probabilistic inference. A prototype face-to-face education support system intended for teaching Java programming has been built, and the approach was evaluated using this system. The expected degree of understanding of each learner for a future lecture was derived from his or her performance in exercises on past lectures, and this expected degree of understanding was used to select one of three adaptation levels. A model for determining the adaptation level most suitable for the individual learner has been identified. An experimental case base was built to examine the search performance of the Q&A assistance part, and it was found that the rate of successfully finding an appropriate case was 56%.
A UML Statechart Diagram-Based MM-Path Generation Approach for Object-Oriented Integration Testing
MM-Path, an acronym for Method/Message Path, describes the dynamic interactions between methods in object-oriented systems. This paper discusses the classification of MM-Paths based on the characteristics of object-oriented software. We categorize MM-Paths according to their generation reasons, effect scope and composition. A formalized representation of MM-Path is also proposed, which considers the influence of state on the response method sequences of messages. Moreover, an automatic MM-Path generation approach based on UML Statechart diagrams is presented, which resolves the difficulties in identifying and generating MM-Paths. As a result, it provides a solid foundation for further research on test case generation based on MM-Paths.
Novel Security Strategy for Real Time Digital Videos
Nowadays, video data embedding is a very challenging and interesting approach to keeping real-time video data secure, and the technique can be implemented and used in high-level applications. The rate-distortion of an image is not guaranteed, because the gain provided by accurate image frame segmentation is balanced by the inefficiency of coding objects of arbitrary shape, along with many factors, such as losses, that depend on both the coding scheme and the object structure. By using a rate controller in association with the encoder, one can dynamically adjust the target bitrate. This paper discusses keeping videos secure by mixing signature data into the original video with negligible distortion, keeping the quality of the steganographic video as close as possible to that of the original. We propose a method for embedding the signature data into separate video frames using the block Discrete Cosine Transform (DCT). These frames are then encoded using real-time H.264 encoding concepts. Recovery of the original video and the signature data at the receiver end is also proposed.
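Block-DCT signature embedding can be sketched as follows. This is a minimal quantization-index-modulation example operating on a float-valued 8x8 block; the coefficient position and quantization step are arbitrary assumptions, and the paper's actual embedding and H.264 pipeline are not reproduced.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal 1-D DCT-II matrix; the 2-D DCT of block B is C @ B @ C.T.
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

C = dct_matrix(8)
DELTA = 16.0     # quantization step for embedding (assumed)
POS = (4, 3)     # mid-frequency coefficient carrying the bit (assumed)

def embed_bit(block, bit):
    coeffs = C @ block @ C.T
    q = np.floor(coeffs[POS] / DELTA)
    # Quantization-index modulation: snap the coefficient to one of two
    # lattices depending on the signature bit.
    coeffs[POS] = (q + (0.75 if bit else 0.25)) * DELTA
    return C.T @ coeffs @ C

def extract_bit(block):
    coeffs = C @ block @ C.T
    return (coeffs[POS] % DELTA) > DELTA / 2

rng = np.random.default_rng(3)
frame_block = rng.uniform(0, 255, (8, 8))   # one 8x8 luminance block
bits_ok = all(extract_bit(embed_bit(frame_block, b)) == bool(b) for b in (0, 1))
print(bits_ok)  # → True
```

Since only one mid-frequency coefficient is nudged by at most DELTA, the spatial distortion stays small relative to the 0-255 pixel range, matching the "negligible distortion" goal.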