3D Definition for Human Smiles
The study explored various types of human smiles and extracted the key factors affecting them. These key factors were then converted into a set of control points that can serve the creation of facial expressions by 3D animators and can further be applied to face simulation for robots in the future. First, hundreds of pictures of human smiles were collected and analyzed to identify the key factors of facial expression. Then, the factors were converted into a set of control points, with sizing parameters calculated proportionally. Finally, two different faces were constructed to validate the parameters by simulating smiles of the same type as the original one.
Text Retrieval Relevance Feedback Techniques for Bag of Words Model in CBIR
The state-of-the-art Bag of Words model in Content-Based Image Retrieval has been used for years, but relevance feedback strategies for this model have not been fully investigated. Because it is inspired by text retrieval, the Bag of Words model can draw on the wealth of knowledge and practice available in text retrieval. We study relevance feedback models from text retrieval and experiment with adapting them to image retrieval. The experiments show that techniques from text retrieval give good results for image retrieval and that further improvements are possible.
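The abstract does not name the specific feedback technique carried over; the classic relevance feedback formula in text retrieval is Rocchio's, which a Bag of Words image retrieval system could reuse directly on visual-word histograms. A minimal pure-Python sketch (the weights alpha, beta, gamma are conventional defaults, not values from the paper):

```python
def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: move the query vector toward the
    centroid of relevant documents and away from non-relevant ones."""
    dims = len(query)
    new_q = [alpha * q for q in query]
    for doc in relevant:
        for i in range(dims):
            new_q[i] += beta * doc[i] / len(relevant)
    for doc in non_relevant:
        for i in range(dims):
            new_q[i] -= gamma * doc[i] / len(non_relevant)
    return [max(0.0, w) for w in new_q]  # negative weights are usually clipped

# Toy 3-term vocabulary: feedback boosts term 2, suppresses term 3.
q = [1.0, 0.0, 0.0]
rel = [[0.0, 1.0, 0.0]]
nonrel = [[0.0, 0.0, 1.0]]
print(rocchio(q, rel, nonrel))  # [1.0, 0.75, 0.0]
```

The same update applies unchanged to visual-word histograms, which is exactly the kind of transfer the abstract argues for.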
High Resolution Images: Segmenting, Extracting Information and GIS Integration
As the world changes ever more rapidly, the demand for up-to-date information for resource management, environmental monitoring, and planning is increasing exponentially. Integrating Remote Sensing with GIS technology significantly improves the ability to address these concerns. This paper presents an alternative way of updating GIS applications using image processing and high-resolution images. We show a method of high-resolution image segmentation using graphs and morphological operations, where a preprocessing step (a watershed operation) is required. A morphological process is then applied using the opening and closing operations. After this segmentation we can extract significant cartographic elements such as urban areas, streets, or green areas. The result of this segmentation and extraction is then used to update GIS applications. Some examples are shown using aerial photography.
High Securing Cover-File of Hidden Data Using Statistical Technique and AES Encryption Algorithm
Nowadays, the rapid development of multimedia and the Internet allows wide distribution of digital media data. It has become much easier to edit, modify, and duplicate digital information. Digital documents are also easy to copy and distribute, and therefore face many threats. With the large flood of information and the development of digital formats, security and privacy have become major issues, and appropriate protection is necessary given the significance, accuracy, and sensitivity of the information. Protection systems can be classified into information hiding, information encryption, and combinations of hiding and encryption that increase information security. The strength of the science of information hiding lies in the non-existence of standard algorithms for hiding secret messages. There is also randomness in hiding methods, such as combining several media (covers) with different methods to pass a secret message, and there are no formal methods for discovering hidden data. For these reasons, the task of this research is difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide information (a data file) in an executable file (EXE) and to detect the hidden file; we present the implementation of a steganography system that embeds information in executable files, and (EXE) files have been investigated for this purpose. The system tries to solve the problem of the size of the cover file and to make the result undetectable by anti-virus software. The system includes two main functions. The first is hiding the information in a Portable Executable file (EXE) through four processes: specify the cover file, specify the information file, encrypt the information, and hide the information. The second is extracting the hidden information through three processes: specify the stego file, extract the information, and decrypt the information. The system achieves its main goals: it makes the size of the cover file independent of the size of the information, and the resulting file does not raise any conflict with anti-virus software.
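As an illustration of the two-function pipeline described (encrypt, then hide in an EXE; later extract, then decrypt), here is a minimal sketch. It appends the payload after the PE image as overlay data, which loaders ignore, so the executable stays runnable; the marker, the length header, and the SHA-256-based stream cipher (standing in for AES, which is not in the Python standard library) are all assumptions, not the paper's actual format:

```python
import hashlib

MAGIC = b"STEG"  # hypothetical end-of-file marker; the paper's layout is not specified

def keystream(key: bytes, n: int) -> bytes:
    # Simple SHA-256 counter-mode stream standing in for AES.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def hide(cover: bytes, secret: bytes, key: bytes) -> bytes:
    # Appending after the PE image keeps the executable runnable:
    # loaders ignore bytes past the declared sections (overlay data).
    enc = bytes(a ^ b for a, b in zip(secret, keystream(key, len(secret))))
    return cover + MAGIC + len(enc).to_bytes(4, "big") + enc

def extract(stego: bytes, key: bytes) -> bytes:
    pos = stego.rindex(MAGIC)
    n = int.from_bytes(stego[pos + 4:pos + 8], "big")
    enc = stego[pos + 8:pos + 8 + n]
    return bytes(a ^ b for a, b in zip(enc, keystream(key, len(enc))))

cover = b"MZ" + b"\x00" * 64           # stand-in for a real EXE image
stego = hide(cover, b"secret data", b"password")
assert stego[:len(cover)] == cover      # cover bytes are untouched
print(extract(stego, b"password"))      # b'secret data'
```

Because the payload length is carried in its own header, the hidden data size is independent of the cover size, mirroring the independence goal stated above.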
Towards Self-ware via Swarm-Array Computing
The work reported in this paper proposes
Swarm-Array computing, a novel technique inspired by swarm
robotics, and built on the foundations of autonomic and parallel
computing. The approach aims to apply autonomic computing
constructs to parallel computing systems and in effect achieve the
self-ware objectives that describe self-managing systems. The
constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm, and the landscape, is considered. Approaches that bind these constituents together are proposed. Space applications employing FPGAs are identified as a potential area for applying swarm-array computing to building reliable systems. The feasibility of the proposed approach is validated on the SeSAm multi-agent simulator, and landscapes are generated using the MATLAB toolkit.
Automatic Extraction of Roads from High Resolution Aerial and Satellite Images with Heavy Noise
Aerial and satellite images are information rich. They are also complex to analyze. For GIS systems, many features require fast and reliable extraction of roads and intersections. In this paper, we study efficient and reliable automatic extraction algorithms that address difficult issues commonly seen in high-resolution aerial and satellite images yet not well addressed in existing solutions, such as blurring, broken or missing road boundaries, lack of road profiles, heavy shadows, and interfering surrounding objects. The new scheme is based on a new method, the reference circle, to properly identify the pixels that belong to the same road and use this information to recover the whole road network. This feature is invariant to the shape and direction of roads and tolerates heavy noise and disturbances. Road extraction based on reference circles is much more noise tolerant and flexible than previous edge-detection-based algorithms. The scheme is able to extract roads reliably from images with complex contents and heavy obstructions, such as the high-resolution aerial/satellite images available from Google Maps.
Using PFA in Feature Analysis and Selection for H.264 Adaptation
Classification of video sequences based on their contents is a vital process for adaptation techniques. It helps decide which adaptation technique best fits the resource reduction requested by the client. In this paper we use the principal feature analysis algorithm to select a reduced subset of video features. The main idea is to select only one feature from each class, based on the similarities between the features within that class. Our results show that, using this feature reduction technique, the source video features can be completely omitted from future classification of video sequences.
A New Decision Making Approach based on Possibilistic Influence Diagrams
This paper proposes a new decision making approach based on quantitative possibilistic influence diagrams, which are an extension of standard influence diagrams in the possibilistic framework. We treat in particular the case where several expert opinions relative to value nodes are available. An initial expert assigns confidence degrees to the other experts and fixes a similarity threshold that the provided possibility distributions should respect. To illustrate our approach, an evaluation algorithm for these multi-source possibilistic influence diagrams is also proposed.
Comparing Arabic and Latin Handwritten Digits Recognition Problems
A comparison between the performance of Latin and Arabic handwritten digit recognition is presented. The performance of ten different classifiers is tested on two similar Arabic and Latin handwritten digit databases. The analysis shows that the Arabic handwritten digit recognition problem is easier than the Latin one: the interclass difference for Latin digits is smaller than for Arabic digits, and the variance in writing Latin digits is larger. Consequently, weaker yet fast classifiers are expected to play a more prominent role in Arabic handwritten digit recognition.
An Implementation of MacMahon's Partition Analysis in Ordering the Lower Bound of Processing Elements for the Algorithm of LU Decomposition
Many scientific and engineering problems require solving large systems of linear equations of the form Ax = b in an effective manner. LU decomposition offers a good choice for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. We use the so-called Omega calculus as a computational method for solving problems via their corresponding Diophantine relations. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside a polyhedron. Then the Mathematica program DiophantineGF.m is run. This program calculates the generating function from which it is possible to find the number of solutions to the system of Diophantine equalities, which in fact gives the lower bound on the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well. Keywords: generating function, lattice points in a polyhedron, lower bound of processing elements, system of Diophantine equations, Omega calculus.
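For tiny instances, the lattice point count that DiophantineGF.m obtains from a generating function can be cross-checked by direct enumeration. A sketch for the hypothetical system x + y + z = n, whose generating function 1/(1-t)^3 yields C(n+2, 2) nonnegative solutions (this example is illustrative; the paper's actual systems come from the LU decomposition algorithm):

```python
from itertools import product
from math import comb

def count_solutions(n):
    """Count nonnegative integer solutions of x + y + z = n by
    enumerating lattice points in the cube 0 <= x, y, z <= n."""
    return sum(1 for x, y, z in product(range(n + 1), repeat=3)
               if x + y + z == n)

# The generating function 1/(1-t)^3 predicts C(n+2, 2) solutions,
# the kind of closed form the generating-function approach produces.
for n in range(6):
    assert count_solutions(n) == comb(n + 2, 2)
print(count_solutions(10))  # 66
```

Enumeration scales exponentially with the number of variables, which is precisely why the symbolic generating-function route is used for real problem sizes.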
An Enhanced Distributed System to Improve the Time Complexity of Binary Indexed Trees
Distributed computing systems are usually considered the most suitable model for practical implementation of many parallel algorithms. In this paper an enhanced distributed system is presented to improve the time complexity of Binary Indexed Trees (BIT). The proposed system uses multiple uniform processors with identical architectures and a specially designed distributed memory system. The analysis of this system shows that it reduces the time complexity of the read query to O(log(log(N))) and the update query to constant complexity, while the naive solution has a time complexity of O(log(N)) for both queries. The system was implemented and simulated using the VHDL and Verilog hardware description languages, with Xilinx ISE 10.1 as the development environment and ModelSim 6.1c as the simulation tool. The simulation shows that the overhead resulting from the wiring and communication between the system fragments can be fairly neglected, which makes it possible in practice to reach the maximum speedup offered by the proposed model.
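For reference, here is a standard single-processor Binary Indexed (Fenwick) Tree, the naive O(log N) baseline for both queries that the proposed distributed system improves on:

```python
class BIT:
    """Binary Indexed (Fenwick) Tree: O(log N) point update and
    prefix-sum query, the baseline the distributed system improves."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i, delta):        # 1-based index
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i                # climb to the next responsible node

    def query(self, i):                # sum of elements 1..i
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i                # drop the lowest set bit
        return s

bit = BIT(8)
for i, v in enumerate([3, 1, 4, 1, 5, 9, 2, 6], start=1):
    bit.update(i, v)
print(bit.query(4))  # 3 + 1 + 4 + 1 = 9
```

Both loops walk at most one bit of the index per iteration, which is where the O(log N) cost per operation comes from.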
A Simplified and Effective Algorithm Used to Mine Similar Processes: An Illustrated Example
The running logs of a process hold valuable information about its executed activity behavior and generated activity logic structure. These informative logs can be extracted, analyzed, and utilized to improve the efficiency of the process's execution and conduct. One of the techniques used to accomplish such process improvement is called process mining, and mining similar processes is one such improvement task. Rather than directly mining similar processes using a single comparison coefficient or a complicated fitness function, this paper presents a simplified heuristic process mining algorithm with two similarity comparisons that relatively conform the activity logic sequences (traces) of the mined processes to those of a normalized (regularized) one. The relative process conformance finds which of the mined processes match the required activity sequences and relationships, toward necessary and sufficient application of the mined processes to process improvement. One similarity is defined by the relationships in terms of the number of similar activity sequences existing in different processes; the other expresses the degree of similar (identical) activity sequences among the conforming processes. Since these two similarities are taken with respect to certain typical behaviors (activity sequences) occurring in an entire process, the common problems that often appear in other process conformance techniques, such as the inappropriateness of an absolute comparison and the inability to elicit intrinsic information, can be solved by the relative process comparison presented in this paper. To demonstrate the potential of the proposed algorithm, a numerical example is provided.
On Reversal and Transposition Medians
During the last years, the genomes of more and more species have been sequenced, providing data for phylogenetic reconstruction based on genome rearrangement measures. A main task in all phylogenetic reconstruction algorithms is to solve the median of three problem. Although this problem is NP-hard even for the simplest distance measures, there are exact algorithms for the breakpoint median and the reversal median that are fast enough for practical use. In this paper, this approach is extended to the transposition median as well as to the weighted reversal and transposition median. Although no exact polynomial algorithm is known even for the pairwise distances, we show that in most cases it is possible to solve these problems exactly within reasonable time by using a branch and bound algorithm.
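The median of three problem is posed in terms of a pairwise distance; for the breakpoint median that distance is the breakpoint distance. A sketch for unsigned permutations framed by 0 and n+1 (work in this area typically uses signed permutations, so this is a simplification):

```python
def breakpoint_distance(p, q):
    """Number of adjacencies of permutation p that are not adjacencies
    of q. Permutations over 1..n are framed with 0 and n+1 so that the
    adjacencies at both ends are counted as well."""
    n = len(p)
    fp = [0] + list(p) + [n + 1]
    fq = [0] + list(q) + [n + 1]
    adj_q = set()
    for a, b in zip(fq, fq[1:]):
        adj_q.add((a, b))
        adj_q.add((b, a))              # unsigned case: orientation ignored
    return sum(1 for a, b in zip(fp, fp[1:]) if (a, b) not in adj_q)

print(breakpoint_distance([1, 2, 3], [1, 2, 3]))  # 0
print(breakpoint_distance([2, 1, 3], [1, 2, 3]))  # 2
```

The breakpoint median of three genomes is then the permutation minimizing the sum of this distance to all three, which is the quantity the exact algorithms above optimize.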
Partial Connection Architecture for Mobile Computing
Mobile computing environments present many problems that do not exist in distributed systems consisting of stationary hosts: host mobility, sudden disconnection caused by handoff in wireless networks, voluntary disconnection for efficient power consumption of a mobile host, and so on. To solve these problems, we propose the architecture of a Partial Connection Manager (PCM) in this paper. The PCM creates a limited number of mobile agents according to priority, sends them in parallel to servers, and combines the results to process the user request rapidly. By applying the proposed PCM to a mobile market agent service, we find that the mobile agent technique is well suited to the mobile computing environment and to the partial connection problem.
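The dispatch-and-combine behavior described for PCM can be sketched with threads standing in for mobile agents; the server names, priority ordering, and agent limit below are illustrative assumptions, not the paper's implementation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def agent_task(server, request):
    # Stand-in for a mobile agent querying one server; in the paper the
    # agents migrate to servers over possibly partial connections.
    return {"server": server, "result": f"{request}@{server}"}

def partial_connection_manager(servers, request, max_agents=3):
    """Dispatch at most max_agents agents in parallel (priority is
    assumed to be the order of `servers`) and combine the results."""
    chosen = servers[:max_agents]          # limited number of agents, by priority
    combined = []
    with ThreadPoolExecutor(max_workers=max_agents) as pool:
        futures = [pool.submit(agent_task, s, request) for s in chosen]
        for f in as_completed(futures):
            combined.append(f.result())    # combine results as agents return
    return combined

results = partial_connection_manager(["s1", "s2", "s3", "s4"], "price?")
print(sorted(r["server"] for r in results))  # ['s1', 's2', 's3']
```

Capping the number of agents mirrors PCM's "limited number of mobile agents according to priority"; collecting futures as they complete mirrors combining partial results before every server has answered.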
Using Perspective Schemata to Model the ETL Process
Data Warehouses (DWs) are repositories which contain the unified history of an enterprise for decision support. The data must be Extracted from information sources, Transformed, and integrated to be Loaded (ETL) into the DW, using ETL tools. These tools focus on data movement, and the models are only used as a means to this aim. From a conceptual viewpoint, the authors want to innovate the ETL process in two ways: 1) to make the compatibility between models explicit in a declarative fashion, using correspondence assertions, and 2) to identify instances from different sources that represent the same real-world entity. This paper presents an overview of the proposed framework for modeling the ETL process, which is based on the use of a reference model and perspective schemata. This approach provides the designer with a better understanding of the semantics associated with the ETL process.
A Formulation of the Latent Class Vector Model for Pairwise Data
In this research, a latent class vector model for pairwise data is formulated. As compared to the basic vector model, this model yields consistent estimates of the parameters since the number of parameters to be estimated does not increase with the number of subjects. The result of the analysis reveals that the model was stable and could classify each subject to the latent classes representing the typical scales used by these subjects.
A Genetic-Algorithm-Based Approach for Audio Steganography
In this paper, we present a novel, principled approach to resolve the remaining problems of the substitution technique of audio steganography. Using the proposed genetic algorithm, message bits are embedded into multiple, vague, and higher LSB layers, resulting in increased robustness. Robustness is especially increased against intentional attacks that try to reveal the hidden message, as well as against unintentional attacks such as noise addition.
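As a baseline for the GA-driven scheme, classic substitution embeds message bits in the single lowest LSB layer of each sample; the paper's contribution is to let a genetic algorithm pick multiple, higher layers instead. A sketch of the baseline on integer audio samples:

```python
def embed_lsb(samples, bits):
    """Embed message bits in the least significant bit of each audio
    sample. The paper's GA variant chooses multiple, higher LSB layers
    instead of this fixed lowest layer."""
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return out

def extract_lsb(samples, n_bits):
    return [s & 1 for s in samples[:n_bits]]

samples = [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007]
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(samples, message)
assert extract_lsb(stego, len(message)) == message
assert all(abs(a - b) <= 1 for a, b in zip(samples, stego))  # distortion bounded by 1
print(stego)
```

The fixed-layer scheme is fragile (an attacker who suspects LSB embedding reads it off directly), which motivates the GA search over higher, varying layers described above.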
An Ontology Based Question Answering System on Software Test Document Domain
Processing data by computers and performing reasoning tasks is an important aim in Computer Science, and the Semantic Web is one step towards it. Using ontologies to enhance information semantically is the current trend. A huge amount of domain-specific, unstructured online data needs to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services. They also want to use their native languages to express what they are searching for. In this paper, an ontology-based automated question answering system on the software test document domain is presented. The system allows users to enter a question about the domain in natural language and returns the exact answer to the question. Converting the natural language question into an ontology-based query is the challenging part of the system. To achieve this, a new algorithm for converting free text into an ontology-based search engine query is proposed. The algorithm is based on identifying the question type and parsing the words of the question sentence.
Automatic Vehicle Location Systems
In this article, a single application is suggested to determine the position of vehicles using Geographical Information Systems (GIS) and the Global Positioning System (GPS). Part of the article covers mapping three-dimensional coordinates to two-dimensional coordinates using the UTM or Lambert geographical methods, and the algorithm for converting GPS information into GIS maps is studied. Suggestions are also given for implementing this system on the web (a web-based system). To apply this system in Iran, the relevant officials are introduced and their duties are explained. Finally, an economic analysis is presented based on Iran's communication system.
Development of Online Islamic Medication Expert System (OIMES)
This paper presents an overview of the design and implementation of an online rule-based Expert System for Islamic medication. This Online Islamic Medication Expert System (OIMES) focuses on physical illnesses only. The knowledge base of this Expert System exhaustively contains the types of illness together with their related cures or treatments/therapies, obtained exclusively from the Quran and Hadith. Extensive research and study were conducted to ensure that the Expert System is able to provide the most suitable treatment with reference to the relevant verses cited in the Quran or Hadith. These verses come together with their related 'actions' (bodily actions/gestures or some acts) to be performed by the patient to treat a particular illness/sickness. The verses and the instructions for the 'actions' are displayed unambiguously on the computer screen. The online platform has the advantage that patients can get treatment practically anytime and anywhere, as long as a computer and an Internet connection are available. Patients do not need to make an appointment to see an expert for therapy.
Multidimensional Visualization Tools for Analysis of Expression Data
Expression data analysis is based mostly on the
statistical approaches that are indispensable for the study of
biological systems. Large amounts of multidimensional data resulting
from the high-throughput technologies are not completely served by
biostatistical techniques and are usually complemented with visual,
knowledge discovery and other computational tools. In many cases,
in biological systems we only speculate on the processes that are
causing the changes, and it is the visual explorative analysis of data
during which a hypothesis is formed. We would like to show the
usability of multidimensional visualization tools and promote their
use in life sciences. We survey and show some of the
multidimensional visualization tools in the process of data
exploration, such as parallel coordinates and radviz and we extend
them by combining them with the self-organizing map algorithm. We
use a time course data set of transitional cell carcinoma of the bladder
in our examples. Analysis of data with these tools has the potential to
uncover additional relationships and non-trivial structures.
The Knowledge Representation of the Genetic Regulatory Networks Based on Ontology
Understanding biological behavior and phenomena at the system level requires various elements such as gene sequences, protein structures, gene functions, and metabolic pathways. Challenging problems include representing, learning, and reasoning about these biochemical reactions, gene and protein structures, genotypes and their relation to phenotypes, and the expression systems built on those interactions. The goal of our work is to understand the behavior of interaction networks and to model their evolution in time and in space. In this study we propose an ontological meta-model for the knowledge representation of genetic regulatory networks. In artificial intelligence, an ontology consists of the fundamental categories and relations that provide a framework for knowledge models. Domain ontologies are now commonly used to enable heterogeneous information resources, such as knowledge-based systems, to communicate with each other. The interest of our model is to represent spatial, temporal, and spatio-temporal knowledge. We validated our propositions on the genetic regulatory network of the Arabidopsis thaliana flower.
Software to Encrypt Messages Using Public-Key Cryptography
In this paper the development of software to encrypt messages with asymmetric cryptography is presented. In particular, the RSA (Rivest, Shamir and Adleman) algorithm is used to encrypt alphanumeric information. The software allows the user to generate different public keys from two prime numbers provided by the user; the user must then select a public key to generate the corresponding private key. To encrypt the information, the user must provide the public key of the recipient as well as the message to be encrypted. The generated ciphertext can be sent through an insecure channel, where it would be very difficult for an intruder or attacker to interpret. At the other end of the communication, the recipient can decrypt the original message by providing his/her public key and the corresponding private key.
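The described workflow (the user supplies two primes, picks a public key, derives the private key, then encrypts with the recipient's public key) can be sketched with textbook RSA; the primes and exponent below are toy values for illustration only, and real deployments need large random primes plus padding:

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p, q, e=17):
    """Generate an RSA key pair from two primes supplied by the user."""
    n = p * q
    phi = (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime with phi(n)"
    return (e, n), (d % phi, n)        # public key, private key

def crypt(m, key):                     # the same operation encrypts and decrypts
    k, n = key
    return pow(m, k, n)

public, private = make_keys(61, 53)    # toy primes; real keys use huge primes
cipher = [crypt(ord(c), public) for c in "HI"]
plain = "".join(chr(crypt(c, private)) for c in cipher)
print(plain)  # HI
```

Encrypting character by character, as here, matches the alphanumeric-message setting described, though production RSA encrypts a symmetric session key rather than the message itself.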
A Combinatorial Model for ECG Interpretation
A new, combinatorial model for analyzing and interpreting an electrocardiogram (ECG) is presented. An application of
the model is QRS peak detection. This is demonstrated with an
online algorithm, which is shown to be space as well as time efficient.
Experimental results on the MIT-BIH Arrhythmia database show that
this novel approach is promising. Further uses for this approach are
discussed, such as taking advantage of its small memory requirements
and interpreting large amounts of pre-recorded ECG data.
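The abstract does not detail the combinatorial model, so as a generic illustration of the QRS detection task it targets, here is a simple online threshold-and-refractory peak detector on a synthetic trace (not the paper's algorithm):

```python
def detect_peaks(signal, threshold, refractory=5):
    """Online QRS-like peak detection: flag local maxima above a
    threshold, then skip a refractory window after each detection.
    A generic baseline, not the combinatorial model of the paper."""
    peaks, last = [], -refractory
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] >= signal[i + 1]
                and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

# Synthetic ECG-like trace: flat baseline with two sharp spikes.
sig = [0, 0, 1, 5, 1, 0, 0, 0, 1, 6, 1, 0, 0]
print(detect_peaks(sig, threshold=3))  # [3, 9]
```

Like the algorithm described above, this runs in a single pass with O(1) extra state, which is what makes streaming over large amounts of pre-recorded ECG data feasible.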
CSOLAP (Continuous Spatial On-Line Analytical Processing)
Decision support systems are usually based on
multidimensional structures which use the concept of hypercube.
Dimensions are the axes on which facts are analyzed and form a
space where a fact is located by a set of coordinates at the
intersections of members of dimensions. Conventional
multidimensional structures deal with discrete facts linked to discrete
dimensions. However, when dealing with natural continuous
phenomena the discrete representation is not adequate. There is a
need to integrate spatiotemporal continuity within multidimensional
structures to enable analysis and exploration of continuous field data.
The research issues that arise in integrating spatiotemporal continuity into multidimensional structures are numerous. In this paper, we discuss these issues and briefly present a multidimensional model for continuous field data. We also define new aggregation
operations. The model and the associated operations and measures
are validated by a prototype.
Initializing K-Means using Genetic Algorithms
K-Means (KM) is considered one of the major algorithms widely used in clustering. However, it still has some problems, one of which is its initialization step, which is normally done randomly. Another problem is that KM converges to local minima. Genetic algorithms are evolutionary algorithms inspired by nature and have been utilized in the field of clustering. In this paper, we propose two algorithms to solve the initialization problem: Genetic Algorithm Initializes KM (GAIK) and KM Initializes Genetic Algorithm (KIGA). To show the effectiveness and efficiency of our algorithms, a comparative study was conducted among GAIK, KIGA, the Genetic-based Clustering Algorithm (GCA), and FCM.
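The abstract leaves GAIK's encoding unspecified; one plausible reading is a GA whose individuals are candidate sets of k initial centers drawn from the data, with fitness the (negated) clustering SSE. A 1-D sketch under that assumption, with simplified selection, crossover, and mutation:

```python
import random

def sse(data, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((x - c) ** 2 for c in centers) for x in data)

def ga_init(data, k, pop_size=20, generations=30, seed=0):
    """Sketch of GAIK's idea: a GA searches for k initial centers
    (chosen among the data points) that minimize the clustering SSE."""
    rng = random.Random(seed)
    pop = [rng.sample(data, k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: sse(data, ind))       # fitness = low SSE
        survivors = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            child = rng.sample(a + b, k)               # crossover: mix parents
            if rng.random() < 0.2:
                child[rng.randrange(k)] = rng.choice(data)  # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: sse(data, ind))

data = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]   # two obvious clusters
centers = ga_init(data, k=2)
print(sorted(centers))
```

The returned centers would then seed a standard k-means run, the point being that a GA-chosen start avoids the poor local minima that a purely random initialization can fall into.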
Security Risk Analysis Based on the Policy Formalization and the Modeling of Big Systems
Security risk models have been successful in estimating the likelihood of attack for simple security threats. However, modeling complex systems and their security risks remains a challenge. Many methods have been proposed to address this problem, but, often difficult to manipulate and not comprehensive enough, they are not as popular as they should be with administrators and decision makers. In this paper we propose a new tool for modeling big systems. The software takes into account attack threats and security strength.
The Vulnerability Analysis of Java Bytecode Based on Points-to Dataflow
Today many developers use Java components collected from the Internet as external libraries to design and develop their own software. However, some unknown security bugs may exist in these components; for example, an SQL injection bug may come from a component that performs no specific check on user input strings. Detecting these bugs without source code is very difficult. A novel method to check for bugs in Java bytecode based on points-to dataflow analysis is therefore needed, differing from common analysis techniques based on vulnerability pattern checking. It can be used as an assistant tool for security analysis of Java bytecode from unknown software that will be used as external libraries.
Control Improvement of a C Sugar Cane Crystallization Using an Auto-Tuning PID Controller Based on Linearization of a Neural Network
The industrial process of sugar cane crystallization produces a residual that still contains a lot of soluble sucrose, and the objective of the factory is to improve its extraction; the substantial losses justify the search for process optimization. The crystallization process studied on the industrial site is based on the "three massecuites" process. The third step of this process constitutes the final stage of exhaustion of the sucrose dissolved in the mother liquor. During the third step of crystallization (C crystallization), the phase that is studied, and whose control is to be improved, is the growing phase (crystal growth phase). The study of this process on the industrial site is a problem in its own right. A control scheme is proposed to improve the standard PID control law used in the factory: an auto-tuning PID controller based on instantaneous linearization of a neural network.
Enhancement Throughput of Unplanned Wireless Mesh Networks Deployment Using Partitioning Hierarchical Cluster (PHC)
Wireless mesh networks based on IEEE 802.11 technology are a scalable and efficient solution for next-generation wireless networking, providing wide-area wideband Internet access to a significant number of users. These wireless mesh networks may be deployed by different authorities and without any planning, so they potentially overlap partially or completely in the same service area; this unplanned deployment of WMNs limits their performance. The aim of the proposed model is to enhance the throughput of unplanned wireless mesh network deployments using Partitioning Hierarchical Clustering (PHC). We use a throughput optimization approach to model the unplanned WMN deployment problem based on a PHC-based architecture, and we use bridge nodes to allow interworking traffic between these WMNs as a solution to the performance degradation.
Multivalued Knowledge-Base based on Multivalued Datalog
The basic aim of our study is to give a possible model for handling uncertain information. This model is worked out in the framework of Datalog. The concept of a multivalued knowledge base is defined as a quadruple of background knowledge, a deduction mechanism, a connecting algorithm, and a function set of the program, which helps us determine the uncertainty levels of the results. First the concept of fuzzy Datalog is summarized, then its extensions to intuitionistic and interval-valued fuzzy logic are given and the concept of bipolar fuzzy Datalog is introduced. Based on these extensions, the concept of a multivalued knowledge base is defined. This knowledge base can be a possible background for a future agent model.
Encrypter Information Software Using Chaotic Generators
This paper presents software that implements different chaotic generators, both continuous- and discrete-time. The software provides options for obtaining the different signals using different parameter and initial condition values, and then shows the critical parameters for each model. All these models are capable of encrypting information, which the software also demonstrates.
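A discrete-time chaotic generator of the kind described is the logistic map; quantizing its orbit gives a keystream that can encrypt by XOR. A sketch (the values of x0 and r are illustrative; r near 4 puts the map in its chaotic regime, which is the critical-parameter behavior the software explores):

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantize each
    state to a byte, giving a chaos-based keystream."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def crypt(data: bytes, x0=0.3141592, r=3.99):
    # XOR with the keystream: the same call encrypts and decrypts,
    # provided both sides share the initial condition and parameter.
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"chaotic cipher"
enc = crypt(msg)
assert enc != msg
assert crypt(enc) == msg   # decryption recovers the plaintext
print(enc.hex())
```

Sensitivity to the initial condition means a receiver with even a slightly wrong x0 produces garbage, which is the property chaos-based encryption schemes rely on.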