Border Limited Adaptive Subdivision Based On Triangle Meshes
Subdivision is a method for creating a smooth surface from a coarse mesh by repeatedly subdividing the entire mesh. Computing and rendering such surfaces in the conventional way is costly in both memory and computation time, because the number of faces grows exponentially with each subdivision step. Adaptive subdivision reduces computation time and memory by subdividing only selected areas. In this paper, a new adaptive subdivision method for triangle meshes is introduced. The method defines new adaptive subdivision rules that consider the properties of each triangle's neighbors and is embedded in the traditional Loop subdivision scheme. It prevents some undesirable side effects that appear in conventional adaptive approaches. Models subdivided by our method are compared with those produced by other adaptive subdivision methods.
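As a rough sketch of the kind of neighbor-aware selection step described above (the criterion, threshold, and data layout below are assumptions for illustration, not the paper's actual rules), a triangle can be flagged for refinement when its orientation differs noticeably from that of its edge-adjacent neighbors:

```python
import numpy as np

def select_triangles(vertices, faces, neighbors, angle_threshold=0.35):
    """Mark triangles for refinement when their normal deviates strongly from an
    edge-adjacent neighbor's normal (a hypothetical neighbor-based criterion)."""
    def unit_normal(f):
        a, b, c = vertices[f[0]], vertices[f[1]], vertices[f[2]]
        n = np.cross(b - a, c - a)
        return n / (np.linalg.norm(n) + 1e-12)

    normals = [unit_normal(f) for f in faces]
    selected = set()
    for i in range(len(faces)):
        for j in neighbors[i]:  # indices of edge-adjacent faces
            angle = np.arccos(np.clip(np.dot(normals[i], normals[j]), -1.0, 1.0))
            if angle > angle_threshold:  # strong orientation change: refine here
                selected.add(i)
                break
    return selected
```

Only the selected faces would then be passed to the Loop subdivision step, with the remaining faces left untouched.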
On Formalizing Predefined OCL Properties
The ability of UML to handle the modeling process of complex industrial software applications has increased its popularity to the extent of becoming the de facto language for design purposes. Although its rich graphical notation, naturally oriented towards object-oriented concepts, facilitates understandability, it hardly succeeds in capturing all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to help improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, which results in many obstacles to building tool support and thus to its application in industry. For this reason, much research has been devoted to formalizing OCL expressions using more rigorous approaches. Our contribution joins this work in a complementary way, since it focuses specifically on OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we succeed in expressing OCL predefined functions rigorously.
The Implementation of Spatio-Temporal Graph to Represent Situations in the Virtual World
In this paper, we develop a spatio-temporal graph as a key component of our knowledge representation scheme. We design an integrated representation scheme to depict not only the present and the past but also the future, in parallel with the corresponding spaces, in an effective and intuitive manner. The resulting multi-dimensional, comprehensive knowledge structure accommodates a multi-layered virtual world that develops over time, maximizing the diversity of situations in their historical context. This knowledge representation scheme is to be used as the basis for simulating the situations that compose the virtual world and for implementing the virtual agents' knowledge used to judge and evaluate those situations. To provide natural contexts for situated learning or simulation games, the virtual stage set by this spatio-temporal graph is to be populated by interrelated, changing agents and other objects that are abstracted in the ontology.
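One minimal way to picture such a structure (all names and fields here are illustrative, not the paper's actual scheme) is a graph whose nodes are situations anchored to a place and a time interval, and whose edges carry temporal or spatial relations:

```python
from dataclasses import dataclass, field

@dataclass
class STNode:
    """A situation anchored to a place and a time interval (illustrative)."""
    place: str
    t_start: float
    t_end: float
    facts: dict = field(default_factory=dict)

@dataclass
class STGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (i, j, relation), e.g. "before", "contains"

    def add_situation(self, node):
        self.nodes.append(node)
        return len(self.nodes) - 1

    def relate(self, i, j, relation):
        self.edges.append((i, j, relation))

# Example: a past event linked to a present one in the same space
g = STGraph()
past = g.add_situation(STNode("market_square", 0.0, 10.0, {"event": "festival"}))
now = g.add_situation(STNode("market_square", 10.0, 20.0, {"event": "cleanup"}))
g.relate(past, now, "before")
```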
A Hybrid CamShift and l1-Minimization Video Tracking Algorithm
The Continuously Adaptive Mean-Shift (CamShift) algorithm, incorporating scene depth information, is combined with the l1-minimization sparse-representation-based method to form a hybrid kernel- and state-space-based tracking algorithm. We take advantage of the increased efficiency of the former together with the robustness to occlusion of the latter. A simple interchange scheme transfers control between the algorithms based upon drift and occlusion likelihood, which is quantified by the projection of target candidates onto a depth map of the 2D scene obtained with a low-cost stereo vision webcam. The result is improved tracking, in terms of drift, over each algorithm individually in a challenging practical outdoor test case with multiple occlusions.
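A bare-bones sketch of such an interchange scheme (the thresholds, state layout, and helper callables below are hypothetical placeholders, not the paper's actual measures) might look like:

```python
def hybrid_track(frame, depth_map, state,
                 camshift_step, l1_step,
                 occlusion_likelihood, drift_estimate,
                 drift_thresh=0.4, occl_thresh=0.6):
    """Illustrative control interchange between a fast kernel tracker (CamShift)
    and a robust sparse (l1) tracker. The step functions and likelihood measures
    are supplied as callables; the thresholds are placeholder values."""
    occlusion = occlusion_likelihood(state["window"], depth_map)  # from depth projection
    drift = drift_estimate(state)                                 # e.g. appearance-similarity drop

    if state["mode"] == "camshift" and (occlusion > occl_thresh or drift > drift_thresh):
        state["mode"] = "l1"          # hand over to the robust but slower tracker
    elif state["mode"] == "l1" and occlusion < occl_thresh and drift < drift_thresh:
        state["mode"] = "camshift"    # return control to the efficient kernel tracker

    step = camshift_step if state["mode"] == "camshift" else l1_step
    state["window"] = step(frame, state["window"])
    return state
```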
Database Modelling Using WSML in the Specification of a Banking Application
We demonstrate through a sample application, E-banking, that the Web Service Modelling Language Ontology component can be used as a very powerful object-oriented database design language with logic capabilities. Its conceptual syntax allows the definition of class hierarchies, and its logic syntax allows the definition of constraints in the database. Relations, which are available for modelling relationships among three or more concepts, can be connected to logical expressions, allowing the implicit specification of database content. Using a reasoning tool, logic queries can also be made against the database in simulation mode.
Design of an Intelligent Location Identification Scheme Based On LANDMARC and BPNs
Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. The advantage of using received signal strength indicator (RSSI) values as an indoor location measurement method is that it is cost-effective and requires no extra hardware. Because the accuracy of many positioning schemes that use RSSI values is limited by interference and environmental factors, it is challenging to design RFID location techniques that integrate positioning algorithms. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines the factors that affect location accuracy by integrating the backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. First, the training phase computes coordinates obtained from the LANDMARC algorithm, which uses RSSI values, together with the real coordinates of reference tags, as training data for constructing an appropriate BPN architecture and training length. Second, in the online phase, the LANDMARC algorithm calculates the coordinates of tracking tags, which are then used as BPN inputs to obtain location estimates. The results show that the proposed scheme can estimate locations more accurately than LANDMARC, without extra devices.
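A minimal sketch of the two phases (synthetic numbers and a hypothetical layout of readers and tags; the actual network architecture and training data in the paper differ) could look like:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def landmarc_estimate(rssi_tag, rssi_refs, ref_coords, k=4):
    """LANDMARC: weight the k nearest reference tags (in RSSI space)
    by inverse squared Euclidean RSSI distance."""
    d = np.linalg.norm(rssi_refs - rssi_tag, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] ** 2 + 1e-9)
    return (w[:, None] * ref_coords[nearest]).sum(axis=0) / w.sum()

# --- training phase (values below are synthetic placeholders) ---
rssi_refs = np.random.rand(16, 4)          # RSSI of 16 reference tags at 4 readers
ref_coords = np.random.rand(16, 2) * 10    # known reference-tag coordinates
train_est = np.array([landmarc_estimate(r, rssi_refs, ref_coords) for r in rssi_refs])
bpn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(train_est, ref_coords)

# --- online phase: refine a LANDMARC estimate of a tracking tag with the trained BPN ---
rssi_track = np.random.rand(4)
coarse = landmarc_estimate(rssi_track, rssi_refs, ref_coords)
refined = bpn.predict(coarse.reshape(1, -1))[0]
```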
Image Clustering Framework for BAVM Segmentation in 3DRA Images: Performance Analysis
Brain ArterioVenous Malformation (BAVM) is an abnormal tangle of brain blood vessels where arteries shunt directly into veins with no intervening capillary bed, which causes high pressure and hemorrhage risk. The success of treatment by embolization in interventional neuroradiology is highly dependent on the accuracy of vessel visualization. In this paper, the performance of clustering techniques for vessel segmentation from 3D rotational angiography (3DRA) images is investigated and a new segmentation technique is proposed. The method consists of a preprocessing step of image enhancement, after which K-Means (KM), Fuzzy C-Means (FCM), and Expectation-Maximization (EM) clustering are used to separate vessel pixels from background and, when possible, artery pixels from vein pixels. A post-processing step of removing false-alarm components is applied before constructing a three-dimensional volume of the vessels. The proposed method was tested on six datasets together with a medical assessment by an expert. The obtained results show encouraging segmentations.
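A condensed sketch of the clustering stage (intensity-only features, FCM and the enhancement/post-processing steps omitted; thresholds and cluster counts are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def segment_vessels(volume, n_clusters=3):
    """Cluster voxel intensities into background / vein / artery classes
    and keep the brightest K-Means cluster as the vessel candidate mask."""
    x = volume.reshape(-1, 1).astype(float)
    km_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(x)
    em_labels = GaussianMixture(n_components=n_clusters).fit_predict(x)  # EM clustering
    means = [x[km_labels == c].mean() for c in range(n_clusters)]
    km_vessel = km_labels == int(np.argmax(means))
    return km_vessel.reshape(volume.shape), em_labels.reshape(volume.shape)
```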
Event Information Extraction System (EIEE): FSM vs HMM
Automatic extraction of event information from social text streams (emails, social network sites, blogs, etc.) is a vital requirement for many applications, such as event planning and management systems and security applications. The key information components needed from event-related text are the event title, location, participants, date, and time. Emails are very distinct from other social text streams in terms of layout, format, and conversation style, and they are the most commonly used communication channel for broadcasting and planning events. Therefore, we have chosen emails as our dataset. In our work, we have employed two statistical NLP methods, namely Finite State Machines (FSM) and Hidden Markov Models (HMM), for the extraction of event-related contextual information. An application has been developed that provides a comparison between the two methods on the event extraction task. It comprises two modules, one for each method, and works for both bulk and direct user input. The results are evaluated using precision, recall, and F-score. Experiments show that both methods produce high performance and accuracy; however, HMM performed better for title extraction, while FSM proved to be better for venue, date, and time.
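For the HMM side, the core of such a tagger is Viterbi decoding of token labels. The toy parameters and label set below are hypothetical, not the paper's trained model:

```python
import numpy as np

# Hypothetical HMM for tagging tokens with event fields
states = ["O", "TITLE", "DATE"]
start = np.log([0.8, 0.1, 0.1])
trans = np.log([[0.7, 0.2, 0.1],   # O     -> O / TITLE / DATE
                [0.3, 0.6, 0.1],   # TITLE -> ...
                [0.4, 0.1, 0.5]])  # DATE  -> ...

def viterbi(obs_loglik):
    """obs_loglik[t, s] = log P(token_t | state s); returns the best state sequence."""
    T, S = obs_loglik.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = start + obs_loglik[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + trans + obs_loglik[t][None, :]
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0)
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

# Toy usage: random per-token emission log-likelihoods for 6 tokens
print(viterbi(np.log(np.random.dirichlet(np.ones(3), size=6))))
```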
A Real-Time Rendering based on Efficient Updating of Static Objects Buffer
Real-time 3D applications have to guarantee interactive rendering speed, and the number of polygons that can be rendered is restricted by the performance of the graphics hardware and the graphics algorithms. Generally, rendering performance increases drastically when only the dynamic 3D models, which are far fewer than the static ones, are handled. Since the shapes and colors of the static objects do not change while the viewing direction is fixed, their rendered image can be reused. We render huge numbers of polygons that cannot be handled by conventional rendering techniques in real time by using a static-object image and merging it with the rendering result of the dynamic objects. Performance drops whenever the static-object image must be updated, which includes removing a static object that starts to move and re-rendering the other static objects overlapped by the moving one. Based on the visibility of the object beginning to move, we can skip this updating process. As a result, we enhance rendering performance and reduce the variation in rendering speed between frames. The proposed method renders a total of 200,000,000 polygons, consisting of 500,000 dynamic polygons and the rest static, at about 100 frames per second.
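The skip rule can be summarized in a few lines of pseudocode-style Python (the rendering and visibility routines are passed in as callables and are purely illustrative, not the paper's implementation):

```python
def update_static_buffer(static_buffer, static_objects, moving_object,
                         render, is_visible):
    """Illustrative update rule: only re-render the cached static image when the
    object that starts moving was actually visible in it."""
    if not is_visible(moving_object, static_buffer):
        return static_buffer                  # skip the expensive update entirely
    remaining = [o for o in static_objects if o is not moving_object]
    return render(remaining)                  # re-render the remaining static objects

def compose_frame(static_buffer, dynamic_objects, render_over):
    # merge the cached static image with the freshly rendered dynamic objects
    return render_over(static_buffer, dynamic_objects)
```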
Parametric Modeling Approach for Call Holding Times for IP based Public Safety Networks via EM Algorithm
This paper presents parametric probability density models for call holding times (CHTs) to an emergency call center, based on actual data collected for over a week in the public Emergency Information Network (EIN) in Mongolia. When the chosen set of candidates from the Gamma distribution family is fitted to the call holding time data, it is observed that the whole area of the empirical CHT histogram is underestimated, due to spikes of higher probability and long tails of lower probability in the histogram. Therefore, we provide a parametric model based on a mixture of lognormal distributions, with explicit analytical expressions, for modeling the CHTs of public safety networks (PSNs). Finally, we show that the CHTs for PSNs are fitted reasonably well by a mixture of lognormal distributions via simulation of the expectation-maximization (EM) algorithm. This result is significant as it provides a useful mathematical tool, in explicit form, in terms of a mixture of lognormal distributions.
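For reference, the density of a K-component lognormal mixture in its standard form (the paper's exact parameterization may differ) is

f(t) = \sum_{k=1}^{K} w_k \, \frac{1}{t\,\sigma_k\sqrt{2\pi}} \exp\!\left(-\frac{(\ln t - \mu_k)^2}{2\sigma_k^2}\right), \qquad \sum_{k=1}^{K} w_k = 1, \quad t > 0,

where the weights w_k and the parameters (\mu_k, \sigma_k) are the quantities estimated by the EM algorithm.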
The Design and Development of Driving Game as an Evaluation Instrument for Driving License Test
The focus of this paper is to highlight the design and development of an educational game prototype as an evaluation instrument for the Malaysian driving license static test. This educational game brings gaming technology into the conventional objective static test to make it more effective, realistic, and interesting. Through this sense of realism, future drivers can learn the material, memorize it, and apply it in real life. The current online objective static test only makes users memorize the answers without knowing or understanding the true purpose of the questions. Therefore, in real life, they may not behave as expected because of this lack of behavioral and moral grounding. The prototype has been developed in the form of multiple-choice questions integrated with a 3D gaming environment so that it simulates real environments and scenarios. Based on the testing conducted, respondents agree that this game prototype can increase understanding and promote adherence to traffic rules.
Acute Coronary Syndrome Prediction Using Data Mining Techniques - An Application
In this paper, we use data mining techniques to investigate factors that contribute significantly to enhancing the risk of acute coronary syndrome. We take the dependent variable to be the diagnosis, with dichotomous values indicating the presence or absence of the disease, and apply binary regression to the factors affecting it. The data set has been taken from two different cardiac hospitals in Karachi, Pakistan. We have sixteen variables in total, of which one is the dependent variable and the other fifteen are independent variables. For better performance of the regression model in predicting acute coronary syndrome, data reduction techniques such as principal component analysis are applied. Based on the results of data reduction, we have considered only fourteen of the sixteen factors.
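A generic sketch of this reduce-then-regress pipeline (synthetic stand-in data; variable names, component count, and thresholds are assumptions, not the study's values):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the clinical data: 15 risk factors, binary diagnosis
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 15))
y = (X[:, :3].sum(axis=1) + rng.normal(size=300) > 0).astype(int)

# Reduce the predictors, then fit the binary (logistic) regression
model = make_pipeline(PCA(n_components=14), LogisticRegression(max_iter=1000))
model.fit(X, y)
print("in-sample accuracy:", model.score(X, y))
```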
A Proposed Technique for Software Development Risks Identification by using FTA Model
Software Development Risks Identification (SDRI), using Fault Tree Analysis (FTA), is a proposed technique to identify not only the risk factors but also the causes of the appearance of risk factors in the software development life cycle. The method is based on analyzing the probable causes of software development failures before they become problems and adversely affect a project. It uses fault tree analysis (FTA) to determine the probability of particular system-level failures, defined by the Taxonomy for Sources of Software Development Risk, and to carry out failure analysis in which an undesired system state is analyzed by using Boolean logic to combine a series of lower-level events. The major purpose of this paper is to use the probabilistic calculations of the fault tree analysis approach to determine all possible causes that lead to software development risk.
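The probabilistic calculation at the heart of FTA combines independent basic-event probabilities through AND and OR gates. The toy tree and numbers below are illustrative only:

```python
from functools import reduce

def and_gate(probs):
    """All independent basic events must occur: P = product of the probabilities."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """At least one independent basic event occurs: P = 1 - prod(1 - p_i)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Toy tree: "schedule slip" if (unclear requirements AND scope creep) OR staff turnover
p_top = or_gate([and_gate([0.3, 0.4]), 0.1])
print(f"P(schedule slip) = {p_top:.3f}")   # 0.208
```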
On Speeding Up Support Vector Machines: Proximity Graphs Versus Random Sampling for Pre-Selection Condensation
Support vector machines (SVMs) are considered to be the best machine learning algorithms for minimizing the predictive probability of misclassification. However, their drawback is that for large data sets the computation of the optimal decision boundary is a time-consuming function of the size of the training set. Hence, several methods have been proposed to speed up the SVM algorithm. Here, three methods used to speed up the computation of the SVM classifiers are compared experimentally using a musical genre classification problem. The simplest method pre-selects a random sample of the data before the application of the SVM algorithm. Two additional methods use proximity graphs to pre-select data that are near the decision boundary; one uses k-Nearest Neighbor graphs and the other Relative Neighborhood Graphs to accomplish the task.
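Two of the condensation strategies can be sketched as follows (synthetic data; the k-NN rule below is a simple stand-in for boundary pre-selection, not the paper's exact graph construction):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def random_condense(X, y, frac=0.2, seed=0):
    """Simplest speed-up: train the SVM on a random subsample."""
    idx = np.random.default_rng(seed).choice(len(X), size=int(frac * len(X)), replace=False)
    return X[idx], y[idx]

def knn_boundary_condense(X, y, k=5):
    """Keep only points with at least one opposite-class neighbor in the k-NN graph,
    i.e. points likely to lie near the decision boundary."""
    _, nn = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    keep = np.array([(y[nn[i, 1:]] != y[i]).any() for i in range(len(X))])
    return X[keep], y[keep]

# Synthetic example: compare the training-set sizes of the two condensation schemes
X = np.random.randn(2000, 20)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
for Xs, ys in (random_condense(X, y), knn_boundary_condense(X, y)):
    SVC(kernel="rbf").fit(Xs, ys)
    print(len(Xs), "training points")
```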
Estimation of Relative Self-Localization Based On Natural Landmark and an Improved SURF
It is important for an autonomous mobile robot to know where it is at any time in an indoor environment. In this paper, we design a relative self-localization algorithm. The algorithm compares the interest points in two images and computes the relative displacement and orientation to determine the robot's pose. First, we use the SURF algorithm to extract the interest points of the ceiling. Second, in order to reduce the amount of calculation, an improved SURF is used to extract the orientation and description of the interest points. Finally, according to the transformation of the interest points between the two images, the relative self-localization of the mobile robot is estimated.
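Once interest points have been matched between the two ceiling images, the relative rotation and displacement can be recovered by a least-squares rigid alignment. The sketch below uses the standard Kabsch procedure as an assumed stand-in for the paper's estimation step:

```python
import numpy as np

def relative_pose_2d(pts_prev, pts_curr):
    """Estimate the 2D rotation and translation mapping matched interest points
    in the previous image onto the current one (least-squares / Kabsch)."""
    cp, cc = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
    H = (pts_prev - cp).T @ (pts_curr - cc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cc - R @ cp
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t                      # relative orientation and displacement

# Example with synthetic matches rotated by 10 degrees and shifted
a = np.random.rand(30, 2)
ang = np.deg2rad(10)
Rt = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
b = a @ Rt.T + np.array([0.5, -0.2])
print(relative_pose_2d(a, b))
```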
A GPU Based Texture Mapping Technique for 3D Models Using Multi-View Images
Previous algorithms for 3D-model texture generation and mapping from multi-view images have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They may also suffer from problems caused by occluded areas, such as the inner parts of the thighs. In this paper, we propose a texture mapping technique for 3D models using multi-view images on the GPU. We perform texture mapping directly in the GPU fragment shader, per pixel, without generating a texture map, and we resolve occluded areas using the 3D model's depth information. Our method requires more GPU computation than previous work, but it shows real-time performance and the previously mentioned problems do not occur.
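The depth-based occlusion handling amounts to a per-pixel visibility test against each candidate view. The CPU-side sketch below illustrates the idea only; the view layout, intrinsics, and tolerance are assumptions, and in the paper this logic would live in the fragment shader:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class View:
    image: np.ndarray       # H x W x 3 color image
    depth_map: np.ndarray   # H x W per-pixel depth rendered from this camera
    K: np.ndarray           # 3 x 3 intrinsics
    Rt: np.ndarray          # 3 x 4 extrinsics [R | t]

def sample_color(p_world, view, eps=1e-2):
    """Project a surface point into one view; reject it if the depth test says
    the point is occluded there (the core of depth-based occlusion handling)."""
    p_cam = view.Rt @ np.append(p_world, 1.0)
    z = p_cam[2]
    u, v = (view.K @ p_cam)[:2] / z
    ui, vi = int(round(u)), int(round(v))
    h, w = view.depth_map.shape
    if 0 <= ui < w and 0 <= vi < h and z <= view.depth_map[vi, ui] + eps:
        return view.image[vi, ui]
    return None   # occluded or outside the image; try another view
```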
A Case Study of 3D Stereoscopic Conversion in the Visual Effects Industry
This paper covers a series of key points in 2D-to-3D stereoscopic conversion and presents a stereoscopic conversion approach that has been successfully applied in the current visual effects industry. The purpose of this paper is to describe a detailed workflow and the concepts that have been used successfully in 3D stereoscopic conversion for feature films, thereby clarifying the stereoscopic conversion production process, giving entry-level artists a clear idea and an overall understanding of 3D stereoscopy in the digital compositing field, benefiting higher education in visual effects, and hopefully inspiring further collaboration and participation, particularly between academia and industry.
A Visual Educational Modeling Language to Help Teachers in Learning Scenario Design
The success of an e-learning system is highly dependent on the quality of its educational content and on how effective, complete, and simple the design tool is for teachers. Educational modeling languages (EMLs) are proposed as design languages intended for teachers to model diverse teaching-learning experiences, independently of the pedagogical approach and in different contexts. However, most existing EMLs are criticized for being too abstract and too complex to be understood and manipulated by teachers. In this paper, we present a visual EML that simplifies the process of designing learning scenarios for teachers with no programming background. Based on the conceptual framework of activity theory, our visual EML uses domain-specific modeling techniques to provide a pedagogical level of abstraction in the design process.
Characterizations of Star-Shaped, L-Convex, and Convex Polygons
A chord of a simple polygon P is a line segment [xy] that intersects the boundary of P only at both endpoints x and y. A chord of P is called an interior chord provided the interior of [xy] lies in the interior of P. P is weakly visible from [xy] if for every point v in P there exists a point w in [xy] such that [vw] lies in P. In this paper, star-shaped, L-convex, and convex polygons are characterized in terms of weak visibility properties from internal chords and star-shaped subsets of P. A new Krasnoselskii-type characterization of isothetic star-shaped polygons is also presented.
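In symbols, the weak visibility condition stated above reads

P \text{ is weakly visible from } [xy] \iff \forall v \in P \;\; \exists w \in [xy] : [vw] \subseteq P.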
A Hybrid Scheme for on-Line Diagnostic Decision Making Using Optimal Data Representation and Filtering Technique
Early diagnostic decision making in industrial processes is absolutely necessary to produce high-quality final products. It helps to provide early warning of a special event in a process so that its assignable cause can be found. This work presents a hybrid diagnostic scheme for batch processes in which a nonlinear representation of the raw process data is combined with classification tree techniques. The nonlinear kernel-based dimension reduction is executed to obtain nonlinear classification decision boundaries for the fault classes. In order to enhance diagnosis performance for batch processes, filtering of the data is performed to remove irrelevant information from the process data. To compare the diagnostic performance of several representation, filtering, and future-observation estimation methods, four diagnostic schemes are evaluated. In this work, the performance of the presented diagnosis schemes is demonstrated using batch process data.
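A minimal sketch of one such scheme, pairing a kernel-based nonlinear reduction with a classification tree (synthetic stand-in data; kernel choice, component count, and tree depth are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for unfolded batch-process data: rows = batches, columns = variables x time
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 60))
y = rng.integers(0, 3, size=200)          # three hypothetical fault classes

# Nonlinear kernel-based dimension reduction followed by a classification tree
scheme = make_pipeline(KernelPCA(n_components=5, kernel="rbf"),
                       DecisionTreeClassifier(max_depth=4))
scheme.fit(X, y)
print("training accuracy:", scheme.score(X, y))
```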
An Immersive Motion Capture Environment
Motion capturing technology has been used for quite a while, and considerable research has been done in this area. Nevertheless, we discovered open issues within current motion capturing environments. In this paper, we provide a state-of-the-art overview of the addressed research areas and show issues with current motion capturing environments. Observations, interviews, and questionnaires have been used to reveal the challenges actors currently face in a motion capturing environment. Furthermore, the idea of creating a more immersive motion capturing environment to improve acting performances and motion capturing outcomes is introduced as a potential solution. The goal is to explain the open issues found and the ideas developed, which shall serve as a basis for further research. Moreover, a methodology to address the interaction and systems design issues is proposed. A future outcome could be that motion capture actors are able to perform more naturally, especially if a non-body-worn solution is used.
An Evaluation on Fixed Wing and Multi-Rotor UAV Images Using Photogrammetric Image Processing
This paper introduces slope photogrammetric mapping using unmanned aerial vehicles (UAVs). Two UAVs were used in this study, namely a fixed-wing and a multi-rotor platform, and both were used to capture images of the study area. A consumer digital camera was mounted vertically at the bottom of each UAV and captured images at altitude. The objectives of this study are to obtain three-dimensional coordinates of the slope area and to determine the accuracy of the photogrammetric products produced by both UAVs. Several control points and checkpoints were established in the study area using a Real-Time Kinematic Global Positioning System (RTK-GPS). All acquired images from both UAVs went through the standard photogrammetric processes, such as interior orientation, exterior orientation, aerial triangulation, and bundle adjustment, using photogrammetric software. Two primary results were produced in this study, namely a digital elevation model and a digital orthophoto. Based on the results, a UAV system can be used for mapping slope areas, especially for projects with limited budgets and time constraints.
Qmulus – A Cloud Driven GPS Based Tracking System for Real-Time Traffic Routing
This paper presents Qmulus, a cloud-based GPS model. Qmulus is designed to compute the best possible route that leads the driver to the specified destination in the shortest time while taking real-time constraints into account. The intelligence incorporated into Qmulus's design makes it capable of generating and assigning priorities to a list of optimal routes through customizable dynamic updates. The goal of this design is to minimize travel and cost overheads, maintain reliability and consistency, and provide scalability and flexibility. The proposed model focuses on narrowing the gap between a client application and a cloud service so as to render operations seamless. Qmulus's system model is closely integrated, and its concept has the potential to be extended into several other integrated applications, making it capable of adapting to different media and resources.
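At its core, routing under real-time constraints can be illustrated as a shortest-path search over travel times that are rescaled by live traffic updates. The graph, factors, and names below are purely illustrative, not Qmulus's actual design:

```python
import heapq

def shortest_time_route(graph, src, dst, traffic):
    """Dijkstra over travel times, with each edge's base time scaled by a live
    traffic factor (illustrative of routing with real-time updates)."""
    # graph: {node: [(neighbor, base_minutes), ...]}, traffic: {(u, v): factor}
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, base in graph.get(u, []):
            nd = d + base * traffic.get((u, v), 1.0)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node in prev or node == src:
        path.append(node)
        if node == src:
            break
        node = prev[node]
    return list(reversed(path)), dist.get(dst)

g = {"A": [("B", 5), ("C", 8)], "B": [("D", 7)], "C": [("D", 2)], "D": []}
print(shortest_time_route(g, "A", "D", {("B", "D"): 2.0}))   # congestion on B->D favors A->C->D
```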
The Haar Wavelet Transform of the DNA Signal Representation
Deoxyribonucleic acid (DNA), a double-stranded helix of nucleotides, consists of adenine (A), cytosine (C), guanine (G), and thymine (T). In this work, we convert this genetic code into an equivalent digital signal representation. By applying a wavelet transform, such as the Haar wavelet, we are able to extract details that are not so clear in the original genetic code. We compare different organisms using the results of the Haar wavelet transform. This is achieved by using the trend part of the signal, since the trend part bears most of the energy of the digital signal representation. Consequently, we are able to quantitatively reconstruct different biological families.
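A compact sketch of the pipeline, with a hypothetical numeric mapping for the bases (the paper's actual encoding may differ) and one level of the Haar transform computed as pairwise averages and differences:

```python
import numpy as np

# Hypothetical numeric mapping for the bases
MAP = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}

def haar_level(signal):
    """One level of the Haar transform: trend (averages) and detail (differences)."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                       # pad to even length
        x = np.append(x, x[-1])
    pairs = x.reshape(-1, 2)
    trend = pairs.mean(axis=1)
    detail = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return trend, detail

seq = "ATGGCGTACGTTAGC"
signal = [MAP[b] for b in seq]
trend, detail = haar_level(signal)
print("trend energy fraction:",
      np.sum(trend**2) / (np.sum(trend**2) + np.sum(detail**2)))
```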
Trust and Reliability for Public Sector Data
The public sector holds large amounts of data on various areas such as social affairs, the economy, or tourism. Various initiatives such as Open Government Data or the EU Directive on public sector information aim to make these data available for public and private service providers. Requirements for the provision of public sector data are defined by legal and organizational frameworks. Surprisingly, the defined requirements hardly cover security aspects such as integrity or authenticity.
In this paper we discuss the importance of these missing requirements and present a concept to assure the integrity and authenticity of provided data based on electronic signatures. We show that our concept is perfectly suitable for the provisioning of unaltered data, and that it can be extended to data that needs to be anonymized before provisioning by incorporating redactable signatures. Our proposed concept enhances the trust and reliability of provided public sector data.
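To make the redaction idea concrete, the simplified sketch below signs per-field hashes so that a field can later be withheld and replaced by its hash while the signature still verifies. This is an illustrative toy (unsalted, using an Ed25519 key via the cryptography package), not the paper's construction:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def field_hash(name, value):
    return hashlib.sha256(f"{name}={value}".encode()).digest()

def sign_record(key, record):
    """Sign the concatenation of per-field hashes (simplified redactable-signature idea)."""
    hashes = [field_hash(k, v) for k, v in sorted(record.items())]
    return key.sign(b"".join(hashes))

def verify_redacted(pub, signature, disclosed, redacted_hashes):
    hashes = dict(redacted_hashes)                       # field -> precomputed hash
    hashes.update({k: field_hash(k, v) for k, v in disclosed.items()})
    msg = b"".join(hashes[k] for k in sorted(hashes))
    pub.verify(signature, msg)                           # raises if the data was altered

key = ed25519.Ed25519PrivateKey.generate()
record = {"name": "Alice", "income": "42000", "district": "7"}
sig = sign_record(key, record)
# Publish with "income" redacted but still verifiable
redacted = {"income": field_hash("income", record["income"])}
disclosed = {k: v for k, v in record.items() if k != "income"}
verify_redacted(key.public_key(), sig, disclosed, redacted)
print("verified")
```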
Increasing Replica Consistency Performances with Load Balancing Strategy in Data Grid Systems
Data replication in data grid systems is one of the important solutions for improving availability, scalability, and fault tolerance. However, this technique also brings some issues, such as maintaining replica consistency. Moreover, as grid environments are very dynamic, some nodes can be more heavily loaded than others and eventually become a bottleneck. The main idea of our work is to propose a complementary solution combining replica consistency maintenance with a dynamic load balancing strategy to improve access performance in a simulated grid environment.
Effects of Mobile Design Quality and Innovation Characteristics on Intention to Use Mobile Tourism Guide
This study investigates a theoretical model of tourist intention in the context of mobile tourism guides. The research model consists of three constructs: mobile design quality, innovation characteristics, and intention to use a mobile tourism guide. In order to investigate the effects of the determinants and examine the relationships among them, partial least squares is employed for data analysis and research model development. The results show that mobile design quality and innovation characteristics significantly impact tourists' intention to use a mobile tourism guide. Furthermore, mobile design quality has a strong influence on innovation characteristics but cannot serve as a moderator of the relationship between innovation characteristics and tourists' intention to use a mobile tourism guide. Our findings propose a theoretical model for mobile research and provide an important guideline for developing mobile applications.
A Decision Matrix for the Evaluation of Triplestores for Use in a Virtual Research Environment
The Tropical Data Hub (TDH) is a virtual research environment that provides researchers with an e-research infrastructure to congregate significant tropical data sets for data reuse, integration, searching, and correlation. However, researchers often require data and metadata synthesis across disciplines for cross-domain analyses and knowledge discovery. A triplestore offers a semantic layer to achieve a more intelligent method of search to support the synthesis requirements by automating latent linkages in the data and metadata. Presently, the benchmarks to aid the decision of which triplestore is best suited for use in an application environment like the TDH are limited to performance. This paper describes a new evaluation tool developed to analyze both features and performance. The tool comprises a weighted decision matrix to evaluate the interoperability, functionality, performance, and support availability of a range of integrated and native triplestores to rank them according to requirements of the TDH.
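The ranking step of a weighted decision matrix reduces to a weighted sum of per-criterion scores. The criteria weights, scores, and store names below are illustrative placeholders, not the evaluation results of this work:

```python
import numpy as np

# Hypothetical criteria weights and 0-10 scores for candidate triplestores
criteria = ["interoperability", "functionality", "performance", "support"]
weights = np.array([0.25, 0.30, 0.30, 0.15])          # weights sum to 1
scores = {"StoreA": [7, 8, 6, 9], "StoreB": [9, 6, 8, 5], "StoreC": [6, 7, 9, 7]}

ranking = sorted(((np.dot(weights, s), name) for name, s in scores.items()), reverse=True)
for total, name in ranking:
    print(f"{name}: {total:.2f}")
```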
Composite Relevance Feedback for Image Retrieval
This paper presents a content-based image retrieval (CBIR) framework with relevance feedback (RF) based on the combined learning of support vector machines (SVMs) and AdaBoost. The framework incorporates only the most relevant images obtained from both learning algorithms. To speed up the system, it removes from the database the irrelevant images returned by the SVM learner, which is the key to achieving effective retrieval performance in terms of time and accuracy. The experimental results show that this framework yields a significant improvement in retrieval effectiveness, which ultimately improves the retrieval performance.
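One feedback round of such a combined scheme might be sketched as follows (feature layout, score combination, and pruning rule are assumptions for illustration, not the paper's exact method):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier

def relevance_feedback_round(features, relevant_idx, irrelevant_idx, top_k=20):
    """Train SVM and AdaBoost on the user-labeled images, prune images the SVM
    rejects, and rank the rest by the combined relevance scores."""
    X = features[np.concatenate([relevant_idx, irrelevant_idx])]
    y = np.concatenate([np.ones(len(relevant_idx)), np.zeros(len(irrelevant_idx))])
    svm = SVC(probability=True).fit(X, y)
    ada = AdaBoostClassifier().fit(X, y)

    svm_score = svm.predict_proba(features)[:, 1]
    ada_score = ada.predict_proba(features)[:, 1]
    keep = svm_score >= 0.5                      # drop images the SVM deems irrelevant
    combined = np.where(keep, svm_score + ada_score, -np.inf)
    return np.argsort(combined)[::-1][:top_k]    # most relevant images to return next
```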