International Science Index

Commenced in January 1999. Frequency: Monthly. Edition: International. Paper Count: 59

Computer, Electrical, Automation, Control and Information Engineering

  • 59
    114
    Use of Novel Algorithms MAJE4 and MACJER-320 for Achieving Confidentiality and Message Authentication in SSL and TLS
    Abstract:
    Extensive use of the Internet, coupled with the tremendous growth in e-commerce and m-commerce, has created a huge demand for information security. The Secure Socket Layer (SSL) protocol is the most widely used security protocol on the Internet that meets this demand. It provides protection against eavesdropping, tampering and forgery. The cryptographic algorithms RC4 and HMAC have been used to achieve security services such as confidentiality and authentication in SSL, but recent attacks against RC4 and HMAC have raised questions about the confidence in these algorithms. Hence two novel cryptographic algorithms, MAJE4 and MACJER-320, have been proposed as substitutes for them. The focus of this work is to demonstrate the performance of these new algorithms and to suggest them as dependable alternatives for the security services needed in SSL. The performance evaluation has been carried out through practical implementation.
    58
    206
    Representing Uncertainty in Computer-Generated Forces
    Abstract:
    The Integrated Performance Modelling Environment (IPME) is a powerful simulation engine for task simulation and performance analysis. However, it lacks high-level cognition, such as memory and reasoning, needed for complex simulation. This article introduces a knowledge representation and reasoning scheme that can accommodate uncertainty in IPME simulations of military personnel. The approach demonstrates how advanced reasoning models that support similarity-based associative processes, rule-based abstract processes, multiple reasoning methods and real-time interaction can be integrated with conventional task network modelling to provide greater functionality and flexibility when modelling operator performance.
    57
    374
    Structural Parsing of Natural Language Text in Tamil Using Phrase Structure Hybrid Language Model
    Abstract:
    Parsing is important in linguistics and natural language processing for understanding the syntax and semantics of a natural language grammar. Parsing natural language text is challenging because of problems such as ambiguity and inefficiency, and the interpretation of natural language text depends on context-based techniques. A probabilistic component is essential to resolve ambiguity in both syntax and semantics, thereby increasing the accuracy and efficiency of the parser. The Tamil language has some inherent features that make it more challenging. To obtain solutions, a lexicalized and statistical approach is applied in parsing with the aid of a language model. Statistical models mainly focus on the semantics of the language and suit large-vocabulary tasks, whereas structural methods focus on syntax and model small-vocabulary tasks. A trigram-based statistical language model for Tamil with a medium vocabulary of 5000 words has been built. Though statistical parsing gives better performance through trigram probabilities and a large vocabulary size, it has disadvantages such as a focus on semantics rather than syntax and a lack of support for free word ordering and long-distance relationships. To overcome these disadvantages, a structural component is incorporated into the statistical language model, which leads to a hybrid language model. This paper attempts to build a phrase-structure hybrid language model that resolves the disadvantages mentioned above. In developing the hybrid language model, a new part-of-speech tag set for Tamil with more than 500 tags, giving wider coverage, has been developed. A phrase-structure treebank has been built with 326 Tamil sentences covering more than 5000 words. A hybrid language model has been trained on the phrase-structure treebank using the immediate-head parsing technique. A lexicalized and statistical parser employing this hybrid language model and immediate-head parsing gives better results than pure grammar-based and trigram-based models.
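    For orientation, the core of the trigram component described above is estimating P(w_i | w_{i-2}, w_{i-1}) from corpus counts. The following Python sketch shows a minimal maximum-likelihood version of that step; the function names and corpus format are illustrative assumptions, and the paper's actual model would add smoothing and the phrase-structure component.

    ```python
    from collections import defaultdict

    def train_trigram_counts(sentences):
        """Count trigrams and their bigram contexts. `sentences` is assumed
        to be an iterable of tokenized sentences (lists of word strings)."""
        tri, bi = defaultdict(int), defaultdict(int)
        for words in sentences:
            padded = ["<s>", "<s>"] + words + ["</s>"]
            for i in range(2, len(padded)):
                tri[(padded[i - 2], padded[i - 1], padded[i])] += 1
                bi[(padded[i - 2], padded[i - 1])] += 1
        return tri, bi

    def trigram_prob(tri, bi, w1, w2, w3):
        """Maximum-likelihood estimate of P(w3 | w1, w2)."""
        context = bi[(w1, w2)]
        return tri[(w1, w2, w3)] / context if context else 0.0
    ```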
    56
    452
    Automated Knowledge Engineering
    Abstract:
    This article outlines the conceptualization and implementation of an intelligent system capable of extracting knowledge from databases. The use of hybridized features of both Rough and Fuzzy Set theory gives the developed system flexibility in dealing with discrete as well as continuous datasets. A raw data set provided to the system is initially transformed into a machine-readable format, followed by pruning of the data set. The refined data set is then processed through various Rough Set operators that enable the discovery of parameter relationships and interdependencies. The discovered knowledge is automatically transformed into a rule base expressed in fuzzy terms. Two exemplary cancer repository datasets (for breast and lung cancer) have been used to test the proposed framework.
    55
    784
    Extension and Evaluation of Interface "2D-RIB" for Impression-Based Retrieval
    Abstract:
    Recently, many researchers have been attracted to retrieving from multimedia databases using impression words and their values. Ikezoe's research is one representative and uses eight pairs of opposite impression words. We modified its retrieval interface and proposed '2D-RIB'. In '2D-RIB', after a user selects a single piece of music as a base, the system visually displays other pieces around it according to their relative positions; the user can select one that fits his or her intention as the retrieval result. The purpose of this paper is to improve user satisfaction with the retrieval result in 2D-RIB. One of our extensions is to define and introduce two measures, 'melody goodness' and 'general acceptance', which we implement in five different combinations. According to an evaluation experiment, both measures can contribute to the improvement. Another extension is three types of customization; we have implemented them and clarified which customization is effective.
    54
    931
    A Low Power SRAM Based on Novel Word-Line Decoding
    Abstract:
    This paper proposes a low-power SRAM based on a five-transistor SRAM cell. The proposed SRAM uses novel word-line decoding such that, during a read/write operation, only the selected cell is connected to the bit-line, whereas in a conventional SRAM (CV-SRAM) all cells in the selected row are connected to their bit-lines, which develops differential voltages across all bit-lines and wastes energy on the unselected ones. In the proposed SRAM, the memory array is divided into two halves, which reduces the data-line capacitance. The proposed SRAM also uses a single bit-line and thus has lower bit-line leakage than the CV-SRAM. Furthermore, it incurs no area overhead and has comparable read/write performance. Simulation results in a standard 0.25μm CMOS technology show that, in the worst case, the proposed SRAM has 80% lower dynamic energy consumption per cycle than the CV-SRAM. In addition, the per-cycle energy consumption of both SRAMs was investigated analytically, and the results are in good agreement with the simulations.
    53
    960
    A New Self-Adaptive EP Approach for ANN Weights Training
    Abstract:
    Evolutionary Programming (EP) is a branch of Evolutionary Algorithms (EA) in which mutation is the main reproduction operator. This paper presents a novel EP approach for Artificial Neural Network (ANN) learning. The proposed strategy consists of two components: a self-adaptive component, which contains phenotype information, and a dynamic component, which is described by the genotype. Self-adaptation is achieved by the addition of a value, called the network weight, which depends on the total number of hidden layers and the average number of neurons in the hidden layers. The dynamic component changes its value depending on the fitness of the chromosome exposed to mutation. Thus, the mutation step size is controlled by two components, encapsulated in the algorithm, which adjust it according to the characteristics of the predefined ANN architecture and the fitness of a particular chromosome. A comparative analysis of the proposed approach and classical EP (Gaussian mutation) showed that a significant acceleration of the evolution process is achieved by using both phenotype and genotype information in the mutation strategy.
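    The abstract does not give the paper's exact update formulas, so the following Python sketch is only a rough illustration of the general idea: a Gaussian mutation whose step size combines an architecture-dependent "network weight" (phenotype) with a fitness-dependent term (genotype). Both functional forms below are assumptions.

    ```python
    import numpy as np

    def mutate(weights, fitness, n_hidden_layers, avg_neurons, rng):
        """Gaussian mutation with a step size built from a phenotype term
        (derived from the ANN architecture) and a genotype term (derived
        from fitness). Formulas are assumed for illustration only."""
        network_weight = 1.0 / np.sqrt(n_hidden_layers * avg_neurons)  # phenotype
        dynamic = 1.0 / (1.0 + fitness)  # genotype: fitter chromosomes mutate less
        sigma = network_weight * dynamic
        return weights + rng.normal(0.0, sigma, size=weights.shape)
    ```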
    52
    997
    Over-Height Vehicle Detection in Low Headroom Roads Using Digital Video Processing
    Abstract:
    In this paper we present a new method for over-height vehicle detection in low-headroom streets and highways using digital video processing. Its accuracy, its lower cost compared to existing detectors such as laser radars, and its capability to provide extra information such as speed and height measurements make this method more reliable and efficient. In this algorithm, features are selected and tracked using the KLT algorithm. A blob extraction algorithm is also applied using background estimation and subtraction. The world coordinates of the features inside the blobs are then estimated using a novel calibration method. Once the heights of the features are calculated, a threshold is applied to select over-height features and eliminate the others. The over-height features are segmented using association criteria, grouped using an undirected graph, and tracked through sequential frames. The resulting groups correspond to over-height vehicles in the scene.
    51
    1552
    Random Projections for Dimensionality Reduction in ICA
    Abstract:
    In this paper we present a technique to speed up ICA based on the idea of reducing the dimensionality of the data set while preserving the quality of the results. In particular we refer to the FastICA algorithm, which uses kurtosis as the statistical property to be maximized. By performing a Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate ρ, defined as the ratio between the size k of the reduced space and the original one d, which guarantees a narrow confidence interval for this estimator with high confidence level. The derived dimensionality reduction rate depends on a system control parameter β that is easily computed a priori from the observations alone. Extensive simulations have been carried out on different sets of real-world signals. They show that the achievable dimensionality reduction is indeed very high, that it preserves the quality of the decomposition, and that it impressively speeds up FastICA. On the other hand, a set of signals for which the estimated reduction rate is greater than 1 exhibits bad decomposition results if reduced, thereby validating the reliability of the parameter β. We are confident that our method will lead to a better approach for real-time applications.
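    As a rough illustration of the pipeline described above, the following Python sketch projects a d-dimensional data matrix into k = ceil(ρ·d) dimensions with a Gaussian Johnson-Lindenstrauss-like matrix and shows the kurtosis statistic that FastICA maximizes; the function names are illustrative, and the paper's derivation of ρ and β is not reproduced here.

    ```python
    import numpy as np

    def random_project(X, rho, rng):
        """Project d-dimensional observations X (shape d x n) into
        k = ceil(rho * d) dimensions with a Gaussian JL-like matrix."""
        d = X.shape[0]
        k = max(1, int(np.ceil(rho * d)))
        R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
        return R @ X

    def excess_kurtosis(y):
        """The statistic FastICA maximizes here (for one component)."""
        y = (y - y.mean()) / y.std()
        return (y ** 4).mean() - 3.0
    ```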
    50
    1862
    Data Extraction of XML Files using Searching and Indexing Techniques
    Abstract:

    XML files contain data in a well-formatted manner, and studying the format or semantics of the grammar helps in fast retrieval of the data. Many algorithms describe searching for data in XML files. A number of approaches use data structures or are tied to the contents of the document: in the former case the user must know the structure of the document, while information retrieval techniques using NLP relate to the content of the document. Hence the results may be irrelevant or unsuccessful, and the search may take more time. This paper presents fast XML retrieval techniques using a new indexing technique and the concept of RXML. When indexing an XML document, the system takes into account both the document content and the document structure and assigns a value to each tag in the file. To query the system, a user is not constrained to a fixed query format.

    49
    2037
    Handwritten Character Recognition Using Multiscale Neural Network Training Technique
    Abstract:
    Advancement in artificial intelligence has led to the development of various "smart" devices. A character recognition device is one such smart device, acquiring partial human intelligence with the ability to capture and recognize various characters in different languages. Firstly, multiscale neural training with modifications in the input training vectors is adopted in this paper for its advantage in training higher-resolution character images. Secondly, selective thresholding using a minimum-distance technique is proposed to increase the accuracy of character recognition. A simulator program (a GUI) is designed so that the characters can be located at any spot on the blank paper on which they are written. The results show that these methods, with a moderate number of training epochs, can produce accuracies of at least 85% for handwritten upper-case English characters and numerals.
    48
    2365
    Distortion Estimation in Digital Image Watermarking using Genetic Programming
    Abstract:
    This paper introduces a technique for distortion estimation in image watermarking using Genetic Programming (GP). The distortion is estimated by treating the problem of obtaining a distorted watermarked signal from the original watermarked signal as a function regression problem. This regression problem is solved using GP, with the original watermarked signal as the independent variable. The GP-based distortion estimation scheme is checked against a Gaussian attack and a JPEG compression attack. We used Gaussian attacks of different strengths by changing the standard deviation, and the JPEG compression attack is also varied by adding various distortions. Experimental results demonstrate that the proposed technique is able to detect the watermark even under strong distortions and is more robust against attacks.
    47
    3349
    An Analysis of Global Stability of Cohen-Grossberg Neural Networks with Multiple Time Delays
    Abstract:
    This paper presents a new sufficient condition for the existence, uniqueness and global asymptotic stability of the equilibrium point for Cohen-Grossberg neural networks with multiple time delays. The results establish a relationship between the network parameters of the neural system independently of the delay parameters. The results are also compared with the previously reported results in the literature.
    46
    3414
    Online Collaborative Learning System Using Speech Technology
    Abstract:
    A Web-based learning tool, the Learn IN Context (LINC) system, designed and used in some institutions' courses in mixed-mode learning, is presented in this paper. This mode combines face-to-face and distance approaches to education. LINC can support both collaborative and competitive learning. In order to provide both learners and tutors with a more natural way to interact with e-learning applications, a conversational interface has been included in LINC. Hence, the components and essential features of LINC+, the voice-enhanced version of LINC, are described. We report evaluation experiments of LINC/LINC+ in the real-use context of a computer programming course taught at the Université de Moncton (Canada). The findings show that when the learning material is delivered in the form of a collaborative and voice-enabled presentation, the majority of learners are satisfied with this new medium and confirm that it does not negatively affect their cognitive load.
    45
    3578
    Precise Measurement of Displacement using Pixels
    Abstract:
    Manufacturing processes demand tight dimensional tolerances. This paper concerns a transducer for the precise measurement of displacement, based on a camera containing a linescan chip. When tests were conducted using a track of black and white stripes with a 2 mm pitch, the error in measuring an individual cycle amounted to 1.75%, suggesting that a precision of 35 microns is achievable.
    44
    3877
    A Pattern Language for Software Debugging
    Abstract:
    In spite of all the advancement in software testing, debugging remains a labor-intensive, manual, time-consuming and error-prone process. A candidate solution for enhancing the debugging process is to fuse it with the testing process. To achieve this integration, one possible solution is to categorize common software tests and errors, followed by an effort to fix the errors through general solutions for each test/error pair. Our approach to this issue is based on Christopher Alexander's pattern and pattern language concepts. The patterns in this language are grouped into three major sections and connect the three concepts of test, error and debug. These patterns and their hierarchical relationships shape a pattern language that introduces a solution for solving software errors in a known testing context. Finally, we introduce our framework, ADE, as a sample implementation supporting a pattern of the proposed language, which aims to automate the whole process of evolving software design via evolutionary methods.
    43
    3894
    Admission Control Approaches in the IMS Presence Service
    Abstract:
    In this research, we propose a weighted class based queuing (WCBQ) mechanism to provide class differentiation and to reduce the load on the IMS (IP Multimedia Subsystem) presence server (PS). The tasks of the admission controller for the PS are demonstrated. Analysis and simulation models are developed to quantify the performance of the WCBQ scheme. An optimized dropping time frame has been developed, based on which some of the pre-existing messages are dropped from the PS buffer. Cost functions are developed, and a simulation comparison has been performed with the FCFS (First Come First Served) scheme. The results show that the PS benefits significantly from the proposed queuing and dropping algorithm (WCBQ) during heavy traffic.
    42
    4636
    A Multiresolution Approach for Noised Texture Classification based on the Co-occurrence Matrix and First Order Statistics
    Abstract:
    The wavelet transform provides several important characteristics which can be used in texture analysis and classification. In this work, an efficient texture classification method that combines concepts from wavelets and co-occurrence matrices is presented. A Euclidean distance classifier is used to evaluate the various classification methods, and a comparative study is carried out to determine the best one. On this basis, we developed a novel feature set for texture classification and demonstrate its effectiveness.
    41
    4760
    New Wavelet-Based Superresolution Algorithm for Speckle Reduction in SAR Images
    Abstract:

    This paper describes a novel projection algorithm, the Projection Onto Span Algorithm (POSA), for wavelet-based superresolution and for removing speckle of unknown variance (in the wavelet domain) from Synthetic Aperture Radar (SAR) images. Although POSA was conceived as a superresolution algorithm for image enhancement, image metrology and biometric identification, here it is used as a despeckling tool; to our knowledge this is the first time a superresolution algorithm has been used for despeckling SAR images. Specifically, the speckled SAR image is decomposed into wavelet subbands, POSA is applied to the high-frequency subbands, and a SAR image is reconstructed from the modified detail coefficients. Experimental results demonstrate that the new method compares favorably to several other despeckling methods on test SAR images.

    40
    5128
    Representing Shared Join Points with State Charts: A High Level Design Approach
    Abstract:
    Aspect Oriented Programming promises many advantages at the programming level by separating crosscutting concerns into units called aspects. Join points are a distinguishing feature of Aspect Oriented Programming, as they define the points where core requirements and crosscutting concerns are (inter)connected. Currently, there is a problem of composing multiple aspects at the same join point, which introduces issues such as the ordering and control of these superimposed aspects. Dynamic strategies are required to handle these issues as early as possible. The state chart is an effective modeling tool for capturing dynamic behavior at the high-level design stage. This paper provides a methodology for formulating strategies for multiple-aspect composition at a high level, which helps to better implement these strategies at the coding level. It also highlights the need to design shared join points at a high level by providing solutions to these issues using state chart diagrams in UML 2.0. A high-level design representation of shared join points also helps to implement the designed strategy in a systematic way.
    39
    5808
    Improving Image Segmentation Performance via Edge Preserving Regularization
    Abstract:
    This paper presents an improved image segmentation model with edge-preserving regularization based on the piecewise-smooth Mumford-Shah functional. A level set formulation is considered for minimizing the Mumford-Shah functional in segmentation, and the corresponding partial differential equations are solved by backward Euler discretization. To encourage edge-preserving regularization, a new edge indicator function is introduced in the level set framework, in which all the grid points used to locate the level set curve are considered in order to avoid blurring the edges, and a nonlinear smoothing constraint function is applied as a regularization term to smooth the image in the isophote direction instead of the gradient direction. In the implementation, strategies such as a new scheme for extending the computation of u+ and u- to the grid points and for speeding up convergence are studied to improve the efficacy of the algorithm. The resulting algorithm has been implemented and compared with previous methods, and has proved efficient in several cases.
    38
    5846
    LSGENSYS - An Integrated System for Pattern Recognition and Summarisation
    Abstract:
    This paper presents a new system developed in Java® for pattern recognition and pattern summarisation in multi-band (RGB) satellite images. The system design is described in some detail. Results of testing the system to analyse and summarise patterns in SPOT MS images and LANDSAT images are also discussed.
    37
    6117
    Implementation of Adder-Subtracter Design with Verilog HDL
    Abstract:
    As chip densities increase, designers are trying to put more and more computational and storage facilities on a single chip. With the growing complexity of computational and storage circuits, design, testing and debugging become increasingly complex and expensive, so hardware designs are built using high-speed hardware description languages, which are more efficient and cost-effective. This paper focuses on the implementation of a 32-bit ALU design based on the Verilog hardware description language. The adder and subtracter operate correctly on both unsigned and positive numbers. In an ALU, addition takes most of the time if a ripple-carry adder is used. The general strategy for designing fast adders is to reduce the time required to form carry signals; adders that use this principle are called carry look-ahead adders. Here the carry look-ahead adder is designed as a combination of 4-bit adders. The syntax of Verilog HDL is similar to the C programming language. This paper proposes a unified approach to ALU design in which both simulation and formal verification can co-exist.
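    As a software illustration of the carry look-ahead principle described above (not the paper's Verilog code), the following Python sketch models the recurrence c[i+1] = g[i] OR (p[i] AND c[i]); in hardware the carry terms are expanded so they can be computed in parallel rather than iteratively.

    ```python
    def carry_lookahead_add(a, b, width=32):
        """Add two unsigned integers using generate/propagate signals:
        g[i] = a[i] AND b[i], p[i] = a[i] XOR b[i], with the carry recurrence
        c[i+1] = g[i] OR (p[i] AND c[i]); the sum bit is p[i] XOR c[i]."""
        g = [(a >> i) & (b >> i) & 1 for i in range(width)]
        p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]
        c, s = 0, 0
        for i in range(width):
            s |= (p[i] ^ c) << i
            c = g[i] | (p[i] & c)   # look-ahead carry into the next bit
        return s & ((1 << width) - 1)

    assert carry_lookahead_add(19, 23) == 42
    ```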
    36
    6144
    Designing a Novel General Sorting Network Constructor Using Artificial Evolution
    Abstract:
    A method is presented for the construction of arbitrary even-input sorting networks exhibiting better properties than the networks created using a conventional technique of the same type. The method was discovered by means of a genetic algorithm combined with an application-specific development. Similarly to human inventions in the area of theoretical computer science, the evolved invention was analyzed: its generality was proven and area and time complexities were determined.
    35
    6253
    Target Detection with Improved Image Texture Feature Coding Method and Support Vector Machine
    Abstract:

    An image texture analysis and target recognition approach using an improved texture feature coding method (TFCM) and a Support Vector Machine (SVM) for target detection is presented. With the proposed target detection framework, targets of interest can be detected accurately. A cascade-sliding-window technique was also developed for automated target localization. Application to mammograms showed that over 88% of normal mammograms and 80% of abnormal mammograms were correctly identified. The approach was also successfully applied to Synthetic Aperture Radar (SAR) and Ground Penetrating Radar (GPR) images for target detection.

    34
    7315
    Decomposition Method for Neural Multiclass Classification Problem
    Abstract:
    In this article we discuss an improvement to the multiclass classification problem using multilayer perceptrons. The considered approach consists in breaking the n-class problem down into two-class subproblems. Each two-class subproblem is trained independently; in the test phase, the vector to be classified is confronted with all the two-class models, and the elected class is the strongest one that does not lose any competition against the other classes. The recognition rates obtained with the multiclass approach by two-class decomposition are clearly better than those obtained by the simple multiclass approach.
    33
    7480
    Modified Fuzzy ARTMAP and Supervised Fuzzy ART: Comparative Study with Multispectral Classification
    Abstract:

    In this article, a modification of the fuzzy ART network algorithm, aimed at making it supervised, is carried out. It consists of searching for the comparison, training and vigilance parameters that give the minimum quadratic distance between the output of the training base and that obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP giving the most powerful network. The modification consists in having the fuzzy ARTMAP learn a base of examples not just once, as is usual, but as many times as its architecture continues to evolve or the objective error has not been reached. In this way, we need not worry about the values to impose on the eight (08) parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we classify an image of the Bay of Algiers taken by SPOT XS. The evaluation criteria are the training duration, the mean square error (MSE) at the control step, and the rate of correct classification per class. The results of this study, presented as curves, tables and images, show that the modified fuzzy ARTMAP offers the best quality/computing-time compromise.

    32
    7559
    A New Approach for Fingerprint Classification based on Minutiae Distribution
    Abstract:
    This paper describes a new approach for fingerprint classification based on the distribution of local features (minute details, or minutiae) of the fingerprints. The main advantage is that fingerprint classification provides an indexing scheme that facilitates efficient matching in a large fingerprint database. A set of rules based on a heuristic approach has been proposed. The area around the core point is treated as the area of interest for extracting the minutiae features, as there are substantial variations around the core point compared to areas away from it. The core point of a fingerprint is located at the point of maximum curvature. The experimental results report an overall average accuracy of 86.57% in fingerprint classification.
    31
    7802
    Learning Monte Carlo Data for Circuit Path Length
    Abstract:
    This paper analyzes the patterns in Monte Carlo data for a large number of variables and minterms in order to characterize circuit path length behavior. We propose models determined by a training process on shortest path lengths derived from a wide range of binary decision diagram (BDD) simulations. The model was created using a feed-forward neural network (NN) modeling methodology. Experimental results for ISCAS benchmark circuits show an RMS error of 0.102 for the shortest-path-length complexity estimates predicted by the NN model (NNM). Such a model can help reduce the time complexity of very large scale integrated (VLSI) circuit design and related computer-aided design (CAD) tools that use BDDs.
    30
    8103
    A Reusability Evaluation Model for OO-Based Software Components
    Abstract:
    The requirement to improve software productivity has promoted research on software metric technology. There are metrics for identifying the quality of reusable components, but the function that makes use of these metrics to find the reusability of software components is still not clear. If identified in the design phase, or even in the coding phase, these metrics can help reduce rework by improving the quality of component reuse, and hence improve productivity due to the probabilistic increase in the reuse level. The CK metric suite is the most widely used set of metrics for object-oriented (OO) software; we critically analyzed the CK metrics, tried to remove the inconsistencies, and devised a framework of metrics to obtain a structural analysis of OO-based software components. A neural network can learn new relationships from new input data and can be used to refine fuzzy rules to create a fuzzy adaptive system. Hence, a neuro-fuzzy inference engine can be used to evaluate the reusability of an OO-based component from its structural attributes. In this paper, an algorithm is proposed in which tuned WMC, DIT, NOC, CBO and LCOM values of the OO software component are given as inputs to the neuro-fuzzy system, and the output is obtained in terms of reusability. The developed reusability model has produced high-precision results, as expected by the human experts.
    29
    8248
    A Formatting Method for Transforming XML Data into HTML
    Abstract:

    In this paper, we propose a fixed formatting method for PPX (Pretty Printer for XML). PPX is a query language for XML databases with extensive formatting capability that produces HTML as the result of a query. The fixed formatting method completely specifies the combination of variables and layout specification operators within the layout expression of the GENERATE clause of PPX. In our experiment, a quick comparison shows that PPX requires far less description than XSLT or XQuery programs performing the same tasks.

    28
    8816
    Rock Textures Classification Based on Textural and Spectral Features
    Abstract:
    In this paper, we propose a method to classify types of natural rock texture, with the goal of classifying 26 classes of rock textures. First, we extract five features for each class using principal component analysis combined with applied spatial frequency measurement. Next, the most effective number of neural network nodes is determined, and the most effective network is used in the classification process. The system yields quite a high recognition rate, showing that the 26 stone classes can be separated with high accuracy.
    27
    8952
    A Hybrid Machine Learning System for Stock Market Forecasting
    Abstract:
    In this paper, we propose a hybrid machine learning system based on a Genetic Algorithm (GA) and Support Vector Machines (SVM) for stock market prediction. A variety of indicators from the technical analysis field of study are used as input features. We also exploit the correlation between the stock prices of different companies to forecast the price of a stock, using technical indicators of highly correlated stocks, not only of the stock to be predicted. The genetic algorithm is used to select the most informative input features from among all the technical indicators. The results show that the hybrid GA-SVM system outperforms the standalone SVM system.
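    As an illustrative sketch of GA-driven feature selection for an SVM of the kind described above (the selection operators, mutation rate and other parameters below are assumptions, not the paper's exact setup), a binary mask over the technical-indicator columns can be evolved against cross-validated accuracy:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def fitness(mask, X, y):
        """Cross-validated accuracy of an SVM trained on the indicator
        columns selected by the binary mask."""
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=5).mean()

    def evolve(X, y, pop=20, gens=30, seed=0):
        """Tiny GA: truncation selection plus bit-flip mutation over masks."""
        rng = np.random.default_rng(seed)
        P = rng.integers(0, 2, size=(pop, X.shape[1]))
        for _ in range(gens):
            scores = np.array([fitness(m, X, y) for m in P])
            parents = P[np.argsort(scores)][pop // 2:]   # keep the better half
            children = parents.copy()
            flip = rng.random(children.shape) < 0.05     # mutation rate assumed
            children[flip] ^= 1
            P = np.vstack([parents, children])
        return max(P, key=lambda m: fitness(m, X, y))
    ```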
    26
    9058
    A Characterized and Optimized Approach for End-to-End Delay Constrained QoS Routing
    Abstract:
    QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while using network resources efficiently; the underlying routing algorithm must be able to find low-cost paths that satisfy the given QoS constraints. The problem of finding least-cost routing is known to be NP-hard, and some algorithms have been proposed to find a near-optimal solution. However, these heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in execution time to be applicable to large networks. In this paper, we analyze two algorithms, namely Characterized Delay Constrained Routing (CDCR) and Optimized Delay Constrained Routing (ODCR). The CDCR algorithm follows an approach to delay-constrained routing that captures the trade-off between cost minimization and the risk level regarding the delay constraint. ODCR uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution much more quickly.
    25
    9667
    Evolutionary Search Techniques to Solve Set Covering Problems
    Abstract:
    The set covering problem is a classical problem in computer science and complexity theory. It has many applications, such as airline crew scheduling, facility location, vehicle routing and assignment problems. In this paper, three different techniques are applied to solve the set covering problem. First, a mathematical model of the set covering problem is introduced and solved using the optimization solver LINGO. Second, the Genetic Algorithm Toolbox available in MATLAB is used. Lastly, an ant colony optimization method is programmed in the MATLAB language. Results obtained from these methods are presented in tables. To assess the performance of the techniques used in this project, benchmark problems available in the open literature are used.
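    For orientation only, the classic greedy approximation, which is not among the three techniques used in the paper, gives a compact baseline for the same problem:

    ```python
    def greedy_set_cover(universe, subsets, costs):
        """Classic greedy approximation: repeatedly pick the subset with the
        lowest cost per newly covered element."""
        uncovered, chosen = set(universe), []
        while uncovered:
            useful = [j for j in range(len(subsets)) if subsets[j] & uncovered]
            if not useful:
                raise ValueError("instance is infeasible")
            best = min(useful,
                       key=lambda j: costs[j] / len(subsets[j] & uncovered))
            uncovered -= subsets[best]
            chosen.append(best)
        return chosen

    # Example: cover {1..5} from three candidate subsets with unit costs.
    print(greedy_set_cover({1, 2, 3, 4, 5},
                           [{1, 2, 3}, {3, 4}, {4, 5}], [1, 1, 1]))
    ```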
    24
    9736
    Performance Trade-Off of File System between Overwriting and Dynamic Relocation on a Solid State Drive
    Abstract:
    Most file systems overwrite modified file data and metadata in their original locations, while the Log-structured File System (LFS) dynamically relocates them to other locations. We design and implement the Evergreen file system, which can select between overwriting and relocation for each block of a file or its metadata. The Evergreen file system can therefore achieve superior write performance by sequentializing write requests (similar to LFS-style relocation) when space utilization is low, and by overwriting when utilization is high. Another challenging issue is identifying the performance benefits of LFS-style relocation over overwriting on the newly introduced SSD (Solid State Drive), which has only Flash-memory chips and control circuits without mechanical parts. Our experimental results measured on an SSD show that relocation outperforms overwriting when space utilization is below 80%, and vice versa.
    23
    9813
    Development of a Semantic Wiki-based Feature Library for the Extraction of Manufacturing Feature and Manufacturing Information
    Abstract:
    A manufacturing feature can be defined simply as a geometric shape together with the manufacturing information needed to create it. In a feature-based process planning system, a feature library consisting of pre-defined manufacturing features and the manufacturing information to create their shapes plays an important role in the extraction of manufacturing features with their proper manufacturing information. However, to manage the manufacturing information flexibly, it is important to build a feature library that can be easily modified. In this paper, an implementation based on a semantic wiki for the development of such a feature library is proposed.
    22
    10525
    ORank: An Ontology Based System for Ranking Documents
    Abstract:
    The growing volume of information on the Internet creates an increasing need for new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods, a combination that preserves the precision of ranking without losing speed. The approach exploits natural language processing techniques to extract phrases and stem words; an ontology-based conceptual method is then used to annotate documents and expand the query. For query expansion, the spread activation algorithm is improved so that the expansion can be done in various respects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are: (1) combining conceptual, statistical and linguistic features of documents; (2) expanding the query with its related concepts before comparing it to documents; (3) extracting and using both words and phrases to compute the relevance degree; (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships; and (5) allowing variable document vector dimensions. A ranking system called ORank has been developed to implement and test the proposed model. The test results are included at the end of the paper.
    21
    10720
    A Specification-Based Approach for Retrieval of Reusable Business Component for Software Reuse
    Abstract:
    Software reuse can be considered the most realistic and promising way to improve software engineering productivity and quality. Automated assistance for software reuse involves the representation, classification, retrieval and adaptation of components. The representation and retrieval of components are important to software reuse in Component-Based Software Development (CBSD). However, current industrial component models mainly focus on implementation techniques and ignore semantic information about components, so it is difficult to retrieve the components that satisfy users' requirements. This paper presents a method of business component retrieval based on specification matching to support software reuse in enterprise information systems. First, a reuse-oriented business component model is proposed. In our model, business data types are represented as sign data types based on XML, which can express the variable business data types that describe the variety of business operations. Based on this model, we propose specification match relationships on two levels: the business operation level and the business component level. At the business operation level, we use input business data types, output business data types and the taxonomy of business operations to evaluate the similarity between business operations. At the business component level, we propose five specification matches between business components. To retrieve reusable business components, we propose similarity-degree measures to calculate the similarities between business components. Finally, an SQL-like business component retrieval command is proposed to help users retrieve approximate business components from a component repository.
    20
    10861
    DIFFER: A Propositionalization approach for Learning from Structured Data
    Abstract:
    Logic-based methods for learning from structured data are limited with respect to handling large search spaces, preventing large substructures from being considered by the resulting classifiers. A novel approach to learning from structured data is introduced that employs a structure transformation method, called fingerprinting, to address these limitations. The method, which generates features corresponding to arbitrarily complex substructures, is implemented in a system called DIFFER. The method is demonstrated to perform comparably to an existing state-of-the-art method on some benchmark data sets without requiring restrictions on the search space. Furthermore, learning from the union of the features generated by fingerprinting and the previous method outperforms learning from each individual set of features on all benchmark data sets, demonstrating the benefit of developing complementary, rather than competing, methods for structure classification.
    19
    11323
    Decision Support System for Flood Crisis Management using Artificial Neural Network
    Abstract:
    This paper presents an alternative approach that uses an artificial neural network to simulate the flood level dynamics in a river basin. The algorithm was developed in a decision support system environment in order to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexible approach and evolving graphical features, and can be adopted for any similar situation for predicting the flood level. The main data processing includes gauging station selection, input generation, lead-time selection/generation, and length of prediction. The program enables users to process the flood level data, to train/test the model using various inputs, and to visualize the results. The program code consists of a set of files which can be modified to match other purposes. The running results indicate that the decision support system applied to the flood level has reached encouraging results for the river basin under examination. The comparison of the model predictions with the observed data was satisfactory: the model is able to forecast the flood level up to 5 hours in advance with reasonable prediction accuracy. Finally, the program may also serve as a tool for real-time flood monitoring and process control.
    18
    11523
    Next Generation Networks and Their Relation with Ad-hoc Networks
    Abstract:
    The development of communication networks over the last two decades has been directed toward a single goal: the gradual change from circuit-switched networks to packet-switched ones. Today many network operators are trying to transform the public telephone networks into multipurpose packet switches. This new achievement is generally called "next generation networks". In fact, next generation networks enable operators to carry every kind of service (voice, data and video) on a single network. This report first studies the definition, characteristics and services of next generation networks, and then the role of ad-hoc networks in them.
    17
    12373
    Automatic Musical Genre Classification Using Divergence and Average Information Measures
    Abstract:
    Recently much research has been conducted to find pertinent parameters and adequate models for automatic music genre classification. In this paper, two measures based on information theory concepts are investigated for mapping the feature space to the decision space. A Gaussian Mixture Model (GMM) is used as a baseline and reference system. Various strategies are proposed for training and testing sessions with matched or mismatched conditions: long training and long testing, and long training and short testing. For all experiments, the file sections used for testing were never used during training. With matched conditions, all examined measures yield the best and similar scores (almost 100%). With mismatched conditions, the proposed measures yield better scores than the GMM baseline system, especially in the short-testing case. It is also observed that the average discrimination information measure is most appropriate for music category classification, while the divergence measure is more suitable for music subcategory classification.
    16
    12598
    Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models
    Abstract:
    We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs). We train the hybrid architecture using genetic algorithms to learn and represent dynamical systems. The hybrid architecture is trained on a set of deterministic finite-state automaton strings, and its generalization performance is observed on a new set of strings that were not present in the training data. In this way, we show that the hybrid system of HMM and RNN can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm, and also experiments to find which weight initializations were best for training the hybrid architecture. The results show that the hybrid architecture can be trained to represent dynamical systems, and that the best training and generalization performance is achieved when it is initialized with random real weight values in the range -15 to 15.
    15
    12743
    Decision Algorithm for Smart Airbag Deployment Safety Issues
    Abstract:
    Airbag deployment has been responsible for many deaths, incidental injuries and broken bones due to low crash severity and wrong deployment decisions. Therefore, authorities and industry have been looking for more innovative and intelligent products for future enhancements of vehicle safety systems (VSSs). Although VSS technologies have advanced considerably, they still face challenges such as how to avoid unnecessary and untimely airbag deployments that can be hazardous and fatal. Currently, most existing airbag systems deploy without regard to occupant size and position. As such, this paper focuses on occupant and crash sensing performance in frontal collisions for the new breed of so-called smart airbag systems. It provides a thorough discussion of occupancy detection, occupant size classification, occupant off-position detection to determine the safe-distance zone for airbag deployment, crash severity analysis, and airbag decision algorithms via computer modeling. The proposed system model consists of three main modules: occupant sensing, crash severity analysis and decision fusion. The occupant sensing module utilizes a weight sensor to determine occupancy, classify occupant size, and detect the occupant off-position condition in order to compute the safe distance for airbag deployment. The crash severity analysis module generates information relevant to the airbag deployment decision. Outputs from these two modules are fused by the decision module for correct and efficient airbag deployment action. The computer modeling work is carried out using the Simulink, Stateflow, SimMechanics and Virtual Reality toolboxes.
    14
    12764
    Simple Agents Benefit Only from Simple Brains
    Abstract:

    In order to answer the general question "What does a simple agent with a limited lifetime require for constructing a useful representation of the environment?", we propose a robot platform including the simplest probabilistic sensory and motor layers. We then use the platform as a test-bed for evaluating the navigational capabilities of the robot with different "brains". We claim that protocognitive behavior is not a consequence of highly sophisticated sensory-motor organs but instead emerges through an increase in internal complexity and reutilization of minimal sensory information. We show that the most fundamental robot element, short-time memory, is essential in obstacle avoidance; however, in the simplest condition of no obstacles, the straightforward memoryless robot is usually superior. We also demonstrate how low-level action planning, involving essentially nonlinear dynamics, provides a considerable gain in robot performance by dynamically changing the robot's strategy. Still, for very short lifetimes, the brainless robot is superior. Accordingly, we suggest that small organisms (or agents) with short lifetimes do not require complex brains and can even benefit from simple brain-like (reflex) structures. To some extent this may mean that the controlling blocks of modern robots are too complicated relative to their lifetimes and mechanical abilities.

    13
    12781
    A Feasible Path Selection QoS Routing Algorithm with two Constraints in Packet Switched Networks
    Abstract:
    Over the past several years, there has been a considerable amount of research in the field of Quality of Service (QoS) support for distributed multimedia systems. One of the key issues in providing end-to-end QoS guarantees in packet networks is determining a feasible path that satisfies a number of QoS constraints. The problem of finding a feasible path is NP-complete if the number of constraints is more than two, and cannot be solved exactly in polynomial time. We propose a Feasible Path Selection Algorithm (FPSA) that addresses issues pertaining to finding a feasible path subject to delay and cost constraints, and it offers a higher success rate in finding feasible paths.
    12
    12808
    Social, Group and Individual Mind extracted from Rule Bases of Multiple Agents
    Abstract:
    This paper shows the possibility of extracting social, group and individual mind from the rule bases of multiple agents. The rule bases are of two fuzzy system types, namely Mamdani and Takagi-Sugeno, and they describe (model) agent behavior. Agent behavior in a time-varying environment is modified by learning fuzzy-neural networks and optimizing their parameters with genetic algorithms in the FUZNET development system. Finally, the social, group and individual mind is extracted from the rule bases of the multiple agents by cognitive analysis and a matching criterion.
    11
    13398
    Optimizing of Fuzzy C-Means Clustering Algorithm Using GA
    Abstract:
    The Fuzzy C-means clustering algorithm (FCM) is a method frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it is not capable of specifying the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) to a conventional value of 2, which might not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. To get an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm that give the minimum least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely the iterative search approach and genetic algorithms. The approach is tested on data generated from an original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models.
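    For reference, the standard FCM iteration that the weighting exponent m enters is sketched below in Python; this is the textbook algorithm, not the paper's subtractive-clustering or GA machinery, and the names are illustrative.

    ```python
    import numpy as np

    def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
        """Fuzzy C-means: alternate membership and centroid updates; the
        weighting exponent m controls the fuzziness of the partition."""
        rng = np.random.default_rng(seed)
        U = rng.random((c, X.shape[0]))
        U /= U.sum(axis=0)                                # random fuzzy partition
        for _ in range(iters):
            Um = U ** m
            V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # cluster centres
            D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
            Unew = 1.0 / D ** (2.0 / (m - 1.0))
            Unew /= Unew.sum(axis=0)                      # membership update
            if np.abs(Unew - U).max() < tol:
                return V, Unew
            U = Unew
        return V, U
    ```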
    10
    14214
    A New Quantile Based Fuzzy Time Series Forecasting Model
    Abstract:

    Time series models have been used to make predictions of academic enrollments, weather, road accident casualties, stock prices, and so on. Based on the concepts of quantile regression models, we have developed a simple time-variant quantile-based fuzzy time series forecasting method. The proposed method bases the forecast on a prediction of the future trend of the data. In place of the actual quantiles of the data at each point, we convert the statistical concept into a fuzzy one by using fuzzy quantiles derived from a fuzzy membership function ensemble. We give a fuzzy metric that uses the trend forecast to calculate the future value. The proposed model is applied to TAIFEX forecasting. It is shown that the proposed method works best compared to other models with respect to model complexity and forecasting accuracy.

    9
    14674
    Decision Rule Induction in a Learning Content Management System
    Abstract:
    A learning content management system (LCMS) is an environment that supports web-based learning content development. The primary function of the system is to manage the learning process and to generate content customized to the unique requirements of each learner. Complementing the supporting tools offered by several vendors, we propose to enhance LCMS functionality to individualize the presented content through an induction capability. Our induction technique is based on rough set theory. The induced rules are intended to serve as supportive knowledge for guiding content flow planning. They can also be used as decision rules to help content developers manage the content delivered to individual learners.
    8
    14679
    Neuro-Fuzzy Algorithm for a Biped Robotic System
    Abstract:
    This paper summarizes the basic principles and concepts of intelligent control as implemented in humanoid robotics, as well as recent algorithms devised for the advanced control of humanoid robots. It then presents a new neuro-fuzzy approach. We include some simulation results from our computational intelligence technique that will be applied to our humanoid robot. Subsequently, we determine a relationship between joint trajectories and the forces located on the robot's foot through the proposed neuro-fuzzy technique.
    7
    14824
    Kinematics and Control System Design of Manipulators for a Humanoid Robot
    Abstract:
    In this work, a new approach is proposed to control the manipulators of a humanoid robot. The kinematics of the manipulators, in terms of the position, velocity, acceleration and torque of each joint, is computed using the Denavit-Hartenberg (D-H) notation. These variables are used to design the manipulator control system proposed in this work. To support the development of a controller, a simulation of the manipulator is designed for the humanoid robot. This simulation is developed using the Virtual Reality Toolbox and Simulink in Matlab. The Virtual Reality Toolbox in Matlab provides the interfacing and controls for an environment developed in the Virtual Reality Modeling Language (VRML). Chains of bones were used to represent the robot.
    6
    14841
    Human Face Detection and Segmentation using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms
    Abstract:
    In this paper we propose a novel method for human face segmentation using the elliptical structure of the human head. It makes use of the information present in the edge map of the image. The approach uses the fact that the eigenvalues of the covariance matrix represent the elliptical structure: the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of an ellipse. The other elliptical parameters are used to identify the centre and orientation of the face. Since an elliptical Hough transform would require a 5D Hough space, the Circular Hough Transform (CHT) is used to evaluate the elliptical parameters. A sparse matrix technique is used to perform the CHT, as it squeezes out zero elements and stores only a small number of non-zero elements, thereby requiring less storage space and computational time. A neighborhood suppression scheme is used to identify the valid Hough peaks. The accurate positions of the circumference pixels for occluded and distorted ellipses are identified using Bresenham's raster scan algorithm, which uses geometrical symmetry properties. This method does not require the evaluation of tangents to curvature contours, which are very sensitive to noise. The method has been evaluated on several images with different face orientations.
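    A minimal sketch of the covariance-eigenvalue idea described above: given edge-map coordinates, the eigenvalues of their covariance matrix indicate the axial lengths and the eigenvectors the orientation. The scale factor relating eigenvalues to axial lengths below is an assumption for illustration, not the paper's calibration.

    ```python
    import numpy as np

    def ellipse_from_edge_points(pts):
        """Estimate centre, axial lengths and orientation of the ellipse
        fitting (x, y) edge points via covariance eigen-decomposition."""
        pts = np.asarray(pts, dtype=float)             # shape (N, 2)
        centre = pts.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov((pts - centre).T))
        minor, major = 2.0 * np.sqrt(evals)            # scale factor assumed
        angle = np.arctan2(evecs[1, 1], evecs[0, 1])   # major-axis orientation
        return centre, major, minor, angle
    ```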
    5
    15275
    Novel Rao-Blackwellized Particle Filter for Mobile Robot SLAM Using Monocular Vision
    Abstract:
    This paper presents a novel Rao-Blackwellized particle filter (RBPF) for mobile robot simultaneous localization and mapping (SLAM) using monocular vision. The particle filter is combined with an unscented Kalman filter (UKF) to extend the path posterior by sampling new poses that integrate the current observation, which drastically reduces the uncertainty about the robot pose. Landmark position estimation and update are also implemented through the UKF. Furthermore, the number of resampling steps is determined adaptively, which greatly reduces the particle depletion problem, and evolution strategies (ES) are introduced to avoid particle impoverishment. The 3D natural point landmarks are structured by matching Scale Invariant Feature Transform (SIFT) feature pairs, and matching of the multi-dimensional SIFT features is implemented with a KD-tree at a time cost of O(log2 N). Experimental results on a real robot in our indoor environment show the advantages of our methods over previous approaches.
    4
    15427
    Dynamic Metrics for Polymorphism in Object Oriented Systems
    Abstract:
    Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules. Software metrics are instruments for measuring all aspects of a software product, and are used throughout a software project to assist in estimation, quality control, productivity assessment and project control. Object-oriented software metrics focus on measurements applied to classes and other characteristics; these measurements convey to the software engineer the behavior of the software and show how changes can be made that will reduce complexity and improve the continuing capability of the software. Object-oriented software metrics can be classified into two types: static and dynamic. Static metrics are concerned with measuring aspects of the software by static analysis, whereas dynamic metrics are concerned with measuring aspects of the software at run time. Most previous work focused on static metrics; some work has also been done on the dynamic nature of software measurement, but this area demands more research. In this paper we give a set of dynamic metrics specifically for polymorphism in object-oriented systems.
    3
    15794
    Speech Data Compression using Vector Quantization
    Abstract:
    Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give greater data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique, employing the VQ algorithms LBG, KPE and FCG. A results table shows the computational complexity of these three algorithms. We also introduce a new performance parameter, Average Fractional Change in Speech Sample (AFCSS). Our FCG algorithm gives far better performance than the others with respect to mean absolute error, AFCSS and complexity.
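    For orientation, a minimal Python sketch of the classic LBG codebook training loop is given below; it is the textbook splitting-plus-Lloyd procedure, not the paper's KPE or FCG variants, and it assumes the codebook size is a power of two.

    ```python
    import numpy as np

    def lbg(vectors, codebook_size, eps=0.01, iters=20):
        """LBG training: start from the global centroid, split each
        codevector by (1 +/- eps), then refine with nearest-neighbour /
        centroid iterations. `codebook_size` assumed a power of two."""
        X = np.asarray(vectors, dtype=float)
        book = X.mean(axis=0, keepdims=True)
        while book.shape[0] < codebook_size:
            book = np.vstack([book * (1 + eps), book * (1 - eps)])  # split
            for _ in range(iters):
                d = ((X[:, None, :] - book[None, :, :]) ** 2).sum(axis=2)
                nearest = d.argmin(axis=1)
                for k in range(book.shape[0]):
                    if (nearest == k).any():
                        book[k] = X[nearest == k].mean(axis=0)      # centroid
        return book
    ```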
    2
    15811
    A Comparative Analysis of Fuzzy, Neuro-Fuzzy and Fuzzy-GA Based Approaches for Software Reusability Evaluation
    Abstract:
    Software reusability is a primary attribute of software quality. There are metrics for identifying the quality of reusable components, but the function that makes use of these metrics to find the reusability of software components is still not clear. If identified in the design phase, or even in the coding phase, these metrics can help reduce rework by improving the quality of component reuse and hence improve productivity due to the probabilistic increase in the reuse level. In this paper, we have devised a framework of metrics that uses McCabe's Cyclomatic Complexity measure for complexity, a regularity metric, the Halstead Software Science indicator for volume, a reuse frequency metric and a coupling metric as input attributes of the software component, from which its reusability is calculated. A comparative analysis of fuzzy, neuro-fuzzy and fuzzy-GA approaches for evaluating the reusability of software components is performed, and the fuzzy-GA results outperform the other approaches. The developed reusability model has produced high-precision results, as expected by the human experts.
    1
    15922
    Context Modeling and Context-Aware Service Adaptation for Pervasive Computing Systems
    Abstract:

    Devices in a pervasive computing system (PCS) are characterized by their context-awareness, which permits them to provide proactively adapted services to the user and to applications. To do so, context must be well understood and modeled in an appropriate form that enhances its sharing between devices and provides a high level of abstraction. The most interesting methods for modeling context are those based on ontologies; however, the majority of the proposed methods fail to provide a generic context ontology, which limits their usability and keeps them specific to a particular domain. The adaptation task must be done automatically, without explicit intervention by the user. Devices in a PCS must acquire some intelligence that permits them to sense the current context and trigger the appropriate service, or to provide a service in a more suitable form. In this paper we propose a generic service ontology for context modeling and a context-aware service adaptation based on a service-oriented definition of context.