Excellence in Research and Innovation for Humanity

International Science Index

Commenced in January 1999 · Frequency: Monthly · Edition: International · Paper Count: 46

Computer, Electrical, Automation, Control and Information Engineering

    46
    231
    Learning Block Memories with Metric Networks
    Abstract:
    An attractor neural network on the small-world topology is studied. A learning pattern is presented to the network, then a stimulus carrying local information is applied to the neurons, and the retrieval of the block-like structure is investigated. Synaptic noise decreases the memory capability. The change of stability from local to global attractors is shown to depend on the long-range character of the network connectivity.
    45
    337
    A Framework for the Development of a Suitable Method to Find Shoot Length at Maturity of Mustard Plant Using a Soft Computing Model
    Abstract:
    The production of a plant can be measured in terms of seeds, and the generation of seeds plays a critical role in our social and daily life. Fruit production, which generates seeds, depends on various parameters of the plant, such as shoot length, number of leaves, root length and number of roots. While the plant is growing, some leaves may be lost and new leaves may appear, so it is very difficult to use the number of leaves to measure the growth of the plant. It is also cumbersome to measure the number and length of roots continuously at several time instances after the initial period, because the roots grow deeper and deeper underground over time. On the contrary, the shoot length grows over time and can be measured at different time instances, so the growth of the plant can be measured using shoot-length data collected at different time instances after plantation. Environmental parameters such as temperature, rainfall, humidity and pollution also play a role in yield, and soil, crop and spacing management are taken care of to maximize yield. Data on the growth of shoot length of some mustard plants at the initial stage (7, 14, 21 and 28 days after plantation) are available from a statistical survey by a group of scientists under the supervision of Prof. Dilip De. In this paper, the initial shoot length of Ken (one type of mustard plant) has been used as initial data. Statistical models and methods based on fuzzy logic and neural networks have been tested on this mustard plant and, based on error analysis (calculation of the average error), the model with the minimum error has been selected for the assessment of shoot length at maturity. Finally, all these methods have been tested on other types of mustard plant, and the soft computing model with the minimum error over all types has been selected for predicting the growth of shoot length. The shoot length at maturity of all types of mustard plant has been calculated by applying the statistical method to the predicted shoot-length data.
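The core of the procedure described above is the selection of the candidate model with the minimum average error on the early shoot-length measurements. The following minimal Python sketch illustrates that selection step only; the measurement values, the 90-day maturity point and the two simple curve fits are assumptions standing in for the paper's statistical, fuzzy and neural models.

```python
import numpy as np

# Hypothetical early-stage measurements: days after plantation vs. shoot length (cm).
days = np.array([7.0, 14.0, 21.0, 28.0])
length = np.array([3.1, 6.8, 11.9, 16.2])

def avg_error(pred, obs):
    """Average absolute error, the selection criterion described in the abstract."""
    return float(np.mean(np.abs(pred - obs)))

# Two illustrative candidate models; the paper's statistical, fuzzy and neural
# models are not reproduced here.
candidates = {
    "linear": np.poly1d(np.polyfit(days, length, 1)),
    "quadratic": np.poly1d(np.polyfit(days, length, 2)),
}

errors = {name: avg_error(model(days), length) for name, model in candidates.items()}
best = min(errors, key=errors.get)
print(errors, "-> selected:", best)

# The selected model would then be extrapolated, e.g. candidates[best](90), to
# estimate shoot length at an assumed maturity of 90 days after plantation.
```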
    44
    688
    Mining Image Features in an Automatic Two-Dimensional Shape Recognition System
    Abstract:
    The number of features required to represent an image can be very large, and using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem addressed here is the grouping of features for different shapes. Experiments have been conducted using the shape outline as the feature set. Shape-outline readings are put through normalization and a dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this pre-processing step, the data are grouped by shape. Through statistical analysis of these readings together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects through their shapes. Finally, experiments also demonstrate the system's invariance to rotation, translation, scale, reflection and a small degree of distortion.
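The normalization and eigenvector-based dimensionality reduction mentioned above can be illustrated with a short PCA-style sketch; the size of the reading matrix and the reduced dimensionality are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical matrix of shape-outline readings: one row per shape sample,
# one column per boundary point (e.g. a radial distance from the centroid).
readings = np.random.rand(50, 128)

# Normalization: zero mean and unit variance per feature.
X = (readings - readings.mean(axis=0)) / (readings.std(axis=0) + 1e-12)

# Eigenvector-based reduction: project onto the leading eigenvectors of the
# covariance matrix (classical PCA).
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
k = 10                                     # reduced dimensionality (assumption)
reduced = X @ eigvecs[:, order[:k]]

print(reduced.shape)                       # (50, 10): new set of readings, then grouped by shape
```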
    43
    908
    Complex Energy Signal Model for Digital Human Fingerprint Matching
    Abstract:
    This paper describes a complex energy signal model that is isomorphic with digital human fingerprint images. By using signal models, the problem of fingerprint matching is transformed into the signal processing problem of finding a correlation between two complex signals that differ by phase-rotation and time-scaling. A technique for minutiae matching that is independent of image translation, rotation and linear-scaling, and is resistant to missing minutiae is proposed. The method was tested using random data points. The results show that for matching prints the scaling and rotation angles are closely estimated and a stronger match will have a higher correlation.
    42
    1014
    Semi-Automatic Analyzer to Detect Authorial Intentions in Scientific Documents
    Abstract:
    Information Retrieval studies models and systems that allow a user to find the documents relevant to his or her information need. Information search remains a difficult problem because of the difficulty of representing and processing natural-language phenomena such as polysemy. Intentional structures promise to be a new paradigm for extending existing document structures and enhancing the different phases of document processing, such as creation, editing, search and retrieval. Recognizing the intentions of the authors of texts can reduce the scale of this problem. In this article, we present an intention recognition system based on a semi-automatic method for extracting intentional information from a text corpus. The system is also able to update the ontology of intentions in order to enrich the knowledge base containing all possible intentions of a domain. The approach relies on the construction of a semi-formal ontology, which is considered a conceptualization of the intentional information contained in a text. Experiments on scientific publications in the field of computer science were conducted to validate this approach.
    41
    1290
    3D Face Modeling based on 3D Dense Morphable Face Shape Model
    Abstract:
    A realistic 3D face model represents the pose, illumination and expression of a face more precisely than a 2D face model, so it can be used effectively in various applications such as face recognition, games, avatars and animations. In this paper, we propose a 3D face modeling method based on a 3D dense morphable shape model. The proposed method first constructs a 3D dense morphable shape model from 3D face scan data obtained with a 3D scanner. Next, it extracts and matches facial landmarks from a 2D image sequence containing the face to be modeled, and then reconstructs the 3D coordinates of the landmark vertices using a factorization-based SfM technique. The method then obtains a 3D dense shape model of the face by fitting the constructed morphable shape model to the reconstructed 3D vertices. It also builds a cylindrical texture map from the 2D face image sequence. Finally, a 3D face model is generated by rendering the 3D dense face shape model with the cylindrical texture map. The model-building process shows that the proposed method is relatively easy, fast and precise.
    40
    1700
    Word Recognition and Learning based on Associative Memories and Hidden Markov Models
    Abstract:
    A word recognition architecture based on a network of neural associative memories and hidden Markov models has been developed. The input stream, composed of subword units such as word-internal triphones consisting of diphones and triphones, is provided to the network of neural associative memories by hidden Markov models. The word recognition network derives words from this input stream. The architecture is able to handle ambiguities at the subword-unit level and can also add new words to the vocabulary during operation. The architecture is implemented to perform the word recognition task in a language processing system for understanding simple command sentences such as “bot show apple”.
    39
    1717
    Using Data Clustering in Oral Medicine
    Abstract:
    The vast amount of information hidden in huge databases has created tremendous interest in the field of data mining. This paper examines the possibility of using data clustering techniques in oral medicine to identify functional relationships between different attributes and to classify similar patient examinations. Commonly used data clustering algorithms have been reviewed and, as a result, several interesting results have been gathered.
    38
    1738
    Selective Intra Prediction Mode Decision for H.264/AVC Encoders
    Abstract:
    H.264/AVC offers considerably higher coding efficiency than other compression standards such as MPEG-2, but its computational complexity is significantly increased. In this paper, we propose selective mode decision schemes for fast intra prediction mode selection. The objective is to reduce the computational complexity of the H.264/AVC encoder without significant rate-distortion performance degradation. In our proposed schemes, the intra prediction complexity is reduced by limiting the luma and chroma prediction modes using the directional information of the 16×16 prediction mode. Experimental results show that the proposed schemes reduce the complexity by up to 78% while maintaining similar PSNR quality with an average bit rate increase of about 1.46%.
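A minimal sketch of the mode-limiting idea: the direction of the best 16×16 luma mode restricts which 4×4 modes are evaluated. The candidate subsets and the cost callback below are illustrative assumptions, not the exact sets used in the paper.

```python
# H.264 16x16 luma intra modes: 0 = vertical, 1 = horizontal, 2 = DC, 3 = plane.
# Each maps to a reduced list of 4x4 luma modes to evaluate (illustrative subsets).
CANDIDATES_4X4 = {
    0: [0, 2, 5, 7],     # vertical-leaning 4x4 modes + DC
    1: [1, 2, 6, 8],     # horizontal-leaning 4x4 modes + DC
    2: [0, 1, 2],        # DC: keep only the cheapest modes
    3: [2, 3, 4, 5, 6],  # plane: diagonal 4x4 modes + DC
}

def select_4x4_mode(best_16x16_mode, rd_cost):
    """Evaluate only the reduced candidate list instead of all nine 4x4 modes.
    rd_cost stands in for the encoder's rate-distortion cost of a 4x4 mode."""
    return min(CANDIDATES_4X4[best_16x16_mode], key=rd_cost)

# Example with a dummy cost function in place of the real RD computation:
print(select_4x4_mode(0, rd_cost=lambda mode: abs(mode - 5)))   # -> 5
```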
    37
    1989
    IFDewey: A New Insert-Friendly Labeling Schema for XML Data
    Abstract:

    XML has become a popular standard for information exchange on the Web. Each XML document can be represented as a rooted, ordered, labeled tree, where a node label indicates the exact position of the node in the original document. Region and Dewey encoding are two well-known tree labeling methods. In this paper, we propose a new insert-friendly labeling method, named IFDewey, based on the recently proposed Extended Dewey scheme. In Extended Dewey, many labels must be modified when a new node is inserted into the XML tree. Our method eliminates this problem by reserving even numbers for future insertions: IFDewey modifies Extended Dewey so that only odd numbers are generated, and even numbers can then be used for much easier insertion of nodes.
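A minimal sketch of the odd/even idea described above: sibling label components are generated as odd numbers only, leaving even numbers free for later insertions so that existing labels never change. The function names and the tuple representation of labels are illustrative, not taken from the paper.

```python
def initial_labels(n_children, parent=()):
    """Label n_children siblings with odd components only: 1, 3, 5, ..."""
    return [parent + (2 * i + 1,) for i in range(n_children)]

def insert_between(left, right):
    """Insert a new sibling between two existing labels using a reserved even number."""
    gap = (left[-1] + right[-1]) // 2       # e.g. between 1 and 3 -> 2
    if gap == left[-1]:                     # no free integer left at this level
        raise ValueError("gap exhausted; relabeling would be needed")
    return left[:-1] + (gap,)

children = initial_labels(3)                         # [(1,), (3,), (5,)]
new_node = insert_between(children[0], children[1])  # (2,): no existing label is modified
print(children, new_node)
```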

    36
    2034
    MAP-Based Image Super-resolution Reconstruction
    Abstract:

    From a set of shifted, blurred and decimated images, super-resolution reconstruction can recover a high-resolution image, so it has become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem. Prior knowledge about the image can be incorporated to make the problem well-posed, which leads to regularization methods. In current regularization methods, however, the regularization parameter is in some cases selected by experience, while other techniques incur a heavy computational cost to compute it. In this paper, we construct a new super-resolution algorithm by transforming the solution of the underlying system into the solution of the matrix equation X + A*X⁻¹A = I, and propose an inverse iterative method.

    35
    2132
    Comparison between Beta Wavelet Neural Networks, RBF Neural Networks and Polynomial Approximation for 1D and 2D Function Approximation
    Abstract:

    This paper presents a comparison between wavelet neural networks (WNN), RBF neural networks and polynomial approximation for 1-D and 2-D function approximation. We present a novel wavelet neural network, based on Beta wavelets, for 1-D and 2-D function approximation. Our purpose is to approximate an unknown function f: Rⁿ → R from scattered samples (xᵢ, yᵢ = f(xᵢ)), i = 1, ..., n, where, first, we have little a priori knowledge of the unknown function f (it lives in some infinite-dimensional smooth function space) and, second, the approximation process is performed iteratively: each new measurement of the function (xᵢ, f(xᵢ)) is used to compute a new estimate of f. Simulation results are presented to validate the generalization ability and efficiency of the proposed Beta wavelet network.

    34
    2319
    Global Security Using Human Face Understanding under Vision Ubiquitous Architecture System
    Abstract:
    Different methods based on biometric algorithms are presented for eigenface-based detection, including face recognition, identification and verification. The aim of this research is to manage the critical processing stages (accuracy, speed, security and monitoring) of face activities with the flexibility to search and edit the secure authorized database. In this paper we implement several techniques, such as eigenface vector reduction using texture and shape vectors to remove complexity, while a density matching score with Face Boundary Fixation (FBF) extracts the most likely characteristics from the processed media content. We examine the development and performance efficiency of the database by applying our algorithms to both the recognition and detection stages. Our results show gains in accuracy and security, with better achievement than a number of previous approaches in all of the above processes.
    33
    2710
    Web Content Mining: A Solution to Consumer's Product Hunt
    Abstract:

    With the rapid growth in business size, today's businesses are oriented towards electronic technologies; Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous amount of largely unstructured data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from seemingly impossible-to-search unstructured data on the Internet. Web content mining can be applied very fruitfully in areas such as customer relations modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we review some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact on business user needs, focusing on both the customer and the producer. The techniques reviewed include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, a graph-based approach for the development of a web crawler, and filtering information for personalized search using website captions. These techniques are analyzed and compared on the basis of their execution time and the relevance of the results they produce for a particular search.

    32
    2732
    Adaptive PID Controller based on Reinforcement Learning for Wind Turbine Control
    Abstract:
    A self-tuning PID control strategy using reinforcement learning is proposed in this paper for the control of wind energy conversion systems (WECS). Actor-Critic learning is used to tune the PID parameters adaptively, taking advantage of the model-free and on-line learning properties of reinforcement learning. In order to reduce the demand for storage space and improve learning efficiency, a single RBF neural network is used to approximate both the policy function of the Actor and the value function of the Critic. The inputs of the RBF network are the system error and its first- and second-order differences. The Actor realizes the mapping from the system state to the PID parameters, while the Critic evaluates the outputs of the Actor and produces the TD error. Based on the TD error performance index and the gradient descent method, the update rules for the RBF kernel functions and network weights are given. Simulation results show that the proposed controller is efficient for WECS, adaptable and strongly robust, performing better than a conventional PID controller.
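The following sketch illustrates, under simplifying assumptions, the structure described above: one RBF layer fed with the error and its first and second differences, an Actor head producing the PID gains, a Critic head producing a value estimate, and TD-error-driven updates. The toy first-order plant, the exploration noise, the CACLA-style Actor update and all learning rates are assumptions, not the WECS model or the exact rules from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rbf, width = 7, 0.5
centers = np.linspace(-1.0, 1.0, n_rbf).reshape(-1, 1) * np.ones((1, 3))  # RBF centres
w_actor, w_critic = np.zeros((n_rbf, 3)), np.zeros(n_rbf)   # Actor -> (Kp, Ki, Kd), Critic -> value
alpha_a, alpha_c, gamma = 0.1, 0.2, 0.95

def rbf(state):
    return np.exp(-np.sum((centers - state) ** 2, axis=1) / (2 * width ** 2))

y = u = e_prev = e_prev2 = 0.0
for step in range(500):
    e = 1.0 - y                                            # unit set-point (assumption)
    state = np.array([e, e - e_prev, e - 2 * e_prev + e_prev2])
    h = rbf(state)

    gains = w_actor.T @ h                                  # Actor output: (Kp, Ki, Kd)
    explored = gains + 0.05 * rng.standard_normal(3)       # exploration noise
    kp, ki, kd = np.maximum(explored, 0.0)
    u += kp * state[1] + ki * e + kd * state[2]            # incremental PID law
    y = 0.9 * y + 0.1 * u                                  # toy first-order plant (assumption)
    reward = -abs(1.0 - y)

    h_next = rbf(np.array([1.0 - y, 0.0, 0.0]))            # rough next-state features
    delta = reward + gamma * (w_critic @ h_next) - (w_critic @ h)   # TD error
    w_critic += alpha_c * delta * h                        # Critic update
    if delta > 0:                                          # reinforce useful exploration
        w_actor += alpha_a * np.outer(h, explored - gains)
    e_prev2, e_prev = e_prev, e

print("final gains (Kp, Ki, Kd):", np.maximum(w_actor.T @ rbf(np.zeros(3)), 0.0))
```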
    31
    3309
    The Causation and Solution of Ringing Effect in DCT-based Video Coding
    Abstract:
    Ringing is one of the most annoying visual artifacts in digital video and a significant factor in subjective quality deterioration. However, there is a widely accepted misunderstanding of its cause. In this paper, we propose a reasonable interpretation of the cause of the ringing effect. Based on this interpretation, we further suggest two methods to reduce ringing in DCT-based video coding. The methods adaptively adjust quantizers according to video features. Our experiments show that the methods can efficiently improve subjective quality at acceptable additional computing cost.
    30
    3412
    Unsupervised Image Segmentation Based on Fuzzy Connectedness with Scale Space Theory
    Abstract:
    In this paper, we propose an approach to unsupervised segmentation with fuzzy connectedness. Valid seeds are first specified by an unsupervised method based on scale space theory. A region is then extracted for each seed with a relative object extraction method based on fuzzy connectedness. Afterwards, regions are merged according to the values of an introduced measure between them. Theorems and propositions are provided to show that the measure is reasonable for guiding the merging. Experimental results of our method on a synthetic image, a color image and a large number of MR images are reported.
    29
    3686
    A New Approach for Recoverable Timestamp Ordering Schedule
    Abstract:
    A new approach to the timestamp ordering problem in serializable schedules is presented. Since the number of database users is increasing rapidly, accuracy and the need for high throughput are main topics in the database area. Strict 2PL does not allow all possible serializable schedules and thus does not yield high throughput. The main advantages of the approach are the ability to enforce recoverable execution of transactions and the high performance achievable for concurrent execution in centralized databases. Compared to Strict 2PL, the general structure of the algorithm is simple and deadlock-free, and it allows executing all possible serializable schedules, which results in high throughput. Various examples covering different orders of database operations are discussed.
    28
    4368
    Constructing a Classifier for Face Recognition on the Basis of Conjugation Indexes
    Abstract:
    In this work the possibility of constructing classifiers for face recognition systems based on conjugation criteria is investigated. The relationship between bipartite conjugation, conjugation with a subspace and conjugation with the null-space is shown. A unified decision rule is investigated, which assigns a face to a class by considering the relationship between the conjugation values. The described recognition method can be successfully applied to distributed systems for video control and video surveillance.
    27
    5227
    Target Concept Selection by Property Overlap in Ontology Population
    Abstract:

    An ontology is widely used in many kinds of applications as a knowledge representation tool for domain knowledge. However, even when an ontology schema is well prepared by domain experts, it is tedious and cost-intensive to add instances to the ontology. The most confident and trustworthy way to add instances is to gather them from tables in related Web pages. In automatic population of instances, the primary task is to find, among all concepts in the ontology, the one most appropriate for a given table. This paper proposes a novel method for this problem that defines the similarity between the table and a concept using the overlap of their properties. In a series of experiments, the proposed method achieves 76.98% accuracy. This implies that the proposed method is a plausible way to populate an ontology automatically from Web tables.
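A minimal sketch of the property-overlap idea: each ontology concept is scored by the overlap between its properties and the column headers of a Web table, and the best-scoring concept is selected. The Jaccard-style overlap and the toy ontology below are illustrative assumptions, not the exact similarity measure or data from the paper.

```python
def overlap_similarity(table_headers, concept_properties):
    """Jaccard-style overlap between table column headers and concept properties."""
    t = set(h.lower() for h in table_headers)
    c = set(p.lower() for p in concept_properties)
    return len(t & c) / len(t | c) if t | c else 0.0

def select_target_concept(table_headers, ontology):
    """ontology: mapping from concept name to its list of property names."""
    return max(ontology, key=lambda concept: overlap_similarity(table_headers, ontology[concept]))

ontology = {
    "Person":  ["name", "birth date", "nationality"],
    "Company": ["name", "founded", "headquarters", "revenue"],
}
print(select_target_concept(["Name", "Founded", "Revenue"], ontology))  # -> Company
```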

    26
    5380
    A Graph-Based Approach for Placement of Non-Replicated Databases in a Grid
    Abstract:
    In a wide-area environment such as a Grid, data placement is an important aspect of distributed database systems. In this paper, we address the problem of the initial placement of non-replicated database fragments in a Grid architecture. We propose a graph-based approach that takes resource restrictions into account. The goal is to optimize the use of computing, storage and communication resources. The proposed approach works in two phases: in the first phase, we perform fragment grouping using knowledge about fragment dependencies and, in the second phase, we determine an efficient placement of the fragment groups on the Grid. We also show, via experimental analysis, that our approach gives solutions close to optimal for different databases and Grid configurations.
    25
    6013
    Weight-Based Query Optimization System Using Buffer
    Abstract:
    Fast retrieval of data is a user requirement in any database application. This paper introduces a buffer-based query optimization technique in which queries are assigned weights according to their number of executions in a query bank. These queries and their optimized execution plans are loaded into the buffer at the start of the database application. For every incoming query the system searches for a match in the buffer and, on a hit, executes the stored plan without creating a new one.
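A minimal sketch of the weight-based plan buffer described above: queries are weighted by how often they appear in a query bank, the most frequent ones are pre-loaded with their optimized plans, and a lookup avoids re-planning on a hit. The optimizer callback and the buffer capacity are placeholder assumptions.

```python
from collections import Counter

class PlanBuffer:
    def __init__(self, query_bank, capacity, optimize):
        self.optimize = optimize
        weights = Counter(query_bank)                        # weight = execution count
        top = [q for q, _ in weights.most_common(capacity)]
        self.buffer = {q: optimize(q) for q in top}          # loaded at application start

    def execute(self, query):
        plan = self.buffer.get(query)
        if plan is None:                                      # buffer miss: plan as usual
            plan = self.optimize(query)
        return plan                                           # actual plan execution omitted

# Usage (with a placeholder optimizer standing in for the real query planner):
buf = PlanBuffer(["SELECT a", "SELECT a", "SELECT b"], capacity=1, optimize=lambda q: ("plan", q))
print(buf.execute("SELECT a"))
```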
    24
    6854
    Improving the Effectiveness of Software Testing through Test Case Reduction
    Abstract:
    This paper proposes a new technique for improving the efficiency of software testing, based on reducing the test cases that have to be executed for a given piece of software. The approach exploits the advantage of regression testing, where fewer test cases lessen the time consumed by testing as a whole. The technique also offers a means to generate test cases automatically; compared to a technique in the literature where the tester has no option but to generate test cases manually, the proposed technique provides a better alternative. For test case reduction, the technique uses simple algebraic conditions to assign fixed values to variables (maximum, minimum and constant values). In this way, the variable values are limited to a definite range, resulting in a smaller number of possible test cases to process. The technique can also be used with program loops and arrays.
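A minimal sketch of the reduction idea: instead of exploring a variable's full range, each input is represented only by its minimum, maximum and one constant (typical) value, so the number of generated test cases stays small. The variable names and ranges are assumptions for illustration.

```python
from itertools import product

def reduced_test_cases(variable_ranges):
    """variable_ranges: mapping name -> (min, max, constant)."""
    names = list(variable_ranges)
    value_sets = [variable_ranges[n] for n in names]
    return [dict(zip(names, combo)) for combo in product(*value_sets)]

cases = reduced_test_cases({"x": (0, 100, 50), "y": (-10, 10, 0)})
print(len(cases))   # 9 cases instead of the full cross-product of all possible values
```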
    23
    7298
    Digital Image Watermarking in the Wavelet Transform Domain
    Abstract:
    In this paper, we first characterize the most important and distinguishing features of wavelet-based watermarking schemes. We studied the large number of algorithms proposed in the literature. The copyright protection application scenario is considered and, building on the experience gained, two distinctive watermarking schemes were implemented. A detailed comparison and the obtained results are presented and discussed. We conclude that Joo's [1] technique is more robust to standard noise attacks than Dote's [2] technique.
    22
    7332
    Towards Development of Solution for Business Process-Oriented Data Analysis
    Abstract:
    This paper proposes a modeling methodology for the development of data analysis solutions. The author introduces an approach to address data warehousing issues at the enterprise level. The methodology covers the requirements elicitation and analysis stage as well as the initial design of the data warehouse. The paper reviews an extended business process model that satisfies the needs of data warehouse development. The author considers the use of business process models necessary, as they reflect both enterprise information systems and business functions, which are important for data analysis. The described approach divides development into three steps with models elaborated at different levels of detail, and makes it possible to gather requirements and present them to business users in an easy manner.
    21
    9527
    Non-Overlapping Hierarchical Index Structure for Similarity Search
    Abstract:

    In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method composed of an offline and an online phase; our contribution concerns both. In the offline phase, after gathering the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method for constructing bounding forms of clusters that avoids overlap. For the online phase, our idea considerably improves the performance of similarity search, and we have also developed an adapted search algorithm. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as the clustering algorithm. The principle of PDDP is to divide data recursively into two sub-clusters; the division is done by the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster being divided. The data of each of the two resulting sub-clusters are enclosed by a minimum bounding rectangle (MBR), and the two MBRs are oriented along the principal direction, so non-overlap between the two forms is assured. Experiments use databases containing image descriptors. Results show that the proposed method outperforms sequential scan and the SR-tree in processing k-nearest-neighbor queries.
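A minimal sketch of one PDDP split as described above: the cluster is divided by the hyperplane orthogonal to its principal direction through its centroid, and each sub-cluster is enclosed by an MBR expressed in the principal-axes frame, so the two MBRs cannot overlap along the split direction. Data size and dimensionality are assumptions; the recursion and the NOHIS search algorithm are not reproduced.

```python
import numpy as np

def pddp_split(points):
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # rows = principal directions
    side = centered @ vt[0]                                    # position along first principal axis
    return points[side <= 0], points[side > 0], vt, centroid

def directed_mbr(group, vt, centroid):
    rotated = (group - centroid) @ vt.T                        # coordinates in the principal frame
    return rotated.min(axis=0), rotated.max(axis=0)

descriptors = np.random.rand(200, 16)                          # e.g. image descriptors (assumption)
left, right, vt, c = pddp_split(descriptors)
print(len(left), len(right))
# Along the split axis, the left MBR ends at or before 0 and the right MBR starts after 0.
print(directed_mbr(left, vt, c)[1][0], "<= 0 <", directed_mbr(right, vt, c)[0][0])
```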

    20
    9532
    Synthesis of Wavelet Filters using Wavelet Neural Networks
    Abstract:
    An application of Beta wavelet networks to synthesize high-pass and low-pass wavelet filters is investigated in this work. A Beta wavelet network is constructed using a parametric function called the Beta function in order to solve a nonlinear approximation problem. We combine filter design theory with wavelet network approximation to synthesize perfect-reconstruction filters. The filter order is given by the number of neurons in the hidden layer of the neural network. In this paper we use only the first derivative of the Beta function to illustrate the proposed design procedure and exhibit its performance.
    19
    10438
    A Multi-Level Web-Based Parallel Processing System: A Hierarchical Volunteer Computing Approach
    Abstract:
    Over the past few years, a number of efforts have been made to build parallel processing systems that utilize the idle power of the LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expense of supercomputers and parallel processing hardware. However, most of the solutions provided are not very flexible in their use of available resources and are difficult to install and set up. In this paper, a multi-level web-based parallel processing system (MWPS) is designed. MWPS is based on the idea of volunteer computing; it is very flexible and easy to set up and use. MWPS allows three types of subscribers: simple volunteers (single computers), super volunteers (full networks) and end users. All of these entities are coordinated transparently through a secure web site. Volunteer nodes provide the processing power needed by the system's end users. There is no limit on the number of volunteer nodes, so the system can grow indefinitely. Both volunteers and system users must register and subscribe; once they subscribe, each entity is provided with the appropriate MWPS components, which are very easy to install. Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes, which may in turn delegate part of the load to lower-level inner nodes, and so on. It is the responsibility of the parent super nodes to coordinate the delegation process and deliver the results back to the user. MWPS uses a simple behavior-based scheduler that takes into consideration the current load and previous behavior of processing nodes: nodes that fulfill their contracts within the expected time gain a higher degree of trust, while nodes that fail to satisfy their contracts receive a lower degree of trust. MWPS is based on the .NET framework and provides the minimal level of security expected in distributed processing environments; users and processing nodes are fully authenticated, and communications and messages between nodes are secured. The system has been implemented in C#. MWPS may be used by any group of people or companies to establish a parallel processing or grid environment.
    18
    10778
    An Integrated Framework for the Real-Time Investigation of State Space Exploration
    Abstract:
    The objective of this paper is the introduction of a unified optimization framework for research and education. The OPTILIB framework implements different general-purpose algorithms for combinatorial optimization and for minimum search on standard continuous test functions. The strengths of this library are the straightforward integration of new optimization algorithms and problems, as well as the visualization of the optimization process of different methods exploring the search space, either exclusively or in parallel in real time. The usage of several implemented methods is presented on the basis of two use cases, with a particular focus on algorithm visualization. First, it is demonstrated how different methods can be compared conveniently using OPTILIB on the example of different iterative improvement schemes for the TRAVELING SALESMAN PROBLEM. A second study emphasizes how the framework can be used to find global minima in the continuous domain.
    17
    10785
    Gas Detection via Machine Learning
    Abstract:
    We present an Electronic Nose (ENose) aimed at identifying the presence of one out of two gases, possibly detecting the presence of a mixture of the two. Estimation of the concentrations of the components is also performed for a volatile organic compound (VOC) consisting of methanol and acetone, for the ranges 40-400 and 22-220 ppm (parts per million), respectively. Our system contains 8 sensors, 5 of them gas sensors (of the TGS class from FIGARO USA, INC., whose sensing element is a tin dioxide (SnO2) semiconductor), the remaining being a temperature sensor (LM35 from National Semiconductor Corporation), a humidity sensor (HIH-3610 from Honeywell) and a pressure sensor (XFAM from Fujikura Ltd.). Our integrated hardware-software system uses machine learning principles and least-squares regression to first identify a new gas sample, or a mixture, and then estimate the concentrations. In particular, we adopt a training model using the Support Vector Machine (SVM) approach with a linear kernel to teach the system how to discriminate among different gases, and then apply another training model, based on least-squares regression, to predict the concentrations. The experimental results demonstrate that the proposed multi-classification and regression scheme is effective in identifying the tested VOCs of methanol and acetone with 96.61% correctness. The concentration prediction is obtained with correlation coefficients of 0.979 and 0.964 for the predicted versus real concentrations of methanol and acetone, respectively.
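A minimal sketch of the two-stage scheme described above: a linear-kernel SVM first identifies which gas (or mixture) a sensor reading comes from, then a least-squares regression estimates the concentration. The random arrays stand in for real 8-channel sensor data and are assumptions, as is the use of a single regressor rather than one per component.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

X_train = np.random.rand(120, 8)                 # 8-channel sensor readings (placeholder data)
gas_labels = np.random.randint(0, 3, 120)        # 0 = methanol, 1 = acetone, 2 = mixture
concentrations = np.random.rand(120) * 400       # ppm, hypothetical ground truth

classifier = SVC(kernel="linear").fit(X_train, gas_labels)        # discrimination stage
regressor = LinearRegression().fit(X_train, concentrations)       # least-squares stage

x_new = np.random.rand(1, 8)
print("gas class:", classifier.predict(x_new)[0],
      "estimated concentration (ppm):", regressor.predict(x_new)[0])
```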
    16
    10908
    Language and Retrieval Accuracy
    Abstract:
    One of the major challenges in the Information Retrieval field is handling the massive amount of information available to Internet users. Existing ranking techniques and strategies that govern the retrieval process fall short of expected accuracy, and relevant documents are often buried deep in the list of documents returned by the search engine. In order to improve retrieval accuracy, we examine the effect of language on the retrieval process and then propose a solution for a more biased, user-centric notion of relevance for retrieved data. The results demonstrate that using indices based on variations of the same language enhances the accuracy of search engines for individual users.
    15
    11493
    Web-Based Control and Notification for Home Automation Alarm Systems
    Abstract:
    This paper describes the design and development of a very low-cost and small electronic prototype, especially designed for monitoring and controlling existing home automation alarm systems (intruder, smoke, gas, flood, etc.) via TCP/IP with a typical web browser. It allows home owners to be immediately alerted when an alarm event occurs and to interact with their home automation alarm system (disarming, arming and watching event alerts) with a wireless Wi-Fi PDA or smartphone logged on to a dedicated predefined web page, or with a PC or laptop.
    14
    11975
    A Content Vector Model for Text Classification
    Abstract:
    As a popular rank-reduced vector space approach, Latent Semantic Indexing (LSI) has been used in information retrieval and other applications. In this paper, an LSI-based content vector model for text classification is presented, which constructs multiple augmented category LSI spaces and classifies text by their content. The model integrates the class discriminative information from the training data and is equipped with several pertinent feature selection and text classification algorithms. The proposed classifier has been applied to email classification and its experiments on a benchmark spam testing corpus (PU1) have shown that the approach represents a competitive alternative to other email classifiers based on the well-known SVM and naïve Bayes algorithms.
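A minimal sketch of an LSI-style content vector classifier: a rank-reduced space is built per category from its term-document matrix via SVD, and a new document is assigned to the category whose space reconstructs it best. This is a generic LSI illustration under assumed data shapes; the paper's augmented category spaces and feature selection steps are not reproduced.

```python
import numpy as np

def category_space(term_doc_matrix, k):
    """Rank-reduced basis of a category's term-document matrix (terms x documents)."""
    u, _, _ = np.linalg.svd(term_doc_matrix, full_matrices=False)
    return u[:, :k]

def classify(doc_vector, spaces):
    def residual(u):
        proj = u @ (u.T @ doc_vector)             # projection onto the category space
        return np.linalg.norm(doc_vector - proj)
    return min(spaces, key=lambda name: residual(spaces[name]))

# Placeholder term-document matrices for two email categories (assumed sizes).
spaces = {
    "spam": category_space(np.random.rand(500, 80), k=20),
    "ham":  category_space(np.random.rand(500, 120), k=20),
}
print(classify(np.random.rand(500), spaces))
```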
    13
    11990
    A Probabilistic Reinforcement-Based Approach to Conceptualization
    Abstract:

    Conceptualization strengthens intelligent systems in generalization skill, effective knowledge representation, real-time inference, and managing uncertain and indefinite situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a form of abstraction by which the continuous state space is formed into entities called concepts, which are connected to the action space and thus, in some way, characterize the complex action space. Among computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror-neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on a probabilistic framework is proposed. This approach exploits and extends the mirror neuron's role in conceptualization for a reinforcement learning agent in non-deterministic environments. In the proposed method, instead of building a huge numerical knowledge base, the concepts are learnt gradually from rewards through interaction with the environment. Moreover, the probabilistic formation of the concepts is employed to deal with the uncertain and dynamic nature of real problems, in addition to providing the ability to generalize. These characteristics as a whole distinguish the proposed learning algorithm from both a pure classification algorithm and typical reinforcement learning. Simulation results show the advantages of the proposed framework in terms of convergence speed as well as generalization and asymptotic behavior, thanks to utilizing both successful and failed attempts through the received rewards. Experimental results, on the other hand, show the applicability and effectiveness of the proposed method in continuous and noisy environments for a real robotic task such as maze navigation, as well as the benefits of implementing an incremental learning scenario in artificial agents.

    12
    12114
    Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines
    Abstract:
    This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes is examined. The NER domain chosen for these experiments is biomedical publications, selected due to its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun phrase tagging, thereby allowing applicability across languages; no domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which motivates investigating ways to improve the scalability of multi-class SVM in order to make the solution more practical and usable. Improving the training time of multi-class SVM would make support vector machines a more viable machine learning solution for real-world problems with large datasets. An initial prototype results in a great improvement in training time at the expense of memory requirements.
    11
    12161
    Color Image Segmentation Using Competitive and Cooperative Learning Approach
    Abstract:
    Color image segmentation can be considered as a clustering procedure in feature space. k-means and its adaptive version, the competitive learning approach, are powerful tools for data clustering, but both suffer from several drawbacks, such as the dead-unit problem and the need to pre-specify the number of clusters. In this paper, we explore the use of a competitive and cooperative learning (CCL) approach to perform color image segmentation. In CCL, seed points not only compete with each other, but the winner also dynamically selects several of its nearest competitors to form a cooperative team that adapts to the input together; in this way the method can automatically select the correct number of clusters and avoid the dead-unit problem. Experimental results show that CCL obtains better segmentation results.
    10
    13961
    Joint Use of Factor Analysis (FA) and Data Envelopment Analysis (DEA) for Ranking of Data Envelopment Analysis
    Abstract:
    This article combines two techniques, data envelopment analysis (DEA) and factor analysis (FA), for data reduction in decision making units (DMUs). Data envelopment analysis, a popular linear programming technique, is useful for comparatively rating the operational efficiency of decision making units based on their deterministic (not necessarily stochastic) input-output data. Factor analysis has been proposed as a data reduction and classification technique, and it can be applied within DEA to reduce the input-output data. Numerical results reveal that the new approach shows good consistency in ranking with DEA.
    9
    13993
    Association Rule and Decision Tree based Methods for Fuzzy Rule Base Generation
    Abstract:
    This paper focuses on the data-driven generation of fuzzy IF...THEN rules. The resulting fuzzy rule base can be applied to build a classifier or a model used for prediction, or it can be used to form a decision support system. Among the wide range of possible approaches, decision tree and association rule based algorithms are reviewed, and two new approaches are presented based on an a priori fuzzy clustering based partitioning of the continuous input variables. An application study is also presented, where the developed methods are tested on the well-known Wisconsin Breast Cancer classification problem.
    8
    14061
    Optimization of SAD Algorithm on VLIW DSP
    Abstract:
    The SAD (Sum of Absolute Differences) algorithm is heavily used in motion estimation, which is a computationally highly demanding process in motion picture encoding. To enhance the performance of motion picture encoding on a VLIW processor, an efficient implementation of the SAD algorithm on that processor is essential. The SAD algorithm is programmed as a nested loop with a conditional branch. In VLIW processors, loops are usually optimized by software pipelining, but research on optimal scheduling of software pipelining for nested loops, especially nested loops with conditional branches, is rare. In this paper, we propose an optimal scheduling and implementation of the SAD algorithm with a conditional branch on a VLIW DSP processor. The proposed scheduling first transforms the nested loop with the conditional branch into a single loop with a conditional branch, taking into account full utilization of the ILP capability of the VLIW processor and early escape from the loop, and then applies a modulo scheduling technique developed for single loops. Based on this scheduling strategy, an optimal implementation of the SAD algorithm on the TMS320C67x, a VLIW DSP, is presented. Experiments on a TMS320C6713 DSK show that an H.263 encoder with the proposed SAD implementation performs better than encoders with other SAD implementations, and that the code size of the optimal SAD implementation is small enough to be appropriate for embedded environments.
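A minimal sketch, in Python for readability, of the SAD computation and the two ideas named above: collapsing the nested block loops into a single loop and escaping early via the conditional branch once the partial sum can no longer beat the best candidate. The paper's actual contribution, a modulo-scheduled implementation on a TMS320C67x VLIW DSP, is not shown here.

```python
def sad_early_exit(block_a, block_b, best_so_far, n=16):
    """Sum of absolute differences over an n x n block, with the nested loops
    flattened into one loop and an early escape when the candidate cannot win."""
    total = 0
    for i in range(n * n):                       # nested (row, column) loops collapsed
        total += abs(block_a[i] - block_b[i])
        if total >= best_so_far:                 # conditional branch: early escape
            return total                         # already at least best_so_far
    return total

# Usage with flat pixel lists standing in for two 16x16 blocks:
a = [10] * 256
b = [12] * 256
print(sad_early_exit(a, b, best_so_far=100))     # escapes long before the 256th pixel
```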
    7
    14403
    Despeckling of Synthetic Aperture Radar Images Using Inner Product Spaces in Undecimated Wavelet Domain
    Abstract:
    This paper introduces effective speckle reduction for synthetic aperture radar (SAR) images using inner product spaces in the undecimated wavelet domain. There are two major areas in the projection-onto-span algorithm where improvements can be made. The first is the use of the undecimated wavelet transform instead of the discrete wavelet transform. The second is the use of a smoothing filter, namely a directional smoothing filter, as an additional step. The proposed method does not need any noise estimation or thresholding technique. Moreover, it gives good results on both single-polarimetric and fully polarimetric SAR images.
    6
    14975
    Block Activity in Metric Neural Networks
    Abstract:
    A model of neural networks on the small-world topology, with metric (local and random) connectivity, is investigated. The synaptic weights are random, driving the network towards a chaotic state of neural activity. An ordered macroscopic neuron state is induced by a bias in the network connections. When the connections are mainly local, the network emulates a block-like structure. It is found that the topology and the bias compete to drive the network towards a global or a block activity ordering, depending on the initial conditions.
    5
    15027
    Automatic Text Summarization
    Abstract:
    This work proposes an approach to automatic text summarization. The approach is a trainable summarizer that takes into account several features, including sentence position, positive keywords, negative keywords, sentence centrality, sentence resemblance to the title, inclusion of named entities, inclusion of numerical data, relative sentence length, the bushy path of the sentence and aggregated similarity, to generate summaries. First we investigate the effect of each sentence feature on the summarization task. Then we use the score function over all features to train genetic algorithm (GA) and mathematical regression (MR) models and obtain a suitable combination of feature weights. The performance of the proposed approach is measured at several compression rates on a data corpus composed of 100 English religious articles. The results of the proposed approach are promising.
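A minimal sketch of the trainable scoring idea: each sentence receives a weighted sum of normalized feature values and the top-scoring sentences form the summary. Only a small illustrative subset of the listed features is implemented, and the weights here are fixed by hand; in the paper they are learned with a genetic algorithm or mathematical regression.

```python
def sentence_features(sentence, position, n_sentences, title_words, keywords):
    words = set(sentence.lower().split())
    return {
        "position": 1.0 - position / max(n_sentences - 1, 1),           # earlier is better
        "title_resemblance": len(words & title_words) / max(len(title_words), 1),
        "positive_keywords": len(words & keywords) / max(len(words), 1),
        "relative_length": min(len(words) / 25.0, 1.0),
    }

def summarize(sentences, title, keywords, weights, ratio=0.3):
    title_words = set(title.lower().split())
    scored = []
    for i, s in enumerate(sentences):
        f = sentence_features(s, i, len(sentences), title_words, keywords)
        scored.append((sum(weights[k] * f[k] for k in weights), s))
    k = max(1, int(ratio * len(sentences)))
    return [s for _, s in sorted(scored, reverse=True)[:k]]

weights = {"position": 0.3, "title_resemblance": 0.3, "positive_keywords": 0.2, "relative_length": 0.2}
```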
    4
    15032
    Data Mining in Oral Medicine Using Decision Trees
    Abstract:
    Data mining has been used very frequently to extract hidden information from large databases. This paper suggests the use of decision trees for continuously extracting the clinical reasoning, in the form of medical experts' actions, that is inherent in a large number of EMRs (Electronic Medical Records). In this way the extracted data could be used to teach students of oral medicine a number of orderly processes for dealing with patients who present with different problems within the practice context over time.
    3
    15035
    A Novel Fuzzy-Neural Based Medical Diagnosis System
    Abstract:
    In this paper, the application of artificial neural networks to typical disease diagnosis is investigated. The actual procedure of medical diagnosis usually employed by physicians was analyzed and converted into a machine-implementable format. After selecting some symptoms of eight different diseases, a data set containing the information of a few hundred cases was compiled and applied to an MLP neural network. The results of the experiments, as well as the advantages of using a fuzzy approach, are discussed. The outcomes suggest the role of effective symptom selection and the advantages of data fuzzification in a neural network-based automatic medical diagnosis system.
    2
    15584
    An Integrated Software Architecture for Bandwidth Adaptive Video Streaming
    Abstract:
    Video streaming over lossy IP networks is a very important issue due to the heterogeneous structure of networks: the infrastructure of the Internet exhibits variable bandwidths, delays, congestion and time-varying packet losses. Because of these variable attributes, video streaming applications should not only have good end-to-end transport performance but also a robust rate control and, furthermore, a multipath rate allocation mechanism. To provide video streaming service quality, other components such as bandwidth estimation and an adaptive rate controller should also be taken into consideration. This paper gives an overview of the video streaming concept and bandwidth estimation tools and then introduces specific architectures for bandwidth-adaptive video streaming. A bandwidth estimation algorithm (pathChirp), optimized rate controllers and a multipath rate allocation algorithm are considered as an all-in-one solution to the video streaming problem. This solution is directed and optimized by a decision center designed to obtain the maximum quality at the receiving side.
    1
    15694
    On-line Recognition of Isolated Gestures of Flight Deck Officers (FDO)
    Abstract:
    The paper presents an on-line recognition machine (RM) for continuous/isolated, dynamic and static gestures that arise in Flight Deck Officer (FDO) training. RM is based on a generic pattern recognition framework. Gestures are represented as templates using summary statistics. The proposed recognition algorithm exploits the temporal and spatial characteristics of gestures via dynamic programming and a Markovian process. The algorithm predicts the corresponding index of incremental input data in the templates in an on-line mode. Accumulated consistency in the sequence of predictions provides a similarity measurement (score) between the input data and the templates. The algorithm provides an intuitive mechanism for automatic detection of the start/end frames of continuous gestures. In the present paper, we consider isolated gestures. The performance of RM is evaluated using four datasets: artificial (W TTest), hand motion (Yang) and FDO (tracker and vision-based). RM achieves results that are comparable with, and in agreement with, other on-line and off-line algorithms such as the hidden Markov model (HMM) and dynamic time warping (DTW). The proposed algorithm has the additional advantage of providing timely feedback for training purposes.