Excellence in Research and Innovation for Humanity

International Science Index

Commenced in January 1999 | Frequency: Monthly | Edition: International | Abstract Count: 46035

Computer and Information Engineering

2356
80442
A Review on Medical Image Registration Techniques
Abstract:
This paper discusses current trends in medical image registration techniques and addresses the need to provide a solid theoretical foundation for research endeavors. A methodological analysis and synthesis of quality literature was carried out, which provides a platform for developing a good foundation for research in this field and is crucial in understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumors and lesions in anatomical organs, thereby enabling fast and accurate curative treatment of patients. In light of these considerations, the aim of this paper is to enhance the scientific community's understanding of the current status of research on optimisation in image registration techniques. The gaps identified in current techniques can be closed by the use of artificial neural networks, which form learning systems designed to minimize an error function. The paper also suggests several areas of future research in image registration.
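As a toy illustration of the error-minimization framing above, the sketch below registers a synthetically translated image by minimizing a mean-squared-error function over translation parameters; it is a minimal stand-in for the neural-network-based optimisation the abstract describes, and all data and parameters are hypothetical.

```python
import numpy as np
from scipy import ndimage, optimize

rng = np.random.default_rng(0)
fixed = ndimage.gaussian_filter(rng.random((64, 64)), 3)  # reference image
moving = ndimage.shift(fixed, (2.5, -1.0))                # misaligned copy

def error(params):
    """Mean-squared error after applying a candidate translation."""
    dy, dx = params
    return np.mean((ndimage.shift(moving, (dy, dx)) - fixed) ** 2)

# a learning system would minimize this same error function; here a
# derivative-free optimizer recovers the translation directly
res = optimize.minimize(error, x0=[0.0, 0.0], method="Powell")
print("recovered translation:", res.x)   # approximately (-2.5, 1.0)
```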
2355
80315
Measuring Diversity of Association Rules Extracted from a Data Warehouse
Abstract:
Knowledge discovery is a series of steps to extract useful information from data sets containing large volumes of data. Nowadays, data sources contain a large number of dimensions, and data size is increasing as a result. Data is now archived in data warehouses in aggregate form. Data mining techniques to extract knowledge from datasets are now being applied in data warehouses. Interesting patterns are extracted in the form of association rules from data warehouses, and interestingness measures are used to evaluate these patterns. The techniques available for the evaluation of association rules were originally developed for transactional databases. In this research work, we enhance our previous methodology, which extracts association rules in a data warehouse environment at multiple levels of abstraction. In this work, we evaluate these association rules using advanced measures of interestingness, particularly targeting the diversity measures. We have applied nine measures of interestingness to the association rules generated in the data warehouse and present our results for diversity. Results further suggest that there is a strong correlation between some of these measures at the cluster level. A future study can be conducted to deduce a linear model for the prediction of diversity measures at lower levels in the hierarchy.
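The nine measures applied in the paper are not enumerated in the abstract; as a hedged illustration, the sketch below computes three classic diversity measures (variance, Simpson's index, Shannon entropy) over the support distribution of a small, invented rule set.

```python
import numpy as np

# Hypothetical rule set: (antecedent, consequent, support, confidence)
rules = [
    ("city=London", "sales=high", 0.12, 0.80),
    ("city=Paris",  "sales=high", 0.05, 0.65),
    ("quarter=Q4",  "sales=high", 0.20, 0.90),
]

p = np.array([r[2] for r in rules])
p = p / p.sum()                                 # supports as a distribution
m = len(p)

variance = ((p - 1 / m) ** 2).sum() / (m - 1)   # variance-based diversity
simpson = (p ** 2).sum()                        # Simpson's index
shannon = -(p * np.log2(p)).sum()               # Shannon entropy
print(variance, simpson, shannon)
```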
2354
80256
Using Multi-Level Analysis to Identify Future Trends in Small Device Digital Communication Examinations
Abstract:
The growth of technological advances in the digital communications industry has dictated the way forensic examination laboratories receive, analyze, and report on digital evidence. This study looks at the trends in a medium-sized digital forensics lab that examines small communications devices (i.e., cellular telephones, tablets, thumb drives, etc.) over the past five years. As law enforcement and homeland security organizations' budgets shrink, many agencies are being asked to perform more examinations with fewer resources. Using multi-level statistical analysis of five years of examination data, this research shows the increasing technological demand trend. The research then extrapolates the current data into the model created and finds that a continued exponential growth curve of these demands is well within the parameters defined earlier in the research.
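The multi-level model itself is not specified in the abstract; as a minimal sketch of the extrapolation step, the following fits an exponential growth curve to hypothetical yearly examination counts and projects the next year's demand.

```python
import numpy as np
from scipy.optimize import curve_fit

years = np.array([0, 1, 2, 3, 4])               # five years of data
exams = np.array([210, 260, 335, 420, 540])     # hypothetical exam counts

def growth(t, a, b):
    """Exponential demand model: a * exp(b * t)."""
    return a * np.exp(b * t)

(a, b), _ = curve_fit(growth, years, exams, p0=(200, 0.2))
print(f"projected year-5 demand: {growth(5, a, b):.0f} examinations")
```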
2353
80180
A Theoretical Model for Pattern Extraction in Large Datasets
Abstract:
Pattern extraction has been performed in the past to extract hidden and interesting patterns from large datasets. Recently, advancements have been made in these techniques by providing the ability of multi-level mining, effective dimension reduction, and advanced evaluation and visualization support. This paper focuses on reviewing the current techniques in the literature on the basis of these parameters. The literature review suggests that most of the techniques which provide multi-level mining and dimension reduction do not handle mixed-type data during the process. Patterns are not extracted using advanced algorithms for large datasets. Moreover, the evaluation of patterns is not done using advanced measures suited to high-dimensional data. Techniques which provide visualization support are unable to handle a large number of rules in a small space. We present a theoretical model to handle these issues. The implementation of the model is beyond the scope of this paper.
2352
79985
Speech Enhancement Using Wavelet Coefficients Masking with Local Binary Patterns
Abstract:
In this paper, we present a wavelet coefficients masking based on Local Binary Patterns (WLBP) approach to enhance the temporal spectra of the wavelet coefficients for speech enhancement. This technique exploits the wavelet denoising scheme, which splits the degraded speech into pyramidal subband components and extracts frequency information without losing temporal information. Speech enhancement in each high-frequency subband is performed by binary labels through the local binary pattern masking that encodes the ratio between the original value of each coefficient and the values of the neighbour coefficients. This approach enhances the high-frequency spectra of the wavelet transform instead of eliminating them through a threshold. A comparative analysis is carried out with conventional speech enhancement algorithms, demonstrating that the proposed technique achieves significant improvements in terms of PESQ, an international recommendation of objective measure for estimating subjective speech quality. Informal listening tests also show that the proposed method in an acoustic context improves the quality of speech, avoiding the annoying musical noise present in other speech enhancement techniques. Experimental results obtained with a DNN based speech recognizer in noisy environments corroborate the superiority of the proposed scheme in the robust speech recognition scenario.
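The exact WLBP encoding is not given in the abstract; the sketch below is a loose interpretation, assuming a wavelet decomposition whose high-frequency coefficients are attenuated (rather than zeroed, matching the no-threshold idea above) according to a binary label comparing each coefficient with its neighbours. The signal, wavelet, and attenuation factor are all illustrative.

```python
import numpy as np
import pywt

fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(fs)

coeffs = pywt.wavedec(noisy, "db8", level=4)   # pyramidal subbands

def lbp_mask(c, radius=2):
    """Binary label per coefficient: keep it when its magnitude
    dominates the mean magnitude of its neighbours."""
    mask = np.ones_like(c)
    for i in range(radius, len(c) - radius):
        neigh = np.concatenate([c[i - radius:i], c[i + 1:i + 1 + radius]])
        mask[i] = 1.0 if abs(c[i]) >= np.mean(np.abs(neigh)) else 0.3
    return mask

# attenuate non-dominant high-frequency coefficients instead of zeroing
enhanced = [coeffs[0]] + [c * lbp_mask(c) for c in coeffs[1:]]
recon = pywt.waverec(enhanced, "db8")
```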
2351
79973
Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes
Abstract:
In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the resulting diameter and flexibility of the medical guidewire end product, while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.
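KRLS has no standard scikit-learn implementation, so the sketch below benchmarks only the SVR half of the comparison, on hypothetical process settings mapped to guidewire diameter; the features, target relation, and hyperparameters are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
# hypothetical process settings (line speed, laser power) -> diameter (mm)
X = rng.uniform([0.5, 100.0], [2.0, 400.0], size=(500, 2))
y = 0.3 * X[:, 0] + 0.0008 * X[:, 1] + rng.normal(0, 0.01, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```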
2350
79824
An Improved C-Means Model for Magnetic Resonance Imaging Segmentation
Abstract:
Medical images are important for identifying different diseases; for example, MRI (magnetic resonance imaging) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. Image segmentation, in medical image analysis, is usually the first step to find regions with similar color, intensity, or texture so that the diagnosis can be further carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model is based on information entropy to evaluate the segmentation results while achieving global optimization. Several contributions are significant. Firstly, a Genetic Algorithm (GA) is used to achieve global optimization in this model, which the fuzzy C-means clustering algorithm (FCMA) is not capable of doing on its own. Secondly, the information entropy after segmentation is used to measure the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.
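The exact GA fitness definition is not given in the abstract; in the hedged sketch below, the entropy of the segment-label distribution stands in for it, evaluated on a hypothetical intensity-only MRI slice. A GA would evolve populations of candidate cluster centres scored this way; here two candidates are simply compared.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical MRI slice: three tissue intensity classes plus noise
img = np.concatenate([rng.normal(m, 5, 1000) for m in (40, 110, 180)])

def entropy_fitness(centres):
    """Assign pixels to the nearest centre; score by the entropy of
    the resulting class proportions (stand-in GA fitness)."""
    labels = np.argmin(np.abs(img[:, None] - centres[None, :]), axis=1)
    h = 0.0
    for k in range(len(centres)):
        p = np.mean(labels == k)
        if p > 0:
            h -= p * np.log2(p)
    return h

print(entropy_fitness(np.array([40.0, 110.0, 180.0])),
      entropy_fitness(np.array([60.0, 90.0, 200.0])))
```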
2349
79819
Cyber Warfare and Cyber Terrorism: An Analysis of Global Cooperation and Cyber Security Counter Measures
Abstract:
Cyber-attacks have frequently disrupted the critical infrastructures of major global states, and the cyber threat has now become one of the direst security risks for states across the globe. Recently, the WannaCry and Petya ransomware cyber-attacks affected hundreds of thousands of computer servers and individuals' private machines in more than a hundred countries across Europe, the Middle East, Asia, the United States, and Australia. Although states are rapidly becoming aware of the destructive nature of this new security threat and countermeasures are being taken, states' isolated efforts will be inadequate to deal with this grave security challenge; rather, global coordination and cooperation are indispensable in order to develop a credible cyber deterrence policy. Hence, the paper argues that a coordinated global approach is required to deter the posed cyber threat. This paper intends to analyze cyber security countermeasures in four dimensions, i.e., evaluation of prevalent strategies at the bilateral level, initiatives and limitations for cooperation at the global level, obstacles to combating cyber terrorism, and finally, recommendations to deter the threat by applying the tools of deterrence theory. Firstly, it focuses on states' efforts to combat the cyber threat, and in this regard, the US-Australia Cyber Security Dialogue is comprehensively illustrated and investigated. Secondly, global partnerships and the strategic and analytic role of multinational organizations, particularly the United Nations (UN), in dealing with this threat are critically analyzed and flaws are highlighted, for instance, the lesser significance accorded to cyber law within international law as compared to other conflict-prone issues. In addition, there are certain obstacles and limitations at the national, regional, and global levels to implementing the cyber terrorism counter-strategies, which are presented in the third section. Lastly, by underlining the gaps and grey areas in current cyber security countermeasures, it aims to apply the tools of deterrence theory, i.e., defense, attribution, and retaliation, in the cyber realm to contribute towards formulating a credible cyber deterrence strategy at the global level. Thus, this study is significant in understanding and determining the inevitable necessity of counter cyber terrorism strategies.
2348
79788
The Evolution of Israel Defence Forces' Information Operations: A Case Study of Israel Defence Forces' Activities in the Information Domain 2006-2016
Abstract:
This article examines the evolution of the Israel Defence Forces' information operation capabilities over a ten-year span, from the media disaster of the 2006 war with Hezbollah to more recent operations such as Pillar of Defence and Protective Edge. This case study shows a change in the Israel Defence Forces' media behavior, from first steps on the virtual battlefield to becoming a dominant actor both locally and globally. In the 2006 war with Hezbollah in Lebanon, Israel inflicted enormous damage on the Lebanese infrastructure, leaving more than 1200 people dead and 4400 people injured. The casualties of Israel's main adversary, Hezbollah, were estimated at 250 to 700 fighters. Damage to the Lebanese infrastructure was estimated to climb above USD 2.5 billion, with almost 2000 houses and buildings damaged or destroyed. Even this amount of destruction did not force Hezbollah to yield, and while both sides claimed victory in the war, Israel paid the heavier price in political backlash and loss of reputation, mainly due to failures in the media and in how the war was portrayed and perceived in Israel and abroad. Much of this can be credited to how efficiently Hezbollah used the media and how Israel failed to do so. The next conflict Israel was engaged in was managed totally differently on Israel's side: it had learnt its lessons and built up new ways to counter adversary propaganda and media operations. In Operation Cast Lead at the turn of 2008-2009, Israel's adversary Hamas, Gaza's dominant faction, was not able to utilize the media the way Hezbollah had. By creating a virtual and physical barrier around the Gaza Strip, Israel almost totally denied its adversary access to the worldwide media, and by restricting the movement of journalists in the area, Israel could let its voice be heard above all. Operation Cast Lead began with a deception operation, which caught Hamas totally off guard. The 21-day campaign left the Gaza Strip devastated but did not cause as many protests in Israel during the operation as the 2006 war did, mainly due to almost total Israeli dominance in the information dimension. Most important from the Israeli perspective was the fact that Operation Cast Lead was assessed to be a great success in all terms, and the operation enjoyed strong domestic support along with support from many western nations which had condemned Israeli actions in the 2006 war. Later conflicts have shown the same tendency towards nearly total dominance in the information domain, which has had an impact on target audiences across the world. Thus, it is clear that well-planned and well-conducted information operations are able to shape public opinion and affect decision makers. The focus of this paper is purely operational-strategic from a military perspective; it does not examine accusations of war crimes or their truthfulness, although such accusations can themselves be described as examples of using the media as a weapon.
2347
79681
High Thermal Selective Detection of NOₓ Using High Electron Mobility Transistor Based on Gallium Nitride
Abstract:
Real-time knowledge of the NO and NO₂ concentrations at high temperature would allow manufacturers of automobiles to meet the upcoming stringent EURO7 anti-pollution measures for diesel engines. Knowledge of the concentration of each of these species will also enable engines to run leaner (i.e., more fuel-efficient) while still meeting the anti-pollution requirements. Our proposed technology is promising in the field of automotive sensors. It consists of nanostructured semiconductors based on gallium nitride and zirconium dioxide. The development of new technologies for the selective detection of the NO and NO₂ gas species would be a critical enabler of superior depollution. The current response was well correlated to the gas concentration in the ranges of 0-2000 ppm NO, 0-2500 ppm NO₂, and 0-300 ppm NH₃ at a temperature of 600 °C.
2346
79674
High-Resolution Facial Electromyography in Freely Behaving Humans
Abstract:
Human facial expressions carry important psychological and neurological information. Facial expressions involve the co-activation of diverse muscles. They depend strongly on personal affective interpretation and on social context and vary between spontaneous and voluntary activations. Smiling, as a special case, is among the most complex facial emotional expressions, involving no fewer than 7 different unilateral muscles. Despite their ubiquitous nature, smiles remain an elusive and debated topic. Smiles are associated with happiness and greeting on one hand and anger or disgust-masking on the other. Accordingly, while high-resolution recording of muscle activation patterns in a non-interfering setting offers exciting opportunities, it remains an unmet challenge, as contemporary surface facial electromyography (EMG) methodologies are cumbersome, restricted to laboratory settings, and limited in time and resolution. Here we present a wearable and non-invasive method for objective mapping of facial muscle activation and demonstrate its application in a natural setting. The technology is based on a recently developed dry and soft electrode array, specially designed for the surface facial EMG technique. Eighteen healthy volunteers (31.58 ± 3.41 years, 13 females) participated in the study. Surface EMG arrays were adhered to participants' left and right cheeks. Participants were instructed to imitate three facial expressions: closing the eyes, wrinkling the nose, and smiling voluntarily, and then to watch a funny video while their EMG signals were recorded. We focused on muscles associated with 'enjoyment', 'social' and 'masked' smiles, three categories with distinct social meanings. We developed a customized independent component analysis algorithm to construct the desired facial musculature mapping. First, identification of the Orbicularis oculi and the Levator labii superioris muscles was demonstrated from voluntary expressions. Second, recordings of voluntary and spontaneous smiles were used to locate the Zygomaticus major muscle activated in Duchenne and non-Duchenne smiles. Finally, recording with a wireless device in an unmodified natural work setting revealed expressions of neutral, positive and negative emotions in face-to-face interaction. The algorithm outlined here identifies the activation sources in a subject-specific manner, insensitive to electrode placement and anatomical diversity. Our high-resolution and cross-talk-free mapping performance, along with excellent user convenience, opens new opportunities for affective processing and objective evaluation of facial expressivity, objective psychological and neurological assessment, as well as gaming, virtual reality, bio-feedback, and brain-machine interface applications.
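The paper's ICA algorithm is customized and not specified here; as a generic illustration of separating muscle sources from a multichannel surface EMG array, the following sketch unmixes two simulated burst sources recorded by a hypothetical 16-electrode array using scikit-learn's FastICA.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
# two hypothetical muscle sources: bursts at different times
s1 = np.sin(2 * np.pi * 90 * t) * (t < 0.4)
s2 = np.sin(2 * np.pi * 60 * t) * (t > 0.6)
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

A = rng.random((16, 2))           # mixing into a 16-electrode array
X = S @ A.T                       # simulated multichannel recording

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)    # recovered muscle activations
print(sources.shape)              # (2000, 2)
```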
2345
79667
Network Functions Virtualization-Based Virtual Routing Function Deployment under Network Delay Constraints
Abstract:
An NFV-based network implements a variety of network functions in software on general-purpose servers, and this allows the network operator to select any capabilities and locations of network functions without physical constraints. In this paper, we evaluate the influence of the maximum tolerable network delay on the virtual routing function deployment guidelines which the authors proposed previously. Our evaluation results reveal the following: (1) the more severe the maximum tolerable network delay condition becomes, the greater the number of areas where the route selection function is installed and the higher the total network cost; (2) the higher the routing function cost relative to the circuit bandwidth cost, the larger the increase ratio of the total network cost under a given maximum tolerable network delay condition.
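The cost model behind these guidelines is not given in the abstract; the toy sketch below, with entirely invented cost constants, only illustrates the qualitative trade-off in finding (1): installing the routing function in more areas shortens detours (delay) but multiplies the per-site function cost.

```python
# Toy trade-off between routing-function sites and detour delay.
N = 16                        # areas in the network
per_site_cost = 100.0         # routing-function cost per installed site
bandwidth_cost = 2.0          # circuit cost per unit of detour delay

for k in (1, 2, 4, 8, 16):
    detour_delay = N / k      # crude model: detour shrinks with more sites
    total = k * per_site_cost + bandwidth_cost * detour_delay * N
    print(f"sites={k:2d}  avg delay={detour_delay:4.1f}  cost={total:7.1f}")
```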
2344
79634
Framework for Automatic Selection of Kernels Based on Convolutional Neural Networks and the CkMeans Clustering Algorithm
Abstract:
Convolutional Neural Networks (CNN) can learn deep feature representations for hyperspectral imagery (HSI) interpretation and attain excellent classification accuracy if many training samples are available. Due to this superiority in feature representation, several works have focused on it; among them, a reliable CNN-based classification approach that used filters generated from a clustering framework, such as the kMeans algorithm, yielded good results. However, the number of kernels had to be manually assigned. To solve this problem, an HSI classification framework based on CNN, where the convolutional filters are automatically learned from the data by grouping without knowing the cluster number, was recently proposed. This framework, based on the two algorithms CNN and kMeans, showed highly accurate results. In the same context, we propose an architecture based on the deep Convolutional Neural Network principle, where kernels are automatically learned using a CkMeans network to generate filters without knowing the number of clusters, for hyperspectral classification. With adaptive kernels, the proposed framework, Automatic Kernel Selection by the CkMeans algorithm (AKSCCk), achieves a better classification accuracy compared to the previous frameworks. The experimental results show the effectiveness and feasibility of the AKSCCk approach.
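CkMeans (which discovers the cluster count automatically) has no off-the-shelf implementation; the hedged sketch below uses plain kMeans with a fixed k on one hypothetical image band, just to show how clustered patches become convolution filters.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
band = rng.random((64, 64))                      # one hyperspectral band
patches = extract_patches_2d(band, (5, 5), max_patches=2000,
                             random_state=0).reshape(-1, 25)
patches -= patches.mean(axis=1, keepdims=True)   # zero-mean patches

# fixed k here; CkMeans would infer the number of clusters itself
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(patches)
kernels = km.cluster_centers_.reshape(-1, 5, 5)  # cluster centroids as filters

feature_maps = [convolve2d(band, k, mode="valid") for k in kernels]
print(len(feature_maps), feature_maps[0].shape)  # 8 maps of (60, 60)
```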
2343
79560
An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits than other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all the lowest points of an energy landscape, where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) whether the problem is possible to solve using AQO, 2) whether it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation, a D-Wave 2X device was used, as well as QxBranch's QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
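As a worked toy example of the 1-hot encoding mentioned above, the sketch below encodes one integer variable as four binary variables with a penalty enforcing the one-hot constraint, and brute-forces the resulting energy; the objective and penalty weight are invented for illustration (a real instance would be embedded on D-Wave hardware rather than enumerated).

```python
import itertools
import numpy as np

# One-hot encode an integer x in {0, 1, 2, 3} with binaries q0..q3 and
# penalise violations of sum(q) == 1, as in QUBO formulations for AQO.
values = np.array([0, 1, 2, 3])
P = 10.0                                  # penalty weight (illustrative)

def energy(q):
    x = values @ q                        # decoded integer value
    objective = (x - 2) ** 2              # toy objective: minimise (x - 2)^2
    one_hot_penalty = P * (q.sum() - 1) ** 2
    return objective + one_hot_penalty

best = min((np.array(q) for q in itertools.product([0, 1], repeat=4)),
           key=energy)
print("best assignment:", best, "-> x =", values @ best)   # x = 2
```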
2342
79408
Chaos Cryptography in Cloud Architectures with Lower Latency
Abstract:
With the rapid evolution of internet applications, cloud computing has become one of today's hottest research areas due to its ability to reduce the costs associated with computing. The cloud therefore increases the flexibility and scalability of computing services on the internet. Cloud computing is Internet-based computing in which shared resources and information are dynamically delivered to consumers. As cloud computing shares resources via the open network, cloud outsourcing is vulnerable to attack. Therefore, this paper explores the data security of cloud computing by implementing chaotic cryptography. The proposed scenario develops a problem transformation technique that enables customers to secretly transform their information. In this work, chaotic cryptographic algorithms are applied to enhance the security of cloud computing access. The proposed scenario is a secure, easy, and straightforward process. The chaotic encryption and digital signature systems ensure the security of the proposed scenario, though the choice of key size becomes crucial to prevent a brute-force attack.
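The specific chaotic algorithms are not named in the abstract; the sketch below shows the common logistic-map construction as a toy keystream generator, where the initial condition and control parameter play the role of the secret key. It is a didactic example only, not a vetted cipher.

```python
import numpy as np

def chaotic_keystream(n, x0=0.6789, r=3.99):
    """Logistic-map keystream; (x0, r) act as the secret key."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)               # chaotic iteration
        out[i] = int(x * 256) % 256
    return out

plaintext = np.frombuffer(b"cloud data", dtype=np.uint8)
ks = chaotic_keystream(len(plaintext))
ciphertext = plaintext ^ ks               # XOR stream cipher
recovered = ciphertext ^ ks               # same key decrypts
print(recovered.tobytes())                # b'cloud data'
```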
2341
79396
Human Action Recognition Using Wavelets of Derived Beta Distributions
Abstract:
In the framework of enhancing human-machine interaction systems, this paper focuses on human behavior analysis and action recognition. Human behavior is characterized by a duality of actions and reactions (movements, psychological modification, verbal and emotional expression). It is worth noting that much information is hidden behind gestures, sudden motions, point trajectories, and speeds; many research works have treated this as an information retrieval problem. In our work, we focus on motion extraction, tracking, and action recognition using wavelet network approaches. Our contribution combines human subtraction by a Gaussian Mixture Model (GMM) with body movement analysis through trajectory models of motion constructed from a Kalman filter. These models remove noise by extracting the main motion features and constitute a stable base for identifying the evolution of human activity. Each modality is used to recognize a human action using the wavelets of derived beta distributions approach. The proposed approach has been validated successfully on subsets of the KTH and UCF Sports databases.
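A minimal sketch of the extraction-and-tracking front end described above, assuming OpenCV's MOG2 background subtractor as the GMM and a constant-velocity Kalman filter over the foreground centroid; the video filename is hypothetical, and the wavelet-network recognizer itself is not shown.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2()   # GMM background model
kalman = cv2.KalmanFilter(4, 2)                     # state: x, y, vx, vy
kalman.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
kalman.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kalman.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)

cap = cv2.VideoCapture("action.avi")                # hypothetical clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                  # foreground silhouette
    m = cv2.moments(mask)
    if m["m00"] > 0:                                # centroid of the person
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        kalman.correct(np.array([[cx], [cy]], np.float32))
    prediction = kalman.predict()                   # smoothed trajectory point
```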
2340
79268
Hybrid Reliability-Similarity Based Approach for Supervised Machine Learning
Abstract:
Data mining is a field which has seen big advances in recent years because of the spread of the internet, which generates a tremendous volume of data every day, and also because of the immense advances in the technologies which facilitate the analysis of these data. In particular, classification techniques are a subdomain of data mining which determines to which group each data instance belongs within a given dataset. They are used to classify data into different classes according to desired criteria. Generally, a classification technique is either statistical or machine-learning based. Each type of technique has its own limits. Nowadays, data are becoming increasingly heterogeneous; consequently, current classification techniques encounter many difficulties. This paper defines new measure functions to quantify the resemblance between instances and then combines them in a novel approach which differs from existing algorithms in its confidence computations. Results of the proposed approach exceeded the most common classification techniques, with an F-measure exceeding 97% on the Iris dataset.
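The paper's measure functions are not given in the abstract; the sketch below substitutes a simple inverse-distance resemblance measure in a similarity-weighted nearest-neighbour vote on the Iris dataset, attaching a per-prediction confidence score in the spirit of the described approach.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def classify(x, k=5):
    """Similarity-weighted vote with a per-prediction confidence."""
    d = np.linalg.norm(X_tr - x, axis=1)
    sim = 1.0 / (1.0 + d)                     # simple resemblance measure
    idx = np.argsort(d)[:k]
    votes = np.bincount(y_tr[idx], weights=sim[idx], minlength=3)
    label = votes.argmax()
    confidence = votes[label] / votes.sum()   # reliability of the decision
    return label, confidence

preds = np.array([classify(x)[0] for x in X_te])
print("accuracy:", (preds == y_te).mean())
```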
2339
79264
Implementation of a Serializer to Represent PHP Objects in the Extensible Markup Language
Abstract:
Interoperability in distributed systems is an important feature that refers to the communication of two applications written in different programming languages. This paper presents a serializer and a de-serializer of PHP objects to and from XML, implemented as an independent library written in the PHP programming language. The XML generated by this serializer is independent of the programming language and can be used by other existing Web Objects in XML (WOX) serializers and de-serializers, which allows interoperability with other object-oriented programming languages.
2338
79245
Investigating the Characteristics of the Response Waiting Time in a Chat Room
Abstract:
Chat rooms are of enormous interest to social network researchers, as they are among the most interactive internet areas. Chat room users' behaviour is important because it has an effect on the structure of a social network. To understand the dynamics of user behaviour, researchers analyse the user's Response Waiting Time (RWT) based on traditional approaches of aggregating the network contacts. However, real social networks are dynamic, and properties such as RWT change over time. Traditional approaches thus tend to neglect the dynamism of pair conversations, and the result may misrepresent the real nature of users' RWT during online chat. We studied the dynamics of pairs of people in online conversation through RWT. Using three online chat logs (Walford, IRC, and T-REX), we analyse, compare, and present the true nature of the RWT of pairs of people in conversation. Our research shows that the distribution of the RWT of pairs in conversation exhibits multi-scaling behaviour, which significantly affects current views on the nature of RWT; this is a shift from a simple power-law distribution to a more complex pattern. Secondly, we investigated the impact of communication count (the number of messages exchanged between pairs of people) on RWT; the results show that pairs who exchange a high number of messages within an online chat room tend to have shorter RWTs. Lastly, we studied the RWT dynamics of one user in relation to other participants in pair conversations. Our results show that an individual can have several waiting times depending on interference factors. This suggests that communication dynamics depend on the group or pair rather than simply on the individual.
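As a concrete illustration of the RWT quantity itself, the sketch below extracts per-pair response waiting times from a hypothetical chat log: a message counts as a response when the sender had previously been addressed by the receiver.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical chat log: (timestamp, sender, receiver)
log = [
    ("2017-01-01 10:00:05", "alice", "bob"),
    ("2017-01-01 10:00:47", "bob", "alice"),
    ("2017-01-01 10:02:30", "alice", "bob"),
    ("2017-01-01 10:09:10", "bob", "alice"),
]

last_msg = {}               # (sender, receiver) -> time of their last message
rwt = defaultdict(list)     # pair -> list of response waiting times (seconds)

for ts, sender, receiver in log:
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    if (receiver, sender) in last_msg:          # this message is a reply
        pair = frozenset((sender, receiver))
        rwt[pair].append((t - last_msg[(receiver, sender)]).total_seconds())
    last_msg[(sender, receiver)] = t

print(dict(rwt))   # {frozenset({'alice', 'bob'}): [42.0, 103.0, 400.0]}
```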
2337
79243
Performance Evaluation of Particle Coding in Particle Swarm Optimization with Self-Adaptive Parameters for the Flexible Job Shop Scheduling Problem
Abstract:
The flexible job shop scheduling problem (FJSP) is very important in the real applications of many kinds of industries. The metaheuristic particle swarm optimization (PSO) is well suited to solving the FJSP, and a suitable particle representation importantly impacts the optimization result and the performance of this algorithm. The chosen representation has a direct impact on the size and content of the solution space. The parameters of the PSO algorithm affect how the space is traversed, with the objective of balancing exploitation (local search) and exploration (global search). Moreover, the way in which the solution space is traversed must be closely linked to the nature of the solution space. For these reasons, we choose to work with PSO. In this paper, we propose a PSO variant with different particle representations (encodings). We first propose two types of particle encoding (the Job-Machine encoding scheme, JMS, and the Only-Machine encoding scheme, OMS) to solve scheduling problems known in the job shop manufacturing environment. We evaluate and compare the performance of the different particle representation procedures in PSO (PSO_JMS and PSO_OMS). These procedures have been tested on thirteen benchmark problems, where the objective function is to minimize the makespan and total workload, and the run times of the different PSO variants are compared. Based on the experimental results, it is discovered that PSO_OMS gives the best performance in solving all benchmark problems. The contribution of this paper is that it demonstrates that different particle representations can have significant effects on the performance of PSO for the FJSP.
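The exact JMS/OMS encodings are not detailed in the abstract; the sketch below is a heavily simplified, OMS-flavoured illustration in which a continuous particle is decoded into a machine choice per operation and one PSO velocity/position update is applied. It ignores job precedence, so 'makespan' here is just the maximum machine load.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical instance: proc[op, machine]; 6 operations, 3 machines
proc = rng.integers(2, 9, size=(6, 3)).astype(float)

def makespan(particle):
    """OMS-style decoding: the particle picks one machine per operation."""
    machine = np.clip(particle.round().astype(int), 0, 2)
    load = np.zeros(3)
    for op, m in enumerate(machine):
        load[m] += proc[op, m]
    return load.max()          # simplified: max machine load, no precedence

# one illustrative PSO velocity/position update for a 10-particle swarm
pos = rng.uniform(0, 2, size=(10, 6))
vel = np.zeros_like(pos)
pbest = pos.copy()
gbest = pos[np.argmin([makespan(p) for p in pos])]
w, c1, c2 = 0.7, 1.5, 1.5
r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
pos = pos + vel
print("best makespan so far:", makespan(gbest))
```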
2336
79110
Eliminating Redundant and Irrelevant Association Rules in Large Knowledge Bases
Abstract:
Large, growing knowledge bases have been an actively explored issue in the past few years. Most approaches focus on developing techniques to grow the knowledge base. Association rule mining algorithms can also be used for this purpose. A main problem in extracting association rules is the effort spent on evaluating them. In order to reduce the number of association rules discovered, this paper presents the ER component, which eliminates extracted rules in two ways at the post-processing step. The first introduces the concept of super antecedent rules and prunes the redundant ones. The second introduces the concept of super consequent rules, eliminating the irrelevant ones. Experiments showed that both methods combined can decrease the number of rules by more than 30%. We also compared ER to the CHARM and FPMax algorithms. ER generated more relevant and efficient association rules to populate the knowledge base than CHARM and FPMax.
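The ER component's exact pruning criteria are not given in the abstract; the sketch below illustrates one plausible reading of super antecedent pruning: a rule is dropped when a rule with a strictly smaller antecedent and the same consequent already achieves at least its confidence. All rules are invented.

```python
# Hypothetical rules: (antecedent items, consequent items, confidence)
rules = [
    (frozenset({"bread"}),          frozenset({"butter"}), 0.80),
    (frozenset({"bread", "milk"}),  frozenset({"butter"}), 0.78),  # no gain
    (frozenset({"bread", "jam"}),   frozenset({"butter"}), 0.95),  # gains
]

def prune_super_antecedents(rules):
    """Drop rules whose antecedent strictly contains a simpler rule's
    antecedent (same consequent) without improving confidence."""
    kept = []
    for ant, cons, conf in rules:
        redundant = any(
            a < ant and c == cons and cf >= conf   # strict subset, >= conf
            for a, c, cf in rules
        )
        if not redundant:
            kept.append((ant, cons, conf))
    return kept

print(prune_super_antecedents(rules))   # keeps the first and third rules
```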
2335
79103
An Efficient Fundamental Matrix Estimation for Moving Object Detection
Abstract:
In this paper, an improved method for estimating the fundamental matrix is proposed. The method is applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation of moving objects, and fundamental matrix calculation. The corner points are obtained using the Harris corner detector, and the motions of moving objects are calculated with the pyramidal Lucas-Kanade optical flow algorithm. Through epipolar geometry analysis using RANSAC, the fundamental matrix is calculated. In the proposed method, we improve the performance of moving object detection by using two threshold values that determine whether a point is an inlier or an outlier. Through simulations, we compare performance while varying the two threshold values.
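A minimal sketch of the pipeline named above (Harris corners, pyramidal Lucas-Kanade flow, RANSAC fundamental matrix) using OpenCV; the frame filenames and the single RANSAC threshold are placeholders, whereas the paper's contribution is its two-threshold inlier/outlier rule.

```python
import cv2
import numpy as np

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Harris-based corners via goodFeaturesToTrack
p1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01,
                             minDistance=7, useHarrisDetector=True)

# pyramidal Lucas-Kanade optical flow tracks the corners into frame 2
p2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, p1, None)
good1 = p1[status.ravel() == 1]
good2 = p2[status.ravel() == 1]

# RANSAC-based fundamental matrix; the inlier mask separates static
# background points from points on independently moving objects
F, inliers = cv2.findFundamentalMat(good1, good2, cv2.FM_RANSAC, 1.0, 0.99)
moving = good2[inliers.ravel() == 0]   # outliers: candidate moving points
```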
2334
79075
Design of a Virtual Reality Based Interactive Simulator
Abstract:
Virtual-reality-based training is becoming more and more popular because of its capability for immersive training experiences. Despite many advantages and capabilities, these systems have some shortcomings. A major shortcoming is interaction with the environment. To experience immersive training environments, users wear a head-mounted display (HMD), which blocks all external vision to produce a fully immersive virtual experience. The trainees can only see the virtual world, not real objects like controllers, buttons, the steering wheel, pedals, or even their own hands. This paper describes the design of a virtual-reality-based training system in which users are able to see real and virtual-reality images seamlessly. The trainees are able to see the combination of virtual and physical elements via a head-mounted display using a dual-reality mode. When the user looks towards the wheel, controllers and other preselected objects are detected using an object recognition algorithm, and the screen transitions to the view captured by cameras mounted in front of the head-mounted display.
2333
79042
Image Encryption Using Eureqa to Generate an Automated Mathematical Key
Abstract:
Applying traditional symmetric cryptography algorithms for encryption and decryption requires secret keys that are immune to different attacks. One popular technique for generating automated secret keys is evolutionary computing using the Eureqa API tool, which gained attention in 2013. In this paper, we generate automated secret keys for image encryption and decryption using the Eureqa API (a tool used in the evolutionary computing technique). The Eureqa API models pseudo-random input data obtained from a suitable source to generate secret keys. The validity of the generated secret keys is investigated by performing various statistical tests (histogram, chi-square, correlation of two adjacent pixels, correlation between original and encrypted images, entropy, and key sensitivity). Experimental results obtained from methods including histogram analysis, correlation coefficient, entropy, and key sensitivity show that the proposed image encryption algorithms are secure and reliable, with the potential to be adapted for secure image communication applications.
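Two of the statistical tests listed above are easy to reproduce; the sketch below computes the Shannon entropy (ideally close to 8 bits for an 8-bit cipher image) and the horizontal adjacent-pixel correlation (ideally close to 0) on a stand-in random "encrypted" image.

```python
import numpy as np

rng = np.random.default_rng(0)
cipher = rng.integers(0, 256, size=(256, 256))   # stand-in encrypted image

hist, _ = np.histogram(cipher, bins=256, range=(0, 256))
p = hist / hist.sum()
entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()  # ideal: close to 8 bits

x = cipher[:, :-1].ravel().astype(float)         # horizontally adjacent
y = cipher[:, 1:].ravel().astype(float)          # pixel pairs
corr = np.corrcoef(x, y)[0, 1]                   # ideal: close to 0
print(f"entropy={entropy:.3f} bits, adjacent-pixel corr={corr:.4f}")
```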
2332
79015
Cognition of Driving Context for Driving Assistance
Abstract:
In this paper, we present our innovative way of determining the driving context for a driving assistance system. We invoke the fusion of all parameters that describe the context of the environment, the vehicle, and the driver to obtain the driving context. We created a training set that stores driving situation patterns, which the system consults to determine the driving situation. A machine-learning algorithm predicts the driving situation. The driving situation is an input to the fission process, which yields the action that must be implemented when the driver needs to be informed or assisted in the given driving situation. The action may be directed towards the driver, the vehicle, or both. This is ongoing work whose goal is to offer an alternative driving assistance system for safe driving, green driving, and comfortable driving. Here, ontologies are used for knowledge representation.
2331
78971
Implementation of Chlorine Monitoring and Supply System for Drinking Water Tanks
Abstract:
Healthy, clean water should not contain disease-causing microorganisms or toxic chemicals and must contain the necessary minerals in a balanced manner. Today, water resources have a limited and strategic importance, necessitating the management of water reserves. Water tanks meet the water needs of people and should be regularly chlorinated to prevent waterborne diseases. For this purpose, automatic chlorination systems are placed in water tanks to kill bacteria. However, the regular operation of automatic chlorination systems depends on refilling the chlorine tank when it is empty. For this reason, there is a need for a stock control system in which chlorine levels are regularly monitored and replenished. It has become imperative to take urgent measures against epidemics caused by the fact that, in most of our country, the depletion of chlorine goes unnoticed. The aim of this work is to rehabilitate existing water tanks and to provide a method for a modern water storage system in which chlorination is digitally monitored, by turning newly established water tanks into a closed system. A sensor network structure using GSM/GPRS communication infrastructure has been developed in this study. The system consists of two basic units: hardware and software. The hardware includes a chlorine level sensor, an RFID interlock system for authorized personnel entry into the water tank, a motion sensor for animals and other elements, and a camera system to ensure process safety. The system transmits the data from the hardware sensors to the host server software via the TCP/IP protocol. The main server software processes the incoming data through a security algorithm and informs the relevant responsible unit (security forces, chlorine supply unit, public health, local administrator) by e-mail and SMS. Since the software is web-based, authorized personnel are also able to monitor the drinking water tank and report data over the internet. Evaluation of the findings and user feedback obtained in the study shows that closed drinking water tanks built with GRP-type material, combined with continuous monitoring in a digital environment, are vital for a sustainable, healthy water supply.
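The wire format and endpoint of the described TCP/IP link are not specified; the sketch below shows one plausible minimal report from the tank-side hardware to the host server, with an invented JSON payload, hostname, and port.

```python
import json
import socket

# hypothetical sensor payload from one tank-side unit
reading = {
    "tank_id": "T-104",
    "chlorine_level_pct": 12.5,   # triggers a supply alert when low
    "rfid_entry": None,
    "motion_detected": False,
}

# hypothetical host server endpoint; the server side would apply the
# security algorithm and dispatch e-mail/SMS notifications
with socket.create_connection(("monitor.example.org", 9000)) as s:
    s.sendall(json.dumps(reading).encode() + b"\n")
```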
2330
78956
Field Production Data Collection, Analysis and Reporting Using Automated System
Abstract:
Various data points, such as pressure, temperature, and water cut, are constantly being measured in the production system, and due to the nature of the wells, these data points fluctuate constantly, which requires high-frequency monitoring and collection. It is a very difficult task to analyze these parameters manually using spreadsheets and email. An automated system greatly enhances efficiency, reduces errors, removes the need for constant emails that take up disk space, and frees up time for the operator to perform other critical tasks. A great deal of production data is recorded in an oil field, and this huge volume of data can be seen as irrelevant by some, especially when viewed on its own with no context. In order to fully utilize all this information, it needs to be properly collected, verified, stored in one common place, and analyzed for surveillance and monitoring purposes. This paper describes how data is recorded by different parties and departments in the field and verified numerous times as it is being loaded into a repository. Once it is loaded, a final check is done before it is entered into a production monitoring system. Once all this is collected, various calculations are performed to report allocated production. The calculated production data is used to report field production automatically. It is also used to monitor well and surface facility performance. Engineers can use this data in their studies and analyses to ensure the field is performing as it should, to predict and forecast production, and to monitor any changes in wells that could affect field performance.
2329
78945
Defining a Reference Architecture for Predictive Maintenance Systems: A Case Study Using the Microsoft Azure IoT-Cloud Components
Abstract:
Current preventive maintenance measures are cost-intensive and inefficient. With the sensor data available from state-of-the-art Internet of Things devices, new possibilities for automated data processing emerge. Current advances in data science and machine learning enable new, so-called predictive maintenance technologies, which empower data scientists to forecast possible system failures. The goal of this approach is to cut expenses in preventive maintenance by automating the detection of possible failures and to improve the efficiency and quality of maintenance measures. Additionally, a centralization of the sensor data monitoring can be achieved using this approach. This paper describes the approach of three students to defining a reference architecture for a predictive maintenance solution in the Internet of Things domain, with a connected smartphone app for service technicians. The reference architecture is validated by a case study, which is implemented with current Microsoft Azure cloud technologies. The results of the case study show that the reference architecture is valid and can be used to achieve a system for predictive maintenance execution with the cloud components of Microsoft Azure. The concepts used are technology-platform agnostic and can be reused on many different cloud platforms, and the reference architecture applies to many use cases, such as gas station maintenance, elevator maintenance, and many more.
2328
78846
Investigation of Clustering Algorithms Used in Wireless Sensor Networks
Abstract:
Wireless sensor networks are networks in which multiple sensor nodes organize among themselves. The working principle is based on the transfer of the sensed data over other nodes in the network to the central station. Research on wireless sensor networks concentrates on routing algorithms, energy efficiency, and clustering algorithms. In the clustering method, the nodes in the network are divided into clusters using different parameters, and the most suitable cluster head is selected from among them. The data to be sent to the center is gathered per cluster and transmitted to the center by the cluster head. With this method, network traffic is reduced and the energy efficiency of the nodes is increased. In this study, clustering algorithms were examined in terms of clustering performance and cluster head selection characteristics, to try to identify their weak and strong sides.
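Among the algorithms such a survey typically covers, LEACH's stochastic cluster-head election is the classic example; the sketch below implements its rotating threshold, assuming a desired head fraction P and that nodes which recently served as head sit out the rest of the epoch.

```python
import random

random.seed(0)
P = 0.05                                   # desired cluster-head fraction

def leach_threshold(round_no):
    """LEACH election threshold; it rises as the 1/P-round epoch ends."""
    return P / (1 - P * (round_no % int(1 / P)))

def elect_heads(node_ids, round_no, recent_heads):
    """Each eligible node becomes a head with probability T(round)."""
    t = leach_threshold(round_no)
    return [n for n in node_ids
            if n not in recent_heads and random.random() < t]

print(elect_heads(range(100), round_no=0, recent_heads=set()))
```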
2327
78806
Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Abstract:
Machine learning techniques based on convolutional neural networks (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift-invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally has to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with the larger size of the filters, although additional computational cost is required to compute the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be effectively analyzed at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with a smaller number of unknown weights. The proposed algorithm has high potential in the application of a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
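A hedged sketch of the core idea, not the authors' code: frozen random convolution kernels of several sizes whose responses are modulated by one trainable scalar per filter, so only the scalars enter back-propagation. Layer sizes and filter counts are illustrative.

```python
import torch
import torch.nn as nn

class RandomKernelConv(nn.Module):
    """Fixed random filters at several scales; one scalar weight per
    filter is the only trainable parameter (illustrative sketch)."""
    def __init__(self, in_ch, filters_per_size=8, sizes=(3, 7, 11)):
        super().__init__()
        self.convs = nn.ModuleList()
        self.scales = nn.ParameterList()
        for k in sizes:
            conv = nn.Conv2d(in_ch, filters_per_size, k, padding=k // 2,
                             bias=False)
            conv.weight.requires_grad_(False)   # kernels stay random
            self.convs.append(conv)
            # one learnable scalar per random filter
            self.scales.append(nn.Parameter(torch.ones(filters_per_size)))

    def forward(self, x):
        outs = [s.view(1, -1, 1, 1) * c(x)      # scale each filter response
                for c, s in zip(self.convs, self.scales)]
        return torch.cat(outs, dim=1)           # multi-scale feature stack

layer = RandomKernelConv(in_ch=1)
y = layer(torch.randn(2, 1, 32, 32))
print(y.shape)    # torch.Size([2, 24, 32, 32])
```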