Commenced in January 1999 | Frequency: Monthly | Edition: International | Paper Count: 43
The increasing demand for gallium, indium and rare-earth elements for the production of electronics, e.g., solid-state lighting, photovoltaics, integrated circuits, and liquid crystal displays, will exceed the worldwide supply according to current forecasts. Recycling systems to reclaim these materials are not yet in place, which challenges the sustainability of these technologies. This paper proposes a multispectral imaging system as the basis for a vision-based recognition system for valuable components of electronics waste. Multispectral imaging is intended to enhance the contrast of images of printed circuit boards (single components as well as labels) for further analysis, such as optical character recognition and recognition of entire printed circuit boards. The results show that a higher contrast is achieved in the near infrared compared to ultraviolet and visible light.
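One way to quantify the band-dependent contrast gain described above is a global contrast measure computed per spectral band. The sketch below is a minimal illustration in Python/NumPy, not the authors' pipeline; the RMS-contrast metric, the band order and the placeholder image cube are assumptions.

```python
import numpy as np

def rms_contrast(band: np.ndarray) -> float:
    """RMS contrast: standard deviation of intensities normalised by the mean."""
    band = band.astype(np.float64)
    return band.std() / (band.mean() + 1e-12)

# Hypothetical multispectral cube of a PCB: shape (rows, cols, bands),
# one image per illumination band (UV, visible, NIR) -- placeholder data only.
cube = np.random.rand(480, 640, 3)
band_names = ["UV", "VIS", "NIR"]          # assumed band order

for idx, name in enumerate(band_names):
    print(f"{name}: RMS contrast = {rms_contrast(cube[:, :, idx]):.3f}")
```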
Modelling the Earth's surface and evaluating the urban environment with 3D models is an important research topic. New stereo capabilities of high-resolution optical satellite imagery, such as the tri-stereo mode of Pleiades, combined with new image-matching algorithms, are now available and can be applied to urban-area analysis. In addition, photogrammetry software packages have gained more efficient matching algorithms, such as semi-global matching (SGM), as well as improved filters for shadow areas, and can achieve denser and more precise results. This paper describes a comparison between 3D data extracted from tri-stereo and dual-stereo satellite images, combined with pixel-based matching and the Wallis filter. The aim was to improve the accuracy of 3D models, especially in urban areas, in order to assess whether satellite images are appropriate for a rapid evaluation of urban environments. The results showed that the 3D models obtained from Pleiades tri-stereo outperformed, both in accuracy and in detail, the result obtained from a GeoEye stereo pair. The assessment was made against reference digital surface models derived from high-resolution aerial photography. This suggests that tri-stereo images can be successfully used for the proposed urban change analyses.
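A common way to make the accuracy comparison above concrete is to difference each satellite-derived digital surface model (DSM) against the aerial reference DSM on a common grid and report error statistics. The following is a minimal sketch under that assumption; the grids and error magnitudes are synthetic placeholders, not the paper's data.

```python
import numpy as np

def dsm_accuracy(dsm: np.ndarray, reference: np.ndarray) -> dict:
    """Compare a satellite-derived DSM against a reference DSM on the same grid."""
    diff = dsm - reference
    diff = diff[np.isfinite(diff)]            # ignore no-data cells
    return {
        "mean_error_m": float(diff.mean()),
        "rmse_m": float(np.sqrt((diff ** 2).mean())),
        "std_m": float(diff.std()),
    }

# Hypothetical height grids in metres (e.g., a tri-stereo DSM vs. an aerial reference).
reference_dsm  = np.random.rand(100, 100) * 50
tri_stereo_dsm = reference_dsm + np.random.normal(0, 0.8, (100, 100))

print(dsm_accuracy(tri_stereo_dsm, reference_dsm))
```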
The Graphical User Interface (GUI) is as essential to a program as any other feature, because GUI components provide the fundamental interaction between the user and the program. Thus, more attention must be given to the GUI during system building and development, and particularly to the user, who is the cornerstone of any interaction with the GUI. This paper introduces an approach for designing GUIs from business workflow models that describe the workflow behaviour of a system, specifically Activity Diagrams (AD).
Load modelling is one of the central functions in power system operations. Electricity cannot be stored, which means that an electric utility needs estimates of future demand to manage production and purchasing in an economically reasonable way. A majority of recently reported approaches are based on neural networks. The attraction of these methods lies in the assumption that neural networks are able to learn properties of the load. However, the development of the methods is not finished, and the lack of comparative results on different model variations is a problem. This paper presents a new approach to predicting the Tunisian daily peak load. The proposed method employs a computational intelligence scheme based on a fuzzy neural network (FNN) and support vector regression (SVR). Experimental results indicate that the proposed FNN-SVR technique gives significantly better prediction accuracy than several classical techniques.
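The SVR component of such a scheme can be illustrated with a short sketch that regresses today's peak on the previous week's peaks. This is not the paper's FNN-SVR model; the synthetic load series, the lag length and the SVR hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical daily peak load series (MW); in practice this would be utility data.
rng = np.random.default_rng(0)
load = 1500 + 200 * np.sin(np.arange(400) * 2 * np.pi / 7) + rng.normal(0, 30, 400)

# Lagged features: the peaks of the previous 7 days predict today's peak.
lags = 7
X = np.column_stack([load[i:len(load) - lags + i] for i in range(lags)])
y = load[lags:]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(X[:-30], y[:-30])                       # train on all but the last month
pred = model.predict(X[-30:])
mape = np.mean(np.abs((y[-30:] - pred) / y[-30:])) * 100
print(f"MAPE on the held-out month: {mape:.2f}%")
```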
It is important to take security measures to protect computer information, reduce identity theft, and prevent malicious cyber-attacks. With cyber-attacks continuously on the rise, people need to understand these attacks and learn ways to prevent them. Understanding cyber-attacks is an important factor if one is to protect oneself from malicious attacks. Without proper security measures, most computer technology would hinder home users more than it would help them. Knowledge of how cyber-attacks operate, and of the protective steps that can be taken to reduce the chance of their occurrence, is key to increasing these security measures. The purpose of this paper is to inform home users of the importance of identifying and taking preventive steps to avoid cyber-attacks. Throughout this paper, many aspects of cyber-attacks are discussed: what a cyber-attack is, the effects of cyber-attacks on home users, the different types of cyber-attacks, and the preventive measures home users can take to fortify the security of their computers.
A large amount of data is typically stored in relational databases (DBs), which can efficiently handle user queries that elicit the appropriate information from the data sources. However, direct access to and use of these data require end users to have an adequate technical background, and they must also cope with the internal data structure and the values presented. Consequently, information retrieval is a quite difficult process even for IT or DB experts, given the limited contribution of relational databases from the conceptual point of view. Ontologies enable users to formally describe a domain of knowledge in terms of concepts and the relations among them, and hence they can be used to unambiguously specify the information captured by a relational database. However, accessing information residing in a database using ontologies is feasible only if the users are willing to use semantic web technologies. To enable users from different disciplines to retrieve the appropriate data, the design of a graphical user interface is necessary. In this work, we present an interactive, ontology-based, semantically enabled web tool that can be used for information retrieval purposes. The tool is entirely based on the ontological representation of the underlying database schema, and it provides a user-friendly environment through which users can graphically compose and execute their queries.
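Under the hood, a graphically composed query over such an ontology typically ends up as a SPARQL query against the ontological view of the schema. The sketch below shows this idea with rdflib; the hospital namespace, classes and properties are invented for illustration and do not come from the paper.

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical ontology mirroring a relational schema (classes ~ tables, properties ~ columns).
EX = Namespace("http://example.org/hospital#")

g = Graph()
g.add((EX.patient1, RDF.type, EX.Patient))
g.add((EX.patient1, EX.hasName, Literal("Alice")))
g.add((EX.patient1, EX.hasAge, Literal(42)))

# A query that a graphical interface might assemble from the user's selections.
sparql = """
PREFIX ex: <http://example.org/hospital#>
SELECT ?name ?age WHERE {
    ?p a ex:Patient ;
       ex:hasName ?name ;
       ex:hasAge ?age .
    FILTER (?age > 40)
}
"""
for name, age in g.query(sparql):
    print(name, age)
```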
As increasingly many mobile health applications appear due to the popularity of smartphones, the possibility arises that the data they collect can be used to improve the medical diagnostic process, as well as the overall quality of healthcare, while at the same time lowering costs. However, as yet there have been no reports of a successful combination of patient-generated data from smartphones with data from clinical routine. In this paper we describe how these two types of data can be combined in a secure way without modification to hospital information systems, and how they can together be used in a medical expert system for automatic nutritional classification and triage.
Human motion capture has become one of the major areas of interest in the field of computer vision. Some of the rapidly evolving application areas include advanced human interfaces, virtual reality and security/surveillance systems. This study provides a brief overview of the techniques and applications used for markerless human motion capture, which analyses human motion in the form of mathematical formulations. The major contribution of this research is that it classifies the computer-vision-based techniques of human motion capture according to a taxonomy and then breaks them down into four systematically different categories: tracking, initialization, pose estimation and recognition. Detailed descriptions, and the relationships among them, are given for the tracking and pose-estimation techniques, and the subcategories of each process are further described. The various hypotheses used by researchers in this domain are surveyed, and the evolution of these techniques is explained. The survey concludes that most researchers have focused on using mathematical body models for markerless motion capture.
The planning of geological survey works is an iterative process involving planners, geologists, civil engineers and other stakeholders, who perform different roles and have different points of view. Traditionally, the team used paper maps or CAD drawings to present the proposal, which is not an efficient way to present and share ideas on a site-investigation proposal, such as the siting of borehole locations or seismic survey lines. This paper focuses on how a GIS approach can be utilised to develop a web-based system to support the decision-making process in the planning of geological survey works and to plan site activities carried out by the Singapore Geological Office (SGO). The authors design a framework for building an interactive web-based GIS and develop a prototype that enables users to rapidly retrieve existing geological information and to interactively plan borehole locations and seismic survey lines via a web browser. This prototype system is used daily by the SGO and has proven effective in increasing efficiency and productivity, as the time taken in planning geological survey works is shortened. The prototype system has been developed using the ESRI ArcGIS API 3.7 for Flex, which is based on the ArcGIS 10.2.1 platform.
IEEE 802.16 (WiMAX) aims to provide high-speed wireless access with wide-area coverage. The base station (BS) and the subscriber station (SS) are the main parts of WiMAX. WiMAX uses either Point-to-Multipoint (PMP) or mesh topologies. In the PMP mode, the SSs connect to the BS to gain access to the network, whereas in the mesh mode, the SSs connect to each other to gain access to the BS. The main components of QoS management in the 802.16 standard are admission control, buffer management and packet scheduling. In this paper, we use QualNet 5.0.2 to study the performance of different scheduling schemes, such as WFQ, SCFQ, RR and SP, as the number of SSs increases. We find that as the number of SSs increases, the average jitter and average end-to-end delay increase and the throughput decreases.
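The metrics reported above are straightforward to compute from per-packet send and receive timestamps, for example when exported from a simulator trace. The following is a minimal sketch under that assumption; the timestamps below are synthetic placeholders, not QualNet output.

```python
import numpy as np

def delay_and_jitter(send_times, recv_times):
    """Average end-to-end delay and average jitter (variation of consecutive delays)."""
    delays = recv_times - send_times
    jitter = np.abs(np.diff(delays))          # per-packet delay variation
    return float(delays.mean()), float(jitter.mean())

# Hypothetical timestamps (seconds) for the packets of one SS.
send = np.arange(0.0, 1.0, 0.02)                          # 50 packets, 20 ms apart
recv = send + 0.015 + np.random.default_rng(1).normal(0, 0.002, send.size)

avg_delay, avg_jitter = delay_and_jitter(send, recv)
print(f"avg end-to-end delay = {avg_delay*1e3:.2f} ms, avg jitter = {avg_jitter*1e3:.2f} ms")
```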
Formal verification is proposed to ensure the correctness of a design and to make functional verification more efficient. The cache plays a vital role in System-on-Chip (SoC) design, but a cache with a Memory Management Unit (MMU) and a cache memory unit makes the state space too large to verify by simulation, so a formal verification approach is presented for such designs. In this paper, a formal model-checking verification flow is suggested and a new cache memory model, called the "exhaustive search model", is proposed. Instead of using a large RAM to represent the whole cache memory, the exhaustive search model employs just two cache blocks. For a cache system containing a data cache (Dcache) and an instruction cache (Icache), the Dcache and Icache memory models are established separately using the same mechanism. Finally, the novel model is applied to the verification of a cache module in a custom-built SoC that has been used in practice. The results show that the cache system is verified correctly using the exhaustive search model, which makes the verification much more manageable and flexible.
A new concept of response system is proposed to fill the gap that exists in reducing vulnerability during the immediate response to natural disasters. Real Time Early Response Systems (RTERSs) incorporate real-time information as feedback data for closing the control loop and for generating real-time situation assessments. A review of the state of the art on works that fit the concept of an RTERS is presented, and it is found that they are mainly focused on man-made disasters. At the same time, in the response phase of natural disaster management, many works focus on creating early-warning systems, but few efforts address what to do once an alarm is activated. In this context, an RTERS emerges as a useful tool for supporting people in their decision-making process during natural disasters after an event is detected, and also as an innovative context for applying well-known automation technologies and automatic-control concepts and tools.
Due to the rapid growth of the Internet, web opinion sources are dynamically emerging, which is useful to both potential customers and product manufacturers for prediction and decision purposes. These sources consist of user-generated content written in natural language as unstructured free text. Therefore, opinion mining techniques have become popular for automatically processing customer reviews to extract product features and the user opinions expressed about them. Since customer reviews may contain both opinionated and factual sentences, a supervised machine learning technique is applied for subjectivity classification to improve the mining performance. In this paper, our work is dedicated to the task of opinion summarization. Product feature and opinion extraction is critical to opinion summarization, because its effectiveness significantly affects the identification of semantic relationships. The polarity and numeric score of all the features are determined using the SentiWordNet lexicon. The problem of opinion summarization concerns how to relate opinion words to a given feature. A probabilistic supervised-learning model improves the results and is more flexible and effective.
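The SentiWordNet step mentioned above can be illustrated with NLTK's interface to the lexicon: each opinion word is scored by the difference between its positive and negative scores, averaged over its synsets. This is a simplified sketch, not the paper's scoring procedure; the example feature and opinion words are assumptions.

```python
from nltk.corpus import sentiwordnet as swn
# Requires: nltk.download("sentiwordnet") and nltk.download("wordnet")

def word_polarity(word: str, pos: str = "a") -> float:
    """Average (positive - negative) SentiWordNet score over a word's synsets."""
    synsets = list(swn.senti_synsets(word, pos))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

# Hypothetical opinion words attached to the feature "battery" in a review.
for opinion in ["excellent", "poor", "average"]:
    print(opinion, round(word_polarity(opinion), 3))
```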
Game theory is the study of how people interact and make decisions in competitive situations. It has mainly been developed to study decision making in complex situations. Humans routinely alter their behaviour in response to changes in their social and physical environment. As a consequence, the outcomes of decisions that depend on the behaviour of multiple decision makers are difficult to predict and require highly adaptive decision-making strategies. In addition, decision makers may have preferences regarding the consequences for other individuals and may choose their actions to improve or reduce the well-being of others. The Nash equilibrium is a fundamental concept in the theory of games and the most widely used method of predicting the outcome of a strategic interaction in the social sciences. A Nash equilibrium exists when no player involved has a profitable unilateral deviation; in other words, no player in the game would take a different action as long as every other player keeps the same strategy.
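For a finite two-player game in normal form, the definition above can be checked directly: a strategy pair is a pure-strategy Nash equilibrium if each player's choice is a best response to the other's. The sketch below illustrates this with the classic prisoner's dilemma; the payoff matrices are a textbook example, not drawn from the paper.

```python
import numpy as np

def pure_nash_equilibria(A: np.ndarray, B: np.ndarray):
    """Pure-strategy Nash equilibria of a two-player game.
    A[i, j] is the row player's payoff, B[i, j] the column player's payoff."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_best = A[i, j] >= A[:, j].max()     # row player cannot improve
            col_best = B[i, j] >= B[i, :].max()     # column player cannot improve
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma payoffs (strategy 0 = cooperate, 1 = defect).
A = np.array([[-1, -3], [0, -2]])
B = np.array([[-1, 0], [-3, -2]])
print(pure_nash_equilibria(A, B))    # [(1, 1)] -> mutual defection
```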
This paper addresses the problem of building secure computational services over encrypted information in cloud computing without decrypting the data, thereby meeting the need for a computational encryption model that can enhance the security of big data with respect to users' privacy, confidentiality and availability. The cryptographic model applied to computation over the encrypted data is a fully homomorphic encryption scheme. We contribute a theoretical presentation of high-level computational processes based on number theory and algebra that can easily be integrated and leveraged in cloud computing, together with a detailed mathematical treatment of fully homomorphic encryption models. This contribution supports the full implementation of a cryptographic security algorithm for big data analytics.
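Homomorphic encryption lets a cloud service operate on ciphertexts so that the decrypted result equals the computation over the plaintexts. The toy sketch below illustrates this property with the Paillier cryptosystem, which is only additively homomorphic (not fully homomorphic as in the paper's scheme) and deliberately uses tiny, insecure key sizes for readability.

```python
from math import gcd
import random

# Toy Paillier cryptosystem: additively homomorphic only, insecure key sizes;
# the point is merely to show computation on ciphertexts without decryption.
p, q = 293, 433                                  # toy primes; real keys are much larger
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)      # inverse of L(g^lam mod n^2) mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n2                # multiplying ciphertexts adds plaintexts
print(decrypt(c_sum))                 # 42
```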
Despite the highly touted benefits, emerging technologies have unleashed pervasive concerns regarding unintended and unforeseen social impacts. Thus, those wishing to create safe and socially acceptable products need to identify such side effects and mitigate them prior to market proliferation. Various methodologies in the field of technology assessment (TA), namely Delphi, impact assessment, and scenario planning, have been widely employed in such circumstances. However, the literature faces a major limitation in its sole reliance on participatory workshop activities; it has missed the massive untapped source of futuristic information flooding the Internet. This research thus seeks to gain insights into the use of futuristic data, i.e. future-oriented documents from the Internet, as a supplementary method for generating social-impact scenarios while capturing the perspectives of experts from a wide variety of disciplines. To this end, network analysis is conducted on the social keywords extracted from the futuristic documents by text mining, and the result is used as a guide to produce a comprehensive set of detailed scenarios. Our proposed approach facilitates harmonised depictions of the possible hazardous consequences of emerging technologies and thereby makes decision makers more aware of, and responsive to, broad qualitative uncertainties.
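The network-analysis step described above can be sketched as a keyword co-occurrence graph: keywords extracted from each document become nodes, co-occurrence within a document adds edge weight, and centrality scores suggest which themes anchor the scenarios. The keywords and documents below are invented placeholders, not the study's data.

```python
import itertools
import networkx as nx

# Hypothetical "social keywords" extracted by text mining; each inner list
# stands for the keywords found in one future-oriented document.
documents = [
    ["privacy", "surveillance", "autonomy"],
    ["privacy", "employment", "automation"],
    ["surveillance", "automation", "trust"],
]

# Build a co-occurrence network: keywords are nodes, shared documents add edge weight.
G = nx.Graph()
for keywords in documents:
    for a, b in itertools.combinations(sorted(set(keywords)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Central keywords can guide which impact scenarios to elaborate first.
centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```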
Offering a Product-Service System (PSS) is a well-accepted strategy that companies may adopt to provide a set of systemic solutions to customers. PSSs were initially provided in a simple form but now take diversified and complex forms involving multiple services, products and technologies. With the growing interest in PSSs, frameworks for PSS development have been introduced by many researchers. However, most of the existing frameworks fail to examine the various relations existing in a complex PSS. Since designing a complex PSS involves full integration of multiple products and services, it is essential to identify not only product-service relations but also product-product and service-service relations, and it is equally important to specify how they are related for a better understanding of the system. Moreover, as customers tend to view their purchase from a more holistic perspective, a PSS should be developed based on the whole system's requirements, rather than focusing only on the product requirements or the service requirements. Thus, we propose a framework for developing a complex PSS that is fully coordinated with the requirements of both worlds. Specifically, our approach adopts a multi-domain matrix (MDM). An MDM identifies not only inter-domain relations but also intra-domain relations, so it helps to design a PSS that includes highly desired and closely related core functions and features. In addition, the various dependency types and rating schemes proposed in our approach support the integration process.
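An MDM of the kind described can be represented as a block matrix whose diagonal blocks hold intra-domain relations (product-product, service-service) and whose off-diagonal blocks hold inter-domain relations (product-service). The sketch below is a minimal illustration with made-up elements and binary dependencies, not the proposed rating scheme.

```python
import numpy as np

# Hypothetical multi-domain matrix (MDM) with two domains: products P1-P2 and services S1-S2.
# Entry (i, j) = 1 means element i relates to element j.
elements = ["P1", "P2", "S1", "S2"]
mdm = np.array([
    [0, 1, 1, 0],   # P1 relates to P2 (intra-domain) and S1 (inter-domain)
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
])

n_products = 2
product_dsm = mdm[:n_products, :n_products]     # intra-domain: product-product
service_dsm = mdm[n_products:, n_products:]     # intra-domain: service-service
ps_dmm = mdm[:n_products, n_products:]          # inter-domain: product-service

print("product-product relations:\n", product_dsm)
print("service-service relations:\n", service_dsm)
print("product-service relations:\n", ps_dmm)
```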
In this paper we present some spamming techniques, their behaviour, and possible solutions. We analyze how spammers enter online social networking sites (OSNSs) to target them, and the diverse techniques they use for this purpose. Spamming is a very common issue in the present era of the Internet, especially on online social networking sites such as Facebook, Twitter and Google+. Spam messages waste Internet bandwidth and the storage space of servers. On social networking sites, spammers often disguise themselves by creating fake accounts and hijacking users' accounts for personal gain. They behave like normal users and continually change their spamming strategies. The following spamming techniques are discussed in this paper: clickjacking, social-engineering attacks, cross-site scripting, URL shortening, and drive-by downloads. We have used the Elgg framework to demonstrate some of the spamming threats and to implement the corresponding solutions.
Customer churn prediction is one of the most useful areas of study in customer analytics. Due to the enormous amount of data available for such predictions, machine learning and data mining have been heavily used in this domain. Many machine learning algorithms are directly applicable to the problem of customer churn prediction; here, we experiment with a novel approach that uses a cognitive-learning-based technique, attempting to improve on the results of supervised learning methods by combining them with cognitive unsupervised learning methods.
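One simple way to combine unsupervised and supervised learning for churn, shown below, is to cluster customers first and feed the cluster label to a supervised classifier as an extra feature. This is only an illustrative stand-in for the paper's cognitive-learning technique; the synthetic customer features and the choice of k-means plus a random forest are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical customer features: [monthly_spend, tenure_months, support_calls] (standardised).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
churned = (X[:, 2] - X[:, 1] + rng.normal(0, 0.5, 1000) > 0.5).astype(int)

# Unsupervised step: cluster customers and append the segment label as an extra feature.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, segments])

# Supervised step: train a classifier on the augmented features.
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, churned, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("ROC-AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```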
In recent years, parasitic antennas have played a major role in MIMO systems because of their gain and spectral efficiency. In this paper, a single-RF-chain MIMO transmitter is designed using a reconfigurable parasitic antenna. Spatial Modulation (SM) is a recently proposed MIMO scheme that activates only one antenna at a time. SM entirely avoids inter-channel interference (ICI) and inter-antenna synchronisation (IAS), and requires only a single RF chain at the transmitter: it switches on a single transmit antenna for data transmission while all other antennas are kept silent. The purpose of the parasitic elements is to reshape the radiation pattern of the radio waves emitted from the driven element and direct them in one direction, thereby introducing transmit diversity. A diode is connected between the patch and ground; by changing its state (ON or OFF), the parasitic element acts as a reflector or director and is also capable of steering the azimuth and elevation angles. This is achieved by changing the input impedance of each parasitic element through the single RF chain. The switching of the diodes selects a single parasitic antenna for spatial modulation. This antenna is expected to achieve maximum gain with the desired efficiency.
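The spatial-modulation mapping itself is easy to illustrate: with Nt transmit antennas and QPSK, log2(Nt) bits select which antenna is active and the remaining bits select the symbol, so only one antenna transmits at a time. The sketch below is a generic SM mapper, not the paper's parasitic-antenna design; the antenna count and constellation are assumptions.

```python
import numpy as np

# Spatial modulation mapping: with Nt transmit antennas and QPSK,
# log2(Nt) bits pick the active antenna and 2 bits pick the symbol.
Nt = 4
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def sm_map(bits):
    """Map log2(Nt)+2 bits to (active antenna index, QPSK symbol, transmit vector)."""
    n_ant_bits = int(np.log2(Nt))
    antenna = int("".join(map(str, bits[:n_ant_bits])), 2)
    symbol = qpsk[int("".join(map(str, bits[n_ant_bits:n_ant_bits + 2])), 2)]
    tx = np.zeros(Nt, dtype=complex)
    tx[antenna] = symbol                 # only one antenna is active at a time
    return antenna, symbol, tx

print(sm_map([1, 0, 1, 1]))             # antenna index 2, fourth QPSK symbol
```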
Image enhancement is a challenging issue in many applications, and various filters have been developed over the last two decades. This paper proposes a novel method that removes Gaussian noise from grayscale images. The proposed technique is compared with the Enhanced Fuzzy Peer Group Filter (EFPGF) for various noise levels. Experimental results show that the proposed filter achieves a better peak signal-to-noise ratio (PSNR) than the existing techniques, with a gain of 1.736 dB in PSNR over the EFPGF technique.
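PSNR, the figure of merit quoted above, is computed from the mean squared error between the clean and processed images. A minimal sketch follows; the synthetic image and noise level are placeholders rather than the paper's test data.

```python
import numpy as np

def psnr(original: np.ndarray, processed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a clean image and a processed one."""
    mse = np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

# Hypothetical 8-bit grayscale image with additive Gaussian noise.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0, 15, clean.shape), 0, 255)

print(f"PSNR of the noisy image: {psnr(clean, noisy):.2f} dB")
```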
Due to rapid technological innovation, a tremendous amount of data is accumulating worldwide in every domain, such as pattern recognition, machine learning, spatial data mining, image analysis, fraud analysis and the World Wide Web. This makes it increasingly important to develop tools for data mining functionalities. The major aim of this paper is to analyse various tools used to build resourceful analytical or descriptive models for handling large amounts of information more efficiently and in a user-friendly manner. In this survey, the diverse tools are described in terms of their technical paradigms, graphical interfaces and built-in algorithms, which make them useful for handling significant amounts of data.
Numerous signal-processing-based speech enhancement systems have been proposed to improve intelligibility in the presence of noise. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A method is presented for recoding high-frequency speech components into a low-frequency region, to increase audibility for listeners with hearing loss. The purpose of the paper is to enhance the formants of speech using the Kaiser window. The pitch and formants of the signal are estimated using autocorrelation, zero-crossing and magnitude difference functions. The formant enhancement stage aims to restore the representation of formants at the level of the midbrain. MATLAB is used for the implementation, and a low-complexity system is developed.
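The autocorrelation-based pitch estimation mentioned above can be sketched briefly: for a voiced frame, the lag of the autocorrelation peak within a plausible pitch range gives the fundamental period. The sketch below uses a synthetic 150 Hz frame; the sampling rate, frame length and search range are assumptions, and the language here is Python rather than the paper's MATLAB.

```python
import numpy as np

def pitch_autocorr(frame: np.ndarray, fs: int, fmin: float = 60.0, fmax: float = 400.0) -> float:
    """Estimate the pitch (Hz) of a voiced frame from the peak of its autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)          # admissible lag range
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

# Hypothetical voiced frame: 150 Hz tone plus noise, 30 ms at 16 kHz.
fs = 16000
t = np.arange(int(0.03 * fs)) / fs
frame = np.sin(2 * np.pi * 150 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(f"estimated pitch = {pitch_autocorr(frame, fs):.1f} Hz")
```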
This research paper presents a framework for classifying Magnetic Resonance Imaging (MRI) images for dementia. Dementia, an age-related cognitive decline, is indicated by degeneration of cortical and sub-cortical structures. Characterizing morphological changes helps in understanding disease development and contributes to early prediction and prevention of the disease. Modelling that captures the brain's structural variability and is valid for disease classification and interpretation is very challenging. Features are extracted using Gabor filters at 0, 30, 60 and 90 degree orientations and the Gray Level Co-occurrence Matrix (GLCM). The features are normalized and fused, and Independent Component Analysis (ICA) is used for feature selection. A Support Vector Machine (SVM) classifier with different kernels is evaluated for its efficiency in classifying dementia. The framework is evaluated on MRI images from the OASIS dataset for identifying dementia. Results show that the proposed feature-fusion classifier achieves higher classification accuracy.
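The GLCM part of the feature-extraction stage can be sketched with scikit-image: co-occurrence matrices at the four orientations yield texture properties that feed an SVM. The sketch below uses random placeholder slices and labels and omits the Gabor, fusion and ICA steps; the property set and kernel choice are assumptions, not the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from sklearn.svm import SVC

def glcm_features(image_8bit: np.ndarray) -> np.ndarray:
    """Texture features (contrast, correlation, energy, homogeneity) from GLCMs
    computed at 0, 30, 60 and 90 degree orientations."""
    angles = np.deg2rad([0, 30, 60, 90])
    glcm = graycomatrix(image_8bit, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical 8-bit slices and labels (0 = control, 1 = dementia) standing in for OASIS data.
rng = np.random.default_rng(0)
slices = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.vstack([glcm_features(s) for s in slices])
clf = SVC(kernel="rbf").fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```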