Multi-Agent System for Irrigation Using Fuzzy Logic Algorithm and Open Platform Communication Data Access
Automated irrigation systems can conveniently protect a landscape investment. While conventional irrigation systems are known to be inefficient, automated ones have the potential to optimize water usage. In fact, a new generation of irrigation systems is smart in the sense that it monitors the weather, soil conditions, evaporation, and plant water use, and automatically adjusts the irrigation schedule. In this paper, we present an agent-based smart irrigation system. The agents are built using a mix of commercial off-the-shelf software, including MATLAB, Microsoft Excel, and the KEPServerEX 5 OPC server, together with custom-written code. The Irrigation Scheduler Agent uses fuzzy logic to integrate the information that affects the irrigation schedule. In addition, the multi-agent system uses Open Platform Communications (OPC) technology to share data. OPC technology enables the Irrigation Scheduler Agent to communicate over the Internet, making the system scalable to a municipal or regional agent-based water monitoring, management, and optimization system. Finally, this paper presents simulation and pilot installation test results that show the operational effectiveness of our system.
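The fuzzy inference step behind such a scheduler can be illustrated with a minimal sketch. This is not the paper's actual rule base: the membership ranges, the four rules, and the watering times below are invented for illustration, and a zero-order Sugeno-style weighted average stands in for whatever defuzzification the system really uses.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def irrigation_minutes(soil_moisture, temperature):
    """Toy fuzzy scheduler: rule strength = min of input memberships,
    output = strength-weighted average of per-rule watering times."""
    dry  = tri(soil_moisture, -1, 0, 50)    # soil moisture memberships (%)
    wet  = tri(soil_moisture, 30, 100, 101)
    hot  = tri(temperature, 20, 40, 41)     # temperature memberships (deg C)
    cool = tri(temperature, -1, 0, 25)
    # (rule strength, watering minutes) pairs -- illustrative rule base
    rules = [(min(dry, hot),  30.0),   # dry and hot  -> water long
             (min(dry, cool), 15.0),   # dry and cool -> water briefly
             (min(wet, hot),   5.0),   # wet and hot  -> water a little
             (min(wet, cool),  0.0)]   # wet and cool -> no watering
    total = sum(w for w, _ in rules)
    return sum(w * m for w, m in rules) / total if total else 0.0
```

With these made-up memberships, dry hot conditions yield long watering and saturated cool conditions yield none, which is the qualitative behaviour the abstract describes.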
ParkedGuard: An Efficient and Accurate Parked Domain Detection System Using Graphical Locality Analysis and Coarse-To-Fine Strategy
As the worldwide Internet keeps developing, making a profit by lending registered domain names has emerged as a new business in recent years. Unfortunately, the larger the market for domain lending services becomes, the greater the risk that malicious behaviors or malware hide behind parked domains. Moreover, previous work on identifying parked domains suffers from two main defects: 1) too much data-collection effort and CPU latency for feature engineering and 2) ineffectiveness when detecting parked domains containing external links that are commonly abused by hackers, e.g., in drive-by download attacks. Aiming to alleviate these defects without sacrificing practical usability, this paper proposes ParkedGuard, an efficient and accurate parked domain detector. Several scripting behavioral features were analyzed, and those with particular statistical significance were adopted in ParkedGuard to make feature engineering much more cost-efficient. On the other hand, finding memberships between external links and parked domains was modeled as a graph mining problem, and a coarse-to-fine strategy was carefully designed to leverage graphical locality, so that ParkedGuard outperforms the state of the art in terms of both recall and precision.
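One simple form of the graphical-locality idea can be sketched as grouping domains that share external links, so that membership questions reduce to connected components of a bipartite domain-link graph. This is an illustrative stand-in, not ParkedGuard's actual coarse-to-fine algorithm:

```python
from collections import defaultdict

def link_communities(domain_links):
    """Group domains connected through shared external links using
    union-find; domain_links maps domain -> set of external link URLs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:          # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    by_link = defaultdict(list)
    for dom, links in domain_links.items():
        find(dom)                      # register domains with no shared links
        for ln in links:
            by_link[ln].append(dom)
    for doms in by_link.values():      # domains sharing a link -> same group
        for other in doms[1:]:
            union(doms[0], other)
    groups = defaultdict(set)
    for dom in domain_links:
        groups[find(dom)].add(dom)
    return sorted(groups.values(), key=len, reverse=True)
```

A coarse pass over such clusters can then prioritize which groups deserve fine-grained per-domain analysis.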
Design and Implementation of Medium Access Control Based Routing on Real Wireless Sensor Networks Testbed
IEEE 802.15.4 is a Low Rate Wireless Personal Area Network (LR-WPAN) standard that, combined with ZigBee, enables new applications in the Wireless Sensor Network (WSN) and Internet of Things (IoT) domains. In recent years, it has become a popular standard for WSNs. Wireless communication among sensor motes, enabled by the IEEE 802.15.4 standard, is extensively replacing existing wired technology in a wide range of monitoring and control applications. Researchers have proposed routing frameworks and mechanisms that interact with the IEEE 802.15.4 standard using software platforms. In this paper, we design and implement MAC-based routing (MBR) on top of the IEEE 802.15.4 standard using the “SENSEnuts” hardware platform. The experimental results include light and temperature sensor data obtained from communication between the PAN coordinator and a source node through a coordinator, the MAC addresses of the modules used in the experimental setup, the topology of the network, and the remaining battery power of the source node. Our experimental effort on a WSN testbed has helped us bridge the gap between the theoretical and practical aspects of implementing IEEE 802.15.4 for WSN applications.
Terrain Classification for Ground Robots Based on Acoustic Features
The motivation of our work is to detect different
terrain types traversed by a robot based on acoustic data from the
robot-terrain interaction. Different acoustic features and classifiers
were investigated, such as Mel-frequency cepstral coefficients and
Gammatone-frequency cepstral coefficients for feature extraction,
and Gaussian mixture models and feed-forward neural networks for
classification. We analyze the system's performance by comparing
our proposed techniques with other features surveyed from
related works. We achieve precision and recall values between
87% and 100% per class, and an average accuracy of 95.2%. We also
study the effect of varying audio chunk size in the application phase
of the models and find only a mild impact on performance.
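A drastically simplified version of the generative classification stage can be sketched as a per-class diagonal Gaussian fit with maximum-likelihood prediction, i.e. a single-component stand-in for the Gaussian mixture models used above; the feature values and class names are invented for illustration:

```python
import math

def fit_gaussian(frames):
    """Per-dimension mean and variance over one class's feature frames
    (a one-component, diagonal-covariance simplification of a GMM)."""
    n, d = len(frames), len(frames[0])
    mean = [sum(f[i] for f in frames) / n for i in range(d)]
    var = [max(sum((f[i] - mean[i]) ** 2 for f in frames) / n, 1e-6)
           for i in range(d)]                     # floor avoids zero variance
    return mean, var

def log_likelihood(x, model):
    """Log density of feature vector x under a diagonal Gaussian."""
    mean, var = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(x, models):
    """Pick the terrain class whose model scores x highest."""
    return max(models, key=lambda c: log_likelihood(x, models[c]))
```

In the real pipeline, the input vectors would be the cepstral features extracted from audio frames of the robot-terrain interaction.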
Examining the Performance of Three Multiobjective Evolutionary Algorithms Based on Benchmarking Problems
The objective of this study is to examine the performance of three well-known multiobjective evolutionary algorithms for solving optimization problems. The first algorithm is the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), the second is the Strength Pareto Evolutionary Algorithm 2 (SPEA-2), and the third is the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D). The examined algorithms are analyzed and tested on the ZDT set of test functions using three performance metrics. The results indicate that NSGA-II performs better than the other two algorithms with respect to these three metrics.
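The core operation shared by these comparisons is Pareto dominance and the non-dominated sorting that NSGA-II performs. A minimal (quadratic-time, minimization) sketch of that front-peeling step:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective and
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def nondominated_sort(points):
    """Peel off successive Pareto fronts, as in NSGA-II's ranking phase."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

NSGA-II uses a faster bookkeeping scheme for the same ranking, plus crowding distance to break ties within a front; this sketch only shows the dominance structure itself.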
A Model Based Metaheuristic for Hybrid Hierarchical Community Structure in Social Networks
In recent years, the study of community detection
in social networks has received great attention. The hierarchical
structure of a network can lead to convergence to a locally
optimal community structure. In this paper, we aim to avoid this
local optimum with a hybrid hierarchical method. To this end,
we present an objective function that incorporates a modularity
measure based on structural and semantic similarity, and a
metaheuristic, namely the bee colony algorithm, to optimize this
objective function at both the divisive and agglomerative
hierarchical levels. In order to assess the efficiency and accuracy
of the introduced hybrid bee colony model, we perform an extensive
experimental evaluation on both synthetic and real networks.
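The structural part of such an objective is standard Newman modularity; a direct, unoptimized implementation (without the semantic-similarity term, which is specific to this work) looks as follows:

```python
def modularity(adj, communities):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * d(c_i,c_j)
    for an undirected, unweighted graph; adj maps node -> set of neighbours,
    communities is a list of disjoint node sets."""
    two_m = sum(len(nb) for nb in adj.values())   # 2m = sum of degrees
    label = {n: c for c, group in enumerate(communities) for n in group}
    q = 0.0
    for i in adj:
        for j in adj:
            if label[i] != label[j]:
                continue
            a_ij = 1.0 if j in adj[i] else 0.0
            q += a_ij - len(adj[i]) * len(adj[j]) / two_m
    return q / two_m
```

A metaheuristic such as the bee colony algorithm then searches over candidate partitions for the one maximizing this score (extended here by the semantic term).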
Hybrid Hierarchical Clustering Approach for Community Detection in Social Network
Social Networks generally present a hierarchy of
communities. To determine these communities and the relationship
between them, detection algorithms should be applied. Most of
the existing algorithms, proposed for hierarchical communities
identification, are based on either agglomerative clustering or
divisive clustering. In this paper, we present a hybrid hierarchical
clustering approach for community detection based on both
bottom-up (agglomerative) and top-down (divisive) clustering. Our
approach provides a more relevant community structure than
hierarchical methods that consider only divisive or agglomerative
clustering to identify communities. Moreover, we performed
comparative experiments to evaluate the quality of the clustering
results and to show the effectiveness of our algorithm.
Moving Object Detection Using Histogram of Uniformly Oriented Gradient
Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Two important classes of moving objects in ADAS are pedestrians and scooters. In real-world systems, MOD faces two important challenges: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily detect object edges while remaining largely invariant to changes in illumination and shadowing. However, to reduce the execution time for real-time systems, the image must be down-sampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is employed. Experimental results show the correctness and effectiveness of the proposed method: with SVM classifiers, the HUG features achieve better classification performance than the HOG ones.
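The building block common to HOG and the proposed HUG variant is a per-cell histogram of gradient orientations. A minimal sketch of that step (central-difference gradients, unsigned orientations, magnitude-weighted votes; the cell size and bin count are illustrative, and the HUG-specific uniformity handling is not reproduced here):

```python
import math

def orientation_histogram(img, bins=9):
    """Gradient-orientation histogram over one image cell: each interior
    pixel votes its gradient magnitude into one of `bins` unsigned
    orientation bins covering [0, 180) degrees."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            hist[int(ang / 180.0 * bins) % bins] += mag
    return hist
```

Concatenating and normalizing such cell histograms over a detection window yields the descriptor that the linear SVM classifies.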
Summarizing Data Sets for Data Mining by Using Statistical Methods in Coastal Engineering
Coastal regions are among the areas most heavily used by a growing population, while also being shaped by the natural balance. In coastal engineering, the most valuable data concern wave behavior, and the amount of such data becomes very large because observations run for hours, days, and months. In this study, statistical methods such as wave spectrum analysis and standard descriptive statistics have been used. The goal is to discover profiles of different coastal areas using these statistical methods and thus to obtain, from the big data, an instance-based data set suitable for analysis with data mining algorithms. In the experimental studies, six sample data sets on wave behavior, obtained from 20-minute observations in Mersin Bay, Turkey, were converted to an instance-based form, and different clustering techniques from data mining were used to discover similar coastal places. Moreover, this study discusses how this summarization approach can be used in other fields that collect big data, such as medicine.
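The summarization idea, reducing a long elevation record to a few instance attributes, can be sketched with two standard wave statistics. The particular attributes below (significant wave height estimated as four times the standard deviation, and mean zero-up-crossing period) are common textbook estimates, not necessarily the exact features used in this study:

```python
import math

def wave_summary(eta, dt):
    """Summarize a surface-elevation record eta (metres, sampled every dt
    seconds) into instance attributes: Hs ~ 4*std(eta) and the mean
    zero-up-crossing period Tz."""
    n = len(eta)
    mean = sum(eta) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in eta) / n)
    up_crossings = sum(1 for a, b in zip(eta, eta[1:]) if a < mean <= b)
    hs = 4.0 * std                                 # significant wave height
    tz = (n - 1) * dt / up_crossings if up_crossings else float("inf")
    return {"Hs": hs, "Tz": tz}
```

Each 20-minute record thus collapses to one row of attributes, and clustering then operates on these rows rather than on the raw time series.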
An Image Enhancement Method Based on Curvelet Transform for CBCT-Images
Image denoising plays an extremely important role in digital image processing, and curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) computed via the Unequally Spaced Fast Fourier Transform (USFFT). This transform returns a table of curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance the contrast of an image. Our proposed method first applies this two-dimensional transform to the input image and then thresholds the curvelet coefficients to enhance the CBCT images. Applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to existing ones in terms of Peak Signal-to-Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
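The thresholding step applied to the transform coefficients can be illustrated generically: whatever the transform (curvelet here), small-magnitude coefficients, which mostly carry noise, are suppressed before the inverse transform. The keep-ratio policy below is an illustrative choice, not the paper's actual thresholding rule:

```python
def hard_threshold(coeffs, keep_ratio=0.1):
    """Keep only the largest fraction of coefficients by magnitude and
    zero the rest, as done to transform coefficients (e.g. curvelet)
    before inverting the transform."""
    magnitudes = sorted((abs(c) for c in coeffs), reverse=True)
    k = max(1, int(len(magnitudes) * keep_ratio))
    cutoff = magnitudes[k - 1]
    return [c if abs(c) >= cutoff else 0.0 for c in coeffs]
```

In the full method, the kept coefficients may additionally be rescaled per scale and orientation to boost contrast rather than only denoise.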
Malware Detection in Mobile Devices by Analyzing Sequences of System Calls
With the increase in popularity of mobile devices,
new and varied forms of malware have emerged. Consequently,
cyberdefense organizations have stressed the need to deploy
more effective defensive schemes adapted to the challenges posed
by these new environments. In order to contribute to
their development, this paper presents a malware detection strategy
for mobile devices based on sequence alignment algorithms. Unlike
previous proposals, only the system calls performed during the
startup of applications are studied. In this way, it is possible to
efficiently study in depth the sequences of system calls executed
by applications just downloaded from app stores and initialized
in a secure and isolated environment. As demonstrated in the
experiments performed, most of the analyzed malicious activities
were successfully identified during their boot processes.
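A standard global sequence alignment score, here Needleman-Wunsch with illustrative match/mismatch/gap weights, can serve as the similarity measure between two system-call startup traces; the abstract does not specify which alignment algorithm or scoring scheme the authors use:

```python
def alignment_score(a, b, match=2, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score between two sequences of
    system-call names; higher means more similar startup behaviour.
    Uses a rolling row, so memory is O(len(b))."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        cur = [i * gap]
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur.append(max(diag, prev[j] + gap, cur[j - 1] + gap))
        prev = cur
    return prev[-1]
```

Scoring an unknown application's startup trace against traces of known malware families then flags the closest matches for deeper inspection.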
Benchmarking of Pentesting Tools
The benchmarking of tools for dynamic analysis of
vulnerabilities in web applications is something that is done
periodically, because these tools from time to time update their
knowledge base and search algorithms, in order to improve their
accuracy. Unfortunately, the vast majority of these evaluations are
made by software enthusiasts who publish their results on blogs
or on non-academic websites and always with the same evaluation
methodology. Similarly, most academics who have carried out this
type of analysis from a scientific standpoint apply the same
methodology as the empirical authors. This paper is driven by the
interest in answering questions that many users of these tools
have been asking for years, such as whether a tool truly tests and
evaluates every vulnerability it claims to, or whether it really
delivers a complete report of all the vulnerabilities tested and
exploited. Such questions have also motivated previous work, but
without real answers. The aim of this paper is to show results
that truly answer, at least for the tested tools, all those
unanswered questions. All the results have been obtained by
changing the common benchmarking model used in those earlier
evaluations.
Ontology for a Voice Transcription of OpenStreetMap Data: The Case of Space Apprehension by Visually Impaired Persons
In this paper, we present a vocal ontology of
OpenStreetMap data for the apprehension of space by visually
impaired people. Indeed, a produsage-based platform gives data
producers the freedom to choose the descriptors of geocoded
locations. Unfortunately, this freedom, also called folksonomy,
complicates subsequent data searches. We address this issue with
a simple but usable method to extract data from OSM databases in
order to send them to visually impaired people using Text To Speech
technology. We focus on how to help people with visual
disabilities plan their itinerary and comprehend a map by querying
a computer and getting information about the surrounding
environment in a mono-modal human-computer dialogue.
Secure Distance Bounding Protocol on Ultra-WideBand Based Mapping Code
Ultra-WideBand impulse radio (UWB-IR) physical layer technology has
seen great development during the last decade, which makes it a
promising candidate for short-range wireless communications, as it
brings considerable benefits in terms of connectivity and mobility.
However, like all wireless communications, it suffers from security
vulnerabilities because of the open nature of the radio channel. To
face these attacks, distance bounding protocols are the most popular
countermeasures. In this paper, we present a protocol based on
distance bounding to thwart the most popular attacks: distance fraud,
mafia fraud, and terrorist fraud. In our work, we study how
to adapt the best secure distance bounding protocols to the mapping
code of time-hopping ultra-wideband (TH-UWB) radios. Indeed, to improve
the security performance of the protocol in TH-UWB communication,
we combine the modified protocol with ultra-wideband
impulse radio technology (IR-UWB). The security and the various
merits of the protocols are analyzed.
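The verifier-side logic of the rapid-bit-exchange phase common to distance bounding protocols can be sketched as follows. This is a conceptual illustration only (the 10 m bound, scoring of responses, and nanosecond round-trip times are invented), not the protocol proposed above:

```python
def verify_distance(rtts_ns, responses, expected, max_distance_m=10.0):
    """Verifier check for the rapid bit exchange: every response bit must
    be correct AND every round-trip time must fit within the time light
    needs to cover twice the claimed distance bound."""
    c = 0.299792458                    # metres per nanosecond (speed of light)
    t_max = 2 * max_distance_m / c     # nanoseconds allowed per round trip
    return (len(responses) == len(expected)
            and all(r == e for r, e in zip(responses, expected))
            and all(t <= t_max for t in rtts_ns))
```

A distance-fraud attempt shows up as a round trip exceeding the bound, while mafia/terrorist fraud is constrained by the response bits, which depend on secrets exchanged in the earlier protocol phases.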
A Psychophysiological Evaluation of an Affective Recognition Technique Using Interactive Dynamic Virtual Environments
Recording psychological and physiological correlates of human performance within virtual environments and interpreting their impact on human engagement, ‘immersion’ and related emotional or ‘affective’ states is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR and heart rate of 30 male and female gamers, each exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows with 28 different timing lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were employed for the classification process. The classifiers categorised the psychophysiological database into four affective clusters (defined in a 3-dimensional space of valence, arousal and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found between classification based on affective clusters and classification based on emotion labels.
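The KNN stage of the pipeline is simple enough to sketch directly; the toy feature vectors and emotion labels below are invented for illustration, and the real system works on the 30 selected psychophysiological features:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbour vote: train is a list of (feature_vector, label)
    pairs; the query takes the majority label among its k closest
    neighbours in (squared) Euclidean distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda pair: sq_dist(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Cross-validation then repeats this prediction over held-out folds of the psychophysiological database to produce the accuracy figures reported above.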
Aggregation Scheduling Algorithms in Wireless Sensor Networks
In Wireless Sensor Networks which consist of tiny
wireless sensor nodes with limited battery power, one of the most
fundamental applications is data aggregation which collects nearby
environmental conditions and aggregates the data to a designated
destination, called a sink node. Important issues in data
aggregation are time efficiency and energy consumption, owing to
the nodes' limited energy, and therefore the related problem, named
Minimum Latency Aggregation Scheduling (MLAS), has been the
focus of many researchers. Its objective is to compute the minimum
latency schedule, that is, to compute a schedule with the minimum
number of timeslots, such that the sink node can receive the
aggregated data from all the other nodes without any collision or
interference. For the problem, the two interference models, the graph
model and the more realistic physical interference model known as
Signal-to-Interference-Noise-Ratio (SINR), have been adopted with
different power models, uniform-power and non-uniform power (with
power control or without power control), and different antenna
models, omni-directional antenna and directional antenna models.
In this survey article, as the problem has been proven NP-hard,
we present and compare several state-of-the-art approximation
algorithms in the various models on the basis of their latency.
Improving the Security of Internet of Things Using Encryption Algorithms
The Internet of Things (IoT) is an advanced information technology which has drawn society's attention. Sensors and actuators are commonly recognized as the smart devices of our environment. At the same time, IoT security raises new issues: Internet connectivity and the possibility of interacting with smart devices cause those devices to be more involved in human life. Therefore, safety is a fundamental requirement in designing the IoT. The IoT has three remarkable features: overall perception, reliable transmission, and intelligent processing. Because of the IoT's span, securing conveyed data is an essential factor for system security. Hybrid encryption is a model that can be used in the IoT: it provides strong security with low computation. In this paper, we propose a hybrid encryption algorithm designed to reduce safety risks, increase encryption speed, and lower computational complexity. The purpose of this hybrid algorithm is to provide information integrity, confidentiality, and non-repudiation in data exchange for the IoT. Finally, the suggested encryption algorithm has been simulated in MATLAB, and its speed and safety efficiency were evaluated in comparison with a conventional encryption algorithm.
The Correlation between Users’ Star Rating and Usability on Mobile Applications
Star ratings for mobile applications are a very useful way to differentiate between the best and worst rated applications. However, the question is whether the rating reflects the level of usability or not. The aim of this paper is to find out whether users' star ratings of mobile apps correlate with the usability of those apps. Thus, we tested three mobile apps with different star ratings: low, medium, and high. Fifteen mobile phone users participated in the study and were asked to perform one task in each of the three tested apps. After each task, the participant evaluated the app by answering a survey based on the System Usability Scale (SUS). The results show no major correlation between star rating and usability. However, the task completion time and the number of errors made while completing the task were significantly correlated with usability.
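The SUS scoring used in such a survey follows a fixed formula: on the ten 1-5 Likert items, odd-numbered items contribute (answer − 1) and even-numbered items contribute (5 − answer), and the summed contributions are multiplied by 2.5 to land on a 0-100 scale. A direct implementation:

```python
def sus_score(answers):
    """System Usability Scale score from the ten 1-5 Likert answers,
    given in questionnaire order (item 1 first)."""
    assert len(answers) == 10
    total = sum((a - 1) if i % 2 == 0 else (5 - a)   # odd items at even index
                for i, a in enumerate(answers))
    return total * 2.5
```

Averaging this score over the 15 participants per app gives the per-app usability figure that the star ratings were compared against.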
Design of Two-Channel Quincunx Quadrature Mirror Filter Banks Using Digital All-Pass Lattice Filters
This paper deals with the design of two-dimensional (2-D) recursive two-channel quincunx quadrature mirror filter (QQMF) banks. The analysis and synthesis filters of the 2-D recursive QQMF bank are composed of 2-D recursive digital allpass lattice filters (DALFs) with symmetric half-plane (SHP) support regions. Using the 2-D doubly complementary half-band (DC-HB) property possessed by the analysis and synthesis filters, we facilitate the design of the proposed QQMF bank. For finding the coefficients of the 2-D recursive SHP DALFs, we present a design structure based on these lattice filters. The novelty of using 2-D SHP recursive DALFs to construct a 2-D recursive QQMF bank is that the resulting filter bank provides better performance than existing 2-D recursive QQMF banks. Simulation results are presented for illustration and comparison.
Stackelberg Security Game for Optimizing Security of Federated Internet of Things Platform Instances
This paper presents an approach for optimal cyber security decisions to protect instances of a federated Internet of Things (IoT) platform in the cloud. The presented solution implements the repeated Stackelberg Security Game (SSG) and a model called Stochastic Human behaviour model with AttRactiveness and Probability weighting (SHARP). SHARP employs the Subjective Utility Quantal Response (SUQR) for formulating a subjective utility function, which is based on the evaluations of alternative solutions during decision-making. We augment the repeated SSG (including SHARP and SUQR) with a reinforced learning algorithm called Naïve Q-Learning. Naïve Q-Learning belongs to the category of active and model-free Machine Learning (ML) techniques in which the agent (either the defender or the attacker) attempts to find an optimal security solution. In this way, we combine game theory (GT) and ML algorithms for discovering optimal cyber security policies. The proposed security optimization components will be validated in a collaborative cloud platform that is based on the Industrial Internet Reference Architecture (IIRA) and its recently published security model.
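The model-free Q-learning step used by the agent follows the standard update rule Q(s,a) ← Q(s,a) + α(r + γ·max_a′ Q(s′,a′) − Q(s,a)). A minimal sketch (the state/action names, learning rate, and epsilon-greedy policy are illustrative, not taken from the paper's setup):

```python
import random

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One Q-learning step for, e.g., the defender agent:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def choose_action(q, state, actions, epsilon=0.1):
    """Epsilon-greedy policy: mostly exploit the best-known defence,
    occasionally explore another one."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))
```

In the game-theoretic setting above, the rewards would come from the SSG payoffs, with SHARP/SUQR shaping the attacker's simulated responses.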
Internet of Things Based Process Model for Smart Parking System
Transportation is an essential need for many people to get to work, school, and home. In many cities, the most common method is driving a car, which makes it easy to reach the destination and carry one's belongings in a reasonable time. However, finding a parking space can take a long time under the traditional system, which issues a paper ticket to each customer. The old system cannot guarantee a parking space for every customer, payment methods are not always available, and many customers struggle to find their car among a large number of cars. As a result, this research focuses on providing an online smart parking system in order to save time and money. The system provides flexible management for both the parking owner and the customers: it receives all requests via the online system and returns accurate results on all available parking spaces and their locations.
Analysis of Lightweight Register Hardware Threat
In this paper, we present a design methodology for a lightweight register transfer level (RTL) hardware threat implemented on a MAX II FPGA platform. The dynamic power consumed by the toggling of the various bits of registers, as well as the dynamic power consumed per unit of logic circuits, was analyzed. The hardware threat was designed to take advantage of the differences in dynamic power consumed per unit of logic circuits to hide the transferred information. The experimental results show that the register hardware threat was successfully implemented by using differing per-unit dynamic power consumption to hide the key information of a DES encryption module. More than 100,000 sample curves are needed to reduce the background noise by comparing the sample space when the time alignment requirement is completely met. In addition, an external trigger signal plays a very important role in detecting the hardware threat in this experiment.
A Review on Cloud Computing and Internet of Things
Cloud Computing is a convenient model for on-demand network access to shared pools of configurable virtual computing resources, such as servers, networks, storage devices, applications, etc. The cloud serves as an environment in which companies and organizations can use infrastructure resources without making any purchases, accessing such resources wherever and whenever they need. Cloud Computing helps overcome a number of problems in various Information Technology (IT) domains such as Geographical Information Systems (GIS), scientific research, e-Governance systems, Decision Support Systems, ERP, web application development, mobile technology, etc. Companies can use Cloud Computing services to store large amounts of data that can be accessed from anywhere on Earth at any time. Such services are rented by client companies, where the actual rent depends upon the amount of data stored on the cloud and the amount of processing power used in a given time period. The resources offered by cloud service companies are flexible in the sense that user companies can increase or decrease their storage or processing power requirements at any time, thus minimizing the overall rental cost of the service they receive. In addition, Cloud Computing service providers offer fast processors and application software that can be shared by their clients. This is especially important for small companies with limited budgets which cannot afford to purchase their own expensive hardware and software. This paper is an overview of Cloud Computing, describing its types, principles, advantages, and disadvantages. In addition, the paper gives some example engineering applications of Cloud Computing and makes suggestions for possible future applications in the field of engineering.
Enhanced Multi-Intensity Analysis in Multi-Scenery Classification-Based Macro and Micro Elements
Several computationally challenging issues are
encountered while classifying complex natural scenes. In this
paper, we address the problems that are encountered in rotation
invariance with multi-intensity analysis for multi-scene overlapping.
In the literature, various works have proposed techniques
for multi-intensity analysis, but these algorithms face several
restrictions when deployed for multi-scene overlapping
classification. In order to resolve the problem of multi-scene
overlapping classification, we present a framework based
on macro and micro basis functions. The algorithm minimizes
classification false alarms when categorizing multi-scene
overlap. Furthermore, a quadrangle multi-intensity decay is
invoked. Several parameters are used to analyze invariance
for multi-scene classification, such as rotation, classification,
correlation, contrast, homogeneity, and energy. Benchmark datasets
of complex natural scenes were collected and used to evaluate
the framework. The results show that the framework achieves
a significant improvement on gray-level co-occurrence matrix
(GLCM) features for overlap at diverse degrees of orientation when
categorizing multi-scene overlap.
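Several of the parameters listed (contrast, correlation, homogeneity, energy) are classical features of the gray-level co-occurrence matrix. A minimal sketch of the GLCM itself and one such feature, using an illustrative horizontal offset:

```python
from collections import defaultdict

def glcm(img, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset: how often gray
    level i co-occurs with gray level j at displacement (dx, dy),
    normalised to probabilities."""
    counts = defaultdict(int)
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                counts[(img[y][x], img[y2][x2])] += 1
                total += 1
    return {pair: n / total for pair, n in counts.items()}

def glcm_contrast(p):
    """Contrast feature: sum over (i, j) of (i - j)^2 * P(i, j)."""
    return sum((i - j) ** 2 * v for (i, j), v in p.items())
```

Orientation invariance is typically probed by computing such features over several offsets (0, 45, 90, 135 degrees) and comparing or pooling them.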
A Study of Recent Contribution on Simulation Tools for Network-on-Chip
The growth in the number of Intellectual Properties (IPs) or the number of cores on the same chip becomes a critical issue in System-on-Chip (SoC) due to the intra-communication problem between the chip elements. As a result, Network-on-Chip (NoC) has emerged as a system architecture to overcome intra-communication issues. This paper presents a study of recent contributions on simulation tools for NoC. Furthermore, an overview of NoC is covered as well as a comparison between some NoC simulators to help facilitate research in on-chip communication.
Forensic Speaker Verification in Noisy Environments by Enhancing the Speech Signal Using an ICA Approach
We propose a system robust to real environmental noise and
channel mismatch for forensic speaker verification. The
method is based on suppressing various types of real environmental
noise using an independent component analysis (ICA) algorithm.
The enhanced speech signal is applied to mel frequency cepstral
coefficients (MFCC) or MFCC feature warping to extract the
essential characteristics of the speech signal. Channel effects are
reduced using an intermediate vector (i-vector) and probabilistic
linear discriminant analysis (PLDA) approach for classification. The
proposed algorithm is evaluated by using an Australian forensic voice
comparison database, combined with car, street and home noises
from QUT-NOISE at a signal to noise ratio (SNR) ranging from -10
dB to 10 dB. Experimental results indicate that MFCC feature
warping with ICA achieves reductions in equal error rate of about
48.22%, 44.66%, and 50.07% over MFCC feature warping alone when the
test speech signals are corrupted with random sessions of street, car,
and home noise at -10 dB SNR.
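The equal error rate used as the evaluation metric here is the operating point where the false-acceptance and false-rejection rates coincide. A minimal threshold-sweep estimator over genuine and impostor score lists (the scores below are invented; real systems interpolate between thresholds rather than taking the closest observed point):

```python
def equal_error_rate(genuine, impostor):
    """Approximate EER: sweep the decision threshold over all observed
    scores (higher score = accept) and return (FAR + FRR) / 2 at the
    threshold where the two error rates are closest."""
    best_gap, best_eer = 2.0, None
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(g < t for g in genuine) / len(genuine)    # false rejections
        far = sum(i >= t for i in impostor) / len(impostor) # false acceptances
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer
```

The reported EER reductions compare this statistic with and without the ICA enhancement front-end.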
Sparse-View CT Reconstruction Based on Nonconvex L1 − L2 Regularizations
Reconstruction from sparse-view projections is one
of the important problems in computed tomography (CT), arising when
obtaining a large number of projections is unavailable or
infeasible. Traditionally, convex regularizers have been exploited
to improve the reconstruction quality in sparse-view CT, and the
convex constraint in those problems leads to an easy optimization
process. However, convex regularizers often result in a biased
approximation and inaccurate reconstruction in CT problems. Here,
we present a nonconvex, Lipschitz continuous and non-smooth
regularization model. The CT reconstruction is formulated as a
nonconvex constrained L1 − L2 minimization problem and solved
through a difference-of-convex algorithm and the alternating
direction method of multipliers, which generate better results than L0 or L1
regularizers in the CT reconstruction. We compare our method with
previously reported high performance methods which use convex
regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV)
on the test phantom images. The results show that there are benefits in
using the nonconvex regularizer in the sparse-view CT reconstruction.
Perceptions toward Adopting Virtual Reality as a Learning Aid in Information Technology
The field of education is an ever-evolving area constantly enriched by newly discovered techniques provided by active research in all areas of technology. Recent years have witnessed the introduction of a number of promising technologies and applications to enhance the teaching and learning experience. Virtual Reality (VR) applications are considered one of the evolving methods that have contributed to enhancing education in many fields. VR creates an artificial environment, using computer hardware and software, which is similar to the real world. This simulation improves the delivery of materials, facilitating the teaching process by providing a useful aid to instructors and enhancing the learning experience by providing a beneficial learning aid. In order to assess future utilization of such systems, students' perceptions of using VR as an educational tool were examined in the Faculty of Information Technology (IT) at The University of Jordan. A questionnaire was administered to IT undergraduates investigating students' opinions about the potential opportunities that VR technology could offer and its implications as a learning and teaching aid. The results confirmed the end users' willingness to adopt VR systems as a learning aid. The results of this research form a solid base for investing in a VR system for IT education.
A Neuro-Automata Decision Support System for the Control of Late Blight in Tomato Crops
The use of decision support systems in agriculture may help in monitoring large crop fields by automatically detecting the symptoms of foliage diseases. In this work, we designed and implemented a decision support system for small tomato producers. It investigates ways to recognize the late blight disease from the analysis of digital images of tomatoes, using a pair of multilayer perceptron neural networks. The networks' outputs are used to generate repainted tomato images in which the injuries on the plant are highlighted, and to calculate the damage level of each plant. Those levels are then used to construct a situation map of a farm, on which a cellular automaton simulates the outbreak's evolution over the fields. The simulator can test different pesticide actions, helping decide when to start spraying and in analyzing the losses and gains of each choice of action.
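One synchronous step of such an outbreak automaton can be sketched as a neighbourhood-spread rule on the farm grid. The 0/1 cell states and the two-infected-neighbours threshold are invented for illustration; the actual simulator's states and transition rules are not specified in the abstract:

```python
def spread_step(grid, threshold=2):
    """One synchronous update of a toy outbreak cellular automaton:
    a healthy cell (0) becomes infected (1) when at least `threshold`
    of its 8 neighbours are infected; infected cells stay infected."""
    h, w = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]           # update all cells at once
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 0:
                sick = sum(grid[ny][nx2]
                           for ny in range(max(0, y - 1), min(h, y + 2))
                           for nx2 in range(max(0, x - 1), min(w, x + 2))
                           if (ny, nx2) != (y, x))
                if sick >= threshold:
                    nxt[y][x] = 1
    return nxt
```

Iterating this step from the damage map produced by the neural networks yields the simulated evolution of the outbreak; pesticide actions would modify the rule or reset cells.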
Conceptualizing the Knowledge to Manage and Utilize Data Assets in the Context of Digitization: Case Studies of Multinational Industrial Enterprises
The trend of digitization significantly changes the role of data for enterprises. Data turn from an enabler into an intangible organizational asset that requires management and qualifies as a tradeable good. The idea of a networked economy has gained momentum in the data domain as collaborative approaches to data management emerge. Traditional organizational knowledge consequently needs to be extended by comprehensive knowledge about data. This knowledge is vital for organizations to ensure that data quality requirements are met and that data can be effectively utilized and sovereignly governed. As this specific knowledge has so far received little attention from academics, the aim of the research presented in this paper is to conceptualize it by proposing a “data knowledge model”. The relevant model entities have been identified using a design science research (DSR) approach that iteratively integrates insights from various industry case studies and literature research.