Mathematical, Computational, Physical, Electrical and Computer Engineering

Commenced in January 1999 | Frequency: Monthly | Edition: International | Paper Count: 12 |

12

1847

Modeling and Simulations of Complex Low-Dimensional Systems: Testing the Efficiency of Parallelization

The deterministic quantum transfer-matrix (QTM)
technique and its mathematical background are presented. This
important tool in computational physics can be applied to a class of
real low-dimensional magnetic systems described by the
Heisenberg Hamiltonian, which includes macroscopic molecular-based
spin chains, small-size magnetic clusters embedded in some
supramolecules and other interesting compounds. Using QTM, the
spin degrees of freedom are accurately taken into account, yielding
the thermodynamic functions at finite temperatures.
To test the susceptibility-calculation application in a parallel
environment, the speed-up and efficiency of parallelization are
analyzed on our SGI Origin 3800 platform with p = 128 processor
units. Using the Message Passing Interface (MPI) system libraries,
we find a code efficiency of 94% for p = 128, which makes our
application highly scalable.
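The speed-up and efficiency figures quoted above follow the standard definitions S(p) = T(1)/T(p) and E(p) = S(p)/p. A minimal sketch of this bookkeeping, with hypothetical timings rather than measurements from the SGI Origin 3800 runs:

```python
# Standard parallel-scaling metrics: S(p) = T(1)/T(p), E(p) = S(p)/p.
# The timings below are illustrative placeholders only.

def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is than the serial one."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Fraction of ideal linear scaling achieved on p processors."""
    return speedup(t_serial, t_parallel) / p

# A job taking 1280 s on one processor and 10.64 s on p = 128
# processors corresponds to roughly 94% efficiency.
print(round(efficiency(1280.0, 10.64, 128), 2))  # 0.94
```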

11

2734

A New Method for Computing the Inverse Ideal in a Coordinate Ring

In this paper we present an efficient method for inverting an ideal in the ideal class group of a C_ab curve by extending the method presented in [3]. More precisely, we introduce a useful generator of the inverse ideal as a K[X]-module.

10

3614

A Reconfigurable Processing Element for Cholesky Decomposition and Matrix Inversion

Fixed-point simulation results are used to measure the
performance of matrix inversion by Cholesky
decomposition. The fixed-point Cholesky decomposition algorithm
is implemented using a fixed-point reconfigurable processing
element. The reconfigurable processing element provides all
mathematical operations required by Cholesky decomposition. The
fixed-point word length analysis is based on simulations using
different condition numbers and different matrix sizes. Simulation
results show that a 16-bit word length gives sufficient performance
for small matrices with low condition numbers. Larger matrices and
higher condition numbers require more dynamic range for a
fixed-point implementation.
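The factorization at the core of the processing element can be sketched as follows. This is a plain floating-point sketch of standard Cholesky decomposition, not the paper's fixed-point implementation; a fixed-point version would quantize every intermediate value to the chosen word length.

```python
import math

def cholesky(A):
    """Return the lower-triangular L with A = L * L^T for a symmetric
    positive-definite matrix A. Floats are used here only to show the
    data flow; a fixed-point version would quantize each intermediate."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # square root on the diagonal
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # division off the diagonal
    return L

L = cholesky([[4.0, 2.0], [2.0, 3.0]])
print(L)  # [[2.0, 0.0], [1.0, 1.4142...]]
```

Note that the inner loop needs exactly the operations the abstract lists for the reconfigurable element: multiply-accumulate, division, and square root.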

9

4854

Enhancing K-Means Algorithm with Initial Cluster Centers Derived from Data Partitioning along the Data Axis with the Highest Variance

In this paper, we propose an algorithm to compute
initial cluster centers for K-means clustering. Data in a cell are
partitioned using a cutting plane that divides the cell into two
smaller cells. The plane is perpendicular to the data axis with the
highest variance and is designed to reduce the sum of squared errors
of the two cells as much as possible, while at the same time keeping
the two cells as far apart as possible. Cells are partitioned one at a
time until the number of cells equals the predefined number of
clusters, K. The centers of the K cells become the initial cluster
centers for K-means. The experimental results suggest that the
proposed algorithm is effective, converging to better clustering
results than those of the random initialization method. The research
also indicates that the proposed algorithm greatly improves the
likelihood of every cluster containing some data.
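The partitioning idea can be sketched as follows. This is a simplification for illustration: it splits the largest cell at a near-median cut on the highest-variance axis, whereas the paper places the cut to trade off SSE reduction against cell separation; all names are illustrative.

```python
def initial_centers(data, k):
    """Sketch of variance-axis initialization: repeatedly split the
    largest cell with a plane perpendicular to its highest-variance
    axis until k cells exist, then return the cell means as the
    initial centers for K-means."""

    def variance(cell, axis):
        vals = [p[axis] for p in cell]
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)

    def mean(cell):
        dims = len(cell[0])
        return tuple(sum(p[i] for p in cell) / len(cell) for i in range(dims))

    cells = [list(data)]
    while len(cells) < k:
        cells.sort(key=len, reverse=True)          # split the largest cell
        cell = cells.pop(0)
        axis = max(range(len(cell[0])), key=lambda a: variance(cell, a))
        cut = sorted(p[axis] for p in cell)[len(cell) // 2 - 1]  # near-median
        cells.append([p for p in cell if p[axis] <= cut])
        cells.append([p for p in cell if p[axis] > cut])
    return [mean(c) for c in cells]
```

For two well-separated groups, e.g. points near x = 0 and near x = 10, the first cut falls on the x axis and the two cell means land in the two groups.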

8

6907

An Alternative Proof for the NP-completeness of Top Right Access point-Minimum Length Corridor Problem

In the Top Right Access point Minimum Length Corridor (TRA-MLC) problem [1], a rectangular boundary partitioned into rectilinear polygons is given, and the problem is to find a corridor of least total length that includes the top right corner of the outer rectangular boundary. A corridor is a tree containing a set of line segments lying along the outer rectangular boundary and/or on the boundaries of the rectilinear polygons. The corridor must contain at least one point from the boundary of the outer rectangle and from the boundary of each rectilinear polygon. Gutierrez and Gonzalez [1] proved that the MLC problem, along with some of its restricted versions and variants, is NP-complete. In this paper, we give a shorter proof of the NP-completeness of TRA-MLC by finding a direct reduction.

7

6908

2D Fracture Analysis of the First Compression Piston Ring

The incidence of mechanical fracture in automobile
piston rings prompted the development of a fracture analysis
method for this case. The three rings (two compression rings and one
oil ring) were smashed into several parts during the power test (after
manufacturing the engine), damaging the piston and liner.
Radial and oblique cracking occurred on the failed piston rings.
The aim of the fracture mechanics simulations presented in this paper
was the calculation of the relevant effective fracture mechanics
parameters, such as J-integrals and stress intensity factors. Crack
propagation angles were calculated as well. A two-dimensional
fracture analysis of the first compression ring was developed in
this paper using the ABAQUS CAE6.5-1 software. Moreover, SEM
fractography was performed on the fracture surfaces and is discussed
in this paper. The results of the numerical calculations constitute the
basis for further research on the real object.

6

7311

Modified Fast and Exact Algorithm for Fast Haar Transform

The wavelet transform, or wavelet analysis, is a
recently developed tool in applied mathematics. In numerical
analysis, wavelets also serve as a Galerkin basis for solving partial
differential equations. The Haar transform, or Haar wavelet
transform, has been used as the simplest and earliest example of an
orthonormal wavelet transform. Owing to its popularity in wavelet
analysis, several definitions and various generalizations of, and
algorithms for, the Haar transform exist. The fast Haar transform
(FHT) is one of the algorithms that reduce the tedious calculations
involved in the Haar transform. In this paper, we present a modified
fast and exact algorithm for the FHT, namely the Modified Fast Haar
Transform (MFHT). The proposed procedure allows certain
calculations in the decomposition process to be skipped without
affecting the results.
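The pairwise averaging and differencing that the FHT performs can be sketched as follows. This is the standard unnormalized Haar decomposition, shown for context; it is not the paper's MFHT modification.

```python
def fht(signal):
    """Unnormalized fast Haar transform of a sequence whose length is a
    power of two: each pass replaces adjacent pairs by their average and
    half-difference, then recurses on the averages (halved length)."""
    a = list(signal)
    n = len(a)
    while n > 1:
        half = n // 2
        avg = [(a[2 * i] + a[2 * i + 1]) / 2 for i in range(half)]
        diff = [(a[2 * i] - a[2 * i + 1]) / 2 for i in range(half)]
        a[:n] = avg + diff
        n = half
    return a

print(fht([1, 2, 3, 4]))  # [2.5, -1.0, -0.5, -0.5]
```

The first entry is the overall average; the remaining entries are detail coefficients at successively finer scales.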

5

8959

Geometric Data Structures and Their Selected Applications

Finding the shortest path between two positions is a
fundamental problem in transportation, routing, and communications
applications. In robot motion planning, the robot should pass around
the obstacles touching none of them, i.e. the goal is to find a
collision-free path from a starting to a target position. This task has
many specific formulations depending on the shape of obstacles,
allowable directions of movements, knowledge of the scene, etc.
Research on path planning has yielded many fundamentally different
approaches to its solution, mainly based on various decomposition
and roadmap methods. In this paper, we show a possible use of
visibility graphs in point-to-point motion planning in the Euclidean
plane and an alternative approach using Voronoi diagrams that
decreases the probability of collisions with obstacles. The second
application area, investigated here, is focused on problems of finding
minimal networks connecting a set of given points in the plane using
either only straight connections between pairs of points (minimum
spanning tree) or allowing the addition of auxiliary points to the set
to obtain shorter spanning networks (minimum Steiner tree).
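The minimum spanning tree construction mentioned above can be sketched with Prim's algorithm on the complete Euclidean graph. This is an illustrative quadratic-time sketch, not the data structures discussed in the paper; a Steiner tree would additionally allow new auxiliary points and so can only be shorter.

```python
import math

def euclidean_mst(points):
    """Prim's algorithm on the complete Euclidean graph: grow the tree
    from point 0, always adding the cheapest edge that reaches a point
    not yet in the tree. Returns the tree edges as index pairs."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        u, v = min(
            ((u, v) for u in in_tree for v in range(n) if v not in in_tree),
            key=lambda e: math.dist(points[e[0]], points[e[1]]),
        )
        edges.append((u, v))
        in_tree.add(v)
    return edges
```

A spanning tree on n points always has n - 1 edges, and exchanging any tree edge for a cheaper crossing edge would contradict minimality, which is exactly the cut property Prim's algorithm exploits.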

4

12030

Optimizing Allocation of Two Dimensional Irregular Shapes using an Agent Based Approach

Packing problems arise in a wide variety of application
areas. The basic problem is that of determining an efficient arrangement
of different objects in a region without any overlap and
with minimal wasted gap between shapes. This paper presents a
novel population based approach for optimizing arrangement of irregular
shapes. In this approach, each shape is coded as an agent, and
the agents' reproduction and grouping policies result in arrangements
of the objects in positions with the least wasted area between
them. The approach is implemented in a sheet-cutting application,
and test results on several problems from the literature are presented.

3

13006

Connected Vertex Cover in 2-Connected Planar Graph with Maximum Degree 4 is NP-complete

This paper proves that the problem of finding a connected
vertex cover in a 2-connected planar graph (CVC-2) with maximum degree 4 is NP-complete. The motivation for proving this result is to
give a shorter and simpler proof of the NP-completeness of the TRA-MLC (Top Right Access point Minimum-Length Corridor) problem [1] by finding a reduction from CVC-2. TRA-MLC has many applications in laying optical fibre cables for data communication and electrical wiring in floor plans. The problem of finding a connected vertex cover in any planar graph (CVC) with maximum degree 4 is NP-complete [2]. We first show that CVC-2 belongs to NP, and then we find a polynomial reduction from CVC to CVC-2. Let a graph G0 and an integer K form an instance of CVC, where G0 is a planar graph and K is an upper bound on the size of the connected vertex cover in G0. We construct a 2-connected planar graph, say G, by identifying the blocks and cut vertices of G0 and then finding the planar representation of all the blocks of G0, leading to a plane graph G1. We replace the cut vertices with cycles in such a way that the resultant graph G is a 2-connected planar graph with maximum
degree 4. We consider L = K - 2t + 3 Σ_{i=1}^{t} d_i, where t is the number of cut vertices in G1 and d_i is the number of blocks for which the ith cut vertex is common. We prove that G has a connected vertex
cover of size at most L if and only if G0 has a connected vertex cover of size at most K.

2

13096

Model Discovery and Validation for the Qsar Problem using Association Rule Mining

There are several approaches to solving the
Quantitative Structure-Activity Relationship (QSAR) problem.
These approaches are based either on statistical methods or on
predictive data mining. Among the statistical methods, one should
consider regression analysis, pattern recognition (such as cluster
analysis, factor analysis, and principal components analysis) or
partial least squares. Predictive data mining techniques use either
neural networks, genetic programming, or neuro-fuzzy knowledge.
These approaches have little or no explanatory capability. This
paper attempts to establish a new approach to solving QSAR
problems using descriptive data mining. This way, the relationship
between the chemical properties and the activity of a substance
can be comprehensibly modeled.
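A minimal flavor of the descriptive (association-rule) approach is counting item pairs that co-occur frequently across transactions, which form the candidate base for rules. The item names below are hypothetical discretized descriptors and activity labels, not data from the paper.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Toy descriptive-mining step: return the item pairs that appear
    together in at least min_support transactions, with their counts."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Hypothetical encoding: each transaction is one compound's
# discretized descriptors plus its activity label.
tx = [{"ringA", "polar", "active"},
      {"ringA", "active"},
      {"polar", "inactive"}]
print(frequent_pairs(tx, 2))  # {('active', 'ringA'): 2}
```

A frequent pair such as ("active", "ringA") can then be read comprehensibly as a candidate rule "ringA → active", subject to a confidence check.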

1

13404

Creating Streamtubes Based on Mass Conservative Streamlines

Streamtubes are used to visualize the expansion,
contraction, and various other properties of a fluid flow. They are
useful in fluid mechanics, engineering, and geophysics. The
streamtube constructed in this paper reveals only the flow expansion
rate along a streamline. Based on mass conservative streamlines, we
show how to construct the streamtube.