Excellence in Research and Innovation for Humanity

International Science Index

Commenced in January 1999 | Frequency: Monthly | Edition: International | Abstract Count: 46035

Mathematical and Computational Sciences

992
80302
Machine Learning Methods for the Prediction of Claim Probability
Abstract:
Machine learning is a progressively growing field at the intersection of computer science and statistics that uses learning algorithms to construct relationships between an outcome and predictors. The probability of claim occurrence is one of the main components used to estimate the insurance risk premium. This probability is generally estimated using classical approaches such as logistic regression. The objective of this study is to use various machine learning techniques for the prediction of claim probability and to compare the predictive performances of the different methods on a health insurance data set from a Turkish insurance company. Classification trees, bagging, random forests, boosting and neural networks are used as alternatives to the classical logistic regression model. A two-year data set is used for the case study: one year for model fitting and the other for prediction. The predictive performances of the different methods are compared using statistical measures derived from confusion matrices, and the results are discussed.
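A minimal scikit-learn sketch of this kind of comparison is given below. It is illustrative only: the file names, the binary "claim" column and the model settings are hypothetical stand-ins, since the Turkish health-insurance data set described above is not available here.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

# Hypothetical files: year 1 is used for model fitting, year 2 for prediction.
train = pd.read_csv("claims_year1.csv")
test = pd.read_csv("claims_year2.csv")
X_tr, y_tr = train.drop(columns="claim"), train["claim"]
X_te, y_te = test.drop(columns="claim"), test["claim"]

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=5),
    "bagging": BaggingClassifier(n_estimators=200),
    "random forest": RandomForestClassifier(n_estimators=500),
    "boosting": GradientBoostingClassifier(),
    "neural net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)                                                # fit on year 1
    print(name, confusion_matrix(y_te, model.predict(X_te)), sep="\n")   # evaluate on year 2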
991
79650
Existence of Random Fixed Point Theorem for Contractive Mappings
Abstract:
Random fixed point theory has received much attention in recent years, and it is needed for the study of various classes of random equations. The study of random fixed point theorems was initiated by the Prague school of probabilists in the 1950s. The existence and uniqueness of fixed points for self-maps of a metric space, obtained by altering the distances between points with the use of a control function, is an interesting aspect of classical fixed point theory. This gave rise to a new category of fixed point problems for a single self-map with the help of a control function that alters the distance between two points in a metric space, called an altering distance function. In this paper, we prove existence and uniqueness results for a random common fixed point of a pair of random mappings under a weakly contractive condition with a generalized altering distance function in Polish spaces, using a random common fixed point theorem for generalized weakly contractive mappings.
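For orientation, a standard formulation from the classical literature (the generalized condition used in this paper may differ) is the following: a function ψ : [0, ∞) → [0, ∞) is an altering distance function if it is continuous, non-decreasing and ψ(t) = 0 if and only if t = 0, and a self-map T of a metric space (X, d) is weakly contractive if

\[ \psi\bigl(d(Tx,Ty)\bigr) \;\le\; \psi\bigl(d(x,y)\bigr) - \varphi\bigl(d(x,y)\bigr) \qquad \text{for all } x, y \in X, \]

where φ is a second altering distance function.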
990
79636
Comparative Study of Estimators of Population Means in Two Phase Sampling in the Presence of Non-Response
Abstract:
A comparative study of estimators of population means in two-phase sampling in the presence of non-response is made for the case in which the population means of the auxiliary variable(s) are unknown and information on the study variable y, as well as on the auxiliary variable(s), is incomplete. Three real data sets, on university students, hospitals and unemployment, are used to compare all the available two-phase sampling techniques in the presence of non-response with the newly proposed generalized ratio estimators.
989
79608
On Generalized Cumulative Past Inaccuracy Measure for Marginal and Conditional Lifetimes
Abstract:
Recently, the notion of the cumulative past inaccuracy (CPI) measure has been proposed in the literature as a generalization of cumulative past entropy (CPE) in both the univariate and the bivariate setup. In this paper, we introduce the notion of CPI of order α and study the proposed measure for conditionally specified models of two components that failed at different time instants, called the generalized conditional CPI (GCCPI). We provide some bounds using the usual stochastic order and investigate several properties of GCCPI. The effect of a monotone transformation on this proposed measure has also been examined. Furthermore, we characterize some bivariate distributions under the assumption of the conditional proportional reversed hazard rate model. Moreover, the role of GCCPI in reliability modeling has also been investigated for a real-life problem.
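As a point of reference, for a non-negative random variable X with distribution function F and a second variable Y with distribution function G, the cumulative past entropy and the cumulative past inaccuracy are commonly defined in the literature as

\[ \mathrm{CPE}(X) = -\int_0^\infty F(x)\,\log F(x)\,dx, \qquad \mathrm{CPI}(X,Y) = -\int_0^\infty F(x)\,\log G(x)\,dx ; \]

the order-α measure studied here generalizes the latter. These are the standard forms; the paper's exact notation may differ.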
988
79553
A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm is applied in a histogram-based search for the cluster means. The Firefly Algorithm is a stochastic global optimization technique based on the flashing characteristics of fireflies; in this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears solid and reliable even when applied to complex grayscale images. The validation has been performed using several standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK) and the Davies-Bouldin (DB) index. The results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a considerable reduction of the computational cost.
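A compact sketch of the Gaussian mixture stage is shown below. It is illustrative only: the image file is a placeholder, and the firefly-based search is replaced here by a crude histogram-peak pick to obtain the initial means, which is not the initialization actually used in the paper.

import numpy as np
from sklearn.mixture import GaussianMixture
from skimage import io

img = io.imread("image.png", as_gray=True)        # hypothetical grayscale image, values in [0, 1]
pixels = img.reshape(-1, 1)

# Stand-in for the firefly-based search: take K rough peaks of the histogram.
K = 3
hist, edges = np.histogram(pixels, bins=64)
centers = 0.5 * (edges[:-1] + edges[1:])
init_means = centers[np.argsort(hist)[-K:]].reshape(-1, 1)

# EM estimation of the Gaussian mixture, initialized at those means.
gmm = GaussianMixture(n_components=K, means_init=init_means).fit(pixels)

# Each pixel is assigned to the component of maximum posterior responsibility.
labels = gmm.predict(pixels).reshape(img.shape)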
987
79522
Numerical Solution of Two-Dimensional Solute Transport System Using Operational Matrices
Abstract:
In this study, the numerical solution of a two-dimensional solute transport system in a homogeneous porous medium of finite length is obtained. The considered transport system has terms accounting for advection, dispersion and first-order decay, with first-type boundary conditions. Initially, the aquifer is considered solute free, and a constant input concentration is imposed at the inlet boundary. The solution describes the solute concentration in the rectangular inflow region of the homogeneous porous medium. The numerical solution is derived using a powerful method, viz. the spectral collocation method. The numerical computations and graphical presentations show that the method is effective and reliable for solving the physical model with complicated boundary conditions, even in the presence of the reaction term.
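A transport model of the type described, written here in a generic form (the paper's coefficients and boundary data may differ), is

\[ \frac{\partial c}{\partial t} = D_x\,\frac{\partial^2 c}{\partial x^2} + D_y\,\frac{\partial^2 c}{\partial y^2} - u\,\frac{\partial c}{\partial x} - v\,\frac{\partial c}{\partial y} - \lambda c, \]

with c = 0 throughout the domain at t = 0 (solute-free aquifer) and a constant concentration c = c₀ prescribed on the inlet boundary (first-type, i.e. Dirichlet, condition).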
986
79521
Numerical Solution of Space Fractional Order Linear/Nonlinear Reaction-Advection Diffusion Equation Using Jacobi Polynomial
Abstract:
Fractional calculus plays an important role in the modelling of many physical problems and engineering processes, which are well described by fractional differential equations (FDEs). A reliable and efficient technique for solving such FDEs is therefore needed. In this article, a numerical solution of a class of fractional differential equations, namely space fractional order reaction-advection dispersion equations subject to initial and boundary conditions, is derived. In the proposed approach, shifted Jacobi polynomials are used to approximate the solutions, together with the shifted Jacobi operational matrix of fractional order and the spectral collocation method. The main advantage of this approach is that it converts such problems into systems of algebraic equations, which are easier to solve. The proposed approach is effective for both linear and non-linear FDEs. To show the reliability, validity and high accuracy of the proposed approach, numerical results for some illustrative examples are reported and compared with the analytical results already available in the literature. The error analysis for each case, exhibited through graphs and tables, confirms the exponential convergence rate of the proposed method.
985
79449
Weighted Rank Regression with Adaptive Penalty Function
Abstract:
The use of regularization in statistical methods has become popular. The least absolute shrinkage and selection operator (LASSO) framework has become the standard tool for sparse regression. However, it is well known that the LASSO is sensitive to outliers and leverage points. We consider a new robust estimator composed of a weighted loss function of the pairwise differences of residuals and an adaptive penalty function regulating the tuning parameter for each variable. Rank regression is resistant to regression outliers, but not to leverage points. By adopting a weighted loss function, the proposed method becomes robust to leverage points in the predictor variables. Furthermore, the adaptive penalty function yields good statistical properties in variable selection, such as the oracle property and consistency. We develop an efficient algorithm to compute the proposed estimator using basic functions in R, with the tuning parameter chosen by the Bayesian information criterion (BIC). Numerical simulation shows that the proposed estimator is effective for analyzing both real and contaminated data sets.
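Schematically, and only for orientation (the exact weights and penalty used in the paper may differ), such an estimator minimizes a weighted rank-type loss on pairwise residual differences plus an adaptive ℓ₁ penalty:

\[ \hat\beta = \arg\min_{\beta}\; \sum_{i<j} w_{ij}\,\bigl|e_i(\beta) - e_j(\beta)\bigr| \;+\; \lambda \sum_{k=1}^{p} \frac{|\beta_k|}{|\tilde\beta_k|}, \]

where e_i(β) = y_i − x_iᵀβ are the residuals, the weights w_{ij} downweight pairs involving leverage points, \tilde\beta is an initial consistent estimate, and λ is selected by BIC.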
984
79181
Bivariate Generalization of q-α-Bernstein Polynomials
Abstract:
We propose to define the q-analogue of the α-Bernstein Kantorovich operators and then introduce the q-bivariate generalization of these operators to study the approximation of functions of two variables. We obtain the rate of convergence of these bivariate operators by means of the total modulus of continuity, the partial moduli of continuity and Peetre's K-functional for continuous functions. Further, in order to study the approximation of functions of two variables in a space larger than the space of continuous functions, namely the Bögel space, the GBS (Generalized Boolean Sum) of the q-bivariate operators is considered, and the degree of approximation is discussed for Bögel continuous and Bögel differentiable functions with the aid of the Lipschitz class and the mixed modulus of smoothness.
983
79175
Durrmeyer Type Modification of q-Generalized-Bernstein Operators
Abstract:
The purpose of this paper is to introduce the Durrmeyer type modification of the q-generalized-Bernstein operators, which include the Bernstein polynomials in the particular case α = 0. We investigate the rate of convergence by means of the Lipschitz class and Peetre's K-functional. We also define the bivariate case of the Durrmeyer type modification of the q-generalized-Bernstein operators and study the degree of approximation with the aid of the partial moduli of continuity and Peetre's K-functional. Finally, we introduce the GBS (Generalized Boolean Sum) of the Durrmeyer type modification of the q-generalized-Bernstein operators and investigate the approximation of Bögel continuous and Bögel differentiable functions with the aid of the Lipschitz class and the mixed modulus of smoothness.
982
79173
Rings Characterized by Classes of Rad-plus-Supplemented Modules
Abstract:
In this paper, we introduce and give various properties of weak* Rad-plus-supplemented and cofinitely weak* Rad-plus-supplemented modules over some special kinds of rings, in particular artinian serial rings and semiperfect rings. We also prove that a ring R is artinian serial if and only if every right and left R-module is weak* Rad-plus-supplemented. We provide a counterexample which shows that weak* Rad-plus-supplemented modules are a generalization of plus-supplemented and Rad-plus-supplemented modules. Furthermore, as an application of the above results, we characterize semisimple rings, artinian principal ideal rings, semilocal rings, semiperfect rings, perfect rings, commutative noetherian rings and Dedekind domains in terms of weak* Rad-plus-supplemented modules.
981
78883
Asymptotic Expansion of the Korteweg-de Vries-Burgers Equation
Abstract:
It is common knowledge that many physical problems (such as non-linear shallow-water waves and wave motion in plasmas) can be described by the Korteweg-de Vries (KdV) equation, which possesses certain special solutions known as solitary waves or solitons. As a marriage of the KdV equation and the classical Burgers equation, the Korteweg-de Vries-Burgers (KdVB) equation is a mathematical model of waves on shallow water surfaces in the presence of viscous dissipation. Asymptotic analysis is a method of describing limiting behavior and is a key tool for exploring the differential equations which arise in the mathematical modeling of real-world phenomena. By using variable transformations, the asymptotic expansion of the KdVB equation is presented in this paper. The asymptotic expansion may provide a good gauge for the validation of the corresponding numerical scheme.
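For reference, one common form of the KdVB equation (sign and coefficient conventions vary across the literature) is

\[ u_t + u\,u_x - \nu\,u_{xx} + \mu\,u_{xxx} = 0, \]

which reduces to the KdV equation when the dissipation coefficient ν vanishes and to the Burgers equation when the dispersion coefficient μ vanishes.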
980
78854
Theoretical Analysis of the Exiting Sheet Thickness in the Calendering of Non-Newtonian Material Using Lubrication Approximation Theory
Abstract:
The mechanical process of smoothing and compressing a molten material by passing it through a number of pairs of heated rolls in order to produce a sheet of desired thickness is called calendering. The rolls used in combination are called calenders, a term derived from kylindros, the Greek word for cylinder. It is, in effect, the finishing process used on cloth, paper, textiles, leather, or plastic film, and so on. It is a mechanism used to strengthen surface properties, minimize sheet thickness, and yield special effects such as a glaze or polish. It has a wide variety of industrial applications in the manufacturing of textile fabrics, coated fabrics, and plastic sheeting, where it provides the desired surface finish and texture. An analysis is presented for the calendering of a pseudoplastic material. The lubrication approximation theory (LAT) has been used to simplify the equations of motion. To investigate the nature of the steady solutions that exist, we use a combination of exact solutions and numerical methods. The expressions for the velocity profile, the volumetric flow rate and the pressure gradient are found in the form of exact solutions. Furthermore, the quantities of interest from an engineering point of view, such as the pressure distribution, the roll-separating force, and the power transmitted to the fluid by the rolls, are also computed. Some results are shown graphically while others are given in tabulated form. It is found that the non-Newtonian parameter and the Reynolds number serve as the controlling parameters for the calendering process.
979
78845
Approximate Solution for Nonlinear Riccati Differential Equation Using the Non Perturbation Method
Abstract:
The Riccati equation is widely used in designing and analyzing linear and nonlinear optimal control processes. In this paper, we present an analytical solution for the quadratic Riccati differential equation by He's Variational Iteration Method. This method is relatively proficient and suitable for such problems compared to classical methods. First, a correction functional is constructed with the help of a general Lagrange multiplier on the basis of variational theory, and then the solution of the Riccati equation is found without any unphysical restrictive assumptions. A comparison of the approximate solution with the exact solution is also given. An absolute error of 3.28×10⁻⁶ to 8.74×10⁻⁸ is obtained by the proposed solution. Our results show that the proposed method is very efficient and simple compared to existing classical methods.
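For a quadratic Riccati problem written generically as y′(t) = a(t) + b(t)y + c(t)y² with a given initial value (the specific coefficients of the paper may differ), the Variational Iteration Method builds the correction functional

\[ y_{n+1}(t) = y_n(t) + \int_0^t \lambda(s)\Bigl[\,y_n'(s) - a(s) - b(s)\,y_n(s) - c(s)\,y_n^2(s)\Bigr]\,ds, \]

where the general Lagrange multiplier λ(s) is identified via variational theory (for a first-order equation it reduces to λ = −1), and successive approximations y₁, y₂, … are generated from an initial guess.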
978
78773
Estimation of Population Mean under Random Non-Response in Two-Phase Successive Sampling
Abstract:
In this paper, we consider the problem of estimating the population mean on the current (second) occasion in the presence of random non-response in two-occasion successive sampling under a two-phase set-up. Modified exponential type estimators are proposed, and their properties are studied under the assumption that the number of sampling units follows a distribution arising from random non-response. The performances of the proposed estimators are compared with linear combinations of two estimators: (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample under complete response. Results are demonstrated through empirical studies which show the effectiveness of the proposed estimators. Suitable recommendations are made to survey practitioners.
977
78768
Bulk Viscous Bianchi Type V Cosmological Model with Time Dependent Gravitational Constant and Cosmological Constant in General Relativity
Abstract:
In this paper, we investigate a bulk viscous Bianchi Type V cosmological model with a time-dependent gravitational constant and cosmological constant in general relativity, assuming ξ(t) = ξ₀p^m, where ξ₀ and m are constants. We also assume a variation law for the Hubble parameter, H(R) = a(R^(-n) + 1), where a > 0 and n > 1 are constants. Two universe models are obtained, and their physical behavior is discussed. For n = 1 the universe starts from a singular state, whereas for n = 0 the cosmology follows a non-singular state. The presence of bulk viscosity increases the value of the matter density.
976
78712
Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers
Abstract:
Trinary affinity is a description of existence: every object exists as it is known and spoken of, in a system of 2 differences (denoted dif₁, dif₂) and 1 similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). These are mathematically verified and illustrated in this paper by the arrangement of all integers into 3 columns, where each number exists as a difference in relation to another number as another difference, with the 2 difs arbitrated by a third number as the Sim, resulting in a trinary affinity or trinity of 3 numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and tkn are measured and specified. Consequently, any number is horizontally specified either as 3n, or as '3n – 1' or '3n + 1', and vertically as 'Cn + c', so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted to the mathematical zero, 0m), which corresponds to the constant c, and the nature of which separates the composite and prime numbers; and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can possibly occur at its position, by dividing it by factors provided by the composite number formulas. The paper consists of 3 parts: 1) a brief explanation of the trinary affinity of things, 2) the 8 formulas that represent ALL the primes, and 3) families of composite numbers, each represented by a formula. A composite number family is described as 3n + f₁‧f₂. Since there are infinitely many composite number families, to verify the primality of a large probable prime we have to divide it by several or many an f₁ from a range of composite number formulas, a procedure that is laborious but is the surest way of verifying a large number's primality. (Thus, it is possible to substitute planned division for trial division.)
975
78651
Static and Dynamical Analysis on Clutch Discs on Different Material and Geometries
Abstract:
This paper presents the static and cyclic stresses, in combination with fatigue analysis, resulting from the loads applied to the friction discs usually utilized in industrial clutches. The material chosen to simulate the friction discs under load is aluminum. The numerical simulation was carried out with the software COMSOL Multiphysics. The results obtained for static loads showed sufficient stiffness for both geometries and the material utilized. On the other hand, from the fatigue standpoint, failure is clearly verified, which demonstrates the importance of both approaches, particularly the dynamical analysis. The results and the conclusions are based on the stresses on the disc, the counted stress cycles and the fatigue usage factor.
974
78455
On Chvátal's Conjecture for the Hamiltonicity of 1-Tough Graphs and Their Complements
Abstract:
Graph toughness and the associated cycle structure have attracted much attention and given rise to extensive work since Chvátal introduced the concept in 1973. Among the seven conjectures posed there, far fewer results have been published for the one relating the existence of a Hamiltonian cycle in any 1-tough graph to its complement graph. In this paper, we show that the conjecture does not hold in general. More precisely, it is true only for graphs with six or seven vertices and is false for graphs with eight or more vertices. A new theorem is derived as a correction of the conjecture.
973
78434
Theorem on the Inconsistency of Classical Logic
Abstract:
This abstract concerns an extremely fundamental issue: the fundamental problem of science is the issue of consistency. In this abstract, we present a theorem saying that the classical calculus of quantifiers is inconsistent in the traditional sense. At the beginning, we introduce some notation, and later we recall the definition of consistency in the traditional sense. S1 is the set of all well-formed formulas in the calculus of quantifiers. RS1 denotes the set of all rules over the set S1. Cn(R, X) is the set of all formulas standardly provable from X by the rules R, where R is a subset of RS1 and X is a subset of S1. The couple < R, X > is called a system whenever R is a subset of RS1 and X is a subset of S1. Definition: the system < R, X > is consistent in the traditional sense if there does not exist any formula from the set S1 such that this formula and its negation are both provable from X by using rules from R. Finally, < R0+, L2 > denotes the classical calculus of quantifiers, where R0+ consists of Modus Ponens and the generalization rule, and L2 is the set of all formulas valid in the classical calculus of quantifiers. The main result: the system < R0+, L2 > is inconsistent in the traditional sense.
972
78319
Boundary Condition with the Riemann-Liouville Fractional Time Derivative at a Thin Membrane for Normal Diffusion
Abstract:
In many physical models concerning various diffusion processes, fractional derivatives are involved in the diffusion equations. However, in many cases the interpretation of the equations is vague, and an experimental verification of fractional models is also a problem in practice. In our contribution, we study normal diffusion in a system with a thin membrane. We show a method of deriving boundary conditions at the membrane directly from experimental data. One of the conditions contains the Riemann-Liouville time fractional derivative of order 1/2. Such a boundary condition is rather unexpected since, as far as we know, there is no need to involve a fractional time derivative in a normal diffusion model. We analyze the influence of the fractional derivative occurring in the boundary condition on the diffusion process. The physical interpretation of the boundary condition is given as well.
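For reference, the Riemann-Liouville fractional time derivative of order 1/2 appearing in that boundary condition is defined, for sufficiently regular f, by

\[ \frac{\partial^{1/2} f(t)}{\partial t^{1/2}} = \frac{1}{\Gamma(1/2)}\,\frac{d}{dt}\int_0^t \frac{f(s)}{\sqrt{t-s}}\,ds . \]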
971
78296
Total Controllability of the Second Order Nonlinear Differential Equation with Delay and Non-Instantaneous Impulses
Abstract:
A stronger concept of exact controllability, called total controllability, is introduced in this manuscript. Sufficient conditions are established for the total controllability of a control problem governed by a second order nonlinear differential equation with delay and non-instantaneous impulses in a Banach space X. The results are obtained using the strongly continuous cosine family and the Banach fixed point theorem. The total controllability of an integrodifferential problem is also investigated. Finally, some numerical examples are provided to illustrate the analytical findings.
970
78110
One-Step Time Series Predictions with Recurrent Neural Networks
Abstract:
Time series prediction problems have many important practical applications, but are notoriously difficult for statistical modeling. Recently, machine learning methods have attracted significant interest as practical tools applied to a variety of problems, even though developments in this field tend to be semi-empirical. This paper explores the application of Long Short-Term Memory (LSTM) based Recurrent Neural Networks to the one-step prediction of time series for both trend and stochastic components. Two types of data are analyzed: daily stock prices, which are often considered a typical example of a random walk, and weather patterns dominated by seasonal variations. Results from both analyses are compared, and a reinforcement learning framework is used to select the more efficient of Recurrent Neural Networks and more traditional autoregression methods. It is shown that both methods are able to follow long-term trends and seasonal variations closely, but have difficulties reproducing day-to-day variability. Future research directions and potential real-world applications are briefly discussed.
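A minimal Keras sketch of one-step-ahead prediction with an LSTM follows. The synthetic series, window length and layer sizes are illustrative stand-ins and not the configuration used in the paper.

import numpy as np
import tensorflow as tf

series = np.sin(np.linspace(0, 40, 1200)) + 0.1 * np.random.randn(1200)  # stand-in series

window = 20                                  # number of past values fed to the network
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]                          # one-step-ahead targets
X = X[..., np.newaxis]                       # shape (samples, window, 1)

split = int(0.8 * len(X))
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=20, batch_size=32, verbose=0)

one_step_pred = model.predict(X[split:])     # one-step predictions on held-out data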
969
78026
Curve Fitting by Cubic Bezier Curves Using Migrating Birds Optimization Algorithm
Abstract:
A new metaheuristic optimization algorithm called Migrating Birds Optimization is used for curve fitting by rational cubic Bezier curves. This requires solving a complicated multivariate optimization problem. In this study, the solution of this optimization problem is achieved with the Migrating Birds Optimization algorithm, a powerful nature-inspired metaheuristic well suited to optimization. The results of this study show that the proposed method performs very well and is able to fit the data points to cubic Bezier curves with a high degree of accuracy.
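The underlying optimization problem can be stated in a few lines: given data points, the fitting error to a cubic Bezier curve is the quantity that a metaheuristic such as Migrating Birds Optimization minimizes over the control points. The sketch below is illustrative only (rational weights omitted, simple chord-length parameterization assumed) and is not the authors' implementation.

import numpy as np

def cubic_bezier(P, t):
    # P: (4, 2) array of control points; t: parameter values in [0, 1].
    t = t[:, None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])

def fitting_error(P, data):
    # Sum of squared distances between the data points and the curve,
    # with the data parameterized by normalized chord length.
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(data, axis=0), axis=1))]
    t = d / d[-1]
    return np.sum((cubic_bezier(P, t) - data) ** 2)

data = np.array([[0, 0], [1, 0.8], [2, 1.1], [3, 0.9], [4, 0]])
P = np.array([[0, 0], [1.3, 1.5], [2.7, 1.5], [4, 0]])     # a candidate set of control points
print(fitting_error(P, data))    # the objective a metaheuristic would minimize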
968
77919
Generalized Rough Sets Applied to Graphs Related to Urban Problems
Abstract:
As a branch of modern mathematics, graph theory provides instruments for optimization and for solving practical applications in various fields such as economic networks, engineering, network optimization, the geometry of social action and, generally, complex systems, including contemporary urban problems (path or transport efficiency, biourbanism, etc.). In this paper, the interconnection of an urban network is studied, which leads to the problem of simulating a digraph by another digraph. The simulation may be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the digraph that provides the simulation, including an alternative evaluation of k-connectivity. The rough set approach to (bi)digraphs, proposed here for the first time, contributes to significantly improving the evaluation of k-connectivity. This rough set approach is based on generalized rough sets, whose basic facts are presented in this paper.
967
77853
On the Bootstrap P-Value Method in Identifying out of Control Signals in Multivariate Control Chart
Abstract:
In any production process, every product is aimed at attaining a certain standard, but the presence of assignable causes of variability affects the process, leading to low product quality. The ability to identify and remove this type of variability reduces its overall effect, thereby improving the quality of the product. When a univariate control chart signals, it is easy to detect the problem and provide a solution since it is related to a single quality characteristic. However, the problems involved in the use of multivariate control charts are the violation of the multivariate normality assumption and the difficulty of identifying the quality characteristic(s) that produced the out-of-control signals. The purpose of this paper is to examine the use of a non-parametric control chart (the bootstrap approach) for obtaining a control limit, to overcome the problem of the multivariate distributional assumption, together with a p-value method for detecting out-of-control signals. Results from a performance study show that the proposed bootstrap method enables the setting of a control limit that enhances the detection of out-of-control signals, while the p-value method also performs well in identifying the out-of-control variables.
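A sketch of the bootstrap idea for the control limit and the p-value follows. It is illustrative only, with simulated stand-in data; the paper's exact resampling scheme and statistic may differ.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # stand-in for in-control (Phase I) data
mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

def t2(x):
    d = x - mean
    return float(d @ cov_inv @ d)          # Hotelling-type T^2 statistic

# Bootstrap: resample observations from the Phase I data with replacement,
# compute the statistic for each, and take a high percentile as the control limit.
B = 2000
boot_t2 = np.array([t2(X[rng.integers(len(X))]) for _ in range(B)])
control_limit = np.quantile(boot_t2, 0.99)

new_obs = np.array([2.5, -2.0, 1.8])               # a new multivariate observation
p_value = np.mean(boot_t2 >= t2(new_obs))          # bootstrap p-value for the signal
print(control_limit, t2(new_obs), p_value)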
966
77683
Bayesian Flexibility Modelling of the Conditional Autoregressive Prior in a Disease Mapping Model
Abstract:
The basic model usually used in disease mapping is the Besag, York and Mollie (BYM) model, which combines spatially structured and spatially unstructured priors as random effects. The Bayesian Conditional Autoregressive (CAR) model is commonly used in disease mapping for smoothing the relative risk of a disease, as in the BYM model. The CAR model, which is usually assigned as a prior to one of the spatial random effects in the BYM model, successfully uses information from adjacent sites to improve the estimates for individual sites. To our knowledge, the CAR prior for the spatial random effects has some unrealistic or counter-intuitive consequences for the posterior covariance matrix. In the conventional BYM model, the spatially structured and the unstructured random components cannot be identified independently, which complicates the prior definitions for the hyperparameters of the two random effects. Therefore, the main objective of this study is to construct and utilize an extended Bayesian spatial CAR model for studying tuberculosis patterns in the Eastern Cape Province of South Africa, and then to compare its flexibility with some existing CAR models. The results of the study revealed the flexibility and robustness of this alternative extended CAR model in comparison with the commonly used CAR models, using the deviance information criterion. The extended Bayesian spatial CAR model proves to be a useful and robust tool for disease modeling and as a prior for the structured spatial random effects, because of the inclusion of an extra hyperparameter.
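For reference, the conventional BYM formulation for area-level disease counts (written here generically; the extended model of the paper adds further structure) is

\[ y_i \sim \mathrm{Poisson}(E_i\,\theta_i), \qquad \log\theta_i = \alpha + u_i + v_i, \qquad v_i \sim N(0, \sigma_v^2), \qquad u_i \mid u_{-i} \sim N\!\Bigl(\tfrac{1}{n_i}\textstyle\sum_{j\sim i} u_j,\; \tfrac{\sigma_u^2}{n_i}\Bigr), \]

where E_i is the expected count in area i, u is the spatially structured (intrinsic CAR) effect defined through the neighbourhood relation j ∼ i with n_i neighbours, and v is the unstructured effect.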
965
77631
Using Convergent and Divergent Thinking in Creative Problem Solving in Mathematics
Abstract:
This paper aims to find out how students use convergent and divergent thinking in creative problem solving to solve mathematical problems creatively. Eight engineering undergraduates at a local university took part in this study. They were divided into two groups and solved the mathematical problems with the use of creative problem solving skills. Their solutions were collected and analyzed to reveal all the stages of problem solving, namely problem definition, idea generation, idea evaluation, idea judgment, and solution implementation. The results showed that the students were able to solve the mathematical problem with the use of creative problem solving skills.
964
77566
Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs
Abstract:
Chord diagrams occur in mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation to study chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base-pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram it is possible to associate an intersection graph. It is the graph whose vertices correspond to the chords of the diagram, and whose edges represent the chord intersections. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that makes it possible to model LCDs in terms of the relations among chords. This set is composed of crossing, nesting, and concatenation. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows one to associate a unique algebraic term to each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than the existing ones. Such an LCD equivalence class could also be useful to obtain a more accurate estimate of the link between the crossing number and the topological genus, and to study the relations among other invariants.
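The intersection graph itself is straightforward to build from the chord endpoints. The following illustrative sketch (not the authors' grammar-based representation) marks two chords of a linear chord diagram as adjacent exactly when their endpoints interleave along the backbone.

from itertools import combinations

def intersection_graph(chords):
    # chords: list of (a, b) endpoint positions on the backbone, with a < b.
    # Chords i and j are adjacent iff their endpoints interleave (the chords cross).
    edges = set()
    for (i, (a, b)), (j, (c, d)) in combinations(enumerate(chords), 2):
        if a < c < b < d or c < a < d < b:
            edges.add((i, j))
    return edges

# Example: chord 0 crosses chords 1 and 2, while chord 2 is nested inside chord 1.
print(intersection_graph([(1, 4), (2, 6), (3, 5)]))   # -> {(0, 1), (0, 2)}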
963
77503
Study and Analysis of a Susceptible Infective Susceptible Mathematical Model with Density Dependent Migration
Abstract:
In this paper, a susceptible-infective-susceptible mathematical model is proposed and analyzed in which the migration of the human population is given by a migration function. It is assumed that the disease is transmitted by direct contact of the susceptible and infective populations with a constant contact rate. The equilibria and their stability are studied by using the stability theory of ordinary differential equations and computer simulation. The model analysis shows that the spread of the infectious disease increases when immigration into the habitat increases, but decreases if emigration increases.
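A generic SIS system of the kind analysed, with the migration term and parameter names used here purely as placeholders for the paper's specific choices, is

\[ \frac{dS}{dt} = A + m(N) - \beta S I + \gamma I - dS, \qquad \frac{dI}{dt} = \beta S I - (\gamma + d) I, \qquad N = S + I, \]

where β is the constant contact rate, γ the rate at which infectives return to the susceptible class, d the natural death rate, A the recruitment rate, and m(N) a density-dependent migration function.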