Excellence in Research and Innovation for Humanity

International Science Index

Commenced in January 1999 Frequency: Monthly Edition: International Abstract Count: 46035

Industrial and Manufacturing Engineering

Continuous Improvement as an Organizational Capability in the Industry 4.0 Era
Continuous improvement is increasingly becoming a prerequisite for manufacturing companies to remain competitive in a global market. In addition, future survival and success will depend on the ability to manage the forthcoming digital transformation in the Industry 4.0 era. Industry 4.0 promises substantially increased operational effectiveness, where all equipment is equipped with integrated processing and communication capabilities. Subsequently, the interplay of human and technology will evolve and influence the range of worker tasks and demands. Taking these changes into account, the concept of continuous improvement must evolve accordingly. Based on a case study from the manufacturing industry, the purpose of this paper is to point out what the concept of continuous improvement will encounter, and what it has to take into consideration, when entering the fourth industrial revolution. In the past, continuous improvement focused on a culture of sustained improvement targeting the elimination of waste in all systems and processes of an organization by involving everyone. Today, it has to evolve with the forthcoming digital transformation and the increased interplay of human and digital communication systems in order to reach its full potential. One main finding of this study is how digital communication systems will act as an enabler to strengthen the continuous improvement process, by moving from collaboration within individual teams to the interconnection of teams along the product value chain. For academics and practitioners, this will help them identify and prioritize their steps towards an Industry 4.0 implementation integrated with a focus on continuous improvement.
Concept of a Flexible Battery Cell Assembling Line for Low to Medium Production Volumes
The competitiveness and success of new electrical energy storage devices such as battery cells depend significantly on a short time-to-market. Producers who decide to supply new battery cells to the market need to be able to adapt their manufacturing easily to early customers’ needs in terms of cell size, materials, delivery time, and quantity. In the initial state, the required output rates allow the producers neither to run a fully automated manufacturing line nor to supply handmade battery cells. Until now, there has been no solution for manufacturing battery cells in low to medium volumes in a reproducible way. Therefore, a concept for the flexible assembly of battery cells, in terms of cell format and output quantity, was developed by the Fraunhofer Institute for Manufacturing Engineering and Automation. Based on clustered processes, the modular system platform can be modified, enlarged, or retrofitted in a short time frame according to the ordered product. The paper shows the analysis of the production steps of a conventional battery cell assembly line. Process solutions were found by using I/O analysis, functional structures, and morphological boxes. The identified elementary functions were subsequently clustered by functional coherence into automation solutions, thus generating the individual process clusters. The result presented in this paper makes it possible to manufacture different cell products on the same production system using seven process clusters. The paper shows the solution for a batch-wise flexible battery cell production using advanced process control. Furthermore, the tests performed and the benefits of using the process clusters as cyber-physical systems for an integrated production and value chain are discussed. The solution lowers the hurdles for SMEs to launch innovative cell products on the global market.
A Petri Net Model to Obtain the Throughput of Unreliable Production Lines in the Buffer Allocation Problem
A production line designer faces several challenges in manufacturing system design. One of them is the assignment of buffer slots between the machines of the production line in order to maximize the throughput of the whole line, which is known as the Buffer Allocation Problem (BAP). The BAP is a combinatorial problem whose size depends on the number of machines and the total number of slots to be distributed along the production line. In this paper, we propose a Petri Net (PN) model to obtain the throughput of unreliable production lines, based on PN mathematical tools and the decomposition method. The results obtained by this methodology are similar to those presented in previous works, and the number of machines is not a hard restriction.
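To make the throughput question concrete, the following is a minimal sketch, not the paper's Petri-net/decomposition method: a crude discrete-time Monte Carlo stand-in for an unreliable two-machine line with one intermediate buffer, where failure and repair probabilities and the buffer size are hypothetical numbers chosen only for illustration.

```python
import random

def line_throughput(p_fail, p_repair, buffer_size, steps=200_000, seed=1):
    """Estimate throughput (parts/step) of a two-machine line with one
    intermediate buffer. Machines fail/repair geometrically each step;
    this is an illustrative stand-in, not the PN decomposition itself."""
    rng = random.Random(seed)
    up = [True, True]            # machine up/down states
    buf, done = 0, 0
    for _ in range(steps):
        for m in (0, 1):         # failure/repair transitions
            if up[m] and rng.random() < p_fail:
                up[m] = False
            elif not up[m] and rng.random() < p_repair:
                up[m] = True
        if up[0] and buf < buffer_size:   # M1 feeds the buffer (blocked if full)
            buf += 1
        if up[1] and buf > 0:             # M2 drains it (starved if empty)
            buf -= 1
            done += 1
    return done / steps

# A larger buffer decouples the machines, so throughput rises.
low = line_throughput(0.05, 0.5, buffer_size=1)
high = line_throughput(0.05, 0.5, buffer_size=10)
```

Even this toy model reproduces the qualitative effect the BAP exploits: throughput grows with buffer size but saturates, which is why distributing a fixed total number of slots is a non-trivial combinatorial choice.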
Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea
During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, losses or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people to be saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered (‘overlooked') by the AMR’s sensors even though the AMR is in a close neighborhood of it, and (ii) a 'false-positive' detection error, also known as a ‘false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding locally optimal strategies.
A specific feature of the considered operations research problem, in comparison with the traditional Kadane-DeGroot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting the next location. We provide a fast approximation algorithm for finding the AMR route, adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and then searches the location with the highest effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
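The greedy, history-dependent strategy described above can be sketched as follows. This is an illustrative simplification, not the authors' algorithm: prior probabilities, per-look detection probabilities (the false-negative channel), and costs are all hypothetical, and the "history" enters through a Bayesian update of the target-location posterior after each unsuccessful look.

```python
def greedy_search_plan(prior, detect, cost, budget):
    """Myopic greedy search: at each step visit the location with the
    highest effectiveness p*d/c, then Bayes-update all posteriors after
    the unsuccessful look, so earlier history shapes later choices."""
    p = list(prior)
    plan, spent = [], 0.0
    while True:
        best = max(range(len(p)), key=lambda i: p[i] * detect[i] / cost[i])
        if spent + cost[best] > budget:
            break
        plan.append(best)
        spent += cost[best]
        # Unsuccessful look at `best`: condition on "no detection there".
        miss = 1 - detect[best] * p[best]          # P(no detection this look)
        p = [(pi * (1 - detect[best]) if i == best else pi) / miss
             for i, pi in enumerate(p)]
    return plan

prior = [0.5, 0.3, 0.2]      # hypothetical prior target probabilities
detect = [0.8, 0.9, 0.6]     # per-look detection probabilities
cost = [1.0, 1.0, 2.0]       # travel/search cost per look
plan = greedy_search_plan(prior, detect, cost, budget=4.0)
```

Note how the plan revisits location 0: after an overlooked target is possible (false negatives), the posterior there shrinks but never vanishes, so a location can regain the highest effectiveness later in the history.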
Application of Subversion Analysis in the Search for the Causes of Cracking in a Marine Engine Injector Nozzle
Subversion analysis is a tool used in the TRIZ (Theory of Inventive Problem Solving) methodology. This article introduces the history and describes the process of subversion analysis, as well as the function analysis and analysis of resources used at the design stage to generate possible undesirable situations. The article charts the course of subversion analysis when applied to the fuel injector nozzle of a marine engine. The work describes the fuel injector nozzle as a technological system and presents principles for analyzing the causes of a cracked tip of the nozzle body. The system is modelled with functional analysis, a search for potential causes of the damage is undertaken, and a cause-and-effect analysis for the various hypotheses concerning the damage is drawn up. The importance of the particular hypotheses is evaluated, and the most likely causes of the damage are identified.
Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located on several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of the stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to workstations and flow racks, aiming to achieve the maximal throughput of picked products over the entire system, given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load on the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin.
The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
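The first-echelon bin-packing formulation can be illustrated with a textbook heuristic. The sketch below uses first-fit decreasing, a standard approximation for bin packing; it is not the authors' algorithm, and the product workloads and picker capacity are hypothetical numbers.

```python
def workstations_needed(loads, capacity):
    """First-fit-decreasing heuristic: pack product workloads into as few
    equal-capacity workstations (bins) as possible."""
    bins = []                                 # used capacity per open workstation
    for load in sorted(loads, reverse=True):  # largest loads first
        for i, used in enumerate(bins):
            if used + load <= capacity:       # fits in an existing workstation
                bins[i] += load
                break
        else:
            bins.append(load)                 # open a new workstation
    return len(bins)
```

First-fit decreasing is a natural starting point here because it runs in O(n log n) and is guaranteed to use at most about 11/9 of the optimal number of bins, which matters when the NP-hard exact problem is recomputed periodically as demand forecasts change.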
Dimensional Accuracy of Carbon Nanotubes/Poly Methyl Methacrylate Parts and Holes Produced by Laser Cutting
Laser cutting is a very common production method for cutting 2D polymeric parts. The development of polymer composites with nanofillers makes other properties, such as laser workability, important. The aim of this research is to investigate the influence of different laser cutting conditions on the dimensional accuracy of parts and holes made from poly methyl methacrylate (PMMA)/carbon nanotube (CNT) material. Experiments were carried out considering CNT content (at four levels: 0, 0.5, 1, and 1.5 wt.%), laser power (60, 80, and 100 W), and cutting speed (20, 30, and 40 mm/s) as input variable factors. The results reveal that adding CNTs improves the laser workability of PMMA and that increasing the power has a significant effect on the part and hole size. The findings also show that cutting speed is an effective parameter for size accuracy. Finally, a statistical analysis of the results was performed, and mathematical equations obtained by regression are presented for determining the relation between the input and output factors.
Mass Customization of Chemical Protective Clothing
The object of the investigation is a suit for chemical protection, which totally covers the human body together with the breathing apparatus, breathing mask, and helmet (JSC Ansell Protective Solutions Lithuania). The end users of such clothing are the members of rescue teams, i.e., firefighters. During the presentation, the results of 3D scanning with a stationary Human Solutions scanner and a portable Artec Eva scanner will be compared on the basis of the efficiency of the scanning procedure and the scanning accuracy. The possibilities of exporting scanned bodies into specialized CAD systems for suit design development and material consumption calculation will also be analyzed. The necessity to understand and implement the corresponding clothing material properties during 3D visualization of the garment in CAD systems will be presented. During the presentation, the outcomes of the project ‘Smart and Safe Work Wear Clothing SWW’ will be discussed. The project is carried out under the Interreg Baltic Sea Region Programme within the 2014-2020 European territorial cooperation objective; its thematic priority is Capacity for Innovation. The main goal of the project is to improve competitiveness and to increase business possibilities for work wear enterprises in the Baltic Sea Region. The project focuses on mass customization of products for various end users. It engages textile and clothing manufacturing technology researchers, work wear producers, end users, as well as national textile and clothing branch organizations in Finland, Lithuania, Latvia, Estonia, and Poland.
Finite Element Simulation of Limiting Dome Height Test on the Formability of Aluminium Tailor Welded Blanks
Tailor Welded Blanks (TWBs) have established themselves as a revolutionary and integral part of the automotive and aerospace industries. Metal sheets with varied thickness, strength, and coatings are welded together to form TWBs through friction stir welding or laser welding prior to stamping operations. The formability of TWBs differs completely from that of conventional blanks due to the diverse strength levels of the individual sheets, which are made to deform uniformly under the same forming load, causing unequal and unsatisfactory deformation in the blank. The Limiting Dome Height (LDH) test helps predict the formability of each blank and assists in determining the appropriate TWB. Finite element simulation of the LDH test for both the base material and the TWBs was performed and analysed both before and after solution heat treatment. The simulation results are compared with and validated against the experimental data and correlated accordingly. The formability of solution heat treated TWBs was enhanced compared with that of blanks made from non-heat-treated TWBs.
Timely Detection and Identification of Abnormalities for Process Monitoring
The detection and identification of abnormalities in multivariate manufacturing processes are quite important in order to maintain good product quality. Unusual behaviors or events encountered during operation can have a serious impact on the process and product quality; thus, they should be detected and identified as soon as possible. This paper focuses on the efficient representation of process measurement data for detecting and identifying abnormalities. This qualitative method is effective in representing the fault patterns of process data. In addition, it is robust to measurement noise, so that reliable outcomes can be obtained. To evaluate its performance, a simulation process was utilized, and the effect of adopting linear and nonlinear methods for detection and identification was tested with different simulation data. It was shown that the use of a nonlinear technique produced more satisfactory and more robust results for the simulation data sets. This monitoring framework can help operating personnel detect the occurrence of process abnormalities and identify their assignable causes on an on-line or real-time basis.
Third Party Logistics (3PL) Selection Criteria for an Indian Heavy Industry Using SEM
In the present paper, we propose an integrated approach for 3PL supplier selection that suits the distinctive strategic needs of an outsourcing organization in the southern part of India. Four fundamental criteria have been used, namely Performance, IT, Service, and Intangibles; these are further subdivided into fifteen sub-criteria. The proposed strategy combines Structural Equation Modeling (SEM) and non-additive fuzzy integral methods. The introduction of fuzziness deals with the vagueness of human judgments. The SEM approach has been used to validate the selection criteria for the proposed model, while the non-additive fuzzy integral approach uses the SEM output to evaluate a supplier selection score. The case organization has an exclusive vertically integrated assembly that comprises several companies focusing on a narrow segment of the value chain. To ensure manufacturing and logistics proficiency, it relies significantly on 3PL suppliers to attain supply chain superiority. However, 3PL supplier selection is an intricate decision-making procedure involving multiple selection criteria. The goal of this work is to identify the crucial 3PL selection criteria by using the non-additive fuzzy integral approach. Unlike traditional multi-criteria decision-making (MCDM) methods, which frequently assume independence among criteria and additive importance weights, the non-additive fuzzy integral is an effective method for handling dependency among criteria, vague information, and the inherent fuzziness of human judgment. In this work, we present an empirical case that employs the non-additive fuzzy integral to assess the importance weights of the selection criteria and indicate the most suitable 3PL supplier.
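A common concrete realization of the non-additive fuzzy integral is the Choquet integral with respect to a fuzzy measure defined on subsets of criteria, which is what the sketch below computes. The criteria names, scores, and measure values are purely illustrative, not the paper's fitted weights; the point is that the measure of a coalition of criteria need not equal the sum of its parts.

```python
def choquet(scores, mu):
    """Choquet integral of criterion scores w.r.t. a fuzzy measure `mu`,
    given as {frozenset_of_criteria: measure}. Sort scores ascending and
    weight each increment by the measure of the criteria still 'above' it."""
    items = sorted(scores, key=scores.get)        # criteria, ascending by score
    total, prev = 0.0, 0.0
    for k, c in enumerate(items):
        upper = frozenset(items[k:])              # criteria scoring >= current
        total += (scores[c] - prev) * mu[upper]
        prev = scores[c]
    return total

scores = {"performance": 0.8, "it": 0.6}          # hypothetical supplier scores
mu = {frozenset(["performance", "it"]): 1.0,      # synergy: more than 0.6 + 0.3
      frozenset(["performance"]): 0.6,
      frozenset(["it"]): 0.3}
overall = choquet(scores, mu)
```

Because mu of the pair (1.0) exceeds the sum of the singletons (0.9), the two criteria reinforce each other, exactly the kind of inter-criteria dependency that additive MCDM weights cannot express.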
Measuring Industrial Resiliency by Using a Data Envelopment Analysis Approach
Following several crises that have affected industrial sector performance in past decades, decision makers should utilize a measurement application that enables them to measure industrial resiliency more precisely. This research provides not only a framework for the development of a resilience measurement application, but also several theories for its concept building blocks, such as performance measurement management and resilience engineering in a real-world environment. It is a continuation of a previously published paper on performance measurement in the industrial sector. Finally, this paper contributes an alternative performance measurement method for the industrial sector based on the resilience concept. Moreover, this research demonstrates how applicable the concept of resilience engineering is, along with its method of measurement.
Designing a Price Stability Model of Red Cayenne Pepper Price in Wonogiri District, Central Java, Using the ARCH/GARCH Method
The food and agricultural sector is the biggest contributor to inflation in Indonesia. In Wonogiri district in particular, red cayenne pepper was the biggest contributor to inflation in 2016. National statistics show that over the recent five years red cayenne pepper has had the highest average level of fluctuation among all commodities. Several factors, such as the supply chain, price disparity, production quantity, crop failure, and the oil price, are possible causes of the high volatility of the red cayenne pepper price. Therefore, this research tries to find the key factor causing fluctuation in the red cayenne pepper price by using the ARCH/GARCH method, which can accommodate the presence of heteroscedasticity in time series data. At the end of the research, it is statistically found that the second level of the supply chain is the biggest contributor to inflation, with a coefficient of 3.35 in the fluctuation forecasting model of the red cayenne pepper price. This model can serve as a reference for the government in determining the appropriate policy for maintaining the price stability of red cayenne pepper.
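The heteroscedasticity that ARCH/GARCH models capture can be illustrated with the GARCH(1,1) variance recursion sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}. The sketch below simulates such a series in plain Python; the parameter values are illustrative, not the values fitted to the pepper price data.

```python
import random

def garch_path(omega, alpha, beta, n, seed=0):
    """Simulate n returns under GARCH(1,1). Volatility clusters because a
    large return feeds back into next period's conditional variance."""
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)     # start at the unconditional variance
    rets, vols = [], []
    for _ in range(n):
        r = rng.gauss(0, sigma2 ** 0.5)     # return given current variance
        rets.append(r)
        vols.append(sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2   # GARCH(1,1) recursion
    return rets, vols

rets, vols = garch_path(omega=0.1, alpha=0.1, beta=0.8, n=5000)
```

With alpha + beta = 0.9 the process is stationary with unconditional variance omega / (1 - alpha - beta) = 1, but the conditional variance wanders, which is exactly the clustered volatility pattern reported for the pepper price series.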
Developing a Total Quality Management Model Using Structural Equation Modeling for Indonesian Healthcare Industry
This paper presents a TQM model for the Indonesian healthcare industry. Currently, there are nine TQM (Total Quality Management) practices in the healthcare industry; however, these practices are not yet integrated. Therefore, this paper aims to integrate these practices into a model by using Structural Equation Modeling (SEM). After administering about 210 questionnaires to various stakeholders of this industry, the LISREL program was used to evaluate the model's fitness. The result confirmed that the model fits, because the p-value was about 0.45, above the required 0.05. This signifies that the nine TQM practices mentioned above can be integrated into an Indonesian healthcare model.
Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on the detection of perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators for determining the process dispersion and location parameters, respectively, under the assumption of independent and normally distributed data sets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of the traditional estimators is significantly reduced, and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, it is necessary to use estimators that are robust against the contaminations which may exist in Phase I. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function in the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber and logistic psi-functions in the estimation of the process location parameter.
The Phase I efficiency of the proposed estimators and the Phase II performance of the Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that the robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances compared to the conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups, and employing different combinations of dispersion and location estimators on subgroups and individual observations, are found to improve the performance of Xbar charts.
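The effect of Phase I contamination on chart limits can be demonstrated with a much simpler robust pair than the estimators studied in the paper. The sketch below uses the median and the scaled median absolute deviation (MAD) as stand-ins for the paper's Qn/M-/Harrell-Davis/Hodges-Lehmann estimators, and purely hypothetical subgroup data with one contaminated subgroup.

```python
import statistics

def xbar_limits(subgroups, robust=True):
    """3-sigma Xbar limits from Phase I subgroups. Robust mode: median of
    subgroup medians for location, MAD/0.6745 for sigma (normal-consistent).
    Conventional mode: mean of subgroup means, mean of subgroup stdevs."""
    n = len(subgroups[0])
    centers = [statistics.median(g) if robust else statistics.fmean(g)
               for g in subgroups]
    loc = statistics.median(centers) if robust else statistics.fmean(centers)
    if robust:
        data = [x for g in subgroups for x in g]
        med = statistics.median(data)
        scale = statistics.median(abs(x - med) for x in data) / 0.6745
    else:
        scale = statistics.fmean(statistics.stdev(g) for g in subgroups)
    half = 3 * scale / n ** 0.5
    return loc - half, loc, loc + half

clean = [[10.1, 9.9, 10.0, 10.2, 9.8]] * 19
phase1 = clean + [[10.0, 9.9, 25.0, 10.1, 10.0]]   # one outlier-contaminated subgroup
lo_r, c_r, hi_r = xbar_limits(phase1, robust=True)
lo_c, c_c, hi_c = xbar_limits(phase1, robust=False)
```

The single outlier inflates the conventional limits and shifts the center line, while the median/MAD limits stay close to the uncontaminated process, which is the mechanism behind the delayed Phase II detection the abstract describes.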
An Anthropometric and Postural Risk Assessment of Students in Computer Laboratories of a State University
Ergonomics considers the capabilities and limitations of a person as they interact with tools, equipment, facilities, and tasks in their work environment. The workplace is one example of a physical work environment, be it a workbench or a desk. In school laboratories, sitting is the most common working posture of students, who maintain a static sitting posture as they perform different computer-aided activities. The College of Engineering and the College of Information and Communication Technology of a state university together comprise twenty-two computer laboratories. Students are not usually aware of the importance of maintaining proper sitting posture during their long computer laboratory activities. The study evaluates the perceived discomfort and working postures of students exposed to the current workplace design of the computer laboratories. It utilizes Rapid Upper Limb Assessment (RULA), a Body Discomfort Chart with Borg's CR-10 rating scale, and the Quick Exposure Checklist in order to assess posture and the current working conditions. The results of the study may help minimize the body discomfort experienced by the students. The researchers redesigned the individual workstations, including the working desk, sitting stool, and other workplace design components. The economic viability of each alternative was also considered, given that the study focused on the improvement of the facilities of a state university.
Controlling the Process of a Chicken Dressing Plant through Statistical Process Control
In a manufacturing firm, controlling the process ensures that optimum efficiency, productivity, and quality are achieved in the organization. An operation with no standardized procedure yields poor productivity, inefficiency, and an out-of-control process. This study focuses on controlling the small intestine processing of a chicken dressing plant through the use of Statistical Process Control (SPC). Since the operation does not employ a standard procedure and does not have an established standard time, the process, assessed through the observed time of the overall small intestine processing operation using the X-Bar R control chart, is found to be out of control. To solve this problem, the researchers conducted a motion and time study aiming to establish a standard procedure for the operation. The normal operator was selected through the use of the Westinghouse rating system. Instead of relying on the traditional motion and time study alone, the researchers used the X-Bar R control chart to determine the process average used for establishing the standard time. The observed times of the normal operator were recorded and plotted on the X-Bar R control chart. Out-of-control points due to assignable causes were removed, and the process average, i.e., the average time in which the normal operator conducted the process, now in control and free from any outliers, was obtained. The process average was then used to determine the standard time of small intestine processing. As a recommendation, the researchers suggest implementing the established standard time in consonance with the standard procedure adopted from the normal operator. With this recommendation, the whole operation is expected to yield a 45.54% increase in productivity.
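The arithmetic behind the last steps is standard time-study practice: the process average (after removing out-of-control points) is leveled by the Westinghouse performance rating and then grossed up by an allowance. The sketch below shows the computation with hypothetical observed times, rating, and allowance, not the plant's actual figures.

```python
def standard_time(observed_times, rating, allowance):
    """Standard time from in-control time-study observations:
    average -> normal (leveled) time via performance rating -> add allowances."""
    avg = sum(observed_times) / len(observed_times)   # process average (min)
    normal = avg * rating                             # normal time
    return normal * (1 + allowance)                   # standard time

# Hypothetical in-control observations (minutes), +6% Westinghouse rating,
# 15% allowance for fatigue/personal needs/delays.
st = standard_time([2.1, 2.0, 2.2, 1.9, 2.0], rating=1.06, allowance=0.15)
```

Using the control chart to filter assignable-cause points before averaging is what distinguishes this procedure from a plain time study: the standard time is based only on the stable process behavior.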
Impact of Contemporary Performance Measurement System and Organization Justice on Academic Staff Work Performance
As part of the Malaysian higher education institutions' strategic plan to promote high-quality research and education, the Ministry of Higher Education has introduced various instruments to assess university performance. The aims are that universities will produce more commercially oriented research and continue to contribute to producing a professional workforce for domestic and foreign needs. Yet the spirit of this success lies in the commitment of the university, particularly the academic staff, to translate the vision into reality. For that reason, the element of fairness and justice in assessing individual academic staff performance is crucial to promoting a direct link between university and individual work goals. Focusing on public research universities (RUs) in Malaysia, this study examines the issue through the practice of contemporary university performance measurement systems. Management control theory conceptualizes contemporary performance measurement as consisting of three dimensions, namely strategic, comprehensive, and dynamic. Building upon equity theory, the relationships between the contemporary performance measurement system and organizational justice, and in turn their effect on academic staff work performance, are tested based on online survey data administered to 365 academic staff from public RUs, which were analyzed using SPSS statistical analysis and Structural Equation Modeling. The findings validated the presence of the strategic, comprehensive, and dynamic dimensions in the contemporary performance measurement system. The empirical evidence also indicated that contemporary performance measurement and procedural justice are significantly associated with work performance, but distributive justice is not. Furthermore, procedural justice mediates the relationship between contemporary performance measurement and academic staff work performance.
Evidently, this study provides evidence of the importance of perceptions of justice in influencing academic staff work performance. This finding may be a fruitful input in setting up an academic staff performance assessment policy.
Distance-To-Target Method to Evaluate Sustainability Patterns of CyberManufacturing Systems
Recognizing the importance of sustainability, manufacturers pursue the holistic well-being of society by addressing all three dimensions of sustainability: the environmental, economic, and societal aspects. Sustainability metrics, or indicators, can measure the progress towards sustainability. However, existing metrics for assessing the sustainability patterns of manufacturing systems are not comprehensive and lack objectivity, data measurability, and information communication efficiency. This research developed an improved sustainability assessment framework. A comprehensive set of sustainability indicators covering various patterns of manufacturing systems is collected. The Distance-to-Target methodology is adopted to compute and aggregate all sustainability indicators. The quantitative formula for each indicator is elaborated, and all involved variables are measurable performance records of the manufacturing system or publicly available data. Among them, workload and production context or specificity information are integrated into the target values of the Distance-to-Target weighting factors. One example, cooling/lubricant fluid usage in machining, is selected to test the validity of the proposed assessment framework. The evaluation report shows results consistent with the referenced work and also demonstrates high efficiency in result interpretation, chart presentation, and suggestions for improvement. Another example, plastic parts assembly and inspection in a traditional production line versus a CyberManufacturing system production line, is used to analyze the sustainability benefits. The evaluation result indicates the degree of improvement in economic profitability. The Distance-to-Target methodology has therefore proven to be unbiased and reproducible, with transparent computation processes and efficient results interpretation.
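The core idea of Distance-to-Target weighting, each indicator counts in proportion to how far it is from its target, can be reduced to a few lines. The sketch below is a minimal ratio form with hypothetical lower-is-better indicators and targets; the paper's framework additionally folds workload and production-context information into the target values.

```python
def dtt_score(indicators):
    """Distance-to-Target aggregation for lower-is-better indicators:
    indicators = {name: (measured, target)}; each term measured/target
    exceeds 1 exactly when the target is missed, so lower totals are
    closer to the sustainability goals."""
    return sum(measured / target for measured, target in indicators.values())

baseline = {"energy_kwh": (120.0, 100.0),   # hypothetical machining scenario
            "fluid_l":    (9.0, 6.0),       # cooling/lubricant fluid usage
            "scrap_kg":   (4.0, 5.0)}
improved = {"energy_kwh": (105.0, 100.0),
            "fluid_l":    (5.5, 6.0),
            "scrap_kg":   (3.0, 5.0)}
```

Because every term is normalized by its own target, indicators with different units (kWh, litres, kg) become dimensionless and directly comparable, which is what makes the aggregate score reproducible and easy to interpret.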
A System to Detect Cyber-Physical Attacks in CyberManufacturing Systems
CyberManufacturing Systems (CMS), Industrie 4.0, and Cloud Manufacturing are visions of future manufacturing systems in which the physical components are fully integrated with computational resources. The openness of the Internet enhances manufacturing activities with new capabilities in communication, information resources, storage, and computation. However, this very openness also creates vulnerability, especially because it enlarges the attack surface through which attackers can intrude into the manufacturing system or extract data from it. Currently, computer and information security methods—such as firewalls and intrusion detection systems—cannot detect malicious attacks in CMS with adequate response time and accuracy. The realization of CMS depends on effectively addressing cyber-physical security issues. Such attacks can cause physical damage to physical components—machines, equipment, parts, assemblies, products—through over-wearing, breakage, scrap parts, or other changes that the designers did not intend. This research proposes a system to detect cyber-physical intrusions in CMS. To accomplish this objective, physical data from the manufacturing process level and the production system level are integrated with cyber data from network-based and host-based intrusion detection systems. The correlation between the cyber and physical data is investigated. Two methods—machine learning and quality control—are mainly adopted to detect the intrusions. 3D printing and CNC milling processes are used as example manufacturing processes for detecting cyber-physical attacks. Five attack scenarios—a repackaging attack on an “STL” file, a race condition attack on job priority, an SQL injection attack on “G-code,” a Shellshock attack on 3D printer settings, and a cross-site request forgery attack on CNC machine settings—have been developed to study the process flow, influence, and detection of cyber-physical attacks.
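The quality-control side of the detection idea can be sketched as a simple control-limit check on a physical signal: learn a baseline from attack-free runs, then flag readings that fall outside the limits. This is an illustrative fragment, not the paper's full system; the signal (e.g., per-layer power draw of a 3D printer) and all numbers are hypothetical, and a real deployment would correlate such flags with the cyber-side alerts.

```python
import statistics

def flag_anomalies(baseline, stream, k=3.0):
    """Flag indices in `stream` whose physical readings fall outside
    mean +/- k*sigma learned from attack-free baseline runs."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return [i for i, x in enumerate(stream) if abs(x - mu) > k * sigma]

baseline = [50.0, 50.4, 49.7, 50.1, 49.9, 50.2, 49.8, 50.0]  # attack-free runs
stream = [50.1, 49.9, 57.5, 50.0]        # third reading: e.g., tampered toolpath
flags = flag_anomalies(baseline, stream)
```

The appeal of this quality-control view is that a repackaged STL file or injected G-code must eventually change some physical quantity to cause damage, so the physical signal catches attacks that pass unnoticed through network- and host-based sensors alone.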
A Review on the Role of Particle Velocity in Cold Spray
Among the various coating methods, cold spraying is a newly emerging coating technique. Cold gas dynamic spray (CGDS), commonly called cold spraying, is a rapidly developing technology for preparing coatings or bulk materials in the solid state. Cold spray coatings have very low porosity, high hardness, good erosion resistance, and a strong ability to resist high-temperature corrosion. This paper briefly reviews the role of particle velocity in the cold spray process in order to understand the phenomenon and to summarize the rapidly expanding common knowledge of cold spray technology in light of the presently available literature.
The Temperature Effects on the Microstructure and Profile in Laser Cladding
In this study, a 50 W CO₂ laser was used to clad 304L powder onto a stainless steel substrate with a temperature sensor and an image monitoring system. The laser power, cladding speed, and focal position were varied to meet the requirements for workpiece flatness and mechanical properties. A numerical calculation based on ANSYS was used to analyze the temperature change caused by the moving heat source at different surface positions during cladding, and the effect of the process parameters on the melt pool size was discussed. The temperature of the stainless steel powder reacting with the laser at the nozzle outlet was simulated as a process parameter. In the experiment, the difference in thermal conduction in three-dimensional space was compared between single-layer and multi-layer cladding, since heat from single-layer cladding dissipates through the steel plate, whereas in multi-layer cladding it dissipates through the workpiece itself. The relationship between the multi-layer cladding temperature and the profile was analyzed using the temperature signal from an IR pyrometer.
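The temperature field around a moving heat source of this kind is classically approximated by the Rosenthal solution for a point source in quasi-steady state. The sketch below uses assumed 304L-like material properties and process values, not the study's actual ANSYS model or parameters.

```python
import math

def rosenthal_temperature(xi, y, z, Q, v, k, alpha, T0=300.0):
    """Quasi-steady temperature (K) around a moving point heat source
    (Rosenthal solution). xi is the coordinate along the travel direction,
    measured from the source in the moving frame; Q is absorbed power (W);
    v is travel speed (m/s); k is conductivity (W/mK); alpha is thermal
    diffusivity (m^2/s)."""
    R = math.sqrt(xi**2 + y**2 + z**2)
    return T0 + Q / (2 * math.pi * k * R) * math.exp(-v * (R + xi) / (2 * alpha))

# Assumed 304L-like properties and process values (illustrative only)
k, alpha = 16.0, 4.0e-6   # W/mK, m^2/s
Q, v = 40.0, 0.005        # W absorbed, m/s scan speed

# Compare points 0.5 mm behind and ahead of the source on the surface
T_behind = rosenthal_temperature(-0.5e-3, 0, 0, Q, v, k, alpha)
T_ahead = rosenthal_temperature(+0.5e-3, 0, 0, Q, v, k, alpha)
# The solution predicts a hotter trailing side: heat is left behind the
# source, which is the asymmetry the IR pyrometer signal reflects.
```

This analytical form is useful as a sanity check on the finite-element results before varying laser power, speed, and focal position.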
A Simulative Approach for JIT Parts-Feeding Policies
Lean philosophy follows the simple principle of “creating more value with fewer resources”. In accordance with this principle, material handling can be managed by means of Kanban, which, by triggering a feeding tour only when needed, regulates the flow of material in one of the most efficient ways. This paper focuses on the parameters of the Kanban supermarket and their optimization from a purely cost-based point of view. The number and size of forklifts, as well as the size of the containers they carry, are the variables of a cost function that includes handling costs, inventory costs, and shortage costs. Using an innovative computational approach encoded in the industrial engineering software Tecnomatix and reproducing real-life conditions, a fictitious assembly line is established that produces a random list of orders. Multiple scenarios are then run to study the impact of each parameter change and the resulting variation in costs. Lastly, the financially best scenarios are selected.
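The multi-scenario cost comparison can be sketched with a toy cost function of the kind described. All cost rates, capacities, and the demand figure below are invented and merely stand in for the Tecnomatix simulation outputs.

```python
# Toy cost model for a Kanban supermarket scenario; every rate and
# capacity below is an assumption for illustration, not simulation data.

def scenario_cost(n_forklifts, container_size, demand,
                  c_trip=5.0, c_hold=0.1, c_short=50.0,
                  trips_per_forklift=20):
    """Total scenario cost: handling + inventory holding + shortage."""
    capacity = n_forklifts * trips_per_forklift * container_size
    delivered = min(demand, capacity)
    shortage = demand - delivered
    handling = n_forklifts * trips_per_forklift * c_trip
    inventory = delivered * c_hold
    return handling + inventory + shortage * c_short

# Multi-scenario run: sweep the parameters and keep the cheapest scenario
demand = 900
scenarios = [(f, s) for f in (1, 2, 3) for s in (20, 30, 40)]
best = min(scenarios, key=lambda fs: scenario_cost(*fs, demand))
```

Even this crude sweep reproduces the trade-off in the paper: too little handling capacity incurs large shortage costs, while extra forklifts beyond what demand requires only add handling cost.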
Energy Efficiency Analysis of Crossover Technologies in Industrial Applications
Industry accounts for one-third of global final energy demand. Against the background of climate change and restricted resources, it is necessary to improve the energy efficiency of industrial processes. A key challenge in improving energy efficiency is that industry is highly heterogeneous. The structure of energy consumption in industrial enterprises depends on the character of the production process (e.g., primary energy resources, energy intensity of the products, production plan, and machinery). Energy efficiency targets include activities for single processes as well as strategies for the complete enterprise. The energy analysis represents the first step of the optimization process. In many plants, the energy streams are measured only by a single meter at the source, and the energy use of single processes is mostly unknown. The installation of additional measurement devices (e.g., flowmeters or wireless electric power meters) is one option for obtaining more information about the energy distribution. However, retrofitting energy meters into running processes is difficult and in many cases impossible. Alternatively, we can use specific information coming from the automation system that controls the production process. These data are collected by an energy management information system (EMIS). The paper will describe a comprehensive methodology for realizing an energy analysis. In industrial enterprises, crossover technologies play an important role in energy efficiency. They are characterized by a large number of applications independent of the production branch and include motors and drives, pump systems, compressed air, lighting, process heat, and air-conditioning systems. The crossover technologies are responsible for a large share of industrial energy consumption; electrical power in particular is used by drives, pumps, compressors, and lighting.
In many applications, there are common problems: low energy efficiency, oversized systems, lack of control, and maintenance deficits. In many cases, oversized and inefficient drives are still used in historically grown industrial enterprises with changing production programs. Replacing long-running motors with new ones of a high efficiency class saves considerable energy and cost. We demonstrate the algorithm of the energy analysis with selected case studies of typical industrial processes. The energy analysis represents an essential part of energy management systems (EMS). Generally, process control systems (PCS) can support EMS: they observe the performance of the production systems and organize the maintenance procedure. A PCS contains the sensors and actuators required for the control of the processing plant. The sensors measure the process variables, i.e., temperature, pressure, mass flow, etc. The actuators receive signals from the controller level and perform a function, e.g., starting a pump or closing a valve. The PCS schedules and records the outcomes of maintenance testing, inspection, and repair. These may be supplemented by equipment monitoring tools, typically running in association with the plant process historian, which measure and evaluate the current equipment performance. Combining these tools into an integrated process allows the development of an energy-critical equipment strategy. Thus, asset and energy management can use the same common data to improve energy efficiency.
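The saving from such a motor exchange can be estimated directly from the shaft load, the operating hours, and the two efficiency values, since electrical input equals shaft power divided by efficiency. The figures below are hypothetical, chosen only to illustrate the calculation.

```python
def annual_savings_kwh(shaft_kw, hours, eff_old, eff_new):
    """Electrical energy (kWh) saved per year by replacing a motor of
    efficiency eff_old with one of efficiency eff_new, assuming the same
    shaft load and operating hours; input power = shaft power / efficiency."""
    return shaft_kw * hours * (1.0 / eff_old - 1.0 / eff_new)

# Hypothetical example: 15 kW shaft load, 4000 h/year, old vs. new
# efficiency class (values assumed for illustration)
saved = annual_savings_kwh(15.0, 4000, 0.88, 0.93)  # roughly 3.7 MWh/year
```

Multiplying the result by the electricity tariff gives the annual cost saving used to judge the payback time of the exchange.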
Enhancement of Material Removal Rate of Complex Featured Surfaces in Vibratory Finishing
The diverse process engineering applications of vibratory finishing technology have led to its versatile use in the development of aviation components. The most noteworthy applications of vibratory finishing are deburring and imparting the required surface finish. In this paper, vibratory finishing is used to study its effectiveness in removing laser shock peened (LSP) layers from titanium workpieces. A vibratory trough operating at a frequency of 25 Hz with an amplitude of 3.5 mm and titanium specimens (Ti-6Al-4V, Grade 5) of dimensions 50 × 50 × 10 mm were used for the experiments. A fixture vibrating at 200 Hz provided additional vibration to the test piece and was immersed in the vibratory trough. The complex featured layer was removed more efficiently, and the surface finish was smoother, with the vibrating fixture in the vibratory finishing setup than with the conventional setup in which the fixture does not vibrate.
Hybrid Model of Transhipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia
This paper presents the design of a model for planning the distribution logistics of companies in the cargo sector located in Bogotá, Colombia. The model is composed of two stages. In the first, the optimal distribution plan is produced, taking transhipment into account, through a hybrid optimization model developed with mixed programming that combines a multi-source load assignment model with a transhipment model. In the second stage, the specific routing of this operation is developed using the Clarke and Wright savings heuristic. It was also found that no model of this type had previously been used to plan the distribution logistics in the sector to which it was applied. As a result, an integral model was obtained that plans the distribution of dry-load goods in stages for small and medium companies in Bogotá: the ideal assignments are established using transfer centers as the initial stage, and the specific routes are then determined on the basis of the shortest tour distances.
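The core step of the Clarke and Wright savings heuristic used in the second stage can be sketched as follows, on an invented toy distance matrix with the depot as node 0.

```python
# Savings step of the Clarke and Wright heuristic; the distance matrix
# is a toy example, not data from the Bogotá case study.

def savings(dist):
    """Pairwise savings s(i, j) = d(0, i) + d(0, j) - d(i, j), sorted in
    descending order. Merging customer routes in this order (subject to
    vehicle-capacity checks) is the core of the heuristic."""
    n = len(dist)
    pairs = []
    for i in range(1, n):
        for j in range(i + 1, n):
            s = dist[0][i] + dist[0][j] - dist[i][j]
            pairs.append((s, i, j))
    return sorted(pairs, reverse=True)

# Symmetric toy distances: depot 0 and customers 1..3
dist = [
    [0, 4, 5, 6],
    [4, 0, 2, 7],
    [5, 2, 0, 3],
    [6, 7, 3, 0],
]
ranked = savings(dist)  # highest saving first: serve 2 and 3 together
```

A full implementation would then merge routes greedily down this ranked list while respecting vehicle capacity, which is the routing logic applied after the transhipment assignments are fixed.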
Infrastructure Sharing Synergies: Optimal Capacity Oversizing and Pricing
Industrial symbiosis (I.S) deals with both substitution synergies (the exchange of waste materials, fatal energy, and utilities as resources for production) and infrastructure/service sharing synergies. The latter is based on the intensification of use of an asset and thus requires balancing capital cost increments against snowball effects (network externalities) for its implementation. Initial investors must specify ex-ante arrangements (cost sharing and a pricing schedule) to commit toward investments in capacities and transactions. Our model investigates the decision of two actors trying to choose cooperatively a level of infrastructure capacity oversizing so as to set a plug-and-play offer to a potential entrant whose capacity requirement is randomly distributed, while satisfying their own requirements. The capacity cost exhibits a sub-additive property, so there is room for profitable overcapacity setting in the first period. The entrant's willingness to pay for access to the infrastructure depends on its standalone cost and on the capacity gap that it must fill in case the available capacity is insufficient ex-post (the complement cost). Since initial capacity choices are driven by the ex-ante (expected) yield extractible from the entrant, we derive the expected complement cost function, which helps us define the investors' objective function. We first show that this curve is decreasing and convex in the capacity increments and that it is shaped by the distribution function of the potential entrant's requirements. We then derive the general form of the solutions and solve the model for uniform and triangular distributions. Depending on requirement volumes and cost assumptions, different equilibria occur. We finally analyze the effect of a per-unit subsidy a public actor would apply to foster such sharing synergies.
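For the uniform case, the expected complement cost admits a simple closed form. The sketch below assumes a per-unit complement cost c and an entrant requirement Q uniform on [a, b], with illustrative numbers; the paper's actual cost structure may differ.

```python
# Expected complement cost E[c * max(0, Q - K)] for Q ~ Uniform(a, b),
# valid for installed capacity a <= K <= b. Parameters are illustrative.

def expected_complement_cost(K, a=0.0, b=100.0, c=1.0):
    """Closed form: c * (b - K)^2 / (2 * (b - a)) for a <= K <= b."""
    return c * (b - K) ** 2 / (2 * (b - a))

# Evaluate over a grid of capacity levels K
costs = [expected_complement_cost(K) for K in range(0, 101, 10)]

# The curve is decreasing and convex in the installed capacity K,
# matching the properties the abstract states for the general case.
decreasing = all(x >= y for x, y in zip(costs, costs[1:]))
```

This is the function that enters the investors' objective: each extra unit of oversizing reduces the expected complement cost the entrant faces, and hence raises the price the entrant is willing to pay ex-ante.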
Ergonomic Philosophy and the Use of Informatics in Documentation Procedure
Whether applying a well-developed health information technology (HIT) can yield favorable results depends on the influence of the technical system on practitioners' work. Medical personnel do a tremendous job in every hospital, but unfortunately little effort has gone into investigating the medical staff information structure (MSIS). The application of ergonomic techniques should solve some of the problems nurses face while working with the MSIS and make the communication process much simpler. The hypotheses of ergonomic approaches constitute a methodological foundation that is beneficial in handling questions arising from the human-interface system. This paper assessed the application of an MSIS that forms one element of the digital health data (DHD) previously installed in the communication mechanisms of the new hospitals. The evaluation was conducted in two hospitals equipped with a methodical electronic data record; the first hospital has 690 beds and the second 310 beds. The examiners used a whole-task investigation methodology to gain a proper understanding of the hurdles and facilitators associated with the application of the MSIS in the newly built hospitals. The survey intends to improve the different ways clinicians could respond to the needs of patient care. The designed procedures allow the doctors to interact with the primary element of the MSIS and were standardized to connect with the key functions of the system.
Integrating Cost-Benefit Assessment and Contract Design to Support Industrial Symbiosis Deployment
Industrial symbiosis (I.S) is the realization of Industrial Ecology (I.E) principles in operating production systems. I.S consists of the use of waste materials, fatal energy, recirculated utilities, and infrastructure/service sharing as resources for production. Environmental benefits can be achieved from resource conservation, but economic profitability is required by the participating actors. I.S indeed involves several actors with their own objectives and resources, so each one must be satisfied by ex-ante arrangements before committing to I.S execution (investments and transactions). Following the resource-based view of transactions, we build a modular framework to assess global I.S profitability and to specify each actor's contributions to costs and benefits in line with their resource endowments and performance requirements. The specificities of I.S projects implied by the need for customization (asset specificity, non-homogeneity) induce the use of long-term contracts for transactions, following transaction cost economics arguments. We therefore first propose a taxonomy of cost and value drivers for I.S and assign to each actor the I.S-specific risks that we identified: load-profile mismatch, quality problems, and value fluctuations. Appropriate contractual guidelines (pricing, cost sharing, and warranties) that support mutual profitability are then derived from the detailed identification of contributions by the cost-benefit model. This analytical framework helps identify what to focus on when bargaining over contracts for transactions and investments. Our methodology is applied to I.S archetypes drawn from a literature survey of eco-industrial park initiatives and from practitioner interviews.
Capacity Oversizing for Infrastructure Sharing Synergies: A Game Theoretic Analysis
Industrial symbiosis (I.S) relies on two basic modes of cooperation between organizations: infrastructure/service sharing and resource substitution (the use of waste materials, fatal energy, and recirculated utilities for production). The former consists of the intensification of use of an asset and thus requires comparing the incremental investment cost to be incurred with the stand-alone cost each potential participant faces to satisfy its own requirements. To investigate how such a cooperation mode can be implemented, we formulate a game-theoretic model integrating the grassroots investment decision and the ex-post access pricing problem. In the first period, two actors set cooperatively (resp. non-cooperatively) a level of common (resp. individual) infrastructure capacity oversizing to attract ex-post a potential entrant with a plug-and-play offer (available capacity, tariff). The entrant's requirement is randomly distributed and known only after the investments have taken place. The capacity cost exhibits a sub-additive property, so there is room for profitable overcapacity setting in the first period under conditions that we derive. The entrant's willingness to pay for access to the infrastructure is driven by both her standalone cost and the complement cost to be incurred in case she chooses to access an infrastructure whose available capacity is lower than her requirement level. The expected complement cost function is thus derived, and we show that it is decreasing, convex, and shaped by the entrant's requirements distribution function. For both uniform and triangular distributions, the optimal capacity level is obtained in the cooperative setting, and equilibrium levels are determined in the non-cooperative case. Regarding the latter, we show that competition is deterred by the first-period investor with the highest requirement level.
Using the non-cooperative game outcomes, which give lower bounds for the profit-sharing problem in the cooperative one, we solve the whole game and describe situations supporting sharing agreements.